EmpathyWorks™ is a framework for modeling personality and behavior, much like Apache Spark™ is a framework for machine learning. Like Spark, EmpathyWorks does not prescribe algorithms or other details of how data is processed; instead, both projects provide options so users can decide how to construct their systems.
EmpathyWorks can be used to model the ‘feelings’ of any type of living or artificial entity, including people, groups, political parties and companies. Artificial personalities have an emotional state, inherited behavioral characteristics and a ‘world view’.
Unlike machine learning systems, which cannot explain how they achieve their results, EmpathyWorks can show the factors that went into an opinion.
Some questions that could be posed to an EmpathyWorks-powered system include:
This project originated from Mike Slinn’s research on modeling how an individual is directly influenced by personal events and indirectly influenced by the events experienced by others they are affiliated with (the ‘tribes’ they belong to), and how groups aggregate and influence each other. Most artificial intelligence systems do not address the impact that relationships have on expressed behavior. In contrast, EmpathyWorks relies on relationships as the basis for expressed behavior.
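The direct and indirect influence described above can be sketched in code. All names below are hypothetical illustrations, not the EmpathyWorks API: an event’s emotional impact applies fully to the individual who experiences it, and an attenuated fraction propagates to everyone affiliated with that individual.

```java
import java.util.*;

// Hypothetical sketch: a personal event changes the subject's state
// directly, and the states of their affiliates indirectly, at
// reduced strength. The attenuation factor is an assumption.
class InfluenceSketch {
    static final double AFFILIATION_ATTENUATION = 0.25; // assumed factor

    final Map<String, Double> mood = new HashMap<>();            // individual -> emotional state
    final Map<String, Set<String>> affiliates = new HashMap<>(); // individual -> affiliated individuals

    void affiliate(String a, String b) {
        affiliates.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        affiliates.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    void experience(String subject, double impact) {
        mood.merge(subject, impact, Double::sum); // direct influence
        for (String other : affiliates.getOrDefault(subject, Set.of())) {
            mood.merge(other, impact * AFFILIATION_ATTENUATION, Double::sum); // indirect
        }
    }
}
```

A negative event experienced by one member of a ‘tribe’ then registers, attenuated, on every other member — which is the relationship-first behavior the paragraph above describes.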
Because EmpathyWorks uses event sourcing:
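In event-sourced designs, state is never stored directly; it is derived by replaying an append-only event log. A minimal generic sketch (not the EmpathyWorks implementation) of that idea:

```java
import java.util.*;
import java.util.function.BiFunction;

// Minimal event-sourcing sketch: the log is the source of truth, and
// any past or present state can be rebuilt by replaying it.
class EventLog<E> {
    private final List<E> events = new ArrayList<>();

    void append(E event) { events.add(event); }

    // Rebuild current state by folding every event into an initial value.
    <S> S replay(S initial, BiFunction<S, E, S> apply) {
        S state = initial;
        for (E e : events) state = apply.apply(state, e);
        return state;
    }

    // Replaying a prefix of the log reconstructs the state as of that point in time.
    <S> S replayUpTo(int count, S initial, BiFunction<S, E, S> apply) {
        S state = initial;
        for (E e : events.subList(0, Math.min(count, events.size()))) {
            state = apply.apply(state, e);
        }
        return state;
    }
}
```

Because every state is a pure function of the log, historical states can be reconstructed and alternative histories can be explored without mutating stored data.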
Here is a typical deployment scenario, showing how events generated by machine learning systems are processed by EmpathyWorks:
Q: Is EmpathyWorks™ a standalone program?
A: No, EmpathyWorks™ is meant to enhance an AI system or robot. If the host program can generate events (for example, classifications and regressions), then EmpathyWorks can model relationships and feelings.
Q: Is EmpathyWorks specifically designed for video games, for personal robotics, or as a diagnostic tool?
A: EmpathyWorks can be used for all those purposes, and more. EmpathyWorks is a generalized personality modeling system, configured by personality rules and extended and integrated by custom software. Given sufficient computing resources, it could model billions of individuals and their relationships.
Q: How is EmpathyWorks different from The Sims™, published by Maxis and distributed by Electronic Arts?
A: EmpathyWorks is similar to The Sims’ artificial intelligence program in many ways. EmpathyWorks could enable games like The Sims, but with more control over life stages, species, and the types of relationships. EmpathyWorks can also be used to power robots and avatars. This means that the personalities and relationships powered by EmpathyWorks could reach out from a game into the physical world.
Q: Does EmpathyWorks include speech recognition or image recognition?
A: Sensory perception is the job of the host AI application, which must notify EmpathyWorks of events that it perceives.
Q: Are Asimov’s Three Laws of Robotics supported?
A: That would be up to the host AI system, not EmpathyWorks. However, EmpathyWorks could help implement support for Asimov’s Three Laws of Robotics. Because EmpathyWorks is aware of relationships and of how an individual would respond to an event, it could inform an AI system how various individuals might behave as a result of a possible event.
Q: Can an EmpathyWorks character die?
A: Individuals modeled by EmpathyWorks are instantiated as members of species. You define the life stages for the species that you wish to have EmpathyWorks model. One of the implicitly defined life stages is death. Death can occur when an individual’s maturity level is advanced to the last stage, or as a result of interaction with the virtual world in which the character ‘lives’. When a character dies, every other individual in a relationship with the deceased receives a notification, and EmpathyWorks also models their responses. The internal state of the character is preserved for the remainder of the simulation.
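The life-stage mechanics just described can be sketched as follows. The names and stage list are hypothetical illustrations, not the EmpathyWorks API:

```java
import java.util.*;

// Hypothetical sketch of life stages: a species defines an ordered list
// of stages ending in death; advancing an individual into the final
// stage kills it and notifies every individual in a relationship with it.
class LifeStageSketch {
    final List<String> stages;   // e.g. ["infant", "adult", "dead"] — assumed example
    int stageIndex = 0;
    final List<LifeStageSketch> relations = new ArrayList<>();
    final List<String> notifications = new ArrayList<>();

    LifeStageSketch(List<String> stages) { this.stages = stages; }

    boolean isDead() { return stageIndex == stages.size() - 1; }

    void advance() {
        if (isDead()) return;    // state is preserved after death
        stageIndex++;
        if (isDead()) {
            // Every related individual is told about the death, so its
            // own response can be modeled in turn.
            for (LifeStageSketch r : relations) r.notifications.add("death");
        }
    }
}
```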
Q: Can an EmpathyWorks personality evolve, based on experience?
A: Yes. Individuals modeled by EmpathyWorks have both an emotional state and a world view. Individuals change their emotional state and world view according to their innate characteristics and life events (nature and nurture are both influences).
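One way to picture the nature-and-nurture interplay is the following sketch, where the names, fields, and rate constant are all assumptions for illustration rather than the EmpathyWorks model:

```java
// Hypothetical sketch: an individual's innate sensitivity (nature)
// scales the effect that life events (nurture) have on its emotional
// state and, more slowly, on its world view.
class PersonalitySketch {
    final double sensitivity;    // innate characteristic, fixed at creation
    double emotionalState = 0.0; // fast-moving
    double worldView = 0.0;      // slow-moving accumulation of experience

    static final double WORLD_VIEW_RATE = 0.1; // assumed: world view shifts slowly

    PersonalitySketch(double sensitivity) { this.sensitivity = sensitivity; }

    void onEvent(double impact) {
        emotionalState += sensitivity * impact;
        worldView += sensitivity * impact * WORLD_VIEW_RATE;
    }
}
```

Two individuals with different innate sensitivities thus evolve differently even when exposed to identical events.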
Q: Can a product with EmpathyWorks develop a "bad" personality?
A: Yes. An artificial personality’s tendency to develop a bad attitude increases if it is exposed to negative events without sufficient support from others.
Q: How many individual products using EmpathyWorks can interact with each other?
A: There is no limit. Millions of individuals can interact every second.
We are an early-stage startup, seeking seed funding, advisors, business development professionals, and additional software programmers.
The initial EmpathyWorks prototype was written in Java and modeled the emotional state of a single individual. The meta-model was populated by a model specified in an English-like domain-specific language (DSL) and an XML-based DSL. The product was designed to run standalone on a device.
A second prototype, written in Scala, is under construction and is intended to model the interactions of billions of individuals through their relationships. The product is designed for large-scale, cloud-based environments. The two earlier DSLs were replaced by a JSON-based DSL that can be hot-swapped without loss of data. This enables a closed-loop system, whereby the model is continuously tuned in production without halting the system.
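A hot-swappable JSON model might look roughly like the following. This shape is purely illustrative — the actual EmpathyWorks DSL has not been published here — but it shows why hot-swapping is practical: the model is plain data, not compiled code.

```json
{
  "species": {
    "human": {
      "lifeStages": ["infant", "child", "adult", "elder", "dead"]
    }
  },
  "personalityRules": [
    {
      "trigger": "negative-event",
      "effect": { "emotionalState": -0.3 },
      "propagateToAffiliates": 0.25
    }
  ]
}
```

Because a revised model is just a new JSON document, it can be loaded at runtime and applied to the ongoing event stream without discarding accumulated state.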
We recently disclosed details of our current work.
Seed funding will allow us to further define the initial target market for this generic technology, productize the current prototype for that market, and acquire our first customers.
The decision was recently made to go with Kafka Streams instead of Akka. You can read about the reasoning here.
Our most recent work on this project leverages the state-based trigger capability introduced in previous versions. We are working on a highly configurable and programmable monitoring layer for the EmpathyWorks runtime, and a control facility that can replay scenarios while varying events. This should allow us to perform sensitivity analysis and run multiple scenarios in parallel in order to answer 'what-if' questions.
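Replay-with-variation can be sketched as follows, using hypothetical names rather than the planned EmpathyWorks control facility: run the same recorded event sequence through the model several times, perturbing one event per run, and compare the outcomes to answer a ‘what-if’ question.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Hypothetical sketch of scenario replay for sensitivity analysis:
// replay a recorded event sequence, optionally transforming one
// event, and observe how the final state differs.
class ReplaySketch {
    // Fold a sequence of numeric event impacts into a final state.
    static double run(List<Double> events) {
        double state = 0.0;
        for (double e : events) state += e;
        return state;
    }

    // "What if event i had been different?" — replay with one variation.
    static double runVariant(List<Double> events, int i, UnaryOperator<Double> vary) {
        List<Double> copy = new ArrayList<>(events);
        copy.set(i, vary.apply(copy.get(i)));
        return run(copy);
    }
}
```

Since each variant replay is independent, many such scenarios can run in parallel, which is the basis for the sensitivity analysis mentioned above.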
Monitoring will use its own meta-model, distinct from the meta-model used by the runtime system. Monitoring has been considered from three points of view, and requires that the model define a taxonomy of users, so each of them can be classified into one or more personas. Patterns of behavior for each type of user persona can be specified in the model:
The term ‘individual’ is quoted because there is generally not a 1:1 match between physical beings and user handles in many online communities.
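One possible shape for the persona taxonomy is sketched below. The persona names and behavior fields are invented for illustration; the point is only that personas are defined as behavior patterns, and that one user handle can match several at once:

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical sketch: the monitoring model defines named personas as
// predicates over observed behavior, and classification returns every
// persona that a user handle's behavior matches.
class PersonaSketch {
    record Behavior(int postsPerDay, int reportsFiled) {}

    static final Map<String, Predicate<Behavior>> PERSONAS = Map.of(
        "lurker",    b -> b.postsPerDay() == 0,
        "prolific",  b -> b.postsPerDay() >= 10,
        "moderator", b -> b.reportsFiled() > 0
    );

    static Set<String> classify(Behavior b) {
        Set<String> matched = new TreeSet<>();
        for (var e : PERSONAS.entrySet())
            if (e.getValue().test(b)) matched.add(e.getKey());
        return matched;
    }
}
```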