Emergence of Individuality in Artificial Agents

Yuri Barzov
Feb 28, 2018

“Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.”

Herbert A. Simon

Artificial Agents with Unique Individuality

It is extremely hard to fool the human brain with fake authenticity, and it is even harder to create genuinely authentic products or experiences. That is why studios pay fortunes to actors who can achieve it and to artists who can fake it.

Genuinely authentic intelligent artificial agents could create authentically thrilling virtual worlds and non-player characters that behave authentically.

Digital artefacts with genuinely authentic individuality will become the first ever fully counterfeit-proof exclusive digital products.

For the first time, people will be able to own digital pets with an absolutely distinct, unreplicable individuality, yet with an extremely well-protected and always up-to-date backup file.

Artificial agents don’t need to be as complex as humans or animals for individuality to emerge. They do need, of course, some precise machinery to encode and reflect the dynamics of environmental complexity that makes individuality emerge. Their training environment also needs to be enriched with complexity in a very specific way. Let’s have a look.

Two Learning Strategies

To understand how we can achieve the emergence of individuality, we need to follow the neuroscience and first draw a clear distinction between two learning strategies:

(i) spatial learning — landmark-based navigation across physical and mental spaces;

(ii) response learning — cue-based navigation along physical and mental routes.

Humans need to apply the spatial learning strategy to act (navigate) according to human values (landmarks).

Humans need to apply the response learning strategy to behave (navigate) according to commandments (cues).

Response over-learning leads to the formation of habits, which our brain executes automatically. Habits rely on procedural memory and do not require self-awareness for their execution.

Spatial learning leads to the creation of cognitive maps, which require self-awareness to navigate both physical and mental spaces. Narratives and networks of item-nodes, used to encode and retrieve the autobiographical narrative of episodic memory and other memorized narratives, are cognitive maps applied to non-spatial domains (they are also called scenes, mental models or schemata).
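To make the distinction concrete, here is a minimal Python sketch (the class names and structure are my own illustration, not an established model or API): response learning as a cue-to-action lookup table that repetition hardens into a habit, and spatial learning as a cognitive map, a graph of landmarks navigated by path search.

```python
from collections import defaultdict, deque

class ResponseLearner:
    """Habit formation: over-learned cue -> action associations (procedural memory)."""
    def __init__(self):
        self.habits = defaultdict(lambda: defaultdict(float))  # cue -> action -> strength

    def reinforce(self, cue, action, reward):
        self.habits[cue][action] += reward  # repetition strengthens the association

    def act(self, cue):
        actions = self.habits.get(cue)
        # Executed automatically: pick the strongest habitual response, no map required
        return max(actions, key=actions.get) if actions else None

class SpatialLearner:
    """Cognitive map: landmarks as nodes, observed transitions as edges."""
    def __init__(self):
        self.map = defaultdict(set)  # landmark -> neighbouring landmarks

    def observe(self, a, b):
        self.map[a].add(b)
        self.map[b].add(a)

    def route_to(self, start, goal):
        # Breadth-first search over the map: flexible, works for novel start/goal pairs
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.map[path[-1]] - seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
        return None
```

The habit table answers instantly but only for cues it has been drilled on; the map is slower yet can plan routes it has never taken, which is the behavioral signature that separates the two strategies.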

Anatomically, spatial learning is associated with the hippocampus (part of the mammalian brain) and response learning with the caudate nucleus (part of the reptilian brain), but both engage wide, overlapping areas of the cortex (the distinctly human part of the brain), which is why the two strategies cannot be applied simultaneously. Route integration (adding vectors to cues), which leads to the memorization of an entire route in response-learning-based navigation, is also performed by the hippocampus.

Two Types of Consciousness

We also need to draw the distinction between primary and secondary consciousness:

(i) a conscious but not self-aware entropy-expansion consciousness (the animal self) that performs statistical computations for both response and spatial learning and makes uncertainty-driven swaps between the spatial and response learning strategies, as well as between explorative and exploitative types of behavior;

(ii) a conscious and self-aware entropy-suppression consciousness (the human self) that dynamically encodes and retrieves episodic memory both as a network of item-nodes and as an autobiographical narrative. Ego, introspection and metacognition are products of secondary consciousness.

The entropy-expansion consciousness relies on the implicit mechanism of a Bayesian ideal learner to choose between exploration and exploitation and between the spatial and response learning strategies. To influence these choices we need to use uncertainty, which modulates behavior. The entropic brain feeds on information: it engages agents in active exploration through directed information foraging. Roaming entropy is the measure of an agent’s information foraging.
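As one hedged illustration of how such uncertainty-driven swaps might look in code (the threshold, the decay constant and the entropy proxy are all my assumptions, not taken from the article or any specific paper), the sketch below tracks a running prediction error as a stand-in for the Bayesian ideal learner’s expected uncertainty, switches to exploration when surprise spikes above it, and scores information foraging as the Shannon entropy of visited cells, a simple proxy for roaming entropy.

```python
import math
from collections import Counter

class UncertaintyDrivenAgent:
    def __init__(self, switch_threshold=0.5, decay=0.9):
        self.expected_error = 0.0      # running estimate of "expected" uncertainty
        self.decay = decay
        self.switch_threshold = switch_threshold
        self.visits = Counter()        # cell -> visit count, for roaming entropy
        self.mode = "exploit"          # or "explore"

    def update(self, cell, prediction_error):
        self.visits[cell] += 1
        # Unexpected uncertainty: error well above the running expectation
        surprise = prediction_error - self.expected_error
        self.expected_error = (self.decay * self.expected_error
                               + (1 - self.decay) * prediction_error)
        # Swap strategies when the world is more surprising than the model expects
        self.mode = "explore" if surprise > self.switch_threshold else "exploit"

    def roaming_entropy(self):
        # Shannon entropy of the visitation distribution: spread-out foraging scores high
        total = sum(self.visits.values())
        return -sum((n / total) * math.log2(n / total) for n in self.visits.values())
```

Feeding update a stream of (cell, prediction_error) observations makes the agent roam when its predictions break down and settle back into habits when they hold, which is the explore/exploit modulation described above.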

The entropy-suppression consciousness operates on explicit information in the form of narratives and scenes (maps, schemata) and uses landmarks to navigate them.

Then we only need to combine scene construction and the dual encoding of narratives (for secondary consciousness) with unexpected uncertainty that modulates behavior (for primary consciousness) to create an artificial environment complex enough for authentic individuality to emerge in artificial agents.
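On the environment side, a sketch under the same caveat (every name and parameter here is hypothetical): a world whose landmark layout stays stable most of the time, so cognitive maps and narratives pay off, but occasionally reshuffles, injecting exactly the unexpected uncertainty that should push the agent back into exploration.

```python
import random

class ShiftingWorld:
    """Grid world whose landmark layout is stable, with rare unexpected reshuffles."""
    def __init__(self, size=10, n_landmarks=5, shift_prob=0.02, seed=None):
        self.rng = random.Random(seed)
        self.size = size
        self.n_landmarks = n_landmarks
        self.shift_prob = shift_prob
        self._place_landmarks()

    def _place_landmarks(self):
        cells = [(x, y) for x in range(self.size) for y in range(self.size)]
        self.landmarks = set(self.rng.sample(cells, self.n_landmarks))

    def step(self):
        # Most steps the world is predictable; rarely, it reshuffles its landmarks,
        # producing unexpected (not merely expected) uncertainty for the agent
        if self.rng.random() < self.shift_prob:
            self._place_landmarks()
            return True   # signal: the agent's cognitive map just went stale
        return False
```

Tuning shift_prob is where the “very specific way” of enriching complexity lives: too low and habits suffice forever; too high and no map is ever worth building.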

Voilà!

PS: There are several machine learning algorithms that could be used, on both the agent’s side and the environment’s side, for individuality emergence. Which fits best can only be determined through practical implementation experiments.
