Simulating Consciousness: Capability Model – OODA loop

If we were interested in simulating consciousness, what would we model? In the previous post we described the agent architecture of interest; this post concentrates on the capabilities supported by that architecture. We loosely follow the approach outlined in “Axioms and Tests for the Presence of Minimal Consciousness in Agents” [1], but instead of a minimal set of axioms we develop a capability model to clarify the directions of our research and implementation. In this post we concentrate on one of the building blocks – OODA loop capabilities.

OODA Loop

The agent is constantly running the OODA (Observe–Orient–Decide–Act) loop (a minimal skeleton is sketched after this list):

  • except in situations when the agent is in “crash mode”
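
To make the cycle concrete, here is a minimal Python skeleton. Everything in it is an illustrative assumption on our part – the class, method, and mode names are not part of the architecture itself:

```python
import enum


class Mode(enum.Enum):
    ACTIVE = "active"
    SLEEPING = "sleeping"
    CRASHED = "crashed"


class Agent:
    """Skeleton agent cycling Observe -> Orient -> Decide -> Act."""

    def __init__(self):
        self.mode = Mode.ACTIVE

    def observe(self):
        return {"external": [], "internal": []}  # raw percepts

    def orient(self, percepts):
        return {"percepts": percepts}  # interpreted internal state

    def decide(self, state):
        return None  # action(s) to execute, if any

    def act(self, action):
        return None  # execute and collect feedback

    def step(self):
        # One OODA cycle; the loop is suspended only in crash mode.
        if self.mode is Mode.CRASHED:
            return
        self.act(self.decide(self.orient(self.observe())))
```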

As part of the OODA loop, the agent produces the following explicit artifacts (collected into a data-structure sketch after this list):

  • raw percepts of the external environment (typically from several sensors)
  • raw percepts of the internal environment
  • interpretation of percepts combined into the internal state (including sensor fusion, emotions/feelings substate)
  • list of proposed expert agents relevant for the current situation
  • selected active expert agent (if one exists)
  • active goals structure (if one exists)
  • list of possible actions relevant to the internal state (heavily mediated)
  • list of recommended actions (heavily mediated)
  • action(s) to execute (could be external or internal)
  • feedback from executing selected actions
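
These artifacts map naturally onto a per-cycle record. A minimal sketch – the field names are our own, chosen to mirror the list above, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class OODAArtifacts:
    """Explicit artifacts produced by one OODA cycle (illustrative)."""
    raw_external_percepts: list = field(default_factory=list)  # per sensor
    raw_internal_percepts: list = field(default_factory=list)
    internal_state: dict = field(default_factory=dict)         # fusion, emotions/feelings
    proposed_experts: list = field(default_factory=list)       # candidate expert agents
    active_expert: Optional[Any] = None                        # if one exists
    active_goals: Optional[Any] = None                         # goals structure, if any
    possible_actions: list = field(default_factory=list)       # heavily mediated
    recommended_actions: list = field(default_factory=list)    # heavily mediated
    chosen_actions: list = field(default_factory=list)         # external or internal
    feedback: list = field(default_factory=list)               # from executed actions
```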

OODA artifacts are persisted in episodic memory as “episodes” (a linking sketch follows this list):

  • There are links between episodes that allow traversing episodic memory along “next” and “previous” relationships
  • Episodes are connected by associative links to representations of individual objects and classes of objects known to the agent
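
One plausible way to persist the artifacts as linked episodes, building on the OODAArtifacts sketch above; the linking scheme here is an assumption, not a specification:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Episode:
    """One persisted OODA cycle with temporal and associative links."""
    artifacts: "OODAArtifacts"                      # from the sketch above
    previous: Optional["Episode"] = None            # "previous" link
    next: Optional["Episode"] = None                # "next" link
    object_links: set = field(default_factory=set)  # ids of objects/classes involved


class EpisodicMemory:
    def __init__(self):
        self.head: Optional[Episode] = None  # most recent episode

    def record(self, artifacts, object_ids=()):
        episode = Episode(artifacts, previous=self.head,
                          object_links=set(object_ids))
        if self.head is not None:
            self.head.next = episode  # maintain the forward link
        self.head = episode
        return episode
```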

The agent is capable of collecting and using information about individual objects in the environment (a registry sketch follows this list):

  • using direct sensing of the environment
  • using information exchange with other agents
  • dealing with “known” and “unknown” individual objects
  • organizing individual objects into classes, and recognizing objects as members of some classes
  • information about individual objects is an important part of episodes, but it is also abstracted into semantic memory
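
A sketch of what such an object registry might look like, assuming simple dictionary-based storage (all names are illustrative):

```python
class ObjectRegistry:
    """Tracks individual objects the agent has encountered (illustrative)."""

    def __init__(self):
        self.objects = {}        # object_id -> accumulated properties
        self.class_members = {}  # class_name -> set of member object_ids

    def is_known(self, object_id):
        # Distinguishes "known" from "unknown" individuals.
        return object_id in self.objects

    def observe_object(self, object_id, properties):
        # Merge properties obtained by direct sensing or by information
        # exchange with other agents into the object's record.
        self.objects.setdefault(object_id, {}).update(properties)

    def classify(self, object_id, class_name):
        # Recognize the individual as a member of a class.
        self.class_members.setdefault(class_name, set()).add(object_id)
```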

The agent is capable of organizing information about classes of objects:

  • Information about classes is used as part of the interpretation and enrichment of sensory information in the OODA loop (through object classification and prediction of properties that are not directly observed, for example), as sketched below
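
A minimal sketch of such class-based enrichment, assuming class-level defaults stored in a simple semantic memory; direct observations override the defaults:

```python
class SemanticMemory:
    """Class-level knowledge used to enrich percepts in the Orient step."""

    def __init__(self):
        self.class_defaults = {}  # class_name -> typical property values

    def enrich(self, observed, class_name):
        # Predict properties that were not directly observed by falling
        # back on what is typical for the object's class.
        defaults = self.class_defaults.get(class_name, {})
        return {**defaults, **observed}  # direct observations win


semantic = SemanticMemory()
semantic.class_defaults["cup"] = {"graspable": True, "holds_liquid": True}
print(semantic.enrich({"color": "red"}, "cup"))
# {'graspable': True, 'holds_liquid': True, 'color': 'red'}
```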

Simulating the interaction between episodic and semantic memory is a great topic for future posts.

Support for “crash mode”:

  • The agent can be in “crash mode”, with no OODA loop running (after a system crash, for example)
  • If the agent is “restarted”, we expect it to re-orient in time, states, goals, etc., and to renew the OODA loop (a checkpoint/restart sketch follows this list)
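
One way to support such re-orientation is to checkpoint a small amount of state. A hedged sketch reusing the Mode enum from the first skeleton; the checkpoint contents here are purely illustrative:

```python
import json


def save_checkpoint(agent, path="agent_state.json"):
    # Persist just enough state to re-orient later (illustrative fields).
    with open(path, "w") as f:
        json.dump({"mode": agent.mode.value, "goals": []}, f)


def restart(agent, path="agent_state.json"):
    # Re-orientation: reload time/state/goal context, then renew the loop.
    with open(path) as f:
        checkpoint = json.load(f)
    agent.mode = Mode.ACTIVE  # resume the OODA loop
    return checkpoint         # used to rebuild orientation
```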

Support for the “sleeping mode” (as part of the OODA loop):

  • The agent has limited interaction with the external environment in the “sleeping mode”
  • It is a good time for background processes such as memory optimization, learning, etc.
  • The agent can “schedule” the “sleeping mode”, and it can also be “recommended” by the internal state
  • “Waking up” from the “sleeping mode” restores orientation in time, states, goals, etc., and resumes the OODA loop in the “active mode”
  • The “sleeping mode” could potentially include a dreaming simulation, which could be a combination of replaying previous episodes and going through/playing with imaginary situations (see the sketch after this list)
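
A sketch of one sleep cycle under these assumptions, reusing the Agent/Mode and EpisodicMemory sketches above; the replay routine is a hypothetical placeholder for consolidation or dreaming:

```python
import random


def replay(episode):
    pass  # placeholder for offline consolidation / learning on one episode


def sleep_cycle(agent, episodic):
    """One pass of the 'sleeping mode': background work, then wake up."""
    agent.mode = Mode.SLEEPING  # limited interaction with the environment

    # Background processing: walk the episode chain (memory optimization,
    # learning, etc. would happen here).
    episodes, node = [], episodic.head
    while node is not None:
        episodes.append(node)
        node = node.previous

    # Dreaming simulation: replay a few past episodes, which could also be
    # perturbed into imaginary variations.
    for episode in random.sample(episodes, k=min(3, len(episodes))):
        replay(episode)

    agent.mode = Mode.ACTIVE  # "waking up" resumes the active OODA loop
```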

References:

1. Igor Aleksander and Barry Dunmall (2003), “Axioms and Tests for the Presence of Minimal Consciousness in Agents.” Available on researchgate.net