
Simulating Consciousness: Capability Model – OODA loop

If we were interested in simulating consciousness, what would we model? In the previous post we described the agent architecture we are interested in. This post concentrates on the capabilities supported by this architecture. We loosely follow the approach outlined in “Axioms and Tests for the Presence of Minimal Consciousness in Agents” [1], but instead of a minimal set of axioms we develop a capability model to clarify the directions of our research and implementation. In this post we concentrate on one of the building blocks – OODA loop capabilities.

OODA Loop

The agent is constantly running the OODA loop:

  • except in situations when the agent is in the “crash mode”

As part of the OODA loop, the agent produces the following explicit artifacts (a minimal sketch of these artifacts as a single record follows the list):

  • raw percepts of the external environment (typically from several sensors)
  • raw percepts of the internal environment
  • interpretation of percepts combined into the internal state (including sensor fusion, emotions/feelings substate)
  • list of proposed expert agents relevant for the current situation
  • selected active expert agent (if exists)
  • active goals structure (if exists)
  • list of possible actions relevant to the internal state (heavily mediated)
  • list of recommended actions (heavily mediated)
  • action(s) to execute (could be external or internal)
  • feedback from executing selected actions

OODA artifacts are persisted in episodic memory as “episodes” (see the linked-episode sketch after the list):

  • Links between episodes allow traversing episodic memory along “next” and “previous” relationships
  • Episodes are connected through associative links to representations of individual objects and classes of objects known to the agent
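
One way to persist these artifacts as linked episodes could look like the following sketch; it reuses the OodaArtifacts record from the previous sketch, and the class and method names are assumptions:

```python
# Hypothetical sketch: episodes linked by "previous"/"next" and associatively linked to known objects.
from dataclasses import dataclass, field
from typing import Dict, Iterator, List, Optional

@dataclass
class Episode:
    episode_id: int
    artifacts: "OodaArtifacts"                             # record from the previous sketch
    previous_id: Optional[int] = None                      # link to the preceding episode
    next_id: Optional[int] = None                          # link to the following episode
    object_refs: List[str] = field(default_factory=list)   # associative links to objects and classes of objects

class EpisodicMemory:
    def __init__(self) -> None:
        self.episodes: Dict[int, Episode] = {}
        self.last_id: Optional[int] = None

    def append(self, artifacts: "OodaArtifacts", object_refs: List[str]) -> Episode:
        new_id = (self.last_id or 0) + 1
        episode = Episode(new_id, artifacts, previous_id=self.last_id, object_refs=object_refs)
        if self.last_id is not None:
            self.episodes[self.last_id].next_id = new_id    # maintain the "next" link of the previous episode
        self.episodes[new_id] = episode
        self.last_id = new_id
        return episode

    def traverse_back(self, start_id: int) -> Iterator[Episode]:
        """Walk the "previous" links starting from a given episode."""
        current = self.episodes.get(start_id)
        while current is not None:
            yield current
            current = self.episodes.get(current.previous_id) if current.previous_id else None
```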

The agent is capable of collecting and using information about individual objects in the environment (a sketch follows the list):

  • using direct sensing of the environment
  • using information exchange with other agents
  • dealing with “known” and “unknown” individual objects
  • organizing individual objects into classes, recognizing objects as a member of some classes
  • information about individual objects is an important part of episodes, but it is also abstracted into semantic memory
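
A minimal sketch of such an object memory, with “known”/“unknown” individuals and class membership (names are illustrative assumptions):

```python
# Hypothetical sketch: collecting information about individual objects and their classes.
from dataclasses import dataclass, field
from typing import Any, Dict, Set

@dataclass
class ObjectRecord:
    object_id: str
    known: bool = False                                       # has this individual been observed before?
    classes: Set[str] = field(default_factory=set)            # classes the object is recognized as a member of
    properties: Dict[str, Any] = field(default_factory=dict)

class ObjectMemory:
    def __init__(self) -> None:
        self.objects: Dict[str, ObjectRecord] = {}

    def observe(self, object_id: str, properties: Dict[str, Any], classes: Set[str]) -> ObjectRecord:
        record = self.objects.get(object_id)
        if record is None:
            record = ObjectRecord(object_id)                  # a previously "unknown" individual object
            self.objects[object_id] = record
        else:
            record.known = True                               # seen before: a "known" individual object
        record.properties.update(properties)                  # information from sensing or from other agents
        record.classes.update(classes)                        # organize the individual into classes
        return record
```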

The agent is capable of organizing information about classes of objects:

  • Information about classes is used as part of the interpretation and enrichment of sensory information in the OODA loop (through object classification and prediction of properties that are not directly observed, for example); see the sketch below
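
As an illustration, here is a sketch of class-based enrichment, where class-level defaults predict properties that were not directly observed; the class names and properties are made up for the example:

```python
# Hypothetical sketch: enriching a partially observed object with properties predicted from its class.
from typing import Any, Dict

CLASS_DEFAULTS: Dict[str, Dict[str, Any]] = {
    "cup":   {"graspable": True,  "typical_volume_ml": 250},
    "chair": {"graspable": False, "supports_sitting": True},
}

def enrich_with_class_knowledge(observed: Dict[str, Any], object_class: str) -> Dict[str, Any]:
    """Fill in class-level defaults without overwriting directly sensed properties."""
    enriched = dict(CLASS_DEFAULTS.get(object_class, {}))
    enriched.update(observed)          # directly observed properties take precedence over class defaults
    return enriched

# A partially observed cup gets "graspable" and a typical volume predicted from its class.
print(enrich_with_class_knowledge({"color": "red"}, "cup"))
```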

Simulating the interaction between episodic and semantic memories is a great topic for the following posts.

Support for “crash mode”:

  • The agent can be in the “crash mode” with no OODA loop running (after a system crash, for example)
  • If the agent is “restarted”, we expect the agent to re-orient in time, states, goals, etc., and to renew the OODA loop

Support for the “sleeping mode” (as part of the OODA loop):

  • The agent has limited interaction with the external environment in the “sleeping mode”
  • It is a good time for background processes such as memory optimization, learning, etc.
  • The agent can “schedule” the “sleeping mode”, and it can also be “recommended” by the internal state
  • “Waking up” from the “sleeping mode” restores orientation in time, states, goals, etc., and resumes the OODA loop in the “active mode”
  • The “sleeping mode” could potentially include a dreaming simulation, which could be a combination of replaying previous episodes and going through/playing with imaginary situations (a sketch of the agent’s modes follows the list)
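
The crash/sleeping/active distinctions above could be captured by a small mode controller; the following is a hedged sketch, not a commitment to a particular design:

```python
# Hypothetical sketch: the agent's top-level modes and the transitions described above.
from enum import Enum, auto

class Mode(Enum):
    ACTIVE = auto()      # OODA loop running normally
    SLEEPING = auto()    # limited interaction with the external environment; background processes run
    CRASHED = auto()     # no OODA loop at all (e.g. after a system crash)

class ModeController:
    def __init__(self) -> None:
        self.mode = Mode.ACTIVE

    def crash(self) -> None:
        self.mode = Mode.CRASHED

    def restart(self) -> None:
        self.reorient()                  # re-orientation in time, states, goals, etc.
        self.mode = Mode.ACTIVE          # renew the OODA loop

    def go_to_sleep(self) -> None:
        self.mode = Mode.SLEEPING
        self.run_background_tasks()      # memory optimization, learning, possibly dream-like replay

    def wake_up(self) -> None:
        self.reorient()                  # restore time and state orientation, goals, etc.
        self.mode = Mode.ACTIVE          # resume the OODA loop in the "active mode"

    def reorient(self) -> None:
        pass                             # placeholder for re-orientation logic

    def run_background_tasks(self) -> None:
        pass                             # placeholder for background processes
```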

References:

1. Igor Aleksander and Barry Dunmall (2003), “Axioms and Tests for the Presence of Minimal Consciousness in Agents”. Link on researchgate.net


Simulating consciousness

The first wave of my interest in Artificial Intelligence (AI) was before the “AI winter” (90s). As a researcher, I concentrated on knowledge representation and acquisition, active memory and behaviour modelling, knowledge-based machine learning, reasoning with defaults and contradictions, and a general AI architecture that would allow combining various modules into an “intelligent system” capable of demonstrating a wide range of intelligent behaviours. Today this would be close to working on “Artificial General Intelligence” – AGI.

Then the “AI winter” happened (combined with many exciting developments in the computing industry), and after many following winters, I found myself with more than 20 years of designing, building, and managing… often quite sophisticated but not that intelligent computer systems, well… until recently, with a revived interest in AI and Machine Learning (ML). I had a chance to refresh my Python skills and took contemporary AI and ML courses. Udacity, thank you for the thoughtful and practical nanodegrees and online courses! Why? Because many fundamental ideas and promises from the 50s–90s can finally be implemented, and there are so many new opportunities and problems to solve!

After some reflection and thinking about Massive Transformative Purpose (MTP) [1] … for me it is about finding architectures and building intelligent systems with simulated consciousness.

This post is not about why and where simulating consciousness is beneficial/useful; it is more about the pathway of my recent research. It is related to concepts outlined in “Artificial Intelligence: A Modern Approach” [2], specifically in the “Intelligent Agents” chapter. The flavour of intelligent agents that I am interested in can be represented by the following diagram:

We have an explicit separation between the environment and the agent. For agents with physical embodiment, this separation comes naturally. For software agents, it takes discipline to model sensors, sensing, acting, and actuators explicitly. The agent uses its sensors to get a raw representation of the environment based on sensor capabilities. This raw representation is interpreted and transformed into an internal state, combined with a representation of the agent itself. These representations contribute to the evaluation of the current situation and the selection of appropriate actions, which is mediated by the forecasting, planning, goal management, motivation, emotions, and ethics subsystems. Some of the actions are targeted at the environment and some at changing the internal states of the agent.
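
A minimal Python sketch of this separation, under the assumption of simple sensor and actuator objects; all names are illustrative, not a prescribed interface:

```python
# Hypothetical sketch: explicit separation of environment, sensors, actuators, and internal state.
class Agent:
    def __init__(self, sensors, actuators, subsystems):
        self.sensors = sensors            # objects producing raw percepts of the environment
        self.actuators = actuators        # objects executing external actions, keyed by name
        self.subsystems = subsystems      # forecasting, planning, goals, motivation, emotions, ethics
        self.internal_state = {}

    def step(self, environment):
        raw_percepts = {sensor.name: sensor.sense(environment) for sensor in self.sensors}
        self.internal_state = self.interpret(raw_percepts)
        for action in self.select_actions(self.internal_state):
            if action.is_external:
                self.actuators[action.actuator].execute(action, environment)   # act on the environment
            else:
                self.apply_internal(action)                                    # change the agent's own state

    def interpret(self, raw_percepts):
        """Transform raw percepts into the internal state (sensor fusion, self-representation, emotions)."""
        return {"percepts": raw_percepts}

    def select_actions(self, state):
        """Evaluate the situation and select actions, mediated by the subsystems."""
        return []

    def apply_internal(self, action):
        """Apply an internal action to the agent's own state."""
        self.internal_state.update(getattr(action, "effects", {}))
```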

I am specifically interested in agents with multiple skills, capable of working in various domains. This concern is addressed by Expert Agents. They encode the knowledge required for specific domains such as playing chess, solving math puzzles, ordering pizza, or supporting “chit chat”. These experts are integrated into the main “observe-orient-decide-act” [3] loop and typically have specialized representations of the environment and of the domain of their expertise. Expert Agents are active, often run background tasks, and compete for “attention”/“focus”, specifically when several agents require access to the same unique resource such as a communication channel. The main agent architecture is extendable: it supports new/evolving Expert Agents.
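
Here is a sketch of how Expert Agents could be registered and how one of them could be selected as active; the relevance scoring and names are assumptions, not a prescribed mechanism:

```python
# Hypothetical sketch: expert agents proposing themselves for the current situation and one being selected.
from typing import Dict, List, Optional

class ExpertAgent:
    def __init__(self, name: str, domains: List[str]):
        self.name = name
        self.domains = domains            # e.g. ["chess"], ["pizza ordering"], ["chit chat"]

    def relevance(self, internal_state: Dict) -> float:
        """How relevant this expert considers itself for the current situation (0.0 to 1.0)."""
        return 0.0

class ExpertRegistry:
    def __init__(self) -> None:
        self.experts: List[ExpertAgent] = []

    def register(self, expert: ExpertAgent) -> None:
        self.experts.append(expert)       # the architecture stays extendable: new experts plug in here

    def select_active(self, internal_state: Dict) -> Optional[ExpertAgent]:
        """Pick the most relevant expert, e.g. when several compete for a unique resource."""
        scored = [(expert.relevance(internal_state), expert) for expert in self.experts]
        scored = [item for item in scored if item[0] > 0.0]
        return max(scored, key=lambda item: item[0])[1] if scored else None
```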

Where are the buzzwords – “Machine Learning”? ML is in all components of the architecture. For me it is not about one “master algorithm”; it is more about a highly modular architecture with various embedded machine learning components. For example, some of the experts can be implemented as reinforcement learning agents/learners. Training can be done offline or online. In the second case, learning is part of the expert’s behaviour and is mediated by other components: “Can we do learning Y now, or do we have to do something else?”
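
A small sketch of such mediation, where an expert’s online learning step runs only when the rest of the architecture allows it; the mediator interface is an assumption:

```python
# Hypothetical sketch: an expert's online learning step gated by the other mediating components.
class OnlineLearningExpert:
    def __init__(self, name: str):
        self.name = name

    def maybe_learn(self, experience, mediator) -> None:
        # "Can we do learning Y now, or do we have to do something else?" is answered by the mediator.
        if mediator.allows(task="online_learning", requested_by=self.name):
            self.update_policy(experience)    # e.g. a reinforcement learning update from new experience

    def update_policy(self, experience) -> None:
        pass                                  # placeholder for the expert-specific learning algorithm
```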

Everything described so far follows the “competence without comprehension” meme (I highly appreciate Daniel Dennett’s perspective on the nature and evolution of consciousness). Systems with the outlined architecture can demonstrate sophisticated and intelligent behaviours, including being reactive; establishing, following, changing, postponing, and reviving goals; shifting focus; and predicting situations and the results of actions on the environment and on the agent itself. The architecture also includes a basic self-model which mediates the selection of actions.

An important component of the proposed architecture is the Reflection Subsystem. Is it a module responsible for the “Cartesian Theatre” [4]? Not really. The heavy lifting of intelligent behaviours – competence – is implemented by other components. The Reflection Subsystem implements a more advanced agent self-model and has the capability to influence the work of other modules: it is an additional layer of behaviour mediation and control. A fantastic topic for the next posts!

References:
1. The Motivating Power of a Massive Transformative Purpose on SingularityHub: https://singularityhub.com/2016/11/08/the-motivating-power-of-a-massive-transformative-purpose/
2. Artificial Intelligence: A Modern Approach on Wikipedia: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
3. OODA loop on Wikipedia: https://en.wikipedia.org/wiki/OODA_loop
4. Cartesian Theatre on Wikipedia: https://en.wikipedia.org/wiki/Cartesian_theater