Simulating Consciousness: Methodology

What kind of methodology could guide consciousness simulation? One candidate is the ‘science of consciousness’ proposed by David Chalmers. The idea behind the science of consciousness is “… to systematically integrate two key classes of data into a scientific framework: third-person data, or data about behavior and brain processes, and first-person data, or data about subjective experience…” [1]

David Chalmers provides a few examples of these subjective experiences: “…

  • visual experiences (e.g., the experience of color and depth) 
  • other perceptual experiences (e.g., auditory and tactile experience) 
  • bodily experiences (e.g., pain and hunger) 
  • mental imagery (e.g., recalled visual images) 
  • emotional experience (e.g., happiness and anger) 
  • occurrent thought (e.g., the experience of reflecting and deciding) 
  • …” [1].

Of course, there are many other very interesting examples:

  • dreaming
  • waking up with the sense of continued existence
  • stream of thoughts / inner speech
  • ability to walk and think about other things (including abstract things) at the same time
  • quick switch between contexts
  • attention
  • ability to compare situations
  • ability to recognize specific individuals and think about multiple unknown individuals  
  • sensory illusions  
  • ability to create goals through mental imagery 
  • transition from conscious to subconscious actions
  • … and many many others

Our goal is to build a computer system that can simulate these subjective experiences. We follow the science-of-consciousness approach by collecting and organizing ‘first-person’ data about the subjective experiences of conscious systems, gathered both by attending to our own experiences and from the many documented cases in the literature.

In our experiments we use techniques developed under the umbrella of hybrid Artificial Intelligence (AI), and we rely mostly on symbolic AI (including spatial representations) for simulating conscious experiences. Our reliance on symbolic AI is a pragmatic choice: it allows us to model various subjective experiences quickly, without large amounts of training data. We specifically concentrate on finding a unified architecture that could support all the cases we are exploring. An interesting alternative is the Psi theory and architecture [2].

Our working hypothesis is that we can build a system capable of simulating any conscious experience at the functional level (provided we can formulate that experience).

Let’s take, for example, the experience of ‘waking up with a sense of continued existence’.

At the centre of our model is the OODA (Observe-Orient-Decide-Act) loop. Under normal conditions, every ~250 ms our system creates a new episode snapshot in episodic memory, checks sensor readings, interprets the sensor input, identifies actions to perform, and performs the selected actions. All these steps are recorded in each episode, and every episode carries a timestamp. We often use shortcuts to get a first implementation running (the timestamp in this case) instead of deep symbolic modelling of each and every aspect of experience right away (a deep time model in this case). Raw episodic memory contains episodes connected by ‘next’ and ‘previous’ links. The latest episode also accommodates some information from several previous episodes; this episode ‘time thickness’ and its lingering residual components is a very interesting topic in itself! Generating an episode on every cycle (with the content of each episode connected to previous episodes, long-term memory, and an explicit Self model with attention) simulates continuity of experience, as sketched below.
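To make the cycle concrete, here is a minimal Python sketch of how such an episode cycle might look. All names (Episode, EpisodicMemory, ooda_cycle, and the sensor/interpreter/policy/actuator objects) are illustrative assumptions, not the actual implementation:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

CYCLE_MS = 250  # approximate duration of one OODA cycle


@dataclass
class Episode:
    timestamp: float                                     # shortcut instead of a deep time model
    sensor_readings: dict = field(default_factory=dict)
    interpretation: dict = field(default_factory=dict)
    selected_actions: list = field(default_factory=list)
    previous: Optional["Episode"] = None                 # raw 'previous' link; 'next' is the inverse
    carried_over: dict = field(default_factory=dict)     # residual content: the episode's 'time thickness'


class EpisodicMemory:
    def __init__(self) -> None:
        self.episodes: list = []

    def snapshot(self, carried_over: dict) -> Episode:
        previous = self.episodes[-1] if self.episodes else None
        episode = Episode(timestamp=time.time(), previous=previous,
                          carried_over=carried_over)
        self.episodes.append(episode)
        return episode


def ooda_cycle(memory, sensors, interpreter, policy, actuators):
    """One ~250 ms cycle: snapshot an episode, then Observe-Orient-Decide-Act."""
    carried = dict(memory.episodes[-1].interpretation) if memory.episodes else {}
    episode = memory.snapshot(carried_over=carried)      # new episode snapshot
    episode.sensor_readings = sensors.read()             # Observe: check sensors
    episode.interpretation = interpreter.interpret(      # Orient: interpret input
        episode.sensor_readings, episode.carried_over)
    episode.selected_actions = policy.select(episode)    # Decide: identify actions
    for action in episode.selected_actions:              # Act: perform them
        actuators.perform(action)
```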

If we shut down the system and restart it after some time, the first thing the system does is retrieve the last recorded episode and identify that it was ‘down’ and is now ‘up’ again. This information becomes part of the description of the current situation in the new episode. As with any other input, it can generate some simulated feelings, which in turn can influence the selection of appropriate actions. If we compare this simulated experience with human experience, it corresponds to something like “waking up from a coma”. A less dramatic ‘awakening’ is based on modelling ‘normal’ and ‘sleeping’ modes and finding the last recorded episode in ‘normal’ mode. In ‘sleeping’ mode the system continues the OODA cycle, but its activities are quite different from those in ‘normal’ mode.
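A sketch of this wake-up check, assuming episodes carry a timestamp and a mode field (the threshold value and all names are illustrative):

```python
import time
from enum import Enum
from typing import Optional


class Mode(Enum):
    NORMAL = "normal"
    SLEEPING = "sleeping"


# Illustrative threshold: a gap much longer than one ~250 ms cycle means
# the system was actually down, not merely between cycles.
DOWNTIME_THRESHOLD_S = 2.0


def wakeup_facts(episodes: list, now: Optional[float] = None) -> dict:
    """On restart, derive continuity facts from the last recorded episodes.
    The result becomes part of the current situation in the new episode and,
    like any other input, may trigger simulated feelings."""
    now = time.time() if now is None else now
    if not episodes:
        return {"continuity": "first start (no prior episodes)"}

    last = episodes[-1]
    gap = now - last.timestamp
    if gap > DOWNTIME_THRESHOLD_S and last.mode is Mode.NORMAL:
        # Abrupt gap in 'normal' mode: the human analogue is waking from a coma.
        return {"continuity": "was down, now up again", "gap_seconds": gap}

    if last.mode is Mode.SLEEPING:
        # Less dramatic awakening: locate the last episode in 'normal' mode.
        last_normal = next((e for e in reversed(episodes)
                            if e.mode is Mode.NORMAL), None)
        return {"continuity": "waking from sleep mode",
                "asleep_since": last_normal.timestamp if last_normal else None}

    return {"continuity": "uninterrupted"}
```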

For our current experiments we do not use visual or sound sensors; instead, we continue to evolve our research-oriented intelligent assistant platform. External sensors in this case monitor events in a dedicated Slack channel; the system interprets the text input and builds a relatively deep symbolic representation of it. Our system can also generate responses into the Slack channel. Both input and output become part of the (symbolic) conversation model (inspired by [3]). Communication can be bidirectional and asynchronous, with mixed initiative. In addition to external input and output, the system generates inner speech. Building an intelligent assistant with a conversational interface is not our main goal, but this problem has enough complexity to test our ideas about modelling subjective experiences (at a human level).
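The post does not say how the Slack integration is implemented; as a sketch, this is how the external text sensor and a (deliberately shallow) stand-in for the symbolic interpretation step might look using the slack_bolt library. The Utterance structure, the interpret function, and the reply logic are all illustrative assumptions:

```python
import os
import re
from dataclasses import dataclass, field

from slack_bolt import App  # assumed integration library; not named in the post

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])


@dataclass
class Utterance:
    """A single turn in the symbolic conversation model (illustrative)."""
    speaker: str
    raw_text: str
    symbols: dict = field(default_factory=dict)


conversation_model: list = []  # both input and output turns are recorded


def interpret(text: str) -> dict:
    """Toy stand-in for the 'relatively deep symbolic representation' step;
    a real interpreter would extract intent, entities, discourse relations."""
    symbols = {"tokens": re.findall(r"\w+", text.lower())}
    if text.rstrip().endswith("?"):
        symbols["speech_act"] = "question"
    return symbols


@app.event("message")  # external sensor: events in a dedicated Slack channel
def on_message(event, say):
    text = event.get("text", "")
    conversation_model.append(Utterance(speaker=event.get("user", "unknown"),
                                        raw_text=text,
                                        symbols=interpret(text)))
    reply = "I heard you."  # placeholder for real response generation
    say(reply)              # output also becomes part of the conversation model
    conversation_model.append(Utterance(speaker="assistant", raw_text=reply,
                                        symbols=interpret(reply)))

# Starting the app (e.g., via Socket Mode or an HTTP listener) is omitted here.
```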

We are not limiting our system to ‘chit-chat’ or simple conversational patterns; we try to simulate conversation as part of problem solving and of general agent behaviour. The system does deep symbolic modelling of reactive and goal-oriented behaviours, with the ability to shift between goals based on the current situation (external and internal). There is a basic model of ‘Self’ as an intelligent agent with simulated subjective experience. We also have a basic model of attention, which connects the model of Self with some fragments of the current external and internal situation (inspired by [4]). There is also a basic model of other intelligent agents. We typically start with simplistic models and then extend and refactor them when needed, or when we see an opportunity. A sketch of these pieces follows below.
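As a minimal sketch of how the Self model, attention, and goal shifting could fit together (the class names, the focus-window rule, and the goal-preemption rule are all assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Agent:
    """Basic model of an intelligent agent; 'Self' is one such agent, and
    other known agents get instances of the same class."""
    name: str
    goals: list = field(default_factory=list)      # ordered by current priority
    feelings: dict = field(default_factory=dict)   # simulated subjective state


@dataclass
class Attention:
    """Connects the Self model with fragments of the current external and
    internal situation (loosely inspired by the attention-schema idea [4])."""
    subject: Agent
    focus: list = field(default_factory=list)      # ids of attended fragments

    def shift(self, fragment_id: str) -> None:
        # Keep a small, recency-biased focus window.
        rest = [f for f in self.focus if f != fragment_id]
        self.focus = [fragment_id] + rest[:2]


def select_goal(agent: Agent, situation: dict) -> Optional[str]:
    """Shift between goals based on the current situation (external and
    internal). Toy rule: an urgent event preempts the standing goal."""
    if situation.get("urgent_event"):
        return f"handle:{situation['urgent_event']}"
    return agent.goals[0] if agent.goals else None
```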

Lots of Python code to model explicitly what is typically ignored or simply absent in traditional computer systems (even ‘intelligent’ ones)! For us, this deep modelling with subjectivity at the centre is the goal. But is there any benefit in adding this simulated subjectivity, with Self, attention, feelings, etc., to systems that already demonstrate some intelligent behaviours?

Our observation is that we are in fact exploring a rather unique control architecture (in the sense of Cybernetics), of the kind that helps organisms survive and adapt to changes in their environment. According to Mark Solms, “Consciousness … is about feeling, and feeling, in turn, is about how well or badly you are doing in life. Consciousness exists to help you do better…” and later “… Affective valence – our feelings about what is biologically ‘good’ and ‘bad’ for us – guides us in unpredicted situations. We concluded that this way of feeling our way through life’s unpredicted problems, using voluntary behaviour, is the biological function of consciousness. It guides our choices when we find ourselves in the dark. But of course, for it to be able to do that, it must link our internal affects (rooted in our needs) with representations of the external world…” [5].
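Read as a control signal, this suggests one simple way simulated feelings could guide action selection; a toy reading of the idea, with made-up feeling labels and a hypothetical predict_feelings helper:

```python
def valence(feelings: dict) -> float:
    """Collapse simulated feelings into one 'how well am I doing' signal:
    positively valenced feelings raise it, negative ones lower it."""
    POSITIVE = {"satisfaction", "curiosity"}  # illustrative labels
    return sum(v if name in POSITIVE else -v for name, v in feelings.items())


def choose_action(candidates, feelings, predict_feelings):
    """In an unpredicted situation, prefer the candidate action whose
    predicted outcome most improves felt valence (a toy reading of [5])."""
    return max(candidates, key=lambda a: valence(predict_feelings(a, feelings)))
```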

Yes, it is probably possible to build a decent conversational assistant for specific tasks without the overhead of modelling Self, attention, feelings, etc. However, these deep models and this unique control architecture allow us to implement quite naturally behaviours such as multi-domain, bidirectional, asynchronous conversation with mixed initiative, and to implement complex conversations such as telling and listening to stories, running consulting sessions on various topics, and answering ‘why’, ‘how’, ‘who are you’, ‘how are you’, and ‘what are you doing’ questions – all in one system with extendable capabilities – and, in general, to create more resilient and adaptive agents. At least this is the promise.

References:

[1] Chalmers, David J. The Character of Consciousness (Philosophy of Mind). Oxford University Press.

[2] Bach, Joscha. Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition (Oxford Series on Cognitive Models and Architectures). Oxford University Press.

[3] Moore, Robert J., and Raphael Arar. Conversational UX Design: A Practitioner’s Guide to the Natural Conversation Framework. Morgan & Claypool.

[4] Graziano, Michael S. A. Rethinking Consciousness: A Scientific Theory of Subjective Experience. W. W. Norton & Company.

[5] Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness. W. W. Norton & Company.