Simulating Consciousness: Subjective Experience by Design

One of the interesting theories related to consciousness is the Simulation Theory of Consciousness, concisely presented in [1]. According to the author, “… if a computer-based system uses inputs from one or more sensors to create an integrated dynamic model of its reality that it subsequently uses to control its actions, then that system is subjectively aware of its simulation. As long as its simulation executes, the system is conscious, and the dynamic contents of its simulation form its stream of consciousness…”

The book starts from a position that is very close to our OODA control loop perspective, but later in the book there is more emphasis on world modelling than on the integration of sensing and acting.

“… Question: Is the simulation sufficient as well as necessary? Is the execution of the simulation by itself enough to generate subjective awareness of the simulation? Does the simulation have to be based on input from sense organs or sensors? … 

Suspected answer: Yes. Anything that executes a simulation of reality is conscious, and the contents of its simulation form the contents of its stream of consciousness… Although sentient creatures evolved their simulations to enable them to make sense of sensory information and to control their behaviors, it is the simulation itself that creates consciousness, not their sense organs or their effector organs…” [1]

We share the main sentiment of this theory about the importance of an integrated dynamic model of ‘the reality’ for understanding consciousness (human, animal, other creatures or artificial). However, we believe it is a specific type of dynamic model of the reality, embedded in the OODA control loop, that matters.

Not every reality simulation has the potential for subjective experience. Let’s take the idea of a digital twin, for example. With our current and upcoming technologies we could easily picture a dynamic digital model of our world from a human-level perspective, with dynamic models for buildings, cars, trees, roads, etc. We would describe this digital twin as a running world model (built from a human-centric perspective by design), but lacking any subjectivity. The same is true for world modelling in electronic games. However, the story could be different if we switch the conversation to world modelling in NPCs – non-player characters. With NPCs we have a choice of how shallow or deep we want to model subjective experience, similar to simulating perception in NPCs – “… Up until the mid-1990s, simulating sensory perception was rare (at most, a ray cast check was made to determine if line of sight existed). Since then, increasingly sophisticated models of sensory perception have been developed…” [2].

On the other side of the spectrum we could picture a system with an OODA loop and localized perception and actions. This localization means that at every moment the system has a model of only a limited fragment of the environment, and that it has a limited set of actions that could change the relationship between the environment and the system. We would also add an internal motivation sub-system based on feelings and homeostasis (multidimensional, with an additional aggregated “good/bad for me” dimension, inspired by [3]). Another important capability is attention – the ability to concentrate perception on specific fragments of the environment and of the system itself (inspired by [4,5]).

We believe that the following properties are important for a basic system with simulated subjective experience (a rough code sketch follows the list):

  • Each system has boundaries and runs its own world simulation as part of an OODA control loop
  • Localized perception and action – a window to the world from the egocentric perspective
  • Specific sensors (external and internal) and actions define (or better, are intertwined with) the system’s ‘reality’
  • Input from sensors is used to create internal representations/models of the world and self
  • Working memory
  • Attention (as a way to change the window into the external and internal world and keep it ‘active’ in working memory for some time, until something more important comes into observation)
  • Predictions as part of the OODA loop and some mechanism to evaluate predictions vs ‘reality’ (in basic cases it is probably embedded into architecture, innate, no learning required)
  • Homeostasis with feelings (as a building block for intentionality, motivation, goals)
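
To make this list more concrete, here is a minimal, purely illustrative Python sketch of how these properties might fit together. All class, function and parameter names are our assumptions for this post, not part of any existing implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class InternalState:
    """Homeostasis variables plus an aggregated 'good/bad for me' valence."""
    energy: float = 1.0
    integrity: float = 1.0

    def feeling(self) -> float:
        return min(self.energy, self.integrity) - 0.5   # > 0 feels 'good', < 0 feels 'bad'

class MinimalAgent:
    """Illustrative skeleton only: an OODA loop over a localized window of the world."""

    def __init__(self, sensors, actions):
        self.sensors = sensors          # callables returning (label, salience) percepts
        self.actions = actions          # dict: action name -> callable(internal_state)
        self.working_memory = []        # attention-selected content, capacity-limited
        self.state = InternalState()    # internal motivation sub-system

    def step(self):
        percepts = [read() for read in self.sensors]                 # Observe (localized)
        focus = max(percepts, key=lambda p: p[1], default=None)      # Attention window
        self.working_memory = (self.working_memory + [focus])[-5:]   # keep focus 'active'
        predicted = self.predict(focus)                              # innate prediction
        action = self.decide(focus, predicted)                       # Decide (feeling-mediated)
        self.actions[action](self.state)                             # Act
        self.state.energy -= 0.01                                    # homeostatic drift

    def predict(self, focus):
        return focus   # trivial 'things stay as they are' prediction, no learning

    def decide(self, focus, predicted):
        if self.state.feeling() < 0:
            return "restore"                                         # internal need dominates
        return "explore" if focus and focus[1] > 0.5 else "idle"

# Hypothetical usage with toy sensors and actions
agent = MinimalAgent(
    sensors=[lambda: ("obstacle", random.random()), lambda: ("hunger", 0.3)],
    actions={"restore": lambda s: setattr(s, "energy", 1.0),
             "explore": lambda s: None,
             "idle": lambda s: None},
)
for _ in range(10):
    agent.step()
```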

There is an interesting question about individual (per-system) dynamic memory, episodic and semantic: is it important? It looks like individual memory could fundamentally enhance the capacity for subjective experience (and intelligence, of course), but the basics could be achieved with ‘memory’ embedded as part of the system architecture (innate) plus working memory integrated with attention.

There are quite intriguing results coming from studying and modelling insects, for example dragonflies [6]. It is fascinating because we are looking at basic, boundary cases of subjective experience.

What about self-driving cars? They are an engineering marvel even with current limitations and constraints. Does a self-driving car have a subjective experience, or at least an equivalent of subjective experience? The book suggests that ‘yes’, it does. We would say ‘it depends’. It is possible to build self-driving cars with deep modelling of subjective experience. If we look at the basic list above, self-driving cars do almost everything except, probably, the simulation of feelings and a motivation sub-system, although a basic model of the physical self is required. There is probably some equivalent of attention, but it is most likely not modelled directly after human (or other creatures’) attention.

We follow [3, 7] in including feelings and homeostasis as a very important part of simulating subjective experience.

In theory, we could explicitly add simulated feelings to self-driving cars. But we probably should not. It is better to design a self-driving car as a ‘tool’ without simulating subjective experiences. The same is true for ‘smart’ refrigerators, TVs, ovens and millions of other objects. Adding simulated subjectivity to these devices would be a mistake… However, there is huge potential for intelligent environments with built-in sensors and smart control loops (just without simulating subjective experiences).

We would reserve simulated subjective experience for special cases… such as J.A.R.V.I.S. and Mister Data.

[1] Firesmith, Donald. The Simulation Theory of Consciousness: (or Your Autonomous Car is Sentient).

[2]  Millington, Ian. Artificial Intelligence for Games.

[3] Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness.

[4] Graziano, Michael S A. Rethinking Consciousness: A Scientific Theory of Subjective Experience.

[5] Prinz, Jesse J. The Conscious Brain (Philosophy of Mind).

[6] Fast, Efficient Neural Networks Copy Dragonfly Brains, IEEE Spectrum, link  

[7] Damasio, Antonio. Self Comes to Mind.

Simulating Consciousness: Methodology

What kind of methodology could be used to guide consciousness simulation? One of the candidates is the ‘science of consciousness’ proposed by David Chalmers. The idea behind the science of consciousness is “… to systematically integrate two key classes of data into a scientific framework: third-person data, or data about behavior and brain processes, and first-person data, or data about subjective experience…” [1]

David Chalmers provides a few examples of these subjective experiences: “…

  • visual experiences (e.g., the experience of color and depth) 
  • other perceptual experiences (e.g., auditory and tactile experience) 
  • bodily experiences (e.g., pain and hunger) 
  • mental imagery (e.g., recalled visual images) 
  • emotional experience (e.g., happiness and anger) 
  • occurrent thought (e.g., the experience of reflecting and deciding) 
  • …” [1].

Of course there are many other very interesting examples:

  • dreaming
  • waking up with the sense of continued existence
  • stream of thoughts / inner speech
  • ability to walk and think about other things (including abstract things) at the same time
  • quick switch between contexts
  • attention
  • ability to compare situations
  • ability to recognize specific individuals and think about multiple unknown individuals  
  • sensory illusions  
  • ability to create goals through mental imagery 
  • transition from conscious to subconscious actions
  • … and many many others

Our goal is to build a computer system that could simulate these subjective experiences. We follow the science of consciousness approach by collecting and organizing ‘first-person’ data about the subjective experiences of conscious systems, available through attending to our own experiences and through many documented cases in the literature.

In our experiments we use techniques developed under the umbrella of hybrid Artificial Intelligence (AI), and we rely mostly on symbolic AI (including spatial representations) for simulating conscious experiences. Our reliance on symbolic AI is a pragmatic choice that allows us to model various subjective experiences quickly, without requiring lots of data. We specifically concentrate on figuring out a unified architecture that could support all the cases we are exploring. An interesting alternative is Psi-theory and its architecture [2].

Our working hypothesis is that we could build a system that is capable of simulating any conscious experience at the functional level (if we could formulate it). 

Let’s take for example ‘waking up with the sense of continued existence’ experience.

At the centre of our model is the OODA loop. In a normal situation, roughly every 250 ms our system creates a new episode snapshot in episodic memory, checks sensor readings, interprets the sensor input, identifies actions to perform and performs the selected actions. All these steps are recorded in each episode. In addition, every episode has a timestamp. We often use shortcuts to get a first implementation running (a timestamp in this case) instead of deep symbolic modelling of each and every aspect of experience right away (a deep time model in this case). Raw episodic memory contains episodes connected by ‘next’ and ‘previous’ links. The latest episode accommodates some information from several previous episodes. This episode ‘time thickness’ and its lingering residual components is a very interesting topic in itself! The generation of episodes on every cycle (and the content of each episode, connected with previous episodes, long-term memory, and an explicit Self model with attention) simulates continuity of experience.
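
A minimal sketch of this episode-generation cycle is shown below, assuming illustrative names and structures (the real system records much richer content per episode):

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Episode:
    """Snapshot of one OODA cycle; field names are illustrative assumptions."""
    timestamp: float
    percepts: dict
    interpretation: dict          # includes, for example, the current mode ('normal'/'sleeping')
    selected_actions: list
    residual: dict = field(default_factory=dict)   # lingering content from recent episodes
    previous: Optional["Episode"] = None
    next: Optional["Episode"] = None

class EpisodicMemory:
    """Raw episodic memory: episodes connected by 'next'/'previous' links."""

    def __init__(self):
        self.latest = None

    def record(self, percepts, interpretation, selected_actions):
        episode = Episode(time.time(), percepts, interpretation, selected_actions,
                          residual=self._carry_over(), previous=self.latest)
        if self.latest is not None:
            self.latest.next = episode
        self.latest = episode
        return episode

    def _carry_over(self, depth=3):
        """Episode 'time thickness': keep a little content from the last few episodes."""
        residual, episode = {"recent_actions": []}, self.latest
        for _ in range(depth):
            if episode is None:
                break
            residual["recent_actions"].extend(episode.selected_actions)
            episode = episode.previous
        return residual
```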

If we shut down the system and restart it after some time, the first thing the system does is retrieve the last recorded episode and identify that it was ‘down’ and is now ‘up’ again. This information becomes part of the description of the current situation in the new episode. As with any other input, it could generate some simulated feelings, which could influence the selection of appropriate actions. If we compare this simulated experience with human experience, it corresponds to something like “waking up from a coma”. A less dramatic ‘awakening’ is based on modelling ‘normal’ and ‘sleeping’ modes and finding the last recorded episode in ‘normal’ mode. In ‘sleeping’ mode the system continues the OODA cycle, but its activities are quite different from those of the ‘normal’ mode.
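
Building on the episodic memory sketch above, the restart orientation might look roughly like this (again, an assumption-laden sketch rather than the actual implementation):

```python
import time

def orient_after_restart(memory):
    """On start-up, compare 'now' with the last recorded episode and make the result
    part of the current situation description (hedged sketch, illustrative names)."""
    last = memory.latest
    if last is None:
        return {"situation": "first_run"}
    downtime = time.time() - last.timestamp
    last_mode = last.interpretation.get("mode", "normal")   # 'normal' or 'sleeping'
    if last_mode == "sleeping":
        return {"situation": "woke_up", "downtime_seconds": downtime}
    # Last episode was recorded in 'normal' mode before an unplanned gap:
    # closer to 'waking up from a coma' than to a normal awakening.
    return {"situation": "was_down_now_up", "downtime_seconds": downtime}
```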

For our current experiments we do not use visual or sound sensors; we continue to evolve our research-oriented intelligent assistant platform. External sensors in this case monitor events in a dedicated Slack channel; the system interprets the text input and builds a relatively deep symbolic representation of it. Our system can generate responses into the Slack channel. Both input and output become part of the (symbolic) conversation model (inspired by [3]). Communication can be bidirectional, asynchronous, and with mixed initiative. In addition to external input and output, the system generates inner speech. Building an intelligent assistant with a conversational interface is not our main goal, but this problem has enough complexity to test our ideas about modelling subjective experiences (at a human level).
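
A toy sketch of this input/output flow: text from the Slack channel is interpreted into a (here drastically simplified) symbolic representation, and both sides of the exchange, including inner speech, are added to the conversation model. All names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str     # 'user', 'assistant', or 'inner' for inner speech
    text: str
    symbols: dict    # simplified symbolic interpretation of the utterance

@dataclass
class ConversationModel:
    turns: list = field(default_factory=list)

    def add(self, speaker, text, symbols):
        self.turns.append(Turn(speaker, text, symbols))

def interpret(text: str) -> dict:
    """Toy stand-in for the (much deeper) symbolic interpretation of Slack input."""
    lowered = text.lower()
    if any(g in lowered for g in ("hi", "hello", "hey")):   # crude substring match, toy only
        return {"act": "greeting"}
    if lowered.endswith("?"):
        return {"act": "question", "topic": lowered.rstrip("?").split()[-1]}
    return {"act": "statement"}

conversation = ConversationModel()
incoming = "How are you?"
conversation.add("user", incoming, interpret(incoming))
conversation.add("inner", "the user asked about my state", {"act": "reflection"})
conversation.add("assistant", "I am running normally, thanks for asking.", {"act": "answer"})
```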

We are not limiting our system to ‘chit-chat’ or simple conversational patterns. We try to simulate conversation as part of problem solving and as part of general agent behaviour. The system does deep symbolic modelling of reactive and goal-oriented behaviours, with the ability to shift between goals based on the current (external and internal) situation. There is a basic model of ‘Self’ as an intelligent agent with simulated subjective experience. We also have a basic model of attention which connects the model of Self with some fragments of the current external and internal situation (inspired by [4]). There is also a basic model of other intelligent agents. We typically start with simplistic models and then extend and refactor them when needed or when we see an opportunity.

Lots of Python code to model explicitly what is typically ignored or absent in traditional computer systems (even ‘intelligent’ ones)! For us, this deep modelling with subjectivity at the centre is the goal, but is there any benefit in adding this simulated subjectivity, with Self, attention, feelings, etc., to systems that already demonstrate some intelligent behaviours?

Our observation is that we are in fact exploring a quite unique control architecture (in the sense of Cybernetics) that helps organisms survive and adapt to changes in their environment. According to Mark Solms, “Consciousness … is about feeling, and feeling, in turn, is about how well or badly you are doing in life. Consciousness exists to help you do better…” and later “… Affective valence – our feelings about what is biologically ‘good’ and ‘bad’ for us – guides us in unpredicted situations. We concluded that this way of feeling our way through life’s unpredicted problems, using voluntary behaviour, is the biological function of consciousness. It guides our choices when we find ourselves in the dark. But of course, for it to be able to do that, it must link our internal affects (rooted in our needs) with representations of the external world…” [5].

Yes, it is probably possible to build a decent conversational assistant for specific tasks without the overhead of modelling Self, attention, feelings, etc. However, these deep models and this unique control architecture allow us to implement quite naturally behaviours such as multi-domain, bidirectional, asynchronous conversation with mixed initiative, and complex conversations such as telling and listening to stories, running consulting sessions on various topics, and answering ‘why’, ‘how’, ‘who are you’, ‘how are you’ and ‘what are you doing’ questions – in one system with extendable capabilities – and, in general, to create more resilient and adaptive agents. At least this is the promise.

References:

[1] Chalmers, David J. The Character of Consciousness (Philosophy of Mind). Oxford University Press

[2] Joscha Bach. Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition (Oxford Series on Cognitive Models and Architectures).

[3] Robert J Moore, Raphael Arar. Conversational UX Design: A Practitioner’s Guide to the Natural Conversation Framework. Morgan & Claypool.

[4] Graziano, Michael S. A. Rethinking Consciousness: A Scientific Theory of Subjective Experience. W. W. Norton & Company.

[5] Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness. W. W. Norton & Company.

Simulating Consciousness: Capability Model – OODA loop

If we were interested in simulating consciousness, what would we model? In the previous post we described the agent architecture of our interest. This post concentrates on the capabilities supported by this architecture. We loosely follow the approach outlined in “Axioms and Tests for the Presence of Minimal Consciousness in Agents” [1], but instead of a minimal set of axioms we develop a capability model to clarify the directions of our research and implementation. In this post we concentrate on one of the building blocks – OODA loop capabilities.

OODA Loop

The agent is constantly running the OODA loop:

  • except in situations when the agent is in “crash mode”

As part of the OODA loop, the agent produces the following explicit artifacts (a sketch follows the list):

  • raw percepts of the external environment (typically from several sensors)
  • raw percepts of the internal environment
  • interpretation of percepts combined into the internal state (including sensor fusion, emotions/feelings substate)
  • list of proposed expert agents relevant for the current situation
  • selected active expert agent (if exists)
  • active goals structure (if exists)
  • list of possible actions relevant to the internal state (heavily mediated)
  • list of recommended actions (heavily mediated)
  • action(s) to execute (could be external or internal)
  • feedback from executing selected actions
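
A hedged sketch of how these artifacts might be grouped into a single per-cycle record (field names are our assumptions, not taken from an actual implementation):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OodaArtifacts:
    """Illustrative container for the explicit artifacts of one OODA cycle."""
    external_percepts: dict = field(default_factory=dict)    # raw, typically per sensor
    internal_percepts: dict = field(default_factory=dict)
    internal_state: dict = field(default_factory=dict)       # fused interpretation, feelings substate
    proposed_experts: list = field(default_factory=list)     # experts relevant to the situation
    active_expert: Optional[str] = None
    active_goals: list = field(default_factory=list)
    possible_actions: list = field(default_factory=list)     # heavily mediated
    recommended_actions: list = field(default_factory=list)  # heavily mediated
    executed_actions: list = field(default_factory=list)     # external or internal
    feedback: dict = field(default_factory=dict)
```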

OODA artifacts become persistent in episodic memory as “episodes”:

  • There are links between episodes which allow traversing episodic memory based on “next”/“previous” relationships
  • Episodes are connected, through associative links, to representations of individual objects and classes of objects known to the agent

The agent is capable of collecting and using information about individual objects in the environment:

  • using direct sensing of the environment
  • using information exchange with other agents
  • dealing with “known” and “unknown” individual objects
  • organizing individual objects into classes, recognizing objects as a member of some classes
  • information about individual objects is an important part of episodes, but it is also abstracted into semantic memory

The agent is capable of organizing information about classes of objects (a combined sketch of individual objects and classes follows the list):

  • Information about classes is used as part of the interpretation and enrichment of the sensory information in the OODA loop (through object classification, prediction of not directly observed properties, for example)
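
A combined toy sketch of the two capabilities above: tracking known/unknown individual objects, organizing them into classes, and using class information to enrich sensory input with properties that were not directly observed. Names and defaults are illustrative assumptions:

```python
class ObjectKnowledge:
    """Rough sketch: individual objects, class membership, class-based enrichment."""

    def __init__(self):
        self.individuals = {}       # object id -> properties observed so far
        self.class_defaults = {}    # class name -> default/predicted properties

    def observe(self, object_id, properties, classes=()):
        known = object_id in self.individuals              # 'known' vs 'unknown' individual
        record = self.individuals.setdefault(object_id, {"classes": set()})
        record.update(properties)
        record["classes"].update(classes)
        return known

    def enrich(self, object_id):
        """Predict properties that were not directly observed, using class defaults."""
        record = self.individuals.get(object_id, {"classes": set()})
        enriched = {}
        for cls in record.get("classes", ()):
            enriched.update(self.class_defaults.get(cls, {}))
        enriched.update({k: v for k, v in record.items() if k != "classes"})
        return enriched

# Hypothetical usage
knowledge = ObjectKnowledge()
knowledge.class_defaults["dog"] = {"legs": 4, "can_bark": True}
knowledge.observe("rex", {"colour": "brown"}, classes=["dog"])
print(knowledge.enrich("rex"))   # {'legs': 4, 'can_bark': True, 'colour': 'brown'}
```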

Simulating the interaction between episodic and semantic memories is a great topic for future posts.

Support for “crash mode”:

  • The agent can be in “crash mode” without the OODA loop running (after a system crash, for example)
  • If the agent is “restarted”, we expect the agent to re-orient itself in time, states, goals, etc. and to renew the OODA loop

Support for the “sleeping mode” (as part of the OODA loop; a rough sketch follows the list):

  • The agent has limited interaction with the external environment in the “sleeping mode”
  • It is a good time for background processes such as memory optimization, learning, etc.
  • The agent can “schedule” the “sleeping mode”, and it can also be “recommended” by the internal state
  • “waking up” from the “sleeping mode” restores time and state orientation, goals, etc., and resumes the OODA loop in the “active mode”
  • the “sleeping mode” could potentially include a dreaming simulation, which could be a combination of replaying previous episodes and going through/playing with imaginary situations
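
A rough sketch of a single “sleeping mode” cycle, reusing the episodic memory shape sketched in the methodology post above; all names and the wake-up logic are assumptions:

```python
def sleeping_mode_cycle(memory, background_tasks, wake_conditions):
    """In 'sleeping mode' the OODA loop keeps running, but the cycle is spent on
    background work (memory optimization, learning, dream-like replay)."""
    for task in background_tasks:
        task()                                    # e.g. consolidate episodic memory
    # Dream-like replay of a few recent episodes (purely illustrative)
    episode, replayed = memory.latest, 0
    while episode is not None and replayed < 3:
        _ = episode.interpretation                # 'replay' the recorded content
        episode, replayed = episode.previous, replayed + 1
    # Limited interaction with the environment: only check for wake-up conditions
    if any(condition() for condition in wake_conditions):
        return "active"                           # restore orientation, resume normal OODA loop
    return "sleeping"
```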

References:

1. Igor Aleksander and Barry Dunmall (2003), Axioms and Tests for the Presence of Minimal Consciousness in Agents. Link on researchgate.net

 

Simulating consciousness

The first wave of my interest in Artificial Intelligence (AI) came before the “AI winter” of the 90s. As a researcher, I concentrated on knowledge representation and acquisition, active memory and behaviour modelling, knowledge-based machine learning, reasoning with defaults and contradictions, and a general AI architecture that would allow combining various modules into an “intelligent system” capable of demonstrating a wide range of intelligent behaviours. Today this would be close to working on “Artificial General Intelligence” – AGI.

Then the “AI winter” happened (combined with many exciting developments in the computing industry), and after many subsequent winters I found myself with more than 20 years of designing, building and managing… often quite sophisticated, but not that intelligent, computer systems, well… until recently, with the revived interest in AI and Machine Learning (ML). I had a chance to refresh my Python skills and took contemporary AI and ML courses. Udacity, thank you for thoughtful and practical nanodegrees and online courses! Why? Because many fundamental ideas and promises from the 1950s-90s can finally be implemented, and there are so many new opportunities and problems to solve!

After some reflection and thinking about a Massive Transformative Purpose (MTP) [1] … for me it is about finding architectures and building intelligent systems with simulated consciousness.

This post is not about why and where simulating consciousness is beneficial or useful; it is more about the pathway of my recent research. It is related to the concepts outlined in “Artificial Intelligence: A Modern Approach” [2], specifically in the “Intelligent Agents” chapter. The flavour of intelligent agents that I am interested in can be summarized as follows (originally presented as a diagram):

We have an explicit separation between the environment and the agent. For agents with physical embodiment, this separation comes naturally. For software agents, it takes discipline to model sensors, sensing, acting, and actuators explicitly. The agent uses its sensors to get a raw representation of the environment based on sensor capabilities. This raw representation is interpreted and transformed into an internal state combined with the representation of the agent itself. These representations contribute to the evaluation of the current situation and the selection of appropriate actions, which is mediated by the forecasting, planning, goal management, motivation, emotions and ethics subsystems. Some of the actions are targeted at the environment and some at changing the internal states of the agent.

I am specifically interested in agents with multiple skills, capable of working in various domains. This concern is addressed by Expert Agents. They encode the knowledge required for specific domains such as playing chess, solving math puzzles, ordering pizza or supporting “chit-chat”. These experts are integrated into the main “observe-orient-decide-act” [3] loop and typically have specialized representations of the environment and of the domain of their expertise. Expert Agents are active, often run background tasks and compete for “attention” or “focus”, specifically when several agents require access to the same unique resource, such as a communication channel. The main agent architecture is extendable: it supports new and evolving Expert Agents.
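
A toy Python sketch of Expert Agents competing for focus when they need the same communication channel; the relevance heuristic and all names are illustrative assumptions:

```python
class ExpertAgent:
    """Illustrative stand-in for a domain expert (chess, math puzzles, pizza, chit-chat)."""

    def __init__(self, name, domain_keywords):
        self.name = name
        self.domain_keywords = domain_keywords

    def relevance(self, situation_text):
        """How strongly this expert claims the current situation (toy heuristic)."""
        return sum(1 for word in self.domain_keywords if word in situation_text.lower())

def select_focus_expert(experts, situation_text):
    """Sketch of 'competing for attention': the most relevant expert gets the shared
    resource (e.g. the communication channel); zero relevance means no focus change."""
    scored = [(expert.relevance(situation_text), expert) for expert in experts]
    best_score, best_expert = max(scored, key=lambda pair: pair[0])
    return best_expert if best_score > 0 else None

experts = [ExpertAgent("chess", ["chess", "checkmate"]),
           ExpertAgent("pizza", ["pizza", "order", "delivery"]),
           ExpertAgent("chit-chat", ["hi", "hello", "how are you"])]
focus = select_focus_expert(experts, "Can you order a pizza for dinner?")
print(focus.name if focus else "no focus change")   # pizza
```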

Where are the buzzwords – “Machine Learning”? ML is in all components of the architecture. For me it is not about one “master algorithm”; it is more about a highly modular architecture with various embedded machine learning components. For example, some of the experts can be implemented as reinforcement learning agents. Training can be done offline or online. In the latter case, learning is part of the expert’s behaviour and is mediated by other components: “Can we do learning Y now, or do we have to do something else?”
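
A tiny sketch of what such mediation might look like; the state keys and threshold are assumptions:

```python
def can_learn_now(expert_name, internal_state, focus_agent):
    """Toy sketch of mediated online learning: before an expert runs a learning step,
    other components get a say ('Can we do learning Y now, or do we have to do
    something else?')."""
    if focus_agent is not None and focus_agent != expert_name:
        return False                                   # another expert drives the interaction
    if internal_state.get("urgent_goals"):
        return False                                   # urgent goals take priority over learning
    return internal_state.get("load", 0.0) < 0.5       # only learn when resources allow
```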

Everything described so far follows the “competence without comprehension” meme (I highly appreciate Daniel Dennett’s perspective on the nature and evolution of consciousness). Systems with the outlined architecture can demonstrate sophisticated and intelligent behaviours, including being reactive; establishing, following, changing, postponing and reviving goals; shifting focus; and predicting situations and the results of actions and their impact on the environment and on the agent itself. It also includes a basic self-model which mediates the selection of actions.

An important component of the proposed architecture is the Reflection Subsystem. Is it a module responsible for the “Cartesian Theatre” [4]? Not really. The heavy lifting of intelligent behaviours – competence – is implemented by other components. The Reflection Subsystem implements a more advanced agent self-model and has capabilities to influence the work of other modules: it is an additional layer of behaviour mediation and control. A fantastic topic for future posts!

References:
1. The Motivating Power of a Massive Transformative Purpose on SingularityHub: https://singularityhub.com/2016/11/08/the-motivating-power-of-a-massive-transformative-purpose/
2. Artificial Intelligence A modern Approach on Wikipedia: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
3. OODA loop on Wikipedia: https://en.wikipedia.org/wiki/OODA_loop
4. Cartesian Theatre on Wikipedia: https://en.wikipedia.org/wiki/Cartesian_theater

Experimenting with Intelligent Personal Assistant Platform

Traditionally, the term Intelligent Personal Assistant is associated with Siri, Google Now or Cortana and voice-based communication. For me, this term means something different: an extendable, open, modular platform doing useful things for the user, with user privacy at the centre. I have been experimenting with various ways to build something like this for the last few years in my spare time. I consider it a research hobby, although I would love to see some Personal Assistant Platforms in existence and in use. In this post I would like to share some ideas and observations.

My approach is closer to the approach used in platforms that became quite popular in recent years because of the ChatOps movement. The idea is to use a chat-based user interface and “bots” which listen for patterns and commands in chat rooms and react to messages using coded “skills”. ChatOps solutions typically concentrate on the coordination of human activities and task automation with bots. The same ideas (and software) can be used for building Personal Assistants. I have been interested in exploring the core assistant architecture, so I decided (after trying a few different bot frameworks) to use JASON [1]. JASON is not a typical bot platform; it is a framework for building multi-agent systems. JASON is written in Java, but agent behaviours and declarative knowledge are coded in AgentSpeak [2] – a high-level language with some similarity to Prolog.

The current architecture (originally shown as a diagram) is described below.


I use Slack as the chat platform. Slack provides user apps for various devices, allows messages with rich content and has APIs for bot integration. There are a couple of API variations, but the most attractive from the Personal Assistant perspective is the WebSocket-based real-time API. With this API, the bot (assistant) can run in the cloud, on an appliance at your home, or just on your desktop computer (if you still have one). Slack group capabilities are also helpful: it is possible to communicate with various versions and types of the assistant. I use a node.js-based gateway to decouple Slack from the JASON-based infrastructure. I also use Redis as a simple messaging medium. There is nothing special about node.js and Redis in this architecture except that both are easy to work with for implementing “plumbing”. JASON AgentSpeak does not have capabilities to talk to external services directly, but it can be extended through Java components. I coded basic integration with Redis pub/sub inside my JASON solution, and for advanced scenarios (when I need services with lots of plumbing) I use node.js or other service frameworks.
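
For illustration, here is a minimal Python sketch of the Redis pub/sub “plumbing” between a gateway and the agents (the actual gateway is node.js and the JASON side is Java; channel names and message shapes are assumptions):

```python
import json
import redis   # redis-py client

# Assumed channel names; the real gateway/JASON integration may differ.
SLACK_INBOUND = "slack.inbound"     # messages from the Slack gateway to the agents
SLACK_OUTBOUND = "slack.outbound"   # responses from the agents back to the gateway

r = redis.Redis(host="localhost", port=6379)

def forward_user_message(user, text):
    """Gateway side: publish an incoming Slack message for the multi-agent system."""
    r.publish(SLACK_INBOUND, json.dumps({"user": user, "text": text}))

def listen_and_reply():
    """Agent side: consume inbound messages and publish replies."""
    pubsub = r.pubsub()
    pubsub.subscribe(SLACK_INBOUND)
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        payload = json.loads(message["data"])
        reply = f"Hello {payload['user']}, you said: {payload['text']}"
        r.publish(SLACK_OUTBOUND, json.dumps({"text": reply}))
```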

The Personal Assistant is implemented as a multi-agent system, with additional components in node.js (and other frameworks as needed) communicating through Redis. At the centre of the multi-agent system are the Assistant Agent, the User Agent and various Expert Agents. The User Agent is responsible for communicating with the user, the Assistant Agent plays the role of coordinator, and Expert Agents implement specific skills. Expert Agents can delegate low-level “plumbing” details to services. The system is quite concurrent and runs well on multicore computers. JASON also allows running a multi-agent system on several computers, but this is outside my current experiments. Agents can create other agents and can use direct message-based communication (sync and async) or pub/sub-based message broadcasting. Agents have local storage implemented as Prolog-like associative term memory, logical rules and scripts. AgentSpeak follows the BDI (Belief–Desire–Intention) software model [3], which allows coding quite sophisticated behaviours in a compact way. Prior to JASON I tried to implement the same agents with traditional software stacks such as node.js and Akka Actors. I also looked quickly at Elixir and Azure Actors. It is all good; it just takes more time and lines of code to get to the essence of the interesting behaviours. I actually tried JASON earlier but made a mistake: I started by implementing the Personal Assistant as a “very smart” singleton agent. Bad idea! Currently the JASON-based assistant is a multi-agent system with many very specialized “experts”.

Let’s look at the “Greeting” expert agent, for example. This agent can send variations of the “Hi” message to the user and will wait for a greeting response (for some time). The same agent listens for variations of the “Hi” message from the user. The Greeting agent implements bi-directional communication with the user, with mixed initiative. This means that the agent can initiate a greeting and wait for the user’s response, or it can respond to the user’s greeting. In addition, the Greeting agent reacts to changes in the user’s online presence and remembers the last greeting exchange. After a successful greeting exchange it broadcasts a “successful greeting” message, which can initiate additional micro-conversations.
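
A rough Python sketch of this behaviour (the actual agent is written in AgentSpeak; phrases, timeouts and method names are illustrative assumptions):

```python
import random
import time

class GreetingExpert:
    """Toy model of the Greeting expert: mixed-initiative greeting exchange."""

    GREETINGS = ("Hi!", "Hello!", "Hey, good to see you!")
    PATTERNS = ("hi", "hello", "hey")

    def __init__(self, send, response_timeout=60.0):
        self.send = send                         # callable that posts a message to the user
        self.response_timeout = response_timeout
        self.awaiting_response_since = None
        self.last_exchange_at = None

    def on_user_online(self):
        """Mixed initiative: the agent may start the greeting exchange itself."""
        self.send(random.choice(self.GREETINGS))
        self.awaiting_response_since = time.time()

    def on_user_message(self, text):
        if any(p in text.lower() for p in self.PATTERNS):
            if self.awaiting_response_since is None:
                self.send(random.choice(self.GREETINGS))   # respond to the user's greeting
            self.awaiting_response_since = None
            self.last_exchange_at = time.time()
            return "greeting_successful"                   # broadcast to other agents
        return None

    def on_tick(self):
        """Stop waiting if the user never responded within the timeout."""
        if (self.awaiting_response_since is not None
                and time.time() - self.awaiting_response_since > self.response_timeout):
            self.awaiting_response_since = None
```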

Reactivity, autonomy, and expertise are the primary properties of the Assistant multi-agent system. Various expert agents run in parallel; they have goals, react to events, develop and execute plans, notify other agents about events, and compete for the user’s attention. Although many agents run simultaneously, there is a concept of the “focus agent” that drives communication with the user. Even without requests from the user, the Assistant is active: it may generate notifications to the user, ask questions, or suggest initiating or continuing micro-conversations.

The JASON multi-agent platform directly supports the implementation of manually encoded reactive and goal-oriented behaviours (quite sophisticated ones). It can also be extended with more advanced approaches such as machine learning, simulation of emotions, ethical decision making, and self-modelling. My current interest is mostly in experimenting with various expert agents, coordination of activities and user experience, but I am looking forward to the other topics.

  1. JASON: https://en.wikipedia.org/wiki/Jason_(multi-agent_systems_development_platform)
  2. AgentSpeak: https://en.wikipedia.org/wiki/AgentSpeak
  3. Belief–Desire–Intention software model: https://en.wikipedia.org/wiki/Belief–desire–intention_software_model

Home Automation and Intelligent Personal Assistants

Home automation has become a hot topic again. We have more sensors, more automation devices, more connectivity options, and better management apps.

The legacy approach to home automation typically involves several mobile control apps, each dedicated to a specific device. This approach is not sustainable with an increasing number of devices. It does not provide an optimal user experience: we cannot see the “big picture” – the status of the “home” at any moment in time. We are forced to jump between various control apps with different user interfaces. There is no simple way to see correlations between devices, and optimization and predictive modelling are limited to the scope of a single device.

A more advanced approach includes the idea of a “home automation hub”. In this case, the user can have a single app which controls all home automation devices. The hub unifies various physical connectivity protocols, creates an integrated “big picture”, and allows advanced optimization. Many consumers will probably be quite happy with this level of automation and integration (at least for now).

As the natural next step, I see potential in integrating the “home automation hub” into Intelligent Personal Assistants. “Home” is an important concern for many, but not the only one. The same type of consolidation is going on in other areas. We may soon deal with a “personal health hub”, “personal finance hub”, “personal investment hub”, “car hub”, “transportation hub”, “travel hub”, etc. Each of these hubs will be active: monitoring the current situation, identifying possible/recommended actions, “fighting” for our attention. Some mediation will be required, and this is one of the main roles that can be played by Intelligent Personal Assistants: optimization of the user experience based on an integrated view of a highly automated world.

Google Glass and Intelligent Personal Assistants

I have been investigating the Google Glass Mirror API, and this investigation has generated some thoughts about Google Glass and Intelligent Personal Assistants that I would like to discuss.

I am quite enthusiastic about Google Glass, mostly because it creates a framework for context-aware, real-time, user-centric services. I am very interested in the Mirror API service created and maintained by Google and in the capabilities it gives developers to deliver services based on the Glass platform.

The Mirror API allows developers to manipulate the Glass timeline and react to changes in the timeline and other events; it also allows controlled information sharing between apps/services. In principle, it is not that different from the app-centric model that we already have with smartphones. We just do not need to check our smartphones from time to time: important information is always available. But with Glass, there is a big difference from my perspective: various services become integrated into one unified timeline with a unified interface and user experience. Glass implements basic information and service integration – literally “on the glass”. I also consider Glass a new generation of notification centre, with information delivered proactively to Glass owners.

[Diagram: Mirror API service]

How can we incorporate Intelligent Personal Assistants into this picture? A Personal Assistant can be viewed as a specialized service with one of its interfaces being Glass-aware.

[Diagram: Mirror API and Intelligent Personal Assistant]

An Intelligent Personal Assistant provides mediated communication between its owner and various service providers. It manages personal data/event clouds and provides an integrated view of “things” and events important to its user. Personal Assistants will have multiple user interfaces, including smart glasses, watches, phones, tablets, car dashboards, TVs, etc.

There are currently examples of closed services called “Personal Assistants” that are tightly coupled with specific vendor solutions, but I am more interested in an open, extensible platform with various components and deployment options. I am looking forward to something like “WordPress for Personal Assistants”. My understanding of Intelligent Personal Assistants is very close to classic FIPA Personal Assistants [1] and to the idea of the “Personal Data Locker” introduced by David Siegel in his book “Pull” [2] and his vision video [3].

References:

  1. FIPA Personal Assistant Specification
  2. Pull: The Power of the Semantic Web to Transform Your Business by David Siegel
  3. Personal Data Locker vision video by David Siegel

Reviving the blog

I decided to revive the Subject-centric blog on the WordPress platform and will try to re-publish some of my old posts soon. The main topic will be the same, but with some new categories such as “cognitive computing”, “agent technology”, “personal intelligent agents” and “moral machines”. Many old posts have references to the Ontopedia research project (active 2007-2012) and the Ontopedia PSI server (currently offline). New systems/projects/services have become available since Ontopedia started (such as Google’s “knowledge graph” and Wikidata), but many research topics are still relevant, and I am thinking about relaunching the Ontopedia PSI server on an updated technical platform.

Google acquired Metaweb (the company that maintains Freebase): good news for Subject-centric computing

“Google and Metaweb plan to maintain Freebase as a free and open database for the world. Better yet, we plan to contribute to and further develop Freebase and would be delighted if other web companies use and contribute to the data…” (Google blog)

Links:

* Deeper understanding with Metaweb

* Google Buys Metaweb to Boost Semantic Search