TIBCO and Subject-centric computing

I attended TIBCO’s TUCON 2010 conference this year. It gave me a great opportunity to explore Event-driven Architecture, SOA, BPM and Cloud computing. I also had a chance to listen/think/talk about the future of computing. And this future looks very subject-centric.

Let’s take a look at TIBCO’s entry into the world of enterprise social software – tibbr.
It is a microblogging platform: it allows people to submit/receive short messages. But it is subject-centric at its core. tibbr allows users to define subjects with a name and description. Users can submit (“tib”) messages into “subjects” and subscribe to subjects that they are interested in. The user experience is amazing!

It is very close to the ideas described in my previous posts on Subject-centric microblogging:

iPhone OS 3.0 – ready for Subject-centric computing, Mar 21, 2009

Subject-centric microblogging and Ontopedia’s knowledge map, Feb 21, 2009

Another interesting product is “TIBCO BusinessEvents”, a complex event processing platform. Under the hood there is a powerful domain modeling infrastructure for defining “concepts”, “events” and “business rules”. “Concepts” help to define what I like to call an “Enterprise Knowledge Map”. “Events” define what can happen outside and inside an enterprise. “Business rules” connect events and concepts. As a result, we can create dynamic enterprise models which facilitate decision making in real time.
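To make the concepts/events/rules triad more concrete, here is a minimal Python sketch of the pattern. All names are mine and purely illustrative; TIBCO BusinessEvents’ actual modeling language and API are different.

```python
# Minimal sketch of the concepts / events / business-rules pattern.
# All names are mine; TIBCO BusinessEvents' real modeling language differs.

from dataclasses import dataclass

@dataclass
class Customer:              # a "concept": a node in the enterprise knowledge map
    name: str
    open_orders: int = 0
    status: str = "regular"

@dataclass
class OrderPlaced:           # an "event": something that happened
    customer: Customer

def on_order_placed(event: OrderPlaced) -> None:
    # a "business rule": connects the event to the concept and updates
    # the dynamic model, enabling a real-time decision
    event.customer.open_orders += 1
    if event.customer.open_orders > 10:
        event.customer.status = "priority"

alice = Customer("Alice", open_orders=10)
on_order_placed(OrderPlaced(alice))
print(alice.status)          # -> "priority"
```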

TIBCO’s founder and CEO Vivek Ranadivé, in his presentation “Two-second Advantage”, mentioned a future “triple store”. But it is not just the static triple store that we typically mean in relation to Linked Data. It is a triple store that is integrated with and updated by a stream of events, with the ability to reason about concepts and events in time. TIBCO BusinessEvents 4.0 is a great introduction to these ideas.
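A toy sketch of such an event-updated, time-aware triple store (my own illustration of the idea, not TIBCO’s design):

```python
# Toy sketch of a triple store that is updated by a stream of events and
# remembers time, so we can reason about a concept "as of" any moment.

from collections import defaultdict

class TemporalTripleStore:
    def __init__(self):
        # (subject, predicate) -> list of (time, object), in arrival order
        self._facts = defaultdict(list)

    def on_event(self, t, subject, predicate, obj):
        # each incoming event appends a timestamped triple
        self._facts[(subject, predicate)].append((t, obj))

    def value_at(self, t, subject, predicate):
        # the value of the property as of time t
        value = None
        for event_time, obj in self._facts[(subject, predicate)]:
            if event_time <= t:
                value = obj
        return value

store = TemporalTripleStore()
store.on_event(1, "order-42", "status", "placed")
store.on_event(5, "order-42", "status", "shipped")
print(store.value_at(3, "order-42", "status"))   # -> "placed"
print(store.value_at(9, "order-42", "status"))   # -> "shipped"
```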

Carl Hewitt’s Direct Logic, inconsistency tolerant reasoning and Subject-centric computing

I have been fascinated by the idea of building computer systems which are inconsistency tolerant for many years. I usually approach this problem from a practical perspective: I just try to write code that demonstrates the behavior that I would like to model. But I always thought that it would be beneficial to have some kind of formal logic that could provide foundations for my heuristic approach. I have followed Carl Hewitt’s work for many years, and it seems that his inconsistency tolerant Direct Logic can play this foundational role. First, let’s take a look at how traditional logic handles contradictions.

In traditional logic, anything can be inferred from an inconsistency. If we have a contradiction about a proposition P, then we can infer any proposition Q:


	P, ¬P ⊢ Q

This is not very helpful if we want to talk about inconsistency tolerant reasoning…

In Carl Hewitt’s Direct Logic [1] the situation is different. The formula


	P, ¬P

describes a perfectly normal situation, but the meaning of this situation is different from the one in traditional logic.

“Direct Logic supports direct inference ( ⊢T ) for an inconsistency theory T. ⊢T does not support either general proof by contradiction or disjunction introduction. However, ⊢T does support all other rules of natural deduction…” [1]

In traditional logic, if we have

	P ⊢T Q,  ⊢T ¬Q

then we can infer ¬P in theory T (proof by contradiction). In Direct Logic we cannot make this kind of inference.

“Since truth is out the window for inconsistent theories, we have the following reformulation: Inference in a theory T (⊢T) carries argument from antecedents to consequents in chains of inference…” [1]

So if we have

	P, ¬P

it does not mean that P is true and not true at the same time. It means that we have arguments both for and against the proposition P.
Direct inference ⊢T does not “propagate truth”; it propagates arguments.

Even if we have

	P, ¬P

in Direct Logic, we can still continue making inferences based on these propositions (of course, we may generate new contradictory propositions).

So if we have, for example,

	P, ¬P, P ⊢ Q, ¬P ⊢ ¬Q

we can infer

	Q, ¬Q

which means that we have arguments both for and against the proposition Q. Because we are talking here not about “truth” but about argumentation, this makes total sense.
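To make the “arguments, not truth” intuition concrete, here is a toy forward-chaining sketch in Python. It is my own illustration of the idea, not an implementation of Direct Logic itself:

```python
# Toy illustration of inference that propagates arguments instead of truth.

from collections import defaultdict

# arguments[p] holds the arguments in favor of proposition p;
# "not P" collects arguments against P. Holding both is not an error.
arguments = defaultdict(set)
arguments["P"].add("source A says P")
arguments["not P"].add("source B says not P")

rules = [("P", "Q"), ("not P", "not Q")]    # P |- Q  and  not P |- not Q

changed = True
while changed:                               # naive forward chaining
    changed = False
    for antecedent, consequent in rules:
        for arg in list(arguments[antecedent]):
            derived = f"{arg}, therefore {consequent}"
            if derived not in arguments[consequent]:
                arguments[consequent].add(derived)    # no explosion:
                changed = True                        # Q only gains arguments

print(sorted(arguments["Q"]))        # arguments for Q
print(sorted(arguments["not Q"]))    # arguments against Q: both coexist
```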

How is this related to inference in Ontopedia (described here)?

If we have a foundational inconsistency tolerant logic, we can implement reasoning engines that support various inference heuristics on top of it. We can think of Ontopedia’s inference engine as a specific heuristic layer on top of an inconsistency tolerant logic such as Direct Logic.

We do have a concept of “truth” in Ontopedia, but it exists only at the heuristic layer, as a convenient metaphor. At the foundation layer, the inference engine just generates propositions with argumentation. In Ontopedia we try to assign a truth value to each proposition, but this attempt can fail if we have contradictory arguments of the same strength. We also use contradiction detection for managing/controlling inference: we do not want to infer lots of assertions based on contradictory propositions, so we suppress this kind of inference. Ontopedia is built to be a tool that helps Ontopedia’s user community to create/evolve/improve Ontopedia’s community knowledge map. Identification and resolution of knowledge conflicts is a fundamental activity.
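A minimal sketch of this heuristic layer, with illustrative argument strengths (not Ontopedia’s actual code):

```python
# Sketch of the heuristic layer: try to assign a truth value from the
# collected arguments, falling back to "unknown" on a tie. Argument
# strengths and the policy itself are illustrative.

def decide(args_for, args_against):
    weight_for = sum(strength for _, strength in args_for)
    weight_against = sum(strength for _, strength in args_against)
    if weight_for > weight_against:
        return "true"
    if weight_against > weight_for:
        return "false"
    return "unknown"    # contradictory arguments of the same strength

print(decide([("asserted by a curator", 2)], [("inferred", 1)]))  # -> true
print(decide([("source A", 1)], [("source B", 1)]))               # -> unknown
```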

It is important to mention that we do not assume that one found contradiction refutes the entire Ontopedia knowledge map. We assume that Ontopedia’s knowledge map can have multiple contradictions at any time. Ontopedia is an open system: we can have new users, and we can collect new information about new subjects. Openness naturally introduces new inconsistencies (and also helps to resolve some). But we do not just stand and watch the number of knowledge conflicts grow. We embed system mechanisms that allow us to monitor the level of Ontopedia’s inconsistency. Our assumption is that Ontopedia’s user community will try to resolve some of the existing knowledge conflicts, improve knowledge quality, and keep inconsistency at a reasonable level.

Ontopedia’s inference engine allows users to ignore contradictory arguments and select one of the options as a “decision”. This means that the engine makes inferences based only on the “decision” and suppresses inferences based on the alternatives. Probably not all users will agree with decisions recorded in Ontopedia’s knowledge map. We are considering the possibility of “forking” the knowledge map into separate maps with their own user communities. The same assertions can be assimilated by some communities and ignored/rejected by others. In general, inconsistency tolerant reasoning allows various communities to exchange and utilize information even when they disagree with it.

I am looking forward to exploring more in the areas of inconsistency tolerant logic and inconsistency tolerant reasoning.

References:

1. Carl Hewitt, “Common sense for concurrency and inconsistency tolerance using Direct Logic”, 2010

iPad, Multi-touch interaction and Subject-centric computing

I am very excited about the iPad. It makes multi-touch interaction mainstream. The iPad revives the idea of “direct object manipulation”, one of the key concepts of Subject-centric computing, and introduces it to a new generation of developers and users.

On the surface, it looks like the iPad (and iPhone) application-centric model (with thousands of various Apps) contradicts the application-less model of Subject-centric computing. In reality, applications developed for iPad and iPhone can often be considered “functions” which can be combined to provide a subject-centric experience.

What is missing? Integration with a global knowledge map that can be used from various apps, and a mechanism for passing subject context that allows applications to be launched/continued in a specific subject context.

Apple provides support for embedding geo maps into any application using the MapKit framework. Let’s assume for a minute that Apple creates SubjectKit. This (imaginary) framework provides access to information collected in global sources such as Freebase and Wikipedia, the same way MapKit provides access to Google Maps. In this case, iPad and iPhone applications can leverage information collected in the global knowledge map. SubjectKit can also allow applications to record the current subject context in some shared storage available to all apps. When a user launches an app, this app can read the current subject context and use it to provide a subject-centric experience.
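Here is a small platform-neutral sketch of the shared subject-context idea, including the access control discussed below. SubjectKit is imaginary, so everything here, names and all, is hypothetical:

```python
# Platform-neutral sketch of the imaginary SubjectKit idea: a shared
# subject-context store plus a "white list" of apps that may read or
# write it. Everything here is hypothetical.

class SubjectContext:
    def __init__(self, allowed_apps):
        self._allowed = set(allowed_apps)    # apps permitted to use the context
        self._active_subjects = []           # subject identifiers (PSIs)

    def save_subject(self, app, subject_psi):
        if app not in self._allowed:
            raise PermissionError(f"{app} may not write the subject context")
        self._active_subjects.append(subject_psi)

    def restore_subjects(self, app):
        if app not in self._allowed:
            raise PermissionError(f"{app} may not read the subject context")
        return list(self._active_subjects)

context = SubjectContext(allowed_apps={"Weather", "iTunes"})
context.save_subject("Weather", "http://en.wikipedia.org/wiki/New_York_City")
print(context.restore_subjects("iTunes"))    # iTunes sees the active subject
```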

Let’s take a look at iTunes, for example. It simplifies the “buying experience”, but it is not currently integrated with a global knowledge map. We often need to launch a browser and search to get additional info about subjects that we are interested in (movies, actors, directors, tracks, groups, …). iTunes has some reference data, but it is quite minimal in comparison with what we can get in a global knowledge map, and this reference data is limited to the iTunes app.

With the (imaginary) SubjectKit, iTunes (and any other app) on the iPad can leverage information available in the global knowledge map directly, without manual search.

What about leveraging subject context between various applications? Let’s say I open my Weather app and check the weather in New York City. The Weather app can record that one of my currently active subjects has the identifier http://en.wikipedia.org/wiki/New_York_City. Let’s assume that as a next step I launch iTunes. iTunes (in my imaginary scenario) can retrieve this subject identifier from the shared context storage. If I click on the “Movies” tab, then iTunes can suggest, for example, the movie “Sleepless in Seattle” in a new “Related to your active subjects” section. iTunes can do this because (in my imaginary scenario) it can leverage the current subject context, the internal iTunes database, and the global knowledge map.

With sharing subject context between apps comes the issue of protecting user privacy. The SubjectKit framework can maintain a “white list” of apps which are allowed to save into and restore from the current subject context (as in the sketch above). SubjectKit can block applications that try to access the subject context if they are not allowed to see it, and it can prevent applications from saving active subjects into the shared subject context if they are not allowed to do so.

Of course, this approach is not limited to OS X and the iPad. It’s just that the combination of the iPad’s design, a powerful multi-touch interface, and the strength of OS X creates a winning platform for Subject-centric computing.

Inference in Ontopedia

I just finished reading “Semantic Web for the Working Ontologist: Modeling in RDF, RDFS and OWL”. Great book! Lots of examples and deep exploration of Semantic Web fundamentals. It inspired me… not to use OWL, no… but to describe how we approach inference/reasoning in Ontopedia.

There are several fundamental principles that we try to follow in developing inference capabilities in Ontopedia.

Paraconsistency

We assume that Ontopedia’s knowledge base can have contradictions at any time. We try to develop a system that can make “reasonable” inferences within an inconsistent knowledge base.

Non-monotonic, Adaptive Knowledge Base

People change opinions, modify existing theories, and create new ones (formal and informal). People learn new things; they can be convinced and taught (sometimes). Both general and personal knowledge evolve. We would like to support these “subject-centric” features.

In simple cases, Ontopedia users can change their factual assertions. This can trigger truth maintenance processes and revision of other assertions. Ontopedia also keeps a history of assertions: “Who asserted What and When”.
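A minimal sketch of this combination of revision and assertion history (illustrative structures, not Ontopedia’s actual schema):

```python
# Sketch of non-monotonic revision plus assertion history ("Who asserted
# What and When"). Illustrative only.

import datetime

class KnowledgeBase:
    def __init__(self):
        self.current = {}     # claim -> latest truth value
        self.history = []     # (when, who, claim, value): full audit trail

    def assert_claim(self, who, claim, value):
        self.history.append((datetime.datetime.now(), who, claim, value))
        self.current[claim] = value      # later assertions win
        self._maintain_truth(claim)

    def _maintain_truth(self, changed_claim):
        # truth maintenance hook: re-derive assertions that depended on
        # the changed claim (omitted here for brevity)
        pass

kb = KnowledgeBase()
kb.assert_claim("alice", "Pluto is a planet", "true")
kb.assert_claim("alice", "Pluto is a planet", "false")   # opinion changed
print(kb.current["Pluto is a planet"])   # -> "false"
print(len(kb.history))                   # -> 2: history is kept
```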

Minimization of Inconsistency

We try to create a system that can operate/reason within an inconsistent knowledge base. But that is not enough. We embed mechanisms that help to identify inconsistencies and to resolve them. Knowledge conflicts are “first class objects” in Ontopedia and are organized by conflict type. Each knowledge conflict type has conditions that describe how conflicts of this type can be identified, along with some recommendations for resolving them. In general, Ontopedia’s user community tries to minimize knowledge inconsistency, and the Ontopedia system “tries” to help users achieve this goal.
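A sketch of conflict types as first-class objects, each with an identification condition and a resolution hint (names and data are illustrative):

```python
# Sketch of knowledge conflicts as first-class objects organized by type.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ConflictType:
    name: str
    condition: Callable[[dict], bool]    # detects the conflict in a knowledge base
    resolution_hint: str

def contradictory_birth_dates(kb: dict) -> bool:
    return len(set(kb.get("birth_date", []))) > 1

BIRTH_DATE_CONFLICT = ConflictType(
    name="contradictory birth dates",
    condition=contradictory_birth_dates,
    resolution_hint="Prefer the most authoritative source; deprecate the rest.",
)

kb = {"birth_date": ["1912-06-23", "1912-06-21"]}
if BIRTH_DATE_CONFLICT.condition(kb):
    print(BIRTH_DATE_CONFLICT.name, "->", BIRTH_DATE_CONFLICT.resolution_hint)
```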

Inference Transparency and Information Provenance

Inference is not an easy process: contradictions are tough, and knowledge evolution is challenging. We do not try to create an illusion that these things are easy and that a user can just “click a button” and magically get “all” inferred assertions; nor do we try to “virtualize” inference and hide it behind a query language, for example. We think of inference as a process that can be time and resource consuming and can include multiple steps. Ontopedia provides facilities to record the various steps of the inference process. Inference tracing is an important part of Information Provenance in Ontopedia. We keep track of “Who asserted What and When”, and we also keep track of “What was Inferred based on What and Why”.
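A minimal sketch of such an inference trace (illustrative, not Ontopedia’s implementation):

```python
# Sketch of an inference trace: every inferred assertion records what it
# was inferred from and by which rule, answering "What was Inferred based
# on What and Why".

from dataclasses import dataclass, field

@dataclass
class Assertion:
    claim: str
    inferred_from: list = field(default_factory=list)  # parent assertions
    rule: str = "asserted by user"                     # the "why"

premise = Assertion("Puccini was born in Lucca")
background = Assertion("Lucca is in Italy")
conclusion = Assertion(
    "Puccini was born in Italy",
    inferred_from=[premise, background],
    rule="transitivity of located-in",
)

# walking the trace answers "why do we believe this?"
for parent in conclusion.inferred_from:
    print(conclusion.claim, "<-", parent.claim, "via", conclusion.rule)
```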

Multiple Inference Modules and Decision Procedure

RDFS inference rules are useful, and RDFS+ adds some new tricks. OWL 1.0 inference looks interesting in many contexts, and OWL 2.0 looks even better. What about Common Logic? What about Cyc-like inference?… Ontopedia’s system architecture supports various inference modules. Each module can generate proposals based on the current state of Ontopedia’s knowledge base. These proposals are recorded in the knowledge base, but they do not automatically become Ontopedia’s “visible” assertions. Different inference modules can produce conflicting proposals; that’s OK. The various proposals are considered by Ontopedia’s decision procedure, which calculates the “final” assertion that becomes “visible” on Ontopedia’s web site. The decision procedure can be invoked on any assertion at any time. Ontopedia’s knowledge base is not limited to “true” statements only: we utilize multi-valued truth, including “unknown”.
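A toy sketch of the module/decision split, with two deliberately disagreeing modules (illustrative only):

```python
# Toy sketch of the module/decision split: independent inference modules
# propose truth values; a decision procedure computes the "visible" value.

def rdfs_module(kb):
    return {"A is-a Mammal": "true"}      # proposal from RDFS-style rules

def owl_module(kb):
    return {"A is-a Mammal": "false"}     # a deliberately conflicting proposal

def decision_procedure(proposals):
    values = set(proposals)
    if len(values) == 1:
        return values.pop()               # all modules agree
    return "unknown"                      # disagreement is recorded as such

kb = {}
proposals = [m(kb)["A is-a Mammal"] for m in (rdfs_module, owl_module)]
print(decision_procedure(proposals))      # -> "unknown"
```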

Loosely Coupled Inference, Decision Making, and Truth Maintenance

Ontopedia is a system that can “infer a little bit here”, “find some knowledge conflicts there”, “make some new decisions”, “infer a little bit more”, “review some decisions”, etc. All these activities can be performed in any order by various components.
All activities are recorded in the “activity log” (available as an RSS/ATOM feed). Various modules can explore the activity log and use it to manage their own activities and to cooperate with other modules. In general, modules can run in parallel in various areas of Ontopedia’s knowledge base. These activities can be directed by the user community through the user interface or by “controller” software components.
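A minimal sketch of coordination through a shared activity log (in Ontopedia the log is exposed as an RSS/ATOM feed; here it is just an in-memory list):

```python
# Sketch of loosely coupled modules coordinating via a shared activity log.

activity_log = []

def record(module, action, target):
    activity_log.append({"module": module, "action": action, "target": target})

record("inference", "inferred", "assertion-17")
record("conflict-detector", "found-conflict", "assertion-17")

# the decision module reacts to what other modules did, in any order,
# by scanning the log (a snapshot, so new entries are not re-scanned)
for entry in list(activity_log):
    if entry["action"] == "found-conflict":
        record("decision", "review-decision", entry["target"])

print([entry["action"] for entry in activity_log])
# -> ['inferred', 'found-conflict', 'review-decision']
```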

I developed this approach and the related system architecture in the late ’80s and have used it successfully in various projects, on relatively small data sets, over the last 20 years. What is exciting about 2010? The availability of huge data sets on the Web. There is also experience in building the Social Web and a much better understanding of Collective Intelligence. In 1990 I could only envision that it would be possible to build large, stable, evolving, paraconsistent, open knowledge-based systems. In 2010, I am pretty sure about it.

Subject-centric applications: toward subject-centric computing

Recently we added a small framework that allows us to build/use subject-centric applications in Ontopedia. Within the traditional paradigm of application-centric computing, we have to start an application (or go to some domain/function-specific website) first, and only then can we select the context within it. Within a subject-centric environment, we can select a subject and then have access to various applications/functions that can be used with the subject in context.

As an example, we implemented a basic subject-centric application: Company headquarters map. It can be used for various companies. This app tries to find the geo point of a company’s headquarters and map it using the Google Maps service.

Live example: Tibco Software Headquarters on a map

Subject-centric apps can use information recorded in the Ontopedia knowledge map and provide rich, user-friendly interactivity. They can also use external data sources and submit pieces of information to the Ontopedia knowledge map.

If we had just one application, this would not be too exciting. But we are talking about a framework that allows us to integrate multiple subject-centric apps which are relevant to various subjects. All these applications have a very important feature in common: they can ‘sense’ the current subject context. This approach can be used on the Web or on a new generation of desktops.

More info about my perspective on knowledge maps/grids, subject-centric pages, widgets, apps can be found in these references:

Enterprise Search, Faceted Navigation and Subject-Centric Portals; Topic Maps, 2008

Ruby, Topic Maps and Subject-centric Blogging: Tutorial; Topic Maps, 2008

Enterprise Knowledge Map; Topic Maps, 2007

Topic Maps Grid; Extreme Markup 2006

The fundamentals and possibilities are described perfectly in Steve Pepper’s presentation from Topic Maps 2008 (PPT format):

Everything is a Subject

“Creating Linked Data” by Jeni Tennison

Jeni Tennison has published several excellent blog entries which describe the process of creating Linked Data. If you are interested in semantic technologies, you will find lots of important ideas in these postings.

From Jeni’s Musings:

Creating Linked Data – Part I: Analysing and Modelling

Creating Linked Data – Part II: Defining URIs

Creating Linked Data – Part III: Defining Concept Schemes

Creating Linked Data – Part IV: Developing RDF Schemas

Creating Linked Data – Part V: Finishing Touches

We have more and more Linked Data (published in RDF). I think that it is very important to find a way of using RDF data in Linked Topic Maps, and vice versa. The traditional approach to RDF/Topic Maps mapping is based on the idea of mapping between RDF object properties and associations. I have been playing with the idea of mapping between object properties and occurrences for quite some time. In Ontopedia, it is possible to represent RDF object properties directly as “occurrences”. Mapping between subject-centric occurrences and associations can be done later, as a result of inference. In Ontopedia, from the user experience point of view, there is no big difference between associations and subject-centric occurrences: identifiers, locators, names, associations, and occurrences are all rendered as subject “properties”. I think that with this approach, RDF data can be “naturally” integrated with Topic Maps-based data sets.
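A small sketch of the two mapping styles, showing that the association can still be derived from the occurrence later (illustrative structures, not Ontopedia’s internal model):

```python
# Sketch of two ways to map an RDF object property into a topic map:
# as an association (traditional) or as a subject-centric occurrence.

# an RDF triple whose predicate is an object property
triple = ("ex:Puccini", "ex:bornIn", "ex:Lucca")

# traditional mapping: a binary association between two topics
association = {
    "type": "ex:bornIn",
    "roles": {"subject": "ex:Puccini", "object": "ex:Lucca"},
}

# occurrence-style mapping: the property becomes a "property" of one topic
occurrence = {
    "topic": "ex:Puccini",
    "type": "ex:bornIn",
    "value": "ex:Lucca",    # a reference to the other subject
}

# the association can be derived from the occurrence later, by inference
derived = {
    "type": occurrence["type"],
    "roles": {"subject": occurrence["topic"], "object": occurrence["value"]},
}
print(derived == association)    # -> True
```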

Modeling Time and Identity

I really like the ideas described in this presentation by Rich Hickey.

Link on InfoQ: http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey

One of the fundamental requirements for future computing: explicit representation of “values” and changes in time.

“… The future is a function of the past, it doesn’t change it …”

“… We associate identities with a series of causally related values …”

The presented ideas are very close to my own understanding of a new subject-centric programming model.

Paraconsistent Reasoning in Ontopedia

I gave a short presentation about paraconsistent reasoning in Ontopedia at TMRA 2009.

Summary:

  • Paraconsistent reasoning allows us to collect assertions from
    various sources and “safely” infer new information
  • Paraconsistent reasoning works well together with constraint
    languages (such as TMCL)
  • Paraconsistent reasoning supports an evolutionary approach to
    building large assertion systems
  • We do not need to “fix all errors” before we can reason

iPhone OS 3.0 – ready for Subject-centric computing

Apple just introduced iPhone OS 3.0 (beta) and the 3.0 SDK. There are lots of improvements and new features. The iPhone is a great platform for developing mobile applications, and OS 3.0 makes it even more compelling for building Subject-centric solutions. One of my favorite features is the Push Notification Service.

We introduced Subject-centric RSS feeds on the Ontopedia PSI server some time ago. With RSS feeds in place, we can subscribe to and monitor information about subjects that we are interested in, using RSS (including mobile) aggregators. As an Ontopedia user, I can submit an assertion, for example, that I am thinking about Blogging Vocabulary. Everyone with an RSS subscription to Blogging Vocabulary or to my PSI will be notified about this new assertion.

But, of course, existing RSS aggregators and the pull model do not allow us to realize the full potential of Subject-centric micro-blogging. Services like the iPhone Push Notification Service are game changers. I wrote this blog post about Subject-centric real-time messaging many years ago. Now is the right time to implement it. And with the new Apple iPhone SDK, it should be fun.

Finding “facts” (without scanning millions of documents)

The new version of the Ontopedia PSI server is out. There are several interesting features in this release. We introduced auto-reification of all assertions: “everything is a subject” now. In the new version, the preferred and recommended way to model web resources is to model them as first class “subjects”. Another interesting feature is the ability to search for ‘facts’ related to various subjects.

Every assertion created in Ontopedia’s knowledge map is automatically reified as a ‘subject’. Starting from the moment of ‘creation’, assertion-based PSIs have a regular ‘life cycle’. Users can change a PSI’s default name and description. It is also possible to deprecate PSIs and introduce new PSIs for the same subject. Of course, users can make assertions about other assertions. This feature is quite helpful, for example, for modeling changes in time (combined with time-interval scoping).
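A minimal sketch of auto-reification (the PSI scheme and data structures are illustrative, not Ontopedia’s actual implementation):

```python
# Sketch of auto-reification: every assertion immediately becomes a
# subject with its own PSI, so further assertions can be made about it.

import itertools

counter = itertools.count(1)
subjects = {}    # psi -> subject record

def assert_fact(claim):
    psi = f"http://psi.example.org/assertion_{next(counter)}"  # illustrative scheme
    subjects[psi] = {"type": "assertion", "claim": claim}
    return psi   # the assertion is now a first-class subject

a1 = assert_fact(("Puccini", "born-in", "Lucca"))
# an assertion about another assertion, e.g. for time-interval scoping
a2 = assert_fact((a1, "valid-during", "1858-1924"))

print(subjects[a1]["claim"])
print(subjects[a2]["claim"][0] == a1)    # -> True: assertions are subjects
```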

Speaking about modeling resources on the Web: we continue to support URI-based properties/occurrences, but the main modeling practice moving forward is based on creating explicit subjects for web resources and using associations to connect resources with other subjects. Ontopedia’s generic user interface will be optimized in upcoming releases to support the dual nature of web resources (as ‘subjects’ and as ‘links’).

The next feature is related to improving ‘findability’. I think that in many cases we are looking for ‘facts’, not documents, so we are taking a first step toward providing direct access to the ‘facts’ collected in Ontopedia’s knowledge map. We use basic faceted search/navigation with three main facets: ‘Concepts’, ‘Web Resources’, and ‘Assertions’. For example, if we type ‘apple’ in Ontopedia’s search box, we can find information items in all three tabs on the front search page. The most interesting tab is probably ‘Assertions’: it provides direct access to facts which include a reference to ‘apple’. Future versions of the ‘Assertions’ tab will include additional facets which will allow users to ‘slice and dice’ assertions.
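A toy sketch of the three-facet grouping (data and matching rules are illustrative):

```python
# Toy sketch of three-facet search: hits grouped into Concepts, Web
# Resources, and Assertions.

from collections import defaultdict

items = [
    {"facet": "Concepts", "label": "Apple Inc."},
    {"facet": "Web Resources", "label": "http://www.apple.com"},
    {"facet": "Assertions", "label": "Apple Inc. headquartered-in Cupertino"},
]

def faceted_search(query):
    results = defaultdict(list)
    for item in items:
        if query.lower() in item["label"].lower():
            results[item["facet"]].append(item["label"])
    return results

for facet, hits in faceted_search("apple").items():
    print(facet, "->", hits)
```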

With this new feature, our goal is to demonstrate that Subject-centric computing can change the ‘search paradigm’ by providing direct, reliable access to ‘facts’. Of course, it will take a lot of effort to make this approach scalable. But recent enhancements in commercial and open source faceted search engines, and achievements in creating “knowledge maps”/“smart indices”, make me believe that we are not that far from the ability to directly find the ‘facts’ that we are interested in.