
iPhone OS 3.0 – ready for Subject-centric computing

Apple just introduced iPhone OS 3.0 (beta) and the 3.0 SDK. There are lots of improvements and new features. The iPhone is a great platform for developing mobile applications, and OS 3.0 makes it even more compelling for building Subject-centric solutions. One of my favorite new features is the Push Notification Service.

We introduced Subject-centric RSS feeds on the Ontopedia PSI server some time ago. With RSS feeds in place, we can subscribe to and monitor information about subjects we are interested in using RSS aggregators (including mobile ones). As an Ontopedia user, I can submit an assertion, for example, that I am thinking about Blogging Vocabulary. Everyone with an RSS subscription to Blogging Vocabulary or to my PSI will be notified about this new assertion.

But, of course, existing RSS aggregators and the pull model do not allow us to realize the full potential of Subject-centric micro-blogging. Services like the iPhone Push Notification Service are game changers. I wrote this blog post many years ago about Subject-centric real-time messaging. Now is the right time to implement it, and with the new Apple iPhone SDK it should be fun.
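Just to make the idea concrete, here is a minimal sketch (in Python, with hypothetical helper names – this is not Ontopedia's or Apple's actual server-side code) of the kind of payload a provider could hand to the Push Notification Service when a new assertion about a subscribed subject arrives:

import json

def build_push_payload(subject_name, assertion_text):
    # The "aps" dictionary is the part defined by Apple; the extra key below
    # is a custom field the receiving application would interpret itself.
    return json.dumps({
        "aps": {
            "alert": f"{subject_name}: {assertion_text}",
            "badge": 1,
        },
        "subject_psi": f"http://psi.ontopedia.net/{subject_name}",  # custom key
    })

payload = build_push_payload("Blogging_Vocabulary",
                             "Dmitry is currently thinking about this subject")
# send_to_apns(device_token, payload)  # provider-side sender, not shown here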

Finding “facts” (without scanning millions of documents)

The new version of the Ontopedia PSI server is out. There are several interesting features in this release. We introduced auto-reification of all assertions: “everything is a subject” now. In the new version, the preferred and recommended way to model web resources is to model them as first-class “subjects”. Another interesting feature is the ability to search for ‘facts’ related to various subjects.

Every assertion created in Ontopedia’s knowledge map is automatically reified as a ‘subject’. Starting from the moment of ‘creation’, assertion-based PSIs have a regular ‘life cycle’. Users can change a PSI’s default name and description. It is also possible to deprecate PSIs and introduce new PSIs for the same subject. Of course, users can make assertions about other assertions. This feature is quite helpful, for example, for modeling changes over time (combined with time-interval scoping).
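Here is a minimal sketch of the auto-reification idea (hypothetical Python structures, not Ontopedia's internal model): every assertion gets its own identity at creation time, so it can later be renamed, deprecated and, most importantly, referenced by other assertions:

import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)

@dataclass
class Assertion:
    subject: str          # PSI of the subject the assertion is about
    predicate: str
    value: str
    psi: str = field(init=False)

    def __post_init__(self):
        # auto-reification: the assertion itself becomes a subject with a PSI
        self.psi = f"http://psi.ontopedia.net/assertion/{next(_ids)}"

a1 = Assertion("http://psi.ontopedia.net/Apple_Inc", "is_a", "Company")
# an assertion about an assertion, e.g. to scope it with a time interval
a2 = Assertion(a1.psi, "valid_during", "2007-01-09/")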

Speaking about modeling resources on the web, we continue to support URI-based properties/occurrences, but the main modeling practice moving forward is based on creating explicit subjects for web resources and using associations to connect resources with other subjects. Ontopedia’s generic user interface will be optimized in upcoming releases to support the dual nature of web resources (as “subjects” and “links”).

The next feature is related to improving ‘findability’. I think that in many cases we are looking for ‘facts’, not documents, so we are taking a first step toward providing direct access to the ‘facts’ collected in Ontopedia’s knowledge map. We use basic faceted search/navigation with three main facets: ‘Concepts’, ‘Web Resources’, and ‘Assertions’. For example, if we type ‘apple’ in Ontopedia’s search box, we can find information items in all three tabs on the front search page. The most interesting tab is probably ‘Assertions’. This tab provides direct access to facts which include a reference to ‘apple’. Future versions of the ‘Assertions’ tab will include additional facets that will allow users to ‘slice and dice’ assertions.
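A minimal sketch of the three-facet idea (with a made-up in-memory index, not Ontopedia's actual search engine): every indexed item carries a facet label, and a query like 'apple' is simply filtered into the 'Concepts', 'Web Resources' and 'Assertions' tabs:

items = [
    {"facet": "Concepts",      "text": "Apple Inc"},
    {"facet": "Web Resources", "text": "http://www.apple.com"},
    {"facet": "Assertions",    "text": "Apple Inc is a Company"},
    {"facet": "Assertions",    "text": "Apple's product line includes iPhone"},
]

def faceted_search(query):
    # collect the matching items and group them by facet, one group per tab
    hits = [i for i in items if query.lower() in i["text"].lower()]
    tabs = {"Concepts": [], "Web Resources": [], "Assertions": []}
    for hit in hits:
        tabs[hit["facet"]].append(hit["text"])
    return tabs

print(faceted_search("apple")["Assertions"])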

With this new feature, our goal is to demonstrate that Subject-centric computing can change the ‘search paradigm’ by providing direct, reliable access to ‘facts’. Of course, it will take a lot of effort to make this approach scalable. But recent enhancements in commercial and open source faceted search engines, and achievements in creating “knowledge maps”/“smart indices”, make me believe that we are not that far from being able to directly find the ‘facts’ we are interested in.

Subject-centric micro-blogging and Ontopedia’s knowledge map

Traditionally, when we think about the subject-centric approach to organizing information, we have in mind the equivalent of “master data” – main entities, their properties and relationships. This type of information is relatively static. Of course, the subject-centric approach also works well for representing/organizing information about “transactions” and “events”.

“Master data” (PSIs for people, places, companies, products etc.) is a conceptual frame/”endoskeleton” of Ontopedia’s knowledge map. For example, http://psi.ontopedia.net/Apple_Inc is a core, “master” entity.

Assertions such as “Apple Inc is a Company” and “Apple’s product line includes Mac Mini, iPhone, …” are also part of this core knowledge map.

But Ontopedia’s knowledge map is not limited to this relatively static information. Ontopedia’s knowledge map also has PSIs for events, such as
http://psi.ontopedia.net/Apple_reports_financial_results_Q4_2008
and http://psi.ontopedia.net/Apple_Event_October_14th_2008

“Master Data” combined with “Events” creates an amazingly powerful conceptual framework for mapping our knowledge.

Ontopedia’s knowledge map has an explicit concept of time and focuses on the “current moment on Earth at the human-scale level of the (real) world”, with recording of history and of forecasting results. History does not disappear in the knowledge map. For example, Ontopedia can “remember” that Apple Inc was called “Apple Computer Inc” at some point and that the eMac was in Apple’s product line. History is available for referencing and continues to play an essential role in organizing information.
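A minimal sketch of how history can be kept instead of overwritten (hypothetical structures, with dates used only for illustration): each name assertion is scoped with a validity interval, and a “rename” simply closes one interval and opens another:

names = []

def assert_name(subject_psi, name, valid_from, valid_to=None):
    names.append({"subject": subject_psi, "name": name,
                  "valid_from": valid_from, "valid_to": valid_to})

apple = "http://psi.ontopedia.net/Apple_Inc"
assert_name(apple, "Apple Computer Inc", "1977-01-03", "2007-01-09")
assert_name(apple, "Apple Inc",          "2007-01-09")   # still current

# History stays available for referencing:
historical_names = [n for n in names if n["valid_to"] is not None]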

Explicit modeling of time helped us introduce even more intriguing features such as Subject-centric micro-blogging. We are experimenting with “dynamic” associations and properties such as “Currently Reading [Person, Book]”, “Currently Located At [Person, City]”, “Currently Thinking About [Person, Subject]”, “My favorite link of the day”, etc.

To support this “dynamic” perspective on Ontopedia’s knowledge map, we recently added subject-centric RSS feeds. Each subject page in Ontopedia’s knowledge map has its own RSS feed, which provides quick access to all assertions about that specific subject. Each assertion has associated time stamps, which make it possible to track changes in the knowledge map and report them in RSS feeds.
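A minimal sketch of the feed-generation idea (a hypothetical helper, not the server's actual code): the time-stamped assertions about one subject are turned into RSS items, newest first, so an aggregator can track changes to that subject:

from email.utils import formatdate

def subject_feed(subject_name, assertions):
    # assertions are dicts with a "text" and a numeric "timestamp" field
    items = []
    for a in sorted(assertions, key=lambda a: a["timestamp"], reverse=True):
        items.append(
            "<item>"
            f"<title>{a['text']}</title>"
            f"<link>http://psi.ontopedia.net/{subject_name}</link>"
            f"<pubDate>{formatdate(a['timestamp'])}</pubDate>"
            "</item>"
        )
    return ("<rss version=\"2.0\"><channel>"
            f"<title>{subject_name}</title>" + "".join(items) +
            "</channel></rss>")

feed = subject_feed("Apple_Inc",
                    [{"text": "Apple Inc is a Company", "timestamp": 1225497600}])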

In addition to traditional “source-centric” RSS feeds, my RSS aggregator now has folders like People, Companies, etc. with subject-centric RSS feeds from Ontopedia’s knowledge map. These feeds are available on my laptop, but I also have a synchronized RSS aggregator on my mobile phone. A mobile RSS aggregator and a mobile browser allow me to work with Ontopedia’s knowledge map whenever I need it. It makes me feel like Subject-centric computing is (almost) here…

Carl Hewitt – Actor model, OWL, knowledge inconsistency and paraconsistent logic

ITConversations recently published Jon Udell’s interview with Carl Hewitt. In this interview – “Interdependent Message-Passing ORGs” – Carl Hewitt shares his ideas about distributed computation, the Actor model, inconsistent knowledge, paraconsistent logic and the semantic web.

Carl Hewitt’s work has been an inspiration to me for more than 20 years. Knowledge inconsistency is a fundamental reality of our life. When we build computer systems, we can ignore it and try to create artificial boundaries – artificial worlds with “guaranteed” knowledge consistency. The alternative approach is to accept from the beginning that we have to deal with inconsistency, and to create systems that can represent inconsistent knowledge, reason over inconsistent knowledge bases and use mechanisms that help keep inconsistency “under control”.

I made the choice many years ago in favor of this alternative approach and have used it in building many computer systems over the years. Our recent project – the Ontopedia PSI server – is not an exception. The Ontopedia PSI server allows representing opinions from various sources, including contradictory opinions. Ontopedia’s reasoning engine is justification-based (and, as everything in Ontopedia, a work in progress :), which means that the decision about each assertion is based on comparing various opinions and their justifications. Reasoning inside the Ontopedia PSI server is paraconsistent. The inference engine can find contradictory assertions in some areas of Ontopedia’s knowledge base, but local contradictions do not prevent the reasoning engine from inferring reasonable assertions in other areas of the knowledge base, and there is no ‘explosion of assertions’.
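A minimal sketch of the “local contradiction” idea (hypothetical structures, not Ontopedia's actual engine): opinions from different sources are compared per assertion, and a disagreement marks only that assertion as contradictory without spilling over into the rest of the knowledge base:

from collections import defaultdict

opinions = defaultdict(list)   # assertion -> list of (source, holds?)
opinions["Apple Inc is a Company"] += [("source_a", True), ("source_b", True)]
opinions["eMac is in Apple's product line"] += [("source_a", True),
                                                ("source_b", False)]

def evaluate(assertion):
    votes = {holds for _, holds in opinions[assertion]}
    if votes == {True}:
        return "accepted"
    if votes == {False}:
        return "rejected"
    return "contradictory"     # kept local; other assertions are unaffected

for a in opinions:
    print(a, "->", evaluate(a))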

Reasoning in the Ontopedia PSI server is also ‘adaptive’. We anticipate that when various sources ‘see’ the results of comparing various opinions and ‘see’ the consequences of their statements several ‘steps ahead’, those sources may change their original opinions.

The Ontopedia PSI server actually ‘likes’ contradictions. Contradictions are starting points for identifying errors, for negotiation, for improving knowledge models and, as a result, for knowledge evolution.

Resources:

Interdependent Message-Passing ORGs, interview on ITConversations

The new version of Ontopedia PSI server

The new version of the Ontopedia PSI server is out now. It is possible to represent various types of assertions related to subjects (names, occurrences, associations). The new PSI server also allows recording and integrating opinions of different users. Its internal knowledge representation is optimized for paraconsistent reasoning.

I started to play with some topics that I am interested in, for example Subject-centric Computing and Apple Inc.
As with a typical Topic Maps-based system, we can easily add new subject and assertion types; we are not limited to fixed domain models. In addition, the new PSI server supports recording of assertion provenance and five truth values.

We also tried to follow the Resource-Oriented Architecture: each subject, each assertion and each subject-centric group of assertions of the same type has its own URI and “page”.
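To illustrate (the URI templates below are made up for this post, not the PSI server's actual ones), each subject, each assertion and each group of same-typed assertions is addressable on its own:

SUBJECT_URI   = "http://psi.ontopedia.net/{subject}"
ASSERTION_URI = "http://psi.ontopedia.net/{subject}/assertions/{assertion_id}"
GROUP_URI     = "http://psi.ontopedia.net/{subject}/assertions/type/{assertion_type}"

# e.g. the subject page and one "group of assertions of the same type" page
print(SUBJECT_URI.format(subject="Apple_Inc"))
print(GROUP_URI.format(subject="Apple_Inc", assertion_type="product_line_includes"))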

The main goal of this version is to experiment with assertion-level subject-centric representations vs. a more traditional portal-based approach.

Serendipitous reuse and representations with basic ontological commitments

Steve Vinoski published a very interesting article: Serendipitous reuse. He also provided additional comments in his blog. The author explores the benefits of RESTful uniform interfaces based on the HTTP “verbs” GET, PUT, POST and DELETE for building extensible distributed systems. He also compares the RESTful approach with traditional SOA implementations based on strongly typed, operation-centric interfaces.

Serendipitous reuse is one of the main goals of Subject-centric computing. In addition to uniform interfaces, Subject-centric computing promotes the use of uniform representations with basic ontological commitments (as one of the possible representations).

One of the fundamental principles of the Resource-Oriented Architecture is support for multiple representations of the same resource. For example, if we have a RESTful service which collects information about people, a GET request can return multiple representations.
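A minimal sketch of how one GET could serve several representations via content negotiation (only two toy renderers here; the richer formats are shown in the examples that follow):

import json

def render_json(p):
    return json.dumps(p)

def render_text(p):
    return f"{p['first_name']} {p['last_name']} was born in {p['born_in']['name']}"

RENDERERS = {
    "application/json": render_json,
    "text/plain":       render_text,
}

def get_person(person, accept="application/json"):
    # pick a renderer based on the request's Accept header, defaulting to text
    return RENDERERS.get(accept, render_text)(person)

person = {"id": "John_Smith", "type": "Person", "first_name": "John",
          "last_name": "Smith", "born_in": {"id": "Boston_MA_US", "name": "Boston"}}
print(get_person(person, accept="text/plain"))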

Example using JSON:


{
	"id":          "John_Smith",
	"type":        "Person",
	"first_name":  "John",
	"last_name":   "Smith",	
	"born_in":      {
			   "id": "Boston_MA_US", 
			   "name": "Boston"
			}
} 

Example using one of the “domain specific” XML vocabularies:


<person id="John_Smith">
	<first_name>John</first_name>
	<last_name>Smith</last_name>
	<born_in ref="Boston_MA_US">Boston</born_in>
</person>	

Example using one of the “domain independent” XML vocabularies:


<object obj_id="John_Smith">
        <property prop_id="first_name" prop_name="first name">John</property>
        <property prop_id="last_name" prop_name="last name">Smith</property>
        <property prop_id="born_in" prop_name="born in" val_ref="Boston_MA_US">
                 Boston
        </property>
</object>	

Example using HTML:


<div class="object">
	<div class="data-property-value">
		<div class="property">first name</div>
		<div class="value">John</div>
	</div>	
	<div class="data-property-value">
		<div class="property">last name</div>
		<div class="value">Smith</div>
	</div>	
	<div class="object-property-value">
		<div class="property">born in</div>
		<div class="value">
			<a href="/Boston_MA_US">Boston</a>
		</div>
	</div>	
</div>	

Example using text:


John Smith was born in Boston

These five formats are examples of data-centric representations without built-in ontological commitments. These formats do not define any relationship between the representation and things in the “real world”. Programs which communicate using JSON, for example, do not “know” what “first_name” means. It is just a string that is used as a key in a hash table.

Creators of RESTful services typically define additional constraints and a default interpretation for the corresponding data-centric representations. For example, we can agree to use the “id” string in a JSON-based representation as an object identifier, and we can publish a human-readable document which describes and clarifies this agreement. But the key point is that this agreement is not part of the JSON format itself.

Even if we are talking about a representation based on a domain-specific XML vocabulary, the semantic interpretation lives outside this vocabulary and is part of an informal schema description (in comments or annotations).

Interestingly enough, the level of usefulness differs across representations. In the case of plain text, for example, a computer can show the text “as is”. It is also possible to do full-text indexing and to implement simple full-text search.

HTML-based representations add some structure, the ability to use styles and linking between resources. Some link analysis can help improve the results of basic full-text search.

If we look at representations based on Topic Maps, the situation is different. Topic Maps technology is a knowledge representation formalism, and it embeds a set of ontological commitments. Topic Maps-based representations, for example, commit to such categories as topics, subject identifiers, subject locators, names, occurrences (properties) and associations between topics. There is also a commitment to two association types: “instance-type” and “subtype-supertype”. Topic Maps also support contextual assertions (using scope).

In addition, Topic Maps promote the use of Published Subject Identifiers (PSIs) as a universal mechanism for identifying “things”.

Topic Maps-based representations are optimized for information merging. For example, computers can _automatically_ merge fragments produced by different RESTful services:

Fragment 1 (based on a draft of the Compact Syntax for Topic Maps, CTM):


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

Fragment 2:


g:Paris_FR - "Paris"; isa geo:city. 

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)

Result of automatic merging:


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

g:Paris_FR - "Paris"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)
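A minimal sketch of the merge rule that the fragments above illustrate (hypothetical Python structures, not a Topic Maps engine): topics are keyed by their identifiers, so fragments coming from different services unify on the same key and their assertions are combined:

fragment_1 = {
    "p:John_Smith":   {"isa": {"po:person"}, "names": {"John Smith"}},
    "g:Boston_MA_US": {"isa": {"geo:city"},  "names": {"Boston"}},
}
fragment_2 = {
    "g:Paris_FR":     {"isa": {"geo:city"},  "names": {"Paris"}},
    "p:John_Smith":   {"likes": {"g:Paris_FR"}},
}

def merge(*fragments):
    merged = {}
    for fragment in fragments:
        for topic_id, props in fragment.items():
            topic = merged.setdefault(topic_id, {})   # unify on the identifier
            for key, values in props.items():
                topic.setdefault(key, set()).update(values)
    return merged

merged = merge(fragment_1, fragment_2)
print(merged["p:John_Smith"])   # carries assertions from both fragments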

Like any other representation formalism, Topic Maps are not ideal. But Topic Maps enthusiasts think that Topic Maps capture a “robust set” of ontological commitments which can drastically improve our ability to organize and manage information and to achieve real reuse of information with added value.

Resource-Oriented Architecture and Subject-centric computing vs. traditional SOA: modeling business transactions

If we look at traditional SOA, business transactions are typically modeled as service operations that are part of a service contract. Operation invocations in traditional SOA are not treated as first-class “objects”. Operation invocations do not have their own identity. Components/processes inside a service and service clients cannot reference individual operation calls. The situation is different if we look at subject-centric and RESTful services.

If a client of some subject-centric (or RESTful) service needs to start a transaction, this client should create a new subject, a “request for a transaction”, with its own identity and internal state. The subject-centric service processes this request, and other subjects can be created/updated/deleted as a result of this operation. Service clients have direct access to the subjects that represent transactions. Clients can check the status of any initiated transaction. It is also possible to use a general query/search interface to find various subsets of transactions.
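A minimal sketch of this interaction pattern (the host, paths and payloads are hypothetical, not a real service contract): starting a transaction means creating a “request for a transaction” subject with its own URI, which the client can then query for status like any other subject:

import urllib.request, json

BASE = "http://example.org/subject-centric-service"

def start_transaction(payload):
    req = urllib.request.Request(
        f"{BASE}/transaction-requests",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # the new subject's URI comes back in the Location header
        return resp.headers["Location"]

def transaction_status(transaction_uri):
    with urllib.request.urlopen(transaction_uri) as resp:
        return json.load(resp)["status"]

# uri = start_transaction({"type": "purchase_order", "items": ["Mac Mini"]})
# print(transaction_status(uri))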

Service invocation results can be represented as a special kind of subject linked to the original requests for transactions. Subject-centric services can also record the “cause and effect” relationships that connect a request for a transaction with the results of carrying out this transaction, as a network of related “events”. Subject-centric computing promotes (and helps) building transparent services.

It is true that subject-centric services can generate many more subjects (and assertions about subjects) in comparison with modern SOA/object-oriented systems. But Subject-centric computing is in a unique position to leverage available hardware parallelism and distributed storage. Subject-centric services model changes in time differently from traditional computing systems: they do not do “updates”, they just add new assertions about subjects in a new time-specific context. Subject-centric services also have a built-in mechanism for merging assertions from multiple sources, so new assertions can be created on different physical storage devices. Computations in the subject-centric world can be described using data-flow abstractions, which allow natural parallelism.

Multi-touch interaction, iPhone and Subject-centric computing

If you follow news related to HCI (human-computer interaction), then you have probably seen the multi-touch interaction demonstrations by Jeff Han. You have probably also used (or played with) an iPhone or iPod touch. So you know what multi-touch interaction is about. This kind of interface goes hand in hand with Subject-centric computing. Why?

Multi-touch interaction promotes direct manipulation of various kinds of objects. The iPhone follows a more traditional application-centric paradigm (with smooth integration of different applications). Jeff Han, on the other hand, demonstrated an almost application-less interface. Not only “documents” but also the “things” we are interested in can be surfaced through a multi-touch interface. People, places, events, driving routes and songs can be represented as “subjects” in a multi-touch interface, and we can easily (naturally) interact with them. That is the way we would like to interact in a subject-centric computing environment.

A multi-touch interface translates gesture-based interactions into operations on subjects (the “things” that we are interested in). Subject-centric infrastructure can help implement the ‘glue’ that identifies and interconnects subjects “hosted” by various applications/services on desktops, intranets and the Internet.

OS X Leopard and subject-centric computing

I upgraded one of my Mac-based systems to OS X Leopard. It is great. I like it. But from the subject-centric perspective it is still more or less a traditional application/document-centric OS. How can we make it more subject-centric?

I think that most of my comments from this old blog entry still apply. Actually, I did many experiments with OS X Tiger, search, subject-centric documents and topic maps over the last couple of years. It is quite promising, but I did not have enough time to do the required Objective-C/Cocoa programming to build a real application with full desktop integration. It looks like it can be easier to implement the described features with Leopard:

– Objective-C now has a garbage collector
– built-in Ruby-Cocoa bridge and Xcode integration (good news for Ruby enthusiasts)
– streamlined support for document types (Uniform Type Identifiers)
– Dashcode (a simple way to create widgets)
– and… it can look so nice …

We also have a much better understanding of how public and personal subject identification servers can work, based on our experiments with Ontopedia and the PSI server. The missing part is desktop integration.