The new version of Ontopedia PSI server

The new version of the Ontopedia PSI server is out now. It can represent various types of assertions related to subjects (names, occurrences, associations). The new PSI server also allows recording and integrating the opinions of different users. Its internal knowledge representation is optimized for paraconsistent reasoning.

I started to play with some topics that I am interested in, for example Subject-centric Computing and Apple Inc.
As with any typical Topic Maps-based system, we can easily add new subject and assertion types; we are not limited to fixed domain models. In addition, the new PSI server supports recording of assertion provenance and five truth values.

We also tried to follow the Resource-Oriented Architecture: each subject, each assertion, and each subject-centric group of assertions of the same type has its own URI and “page”.

The main goal of this version is to experiment with assertion-level subject-centric representations vs. a more traditional portal-based approach.

2008 Semantic Technology Conference: random observations

I am back from the Semantic Technology Conference. It is becoming bigger each year. This year there were more than a hundred sessions, a full day of tutorials, and a product exhibition. It was quite crowded and energizing.

Just some random observations:

– Oracle improves RDF/OWL support in the 11g database and considers RDF/OWL strategic, enabling technologies that will be leveraged in future versions of Oracle products.

– Yahoo uses RDF to organize content on various web sites. It also introduced SearchMonkey, an extension to the Yahoo search platform that allows publishers to provide more detailed information about information resources.

– Consumer-oriented web sites powered by semantic technologies are here. Twine, Freebase, and Powerset are good examples; more are to come.

– Resource-Oriented Architecture and RDF could be a very powerful combination. More and more people understand the value of exposing data through URIs in the form of information resources. The Linked Data initiative looks quite interesting.

– Some advanced semantic applications use knowledge representation formalisms that go beyond the basic RDF/OWL model. But RDF/OWL can be used to surface and exchange information based on W3C standards. There were lots of discussions about information provenance, trust, and “semantic spam”.

– It looks like there is a workable solution (compromise) for the “Web’s Identity Crisis”. The idea is to reserve the HTTP 303 (“See Other”) status code to indicate “concept URIs”. A 303 response should include an additional URI for a “See Other” information resource. This approach, combined with new PURL-like servers, makes it possible to keep RDF “as is” and to implement something close to the idea of Published Subject Identifiers.
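
The client side of this convention can be sketched in a few lines of Ruby (the `Response` struct and `classify` helper are illustrative, not a real PSI-server API):

```ruby
# Sketch of client-side handling of the 303 convention for concept URIs.
# The Response struct and classify helper are invented for illustration.
Response = Struct.new(:code, :location)

# Given the response to a GET on a URI, decide whether the URI names a
# concept (303 with a "See Other" document) or an information resource.
def classify(response)
  if response.code == 303
    # The URI identifies a concept; the Location header points to an
    # information resource that describes it.
    { kind: :concept, describing_resource: response.location }
  else
    { kind: :information_resource, describing_resource: nil }
  end
end

puts classify(Response.new(303, "http://psi.example.com/doc/Person"))[:kind]
# concept
```

The point of the compromise is visible here: the concept URI itself never returns a representation directly, so existing RDF tooling is untouched.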

– Franz demonstrated a new version of the AllegroGraph 64-bit RDFStore. Franz implemented support for Named Graphs (which can be used for representing weights, trust factors, and provenance) and incorporated geospatial and temporal libraries. Named Graphs make it possible to deal with contexts in RDF.

– Text analysis tools are becoming better and better. An interesting example is AllegroGraph: incorporating natural language processors makes it possible to extract entities and relationships with a reasonable level of precision (News Portal sample).

– Doug Lenat gave a great presentation at the conference about the history of the Cyc project. It looks like in 5-10 years we can expect “artificial intelligent assistants” with quite sophisticated abilities to reason.

Serendipitous reuse and representations with basic ontological commitments

Steve Vinoski published a very interesting article, Serendipitous Reuse. He also provided additional comments in his blog. The author explores the benefits of RESTful uniform interfaces, based on the HTTP “verbs” GET, PUT, POST, and DELETE, for building extensible distributed systems. He also compares the RESTful approach with traditional SOA implementations based on strongly typed, operation-centric interfaces.

Serendipitous reuse is one of the main goals of Subject-centric computing. In addition to uniform interfaces, Subject-centric computing promotes the use of uniform representations with basic ontological commitments (as one of the possible representations).

One of the fundamental principles of the Resource-Oriented Architecture is support for multiple representations of the same resource. For example, if we have a RESTful service that collects information about people, a GET request can return multiple representations.

Example using JSON:


{
	"id":          "John_Smith",
	"type":        "Person",
	"first_name":  "John",
	"last_name":   "Smith",	
	"born_in":      {
			   "id": "Boston_MA_US", 
			   "name": "Boston"
			}
} 

Example using one of the “domain specific” XML vocabularies:


<person id="John_Smith">
	<first_name>John</first_name>
	<last_name>Smith</last_name>
	<born_in ref="Boston_MA_US">Boston</born_in>
</person>	

Example using one of the “domain independent” XML vocabularies:


<object obj_id="John_Smith">
        <property prop_id="first_name" prop_name="first name">John</property>
        <property prop_id="last_name" prop_name="last name">Smith</property>
        <property prop_id="born_in" prop_name="born in" val_ref="Boston_MA_US">
                 Boston
        </property>
</object>	

Example using HTML:


<div class="object">
	<div class="data-property-value">
		<div class="property">first name</div>
		<div class="value">John</div>
	</div>	
	<div class="data-property-value">
		<div class="property">last name</div>
		<div class="value">Smith</div>
	</div>	
	<div class="object-property-value">
		<div class="property">born in</div>
		<div class="value">
			<a href="/Boston_MA_US">Boston</a>
		</div>
	</div>	
</div>	

Example using text:


John Smith was born in Boston

These five formats are examples of data-centric representations without built-in ontological commitments. They do not define any relationship between a representation and things in the “real world”. Programs that communicate using JSON, for example, do not “know” what “first_name” means. It is just a string that is used as a key in a hash table.

Creators of RESTful services typically define additional constraints and a default interpretation for the corresponding data-centric representations. For example, we can agree to use the “id” string in a JSON-based representation as an object identifier, and we can publish a human-readable document that describes and clarifies this agreement. But the key point is that this agreement is not part of the JSON format.
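
To make this concrete, here is a small Ruby sketch (the key names and the `KEY_ALIASES` mapping are invented for illustration): two services describe the same person with different keys, and a generic JSON consumer can only reconcile them through an agreement supplied outside the format.

```ruby
require "json"

# Two hypothetical services describe the same person but use different
# keys. Nothing in JSON itself tells a program that "first_name" and
# "given_name" mean the same thing -- that agreement lives outside the
# format.
a = JSON.parse('{"id":"John_Smith","first_name":"John"}')
b = JSON.parse('{"id":"John_Smith","given_name":"John"}')

# To a generic JSON consumer these are just hash keys:
puts a.keys.inspect  # ["id", "first_name"]
puts b.keys.inspect  # ["id", "given_name"]

# Any mapping between them must come from an out-of-band agreement:
KEY_ALIASES = { "given_name" => "first_name" }
normalized = b.transform_keys { |k| KEY_ALIASES.fetch(k, k) }
puts normalized["first_name"]  # John
```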

Even if we are talking about a representation based on a domain-specific XML vocabulary, the semantic interpretation is outside of this vocabulary and is part of an informal schema description (using comments or annotations).

Interestingly enough, the level of usefulness differs between representations. In the case of text, for example, a computer can show the text “as is”. It is also possible to do full-text indexing and to implement simple full-text search.

HTML-based representations add some structure, the ability to use styles, and linking between resources. Some link analysis can help improve the results of basic full-text search.

If we look at representations based on Topic Maps, the situation is different. Topic Maps technology is a knowledge representation formalism, and it embeds a set of ontological commitments. Topic Maps-based representations, for example, commit to such categories as topics, subject identifiers, subject locators, names, occurrences (properties), and associations between topics. There is also a commitment to two association types: “instance-type” and “subtype-supertype”. Topic Maps also support contextual assertions (using scope).

In addition, Topic Maps promote the use of Published Subject Identifiers (PSIs) as a universal mechanism for identifying “things”.

Topic Maps-based representations are optimized for information merging. For example, computers can _automatically_ merge fragments produced by different RESTful services:

Fragment 1 (based on a draft of the Compact Syntax for Topic Maps, CTM):


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

Fragment 2:


g:Paris_FR - "Paris"; isa geo:city. 

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)

Result of automatic merging:


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

g:Paris_FR - "Paris"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)
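
The merging behavior shown above can be sketched in Ruby (the hash-based fragment structures and the `merge_fragments` helper are illustrative, not a real Topic Maps API): topics are unified by shared identifier, and their names, types, and assertions are combined.

```ruby
# Minimal sketch of Topic Maps-style merging: topics from different
# fragments merge when they share an identifier. The data structures
# below are invented for illustration.
def merge_fragments(*fragments)
  topics = Hash.new { |h, k| h[k] = { names: [], types: [] } }
  assertions = []
  fragments.each do |frag|
    frag[:topics].each do |id, t|
      topics[id][:names] |= t[:names]   # union, no duplicates
      topics[id][:types] |= t[:types]
    end
    assertions |= frag[:assertions]
  end
  { topics: topics, assertions: assertions }
end

fragment1 = {
  topics: {
    "p:John_Smith"   => { names: ["John Smith"], types: ["po:person"] },
    "g:Boston_MA_US" => { names: ["Boston"],     types: ["geo:city"] }
  },
  assertions: [["po:born_in", "p:John_Smith", "g:Boston_MA_US"]]
}
fragment2 = {
  topics: {
    "p:John_Smith" => { names: ["John Smith"], types: ["po:person"] },
    "g:Paris_FR"   => { names: ["Paris"],      types: ["geo:city"] }
  },
  assertions: [["po:likes", "p:John_Smith", "g:Paris_FR"]]
}

merged = merge_fragments(fragment1, fragment2)
puts merged[:topics].size      # 3 -- John_Smith is merged, not duplicated
puts merged[:assertions].size  # 2
```

The key point is that no out-of-band agreement between the two services is needed: shared subject identifiers make the merge automatic.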

As with any other representation formalism, Topic Maps are not ideal. But Topic Maps enthusiasts think that Topic Maps capture a “robust set” of ontological commitments that can drastically improve our ability to organize and manage information and to achieve real reuse of information with added value.

Authoring topic maps using Ruby-based DSL: CTM, the way I like it

Designing and using Domain Specific Languages (DSLs) is a popular programming style in the Ruby community. I am experimenting with a Ruby-based DSL for authoring topic maps. Surprisingly, the result is very close to my view of the “ideal” CTM (Compact Topic Maps syntax).

I would just like to share a sample that demonstrates the main ideas of this approach. It is a piece of Ruby code that generates topic maps (behind the scenes).

The first topic map defines a simple ontology.


# some definitions to support DSL
# should be included

topic_map :ontology_tm do
  
  tm_base "http://www.example.com/topic_maps/people/"

  topic(:person) {
    sid   "http://psi.example.com/Person"
    name  "Person"
    isa :topic_type
  }
  
  topic(:first_name) {
    sid   "http://psi.example.com/first_name"
    name  "first name"
    isa :name
  }

  topic(:last_name) {
    sid   "http://psi.example.com/last_name"
    name  "last name"
    isa :name
  }
  
  topic(:web_page) {
    sid   "http://psi.example.com/web_page"
    name  "web page"
    isa :occurrence
    datatype :uri
  }

  topic(:age) {
    sid   "http://psi.example.com/age"
    name  "age"
    isa :occurrence
    datatype :integer
  }
  
  topic(:description) {
    sid   "http://psi.example.com/description"
    name  "description"
    isa :occurrence
    datatype :string
  }
  
  topic(:works_for) {
    sid   "http://psi.example.com/works_for"
    name  "works for"
    isa :property
    association :employment
    first_role :employee
    second_role :employer
    third_role :position_type  
    third_role_prefix :as
  }
  
  topic(:likes) {
    sid   "http://psi.example.com/likes"
    name  "likes"
    isa [:property, :association]
    association :likes
    first_role :person
    second_role :object
  }
  
end

The second topic map includes the ontology and asserts some facts.

	
topic_map :facts_tm do  
  
  tm_base "http://www.example.com/topic_maps/people/john_smith"

  tm_include :ontology_tm
 
  topic :john_smith do
      sid "http://psi.example.com/JohnSmith"
      name  "John Smith"
      name  "Johnny", :scope => :alt_name
      first_name "John" ; last_name  "Smith"
      web_page "http://a.example.com/JohnSmith.htm"
      works_for topic(:example_dot_com){
                              sid "http://www.example.com"
                              name "example.com"; isa :company
                         }, 
    	                :as => :program_manager, 
    	                :scope => :date_2008_02_28
      likes [:italian_opera, :new_york]
      age 35
      description "..."
  end

end

Subject-centric blog in XTM (Topic Maps interchange) format

XTM export has been available on the Subject-centric blog from the first day. But, I think, it was not obvious what readers could do with it. I added a link to the Subject-centric topic map in Omnigator (a Topic Maps browser).

I also recently made the XTM export compatible with the Expressing Dublin Core Metadata Using Topic Maps recommendations.

My plan is to connect (aggregate) Subject-centric with other Topic Maps-related blogs based on a core “Subject-Resource” ontology and a simple “Blogging” ontology.

I see XTM export as a small first step in promoting the SAVE AS XTM INITIATIVE and building a Topic Maps Grid.

Additional resources:

Expressing Dublin Core in Topic Maps

Subject-centric computing and robotics: Osaka will soon be known as the capital of the robotics world..?

I was in Kyoto for three days in December. Osaka-Kobe-Kyoto is a region with a high concentration of companies involved in robotics. I cannot stop thinking about robotics and Subject-centric computing after this trip. Traditionally, when we talk about Subject-centric computing (SCC) and Topic Maps (as an enabling technology), we assume more or less slowly evolving models. In the world of robotics, models evolve in real time.

There are many specialized technologies in robotics, such as motion control, sensor information processing, image and speech recognition, and planning. But the fundamental SCC concepts of identity and assertions-in-a-context are equally applicable to real- and close-to-real-time scenarios. Robots have to “understand” subjects that are important for humans. “Understanding” means (at least) explicit representations of these subjects inside robot “brains”.

An interesting observation is that robots will explore new subjects and generate a lot of new subject identifiers. For example, action planning generates goals and subgoals. Working in a real-life environment means constantly dealing with new subjects, constructing assertions and identifiers for these subjects, and trying to match them with subject representations in memory.

Create-a-new-subject-proxy-or-reuse-an-existing-one is a fundamental question in Subject-centric computing. Traditionally, we rely on a human to make this decision. In the world of robotics, we need to dive into the core of subject identity and the subject recognition process.

I like Lego Mindstorms. I am looking forward to trying some ideas related to Subject-centric computing and robotics in 2008. Specifically, I am interested in investigating these scenarios: creating a map of unknown “territory” using sensors, “identifying” subjects on a map in a dialog with a human, enriching information about subjects on a map with information from an external “information grid”, evolving the “territory”, and automatic recognition of “old” and “new” subjects.

Resource-Oriented Architecture and Subject-centric computing vs. traditional SOA: modeling business transactions

In traditional SOA, business transactions are typically modeled as service operations that are part of a service contract. Operation invocations in traditional SOA are not treated as first-class “objects”. Operation invocations do not have their own identity; components and processes inside a service, as well as service clients, cannot reference individual operation calls. The situation is different if we look at subject-centric and RESTful services.

If a client of a subject-centric (or RESTful) service needs to start a transaction, the client should create a new subject, a “request for a transaction”, with its own identity and internal state. The subject-centric service processes this request, and other subjects can be created, updated, or deleted as a result of this operation. Service clients have direct access to the subjects that represent transactions. Clients can check the status of any initiated transaction. It is also possible to use a general query/search interface for finding various subsets of transactions.
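
The transaction-as-subject pattern can be sketched in Ruby (the `TransactionService` class, its methods, and the `urn:tx:` identifier scheme are hypothetical, shown only to illustrate the idea):

```ruby
require "securerandom"

# Sketch of a subject-centric service where each transaction request is
# a first-class subject with its own identifier and state. The class
# and its methods are invented for illustration.
class TransactionService
  def initialize
    @transactions = {}
  end

  # Analogous to POST /transactions: creates a "request for a
  # transaction" subject and returns its identifier.
  def create_transaction(payload)
    id = "urn:tx:#{SecureRandom.uuid}"
    @transactions[id] = { payload: payload, status: :pending }
    id
  end

  # Analogous to GET /transactions/{id}: any client holding the
  # identifier can inspect the transaction's current state.
  def status(id)
    @transactions.fetch(id)[:status]
  end

  # A general query interface over transactions as subjects.
  def find(status:)
    @transactions.select { |_, t| t[:status] == status }.keys
  end
end

service = TransactionService.new
tx = service.create_transaction(amount: 100)
puts service.status(tx)                           # pending
puts service.find(status: :pending).include?(tx)  # true
```

Contrast this with an operation-centric SOA call, where the invocation itself has no identifier and cannot be queried after the fact.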

Service invocation results can be represented as a special kind of subject linked to the original requests for transactions. Subject-centric services can also record “cause and effect” relationships that connect a request for a transaction with the results of implementing that transaction, as a network of related “events”. Subject-centric computing promotes (and helps with) building transparent services.

It is true that subject-centric services can generate many more subjects (and assertions about subjects) than modern SOA/object-oriented systems. But Subject-centric computing is in a unique position to leverage available hardware parallelism and distributed storage. Subject-centric services model changes over time differently from traditional computing systems: they do not do “updates”; they just add new assertions about subjects in a new time-specific context. Subject-centric services also have a built-in mechanism for merging assertions from multiple sources, so new assertions can be created on different physical storage devices. Computations in the subject-centric world can be described using data flow abstractions, which allow perfect (natural) parallelism.
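
A minimal sketch of this append-only, time-scoped style of modeling change (the `AssertionStore` class and the identifiers used are illustrative, not an actual subject-centric store):

```ruby
# Instead of overwriting a value, the store appends a new assertion
# with a timestamp scope; the "current" value is simply the most
# recent assertion, and older assertions remain queryable.
class AssertionStore
  attr_reader :assertions

  def initialize(assertions = [])
    @assertions = assertions  # append-only log, never updated in place
  end

  def assert(subject, property, value, at)
    @assertions << { subject: subject, property: property,
                     value: value, at: at }
  end

  # Current view: the latest assertion for a subject/property pair.
  def current(subject, property)
    @assertions
      .select { |a| a[:subject] == subject && a[:property] == property }
      .max_by { |a| a[:at] }&.fetch(:value)
  end

  # Merging assertions from two sources is just concatenating logs,
  # so they can be produced on different storage devices.
  def merge(other)
    AssertionStore.new(@assertions + other.assertions)
  end
end

store = AssertionStore.new
store.assert("p:John_Smith", "po:works_for", "example.com", Time.utc(2008, 2, 28))
store.assert("p:John_Smith", "po:works_for", "another.example", Time.utc(2008, 6, 1))
puts store.current("p:John_Smith", "po:works_for")  # another.example

other = AssertionStore.new
other.assert("p:John_Smith", "po:likes", "g:Paris_FR", Time.utc(2008, 3, 1))
combined = store.merge(other)
puts combined.current("p:John_Smith", "po:likes")   # g:Paris_FR
```

Because nothing is ever updated in place, writes from independent sources never conflict; they only add to the log.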

Multi-touch interaction, iPhone and Subject-centric computing

If you follow news related to HCI (human-computer interaction), then you probably saw the multi-touch interaction demonstrations by Jeff Han. You have probably already used (or played with) an iPhone or iPod touch. So you know what multi-touch interaction is about. This kind of interface goes hand in hand with Subject-centric computing. Why?

Multi-touch interaction promotes direct manipulation of various kinds of objects. The iPhone follows a more traditional application-centric paradigm (with smooth integration of different applications). On the other hand, Jeff Han demonstrated an almost application-less interface. Not only “documents” but also the “things” that we are interested in can be surfaced through a multi-touch interface. People, places, events, driving routes, and songs can be represented as “subjects” in a multi-touch interface, and we can easily (naturally) interact with them. That is the way we would like to interact in a subject-centric computing environment.

A multi-touch interface translates gesture-based interactions into operations on subjects (the “things” that we are interested in). Subject-centric infrastructure can help implement the “glue” that makes it possible to identify and interconnect subjects “hosted” by various applications and services on desktops, intranets, and the Internet.

OS X Leopard and subject-centric computing

I upgraded one of my Mac-based systems to OS X Leopard. It is great. I like it. But from a subject-centric perspective it is still more or less a traditional application/document-centric OS. How can we make it more subject-centric?

I think that most of my comments from this old blog entry still apply. Actually, I did many experiments with OS X Tiger, search, subject-centric documents, and topic maps over the last couple of years. It is quite promising, but I did not have enough time to do the required Objective-C/Cocoa programming to build a real application with full desktop integration. It looks like it will be easier to implement the described features with Leopard:

– Objective-C now has a garbage collector
– built-in Ruby-Cocoa bridge and Xcode integration (good news for Ruby enthusiasts)
– streamlined support for document types (Uniform Type Identifiers)
– Dashcode (simple way to create widgets)
– and… it can look so nice …

We also have a much better understanding of how public and personal subject identification servers can work, based on our experiments with Ontopedia and the PSI server. The missing part is desktop integration.