
Subject-centric micro-blogging and Ontopedia’s knowledge map

Traditionally, when we think about the subject-centric approach to organizing information, we have in mind the equivalent of “master data” – main entities, their properties and their relationships. This type of information is relatively static. Of course, the subject-centric approach also works well for representing/organizing information about “transactions” and “events”.

“Master data” (PSIs for people, places, companies, products etc.) forms the conceptual frame/“endoskeleton” of Ontopedia’s knowledge map. For example, http://psi.ontopedia.net/Apple_Inc is a core, “master” entity.

Assertions such as “Apple Inc is a Company” and “Apple’s product line includes Mac Mini, iPhone, …” are also part of this core knowledge map.

But Ontopedia’s knowledge map is not limited to this relatively static information. It also has PSIs for events, such as
http://psi.ontopedia.net/Apple_reports_financial_results_Q4_2008
and http://psi.ontopedia.net/Apple_Event_October_14th_2008

“Master data” combined with “events” creates an amazingly powerful conceptual framework for mapping our knowledge.

Ontopedia’s knowledge map has an explicit concept of time. It focuses on the “current moment on Earth, at the human scale of the (real) world”, while recording history and the results of forecasting. History does not disappear from the knowledge map. For example, Ontopedia can “remember” that Apple Inc was called “Apple Computer Inc” at some point and that the eMac was once in Apple’s product line. History remains available for referencing and continues to play an essential role in organizing information.

Explicit modeling of time has allowed us to introduce even more intriguing features, such as subject-centric micro-blogging. We are experimenting with “dynamic” associations and properties such as Currently Reading [Person, Book], Currently Located At [Person, City], Currently Thinking About [Person, Subject], “My favorite link of the day”, etc.
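
As a concrete illustration, a “dynamic” association can be thought of as an ordinary assertion whose validity is bounded in time. Here is a minimal Ruby sketch (the structure and all names are illustrative, not Ontopedia’s actual model):

require 'time'

# A hypothetical time-stamped assertion: a "dynamic" association is just
# an ordinary assertion whose validity interval is explicit.
Assertion = Struct.new(:type, :roles, :valid_from, :valid_to)

reading = Assertion.new(
  :currently_reading,
  { :person => "http://psi.example.com/JohnSmith",
    :book   => "http://psi.example.com/Some_Book" },
  Time.parse("2008-10-20"),
  nil                                  # nil = still current
)

# Superseding an assertion closes its validity interval instead of
# deleting it, so history stays available for referencing.
reading.valid_to = Time.parse("2008-10-27")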

To support this “dynamic” perspective on Ontopedia’s knowledge map, we recently added subject-centric RSS feeds. Each subject page in Ontopedia’s knowledge map has its own RSS feed, which provides quick access to all assertions about that subject. Each assertion has associated time stamps, which make it possible to track changes in the knowledge map and report them in RSS feeds.
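
For illustration, an item in such a feed might look roughly like this (a hypothetical shape, not the exact format of Ontopedia’s feeds):

<item>
  <title>Apple Inc: new assertion</title>
  <link>http://psi.ontopedia.net/Apple_Inc</link>
  <description>Apple Inc is a Company (asserted 2008-10-14)</description>
  <pubDate>Tue, 14 Oct 2008 10:00:00 GMT</pubDate>
</item>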

In addition to traditional “source-centric” RSS feeds, my RSS aggregator now has folders such as People, Companies, etc. with subject-centric RSS feeds from Ontopedia’s knowledge map. These feeds are available on my laptop, but I also have a synchronized RSS aggregator on my mobile phone. The mobile RSS aggregator and mobile browser allow me to work with Ontopedia’s knowledge map whenever I need it. It makes me feel like Subject-centric computing is (almost) here…

Carl Hewitt – Actor model, OWL, knowledge inconsistency and paraconsistent logic

ITConversations recently published Jon Udell’s interview with Carl Hewitt. In this interview, “Interdependent Message-Passing ORGs”, Carl Hewitt shares his ideas about distributed computation, the Actor model, inconsistent knowledge, paraconsistent logic and the semantic web.

Carl Hewitt’s work has been an inspiration to me for more than 20 years. Knowledge inconsistency is a fundamental reality of our life. When we build computer systems, we can ignore it, or we can try to create artificial boundaries: artificial worlds with “guaranteed” knowledge consistency. An alternative approach is to accept from the beginning that we have to deal with inconsistency, and to create systems that can represent inconsistent knowledge, reason over inconsistent knowledge bases and use mechanisms that help to keep inconsistency “under control”.

I made the choice many years ago in favor of this alternative approach and have used it in building many computer systems over the years. Our recent project, the Ontopedia PSI server, is no exception. The Ontopedia PSI server can represent opinions from various sources, including contradictory opinions. Ontopedia’s reasoning engine is justification-based (and, like everything in Ontopedia, a work in progress :), which means that the decision about each assertion is based on a comparison of various opinions and their justifications. Reasoning inside the Ontopedia PSI server is paraconsistent. The inference engine can find contradictory assertions in some areas of Ontopedia’s knowledge base, but local contradictions do not prevent it from inferring reasonable assertions in other areas, and there is no “explosion of assertions”.
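
The flavor of this approach can be shown with a small Ruby sketch (an illustration only: the five truth values below are hypothetical, and real justification-based comparison is much more involved than this simple rule):

# Illustrative five-valued truth set; Ontopedia's actual values may differ.
TRUTH_VALUES = [:true, :false, :both, :neither, :unknown]

# Opinions about assertions, collected from different sources.
opinions = Hash.new { |hash, key| hash[key] = [] }
opinions["Apple_Inc isa Company"]    << { :source => "source_a", :value => :true }
opinions["X presented_at TMRA_2008"] << { :source => "source_a", :value => :true }
opinions["X presented_at TMRA_2008"] << { :source => "source_b", :value => :false }

# Paraconsistent resolution: a local contradiction is recorded as :both,
# so it does not poison reasoning in other areas of the knowledge base.
def resolve(opinion_list)
  values = opinion_list.map { |opinion| opinion[:value] }.uniq
  return :unknown if values.empty?
  return values.first if values.size == 1
  :both   # contradictory opinions coexist; no "explosion of assertions"
end

opinions.each { |assertion, list| puts "#{assertion} => #{resolve(list)}" }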

Reasoning in the Ontopedia PSI server is also “adaptive”. We anticipate that when various sources “see” the results of comparing different opinions, and “see” the consequences of their statements several “steps” ahead, they may change their original opinions.

The Ontopedia PSI server actually “likes” contradictions. Contradictions are starting points for identifying errors, for negotiation, for improving knowledge models and, as a result, for knowledge evolution.

Resources:

Interdependent Message-Passing ORGs, interview on ITConversations

Watching an interview about Powerset

InfoQ published an interview with Tom Preston-Werner on Powerset, GitHub, Ruby and Erlang. I really like projects that try to analyze text/resources on the web and implement “smart search”. Powerset is one of these projects. But what I like even more is the approach where we explicitly represent facts/information items using open knowledge representation standards such as Topic Maps or RDF.

Topic Maps can play the role of “knowledge middleware” that helps to integrate the various components of the “smart search puzzle”. A topic map-based index makes it possible to represent and connect subjects and resources. Explicitly representing even a relatively small number of relationships (“facts”, “assertions”) between resources and subjects can dramatically change the world of smart search.

Topic Maps-based knowledge middleware is a disruptive technology: it replaces proprietary knowledge organization schemas and modules, and it allows multiple players to build various solutions that help to create or use a smart index.

The Topic Maps-based Ontopedia PSI server, for example, can represent assertions that are manually created by users or generated by algorithms. We do not have our own text analysis infrastructure, but I hope that in the future we can leverage services on the web (such as OpenCalais) that can perform text analysis on an “as needed” basis. The core capability of the Ontopedia PSI server is maintaining explicit representations of subjects that are important to people, together with assertions about these subjects.

The new version of the Ontopedia PSI server can play the role of an aggregator that extracts assertions from existing topic maps and topic map fragments hosted on other websites. Assertions from multiple sources are aggregated into one assertion set/information map/semantic index. The Ontopedia PSI server keeps track of information provenance and supports multiple truth values. The server can, for example, handle a situation where one source on the web asserts that Person X gave Presentation P and someone else makes the opposite assertion.

I think that natural language processing can play a huge role in improving search. An ideal text analysis tool should let me provide “clues” about the subjects in a text. I am looking for an equivalent of the kind of “binding” that is used quite often in programming these days. I would love to be able to pass a list of the main subjects, in the form of PSIs, to a text analysis tool (using embedded markup or attached external assertions). If I do so, I expect much more precise results. If I do not have an initial list of subjects, I expect some kind of suggestions from the text analysis tool that I can check against an existing information map.
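
For example, embedded markup might pass PSIs as “clues” to a text analysis tool. A hypothetical sketch (the data-psi attribute is invented for illustration):

<p>
  <span data-psi="http://psi.ontopedia.net/Apple_Inc">Apple</span>
  announced new products at the
  <span data-psi="http://psi.ontopedia.net/Apple_Event_October_14th_2008">October 14th event</span>.
</p>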

Ontopedia (like many other Topic Maps-based projects) promotes the use of Published Subject Identifiers (PSIs) for “all thinkable” subjects. For example, there is an identifier for the TMRA 2008 conference, http://psi.ontopedia.net/TMRA_2008, and there are identifiers for each presenter and presentation. Basic relationships between the various subjects are also “mapped”/explicitly represented. Each basic resource, such as a blog post, can have a small assertion set that describes its metadata (using the Dublin Core metadata vocabulary, for example) and perhaps some main assertions. Traditional websites can provide combined assertion sets in XTM or RDF, which can be consumed by semantic aggregators such as the Ontopedia PSI server. Text analysis is great (when it is good enough), but even simple (semi-)manual “mapping” of subjects, resources and relationships can change the search game.
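
For example, a blog post’s assertion set might look roughly like this in CTM (a loose sketch based on the CTM draft, with hypothetical prefixes and values):

r:powerset_interview_post
   isa o:blog_post;
   - "Watching an interview about Powerset";
   dc:date: "2008-10-20".

dc:creator(r:powerset_interview_post : o:resource, p:blog_author : o:person)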

When we manually “map” an existing resource, such as a conference website, for the first time, it can look like a complicated and time-consuming task. Mapping a website for another conference will take much less time. And, of course, in many cases it is possible to reverse the traditional website building/assertion extraction paradigm.

It is possible to build nice-looking, functional websites based on “assertion sets”. Topicmaps.com is a great example of this approach: it is driven by a topic map. Humans can enjoy the HTML-based representation of the site, and aggregators like the Ontopedia PSI server can consume the raw XTM-based representation and aggregate it with other assertion sets, such as the TMRA 2008 conference assertion set.

References

Interview link on InfoQ

Extending Ontopedia PSI server to handle PURLs: support for RDF, step one

I have been thinking about RDF support on the Ontopedia PSI server for quite some time. The Semantic Technology Conference that I attended this spring gave me some new ideas in this direction. I decided to follow the recommendations regarding PURLs (Persistent Uniform Resource Locators) from Eric Miller’s and David Wood’s presentation “Persistent Identifiers for the ‘Real Web’”. The Ontopedia PSI server was extended to handle PURLs.

Each Published Subject Identifier (PSI) on http://psi.ontopedia.net has an equivalent PURL on http://purl.ontopedia.net. For example, http://psi.ontopedia.net/TMRA_2008 has the corresponding PURL http://purl.ontopedia.net/TMRA_2008. What happens when we type the PURL http://purl.ontopedia.net/TMRA_2008 into our browser? The Ontopedia PURL server returns HTTP code 303 “See Other” with the “Location” header set to http://psi.ontopedia.net/TMRA_2008.

For RDF-based applications, code 303 is an indication that the URI does not correspond to a “digital resource”. Web browsers automatically follow the redirect to http://psi.ontopedia.net/TMRA_2008, which provides a nice subject/resource description.
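
The redirect logic itself is tiny. Here is a minimal Rack-style sketch in Ruby, assuming every PURL maps to a PSI with the same path (an illustration, not Ontopedia’s actual implementation):

require 'rack'

# Minimal PURL endpoint: answer every request with 303 "See Other",
# pointing at the PSI server page for the same subject.
purl_app = lambda do |env|
  subject_path = env["PATH_INFO"]   # e.g. "/TMRA_2008"
  [303, { "Location" => "http://psi.ontopedia.net#{subject_path}" }, []]
end

# Rack::Handler::WEBrick.run(purl_app, :Port => 9292)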

When we need to export RDF assertions from Ontopedia, we can do something like this:

<rdf:Description rdf:about="http://purl.ontopedia.net/TMRA_2008">
    <rdfs:label>
        TMRA 2008 (Topic Maps Research and Applications Conference)
    </rdfs:label>
    <rdfs:comment>
        Fourth International Conference on
        Topic Maps Research and Applications
    </rdfs:comment>
    <rdf:type rdf:resource="http://purl.ontopedia.net/Conference"/>
</rdf:Description>

In the Topic Maps-based version we can have:

<topic id="id_98c49a0d3d87f067a4ba13b6d2f6d086">
    <subjectIdentifier href="http://psi.ontopedia.net/TMRA_2008"/>
    <instanceOf>
        <topicRef href="http://psi.ontopedia.net/Conference"/>
    </instanceOf>
    <name>
        <value>
            TMRA 2008 (Topic Maps Research and Applications Conference)
        </value>
    </name>
    <occurrence>
        <type>
            <topicRef href="http://psi.ontopedia.net/Description"/>
        </type>
        <resourceData>
            Fourth International Conference on
            Topic Maps Research and Applications
        </resourceData>
    </occurrence>
</topic>

The RDF-based version uses PURLs, and the Topic Maps-based version uses PSIs, for identification of subjects/resources.

Reference:

Persistent Identifiers for the ‘Real Web’, David Wood and Eric Miller, May 2008 (PDF)

The new version of Ontopedia PSI server

The new version of the Ontopedia PSI server is out now. It can represent various types of assertions related to subjects (names, occurrences, associations). The new PSI server also allows recording and integrating the opinions of different users. Its internal knowledge representation is optimized for paraconsistent reasoning.

I have started to play with some topics that I am interested in, for example Subject-centric Computing and Apple Inc. As with any typical Topic Maps-based system, we can easily add new subject and assertion types; we are not limited to fixed domain models. In addition, the new PSI server supports recording of assertion provenance and five truth values.

We also tried to follow Resource-Oriented Architecture principles: each subject, each assertion, and each subject-centric group of assertions of the same type has its own URI and “page”.

The main goal of this version is to experiment with assertion-level, subject-centric representations vs. the more traditional portal-based approach.

2008 Semantic Technology Conference: random observations

I am back from the Semantic Technology Conference. It is becoming bigger and bigger each year. This year there were more than a hundred sessions, a full day of tutorials, and a product exhibition. It was quite crowded and energizing.

Just some random observations:

– Oracle is improving RDF/OWL support in the 11g database and considers RDF/OWL strategic/enabling technologies that will be leveraged in future versions of Oracle products.

– Yahoo uses RDF to organize content on various websites. It also introduced SearchMonkey, an extension to the Yahoo search platform that makes it possible to provide more detailed information about information resources.

– Consumer-oriented websites powered by semantic technologies are here. Twine, Freebase and Powerset are good examples; more to come.

– Resource-Oriented Architecture and RDF could be a very powerful combination. More and more people understand the value of exposing data through URIs in the form of information resources. The Linked Data initiative looks quite interesting.

– Some advanced semantic applications use knowledge representation formalisms that go beyond the basic RDF/OWL model. But RDF/OWL can still be used to surface/exchange information based on W3C standards. There were lots of discussions about information provenance, trust and “semantic spam”.

– It looks like there is a workable solution (compromise) for the “Web’s Identity Crisis”. The idea is to reserve the HTTP 303 (“See Other”) code for indicating “concept URIs”; the 303 response should include an additional URI for a “See Other” information resource. This approach, combined with new PURL-like servers, makes it possible to keep RDF “as is” and to implement something close to the idea of Published Subject Identifiers.

– Franz demonstrated a new version of the AllegroGraph 64-bit RDFStore. Franz implemented support for Named Graphs (which can be used for representing weights, trust factors and provenance) and incorporated geospatial and temporal libraries. Named Graphs make it possible to deal with contexts in RDF.

– Text analysis tools are becoming better and better. An interesting example is AllegroGraph: incorporating natural language processors allows entities and relationships to be extracted with a reasonable level of precision (News Portal sample).

– Doug Lenat gave a great presentation at the conference about the history of the Cyc project. It looks like in 5-10 years we can expect “artificial intelligent assistants” with quite sophisticated reasoning abilities.

Serendipitous reuse and representations with basic ontological commitments

Steve Vinoski published a very interesting article, “Serendipitous Reuse”. He also provided additional comments on his blog. The author explores the benefits of RESTful uniform interfaces, based on the HTTP “verbs” GET, PUT, POST and DELETE, for building extensible distributed systems. He also compares the RESTful approach with traditional SOA implementations based on strongly typed, operation-centric interfaces.

Serendipitous reuse is one of the main goals of Subject-centric computing. In addition to uniform interfaces, Subject-centric computing promotes the use of uniform representations with basic ontological commitments (as one of the possible representations).

One of the fundamental principles of Resource-Oriented Architecture is support for multiple representations of the same resource. For example, if we have a RESTful service that collects information about people, a GET request can return any of several representations.
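
One common way to serve these alternatives from a single URI is HTTP content negotiation on the Accept header. A minimal Ruby sketch (hypothetical, not tied to any particular service):

# Pick a representation of the same resource based on the Accept header.
def representation_for(accept_header)
  case accept_header
  when %r{application/json} then :json
  when %r{application/xml}  then :xml
  when %r{text/html}        then :html
  else                           :text
  end
end

representation_for("application/json")   # => :json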

Example using JSON:


{
	"id":          "John_Smith",
	"type":        "Person",
	"first_name":  "John",
	"last_name":   "Smith",	
	"born_in":      {
			   "id": "Boston_MA_US", 
			   "name": "Boston"
			}
} 

Example using one of the “domain specific” XML vocabularies:


<person id="John_Smith">
	<first_name>John</first_name>
	<last_name>Smith</last_name>
	<born_in ref="Boston_MA_US">Boston</born_in>
</person>	

Example using one of the “domain independent” XML vocabularies:


<object obj_id="John_Smith">
        <property prop_id="first_name" prop_name="first name">John</property>
        <property prop_id="last_name" prop_name="last name">Smith</property>
        <property prop_id="born_in" prop_name="born in" val_ref="Boston_MA_US">
                 Boston
        </property>
</object>	

Example using HTML:


<div class="object">
	<div class="data-property-value">
		<div class="property">first name</div>
		<div class="value">John</div>
	</div>	
	<div class="data-property-value">
		<div class="property">last name</div>
		<div class="value">Smith</div>
	</div>	
	<div class="object-property-value">
		<div class="property">born in</div>
		<div class="value">
			<a href="/Boston_MA_US">Boston</a>
		</div>
	</div>	
</div>	

Example using text:


John Smith was born in Boston

These five formats are examples of data-centric representations without built-in ontological commitments. They do not define any relationship between the representation and things in the “real world”. Programs that communicate using JSON, for example, do not “know” what “first_name” means; it is just a string used as a key in a hash table.

Creators of RESTful services typically define additional constraints and a default interpretation for the corresponding data-centric representations. For example, we can agree to use the “id” string in a JSON-based representation as an object identifier, and we can publish a human-readable document that describes and clarifies this agreement. But the key point is that this agreement is not part of the JSON format itself.

Even if we are talking about a representation based on a domain-specific XML vocabulary, the semantic interpretation lies outside the vocabulary and is part of an informal schema description (in comments or annotations).

Interestingly enough, the level of usefulness differs between representations. In the case of plain text, for example, a computer can show the text “as is”. It is also possible to do full-text indexing and implement simple full-text search.

HTML-based representations add some structure, the ability to use styles, and linking between resources. Some link analysis can help to improve the results of basic full-text search.

If we look at representations based on Topic Maps, the situation is different. Topic Maps technology is a knowledge representation formalism, and it embeds a set of ontological commitments. Topic Maps-based representations, for example, commit to such categories as topics, subject identifiers, subject locators, names, occurrences (properties) and associations between topics. There is also a commitment to two association types: “instance-type” and “subtype-supertype”. Topic Maps also support contextual assertions (using scope).

In addition, Topic Maps promote the use of Published Subject Identifiers (PSIs) as a universal mechanism for identifying “things”.

Topic Maps-based representations are optimized for information merging. For example, computers can _automatically_ merge fragments produced by different RESTful services:

Fragment 1 (based on a draft of the Compact Topic Maps syntax, CTM):


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

Fragment 2:


g:Paris_FR - "Paris"; isa geo:city. 

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)

Result of automatic merging:


p:John_Smith
   isa po:person; 
   - "John Smith"; 
   - "John" @ po:first_name; 
   - "Smith" @ po:last_name
.

g:Boston_MA_US - "Boston"; isa geo:city. 

g:Paris_FR - "Paris"; isa geo:city. 

po:born_in(p:John_Smith : po:person, g:Boston_MA_US : geo:location)

po:likes(p:John_Smith : po:person, g:Paris_FR : o:object)
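
The merge rule behind this example is simple: topics with the same subject identifier are the same topic, and their assertion sets are unioned. A minimal Ruby sketch of the rule (hypothetical data structures, not a real Topic Maps engine):

# Each fragment maps a subject identifier to assertions about that subject.
fragment_1 = {
  "p:John_Smith"   => ['isa po:person', '- "John Smith"'],
  "g:Boston_MA_US" => ['- "Boston"', 'isa geo:city']
}
fragment_2 = {
  "p:John_Smith" => ['po:likes g:Paris_FR'],
  "g:Paris_FR"   => ['- "Paris"', 'isa geo:city']
}

# Same subject identifier => same topic; assertion lists are unioned.
merged = fragment_1.merge(fragment_2) { |_psi, a, b| (a + b).uniq }

merged.each { |psi, assertions| puts "#{psi}: #{assertions.join('; ')}" }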

Like any other representation formalism, Topic Maps are not ideal. But Topic Maps enthusiasts think that Topic Maps capture a “robust set” of ontological commitments that can drastically improve our ability to organize and manage information, and to achieve real reuse of information with added value.

Authoring topic maps using Ruby-based DSL: CTM, the way I like it

Designing and using Domain Specific Languages (DSLs) is a popular programming style in the Ruby community. I am experimenting with a Ruby-based DSL for authoring topic maps. Surprisingly, the result is very close to my view of the “ideal” CTM (Compact Topic Maps syntax).

I would just like to share a sample that demonstrates the main ideas of this approach. It is a piece of Ruby code that generates topic maps (behind the scenes).

The first topic map defines a simple ontology.


# some definitions to support DSL
# should be included

topic_map :ontology_tm do
  
  tm_base "http://www.example.com/topic_maps/people/"

  topic(:person) {
    sid   "http://psi.example.com/Person"
    name  "Person"
    isa :topic_type
  }
  
  topic(:first_name) {
    sid   "http://psi.example.com/first_name"
    name  "first name"
    isa :name
  }

  topic(:last_name) {
    sid   "http://psi.example.com/last_name"
    name  "last name"
    isa :name
  }
  
  topic(:web_page) {
    sid   "http://psi.example.com/web_page"
    name  "web page"
    isa :occurrence
    datatype :uri
  }

  topic(:age) {
    sid   "http://psi.example.com/age"
    name  "age"
    isa :occurrence
    datatype :integer
  }
  
  topic(:description) {
    sid   "http://psi.example.com/description"
    name  "description"
    isa :occurrence
    datatype :string
  }
  
  topic(:works_for) {
    sid   "http://psi.example.com/works_for"
    name  "works for"
    isa :property
    association :employment
    first_role :employee
    second_role :employer
    third_role :position_type  
    third_role_prefix :as
  }
  
  topic(:likes) {
    sid   "http://psi.example.com/likes"
    name  "likes"
    isa [:property, :association]
    association :likes
    first_role :person
    second_role :object
  }
  
end

The second topic map includes the ontology and asserts some facts.

topic_map :facts_tm do  
  
  tm_base "http://www.example.com/topic_maps/people/john_smith"

  tm_include :ontology_tm
 
  topic :john_smith do
      sid "http://psi.example.com/JohnSmith"
      name  "John Smith"
      name  "Johnny", :scope => :alt_name
      first_name "John" ; last_name "Smith"
      web_page "http://a.example.com/JohnSmith.htm"
      works_for topic(:example_dot_com){
                        sid "http://www.example.com"
                        name "example.com"; isa :company
                   },
                   :as => :program_manager,
                   :scope => :date_2008_02_28
      likes [:italian_opera, :new_york]
      age 35
      description <<-DESC
        ...
      DESC
  end

end

Subject-centric blog in XTM (Topic Maps interchange) format

XTM export has been available on the Subject-centric blog from day one. But, I think, it was not obvious what readers could do with it. I have added a link to the Subject-centric topic map in Omnigator (a Topic Maps browser).

I also recently made the XTM export compatible with the “Expressing Dublin Core Metadata Using Topic Maps” recommendation.

My plan is to connect (aggregate) Subject-centric with other Topic Maps-related blogs, based on a core “Subject-Resource” ontology and a simple “Blogging” ontology.

I see XTM export as a small first step in promoting the SAVE AS XTM initiative and building a Topic Maps Grid.

Additional resources:

Expressing Dublin Core in Topic Maps