If we look at traditional SOA, business transactions are typically modeled as service operations that are part of a service contract. Operation invocations in traditional SOA are not treated as first-class “objects”. Operation invocations do not have their own identity. Components/processes inside a service, and service clients, cannot reference individual operation calls. The situation is different if we look at subject-centric and RESTful services.
If a client of some subject-centric (or RESTful) service needs to start a transaction, this client should create a new subject “request for a transaction” with its own identity and internal state. The subject-centric service processes this request, and other subjects can be created/updated/deleted as a result of this operation. Service clients have direct access to the subjects that represent transactions. Clients can check the status of any initiated transaction. It is also possible to use a general query/search interface for finding various subsets of transactions.
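A minimal in-memory sketch of this idea (all class, method, and id names here are hypothetical, not from any real framework): each invocation becomes a subject with its own identity that clients can reference, inspect, and query later.

```python
import itertools
from datetime import datetime, timezone

class TransactionService:
    """Sketch of a subject-centric service: every transaction request
    becomes a first-class subject with its own identity and state."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._subjects = {}  # subject id -> subject state

    def create_request(self, payload):
        """Create a new 'request for a transaction' subject."""
        sid = "txn-%d" % next(self._ids)
        self._subjects[sid] = {
            "id": sid,
            "payload": payload,
            "status": "pending",
            "created": datetime.now(timezone.utc),
        }
        return sid  # the client keeps a direct reference to the invocation

    def get_status(self, sid):
        """Clients can check the status of any initiated transaction."""
        return self._subjects[sid]["status"]

    def find(self, predicate):
        """General query interface over all transaction subjects."""
        return [s for s in self._subjects.values() if predicate(s)]

service = TransactionService()
tid = service.create_request({"action": "transfer", "amount": 100})
print(tid, service.get_status(tid))  # txn-1 pending
```

Contrast this with a classic SOA operation call, which returns a value but leaves no addressable trace of the invocation itself.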
Service invocation results can be presented as a special kind of subject, linked to the original requests for transactions. Subject-centric services can also record “cause and effect” relationships that connect a request for a transaction and the results of carrying out this transaction as a network of related “events”. Subject-centric computing promotes (and helps us build) transparent services.
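The “cause and effect” network can be sketched as a simple set of assertions linking a request subject to the subjects it produced; following the links transitively yields an audit trail. The identifiers and relation name below are illustrative only.

```python
# Each entry is an assertion: (cause subject, relation, effect subject).
links = []

def record_effect(cause_id, effect_id, relation="produced"):
    """Assert that carrying out `cause_id` produced `effect_id`."""
    links.append((cause_id, relation, effect_id))

def effects_of(cause_id):
    """Follow cause->effect links transitively: the transaction's trail."""
    for cause, _, effect in links:
        if cause == cause_id:
            yield effect
            yield from effects_of(effect)

record_effect("txn-1", "order-42")     # the request created an order...
record_effect("order-42", "invoice-7") # ...which in turn created an invoice
print(list(effects_of("txn-1")))       # ['order-42', 'invoice-7']
```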
It is true that subject-centric services can generate many more subjects (and assertions about subjects) than modern SOA/object-oriented systems. But subject-centric computing is in a unique position to leverage available hardware parallelism and distributed storage. Subject-centric services model change over time differently from traditional computing systems. Subject-centric services do not do “updates”; they just add new assertions about subjects in a new time-specific context. Subject-centric services also have a built-in mechanism for merging assertions from multiple sources, so new assertions can be created on different physical storage devices. Computations in the subject-centric world can be described using data flow abstractions, which allow natural parallelism.
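The “no updates, only new assertions” idea can be sketched as an append-only store of time-stamped assertions, where merging stores from different devices is just set union (the property names and timestamps below are made up for illustration):

```python
# An assertion is a tuple: (subject, property, value, timestamp).
def assert_about(store, subject, prop, value, t):
    """No in-place update: every change is a new time-specific assertion."""
    store.add((subject, prop, value, t))

def current_value(store, subject, prop):
    """The latest assertion wins; older assertions remain as history."""
    matching = [a for a in store if a[0] == subject and a[1] == prop]
    return max(matching, key=lambda a: a[3])[2] if matching else None

# Two independent stores, e.g. on different physical devices.
node_a, node_b = set(), set()
assert_about(node_a, "order-42", "status", "created", 1)
assert_about(node_a, "order-42", "status", "paid", 2)
assert_about(node_b, "order-42", "status", "shipped", 3)

merged = node_a | node_b  # merging assertions is just set union
print(current_value(merged, "order-42", "status"))  # shipped
```

Because assertions are immutable and merge commutatively, the writes on `node_a` and `node_b` never conflict; this is what makes the model friendly to distributed storage.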
If you follow news related to HCI (human-computer interaction), then you have probably seen the multi-touch interaction demonstrations by Jeff Han. You probably already use (or have played with) an iPhone or iPod touch. So you know what multi-touch interaction is about. This kind of interface goes hand in hand with subject-centric computing. Why?
Multi-touch interaction promotes direct manipulation of various kinds of objects. The iPhone follows a more traditional application-centric paradigm (with smooth integration of different applications). Jeff Han, on the other hand, demonstrated an almost application-less interface. Not only “documents” but also the “things” we are interested in can be surfaced through a multi-touch interface. People, places, events, driving routes, and songs can be represented as “subjects” in a multi-touch interface, and we can interact with them easily and naturally. That is the way we would like to interact in a subject-centric computing environment.
A multi-touch interface translates gesture-based interactions into operations on subjects (the “things” we are interested in). Subject-centric infrastructure can provide the ‘glue’ that identifies and interconnects subjects “hosted” by various applications/services on desktops, intranets, and the Internet.
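A toy dispatch table makes the “gestures become operations on subjects” translation concrete. The gesture names, subject identifiers, and handlers here are all hypothetical, not taken from any real multi-touch framework:

```python
def open_subject(subject):
    """Surface a subject directly, without launching an 'application'."""
    return "opened %s" % subject

def link_subjects(a, b):
    """Connect two subjects, e.g. a song dragged onto a playlist."""
    return "linked %s to %s" % (a, b)

# Map each recognized gesture to an operation on the touched subjects.
GESTURES = {
    "tap": lambda subjects: open_subject(subjects[0]),
    "drag-onto": lambda subjects: link_subjects(subjects[0], subjects[1]),
}

def handle_gesture(name, subjects):
    return GESTURES[name](subjects)

print(handle_gesture("drag-onto", ["song:Imagine", "playlist:Favorites"]))
# linked song:Imagine to playlist:Favorites
```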
I upgraded one of my Mac-based systems to OS X Leopard. It is great. I like it. But from the subject-centric perspective it is still a more or less traditional application/document-centric OS. How can we make it more subject-centric?
I think that most of my comments from this old blog entry still apply. Actually, I did many experiments with OS X Tiger, search, subject-centric documents, and topic maps over the last couple of years. The results are quite promising, but I did not have enough time to do the Objective-C/Cocoa programming required to build a real application with full desktop integration. It looks like the described features can be easier to implement with Leopard:
– Objective-C now has a garbage collector
– a built-in Ruby-Cocoa bridge with Xcode integration (good news for Ruby enthusiasts)
– streamlined support for document types (Uniform Type Identifiers)
– Dashcode (a simple way to create widgets)
– and… it can look so nice …
We also have a much better understanding of how public and personal subject identification servers can work, based on our experiments with Ontopedia and the PSI server. The missing part is desktop integration.
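The role of published subject identifiers (PSIs) in such integration can be sketched as follows: two applications describe the same subject under different local ids, and their descriptions merge because they share a PSI. The URIs and record layouts below are illustrative only, not the Ontopedia PSI server's actual interface.

```python
from collections import defaultdict

# Two sources "hosting" the same subject under different local ids,
# but agreeing on its published subject identifier (PSI).
desktop = {
    "local:contact-17": {"psi": "http://psi.example.org/person/jdoe",
                         "email": "jdoe@example.org"},
}
web = {
    "web:author-3": {"psi": "http://psi.example.org/person/jdoe",
                     "homepage": "http://example.org/~jdoe"},
}

# Merge subject descriptions keyed by PSI rather than by local id.
merged = defaultdict(dict)
for source in (desktop, web):
    for record in source.values():
        props = {k: v for k, v in record.items() if k != "psi"}
        merged[record["psi"]].update(props)

print(merged["http://psi.example.org/person/jdoe"])
# {'email': 'jdoe@example.org', 'homepage': 'http://example.org/~jdoe'}
```

Desktop integration would mean that local applications publish such PSI-keyed descriptions, so the ‘glue’ layer can interconnect their subjects automatically.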