
iPad, Multi-touch interaction and Subject-centric computing

I am very excited about the iPad. It makes multi-touch interaction mainstream. The iPad revives the idea of “direct object manipulation”, one of the key concepts of Subject-centric computing, and introduces it to a new generation of developers and users.

On the surface, it looks like the iPad (and iPhone) application-centric model (with thousands of various apps) contradicts the application-less model of Subject-centric computing. In reality, applications developed for the iPad and iPhone can often be considered “functions” which can be combined to provide a subject-centric experience.

What is missing? Integration with a global knowledge map that can be used from various apps, and a mechanism for passing subject context that allows applications to be launched or continued in a specific subject context.

Apple provides support for embedding geo maps into any application using the MapKit framework. Let’s assume for a minute that Apple creates SubjectKit. This (imaginary) framework provides access to information collected in global sources such as Freebase and Wikipedia, the same way MapKit provides access to Google Maps. In this case, iPad and iPhone applications can leverage information collected in the global knowledge map. SubjectKit can also allow applications to record the current subject context in some shared storage available to all apps. When a user launches an app, the app can read the current subject context and use it to provide a subject-centric experience.
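
To make this concrete, here is a minimal sketch of what such a framework might expose, written in Swift purely for illustration. Every name here (Subject, KnowledgeMap, SubjectContext and SubjectKit itself) is imaginary; nothing like this exists in Apple’s SDKs.

```swift
import Foundation

// Hypothetical sketch of an imaginary "SubjectKit" API; none of these types exist.

/// A subject identified by a stable URI in a global knowledge map
/// (for example a Wikipedia or Freebase page).
struct Subject {
    let identifier: URL   // e.g. http://en.wikipedia.org/wiki/New_York_City
    let label: String     // human-readable name
}

/// Access to a global knowledge map, analogous to the way MapKit
/// exposes map data to any application.
protocol KnowledgeMap {
    /// Look up a subject by its global identifier.
    func subject(for identifier: URL) -> Subject?
    /// Subjects related to the given one (actors of a movie, movies set in a city, ...).
    func relatedSubjects(to subject: Subject) -> [Subject]
}

/// Shared storage for the user's current subject context,
/// readable and writable by any participating app.
protocol SubjectContext {
    func recordActiveSubject(_ subject: Subject)
    func activeSubjects() -> [Subject]
}

/// A trivial in-memory stand-in, only to make the sketch compile and run.
final class InMemorySubjectContext: SubjectContext {
    private var subjects: [Subject] = []
    func recordActiveSubject(_ subject: Subject) { subjects.append(subject) }
    func activeSubjects() -> [Subject] { subjects }
}
```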

Let’s take a look at iTunes, for example. It simplifies the “buying experience”, but it is not currently integrated with a global knowledge map. We often need to launch a browser and search the web to get additional information about the subjects we are interested in (movies, actors, directors, tracks, groups, …). iTunes has some reference data, but it is quite minimal in comparison with what we can get from a global knowledge map, and this reference data is limited to the iTunes app.

With the (imaginary) SubjectKit, iTunes (and any other app) on the iPad can leverage information available in the global knowledge map directly, without a manual search.

What about leveraging subject context across various applications? Let’s say I open my Weather app and check the weather in New York City. The Weather app can record that one of my currently active subjects has the identifier http://en.wikipedia.org/wiki/New_York_City. Let’s assume that as a next step I launch iTunes. iTunes (in my imaginary scenario) can retrieve this subject identifier from the shared context storage. If I click on the “Movies” tab, iTunes can suggest, for example, the movie “Sleepless in Seattle” in a new “Related to your active subjects” section. iTunes can do this because (in my imaginary scenario) it can leverage the current subject context, the internal iTunes database, and the global knowledge map.
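
Here is the same scenario as a toy Swift sketch. The shared context is faked with a plain dictionary and the knowledge-map query with a hard-coded table; in the imaginary SubjectKit these would be a system-level store and a real lookup service.

```swift
import Foundation

// Hypothetical walk-through of the Weather -> iTunes scenario; all names are made up.

// A dictionary standing in for the shared subject-context storage
// (identifier -> human-readable label).
var sharedSubjectContext: [URL: String] = [:]

// 1. The Weather app records the city the user just looked at.
let nyc = URL(string: "http://en.wikipedia.org/wiki/New_York_City")!
sharedSubjectContext[nyc] = "New York City"

// 2. A media app reads the active subjects and asks the knowledge map
//    for related items. The query is faked with a hard-coded table here.
let relatedMovies: [URL: [String]] = [
    nyc: ["Sleepless in Seattle"]
]

for (identifier, label) in sharedSubjectContext {
    let suggestions = relatedMovies[identifier] ?? []
    print("Related to your active subject \(label): \(suggestions)")
}
```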

With sharing subject context between apps comes the issue of protecting user privacy. The SubjectKit framework can maintain a “white list” of apps which are allowed to save into and restore from the current subject context. SubjectKit can block applications that try to access the subject context if they are not allowed to see it. SubjectKit can also prevent applications from saving active subjects into the shared subject context if they are not allowed to do so.
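
One possible shape for that check, again sketched with hypothetical types and app identifiers:

```swift
import Foundation

// Hypothetical whitelist policy for the shared subject context; nothing here is a real API.

enum ContextPermission { case read, write }

struct SubjectContextPolicy {
    /// App identifiers the user has approved, with the operations each may perform.
    private let whiteList: [String: Set<ContextPermission>]

    init(whiteList: [String: Set<ContextPermission>]) {
        self.whiteList = whiteList
    }

    func isAllowed(appID: String, to permission: ContextPermission) -> Bool {
        whiteList[appID]?.contains(permission) ?? false
    }
}

// Example: the Weather app may read and write subjects, a media player may only read them.
let policy = SubjectContextPolicy(whiteList: [
    "com.example.weather": [.read, .write],
    "com.example.mediaplayer": [.read]
])

print(policy.isAllowed(appID: "com.example.weather", to: .write))   // true
print(policy.isAllowed(appID: "com.example.unknown", to: .read))    // false
```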

Of course, this approach is not limited to OS X and the iPad. It is just that the combination of the iPad design, a powerful multi-touch interface, and the strength of OS X creates a winning platform for Subject-centric computing.