Tag Archives: Carl Hewitt

Carl Hewitt’s Direct Logic, inconsistency tolerant reasoning and Subject-centric computing

I have been fascinated by the idea of building computer systems which are inconsistency tolerant for many years. I usually address this problem from a practical perspective: I just try to write code that demonstrates the behavior that I would like to model. But I have always thought that it would be beneficial to have some kind of formal logic that can provide foundations for my heuristic approach. I have followed Carl Hewitt’s work for many years, and it seems that his inconsistency tolerant Direct Logic can play this foundational role. First, let’s take a look at how traditional logic handles contradictions.

In traditional logic, anything can be inferred from an inconsistency. If we have a contradiction about a proposition P, then we can infer any proposition Q:


	P, ¬P ⊢ Q

This is not very helpful if we want to talk about inconsistency tolerant reasoning…

In Carl Hewitt’s Direct Logic [1] the situation is different. The formula


	P, ¬P

describes a perfectly normal situation, but the meaning of this situation is different from that in traditional logic.

“Direct Logic supports direct inference ( ⊢T ) for an inconsistent theory T. ⊢T does not support either general proof by contradiction or disjunction introduction. However, ⊢T does support all other rules of natural deduction…” [1]
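Dropping disjunction introduction is what blocks the classical “explosion” shown above. The standard derivation of explosion (reconstructed here for illustration, not quoted from [1]) goes through disjunction introduction and disjunctive syllogism:

	1. P            (premise)
	2. P ∨ Q        (disjunction introduction from 1)
	3. ¬P           (premise)
	4. Q            (disjunctive syllogism from 2 and 3)

Because ⊢T does not allow step 2, an inconsistent pair P, ¬P no longer lets us derive an arbitrary Q.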

In traditional logic, if we have


	P ⊢T Q,  P ⊢T ¬Q

then we can infer ¬P in theory T (proof by contradiction). In Direct Logic we cannot make this kind of inference.

“Since truth is out the window for inconsistent theories, we have the following reformulation: Inference in a theory T (⊢T) carries argument from antecedents to consequents in chains of inference…” [1]

So if we have


	P, ¬P

it does not mean that P is true and not true at the same time. It means that we have arguments both for and against the proposition P.
Direct inference ⊢T does not “propagate truth”; it propagates arguments.

Even if we have


	P, ¬P

in Direct Logic we can still continue making inferences based on these propositions (of course, we may generate new contradictory propositions).

So if we have, for example,


	P, ¬P, P ⊢ Q, ¬P ⊢ ¬Q

we can infer


	Q, ¬Q

which means that we have arguments both for and against the proposition Q. Because we are talking here not about “truth” but about argumentation, this makes perfect sense.
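To make the argument-propagation view concrete, here is a minimal sketch of a forward-chaining engine that records arguments for P and for ¬P separately and keeps inferring without exploding. This is only a toy illustration, not Hewitt’s formalism; the names (infer, the “~” convention for negation) are invented for this example.

# Toy sketch of propagating arguments instead of truth values.
# Literals are strings; "~P" stands for the negation of "P".

from collections import defaultdict

def infer(facts, rules):
    # facts: dict mapping a literal to a set of arguments (strings describing support)
    # rules: list of (premise, conclusion) literal pairs
    # Returns the closure under the rules; contradictory literals may coexist.
    args = defaultdict(set)
    for lit, support in facts.items():
        args[lit] |= set(support)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for a in list(args.get(premise, ())):
                new_arg = a + " -> " + conclusion
                if new_arg not in args[conclusion]:
                    args[conclusion].add(new_arg)
                    changed = True
    return args

# Arguments both for and against P, plus the rules P -> Q and ~P -> ~Q.
facts = {"P": {"source-1 asserts P"}, "~P": {"source-2 asserts ~P"}}
rules = [("P", "Q"), ("~P", "~Q")]
result = infer(facts, rules)

# We end up with arguments for Q and for ~Q, but there is no explosion:
# an unrelated literal such as "R" gets no arguments at all.
print(result["Q"], result["~Q"], result.get("R", set()))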

How is this related to inference in Ontopedia (described here)?

If we have a foundational inconsistency tolerant logic, we can implement reasoning engines that support various inference heuristics on top of it. We can think of Ontopedia’s inference engine as a specific heuristic layer on top of an inconsistency tolerant logic such as Direct Logic.

We do have a concept of “truth” in Ontopedia, but it exists only at the heuristic layer, as a convenient metaphor. At the foundation layer the inference engine just generates propositions with argumentation. In Ontopedia we try to assign a truth value to each proposition, but this attempt can fail if we have contradictory arguments of the same strength. We also use contradiction detection for managing/controlling inference. In Ontopedia we do not want to infer lots of assertions based on contradictory propositions, so we suppress this kind of inference. Ontopedia is built to be a tool that helps Ontopedia’s user community create/evolve/improve Ontopedia’s community knowledge map. Identification and resolution of knowledge conflicts is a fundamental activity.
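As a rough illustration of that heuristic layer (the names and the numeric strength scale are invented for this sketch and are not Ontopedia’s actual implementation), assigning a truth value can be attempted by comparing argument strengths, and inference from propositions whose arguments tie can be suppressed:

# Toy sketch of a heuristic "truth" layer on top of argument propagation.
# Strengths are arbitrary numbers; in a real system they would come from
# source reliability, justification structure, etc.

def truth_value(args_for, args_against):
    # Each argument is a (description, strength) pair.
    # Returns True, False, or None when contradictory arguments have equal strength.
    strength_for = sum(weight for _, weight in args_for)
    strength_against = sum(weight for _, weight in args_against)
    if strength_for > strength_against:
        return True
    if strength_against > strength_for:
        return False
    return None  # unresolved conflict: no truth value assigned

def may_infer_from(args_for, args_against):
    # Suppress inference from propositions that are currently contradictory.
    return truth_value(args_for, args_against) is not None

# Contradictory arguments of equal strength: no truth value, inference suppressed.
p_for = [("source-1 asserts P", 1.0)]
p_against = [("source-2 asserts not-P", 1.0)]
print(truth_value(p_for, p_against))     # None
print(may_infer_from(p_for, p_against))  # False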

It is important to mention that we do not make the assumption that one found contradiction refutes the entire Ontopedia knowledge map. We assume that Ontopedia’s knowledge map can have multiple contradictions at any time. Ontopedia is an open system. We can have new users, and we can collect new information about new subjects. Openness naturally introduces new inconsistencies (and also helps to resolve some). But we do not just stand and watch the growing number of knowledge conflicts. We embed system mechanisms that allow us to monitor the level of Ontopedia’s inconsistency. Our assumption is that Ontopedia’s user community will try to resolve some of the existing knowledge conflicts, improve knowledge quality, and maintain a reasonable level of inconsistency.

Ontopedia’s inference engine makes it possible to set aside contradictory arguments and select one of the options as a “decision”. This means that the engine makes inferences based only on the “decision” and suppresses inferences based on the alternative. Probably not all users will agree with the decisions recorded in Ontopedia’s knowledge map. We are considering the possibility of “forking” the knowledge map into separate maps with their own user communities. The same assertions can be assimilated by some communities and ignored/rejected by others. In general, inconsistency tolerant reasoning allows various communities to exchange and utilize information even when they disagree with it.
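One possible way to model decisions and forking (a hypothetical sketch, not Ontopedia’s actual data model) is to let each community record which side of a conflict it accepts, so that its inference engine only uses the accepted alternative:

# Hypothetical sketch: per-community "decisions" over contradictory propositions.
# A decision selects one alternative; inference then proceeds only from that side.

def effective_facts(facts, decisions):
    # facts: set of literals, possibly containing both "P" and "~P"
    # decisions: dict mapping a positive literal to the alternative the community accepts
    # Returns the facts this community's inference engine is allowed to use.
    accepted = set(facts)
    for lit, chosen in decisions.items():
        rejected = "~" + lit if chosen == lit else lit
        accepted.discard(rejected)
    return accepted

facts = {"P", "~P", "R"}

# Two communities "fork" the same map by recording different decisions about P.
community_a = effective_facts(facts, {"P": "P"})    # {"P", "R"}
community_b = effective_facts(facts, {"P": "~P"})   # {"~P", "R"}
print(community_a, community_b)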

I am looking forward to exploring more in the areas of inconsistency tolerant logic and inconsistency tolerant reasoning.

References:

1. Carl Hewitt, “Common sense for concurrency and inconsistency tolerance using Direct Logic”, 2010

Paraconsistent Reasoning in Ontopedia

I gave a short presentation about paraconsistent reasoning in Ontopedia at TMRA 2009.

Summary:

  • Paraconsistent reasoning makes it possible to collect assertions from
    various sources and “safely” infer new information
  • Paraconsistent reasoning works well together with constraint
    languages (such as TMCL)
  • Paraconsistent reasoning supports an evolutionary approach to
    building large assertion systems
  • We do not need to “fix all errors” before we can reason

Carl Hewitt – Actor model, OWL, knowledge inconsistency and paraconsistent logic

ITConversations recently published Jon Udell’s interview with Carl Hewitt. In this interview, “Interdependent Message-Passing ORGs”, Carl Hewitt shares his ideas about distributed computation, the Actor model, inconsistent knowledge, paraconsistent logic, and the Semantic Web.

Carl Hewitt’s work has been an inspiration to me for more than 20 years. Knowledge inconsistency is a fundamental reality of our life. When we build computer systems, we can ignore it and try to create artificial boundaries, artificial worlds with “guaranteed” knowledge consistency. The alternative approach is to accept from the beginning that we have to deal with inconsistency and to create systems that can represent inconsistent knowledge, reason within inconsistent knowledge bases, and utilize mechanisms which help to keep inconsistency “under control”.

I made the choice in favor of this alternative approach many years ago and have used it in building many computer systems since. Our recent project, the Ontopedia PSI server, is no exception. The Ontopedia PSI server makes it possible to represent opinions from various sources, including contradictory opinions. Ontopedia’s reasoning engine is justification based (and, like everything in Ontopedia, a work in progress :) which means that the decision about each assertion is based on a comparison between various opinions and their justifications. Reasoning inside the Ontopedia PSI server is paraconsistent. The inference engine can find contradictory assertions in some areas of Ontopedia’s knowledge base. Local contradictions do not prevent the reasoning engine from inferring reasonable assertions in other areas of the knowledge base, and there is no ‘explosion of assertions’.

Reasoning in the Ontopedia PSI server is also ‘adaptive’. We anticipate that when various sources ‘see’ the results of comparing various opinions and ‘see’ the consequences of their statements several ‘steps ahead’, the sources may change their original opinions.

The Ontopedia PSI server actually ‘likes’ contradictions. Contradictions are starting points for identifying errors, for negotiation, for improving knowledge models and, as a result, for knowledge evolution.

Resources:

Interdependent Message-Passing ORGs, interview on ITConversations