Carl Hewitt’s Direct Logic, inconsistency tolerant reasoning and Subject-centric computing

For many years I have been fascinated by the idea of building computer systems that are inconsistency tolerant. I usually approach this problem from a practical perspective: I try to write code that demonstrates the behavior I would like to model. But I have always thought that it would be beneficial to have some kind of formal logic that could provide foundations for my heuristic approach. I have followed Carl Hewitt’s work for many years, and it seems that his inconsistency tolerant Direct Logic can play this foundational role. First, let’s take a look at how traditional logic handles contradictions.

In traditional logic, anything can be inferred from an inconsistency (the principle of explosion). If we have a contradiction about a proposition P, then we can infer any proposition Q:


	P, ¬P ├ Q

This is not very helpful if we want to talk about inconsistency tolerant reasoning…
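As a side note, the textbook derivation behind this “explosion” uses disjunction introduction: from P we get P ∨ Q, and ¬P then rules out the left disjunct, leaving Q. Here is that derivation sketched in Lean 4 (my own illustration, not taken from [1]); as quoted below, disjunction introduction is exactly one of the rules that Direct Logic drops.

	-- Classical explosion (ex falso quodlibet), sketched in Lean 4:
	-- from P and ¬P we can prove an arbitrary Q.
	theorem explosion (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
	  -- disjunction introduction: P gives P ∨ Q ...
	  (Or.inl hP : P ∨ Q).elim
	    (fun p => absurd p hnP)  -- ... ¬P eliminates the left case ...
	    id                       -- ... leaving Q in the right case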

In Carl Hewitt’s Direct Logic [1] the situation is different. The formula


	P, ¬P

describes a perfectly normal situation, but the meaning of this situation is different from its meaning in traditional logic.

“Direct Logic supports direct inference ( ├T ) for an inconsistency theory T. ├T does not support either general proof by contradiction or disjunction introduction. However, ├T does support all other rules of natural deduction…” [1]

In traditional logic, if we have


	P ├T Q,  P ├T ¬Q

then we can infer ¬P in theory T (proof by contradiction). In Direct Logic we cannot make this kind of inference.

“Since truth is out the window for inconsistent theories, we have the following reformulation: Inference in a theory T (├T) carries argument from antecedents to consequents in chains of inference…” [1]

So if we have


	P,  ¬P

it does not mean that P is true and not true at the same time. It means that we have arguments both for and against the proposition P.
Direct inference ├T does not “propagate truth”; it propagates arguments.

Even if we have


	P, ¬P

in Direct Logic we can still continue making inferences based on these propositions (although, of course, we may generate new contradictory propositions).

So if we have, for example,


	P,  ¬P,  P ├ Q,  ¬P ├ ¬Q

we can infer


	Q, ¬Q

which means that we have arguments both for and against the proposition Q. Because we are talking here not about “truth” but about argumentation, this makes perfect sense.
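To make this argument-propagation reading concrete, here is a minimal Python sketch (the class and method names are my own illustrative assumptions, not Ontopedia’s actual code and not Hewitt’s formalism): each proposition accumulates arguments for (“+”) and against (“-”), rules carry arguments along inference chains, and a contradiction is simply a proposition with arguments on both sides.

	# Minimal sketch: inference that propagates arguments, not truth values.
	class ArgumentStore:
	    def __init__(self):
	        self.args = {}  # (proposition, polarity) -> list of arguments

	    def add(self, prop, polarity, reason):
	        self.args.setdefault((prop, polarity), []).append(reason)

	    def get(self, prop, polarity):
	        return self.args.get((prop, polarity), [])

	    def apply_rule(self, src, src_pol, dst, dst_pol):
	        # Carry every argument for (src, src_pol) over to (dst, dst_pol):
	        # inference moves arguments along chains instead of computing truth.
	        for reason in self.get(src, src_pol):
	            self.add(dst, dst_pol, f"{src_pol}{src} => {dst_pol}{dst} [{reason}]")

	    def contradictory(self, prop):
	        return bool(self.get(prop, "+")) and bool(self.get(prop, "-"))

	store = ArgumentStore()
	store.add("P", "+", "source A")        #  P
	store.add("P", "-", "source B")        # ¬P
	store.apply_rule("P", "+", "Q", "+")   #  P ├  Q
	store.apply_rule("P", "-", "Q", "-")   # ¬P ├ ¬Q
	print(store.contradictory("Q"))        # True: arguments for and against Q

Note that apply_rule never halts on the contradictory P; it simply moves both bundles of arguments forward, which is what makes Q, ¬Q a well-behaved result rather than an error.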

How is this related to inference in Ontopedia (described here)?

If we have a foundational inconsistency tolerant logic, we can implement reasoning engines that support various inference heuristics on top of it. We can think of Ontopedia’s inference engine as a specific heuristic layer on top of an inconsistency tolerant logic such as Direct Logic.

We do have a concept of “truth” in Ontopedia, but it exists only at the heuristic layer, as a convenient metaphor. At the foundation layer the inference engine just generates propositions with argumentation. In Ontopedia we try to assign a truth value to each proposition, but this attempt can fail if we have contradictory arguments of the same strength. We also use contradiction detection for managing and controlling inference: we do not want to infer lots of assertions based on contradictory propositions, so we suppress this kind of inference. Ontopedia is built to be a tool that helps Ontopedia’s user community create, evolve, and improve Ontopedia’s community knowledge map. Identification and resolution of knowledge conflicts is a fundamental activity.
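A rough sketch of that heuristic layer, building on the ArgumentStore above (the strength function and names are illustrative assumptions, not Ontopedia’s implementation):

	# Heuristic layer: try to assign a truth value from argument strengths;
	# fail on a tie; use contradiction detection to control inference.
	def assign_truth(store, prop, strength):
	    """strength maps an argument to a number; returns True, False, or None."""
	    pro = max((strength(a) for a in store.get(prop, "+")), default=0.0)
	    con = max((strength(a) for a in store.get(prop, "-")), default=0.0)
	    if pro == con:
	        return None  # contradictory arguments of equal strength: no truth value
	    return pro > con

	def can_infer_from(store, prop):
	    # Suppress inference chains that start from a proposition argued both ways.
	    return not store.contradictory(prop)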

It is important to mention that we do not assume that a single discovered contradiction refutes Ontopedia’s entire knowledge map. We assume that Ontopedia’s knowledge map can contain multiple contradictions at any time. Ontopedia is an open system: we can get new users, and we can collect new information about new subjects. Openness naturally introduces new inconsistencies (and also helps to resolve some). But we do not just stand and watch the number of knowledge conflicts grow. We embed system mechanisms that allow us to monitor the level of Ontopedia’s inconsistency. Our assumption is that Ontopedia’s user community will try to resolve some of the existing knowledge conflicts, improve knowledge quality, and maintain a reasonable level of inconsistency.
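One plausible monitoring mechanism (an assumption on my part; Ontopedia’s actual measure may differ) is to track the share of propositions that currently have arguments on both sides:

	def inconsistency_level(store):
	    """Fraction of known propositions with arguments both for and against."""
	    props = {prop for (prop, _polarity) in store.args}
	    if not props:
	        return 0.0
	    return sum(store.contradictory(p) for p in props) / len(props)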

Ontopedia’s inference engine allows users to set aside contradictory arguments and select one of the options as a “decision”. This means that the engine makes inferences based only on the “decision” and suppresses inferences based on the alternative. Not all users will necessarily agree with the decisions recorded in Ontopedia’s knowledge map, so we are considering the possibility of “forking” the knowledge map into separate maps with their own user communities. The same assertions can be assimilated by some communities and ignored or rejected by others. In general, inconsistency tolerant reasoning allows various communities to exchange and utilize information even when they disagree with it.
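The decision mechanism and the possible “fork” could be sketched like this (hypothetical names again): each community keeps its own decision table over the same shared assertions, and inference from the rejected side of a decision is suppressed.

	class Community:
	    def __init__(self, name):
	        self.name = name
	        self.decisions = {}  # proposition -> chosen polarity, "+" or "-"

	    def decide(self, prop, polarity):
	        self.decisions[prop] = polarity

	    def usable(self, prop, polarity):
	        # An argument is usable for inference unless a decision rejected it.
	        return self.decisions.get(prop, polarity) == polarity

	a, b = Community("A"), Community("B")  # a "fork": shared assertions,
	a.decide("P", "+")                     # but two separate decision tables
	b.decide("P", "-")
	print(a.usable("P", "+"), b.usable("P", "+"))  # True False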

I am looking forward to exploring more in the areas of inconsistency tolerant logic and inconsistency tolerant reasoning.

References:

1. Carl Hewitt. Common sense for concurrency and inconsistency tolerance using Direct Logic. 2010.