Tag Archives: Paraconsistent logic

Carl Hewitt’s Direct Logic, inconsistency tolerant reasoning and Subject-centric computing

I have been fascinated for many years by the idea of building computer systems that are inconsistency tolerant. I usually approach this problem from a practical perspective: I simply try to write code that demonstrates the behavior I would like to model. But I have always thought that it would be beneficial to have some kind of formal logic that could provide foundations for my heuristic approach. I have followed Carl Hewitt’s work for many years, and it seems that his inconsistency tolerant Direct Logic can play this foundational role. First, let’s take a look at how traditional logic handles contradictions.

In traditional logic, anything can be inferred from an inconsistency. If we have a contradiction about a proposition P, then we can infer any proposition Q:


	P, ¬P ⊢ Q

This is not very helpful if we want to talk about inconsistency tolerant reasoning…

In Carl Hewitt’s Direct Logic [1] the situation is different. The formula


	P, ¬P

describes a perfectly normal situation, but the meaning of this situation is different from that in traditional logic.

“Direct Logic supports direct inference ( ⊢T ) for an inconsistent theory T. ⊢T does not support either general proof by contradiction or disjunction introduction. However, ⊢T does support all other rules of natural deduction…” [1]

In traditional logic if we have


	P ⊢T Q,  P ⊢T ¬Q

then we can infer ¬P in theory T (this is proof by contradiction). In Direct Logic we cannot make this kind of inference.

“Since truth is out the window for inconsistent theories, we have the following reformulation: Inference in a theory T ( ⊢T ) carries argument from antecedents to consequents in chains of inference…” [1]

So if we have


	P, ¬P

it does not mean that P is true and not true at the same time. It means that we have arguments both for and against the proposition P.
Direct inference ⊢T does not “propagate truth”; it propagates arguments.

Even if we have


	P, ¬P

in Direct Logic we can still continue making inferences based on these propositions (of course, we may generate new contradictory propositions).

So if we have, for example,


	P,  ¬P,  P ⊢ Q,  ¬P ⊢ ¬Q

we can infer


	Q, ¬Q

which means that we have arguments both for and against the proposition Q. Because we are talking here not about “truth” but about argumentation, this makes perfect sense.
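To make this concrete, here is a minimal sketch of what argument propagation could look like in code. It is my own illustration, not an implementation of Direct Logic: the ArgumentStore class, the rule labels, and the source names are all invented for the example. Rules carry arguments from antecedents to consequents, and holding both P and ¬P simply produces arguments on both sides of derived propositions, without licensing arbitrary conclusions.

    from collections import defaultdict

    class ArgumentStore:
        """Keeps arguments for and against propositions instead of truth values.

        A proposition P is keyed as ("P", True); its negation ¬P as ("P", False).
        This is only an illustrative sketch of argument propagation.
        """

        def __init__(self):
            # (proposition, polarity) -> set of argument labels
            self.arguments = defaultdict(set)

        def add_argument(self, prop, polarity, argument):
            self.arguments[(prop, polarity)].add(argument)

        def apply_rule(self, antecedent, consequent, label):
            """Carry every argument for the antecedent over to the consequent."""
            for a in self.arguments[antecedent]:
                self.arguments[consequent].add(f"{label}({a})")

        def status(self, prop):
            return {"for": sorted(self.arguments[(prop, True)]),
                    "against": sorted(self.arguments[(prop, False)])}

    store = ArgumentStore()
    store.add_argument("P", True, "source-1")    # P
    store.add_argument("P", False, "source-2")   # ¬P

    # P ⊢ Q and ¬P ⊢ ¬Q, expressed as argument-carrying rules
    store.apply_rule(("P", True), ("Q", True), "P-implies-Q")
    store.apply_rule(("P", False), ("Q", False), "notP-implies-notQ")

    print(store.status("Q"))
    # {'for': ['P-implies-Q(source-1)'], 'against': ['notP-implies-notQ(source-2)']}
    # Nothing unrelated is ever inferred from the contradiction: no explosion.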

How is this related to inference in Ontopedia (described below)?

If we have a foundational inconsistency tolerant logic, we can implement reasoning engines that support various inference heuristics on top of it. We can think of Ontopedia’s inference engine as a specific heuristic layer on top of an inconsistency tolerant logic such as Direct Logic.

We do have a concept of “truth” in Ontopedia, but it exists only at the heuristic layer as a convenient metaphor. At the foundation layer the inference engine just generates propositions with argumentation. In Ontopedia we try to assign a truth value to each proposition, but this attempt can fail if we have contradictory arguments of the same strength. We also use contradiction detection for managing and controlling inference. In Ontopedia we do not want to infer lots of assertions based on contradictory propositions, so we suppress this kind of inference. Ontopedia is built to be a tool that helps Ontopedia’s user community to create, evolve, and improve Ontopedia’s community knowledge map. Identification and resolution of knowledge conflicts is a fundamental activity.
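As a rough illustration of such a heuristic layer (the strength values and the “unknown” fallback are my own sketch, not Ontopedia’s actual code), a truth-assignment step can compare the strongest argument on each side and refuse to pick a value when they tie:

    def decide_truth(args_for, args_against):
        """Heuristic truth assignment over weighted arguments.

        args_for / args_against: lists of (label, strength) pairs.
        Returns "true", "false", or "unknown" when the sides are balanced.
        """
        best_for = max((s for _, s in args_for), default=0)
        best_against = max((s for _, s in args_against), default=0)
        if best_for > best_against:
            return "true"
        if best_against > best_for:
            return "false"
        return "unknown"   # contradictory arguments of the same strength

    print(decide_truth([("source-1", 2)], [("source-2", 2)]))  # unknown
    print(decide_truth([("source-1", 3)], [("source-2", 1)]))  # true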

It is important to mention that we do not assume that a single discovered contradiction refutes Ontopedia’s entire knowledge map. We assume that Ontopedia’s knowledge map can contain multiple contradictions at any time. Ontopedia is an open system: we can gain new users and collect new information about new subjects. Openness naturally introduces new inconsistencies (and also helps to resolve some). But we do not just sit and watch the number of knowledge conflicts grow. We embed system mechanisms that allow us to monitor the level of Ontopedia’s inconsistency. Our assumption is that Ontopedia’s user community will try to resolve some of the existing knowledge conflicts and can improve knowledge quality and keep inconsistency at a reasonable level.

Ontopedia’s inference engine allows users to ignore contradictory arguments and select one of the options as a “decision”. This means that the engine makes inferences based only on the “decision” and suppresses inferences based on the alternative. Probably not all users will agree with the decisions recorded in Ontopedia’s knowledge map. We consider the possibility of “forking” the knowledge map into separate maps with their own user communities. The same assertions can be assimilated by some communities and ignored or rejected by others. In general, inconsistency tolerant reasoning allows various communities to exchange and utilize information even when they disagree with it.
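For illustration only (the record structure and community names are hypothetical, not Ontopedia’s API), a recorded decision can mark which side of a conflict a community accepts, and the inference step can then skip rules whose antecedents that community has rejected:

    # Hypothetical sketch: per-community decisions that gate inference.
    decisions = {
        "community-a": {"P": True},    # community A accepts P, rejects ¬P
        "community-b": {"P": False},   # community B decides the other way
    }

    rules = [
        ("P", True, ("Q", True)),      # P  ⊢ Q
        ("P", False, ("Q", False)),    # ¬P ⊢ ¬Q
    ]

    def infer(community):
        accepted = decisions[community]
        derived = []
        for prop, polarity, consequent in rules:
            # Only fire rules whose antecedent matches this community's decision;
            # inferences based on the rejected alternative are suppressed.
            if accepted.get(prop) == polarity:
                derived.append(consequent)
        return derived

    print(infer("community-a"))   # [('Q', True)]
    print(infer("community-b"))   # [('Q', False)]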

I am looking forward to exploring more in the areas of inconsistency tolerant logic and inconsistency tolerant reasoning.

References:

1. Carl Hewitt, Common sense for concurrency and inconsistency tolerance using Direct Logic, 2010.

Inference in Ontopedia

I just finished reading “Semantic Web for the Working Ontologist: Modeling in RDF, RDFS and OWL”. Great book! Lots of examples and deep exploration of Semantic Web fundamentals. It inspired me… not to use OWL, no… but to describe how we approach inference/reasoning in Ontopedia.

There are several fundamental principles that we try to follow in developing inference capabilities in Ontopedia.

Paraconsistency

We assume that Ontopedia’s knowledge base can contain contradictions at any time. We try to develop a system that can make “reasonable” inferences within an inconsistent knowledge base.

Non-monotonic, Adaptive Knowledge Base

People change opinions, modify existing theories, and create new ones (formal and informal). People learn new things, and they can be convinced and taught (sometimes). Both general and personal knowledge evolve. We would like to support these “subject-centric” features.

In simple cases, Ontopedia users can change their factual assertions. This can trigger truth maintenance processes and revision of other assertions. Ontopedia also keeps a history of assertions – “Who asserted What and When”.
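A minimal sketch of such an assertion history (the field names and the dependency map are my own illustration, not Ontopedia’s schema) simply records who asserted what and when, so that a later change can be traced and dependent assertions can be flagged for revision:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class Assertion:
        who: str        # which user or source made the assertion
        what: str       # the proposition
        value: bool     # asserted polarity
        when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    history: List[Assertion] = []
    # which assertions were inferred from which (hypothetical example data)
    dependents = {"capital(France)=Paris": ["located_in(Louvre, capital of France)"]}

    def assert_fact(who, what, value):
        history.append(Assertion(who, what, value))
        # Changing an assertion triggers truth maintenance: everything inferred
        # from it becomes a candidate for revision.
        return dependents.get(what, [])

    to_revise = assert_fact("user-42", "capital(France)=Paris", False)
    print(to_revise)                           # assertions to re-examine
    print(history[-1].who, history[-1].when)   # “Who asserted What and When”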

Minimization of Inconsistency

We try to create a system that can operate and reason within an inconsistent knowledge base. But that is not enough. We embed mechanisms that allow us to identify inconsistencies and help to resolve them. Knowledge conflicts are “first class objects” in Ontopedia and are organized by conflict type. Each knowledge conflict type has conditions that describe how conflicts of this type can be identified. There are also recommendations for resolving conflicts. In general, Ontopedia’s user community tries to minimize knowledge inconsistency, and the Ontopedia system “tries” to help users achieve this goal.
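A hypothetical sketch of a conflict type as a first-class object (the fields and the detection condition are illustrative, not Ontopedia’s actual model): each type bundles a detection predicate with a recommendation for resolution.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class ConflictType:
        name: str
        detect: Callable[[Dict[str, List[bool]]], List[str]]  # finds conflicting propositions
        recommendation: str

    def contradictory_values(kb):
        """A proposition asserted both true and false is a direct contradiction."""
        return [p for p, values in kb.items() if True in values and False in values]

    direct_contradiction = ConflictType(
        name="direct contradiction",
        detect=contradictory_values,
        recommendation="Ask the asserting sources to review their justifications.",
    )

    kb = {"P": [True, False], "Q": [True]}
    print(direct_contradiction.detect(kb))  # ['P'] – a first-class knowledge conflict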

Inference Transparency and Information Provenance

Inference is not an easy process, contradictions are tough, and knowledge evolution is challenging. We do not try to create an illusion that these things are easy and that a user can just “click a button” and magically get “all” inferred assertions, and we do not try to “virtualize” inference and hide it behind a query language, for example. We think of inference as a process that can be time and resource consuming and can include multiple steps. Ontopedia provides facilities to record the various steps of the inference process. Inference tracing is an important part of Information Provenance in Ontopedia. We keep track of “Who asserted What and When”, and we also keep track of “What was Inferred based on What and Why”.
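As an illustration (the record structure is hypothetical), an inference trace can be kept as a chain of steps, each pointing back at its premises and the rule or module that produced it, so that “What was Inferred based on What and Why” stays inspectable:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InferenceStep:
        conclusion: str
        premises: List[str]   # what it was inferred from
        rule: str             # why: the rule or module that produced it

    trace: List[InferenceStep] = []

    def record_inference(conclusion, premises, rule):
        trace.append(InferenceStep(conclusion, premises, rule))

    record_inference("Q", ["P", "P implies Q"], "modus ponens (module: rdfs-rules)")

    for step in trace:
        print(f"{step.conclusion} <- {step.premises} because {step.rule}")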

Multiple Inference Modules and Decision Procedure

RDFS inference rules are useful, and RDFS+ adds some new tricks. OWL 1.0 inference looks interesting in many contexts, and OWL 2.0 looks even better. What about Common Logic? What about Cyc-like inference?… Ontopedia’s system architecture supports various inference modules. Each module can generate proposals based on the current state of Ontopedia’s knowledge base. These proposals are recorded in the knowledge base, but they do not automatically become Ontopedia’s “visible” assertions. Different inference modules can produce conflicting proposals; that is OK. The proposals are considered by Ontopedia’s decision procedure, which calculates the “final” assertion that becomes “visible” on Ontopedia’s web site. The decision procedure can be invoked on any assertion at any time. Ontopedia’s knowledge base is not limited to “true” statements only; we use multi-valued truth, including “unknown”.
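A rough sketch of this pipeline (the module names and the unanimity rule are my own illustration, not Ontopedia’s decision procedure): each module proposes a value for an assertion, the proposals are recorded, and a separate decision step computes the value that becomes visible.

    from collections import defaultdict

    # proposals[assertion] -> list of (module, proposed value) pairs;
    # values are "true", "false", or "unknown" (multi-valued truth).
    proposals = defaultdict(list)

    def propose(module, assertion, value):
        proposals[assertion].append((module, value))

    def decide(assertion):
        """Decision step: unanimous proposals win, otherwise the value stays 'unknown'."""
        values = {v for _, v in proposals[assertion]}
        return values.pop() if len(values) == 1 else "unknown"

    propose("rdfs-module", "subclass(Dog, Animal)", "true")
    propose("owl-module", "subclass(Dog, Animal)", "true")
    propose("rdfs-module", "same_as(Paris, Paris-Texas)", "true")
    propose("cyc-like-module", "same_as(Paris, Paris-Texas)", "false")

    print(decide("subclass(Dog, Animal)"))        # true  -> becomes a visible assertion
    print(decide("same_as(Paris, Paris-Texas)"))  # unknown -> conflicting proposals stay recorded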

Loosely Coupled Inference, Decision Making, and Truth Maintenance

Ontopedia is a system that can “infer a little bit here”, “find some knowledge conflicts there”, “make some new decisions”, “infer a little bit more”, “review some decisions”, etc. All these activities can be performed in any order by various components.
All activities are recorded in the “activity log” (available as an RSS/ATOM feed). Various modules can explore the activity log and use it to manage their own activities and to cooperate with other modules. In general, modules can run in parallel in various areas of Ontopedia’s knowledge base. These activities can be directed by the user community through the user interface or by “controller” software components.
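A hypothetical sketch of this loose coupling (the entry fields and module names are invented for the example): modules append entries to a shared activity log, and other modules poll the log to decide what to do next, so no component needs to call another directly.

    from datetime import datetime, timezone

    activity_log = []   # shared, append-only; in Ontopedia exposed as an RSS/ATOM feed

    def log_activity(module, kind, subject):
        activity_log.append({
            "module": module, "kind": kind, "subject": subject,
            "at": datetime.now(timezone.utc),
        })

    # One module infers something and logs it ...
    log_activity("inference-module", "inferred", "subclass(Dog, Animal)")

    # ... another module later scans the log and reacts, without direct coupling.
    def conflict_checker():
        for entry in list(activity_log):
            if entry["kind"] == "inferred":
                log_activity("conflict-module", "checked-for-conflicts", entry["subject"])

    conflict_checker()
    print([(e["module"], e["kind"]) for e in activity_log])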

I developed this approach and the related system architecture in the late 80s and have used it successfully in various projects on relatively small data sets over the last 20 years. What is exciting about 2010? The availability of huge data sets on the Web. There is also experience in building the Social Web and a much better understanding of Collective Intelligence. In 1990 I could only envision that it would be possible to build large, stable, evolving, paraconsistent, open knowledge based systems. In 2010, I am pretty sure about it.

Paraconsistent Reasoning in Ontopedia

I did a short presentation about paraconsistent reasoning in Ontopedia at TMRA 2009.

Summary:

  • Paraconsistent reasoning allows us to collect assertions from various sources and “safely” infer new information
  • Paraconsistent reasoning works well together with constraint languages (such as TMCL)
  • Paraconsistent reasoning supports an evolutionary approach to building large assertion systems
  • We do not need to “fix all errors” before we can reason

Carl Hewitt – Actor model, OWL, knowledge inconsistency and paraconsistent logic

ITConversations recently published Jon Udell’s interview with Carl Hewitt. In this interview, “Interdependent Message-Passing ORGs”, Carl Hewitt shares his ideas about distributed computation, the Actor model, inconsistent knowledge, paraconsistent logic, and the Semantic Web.

Carl Hewitt’s work has been an inspiration to me for more than 20 years. Knowledge inconsistency is a fundamental reality of our life. When we build computer systems, we can ignore it and try to create artificial boundaries, artificial worlds with “guaranteed” knowledge consistency. The alternative approach is to accept from the beginning that we have to deal with inconsistency and to create systems that can represent inconsistent knowledge, reason within inconsistent knowledge bases, and utilize mechanisms which help to keep inconsistency “under control”.

I made the choice many years ago in favor of this alternative approach and have used it in building many computer systems over the years. Our recent project, the Ontopedia PSI server, is not an exception. The Ontopedia PSI server allows us to represent opinions from various sources, including contradictory opinions. Ontopedia’s reasoning engine is justification based (and, like everything in Ontopedia, a work in progress :), which means that the decision about each assertion is based on a comparison between various opinions and their justifications. Reasoning inside the Ontopedia PSI server is paraconsistent. The inference engine can find contradictory assertions in some areas of Ontopedia’s knowledge base. Local contradictions do not prevent the reasoning engine from inferring reasonable assertions in other areas of the knowledge base, and there is no “explosion of assertions”.

Reasoning in the Ontopedia PSI server is also “adaptive”. We anticipate that when various sources “see” the results of comparing various opinions and “see” the consequences of their statements several “steps ahead”, they may change their original opinions.

The Ontopedia PSI server actually “likes” contradictions. Contradictions are starting points for identifying errors, for negotiation, for improving knowledge models, and, as a result, for knowledge evolution.

Resources:

Interdependent Message-Passing ORGs, interview on ITConversations