Abduction: The missing puzzle of AI (Part 2)

rohola zandie
Feb 25, 2021 · 16 min read

Keywords: DEFEASIBLE, PLAUSIBLE, AND PRESUMPTIVE REASONING

Defeasible arguments are ones that can be acceptable at the moment even though in the future they may be open to defeat. New evidence may come in later that defeats the argument.

The canonical example of a defeasible argument, used so often in AI, is the Tweety argument:

THE TWEETY ARGUMENT

Birds fly.

Tweety is a bird.

Therefore Tweety flies.

The Tweety argument may be rationally acceptable assuming that we have no information about Tweety except that he is a bird. But suppose new information comes in telling us that Tweety is a penguin. A penguin is a bird, but it cannot fly.

The first premise of the Tweety argument is not a universal generalization of the absolute kind that can be rendered by the universal quantifier of deductive logic. It is not really an inductive generalization, either. It states that birds normally fly or that one can normally expect a bird to fly, subject to exceptions.

Not all possible exceptions can be predicted in advance. Thus a defeasible argument is open-ended, whereas a deductively valid argument is closed in that it necessarily implies its conclusion. Deductive logic is monotonic, which means that new facts or knowledge will not change the conclusion of a valid deductive inference. Defeasible reasoning, on the other hand, is non-monotonic: given new facts, the conclusions can change.
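To make the non-monotonic flavor concrete, here is a minimal Python sketch (my own illustration, not from the original text): the `flies` function and the way exceptions are encoded are hypothetical, but the behavior mirrors the Tweety case, where a rationally acceptable conclusion gets retracted once new evidence arrives.

```python
# A toy sketch of defeasible (non-monotonic) inference for the Tweety case.
# The "birds fly" rule is a default that later evidence can defeat.

def flies(animal, facts):
    """Return the current, defeasible conclusion about whether `animal` flies."""
    if ("penguin", animal) in facts:   # known exception defeats the default
        return False
    if ("bird", animal) in facts:      # default rule: birds normally fly
        return True
    return None                        # no basis for a conclusion either way

facts = {("bird", "Tweety")}
print(flies("Tweety", facts))          # True  -- rationally acceptable for now

facts.add(("penguin", "Tweety"))       # new information comes in
print(flies("Tweety", facts))          # False -- the earlier conclusion is withdrawn
```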

A presumption, then, is something you move ahead with, for practical purposes, even though it is not known to be true at the present time. It is a kind of useful assumption that can be justified on practical grounds in order to take action, for example, even though the evidence to support it may be insufficient or inconclusive.

The practical justification of presumptive reasoning, despite its uncertain and inconclusive nature, is that it moves a dialogue forward part way to drawing a final conclusion, even in the absence of such evidence at a given point. Because of its dependence on use in a context of dialogue, it is different in nature from either deductive or inductive inference.

Presumptive inference is easily confused with abductive inference, and the two often tend to be seen as either the same thing or very closely related. The notion of presumptive inference tends to be more prominent in writings on legal argumentation, whereas the term “abductive inference” is much more commonly used in describing scientific argumentation and in computer science. Both types of inference are provisional in nature. Both are also hypothetical and have to do with the reasoning that moves forward in the absence of complete evidence.

Presumption is best understood dialectically, by seeing how it operates in a dialogue by reversing the obligation to prove. Abduction, as indicated by the analysis above, is also best understood as a dialogue sequence with several distinctive steps. The first step is the existence of a given set of facts (or presumed facts) in a given case. A why question or a how question is then asked about this fact. In other words, an explanation for this fact is requested by one participant in the dialogue. Then the other participant answers the question by offering an explanation. Through a series of questions and answers, several alternative explanations are elicited. There is then an evaluation of these explanations, and the best one is selected.
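The step sequence described above can be sketched schematically in a few lines of Python. This is only an illustration of the dialogue shape of abduction; the example fact, the candidate explanations, and their plausibility scores are invented for the sake of the example.

```python
# A schematic sketch of abduction as a dialogue sequence: a fact is questioned,
# alternative explanations are elicited, and the best one is selected.
# The fact, the candidate explanations, and the scores are invented.

fact = "The grass is wet this morning."

# Steps 2-3: the why question is asked, and explanations are offered in reply.
candidate_explanations = {
    "It rained during the night.": 0.7,
    "The sprinkler ran at dawn.": 0.5,
    "Someone spilled a bucket of water.": 0.1,
}

# Step 4: evaluate the alternatives and select the most plausible one.
best = max(candidate_explanations, key=candidate_explanations.get)
print(f"Why is it the case that: {fact}")
print(f"Best available explanation: {best}")
```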

Argumentation schemes represent a different standard of rationality from the one represented by deductive and inductive argument forms. This third class of presumptive (or abductive) arguments results only in plausibility, meaning that if the premises seem to be true, then one is justified in inferring that the conclusion also seems to be true. But seeming to be true can be misleading. You can go wrong with these kinds of arguments. For example, if an expert says that a particular statement is true, but you have direct empirical evidence that it is false, you had better suspend judgment. Or, if you have to act on a presumption one way or the other, go with the empirical evidence. But a presumptive argument based on an argumentation scheme should always be evaluated in the context of the dialogue of which it is a part. When the dialogue has reached the closing stage, and the argumentation in it is complete, only then can an evaluator reach a firm determination on what plausibility the argument has. And this evaluation must always and only be seen as relative to the dialogue as a whole. Typically, one individual argument has only a small weight of plausibility in itself. The significance of the argument is only that it can be combined with a lot of other relevant plausibilistic arguments used in the case. The important factor is the combined mass of evidence in the case. The case will have two sides, and there will be a mass of evidence on both of them. The final outcome of the case should be determined by how the mass of evidence on both sides tilts the burden of proof allocated at the initial stages of the dialogue.

A Dialogue Model of Explanation

Lipton (1991, p. 2) wondered why inference to the best explanation has been so little developed as a theoretical model of reasoning, given its evident importance and popularity in so many fields. He suggested (p. 2) the following reason: “The model is an attempt to account for inference in terms of explanation, but our understanding of explanation is so patchy that the model seems to account for the obscure in terms of the equally obscure.” Hintikka (1998, p. 507) commented that most people who speak of “inference to the best explanation” seem to think they know what an explanation is, but “in reality, the nature of explanation is scarcely any clearer than the nature of abduction.”

MODELS OF SCIENTIFIC EXPLANATION

Hempel (1965, p. 174) schematized his model of explanation as a deductive inference based on three variables. C1, …, Ck are conditions called “statements of particular occurrences.” For example, they could represent positions and movements of celestial bodies such as stars. L1, …, Lk represent general laws. Hempel wrote that, for example, they could be laws of Newtonian mechanics. E is a sentence stating what is to be explained. Thus E represents the so-called explanandum, or thing to be explained. Hempel presented the following schema to represent the form of an explanation.

C1, …, Ck

L1, …, Lk

— — — — — — — -

E

The two components above the line (the particular conditions and the general laws) together constitute the explanans, the part that does the explaining. Hempel’s model has also been called the covering law model of explanation.

This metal was heated.

All metals expand when heated.

— — — — — — —

This metal expanded.
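Read as code, the covering-law schema says that the explanans (the conditions C1, …, Ck together with the laws L1, …, Lk) should deductively entail the explanandum E. The following Python sketch is my own propositional simplification of that idea; the `entails` helper and the string encoding of the metal example are illustrative, not part of Hempel’s formulation.

```python
# A propositional sketch of Hempel's schema: the conditions C plus the laws L
# should deductively entail the explanandum E. Laws are encoded as if-then rules.

conditions = {"this metal was heated"}                        # C1, ..., Ck
laws = [({"this metal was heated"}, "this metal expanded")]   # L1, ..., Lk
explanandum = "this metal expanded"                           # E

def entails(conditions, laws, goal):
    """Forward-chain over the laws and check whether the goal is derivable."""
    derived = set(conditions)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in laws:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return goal in derived

print(entails(conditions, laws, explanandum))  # True: the explanans entails E
```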

Thus Dieks and de Regt argue that although reduction may sometimes lead to greater understanding, it does not always do so. In their view, reducing something to a deeper level can sometimes make things more complicated and harder to understand (p. 57). They concluded that when scientists probe into deeper layers of reality, their aim is not to achieve greater understanding but to find theories of greater generality. Thus although reduction is important as one way of operationally defining scientific understanding in some way that can be made precise, by itself it does not seem to be sufficient to yield an understanding-based model of scientific explanation.

Even so, current developments based on such a user-based approach suggest that explanation should be seen as interactive and cooperative, with the system producing explanations based on what it takes the viewpoint of the user to be. Dialogue systems for explanation have in fact been produced (Cawsey, 1992) based on this assumption. Such systems must be able to respond to feedback from the user and must also be based on some way of estimating how the user understands, or fails to understand, an explanation that was offered. In short, it looks very much like recent research in AI is moving beyond the older idea of an explanation as simply a sequence of reasoning from a set of laws or even from a system’s knowledge base. It seems to be moving toward a richer notion of explanation as an interactive process between two agents or parties in a dialogue, an explainer and a user, where the explainer must have some understanding of what the user understands.

Schank (1986, p. 6) clarified these matters by his insightful remark that understanding needs to be understood as a spectrum. At one end is complete empathy of the kind found between twins, close siblings, or old friends. At the other end is a minimal form of understanding that Schank called making sense. Schank (1986, p. 6) defined it as “the point where events that occur in the world can be interpreted by the understander in terms of a coherent (though probably incomplete) picture of how those events came to pass.” Understanding between agents in this minimal sense is made possible because agents share certain ways of acting and thinking in relation to kinds of situations with which they are both familiar.

To understand what understanding is in ordinary conversational discourse, we need to realize that things in everyday life work in fairly predictable patterns that are familiar to all of us from habit and common experience. For example, if you ask me why I broke my favorite sunglasses, I could explain by replying that I accidentally dropped them. You can understand this explanation quite well because you are familiar with dropping objects accidentally and you know exactly how they land on a hard surface and are broken or damaged unintentionally. Thus my explanation answers your question by helping you to understand that I did not break the sunglasses deliberately but did so accidentally.

Witness testimony in court often takes the form of an explanation. The examiner asks the witness a question. Sometimes the question requires a simple yes or no or a factual answer. But sometimes the answer given by a witness takes the form of a “story” or connected account that represents the witness’s account of what happened as the person saw it. The examiner can then question parts of the account found to be incomplete or implausible. This notion of the “story” has been put forward by Wagenaar, van Koppen, and Crombag (1993) in their theory of anchored narratives. An anchored narrative is an account of something that allegedly happened that is subject to questioning. If doubts are raised by questions asked, the proponent of the account can then support the account by giving reasons or “anchors” that ground the account in some independent facts or considerations that support it.

Work in AI has also been built around the idea that much common-sense reasoning is based on unstated assumptions in a text of discourse that can be added in to fill gaps by someone presented with the text. A script, in the sense of the word used in AI (Schank and Abelson, 1977), is a body of commonsense knowledge that enables a language user to understand how things typically happen in stereotypical situations. This knowledge enables a language user to fill in what is not explicitly stated in a given text of discourse.

  1. John went to a restaurant.
  2. The hostess seated John.
  3. The waitress gave John a menu.
  4. John ordered a lobster.
  5. He was served quickly.
  6. He left a large tip.
  7. He left the restaurant.

When presented with this set of statements, anyone can create an account or “story” out of it, filling in various gaps by drawing plausible inferences. For example, it is plausible to assume that lobster was listed on the menu. It is plausible to assume that John ate the lobster. It is plausible to assume that John paid something for the meal after he ate it and before he left the restaurant. None of these statements was explicitly made, but we can plausibly fill them in because they represent the normal ways of doing things when one goes to a restaurant. They are familiar routines because they are part of a normal sequence of actions.
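A script of this kind can be sketched as a simple default-filling procedure. The event names below and the rule “insert any scripted event the story leaves unstated” are my own simplification of the idea in Schank and Abelson (1977), not an implementation of their system.

```python
# An illustrative sketch of script-based gap filling: scripted events that the
# story never mentions are inserted by default, since nothing contradicts them.

restaurant_script = [
    "enter restaurant", "be seated", "read menu", "order food",
    "eat food", "pay bill", "leave tip", "leave restaurant",
]

story = ["enter restaurant", "be seated", "order food",
         "leave tip", "leave restaurant"]

inferred = [event for event in restaurant_script if event not in story]
print("Stated events:   ", story)
print("Filled by script:", inferred)   # ['read menu', 'eat food', 'pay bill']
```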

In both the Margie case and the restaurant case, the statements inserted to fill gaps in the account might be false. But as noted above in the Margie case, they are inserted by default. If there is no reason given to think they are false, then we can infer that they are plausibly meant to be part of the story. You could say that they are suggested by the given statements, along with the script or background knowledge representing normal routines in the typical case of the kind given. Such inferences could be classified under the heading of what was called “implicature” by Grice (1975). An implicature is an inference based on contextual presumptions drawn by one party in a conversation from assumptions about the collaborative goal of the conversation. According to the “Cooperative Principle,” a participant in a conversation is expected to make a contribution “such as is required at the stage at which it occurs, by the accepted purpose of the talk exchange in which you are engaged” (Grice, 1975, p. 67).

So what we have is a dialogue. The sequence of questions and replies takes the form of a story that is gradually presented as prompted by the questions asked. Each answer fits in with the previous ones, and a connected or coherent account is produced through the dialogue exchanges. The same kind of dialogue framework applies to explanations generally. And the same ideas of scripts and conversational implicatures can be applied to the study of explanations as conversational exchanges.

THE DIALOGUE MODEL OF EXPLANATION

Van Fraassen’s pragmatic theory of explanation (1980, pp. 97–157) is based on a theory of why questions and how such questions are used in dialogue. His theory seems to go beyond the positivistic theories that have dominated the scene for so long in analytical philosophy. According to his theory, a why question asking for an explanation is always based on a contrast class. For example, the question, “Why did the sample burn green?” should be taken to pose the contrastive question, “Why did the sample burn green as opposed to some other color?”
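One minimal way to represent such a contrastive why question is as a topic paired with its contrast class. The `WhyQuestion` dataclass below is a hypothetical illustration of van Fraassen’s idea, not notation he uses.

```python
# A minimal representation of a contrastive why question: the topic together
# with the contrast class it is implicitly opposed to.

from dataclasses import dataclass

@dataclass
class WhyQuestion:
    topic: str            # what actually happened
    contrast_class: list  # the alternatives it is contrasted with

q = WhyQuestion(
    topic="The sample burned green.",
    contrast_class=["The sample burned yellow.", "The sample burned red."],
)
print(f"Why '{q.topic}' rather than: {q.contrast_class}")
```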

Much of the problem with explanations lies in understanding the question and trying to figure out, in light of what the questioner already knows or understands, what the questioner seeks to understand. The problem with analyzing any given explanation is to situate it in a context of dialogue.

Moore (1995, p. 1) outlined six main characteristics of the dialogue model of explanation. First, explanation takes the form of a sequence of moves between two parties in such a dialogue. It is “an inherently incremental and interactive process, requiring a dialogue between the advice giver and the advice seeker.” It is incremental in the sense that each exchange in the dialogue sequence is built on previous exchanges in the sequence. Second, an explanation evolves as new information comes in during a sequence of dialogue. The new information facilitates understanding and learning during the dialogue process. Third, each party to an explanation must understand ways of thinking and beliefs shared by both parties. The process of explanation requires each party to make assumptions about the other party’s beliefs, plans, and goals. Fourth, understanding of an explanation can be tested through feedback (Moore, 1991). The respondent, or advice seeker, can indicate verbally by responses whether an explanation offered by the proponent, or advice giver, has been understood correctly or not. Fifth, when the proponent presents an explanation to the respondent, the proponent expects the respondent “to ask further questions, request clarification, or provide some kind of indication when something is not understood.” Thus the respondent also has obligations to meet in order to help make an explanation successful.

Sixth, what constitutes a successful explanation is defined in dialogue terms. A successful explanation is reached through questioning in response to which the proponent continues to supply information or clarification until the respondent is satisfied. The dialogue model views an explanation as going through various stages of being asked for, being presented, being questioned, being improved, and so forth, as the dialogue proceeds. Thus both the notions of an explanation attempt and of a successful explanation are defined in relation to the stages of such a sequence and the moves made by both parties at a given stage of the dialogue.
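The incremental, feedback-driven character of this model can be sketched as a small loop in which the explainer keeps refining the explanation until the seeker signals understanding. The canned answers and the stand-in for user feedback below are purely illustrative assumptions, not part of Moore’s system.

```python
# A schematic loop for the dialogue model of explanation: the explainer keeps
# offering refinements until the advice seeker signals understanding.
# The canned answers and the feedback stand-in are purely illustrative.

explanations = [
    "The program crashed because it ran out of memory.",
    "It ran out of memory because the whole input file was loaded at once.",
    "Loading the file in chunks keeps memory bounded, so the crash disappears.",
]

def explain_interactively(answers, satisfied_after=3):
    """Offer successive refinements until the seeker reports understanding."""
    for turn, answer in enumerate(answers, start=1):
        print(f"Explainer: {answer}")
        understood = turn >= satisfied_after   # stand-in for real user feedback
        if understood:
            print("Seeker: I understand now, thanks.")
            return True
        print("Seeker: Can you say more about why?")
    return False

explain_interactively(explanations)
```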

A participant in a dialogue begins with a designated commitment set, and that set then expands or contracts according to the rules of a dialogue and according to how the participant makes moves governed by these rules. That is the dialectical framework that has been used to analyze and evaluate argumentation so far.

What is now being added to Hamblin’s dialogue framework is that the arguer who has a commitment set is seen as an agent. But when an agent has a set of commitments, this set will have a certain structure that helps the agent to organize the individual’s thinking and make it instrumentally useful. Suppose the agent is capable of practical reasoning. The agent’s commitment set will be composed of statements that can be linked to each other by means of practical reasoning. For example, one statement might represent an outcome that can be brought about by making another statement in the set true. Or to put it another way, one item in the agent’s commitment set might be a means to an end represented by another item in the commitment set. Suppose I am committed to making a copy of this sheet of paper. I may know that the only way to do this is to push a button on the photocopy machine. Therefore, by practical reasoning, I am also committed to pushing the button on the photocopy machine. One commitment leads to another and is connected to another in an agent’s commitment set just because the agent is an agent and is assumed to be capable of practical reasoning. Because there will be many such connections in an agent’s commitment set, even in the most ordinary and simple case of everyday reasoning and thinking, the commitment set will have a structure. It will be more than just a set of statements. It will be a connected set of them joined to each other by inference links. It will be a web or interlocked set of statements knit together by threads of inference. So if an agent is committed to one particular statement, that agent will automatically be committed to many neighboring ones as well, unless the person retracts commitment to each of them individually.
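The photocopy example can be sketched as a small commitment network in which commitment to an end propagates, via practical reasoning, to the means. The dictionary of means-end links below and the propagation loop are my own illustration of the structure being described, not a formal dialectical system.

```python
# A sketch of a structured commitment set: statements joined by inference links,
# so commitment to an end carries commitment to the means (practical reasoning).
# The statements and links illustrate the photocopy example from the text.

commitments = {"make a copy of this sheet"}

# "X is a means to Y": an agent committed to Y becomes committed to X as well.
means_to_end = {
    "push the button on the photocopier": "make a copy of this sheet",
    "walk over to the photocopier": "push the button on the photocopier",
}

changed = True
while changed:                        # propagate commitments along the links
    changed = False
    for means, end in means_to_end.items():
        if end in commitments and means not in commitments:
            commitments.add(means)
            changed = True

print(sorted(commitments))
```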

An agent in a dialogue will have a set of commitments that lock together into a structure formed by logical inferences connecting each commitment to other commitments. The whole network hangs together. It does not have to be logically consistent. The final step is to see that this structure, imposed by the network of an agent’s commitment set in a dialogue, can be used to define what can be called the agent’s “understanding” of the issue or subject being discussed in the dialogue. Understanding is not meant here in a purely psychological sense, but rather in the normative sense of rational understanding. It is a kind of understanding that, ideally, should be shared by the proponent and the respondent in a dialogue. When the two parties fail to have a satisfactory mutual understanding of something they are discussing, there is puzzlement or a lack of understanding that makes a request for an explanation appropriate. To be successful, the explanation must answer the question by removing the inconsistency, or the failure of the commitment set to make sense to the questioner.

It may seem strange at first to think of scientific argumentation as taking place within a so-called framework of dialogue. The term “dialogue” suggests a conversation between two persons, and it sounds subjective and personal. Scientific reasoning, most of us think, is objective and impersonal. Scientific theories, hypotheses, and results are supposedly objective. They are propositions that are true or false and are proved to be so by factual investigations that eliminate the personal element. What matters are the “facts,” and the theories that explain the facts are expressed in objective mathematical equations and quantitative laws. But it has to be remembered that the participants in a dialogue, according to the view of argumentation as dialectical, need not be actual human beings. They are agents, entities that can make moves in dialogues and that have commitments inserted into or removed from commitment sets based on these moves. Agents can be software entities, for example. They do have the capacity for making speech acts of well-defined kinds and thereby for communicating with other agents. But they can only do so in a way that is structured by the rules for the type of dialogue in which they are taking part. Thus it is possible, and indeed very useful, to think of various kinds of scientific argumentation not only as logical reasoning but also as logical reasoning used for some purpose in a framework of dialogue representing some kind of scientific activity. The problem is that there are many philosophies of science, ranging from positivism and foundationalism, which see scientific argumentation as rigidly structured, to other views, such as those of Kuhn and Feyerabend, that see it as looser, more like a persuasion dialogue.

The problem is that deductive and inductive logic have been taken in the past to be the models of scientific argumentation. These are context-free models of rational argument that do not seem to require the study of any pragmatic framework of argumentation use. It is only more recently that the advent of interest in abduction has made Peirce’s pragmatic approach begin to seem a plausible contender against these earlier theories.

Scientific investigation can be seen dialectically in two main ways, using the nature model or the community model. In the nature model, the scientific community can be seen as engaging in an information-seeking dialogue with nature. The scientific community asks questions, and nature provides information in answer to the questions. In the community model, the investigator is seen as an agent who engages in dialogue with other members of the scientific community. Thus the investigator is seen as the proponent of an argument who presents a new hypothesis, theory, or argument. The other scientists who take part in the dialogue express doubts about this argument. In order to reply to these doubts, the proponent has to present evidence of a kind that can properly be used to support the claim. Thus the scientific community will be divided into two camps in any given case, according to this model.

A typical problem in examination dialogue is that one party cannot understand why the other fails to understand something because there is too much of a gap between the background, commitments, and shared understanding of the two parties. Explanation by transfer of understanding may be impossible if the one party has an insufficient basis of shared knowledge to simulate the thinking in the other.

What is shown is that understanding is a reflexive notion. One party in a dialogue must try to understand the understanding of the other party. That party must also try to understand what the other party fails to understand and why it is not understood.

The lesson I would draw is that it is necessary to have a dialogue theory of explanation in which these phenomena can be dealt with by means of a dialogue between the two parties.

rohola zandie

I am a PhD student in NLP and dialog systems. I am curious about mathematics, machine learning, philosophy, and languages.