
Combining Model Generation with Theorem Proving: Problems and Prospects

John Slaney and Timothy J. Surendonk

Automated Reasoning Project, Australian National University, Canberra 0200, Australia

{John.Slaney, [email protected]}

1 Proof search, model search and their interaction

1.1 Background

Traditional theorem provers are stupid. Faced with the problem of searching for a proof (that is, for a tree of formulae connecting the axioms of some theory to one of its theorems) they may reason forwards from the axioms or backwards from the theorem or both, but in either direction they rapidly find themselves in an exponentially growing search space of possible proof fragments, which they explore in a manner at once admirably industrious and remarkably dull. Much good work in automatic proof search has gone into the discovery and refinement of methods for controlling the explosion of the search space, or for reducing the amount of duplicated work undertaken in traversing it. At the same time, there has been vigorous research on heuristics for directing the search for proofs, but many of the most successful ideas in that field are either unexciting suggestions, such as exploring short formulae first, or else mysterious ones, such as deleting any formula with a previously proved theorem as a subformula, or preferring some operator to be nested to the left rather than to the right.

The root problem is that most powerful theorem provers work only locally, focussing on the specific formulae being transformed by an inference rather than on global aspects of the situation, and, more significantly, they are based on pure syntax.1 They may take into account questions such as how many function symbols a formula has and at what depth of nesting, which literal comes first in a clause, whether this unifies with that, and the like, but they do not consider what the inference step is supposed to achieve: whether it is establishing a general law for the structures under consideration or an accidental property of the case, or whether the conclusion of the inference is merely saying the same thing as one already proved. That is, traditional provers do not understand what they are doing.

1 This is not to deny a semantic basis for the usual rules and methods. The point is that at the level of the individual inference step everything is most easily characterised in syntactic terms.
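To make the flavour of this kind of search concrete, the following is a minimal sketch, in Python, of the standard given-clause saturation loop with a symbol-count weighting of the sort just described, under which shorter formulae are explored first. The functions infer and is_goal are placeholders for a prover's inference rules and goal test; they are assumptions of the sketch, not any particular system's API.

    import heapq

    def weight(formula):
        # Crude syntactic weighting of the kind the text describes:
        # count the symbols, so that shorter formulae are preferred.
        return len(formula.split())

    def saturate(axioms, is_goal, infer, max_steps=10_000):
        # Given-clause loop: repeatedly select the lightest unprocessed
        # formula, test it against the goal, and derive its consequences
        # with respect to everything kept so far.
        kept = set()
        unprocessed = [(weight(f), f) for f in axioms]
        heapq.heapify(unprocessed)
        for _ in range(max_steps):
            if not unprocessed:
                return None                        # search space exhausted
            _, given = heapq.heappop(unprocessed)  # lightest formula first
            if is_goal(given):
                return given                       # proof found
            if given in kept:
                continue                           # avoid duplicated work
            kept.add(given)
            for conclusion in infer(given, kept):
                heapq.heappush(unprocessed, (weight(conclusion), conclusion))
        return None                                # step limit reached

With infer returning, say, the resolvents of the given clause against the kept set, this is essentially the loop at the heart of saturation-based provers such as Otter. Note how purely local and syntactic the control is: nothing in the loop reflects what any formula means.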

Still less do they understand the problems they are attempting to solve. The contrast with human theorem provers is striking. When we reason, in mathematics or in any aspect of intelligent behaviour, we appeal constantly to conceptual structures within which particular problems make sense. We are not capable of exploring search spaces of millions of clauses, and it seems that somehow we do not need to. What we are able to use is some sense of when a proof search is getting closer to the goal. It seems evident that what lies behind this capacity is not so much an ability to recognise syntactic patterns in formulae (though, importantly, we have that too) as an understanding of the problem: we know what the symbols, theories, axioms, goals and subgoals mean, and this puts us at an advantage in the investigation.

These remarks may be taken as leading to a recommendation that researchers in automated reasoning direct their efforts towards mechanical emulation of human cognitive processes, in the attempt to build programs that reason in the same way we do. This, however, would be a mistake, certainly in the present state of development of the discipline. In the details of what they do, in what they find easy and what they find hard, computers are quite unlike us. Without becoming totally discouraged from projects in programming naturalistic intelligence, we should recognise that for practical purposes the indicated way ahead is to continue to let the machines be mechanical, doing what they do best at the high speeds of which they are capable.2 Algorithmic proof search is good for some things anyway; it no more needs pseudo-human cognition in order to out-infer a mathematician than a car needs legs to outrun an athlete.

So we do not know how to program understanding, and attempting to do so is likely to lead to inefficient systems anyway. What we may hope to do instead is to

2 And of course to facilitate the right kind of human-machine interactions. In order to secure the insights of a mathematician, it is more effective to plug in a mathematician at some appropriate point than to try to emulate one in an unsuitable medium like (present) software.