Proceedings of the 14th Annual Conference of the Cognitive Science Society, 1992,
pp.653-658. Hillsdale, N.J.: Erlbaum.
Taking connectionism seriously:
the vague promise of subsymbolism and an alternative.
Paul F.M.J. Verschure
AI lab, Institute for Informatics, University of Zürich,
Winterthurerstrasse 190, CH-8057 Zürich,
Switzerland.
e-mail: [email protected]
Abstract
Connectionism is drawing much attention as a
new paradigm for cognitive science. An important
objective of connectionism has become the
definition of a subsymbolic bridge between the
mind and the brain.
By analyzing an important example of this
subsymbolic approach, NETtalk, I will show that
this type of connectionism does not fulfil its
promises and merely applies new techniques within
a symbolic approach.
It is shown that connectionist models can only
become part of such a new approach when they are
embedded in an alternative conceptual framework
where the emphasis is placed not on what
knowledge a system must possess to be able to
accomplish a task, but on how a system can develop
this knowledge through its interaction with the
environment.
Introduction
Connectionism has been gaining much attention in
cognitive science. One of the reasons is that
problems of the traditional cognitivistic approach,
such as the need for noise and fault tolerance and
the capability to generalize, appear solvable with
connectionist, brain-like techniques.
This proposal makes the problem of complete
reduction (PCR) (Haugeland, 1978), or of how a
symbolic description of cognition can be reduced to
a non-symbolic one, again highly relevant.
In the traditional cognitivistic view cognition is
seen as formal symbol manipulation. The basic
steps of this approach can be defined as: "1.
Characterize the situation in terms of identifiable
objects with well defined properties. 2. Find general
rules that apply to situations in terms of those
objects and properties. 3. Apply the rules to the
situation of concern, drawing conclusions about
what should be done." (Winograd and Flores, 1986,
p. 15).
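The three steps quoted above can be illustrated with a minimal, hypothetical sketch; the domain (blocks on a table) and all names are invented for illustration and do not come from Winograd and Flores:

```python
# Step 1: characterize the situation as identifiable objects
# with well-defined properties.
situation = {
    "block_a": {"color": "red", "clear": True},
    "block_b": {"color": "blue", "clear": False},
}

# Step 2: state general rules that apply to situations in terms
# of those objects and properties.
def rule_pick_up(obj, props):
    # A block may be picked up only when nothing rests on top of it.
    return "pick up " + obj if props["clear"] else None

rules = [rule_pick_up]

# Step 3: apply the rules to the situation of concern, drawing
# conclusions about what should be done.
conclusions = [
    action
    for obj, props in situation.items()
    for rule in rules
    if (action := rule(obj, props)) is not None
]
print(conclusions)  # ['pick up block_a']
```

The point of the sketch is that every step presupposes a prior, symbolic carving of the world into objects, properties, and rules; nothing in the procedure itself says where that carving comes from.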
The physical symbol system hypothesis (Newell,
1980) can be taken as the most influential
formulation of this approach. The hypothesis states
that a physical symbol system (PSS) constitutes the necessary and sufficient conditions for general intelligence. A PSS consists of a set of actions and is embedded in a world that consists of discrete states: objects and their relations. Moreover, a PSS has a "body of knowledge" that specifies the relations between the events in the world and the actions of the system; we can also refer to this body of knowledge as a world model built up with symbolic representations. The actions of the system, whether in the world or internal inferences, are organized around the goals of the system according to the principle of rationality: roughly, a system will use its knowledge to reach its goals.

An important implication of this conceptualization of cognition is that it can (and must) be modelled at the abstract level of symbol manipulation. The specifics of the implementation are, therefore, of no importance. PCR is no longer an issue, since the non-symbolic level of brain dynamics is not taken to be very relevant in explaining cognition.

The physical symbol system hypothesis is often seen as the only plausible model for general intelligence, one that has no serious competitors (e.g. Pylyshyn, 1989). Despite this claim, the paradigm also confronts some serious problems. One of these problems is the symbol grounding problem (Harnad, 1990), or the question of how symbols acquire their meaning. In the cognitivistic tradition the meaning of symbols is taken as given (Newell, 1981), which implies that cognitivism has to resort to a nativistic position: the "body of knowledge" is simply present from the start. Moreover, one has to assume that the system possesses very reliable transduction functions that allow the coupling between events and objects in the world and their internal symbolic representations. These assumptions have been criticized on several grounds.
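Before turning to these criticisms, the PSS formulation can be made concrete with a deliberately minimal sketch. The world states, goals, and rule contents below are invented for illustration and are not part of Newell's formulation; the point is only the shape of the architecture: discrete states, a symbolic body of knowledge, and goal-directed action selection.

```python
# Discrete world states and a goal, as symbols.
world_state = "door_closed"
goal = "be_outside"

# The "body of knowledge": symbolic relations between events in the
# world and the actions of the system (a world model built up with
# symbolic representations).
body_of_knowledge = {
    ("door_closed", "be_outside"): "open_door",
    ("door_open", "be_outside"): "walk_through",
}

def act(state, goal):
    # Principle of rationality (roughly): the system uses its
    # knowledge to select the action that serves its goal.
    return body_of_knowledge.get((state, goal))

print(act(world_state, goal))  # open_door
```

Note that the sketch simply presupposes the body of knowledge as a given table, which is exactly the nativistic assumption, and the criticisms that follow, target.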
For instance, the genome does not have the coding capacity to represent this body of knowledge (Edelman, 1987), and it still needs to be explained how this "body of knowledge" could have been acquired during evolution (Piaget in Piattelli-Palmarini, 1980). Moreover, practical applications developed within this paradigm, for instance robot control architectures, have not been