Volume 5: No. 17

Competitions and awards
Business news
Careers and education
Job opportunities
Journals and e-journals
Discussion groups
AI conjectures
Science and art on the net
Computists' news

Louis Savain and Curtis L. May II, founders of Marengo Media, Inc. (South Pasadena, CA), are proposing a machine-learning architecture based on AND and XOR nodes. Connections (or "assumptions") in a 3D [systolic?] lattice space would be made at random, creating logical transformations of sensed/input values. A semi-local process would watch the connections for logical contradictions: AND inputs that are repeatedly unequal, or XOR inputs that are repeatedly equal. (Equivalently, it would watch for any node output that is always zero. I assume that logical tautologies would also be pruned.) The monitor would then break one or the other of the connections -- usually the newer one -- or a contributing connection "upstream." Other connections would grow in permanence (but not "weight"). From this simple basis, Savain and May believe that all aspects of intelligent behavior can be derived. To encourage research in AND/XOR intelligent systems (AXIS), they are planning a $1,000 prize for the fastest-learning tic-tac-toe program. Chess and Go contests are also being considered. Tune in to comp.ai if you'd like to discuss the approach. [comp.ai, 5/3/95.] (By minimizing the number of dead nodes in a "brain," such pruning would have a good chance of computing "interesting" functions of the input space. This seems to relate to the "hedonistic neuron" discussion a few years back. But how would task-specific output nodes be selected? Feedback training is needed, at least as a final layer.)
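(The pruning rule itself is easy to sketch. Here is my own toy reconstruction in Python -- not Savain and May's design -- with the lattice flattened to a single layer of nodes over the raw inputs; the node count, contradiction threshold, and rewire-the-newer-connection policy are all invented for illustration.)

    import random

    random.seed(0)
    N_INPUTS, N_NODES, THRESHOLD, STEPS = 8, 32, 20, 500

    class Node:
        def __init__(self, kind, a, b):
            self.kind = kind          # 'AND' or 'XOR'
            self.a, self.b = a, b     # indices of the node's two input lines
            self.contradictions = 0   # evidence the output is stuck at zero

        def fire(self, signals):
            x, y = signals[self.a], signals[self.b]
            # A contradiction: AND inputs that disagree, or XOR inputs
            # that agree -- either pattern pins the output at zero.
            if self.kind == 'AND' and x != y:
                self.contradictions += 1
            elif self.kind == 'XOR' and x == y:
                self.contradictions += 1
            return (x & y) if self.kind == 'AND' else (x ^ y)

    nodes = [Node(random.choice(['AND', 'XOR']),
                  random.randrange(N_INPUTS), random.randrange(N_INPUTS))
             for _ in range(N_NODES)]

    broken = 0
    for _ in range(STEPS):
        signals = [random.randint(0, 1) for _ in range(N_INPUTS)]
        for n in nodes:
            n.fire(signals)
            if n.contradictions > THRESHOLD:
                # The monitor breaks the (assumed newer) second connection
                # and lets a fresh random connection grow in its place.
                n.b = random.randrange(N_INPUTS)
                n.contradictions = 0
                broken += 1

    print(broken, "contradictory connections broken over", STEPS, "steps")

(In a real lattice, nodes would also feed other nodes and surviving connections would accumulate permanence; the sketch shows only the contradiction monitor.)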

An alternative theory is the 'SP' conjecture: that all kinds of computing and formal reasoning may usefully be understood as information compression by pattern matching, unification, and metrics-guided search. The conjecture is related to algorithmic information theory (AIT) and minimum-length encoding (MLE). See for a summary of recent research. The theory and a model implementation have been described by Gerry Wolff in "New Generation Computing," V13, pp. 187-214 and 215-241. [comp.ai.fuzzy, 5/3/95.]
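(A crude instance of the idea -- mine, not Wolff's SP model -- is greedy pair unification: repeated patterns are found by matching, each is unified into one stored rule, and a net-saving metric on description length guides and halts the search.)

    from collections import Counter

    def compress(seq, max_rules=10):
        """Greedily unify the most frequent adjacent pair of symbols
        into a new symbol, repeating while the total description
        (sequence plus stored rules) keeps shrinking."""
        rules = {}
        next_sym = 256          # codes above the byte range name patterns
        for _ in range(max_rules):
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            pair, count = pairs.most_common(1)[0]
            # Metrics-guided search: each replacement saves one symbol
            # per occurrence, but the rule costs two symbols to store.
            if count - 2 <= 0:
                break
            rules[next_sym] = pair
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                    out.append(next_sym)   # one copy stands for all
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            seq = out
            next_sym += 1
        return seq, rules

    text = b"the cat sat on the mat and the cat saw the rat"
    seq, rules = compress(list(text))
    print(len(text), "symbols in;", len(seq) + 2 * len(rules),
          "symbols out;", len(rules), "rules")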

At a higher level, Gene Levinson is leading the SMPLMIND project "to identify questions of interest that would take too long to answer, or that are unanswerable, by wet-lab biology." In SMPLMIND, words and grammar are nodes that can be connected into sentences and higher-level syntactic structures. The SMPLMIND asks questions of the user, as a child would ask an adult. It also "thinks" on its own, continuously, and records all "thoughts" explicitly. "Even if it doesn't act like a simple mind, it will tell us why our assumptions were not valid." Levinson would welcome discussion on comp.ai.alife, sci.cognitive, or comp.ai. [comp.ai.alife, 5/5/95.]
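(The description suggests a structure something like the following speculative Python sketch. It is my guess, not SMPLMIND code; the lexicon, the single noun-verb pattern, and the prompting rule are invented.)

    import random

    random.seed(1)

    lexicon = {                   # word nodes, grouped by grammar node
        'Noun': ['dog', 'child'],
        'Verb': ['runs', 'sleeps'],
    }
    patterns = [('Noun', 'Verb')]  # a grammar node: one sentence shape

    thoughts = []                  # every thought is recorded explicitly

    def think():
        """Link word nodes through a grammar node into a sentence; if a
        category has no words yet, ask the user, as a child would."""
        words = []
        for category in random.choice(patterns):
            if not lexicon.get(category):
                lexicon[category] = [input("Give me a %s: " % category)]
            words.append(random.choice(lexicon[category]))
        thoughts.append(' '.join(words))
        return thoughts[-1]

    for _ in range(3):
        print("thought:", think())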

Researchers at the Brookings Institution have been using alife simulations to study economic principles. Their Computerrarium tracks "societies" of up to 1,000 individuals. "If we make the agents less like Homo economicus [the "rational" agents in economics textbooks] and more like Homo sapiens, important laissez-faire assumptions of standard economic theory do not hold up very well." [Tampa Tribune, 5/5/95, BayLife 3. EDUPAGE.]
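(The flavor of such models can be conveyed by a toy bilateral-exchange simulation -- my own, not the Computerrarium. Agents with Cobb-Douglas tastes meet at random and haggle a local price by a geometric-mean rule of my choosing; there is no auctioneer, and realized prices stay dispersed instead of converging to the single price textbook theory predicts.)

    import random
    from math import sqrt

    random.seed(2)

    def utility(x, y):
        return sqrt(x * y)        # Cobb-Douglas preferences, two goods

    agents = [{'x': random.uniform(1, 10), 'y': random.uniform(1, 10)}
              for _ in range(100)]

    prices = []
    for _ in range(5000):
        a, b = random.sample(agents, 2)
        mrs_a, mrs_b = a['y'] / a['x'], b['y'] / b['x']
        if mrs_a < mrs_b:
            a, b = b, a           # let a be the agent who values x more
            mrs_a, mrs_b = mrs_b, mrs_a
        p = sqrt(mrs_a * mrs_b)   # haggled price: geometric mean of MRSs
        q = 0.1                   # a buys q units of x from b for q*p of y
        if b['x'] > q and a['y'] > q * p:
            # No market clearing: trade only if it myopically helps both.
            if (utility(a['x'] + q, a['y'] - q * p) > utility(a['x'], a['y'])
                    and utility(b['x'] - q, b['y'] + q * p)
                        > utility(b['x'], b['y'])):
                a['x'] += q; a['y'] -= q * p
                b['x'] -= q; b['y'] += q * p
                prices.append(p)

    mean = sum(prices) / len(prices)
    sd = (sum((p - mean) ** 2 for p in prices) / len(prices)) ** 0.5
    print(len(prices), "trades; mean price", round(mean, 2),
          "; std dev", round(sd, 2))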

Much of AI is computational ontology: slicing the world into parts in a way that makes a computational difference. And the main critique of definitions is "What difference does it make?" -- Charles Petrie, 4/26/95, DAI-List.