Interactive Automaton - a formalism for neurocomputing


Present-day computers are "universal". Yet they still have difficulties with tasks that are elementary for humans and other living beings. Maybe some basic principles are missing? What does "universal" mean, and what does "computing" mean? Two major concepts are usually mentioned at the foundation: the Turing Machine and the Finite-State Automaton. A Turing Machine is a rather specialized construct; its application to other domains is possible only by analogy. A Finite-State Automaton is more general and mathematically formulated, but is said to have less computational power. Extensive research has been done on the issue of their equivalence. The general conclusion is that a Turing Machine, a Finite-State Automaton, and other similar constructs have much in common, but each of them is best suited to a particular class of applications.

The standard formalism for a Finite-State Automaton is undoubtedly correct, since it has been checked in multiple domains, but it is very inconvenient in some cases. Say, when a robot manipulates an object, it is the state of the object, not of the machine itself, that changes.

A Finite-State Automaton is perfect for modelling in general, but when it comes to modelling a real machine, the formalism of an Interactive Automaton is better.

The Interactive Automaton is a more natural variant of a Finite-State Automaton. Several improvements are suggested.

1. The concept of the environment is introduced. A machine should not hang in a vacuum.

2. States are attached to the environment, not to the machine, and the number of states may be infinite. This makes the construct computationally as powerful as a Turing Machine. States of the machine are possible too, but only as an extension. (A minimal sketch of this idea follows the list.)
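
To make improvement 2 concrete, here is a minimal sketch in Python, assuming an environment whose state is an unbounded counter. The names CounterEnvironment, "increment" and "reset" are illustrative choices, not part of the formalism; the point is that a classic Finite-State Automaton, with its fixed finite state set, could not track such a counter exactly.

    # Sketch of improvement 2: the state lives in the environment and ranges
    # over an unbounded set (all non-negative integers); the machine itself
    # keeps no state at all. Names are illustrative only.

    class CounterEnvironment:
        def __init__(self):
            self.state = 0              # one of infinitely many possible states

        def apply(self, action):
            if action == "increment":
                self.state += 1         # no a priori bound on the new state
            elif action == "reset":
                self.state = 0

    env = CounterEnvironment()
    for _ in range(1000):
        env.apply("increment")
    print(env.state)                    # -> 1000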

The most detailed description of a particular Finite-State machine is provided by its State Transition Table. That table now becomes redundant; it is replaced by the Action-Result list. The familiar concepts of a state, an action, a transition, and an event also receive more elegant semantics. A state belongs to the environment. An action is executed by the machine. The result of an action is a new state, which depends on the current state. Events are generated by the environment and perceived by the machine; for example, the environment may notify the machine that a new state has been established. Each machine is defined by a list of Event-Reaction associations, where the reaction is one of the machine's actions. Note that Action-Result is a characteristic of the environment, while Event-Reaction is a characteristic of the machine.
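
This split between environment and machine can be sketched in a few lines of Python. The names Environment, Machine, action_result and event_reaction, as well as the toy door example, are assumptions made only for illustration; the formalism does not prescribe any particular data structure.

    class Environment:
        def __init__(self, initial_state, action_result):
            self.state = initial_state
            # Action-Result list: (current state, action) -> new state
            self.action_result = action_result

        def apply(self, action):
            self.state = self.action_result[(self.state, action)]
            # Event generated by the environment: a new state was established.
            return ("state_established", self.state)

    class Machine:
        def __init__(self, event_reaction):
            # Event-Reaction list: perceived event -> one of the machine's actions
            self.event_reaction = event_reaction

        def react(self, event):
            return self.event_reaction.get(event)

    # A toy door: the machine opens it when told that it is closed.
    env = Environment("closed", {("closed", "open"): "open"})
    machine = Machine({("state_established", "closed"): "open"})

    action = machine.react(("state_established", env.state))
    while action is not None:
        event = env.apply(action)
        action = machine.react(event)
    print(env.state)                    # -> "open"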

A well-formulated theory yields conclusions that immediately become practically useful. Several important concepts are hinted at by this formalism. If the machine keeps an internal copy of the environment, that copy can hold only a finite number of states. The gear that reduces an infinite set to a finite one is called attention.
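
One way to sketch attention is as a plain function from environment states to a small set of internal categories. The bucketing scheme below is an arbitrary illustration, not part of the formalism.

    def attention(environment_state):
        # Collapse an unbounded integer state into one of three categories.
        if environment_state == 0:
            return "empty"
        elif environment_state < 10:
            return "few"
        else:
            return "many"

    internal_states = {attention(s) for s in range(100000)}
    print(internal_states)              # the finite set {'empty', 'few', 'many'}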

If the state of the machine itself changes, this is learning. For example, it may happen when the length of the machine's internal Action-Result list changes.
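
A minimal sketch of learning under this reading: the machine keeps an internal Action-Result list (its copy of the environment) and extends it whenever an interaction reveals a previously unseen outcome. The helper learn and the door-related entries are illustrative only.

    internal_action_result = {}   # the machine's own state: (state, action) -> result

    def learn(state, action, observed_result):
        # Record an outcome the machine has not seen before; the list grows,
        # so the state of the machine changes, which is a learning step.
        if (state, action) not in internal_action_result:
            internal_action_result[(state, action)] = observed_result

    learn("closed", "open", "open")
    learn("open", "close", "closed")
    print(len(internal_action_result))  # -> 2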

This approach also explains why people like collective thinking so much: in that case a brain interacts with another brain rather than with a passive environment.


Copyright (c) I. Volkov, December 29, 2015