Theory of cognitive regulation 2.0

 

A Turing machine remains the main theoretical concept used for discussing computations in various particular situations. However, it is too general, and for many applications it would be useful to have a concept closer to the human central nervous system. The theory of automatic regulation is a mathematical expression of the principle of self-maintenance. This principle is at the foundation of life. Stable objects of non-living nature exist due to rigid links which keep their parts together. In contrast, living organisms are soft, but they are capable of regeneration. If something tries to destroy them, they actively return themselves to the normal state. A classical regulator has a few standard parts.

 


Fig. 1. A classical regulator.

 


A memory cell stores the normal value of a regulated parameter. A sensor measures its current, actual value. A comparator computes the difference between the 2 values. This difference passes through a transfer function and drives an executive organ, which acts back on the regulated parameter.
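As a minimal illustration, the loop below sketches such a regulator in Python. All names (SET_POINT, read_sensor, apply_effector) and the proportional transfer function are assumptions, introduced only to mirror the parts listed above.

    # A minimal sketch of a classical regulator (hypothetical names throughout).
    SET_POINT = 37.0   # memory cell: the normal value of the regulated parameter
    GAIN = 0.5         # a simple proportional transfer function

    def read_sensor(state):
        # sensor: measures the current, actual value
        return state["value"]

    def apply_effector(state, drive):
        # executive organ: acts back on the regulated parameter
        state["value"] += drive

    def regulator_step(state):
        error = SET_POINT - read_sensor(state)  # comparator
        apply_effector(state, GAIN * error)     # transfer function -> effector

    state = {"value": 33.0}
    for _ in range(10):
        regulator_step(state)
    print(round(state["value"], 2))  # converges toward SET_POINT (37.0)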

This scheme is still too general: it equally describes the operation of the nervous system and of evolutionarily older systems that rely on biochemical mechanisms. Now let's add yet another sensor.


Fig. 2. A regulator with an additional external sensor.

 


A classical regulator looks inward. This additional receptor looks outward - hence the cognitive principle. Its purpose is initially undefined, so by itself it is essentially useless. Nevertheless, this addition opens broad horizons. Changes of the regulated state are often the result of influence from the environment. Using an external effector for manipulations in the outside world would enhance the regulator's possibilities, but without an external sensor it would be blind: it could gauge the efficiency of its own operation only by the final effect on the input parameter. A cognitive regulator can do it earlier. It can also detect various circumstances which make operation impossible. Moreover, the external sensor may become a proxy for the main one: it can warn about possible deterioration in advance.

Let's apply this principle to the example of a thermostat. The classical variant is simple: it has a work chamber, a thermometer, and 2 internal effectors - a refrigerator and a heater. Next, let's add a real-world environment: the device stands on a table near a window. Now we know the reason for the rising temperature - direct sunlight. Suppose half of the table is in shadow. Then we already have 2 variants of regulatory action: we can turn on the refrigerator, or just move the device into the shadow. Let's add wheels and use a light sensor to choose the direction. Live animals use their muscles instead of wheels.
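A minimal Python sketch of this cognitive thermostat follows; the names (regulate, light_here, light_elsewhere) and the decision rule are assumptions, introduced only to illustrate the choice between the internal and the external route.

    # The cognitive thermostat can either run its internal effector (the
    # refrigerator) or use the external sensor/effector pair (light sensor +
    # wheels) to leave the sunlight. All names are hypothetical.
    TARGET = 20.0

    def regulate(temperature, light_here, light_elsewhere):
        if temperature <= TARGET:
            return "idle"
        # External route: the light sensor reports a cooler spot in the shadow.
        if light_elsewhere < light_here:
            return "move into shadow"
        # Internal route: no better spot available, compensate internally.
        return "turn on refrigerator"

    print(regulate(24.0, light_here=0.9, light_elsewhere=0.2))  # move into shadow
    print(regulate(24.0, light_here=0.9, light_elsewhere=0.9))  # turn on refrigerator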


Fig. 3. The thermostat with an external sensor and effector.

 


This scheme shows that while external events may provoke some behavior, they play only a modulatory role. The main trigger is inside the organism.

The additional input-output pair requires a more complicated control infrastructure. One thing is clear from the beginning: a simple transfer function is no longer enough.

 

Of course, we could invent such an infrastructure step by step by trial and error, but this work has already been done by nature. The human brain is the result of a long evolution and already contains optimized solutions. Which methods has nature found useful?

The brain has a clearly layered structure. The inner parts appeared earlier in evolution, and this layering is repeated in the functional architecture: the inner parts remained self-sufficient, while more recent additions only improved and enhanced their operation.

When one constructs a system of automatic regulation, the first question is which actions to associate with deviations of particular parameters. Simple nervous systems used the simplest solution, one even simpler than trial and error: a deviation of a vitally important parameter triggered overall excitation, and the organism made random movements until the situation improved. The method is very crude and the probability of success is low. The next steps of development led towards more goal-directed behavior, but in the middle of our brain we still have a large amorphous structure called the reticular formation. It works like the Power button of a computer: it controls sleep, wakefulness, and the overall level of excitation.
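A toy Python sketch of this primitive strategy; the single "discomfort" number standing for the vital parameter, the fixed seed, and the step cap are assumptions made purely for illustration.

    import random

    # Overall excitation triggers random movements, which stop only when the
    # vital parameter returns to normal (discomfort reaches zero).
    def random_movements_until_ok(discomfort, max_steps=10_000):
        rng = random.Random(0)  # fixed seed keeps the sketch reproducible
        steps = 0
        while discomfort > 0 and steps < max_steps:
            discomfort += rng.choice([-1, +1])  # a random movement may help or harm
            steps += 1
        return steps

    print(random_movements_until_ok(5))  # crude and slow, but it can succeed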

The human organism is a complicated biochemical factory, and like other technological processes it requires careful maintenance of certain parameters such as body temperature, the concentration of internal fluids, etc. The corresponding receptors and the bulk of the control circuits are located in the hypothalamus. Its output controls the internal organs. Thus we have the first complete regulatory system, fully contained inside the body.

The next layer has receptors directed outward, such as hearing and vision. Its executive organs are muscles, so it operates in the outer environment as well. Its highest control center is apparently the basal ganglia - a complex of structures which stores small automated elements of behavior.

Finally, there is the neocortex - the youngest layer. It has no receptors and effectors of its own; it is connected to the input-output of the previous layer. The neocortex enhances operation exactly as universal computers improved specialized automata: it adds programmability.

The brain as a whole is a regulator, but like modern automatic control systems it has an embedded computer. That is what interests us here. As the thermostat example shows, the need for a computer appears with the acquisition of an additional external receptor-effector pair. The surrounding world is much more complicated, so simple regulatory circuits become insufficient. Meanwhile, the task for this computer still comes from the internal receptors. Such signals now activate 2 pathways: an internal organ can start a compensatory process, or the muscles can produce movements. In the second case, muscular activity is usually coordinated with sensory input.

 

Fig. 4. The two pathways from internal receptors.

 

Which parts of the brain should we include in this computer? The answer requires understanding a nuance about the division of labour between hardware and software. In the nervous system it is different from traditional computers. Even the simplest neural nets are capable of some processing, such as decision making. If the net has an internal microstructure, this processing may be rather complicated. It is distributed: in a live net, memory elements work simultaneously in parallel, and the transformation of data proceeds over the whole array at once. If you connect several nets together, the construct can perform functionality which is traditionally provided by software. Examples include IF-THEN-ELSE, cycles, and procedure calls.
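To make the IF-THEN-ELSE case concrete, here is a sketch built from classical McCulloch-Pitts threshold units; the particular weights and wiring are assumptions, chosen only to show that branching can live in the connections rather than in program code.

    # A threshold unit: fires (1) when the weighted sum of its inputs reaches
    # the threshold. Two such units wired to a common condition line implement
    # IF-THEN-ELSE without any explicit program.
    def unit(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def if_then_else(condition, then_signal, else_signal):
        then_out = unit([condition, then_signal], [1, 1], 2)   # needs condition ON
        else_out = unit([condition, else_signal], [-2, 1], 1)  # inhibited by condition
        return then_out, else_out

    print(if_then_else(1, 1, 1))  # (1, 0): the THEN pathway fires
    print(if_then_else(0, 1, 1))  # (0, 1): the ELSE pathway fires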

As a result, the human neocortex alone may already be considered a fully functional computer. The flow of neuronal activation within it corresponds to the flow of statements within a program. To decompile a neuro-program, it is sufficient to trace the connections between cortical areas.

The overall picture will be as follows. Approximately half of the neocortex processes input from several sensory channels. The other half is involved in the generation of muscular output. Both subsystems have a clear hierarchical organization, from the peripheral organs through the subcortical nuclei of the brain up to the most abstract cortical fields. This structure corresponds 1:1 to event handling in a computer. How does an event link to the corresponding handler subroutine? There are 2 main possibilities. The first uses direct links between sensory and motor areas. Other links go through the basal ganglia, which reside in the middle of each hemisphere. In this case the basal ganglia work like the BIOS of a computer: they don't just relay input to output, but coordinate them via small, system-level programs.
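The two routing possibilities might be sketched as follows; the event names, the direct-link table, and the coordinator's little program are hypothetical, standing in for reflex-like links and for the basal ganglia respectively.

    # Direct sensory-to-motor links versus links through a BIOS-like coordinator.
    direct_links = {
        "loud_noise": "startle",  # a reflex-like direct link
    }

    def basal_ganglia(event):
        # Not a simple relay: a small system-level program coordinating output.
        if event == "object_approaching":
            return ["turn_head", "track_object", "prepare_limbs"]
        return []

    def handle(event):
        if event in direct_links:
            return [direct_links[event]]
        return basal_ganglia(event)

    print(handle("loud_noise"))          # ['startle']
    print(handle("object_approaching"))  # ['turn_head', 'track_object', 'prepare_limbs']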

The brain uses associative memory and rule-based software. The operation of this computational system is quite different from that of a von Neumann processor. It has no clock generator. Instead, the circulation of activity is maintained via feedback through the external environment. When the muscles perform some action, this changes the surrounding world. These changes will be perceived and will modify the conditions of some rules. Those rules will be activated and produce new actions. The process cycles.
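A minimal sketch of such clockless, rule-based circulation, with an invented two-variable world and two invented rules:

    # Actions change the environment; the changed environment re-triggers rules.
    world = {"light": "off", "room": "dark"}

    rules = [
        (lambda w: w["room"] == "dark" and w["light"] == "off",
         lambda w: w.update(light="on")),
        (lambda w: w["light"] == "on" and w["room"] == "dark",
         lambda w: w.update(room="lit")),
    ]

    for _ in range(3):  # no clock: each pass is driven by the perceived world state
        for condition, action in rules:
            if condition(world):
                action(world)  # "muscles" act and the surrounding world changes
                break          # the new perception restarts the cycle
    print(world)  # {'light': 'on', 'room': 'lit'}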

Such a computational system is self-sufficient and minimally workable, but it will be unstable: it is too dependent on external developments. The need for a separate processor therefore remains. Such a processor would provide selective event filtering and coordinate the work of the various links.

 

Copyright (c) I. Volkov, January 27, 2014