Artificial Neural Nets

 

For some reason, people equate neural nets with machine learning and vice versa. Yet there are machine learning methods that don't use neural nets, and neural nets that can do useful work without training. Let's sort out the existing options.

 

Formal neuron

A neuron is an elementary voting device. It has many inputs and only one output. A popular view is that a neuron is similar to a logical function (AND, OR, NOT ...). The differences are the number of inputs (thousands in living nature) and the fact that live neurons operate on probabilistic principles. That is, a neuron is an elementary unit of Fuzzy Logic. Nevertheless, in most models formal neurons are implemented as deterministic computing units (a simple special case), while probabilistic effects emerge as diverse input data flow through the multiple connections of the net.
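
Below is a minimal sketch in Python (all names and numbers are illustrative, not taken from any particular model) of such a deterministic formal neuron: a weighted "vote" over the inputs followed by a hard threshold.

    # Deterministic formal neuron: weighted sum of inputs, then a threshold.
    def formal_neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Example: three inputs voting with unequal weights.
    print(formal_neuron([1, 0, 1], [0.5, 0.2, 0.4], threshold=0.8))  # -> 1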

Simple perceptron

The perceptron was introduced by Rosenblatt in 1957. Since then it has been overhyped, then discredited because it cannot solve the XOR problem, then rehabilitated in an enhanced, multilayered variant. Although the simple, 2-layered construct has certain well-reported limitations, it is very computationally efficient. If your application fits within these constraints, why not use it?
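
As an illustration of why this construct is so cheap, here is a sketch of the classic perceptron learning rule (the function and data names are mine, not Rosenblatt's): it learns the linearly separable AND function, while XOR would never converge with the same construct.

    # Sketch of the perceptron learning rule (illustrative).
    def train_perceptron(samples, epochs=20, lr=0.1):
        w = [0.0] * len(samples[0][0])
        b = 0.0
        for _ in range(epochs):
            for x, target in samples:
                y = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
                err = target - y
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # AND is linearly separable and is learned quickly; XOR is not.
    and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(and_data))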

Internal layers

Note that the concept of a layer is not the same in models as in living stratified structures like the human neocortex. In the latter, one layer can contain neurons of different types with complicated local connections. In the former, one layer is one array of identical elements. If you want to model the interaction between pyramidal and stellate neurons, you will probably create two arrays and define connections between them; that is, the number of layers in your model will be greater than in the tissue you are modelling.
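
A hypothetical sketch of that modelling choice (the neuron counts and random weights are purely illustrative): one biological layer becomes two model arrays plus a connection matrix between them.

    import random

    # Two model "layers" for two neuron types that share one cortical layer.
    pyramidal = [0.0] * 4
    stellate = [0.0] * 3
    # connections[i][j]: weight from stellate neuron j to pyramidal neuron i.
    connections = [[random.uniform(-1, 1) for _ in range(len(stellate))]
                   for _ in range(len(pyramidal))]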

Convolution

What many publications call Convolutional Neural Nets (CNN) are in fact hybrid solutions that use the term as a brand name. Here we will use it in the direct sense: a CNN is a 2-layered net with limited connectivity, where each neuron is linked only to some of its neighbours in the next layer. If you know the properties of this building block, you can always understand how it behaves in more complicated, multilayered solutions.
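
This direct meaning can be sketched in Python as follows (a 1-D example with an invented smoothing kernel): each output neuron sees only a small neighbourhood of the input, and all of them share the same few weights.

    # Limited connectivity: each output depends only on k neighbouring inputs.
    def convolve_1d(signal, kernel):
        k = len(kernel)
        return [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(len(signal) - k + 1)]

    print(convolve_1d([1, 2, 3, 4, 5, 4, 3, 2], [0.25, 0.5, 0.25]))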

Feed-forward nets

These may contain several layers, but information flows in only one direction. Each pass implements one step of image transformation.
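
A minimal sketch of such a one-way pass (the weights and sizes are invented for illustration): the data move through a list of weight matrices and never come back.

    import math

    # One forward pass: each matrix maps the current layer to the next one.
    def forward(x, layers):
        for W in layers:
            x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in W]
        return x

    layers = [
        [[0.2, -0.1, 0.4], [0.7, 0.3, -0.5]],  # 3 inputs -> 2 hidden units
        [[0.6, -0.8]],                          # 2 hidden units -> 1 output
    ]
    print(forward([1.0, 0.5, -1.0], layers))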

Recurrent nets

In this variant, the output loops back to the input. The corresponding function usually takes the number of cycles as a parameter. RNNs are used in methods of successive approximation. As usual, there is a tradeoff between quality and time: more cycles give a better result, but the function runs longer.
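
A toy sketch of this tradeoff (the update rule is invented; only the structure matters): the output is fed back to the input, and the number of cycles is an explicit parameter.

    # Successive approximation by feeding the output back to the input.
    def run_recurrent(input_value, n_cycles, w_in=0.5, w_back=0.5):
        state = 0.0
        for _ in range(n_cycles):
            state = w_in * input_value + w_back * state
        return state

    # More cycles -> closer to the fixed point, but more time.
    for cycles in (1, 5, 50):
        print(cycles, run_recurrent(1.0, cycles))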

Internal structure

The main difference between the brain and our computers is that the brain has no separation between storage and processing. The neocortex may be regarded as both working and long-term memory, but it also performs important data-processing functions, which are hard-wired in the microstructure of intracortical connections. When a computer processes data, this involves constant swapping between memory and processor. When you think, the process may be triggered by some external event, but it happens entirely within your cortical fields.

Plasticity

Human memory is implemented as changes in the efficiency of synapses, but how specifically does it work? Physiology has made substantial efforts in this direction. The overall result is that there are many different mechanisms operating in parallel. For theoretical models and practical implementations, one has to choose an algorithm that represents their combined effect. One popular variant is the Hebb synapse, which is strengthened when an active input coincides in time with an active output. This principle is no more than a bold hypothesis that transfers Pavlovian learning to the level of microstructure. For practical purposes it is better not to reproduce these mechanisms 1:1, but to grasp the general idea implemented by Nature: long-term adaptation. That is, when you train your artificial neural net, just change the synapses in whatever way produces the desired effect.
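
A sketch of a Hebbian-style update (an illustrative rule, deliberately not a 1:1 reproduction of any biological mechanism): a synapse grows when its input is active at the same time as the output.

    # Hebbian update: strengthen synapses whose input coincides with the output.
    def hebbian_update(weights, inputs, output, lr=0.1):
        return [w + lr * x * output for w, x in zip(weights, inputs)]

    w = [0.1, 0.1, 0.1]
    w = hebbian_update(w, inputs=[1, 0, 1], output=1)
    print(w)  # only the synapses of the active inputs have grown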

Multi-net architecture

For useful projects it is necessary to link several homogeneous neural nets; simply adding more layers is not equivalent. In such constructs, internal connections between neurons define the processing functions, while connections between nets define the architecture of the computing system. Most interestingly, the links between various fields of the neocortex correspond to the software of ordinary computers.
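
A sketch of such a construct (two toy nets with invented weights): the internal matrices define what each net computes, while the way the calls are wired together defines the architecture of the whole system.

    import math

    def run_net(x, W):
        return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in W]

    W_a = [[0.3, -0.2], [0.5, 0.1]]  # net A: internal processing function
    W_b = [[0.4, 0.6]]               # net B: internal processing function

    def system(x):
        # The wiring between the two nets is the "architecture" of the system.
        return run_net(run_net(x, W_a), W_b)

    print(system([1.0, -1.0]))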

Hybrid solutions

In practice, several of the features above are used in combination. For example, you may use a CNN and add a feedback loop, that is, an RNN element.
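
A toy sketch of such a combination (an invented example, reusing the ideas above): a convolution step with limited connectivity is wrapped in a feedback loop, so the construct is both convolutional and recurrent.

    def convolve_1d(signal, kernel):
        k = len(kernel)
        return [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(len(signal) - k + 1)]

    # Feed the convolved result back for several cycles, padding the ends
    # so that the signal keeps its length.
    def hybrid(signal, kernel, n_cycles):
        state = signal
        for _ in range(n_cycles):
            inner = convolve_1d(state, kernel)
            state = [state[0]] + inner + [state[-1]]
        return state

    print(hybrid([1, 5, 1, 5, 1, 5, 1, 5], [0.25, 0.5, 0.25], n_cycles=3))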

 

 

 

Copyright (c) 1998-2020 I. Volkov

 

 
