Artificial Neural Nets

 

For some reason people equate neural nets with machine learning and vice versa. Yet there are machine learning methods that don't use neural nets, and there are neural nets that do useful work without any training. Let's sort out the available options.

 

Formal neuron

A neuron is an elementary voting device. It has many inputs and only one output. A popular view is that a neuron is similar to a logical gate (AND, OR, NOT ...). The differences are the number of inputs (thousands in living tissue) and the fact that live neurons operate on probabilistic principles. That is, a neuron is an elementary unit of fuzzy logic. Nevertheless, in most models formal neurons are implemented as deterministic computing units (a simple particular case), while probabilistic effects emerge as diverse input data flow through the multiple connections of the net.
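For illustration, here is a minimal sketch of such a deterministic formal neuron in Python; the weights and threshold are invented for the example, not taken from any particular model:

    def formal_neuron(inputs, weights, threshold):
        # Fire (output 1) if the weighted vote of the inputs exceeds the threshold.
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    # Fires when at least two of three equally weighted inputs are active.
    print(formal_neuron([1, 1, 0], [1.0, 1.0, 1.0], threshold=1.5))   # 1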

Simple perceptron

The simple perceptron was introduced by Rosenblatt in 1957. Since then it has been overhyped, then discredited because it cannot solve the XOR problem, then rehabilitated in an enhanced, multilayer variant. Although the simple two-layer construct has certain well-documented limitations, it is very computationally efficient. If your application fits within these constraints, why not use it?
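As a sketch of that limitation, the following Python fragment trains a single threshold unit with Rosenblatt's learning rule (the epoch count and learning rate are arbitrary choices for the example). It learns AND, which is linearly separable, but no weights exist that let it reproduce XOR:

    def predict(x, w, b):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                err = target - predict(x, w, b)     # perceptron learning rule
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    w, b = train_perceptron(AND)
    print([predict(x, w, b) for x, _ in AND])   # [0, 0, 0, 1] -- learned
    w, b = train_perceptron(XOR)
    print([predict(x, w, b) for x, _ in XOR])   # never equals [0, 1, 1, 0]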

Internal layers

Note that the concept of a layer is not the same in models as in living stratified structures such as the human neocortex. In the latter, one layer can contain neurons of different types with complicated local connections. In the former, one layer is one array of identical elements. If you want to model the interaction between pyramidal and stellate neurons, you will probably create two arrays and define connections between them. That is, your model will contain more layers than the tissue it describes.
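A minimal sketch of this point in Python with NumPy, where the two cell types become two separate model layers joined by a weight matrix; the sizes and random values are invented for the example:

    import numpy as np

    rng = np.random.default_rng(0)

    n_pyramidal, n_stellate = 8, 4
    pyramidal = rng.random(n_pyramidal)               # activity of pyramidal cells
    links = rng.random((n_stellate, n_pyramidal))     # local connections between the two types

    stellate = links @ pyramidal    # stellate cells driven by pyramidal cells
    print(stellate.shape)           # (4,) -- one biological layer, two model layers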

Convolution

What are called Convolutional Neural Nets (CNNs) in many publications are in fact hybrid solutions that use the term as a brand name. Here we will use it in its direct meaning: a CNN is a two-layer net with limited connectivity, in which each neuron is linked only to some of its neighbours in the next layer. If you know the properties of this building block, you can always understand how it works inside more complicated, multilayer solutions.
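Here is a minimal sketch of this building block in plain Python: a 1-D convolution in which each neuron of the second layer sees only three neighbouring inputs, and all neurons share the same weights. The signal and kernel are invented for the example:

    def convolve_1d(signal, kernel):
        # Each output neuron sees only len(kernel) neighbouring inputs,
        # and every output neuron shares the same weights (the kernel).
        k = len(kernel)
        return [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(len(signal) - k + 1)]

    # An invented edge-detecting kernel over an invented step signal.
    print(convolve_1d([0, 0, 1, 1, 1, 0, 0], [-1, 0, 1]))   # [1, 1, 0, -1, -1]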

Feed-forward nets

They may contain several layers, but information flows in one direction only. Each pass through the net implements one step of image transformation.
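A minimal sketch of such a pass in Python with NumPy, assuming tanh activations and invented layer sizes; note that the data never return to an earlier layer:

    import numpy as np

    def feed_forward(x, layers):
        for w, b in layers:
            x = np.tanh(w @ x + b)   # one direction only: no feedback loops
        return x

    rng = np.random.default_rng(1)
    layers = [(rng.standard_normal((5, 3)), np.zeros(5)),   # input -> hidden
              (rng.standard_normal((2, 5)), np.zeros(2))]   # hidden -> output
    print(feed_forward(np.array([0.5, -0.2, 0.1]), layers))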

Recurrent nets

In this variant, the output loops back to the input. The corresponding function usually takes the number of cycles as a parameter. RNNs are used in methods of successive approximation. As usual, there is a trade-off between quality and time: the more cycles you set, the better the result, but the longer the function runs.
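A minimal sketch of this loop in Python with NumPy; the weights, sizes, and activation are invented for the example, and the number of cycles is an explicit parameter:

    import numpy as np

    def run_recurrent(x, w, n_cycles):
        for _ in range(n_cycles):
            x = np.tanh(w @ x)   # the output of one cycle becomes the next input
        return x

    rng = np.random.default_rng(2)
    w = 0.2 * rng.standard_normal((4, 4))   # small weights keep the loop stable
    x0 = rng.standard_normal(4)
    print(run_recurrent(x0, w, n_cycles=3))     # a rough state
    print(run_recurrent(x0, w, n_cycles=30))    # more settled, but slower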

Internal structure

 

Plasticity

 

Multi-net architecture

 

Hybrid solutions

 

 

 

 

Copyright (c) 1998-2018 I. Volkov

 

 
