Thus, algorithmic methods of information processing are, even in theory, insufficient to model the processes of cognition. But that was not the only reason why cognitive science turned to the concept of networks. The structure of the human brain bears little resemblance to a Turing machine: it is a network of neurons, and in many cases this neural network is a far more efficient means of processing information.
Examples of tasks that neural networks perform much faster than even the most advanced computers concern the most basic functions of the body: sight, hearing, and coordination of movement. To compute these actions in digital (symbolic) form, a computer would need hours, whereas the body performs them in a fraction of a second.
Picture 6. Brain neurons12
Because of this, a different direction emerged within cognitive science, devoted to the study of information networks, the most typical of which are networks of abstract (or artificial) neurons. This direction is referred to as connectionism.
Work on artificial neural networks, commonly referred to as ‘neural networks’, has been motivated right from its inception by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear, and parallel computer (information-processing system). It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations (e.g., pattern recognition, perception, and motor control) many times faster than the fastest digital computer in existence today13.
Connectionism (and neural networks) is almost synonymous with the concept of Parallel Distributed Processing of information (PDP)14.
An artificial neural network is a system of interconnected, interacting simple elements (artificial neurons). Each element of the network deals only with the signals it periodically receives and the signals it periodically sends to other elements.
Picture 7. A simple neural network. White indicates the input neurons, gray the hidden neurons, black the output neuron 15
An artificial neural network has the following characteristics:
• Activation function: each neuron in the network generates its own signal in response to the strength of the signals arriving at it;
• A set of synapses, i.e. connections that transmit signals from one neuron to another (shown in the figure by arrows);
• Weights: the signal passing from neuron i to neuron j is strengthened or weakened by multiplying it by the weight wij.
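These ingredients can be sketched in a few lines of code. The following is a minimal illustration of a single artificial neuron, not any particular model from the text: the sigmoid activation function and the sample inputs and weights are assumptions chosen for the example.

```python
import math

def sigmoid(x):
    # Activation function (assumed here): maps the weighted sum
    # of incoming signals to an output signal between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights):
    # Each incoming signal is strengthened or weakened by
    # multiplying it by its weight w_ij, then the neuron applies
    # its activation function to the sum.
    total = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(total)

# hypothetical incoming signals and weights, for illustration only
print(neuron_output([1.0, 0.5], [0.8, -0.4]))
```

The output of such a neuron would in turn serve as an input signal to the neurons it is connected to by synapses.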
For a given network, the activation function and the set of connections remain constant, while the weights may change. This is how the network is configured to solve a specific problem. Neural networks are not programmed like a computer; they are trained. For this purpose, some rule for correcting the weights wij is applied so that the results of the network's operation gradually approach the desired effect.
Both the activation function and the weight-adjustment rules are usually very simple. Nevertheless, it turns out that such networks of simple elements can, through repeated 'training', be taught to perform fairly complicated actions.
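Such a weight-correction rule can be made concrete. The sketch below uses the classical perceptron learning rule, one of the simplest such rules (the text does not name a specific one); the task of learning logical AND, the step activation function, and the learning rate are assumptions chosen for the example.

```python
def step(x):
    # A very simple activation function: the neuron fires (1)
    # when the weighted sum reaches the threshold.
    return 1 if x >= 0 else 0

# Training examples: input signals and the desired output (logical AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how strongly each error corrects the weights

for _ in range(20):  # repeated 'training' passes over the examples
    for (x1, x2), desired in samples:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = desired - out
        # Correction rule: nudge each weight w_ij so that the
        # network's output moves toward the desired effect.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

for (x1, x2), _ in samples:
    print((x1, x2), step(weights[0] * x1 + weights[1] * x2 + bias))
```

Both the rule and the activation function here are trivially simple, yet repeated passes over the examples are enough for the weights to settle on values that solve the task.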