I believe it is worthwhile to provide here a few examples of how people enrich the vocabulary of their natural or specialized languages. This will allow the reader to develop a more or less tangible feel for how language and other symbolic systems accumulate information. The first example presents an analysis of concept formation in children, drawn from Lev Vygotsky's classic book Thinking and Speech. The second is a description of the emergence of scientific concepts from Simon Kordonsky's book Cycles of Activity and Ideal Objects. The third describes the mechanism of the common law and is borrowed from Harold Berman's brilliant book The Nature and Functions of Law.
2.2. Informational networks
Brain and distributed knowledge
Initially, cognitive science dealt with the task of creating artificial computing systems, while ideas for building computers were borrowed from what was known about the natural unit of information processing, the human brain. Hence not only the ‘computer-centeredness’ of cognitive science, but also its ‘brain-centeredness’. To this day, cognitive science remains much closer to the fields that study the cognitive processes of the individual (such as psychology or neuroscience) than to the social sciences.
The idea that the human mind is extended, that cognition is embedded in the world and relies on interaction with other people and with material reality, is certainly present in these studies. However, this external environment is seen primarily as an extension of the individual brain, as the infrastructure that supports it.
In the social sciences the emphasis falls differently. As already mentioned, economists since Adam Smith have stressed the importance of specialization and of the distribution of knowledge among people. The study of information networks in cognitive science is thus focused somewhat differently than it is in this book. Nevertheless, the results obtained there will be extremely useful to us.
‘Classical’ cognitive science and the Turing machine
Two directions, or perspectives, are usually distinguished in cognitive science: one is called classical, the other connectionist.
The classical line deals with the processing of information in symbolic form. This is, in fact, what we were discussing when considering sign systems. Yet the notion of sign systems is traditionally not the starting point: introductions to cognitive science usually begin with the concepts of the algorithm and of the Turing machine. The latter model will also be useful in this book.
A Turing machine is an abstract operator (an abstract computer), proposed by Alan Turing in 1936 to formalize the concept of an algorithm.
Picture 5. Turing machine10
The structure of the Turing machine includes a doubly infinite tape divided into cells (some variants operate on several infinite tapes) and a control device that is always in one of a set of states. The number of possible states of the control device is finite and precisely specified.
The control unit can move left and right along the tape, reading and writing symbols of a finite alphabet into the cells. A special blank symbol fills all the cells except the finitely many on which the input is written.
The control unit operates according to transition rules, which represent the algorithm implemented by a given Turing machine. Depending on the machine's current state and the symbol observed in the current cell, each transition rule prescribes writing a new symbol into that cell, switching to a new state, and moving one cell to the left or to the right. Some states of the Turing machine may be labeled as terminal; a transition to any of them means the end of the work, i.e. the halting of the algorithm11.
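The transition-rule mechanism just described is compact enough to sketch in a few lines of code. The following is a minimal, illustrative simulator in Python; the function name, the blank symbol ‘_’, and the sample ‘binary increment’ program are my own choices for illustration, not anything taken from the literature.

```python
def run_turing_machine(rules, tape, state, blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, new_state, move),
    where move is -1 (one cell left) or +1 (one cell right).
    A state with no applicable rule acts as a terminal state.
    """
    cells = dict(enumerate(tape))  # sparse tape: unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:   # terminal state: halt
            break
        new_symbol, state, move = rules[(state, symbol)]
        cells[head] = new_symbol           # write a new symbol in this cell
        head += move                       # move one cell left or right
    # read back the written portion of the tape, left to right
    return "".join(cells[i] for i in sorted(cells)).strip(blank)


# Example program: add one to a binary number written on the tape.
increment = {
    ("right", "0"): ("0", "right", +1),  # scan to the right end of the number
    ("right", "1"): ("1", "right", +1),
    ("right", "_"): ("_", "carry", -1),  # step back and start carrying
    ("carry", "1"): ("0", "carry", -1),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "done", -1),   # 0 + carry = 1, finished
    ("carry", "_"): ("1", "done", -1),   # carried past the left end: prepend 1
}
print(run_turing_machine(increment, "1011", "right"))  # 1011 (11) -> 1100 (12)
```

The ‘done’ state never appears on the left side of any rule, so reaching it halts the machine, exactly as a terminal state does in the description above.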
The so-called Church's thesis (or Church–Turing thesis) claims that any algorithm, i.e. any predetermined, precise set of instructions for performing certain actions, can be implemented as a Turing machine. For example, it is possible to implement a rule for calculating the constant π to a given number of digits. Moreover, it is known how to construct a universal Turing machine that will perform any algorithm: one need only feed it the appropriate program.
Limits of computability
Church's thesis cannot be proved; it is a philosophical generalization, based on the fact that so far not a single algorithm has been found that could not be represented as a Turing machine. But then the next question arises: can all knowledge be turned into an algorithm? In particular, is it possible to construct a Turing machine that will answer any question formulated in its language? For example, will it be able to recognize all valid mathematical theorems and reject the false ones? This question was extensively discussed in the period between the world wars, and it was then that it received a negative answer, in the form of Gödel's theorem.