Computational Neuroscience: Introduction

Lecture



An introduction to the subject of Computational Neuroscience. The origins of neuroscience: advances in biology and physiology, psychology, discrete mathematics, cybernetics, statistical physics, and synergetics. The role of computer simulation. Philosophical foundations of neuroscience. Historical overview. Course structure. Educational and introductory literature.

"Wink at the computer and it will understand." In the early 1990s, an article under this headline appeared in the New York Times, one of the oldest and most respected newspapers, describing the achievements and trends of the time in the field of intelligent computer systems. Among the main avenues of development for this industry, the publication's experts identified:

  • Computers with a high degree of parallelism in information processing, which can divide a task into parts and process them simultaneously, significantly reducing the total computing time;
  • Computers that use optics instead of electronic signals; optical signals have already begun to be used to transfer data between computers;
  • Computers built on neural networks: machines that work in the way that, according to our current concepts, the brain functions.

The third direction, which essentially relies on the first two, constitutes the main theme of the proposed course of lectures. The course focuses on only one branch of the field of artificial neural networks, namely neuroinformatics: the science of neural-like methods of processing information on computers.

The diversity, large volume, and inconsistency of diagnostic information bring to the fore the problem of finding systems capable of processing it. Solving this complex task is closely connected with new information technologies, among which methods of image recognition and categorization occupy an important place. Neural networks are a powerful, and today perhaps the best, method for solving recognition problems in situations where the experimental data lack explicit regularities and the available information is extremely noisy. The high degree of parallelism admitted by implementations of neural systems allows volumes of information inaccessible to a human operator to be processed in times shorter than, or comparable to, the permissible measurement times.

By the turn of the 1980s, significant results had been achieved in the very young field of synergetics, the science of self-organization in non-equilibrium systems; facts had been systematized and numerous new experiments carried out in neurophysiology, in particular, the structure and mechanism of action of individual neurons had been studied in detail; and the operating principles of parallel architectures had been formulated and the first parallel computers built. These circumstances apparently stimulated the beginning of intensive studies of neural networks as models of associative memory.

Wide interest in neural networks was sparked by the appearance of Hopfield's paper (Hopfield J.J., 1982), which showed that a network of Ising-type neurons reduces to generalizations of a number of models developed by that time in the physics of disordered systems. The operation of the Hopfield network (the one most discussed in the physics literature) consists in the relaxation of an initial "spin portrait" of a binary code pattern to one of the stationary states determined by the learning rule (the Hebb rule). Such a network can therefore be used for recognition tasks.
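The scheme just described can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the lectures: function names such as `train_hebb` and `relax` are chosen for clarity, neurons take the bipolar "spin" values +1/-1, and the weights follow the Hebb rule with zero self-couplings.

```python
def train_hebb(patterns):
    """Hebb rule: w_ij = (1/N) * sum over patterns of x_i * x_j, no self-coupling."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def relax(w, state, max_sweeps=20):
    """Asynchronously update spins until the 'spin portrait' stops changing."""
    n = len(state)
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))  # local field
            s = 1 if h >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:  # reached a stationary state
            break
    return state

# Example: store two patterns, then recover one from a corrupted version.
patterns = [[1, 1, 1, 1, -1, -1, -1, -1], [1, -1, 1, -1, 1, -1, 1, -1]]
w = train_hebb(patterns)
noisy = [-1, 1, 1, 1, -1, -1, -1, -1]  # first pattern with one flipped spin
restored = relax(w, noisy)             # relaxes back to the first pattern
```

The stored patterns are fixed points of the dynamics, which is exactly what makes the network usable for recognition: a noisy input falls into the basin of attraction of the nearest memorized pattern.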

In 1986, the work of Rumelhart, Hinton, and Williams (Rumelhart D.E., Hinton G.E., Williams R.J., 1986) appeared, answering a question that had long held back the development of neuroinformatics: how to train hierarchical layered neural networks, whose universality for a wide class of problems had been proved by the classics back in the 1940s and 1950s. In subsequent years, the proposed back-propagation of errors algorithm underwent countless variations and modifications.
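The idea of back-propagation can be illustrated with a minimal sketch: a 2-2-1 sigmoid network trained by gradient descent on the XOR problem. This is an illustrative toy, not the formulation from the 1986 paper; the name `train_xor`, the network size, and the learning rate are all assumptions made for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=5000, lr=0.5, seed=1):
    """Train a 2-2-1 layered network on XOR by back-propagation of errors."""
    rnd = random.Random(seed)
    # hidden layer: 2 neurons, each with 2 input weights + bias
    wh = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    # output neuron: 2 weights + bias
    wo = [rnd.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    errors = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            # forward pass
            h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
            y = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
            total += (y - t) ** 2
            # backward pass: output delta, then hidden deltas
            dy = (y - t) * y * (1 - y)
            dh = [dy * wo[i] * h[i] * (1 - h[i]) for i in range(2)]
            # gradient-descent weight updates
            wo[0] -= lr * dy * h[0]
            wo[1] -= lr * dy * h[1]
            wo[2] -= lr * dy
            for i in range(2):
                wh[i][0] -= lr * dh[i] * x[0]
                wh[i][1] -= lr * dh[i] * x[1]
                wh[i][2] -= lr * dh[i]
        errors.append(total)

    def predict(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
        return sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])

    return predict, errors
```

The key step is propagating the output error backwards through the weights (`dh[i] = dy * wo[i] * ...`), which is precisely what makes multi-layer training possible where single-layer rules fail: XOR is not linearly separable, so no single-layer perceptron can learn it.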

The diversity of the proposed algorithms, their varying degrees of detail, the possibilities of parallel implementation, and the availability of hardware implementations make comparative studies of the various techniques especially relevant.

Neuroscience is currently passing through a transition from youth to maturity. Work on the theory and applications of neural networks proceeds in several directions: new nonlinear elements are sought that could implement complex collective behavior in an ensemble of neurons, new network architectures are proposed, and applications are explored in image processing, pattern and speech recognition, robotics, and other systems. Mathematical modeling traditionally occupies a significant place in these studies.

The need for a systematic course on the theory of neural networks and the computer systems based on them is largely determined by the lack of domestic textbooks and monographs on the topic. Moreover, the subject has not yet taken its place in traditional university curricula. And while experts at the U.S. Defense Advanced Research Projects Agency (DARPA) expect mass adoption of the new neural network technology to begin in the late 1990s, the current level of theoretical understanding and practical use of neural networks in the global information industry already increasingly requires professional knowledge in this area.

The main objective of the proposed course is a practical introduction to the modern methods and information processing systems united in the scientific literature under the term Computational Neuroscience, as well as an introduction to promising approaches to constructing computing and information systems of new generations. A peculiarity of the topic is its interdisciplinary character: biology and the physiology of higher nervous activity, the psychology of perception, discrete mathematics, statistical physics and synergetics, cybernetics, and, of course, computer modeling have all contributed to the development of neuroscience.

The lectures contain basic information about the principles of organization of natural (biological) neural networks and their mathematical models, artificial neural networks, needed to synthesize neural network algorithms for practical tasks. For this purpose the course includes two introductory topics: a mathematical introduction (Lecture 2) and introductory biological information (Lecture 3). The formal mathematical content of the course is minimized and relies on basic knowledge from courses in linear algebra and differential equations. The course is therefore mainly intended for, and can be recommended to, engineering students, as well as applied mathematicians and programmers.

The main sections of the course

  • Introduction, information from biology, physiology of higher nervous activity, psychology, cybernetics, statistical physics and discrete mathematics;
  • Biological neuron and its mathematical model;
  • The perceptron, linear separability, and Rosenblatt's learning theorem;
  • Training a neural network as a combinatorial optimization problem;
  • Hebb's rule, the Hopfield model and its generalizations;
  • Hierarchical neural networks;
  • The error back-propagation algorithm;
  • The Lippmann-Hamming, Hecht-Nielsen, and Kosko models;
  • Ways of presenting information in neural networks;
  • Modern neural network architectures; Fukushima's Cognitron and Neocognitron;
  • Theory of adaptive resonance;
  • Genetic search algorithms for building topology and learning of neural networks;
  • Adaptive cluster analysis and the Kohonen self-organizing map;
  • State machines and neural networks;
  • Conclusion: the state of modern neuroscience, sixth-generation neurocomputers, neuroprocessors, software, and scientific and commercial applications.

Literature

A. Primary

  • F. Wasserman. Neurocomputer Technology. Moscow: Mir, 1992.
  • A.N. Gorban, D.A. Rossiev. Neural Networks on a Personal Computer. Novosibirsk: Nauka, 1996.
  • Computer Science: A Handbook. Ed. D.A. Pospelov. Moscow: Pedagogika, 1996.

B. Additional

  • T. Kohonen. Associative Memory. Moscow: Mir, 1980.
  • F. Rosenblatt. Principles of Neurodynamics. Moscow: Mir, 1965.
  • Automata. Ed. C.E. Shannon and J. McCarthy. Moscow: Foreign Literature Publishing House, 1956.
  • D. Marr. Vision. Moscow: Radio i Svyaz, 1987.
  • M. Minsky, S. Papert. Perceptrons. Moscow: Mir, 1971.
  • N. Wiener. Cybernetics. Moscow: Sovetskoe Radio, 1968.
  • A.A. Vedenov. Modeling Elements of Thinking. Moscow: Nauka, 1988.
  • A.Yu. Loskutov, A.S. Mikhailov. Introduction to Synergetics. Moscow: Nauka, 1990.
  • S.O. Mkrtchyan. Neurons and Neural Networks. Moscow: Energiya, 1971.
  • A.N. Gorban. Neural Network Training. Moscow: SP Paragraf, 1990.
  • A.I. Galushkin. Synthesis of Multilayer Pattern Recognition Systems. Moscow: Energiya, 1974.
  • F.R. Gantmacher. Theory of Matrices. Moscow: Nauka, 1988.
  • N. Green, W. Stout, D. Taylor. Biology. Ed. R. Soper. Vols. 1-3. Moscow: Mir, 1990.
  • G. Shepherd. Neurobiology. Vols. 1-2. Moscow: Mir, 1987.
  • F. Bloom, A. Lazerson, L. Hofstadter. Brain, Mind, and Behavior. Moscow: Mir, 1988.
  • B. Bunday. Optimization Methods. Moscow: Radio i Svyaz, 1988.
A remark to the 1998 electronic version. In the five years since these lectures were written, significant changes have taken place in Russian neuroinformatics. Courses on neural networks have come into wide use in higher-education programs for various technical specialties. Textbooks, still few in number, have appeared, above all the book by A.N. Gorban and D.A. Rossiev (1996). Alas, a print run of 500 copies does not allow this excellent (albeit relatively advanced) publication to be considered a basic textbook.
created: 2016-01-16
updated: 2021-03-13


