Artificial Intelligence as an Empirical Problem

Lecture



(Source: Luger, George F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th ed. Translated from English. Moscow: Williams Publishing House, 2003, pp. 777-807)

Part VII

Epilogue

The potential of computer science, if fully explored and developed, will take us to a higher plane of knowledge about the world. Computer science will help us achieve a deeper understanding of intellectual processes. It will deepen our knowledge of learning, thinking, and analysis. We will be able to build models and conceptual tools for a science of knowledge. Just as physics played the dominant role in this century's study of the nature of matter and the beginning of the universe, today the frontier is the study of the intellectual universe of ideas, knowledge structures, and language. In my opinion, this will lead to significant advances that will fundamentally change our lives... It seems to me that the hour is not far off when we will understand the principles of organizing and managing knowledge...

- John Hopcroft, Turing Award lecture, 1987

What is mind? No matter. What is matter? Never mind.

- Homer Simpson

We will learn when we realize that it is important. - Earl Weaver

Reflections on the nature of intelligence

Although this book touches on many of the philosophical aspects of artificial intelligence, its main focus is the engineering techniques used to build intelligent computer artifacts. In conclusion, we return to the deeper questions of the philosophical foundations of artificial intelligence, attempt once more to rethink the possibilities of a science of knowledge grounded in AI methodology, and discuss future directions for the discipline.

As noted repeatedly, the study of human knowledge and human problem solving has made a significant contribution to the theory of artificial intelligence and the development of its software. In turn, work in AI has made it possible to build models and experimentally verify scientific results in many disciplines, including biology, linguistics, and cognitive psychology. In conclusion, we discuss such topics as the limitations of representations, the importance of the physical embodiment of thought processes, and the role of culture in the accumulation and interpretation of knowledge. These questions lead to new scientific and philosophical problems, such as the refutation of models or the nature and limits of the scientific method itself. The author's experience has led him to an interdisciplinary approach that combines work in AI with the research of psychologists, linguists, biologists, anthropologists, epistemologists, and specialists in other areas who study the entire spectrum of problems of human thinking.

Traditionally, work in artificial intelligence was based on the physical symbol system hypothesis [Newell and Simon, 1976]. Within this approach, complex data structures and search strategies were developed, which in turn led to a number of important results. Systems exhibiting elements of intelligent behavior were created, and many components that make up human intelligence were identified. It is important to note that most of the results based on these early approaches were limited by assumptions rooted in the philosophy of rationalism. In the rationalist tradition, intelligence itself is regarded as a process of logical reasoning and scientific problem solving, based on a direct, empirical approach to understanding the universe. This philosophical rationalism unduly restricts the development of artificial intelligence at the present stage.

The book presents many more recent developments, including alternative models of learning, agent-oriented and distributed problem-solving systems, approaches to the embodiment of intelligence, and research on evolutionary computation and artificial life. These approaches to understanding intelligence provide necessary alternatives to the ideas of rationalist reductionism. Biological and social models of intelligence have shown that the human mind is largely a product of our bodies and senses. It is bound up with cultural and social traditions, inspired by works of art, our own experience, and the experience of the people around us. By creating methods and computer models of such complex processes as evolution or the adaptation of the neural network structures of the human brain, researchers in artificial intelligence have obtained many powerful new results that complement the more traditional methodologies.

Artificial intelligence, like computer science itself, is a fairly young field. If the development of physics or biology is measured in centuries, the age of modern computer science is measured in decades. In Chapter 16 we try to integrate the various approaches to AI into a unified science of intelligent systems. The author believes that this combination of science, technology, philosophy, and credo will make it possible to create new artifacts and experiments which, used properly, will allow a deeper understanding of the general principles of intelligent systems. This chapter takes up the tradition, proposed in Chapter 1, of examining the epistemological foundations of AI. This is done not to give critics a fitting response (although many of their attacks still require an answer), but with a positive goal: to explore and highlight the paths along which this science can develop.

Artificial Intelligence as an Empirical Problem

Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. Each new program that is built is an experiment. It poses a question to nature, and its behavior offers clues to an answer. Neither machines nor programs are black boxes; they are artifacts that have been designed, both hardware and software, and we can open them up and look inside. We can relate their structure to their behavior and draw many lessons from a single experiment.

- A. Newell and H. A. Simon, Turing Award lecture, 1976

The study of thinking machines teaches us more about the brain than we can learn by introspective methods. Western man is externalizing himself in the form of gadgets.

- William S. Burroughs, Naked Lunch

Where is the knowledge we have lost in information? - T. S. Eliot, choruses from "The Rock"

16.0. Introduction

For many people, the most surprising aspect of work in artificial intelligence is the degree to which AI, and indeed most of computer science, turns out to be an empirical discipline. This is surprising because most people think of these fields in terms of their mathematical or engineering backgrounds. Mathematicians of the "neat" persuasion tend to want to apply the logical reasoning and analysis they are accustomed to in constructing intelligent devices. From the point of view of the "scruffy" engineers, the task often consists simply of building systems that society would call "intelligent." Unfortunately or fortunately (depending on one's point of view), the complexity of intelligent programs and the uncertainty inherent in their interaction with the worlds of nature and human activity make analysis from a purely mathematical or purely engineering point of view impossible.

Moreover, if we are to raise the study of artificial intelligence to the level of a science and make it an integral part of the theory of intelligent systems, we must apply a mixture of analytical and empirical methods in designing, using, and analyzing artifacts. From this point of view, each AI program should be regarded as an experiment: it poses a question to nature, and the program's output is nature's answer. Nature's response to the design and programming principles underlying the artifact shapes our understanding of formalism, of regularities, and of the very essence of thinking.

Unlike many traditional sciences that study human cognition, the developers of intelligent computer systems can inspect the internal mechanisms of their "subjects." They can halt the execution of a program, examine its internal state, and modify its structure at will. As Newell and Simon noted, the design of computers and computer programs predetermines their potential behavior, their openness to thorough study, and their accessibility to understanding. The power of computers as tools for studying intelligence derives from this duality. Properly programmed computers can achieve a high degree of complexity in both semantics and behavior. Such systems can naturally be described in something like physiological terms, and their internal states can be investigated in ways that are rarely available to scientists studying intelligent life forms.

Fortunately for work in AI, and for the development of a theory of intelligent systems, modern physiological methods, especially those of neurophysiology, have shed light on many aspects of human thinking. For example, we now know that the function of human intelligence is neither monolithic nor homogeneous: it is modular and distributed. The advantages of this organization appear in the operation of the sense organs; the retina of the eye, for instance, filters and preprocesses visual information. Likewise, learning is not a single, homogeneous faculty; rather, it is a function of many different systems, each adapted to specific purposes. Magnetic resonance imaging, positron emission tomography, and other methods of imaging the nervous system provide a vivid and accurate picture of the internal structure of natural intelligent systems.

To raise AI to the level of a science, it is necessary to address important philosophical problems, especially those of epistemology, the question of how an intelligent system "knows" its world. These problems range from the question of what the subject matter of artificial intelligence is to deeper ones, such as the validity and applicability of the physical symbol system hypothesis. From these follow questions about what a "symbol" in a symbol system is, and how symbols might relate to the nodes of a connectionist model. There is the question of the role of rationalism, expressed in the form of the inductive bias built into most learning programs, and of how to reconcile it with the lack of imposed structure characteristic of unsupervised learning, reinforcement learning, and evolutionary approaches. Finally, one must consider the role of embodiment, situated agents, and the social premises of problem solving. To conclude the discussion of philosophical questions, an epistemology in the spirit of constructivism will be proposed, one that fits naturally with the approach taken here of treating AI both as a science and as an empirical problem.

So, in this final chapter we return to the questions posed in Chapter 1: What is intelligence? Can it be formalized? How can we build a system exhibiting this property? How do artificial and human intelligence fit into the wider context of a theory of intelligent systems? Section 16.1 presents a revised definition of artificial intelligence. It indicates that although work in AI is grounded in Newell and Simon's physical symbol system hypothesis, the field's set of tools and methods has since expanded significantly and now covers a much wider range of issues. Alternative approaches to intelligence are analyzed, both as tools for designing intelligent devices and as components of a theory of intelligent systems. Section 16.2 turns the reader's attention to the use of modern cognitive psychology, neural network computing, and epistemology for a better understanding of the field of artificial intelligence.

Finally, Section 16.3 discusses the challenges facing both AI practitioners and theorists. Although traditional approaches to AI are often blamed for rationalist reductionism, the new interdisciplinary methods often suffer from the same shortcomings. For example, developers of genetic algorithms and researchers of artificial life define intelligence in Darwinian terms: "Intelligent is that which survives." Knowledge in the complex world of embodied agents often comes down to "knowing how" rather than "knowing what." But scientists demand explanations; they are not satisfied with the mere "success" or "survival" of their models. In this final chapter we discuss the future of AI, formulating the questions vital to creating a computational theory of intelligence, and conclude that empirical methodology is an important, if not the best, tool for studying the nature of intelligence.

16.1. Artificial Intelligence: Revised Definition

16.1.1. Intelligence and the physical symbol system hypothesis

Based on the material of the previous 15 chapters, we can formulate a revised definition of artificial intelligence.

AI is the discipline that studies the regularities underlying intelligent behavior by building and investigating artifacts that embody those regularities.

According to this definition, artificial intelligence is less a theory of the laws underlying intelligence and more an empirical methodology for creating and investigating all manner of models on which such a theory may rest. This follows from the scientific method of designing and conducting experiments with the aim of improving the current model and setting up further experiments. However, this definition, like the field of AI itself, challenges centuries of philosophical obscurantism regarding the nature of mind. It gives people who crave understanding (perhaps the defining characteristic of our species) an alternative to religion, superstition, Cartesian dualism, the empty theories of the New Age, or the search for mind in some undiscovered corner of quantum mechanics [Penrose, 1989]. If the science of artificial intelligence has contributed anything to human knowledge, it is confirmation of the following: mind is not a mystical ether pervading humans and angels, but rather the manifestation of principles and laws that can be comprehended and applied in the design of intelligent machines. Note that our revised definition does not define intelligence itself; rather, it defines the role of artificial intelligence in studying the nature and phenomenon of rationality.

Historically, the dominant approach to artificial intelligence has involved building formal models and corresponding search-based reasoning mechanisms. The guiding principle of early AI methodology was the physical symbol system hypothesis, first formulated by Newell and Simon [Newell and Simon, 1976]. The hypothesis reads as follows.

A physical system exhibits general intelligent behavior if and only if it is a physical symbol system.

Sufficiency means that intelligence can be achieved by any appropriately organized physical symbol system.

Necessity means that any agent exhibiting intelligence in the generally accepted sense must be a physical symbol system. The necessity condition requires that any intelligent agent, be it a human, an alien, or a computer, achieve intelligent behavior through the physical implementation of operations on symbol structures.

General intelligent action means actions of the kind characteristic of human behavior: a physically limited system behaving appropriately to its own ends and adaptively to the demands of its environment.

Newell and Simon marshaled arguments in favor of both the necessary and the sufficient conditions [Newell and Simon, 1976]; [Newell, 1981]; [Simon, 1981]. In subsequent years, experts in AI and cognitive science explored the territory outlined by this hypothesis.

The physical symbol system hypothesis led to the three most important principles of the methodology: the use of symbols and symbol systems as a medium for describing the world; the development of search mechanisms, especially heuristic ones, for exploring the space of potential inferences such systems allow; and the abstraction of cognitive architecture, that is, the assumption that a properly constructed symbol system can exhibit general intelligence regardless of its means of implementation. From this point of view, AI becomes an empirical and constructive discipline that studies intelligence by building operational models of it.

Linguistic signs, called symbols, are used to denote or refer to things other than themselves. Like verbal signs in natural language, symbols stand for, or refer to, specific things in an intelligent agent's world. For these object-reference links, a particular variant of semantics can be proposed (see Section 2.3).

From the symbol system point of view, the use of symbols in AI goes far beyond such semantics. Symbols here represent all forms of knowledge, experience, concepts, and causality. All such constructive work rests on the fact that symbols, together with their semantics, can be used to build formal systems. These define a representation language. The ability to formalize symbolic models is fundamental to modeling intelligence as an executable computer program. This book has studied several representations in detail, including predicate calculus, semantic networks, scripts, conceptual graphs, frames, and objects.

The mathematics of formal systems allows us to discuss such properties as consistency, completeness, and complexity, as well as the organization of knowledge.

The evolution of representational formalisms has made it possible to establish richer semantic relations. For example, inheritance systems constitute a semantic theory of taxonomic knowledge and its role in intelligence. By formally defining class inheritance, such languages simplify the construction of intelligent programs and provide conveniently testable models for organizing the possible categories of intelligence.
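As a concrete illustration of this idea, the following Python fragment (a minimal sketch, not an example from the source; all names are hypothetical) encodes a tiny taxonomy in which properties attached to a class are inherited along "isa" links unless a subclass overrides them:

```python
# A toy semantic network: properties attach to classes and are inherited
# along "isa" links unless a more specific node overrides them, as in the
# taxonomic representation languages described above.

class SemanticNet:
    def __init__(self):
        self.isa = {}      # node -> parent node (None for roots)
        self.props = {}    # node -> {property: value}

    def add(self, node, parent=None, **props):
        self.isa[node] = parent
        self.props[node] = props

    def lookup(self, node, prop):
        # Walk up the isa chain until the property is found.
        while node is not None:
            if prop in self.props.get(node, {}):
                return self.props[node][prop]
            node = self.isa.get(node)
        return None

net = SemanticNet()
net.add("bird", flies=True, covering="feathers")
net.add("penguin", parent="bird", flies=False)  # exception overrides default
net.add("tweety", parent="bird")

print(net.lookup("tweety", "flies"))    # True, inherited from "bird"
print(net.lookup("penguin", "flies"))   # False, local override
```

The taxonomic default ("birds fly") and its exception ("penguins do not") are exactly the kind of formally defined, testable semantic relations the paragraph above describes.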

Representation schemes and their use in formal reasoning are closely tied to the concept of search. Search is the sequential examination of the nodes of an a priori, semantically described representation network, looking for a solution to the problem, a decomposition into subproblems, symmetries of the problem, and the like (depending on the aspect under consideration).

Representation and search are related because committing a problem to a particular representation determines the a priori search space. Indeed, many problems become considerably harder, if not impossible, to solve if the representation language is chosen badly. The discussion of inductive bias later in this chapter illustrates this point.

An expressive and frequently cited example of the connection between search and representation, and of the difficulty of choosing a good representation, is the problem of placing dominoes on a truncated chessboard. Suppose we have a chessboard and a set of dominoes, each covering exactly two squares of the board. Suppose also that the board is missing some squares; in Fig. 16.1 the upper left and lower right corners have been removed.

Fig. 16.1. A truncated chessboard with two squares covered by a domino

The task is to determine whether the dominoes can be placed on the board so that every square is covered and every domino covers exactly two squares. One could try to solve the problem by enumerating all possible placements. This is a typical example of a search-based solution, and the natural consequence of representing the board as a simple matrix, ignoring such seemingly insignificant features as the color of the squares. The complexity of such a search is enormous, and heuristics would be needed for an efficient solution. For example, one could prune partial solutions that leave single squares isolated. One could also start by solving the problem for smaller boards, such as 2x2 and 3x3, and try to extend the solution to the 8x8 case.

A more sophisticated representation yields an elegant solution. Note that every domino must cover exactly one white square and one black square. There are 32 black squares on the truncated board but only 30 white ones, so the required placement is impossible. Thus a serious question arises for systems based on symbolic reasoning: do we have representations that allow knowledge to be manipulated with this degree of flexibility and creativity? How can a given representation change its structure as new information about the problem domain appears?
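The contrast between the two representations is easy to make concrete. The sketch below (illustrative only, not from the source) re-describes the board by square color, reducing the tiling question to a parity count; which color ends up with 30 squares and which with 32 depends only on the labeling convention:

```python
# Count the two colors on the truncated board. With the convention that
# (row + col) even is "black", removing two opposite corners removes two
# squares of the same color, leaving unequal counts.

def color_counts(removed):
    black = white = 0
    for r in range(8):
        for c in range(8):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                black += 1
            else:
                white += 1
    return black, white

black, white = color_counts({(0, 0), (7, 7)})  # opposite corners removed
print(black, white)                  # 30 32 under this labeling
# Every domino covers one black and one white square, so a perfect
# tiling requires equal counts.
print("tiling possible?", black == white)      # False
```

The brute-force placement search that the matrix representation suggests would examine an astronomical number of configurations; the color-based representation answers the question with a single subtraction.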

Heuristics are the third important component of symbolic AI, after representation and search. A heuristic is a mechanism for organizing search among the alternatives offered by a particular representation. Heuristics are designed to overcome the complexity of exhaustive search, which is an insurmountable barrier to useful solutions for many classes of interesting problems. In computing, as in human society, intelligence requires an informed decision about "what to do next." Throughout the history of AI, heuristics have taken many forms.

Early problem-solving techniques such as hill climbing in the checkers program (see Chapter 4) or means-ends analysis in the General Problem Solver (see Chapter 12) came to AI from other disciplines, such as operations research, and gradually matured into general-purpose methods for solving AI problems. Search properties, including admissibility, monotonicity, and informedness, are important results of this early work. Such techniques are often called weak methods. Weak methods were developed as universal search strategies intended to apply across whole classes of problem domains [Newell and Simon, 1972], [Ernst and Newell, 1969]. These methods and their properties are discussed in Chapters 3, 4, 5, and 12; a minimal sketch of one of them follows.
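The following Python fragment (a toy illustration under assumed names, not code from the source) shows what makes hill climbing a weak method: the strategy itself is domain-independent, and all domain knowledge enters through the caller-supplied successor and scoring functions.

```python
# A generic hill climber: greedily move to the best-scoring neighbor and
# stop at a local maximum. The search knows nothing about the domain.

def hill_climb(state, successors, score, max_steps=1000):
    for _ in range(max_steps):
        neighbors = successors(state)
        if not neighbors:
            break
        best = max(neighbors, key=score)
        if score(best) <= score(state):
            break              # local maximum: no neighbor improves
        state = best
    return state

# Toy domain: maximize f(x) = -(x - 3)^2 over the integers.
result = hill_climb(
    0,
    successors=lambda x: [x - 1, x + 1],
    score=lambda x: -(x - 3) ** 2,
)
print(result)   # 3
```

The same dozen lines would drive a checkers evaluator or a planning heuristic; that generality, and the corresponding weakness on any particular domain, is exactly the trade-off the text describes.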

Chapters 6, 7, and 8 present strong methods for solving AI problems: expert systems based on production rules, model-based reasoning, case-based reasoning, and symbol-based learning. Unlike weak methods, strong methods focus on information specific to each problem domain, be it internal medicine or integral calculus. Strong methods underlie expert systems and other knowledge-intensive approaches to problem solving. They place special importance on the amount of data needed to solve a problem, on learning and the acquisition of knowledge, on syntactic representation, on the management of uncertainty, and on questions of knowledge quality.

Why are there still no truly intelligent symbol systems?

The characterization of intelligence as a physical symbol system has drawn many criticisms. Most of them are easily rebutted by considering the issues of semantic meaning and grounding for symbols and symbol systems. The question of "meaning" also, of course, bears on the idea of intelligence as search through pre-interpreted symbol structures. The concept of meaning in traditional AI is very weakly developed. Nevertheless, the temptation to move toward a more "mathematized" semantics, such as possible worlds theory, seems misguided: that approach is rooted in the rationalist project of replacing the flexible, evolving intelligence of an embodied agent with a world of clear, well-defined ideas.

Grounding meaning is a problem that has always confounded both adherents and critics of artificial intelligence, as well as cognitive scientists. The grounding problem asks: how can symbols mean anything? In [Searle, 1980], the author examines this through the example of the so-called "Chinese Room." Searle places himself in a room intended for translating Chinese sentences into English. He receives a set of Chinese symbols, looks the hieroglyphs up in a large catalog, and passes the corresponding set of English symbols to the output. Searle declares that although he knows no Chinese, his "system" can be viewed as a machine for translating Chinese into English.

But there is a problem. Any specialist in machine translation or natural language understanding (see Chapter 13) can attest that a "translator" blindly mapping one set of symbols onto another produces very poor results. Moreover, the ability of the current generation of intelligent systems to interpret sets of symbols "meaningfully" is very limited. This problem of impoverished grounding semantics extends to computationally implemented sensory modalities as well, whether visual, kinesthetic, or verbal.

Regarding natural language understanding, Lakoff and Johnson [Lakoff and Johnson, 1999] argue that the ability to create, use, exchange, and interpret meaningful symbols is a consequence of the human being's integration into a changing social environment. From this integration have arisen the human capacities for survival, evolution, and procreation; it makes possible reasoning by analogy, humor, music, and art. Modern tools and methods of artificial intelligence are indeed very far from being able to encode and use systems with equivalent meaning.

A direct consequence of impoverished semantics is that the search methodology in traditional AI considers only previously interpreted states and their contexts: the creator of the AI program associates semantic meaning with the symbols used. Consequently, intelligent systems, including systems for learning and natural language understanding, can construct only whatever computable function that interpretation allows. Thus most AI systems are very limited in their ability to form new semantic associations as they explore the world around them [Lewis and Luger, 2000].

Because of these constrained possibilities for semantic modeling, the most significant successes have come from applications in which one can abstract away from overly broad context while still describing the essential components of problem solving with pre-interpreted symbol systems. Most of these applications have been mentioned in this book. But even such systems do not support multiple interpretations, and they are limited in their ability to recover from failure.

Throughout the short history of artificial intelligence, various versions of the physical symbol system hypothesis have been studied, and alternatives to this approach have been developed. As the final chapters of this book show, symbol systems and search are not the only possible means of implementing an intelligent system. Computational models based on the workings of the organic brain, and on the processes of biological evolution, provide an alternative basis for understanding intelligence in terms of scientifically knowable and empirically reproducible processes. The rest of this section discusses these approaches.

16.1.2. Connectionist, or neural network, computing systems

A significant alternative to the physical symbol system hypothesis is research on neural networks and other biologically inspired models of computation. Neural networks, for example, are physically realizable computational models of cognition that are not based on pre-interpreted symbols precisely describing a problem domain. Because knowledge in a neural network is distributed across its entire structure, it is often difficult or impossible to relate specific concepts to individual nodes or weights; in fact, any part of the network may participate in representing different concepts. Consequently, neural networks offer a strong counterexample, at least to the necessity condition of the physical symbol system hypothesis.

Neural networks and genetic algorithms have shifted the focus of AI research from the problems of symbolic representation and strategies of formal reasoning to the problems of learning and adaptation. Neural networks, like humans and animals, can adapt to the world: their structure is shaped not only by design but also by training. Intelligence based on a neural network does not require that the world be translated into a symbolic model. Rather, the network is formed through its interactions with the world, which are reflected in an implicit trace of experience. This approach has contributed much to our understanding of intelligence. It has given a plausible account of the mechanisms underlying the physical embodiment of thought processes; a more viable model of learning and development; a demonstration that complex systems responsive to real phenomena can arise from simple local adaptation; and a powerful research tool for cognitive neuroscience.

It is precisely because of their versatility that neural networks help answer many questions lying beyond the impressive capabilities of symbolic AI. An important class of such questions concerns perception. Nature is not so generous as to present our perceptual processes as a set of exact formulas of predicate calculus. Neural networks offer a model for extracting "meaningful" patterns from the chaos of sensory stimuli.

Because their representations are distributed, neural networks are often more robust than comparable symbol systems. An appropriately trained neural network can classify novel inputs effectively, exhibiting human-like perception based not on strict logic but on "similarity." Likewise, the loss of a few neurons does not seriously affect the performance of a large network. This is a consequence of the redundancy usually inherent in network models.

Probably the most attractive property of connectionist networks is their ability to learn. Rather than building a detailed symbolic model of the world, neural networks, thanks to the flexibility of their structure, adapt on the basis of experience: they do not so much construct a model as let themselves be shaped by the world. Learning is one of the central aspects of intelligence, and it is from the problem of learning that the hardest questions about neural network computation arise.
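The point is visible even in the simplest trainable network. The sketch below (a minimal illustration, not from the source) trains a single perceptron on the logical AND function; the "model" of the domain is never written down symbolically but accumulates in the weights through repeated, local, error-driven corrections:

```python
# A perceptron learning AND: weights are adjusted only when the unit's
# output disagrees with the target, a purely local adaptation rule.

import random

def train_perceptron(data, epochs=50, lr=0.1):
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # adjust only on mistakes
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

AND is linearly separable, so this procedure is guaranteed to converge; the hard questions discussed next begin where single-layer separability ends.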

Why have we still not built a brain?

Recent research in cognitive neuroscience [Squire and Kosslyn, 1998], [Gazzaniga, 2000] offers a new perspective on the cognitive architecture of the human brain. In this section we briefly review some of the findings in this area, drawing parallels with artificial intelligence. We consider these issues at three levels: first, the individual neuron; second, neural architecture; and third, the cognitive representation, or encoding, problem.

At the level of the individual neuron, Shepard [Shephard, 1998] and Carlson [Carlson, 1994] identify many distinct neural architectures built from cells, each performing a specialized function and playing a role in a larger system. They distinguish receptor cells, akin to skin cells, which transmit input information to other clusters of cells; interneurons, whose main task is to pass information within cell clusters; and motor neurons, which form the system's output.

Neural activity is electrical in nature. The state of excitation or rest is determined by the flow of ions into and out of the neuron. A typical neuron has a resting potential of approximately -70 mV. When the cell fires, the axon terminal releases chemical substances called neurotransmitters, which interact with the postsynaptic membrane, typically binding to specific receptors and thereby stimulating further ion flows. When the ion flows shift the potential past a critical level of about -50 mV, an action potential is produced, an all-or-none trigger mechanism that determines whether the cell fires. Neurons can thus be said to communicate by exchanging sequences of binary codes.

There are two types of postsynaptic change triggered by the arrival of an action potential: inhibitory, observed mainly in interneuron structures, and excitatory. These positive and negative potentials are generated continuously at the synapses of the dendritic tree. When the net effect of all these events shifts the membrane potential of a neuron from about -70 mV past the threshold of about -50 mV, ion flows are initiated anew down that cell's axon.
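The threshold dynamic described above can be caricatured in a few lines of Python. This is a toy leaky integrate-and-fire model using only the figures given in the text (resting potential near -70 mV, threshold near -50 mV); the leak constant and input values are invented for illustration, and the model is not biophysically faithful:

```python
# Toy leaky integrate-and-fire neuron: summed synaptic drive pushes the
# membrane potential up from rest; crossing the threshold produces a
# spike, after which the potential resets.

REST, THRESHOLD, LEAK = -70.0, -50.0, 0.1   # mV, mV, leak rate (assumed)

def simulate(inputs):
    """inputs: per-step net synaptic drive in mV (excitatory minus inhibitory)."""
    v = REST
    spikes = []
    for t, drive in enumerate(inputs):
        v += drive - LEAK * (v - REST)   # integrate input, leak toward rest
        if v >= THRESHOLD:
            spikes.append(t)             # threshold crossed: action potential
            v = REST                     # reset after the spike
    return spikes

# A steady 3 mV excitatory drive pushes the membrane past threshold.
print(simulate([3.0] * 25))   # spike times, roughly every 11 steps
```

The all-or-none character of the action potential appears here as the discrete spike event: below threshold, graded potentials accumulate; above it, the cell fires and resets.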

At the level of neural architecture, the cerebral cortex (the thin layer covering the hemispheres of the brain) contains roughly 10^10 neurons. Most of the cortex is folded, which increases its surface area. From a computational point of view, one must consider not only the number of synapses but also the input and output fan of each neuron; Shepard [Shephard, 1998] roughly estimates both parameters at 10^5.

Beyond the differences between the cells and architectures of neural and computer systems lies the deeper problem of cognitive representation. We know, for example, essentially nothing about how even the simplest memories are encoded in the cortex, or how a person recognizes faces, or how recognizing a face comes to be linked with feelings of joy or sadness. We know a great deal about the physical and chemical aspects of the brain, but comparatively little about how the nervous system encodes information and uses those patterns in the appropriate context.

One of the hardest questions facing researchers of both neural and computational systems concerns the role of innate knowledge in learning. Can effective learning proceed from a blank slate, on the basis of experience alone, or must some inductive bias be present? Experience in building learning programs suggests the need for some initial knowledge, usually expressed as an inductive bias. It has turned out that the ability of a neural network to form meaningful generalizations from a training set depends on the number of neurons, the network topology, and the specifics of the learning algorithm. Together these factors constitute an inductive bias that plays no less important a role than it does in any symbolic representation. For example, there is mounting evidence that children inherit a set of "hard-wired" cognitive biases that make learning possible in domains such as language and the intuitive understanding of the laws of nature. The representation of innate biases in neural networks is today an area of active research [Elman et al., 1996].

The question of innate biases recedes into the background when we turn to more complex learning problems. Suppose we wish to build a computational model of scientific discovery and model Copernicus' shift from a geocentric to a heliocentric view of the universe. Doing so requires representing both the Copernican and Ptolemaic theories in a computer program. Although these views could be represented as activation patterns of a neural network, such networks would tell us nothing about them as theories. We prefer explanations such as: "Copernicus was puzzled by the complexity of the Ptolemaic system and preferred the simpler model in which the planets revolve around the sun." Explanations of this kind require symbolic expression. Evidently, neural networks must be able to support symbolic reasoning; after all, the human brain is a neural network, yet it handles symbols well. Nevertheless, symbolic reasoning in neural networks remains an important open problem.

Another problem is the role of development in learning. Children cannot learn arbitrary material from the data available to them; their ability to learn in particular domains appears at well-defined developmental stages [Karmiloff-Smith, 1992]. A curious question arises: is this fact purely a consequence of human biology, or does it reflect some fundamentally necessary constraint on an intelligence's ability to grasp the regularities of its world? Could similar developmental stages serve as a mechanism for decomposing the learning problem into simpler subproblems? Might a system of artificially imposed constraints on the development of an artificial network be a necessary basis for learning in a complex world?

Applying neural networks to practical problems raises further issues for researchers. The very properties that make neural networks attractive (adaptability, robustness to missing or ambiguous data) also create obstacles to their practical use. Since networks are trained rather than programmed, their behavior is harder to predict, and there are few general principles for designing a network that will behave as required in a given domain. Moreover, it is difficult to explain why a neural network reached a particular conclusion; such explanations usually take the form of statistical arguments. All of these problems are subjects of current research.

One may ask whether connectionist networks and symbolic AI are really so different as models of intelligence. They have much in common: both reduce the process of "thinking" to computational operations with fundamental formal limits, such as the Church-Turing hypothesis described in Chapter 2 [Luger, 1994]. Both offer models of mind applicable to practical problems. And both reject the philosophy of dualism, locating the origins of mind in the structure and function of physical devices.

We believe that a reconciliation of these two very different approaches is inevitable. It would be an exceptional scientific contribution to construct a theory of how symbolic representations are transformed into network configurations and how they, in turn, influence the network's further adaptation. Such a theory would aid many endeavors, for example, integrating neural network perception with knowledge-based reasoning in a single intelligent system. In the meantime, adherents of both views have plenty of work to do, and we see no reason they cannot coexist. For those troubled by the apparent inconsistency of two models of intelligence, physics offers an analogy: light is sometimes best treated as a wave and sometimes as a stream of particles.

16.1.3. Agents, Intelligence and Evolution

Agent computation and modular theories of knowledge pose interesting questions for AI researchers. One well-known school of cognitive science holds that the mind is composed of collections of specialized functional elements [Minsky, 1995], [Fodor, 1983]. Each module is specialized and draws on a wide range of innate structure and function, from "hard-wired" problem solving to inductive bias, reflecting the variety of problems that modules, as practical agents, must deal with. This makes sense: how could a single neural network be trained to handle perception, motor control, memory, and logical reasoning all at once? Modular theories of intelligence provide a framework for answering this question, as well as a direction for research into such issues as the nature of inductive bias within individual modules and the mechanisms of modular interaction.

Genetic and evolutionary models of computation provide exciting new approaches to understanding both human and artificial intelligence. By demonstrating that globally rational behavior can arise from the combined work of large numbers of limited, independent agents, genetic and evolutionary theories of computation address the problem of representing a complex mind as the product of interactions among relatively simple structures.

An example is given in [Holland, 1995]: the mechanisms that keep a large city such as New York supplied with bread demonstrate how intelligence can emerge in an agent-based system. It is hardly possible to write a centralized planner that would supply New Yorkers with their accustomed variety of bread; indeed, the communist world's failed experiment with centralized management plainly showed the limitations of the approach. Yet despite the practical difficulty of writing such a centralized planning algorithm, the city's bakers, merchants, and suppliers of raw materials, only loosely coordinated with one another, cope with the problem quite well. Evolutionary agent systems have no central plan. No single baker has more than very limited knowledge of the city's bread needs; each simply tries to optimize his own business opportunities. The solution of the global problem emerges from the collective activity of these independent local agents.

By demonstrating how purposeful, fluctuation-resistant, nearly optimal behavior can arise from the interactions of individual local agents, these models provide another answer to the old philosophical question of the origin of mind. The central lesson of evolutionary approaches to intelligence is that integral intelligence can and does arise from the interactions of many simple, separate, local, embodied agent intelligences.

The second major feature of evolutionary models is their reliance on the Darwinian principle of natural selection as the basic mechanism shaping the behavior of individual agents. Returning to the bakers example, it is hard to claim that each individual baker behaves in a "globally optimal" way. The source of such optimality is not a centralized design but a simple fact: bakers who serve their customers' needs poorly go bankrupt. Through relentless, repeated selective pressure, individual bakers arrive at behaviors that ensure both their own survival and social utility.
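The selection dynamic in the bakers example can be caricatured in a few lines. In the sketch below (all numbers and the market model are invented for illustration), each baker's entire strategy is a loaf price, demand falls with price, the least profitable half go bankrupt each generation, and the survivors are copied with small mutations; no agent plans globally, yet prices converge toward the profit-maximizing value:

```python
# Toy selection among "bakers": profit(p) = (10 - p) * (p - 2) peaks at
# p = 6. Bankruptcy of the worst half plus mutated copies of the best
# half drives the population toward that optimum without a planner.

import random

def profit(price):
    return max(0.0, 10.0 - price) * (price - 2.0)

def evolve(n_bakers=20, generations=40):
    prices = [random.uniform(2.0, 10.0) for _ in range(n_bakers)]
    for _ in range(generations):
        survivors = sorted(prices, key=profit, reverse=True)[:n_bakers // 2]
        offspring = [p + random.gauss(0, 0.3) for p in survivors]
        prices = survivors + offspring
    return sum(prices) / len(prices)

print(round(evolve(), 2))   # typically close to 6.0
```

The "globally optimal" price is never represented anywhere in the system; it is an emergent consequence of local self-interest and selective pressure, which is exactly the point of the example.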

The combination of a distributed, agent-based architecture with the adaptive pressure of natural selection is a powerful model of the evolution and operation of mind. Evolutionary psychologists [Cosmides and Tooby, 1992, 1994], [Barkow et al., 1992] have developed a model of how natural selection formed the innate structure and inductive biases of the human brain. The basis of evolutionary psychology is a view of the mind as a modular system of interacting specialized agents. In evolutionary psychology the mind is often compared to a Swiss Army knife, a set of specialized tools designed for different problems, and there is growing evidence that human intelligence is indeed highly modular. [Fodor, 1983] offers philosophical arguments for the modular structure of mind, and Minsky [Minsky, 1985] has explored various applications of modularity theories in artificial intelligence. Such an architecture is important to theories of the evolution of intelligence: it is difficult to imagine how evolution could have formed, all at once, a complete system of such complexity as the human brain. It is plausible, on the other hand, that evolution working over millions of years could successively form separate, specialized cognitive skills. It could then work on combinations of modules, shaping their mechanisms of interaction, information exchange, and cooperation, eventually enabling the solution of ever more complex problems of knowledge [Mithen, 1996].

Theories of neural selection [Edelman, 1992] show how these same processes may account for the adaptation of an individual nervous system. Models of neural Darwinism describe the adaptation of neural systems in Darwinian terms: under selection, some circuits in the brain are strengthened and others weakened in response to the influences of the environment. Unlike symbolic learning methods, which try to extract information from training data and use it to build a model of the world, theories of neural selection consider the effect of selective pressures on populations of neurons and their interactions. Edelman [Edelman, 1992, p. 81] states:

In considering brain science as a science of recognition, I am implying that recognition is not an instructive process. No direct information transfer occurs, just as none occurs in evolutionary or immune processes. Rather, recognition is selective.

Agent technologies also provide models of social interaction. Using agent-based approaches, economists have built informative (if not fully predictive) models of economic markets. Agent technologies exert a growing influence on the design of distributed computing systems, the development of Internet search tools, and the construction of cooperative work environments.

Finally, agent-based models have had considerable influence on theories of consciousness. For example, Daniel Dennett [Dennett, 1991] grounds his account of the function and structure of consciousness in an agent architecture of mind. He begins by noting that the question of where consciousness is located in the brain or mind is ill-posed. Instead, his multiple drafts theory of consciousness treats consciousness as arising from the interactions of agents in a distributed architecture of intelligence. In the course of perception, motor control, problem solving, learning, and other mental activity, coalitions of interacting agents form. These coalitions are highly dynamic, changing with the needs of different situations. Consciousness, for Dennett, serves as the binding mechanism of these coalitions, supporting the interaction of agents, and a society of agents becomes the basis of cognitive processing.

Limitations of agent intelligence

The development of the evolutionary approach has raised new questions. For example, the entire evolutionary path to higher-level cognitive abilities such as language has yet to be understood. Like paleontologists' reconstructions of the evolution of species, tracing the development of these high-level capabilities demands a great deal of painstaking work: one must enumerate all the agents underlying the architecture of mind and trace their evolution over time.

An important issue for agent theories is explaining the interactions between modules. Although the "Swiss Army knife" model of mind is useful as an intuition, the modules that make up intelligence are not nearly as independent as the blades of a penknife. Mind exhibits broad, highly fluid interactions between cognitive domains: we can talk about things we see, revealing interaction between visual and linguistic modules; we can construct buildings that serve specific social purposes, revealing interaction between the technical and social sides of intelligence; poets can build metaphors describing visual scenes, demonstrating flexible interaction between visual and tactile modules. Characterizing the representations and processes that make such cross-modular interactions possible is an area of ongoing research [Karmiloff-Smith, 1992], [Mithen, 1996], [Lakoff and Johnson, 2000].

Practical applications of agent technologies are becoming ever more important. Agent-based computer simulation makes it possible to model complex systems that have no analytical description and were previously inaccessible to study. Simulation-based techniques have been used to describe a wide range of phenomena, such as the adaptation of the human immune system, the control of complex processes including particle accelerators, the behavior of world currency markets, and weather systems. The representational and computational questions that must be solved to build such models shape research in knowledge representation and algorithms, and even the development of computer hardware.

Other practical issues that agent architectures must address include protocols for inter-agent communication, especially when local agents have limited knowledge of the problem as a whole or of what other agents may already know. Furthermore, few algorithms exist for decomposing large problems into agent-oriented subproblems or for distributing limited resources among agents. These and other questions of agent representation were discussed in Subsection 6.4.2.

Perhaps the most fascinating aspect of evolutionary theories of intelligence is their ability to describe many kinds of mental activity with a single model of the emergence of order from chaos. The brief survey in this section has highlighted work that uses evolutionary theory to model a range of processes, from the evolution of intelligence to the construction of economic and social models of behavior. There is something deeply attractive in the idea that the evolutionary processes described by Darwin's theory can explain intelligent behavior at many scales: from the interactions of individual neurons, to the formation of the brain's modular structure, to the functioning of economic markets and social systems. Intelligence, it turns out, resembles a fractal, where the same processes operate at all scales and throughout the system as a whole.

The next sections consider the psychological and philosophical aspects of human thinking that have influenced the creation, development, and application of the theory of artificial intelligence.

16.2. Theory of Intelligent Systems

It is no accident that the main body of AI researchers has focused on understanding the human mind. People provide the prototypes and examples of intelligent action; therefore AI developers rarely ignore the "human" approach, even though they do not usually aim to make programs imitate human problem solvers. Thus, in computer diagnosis applications, programs often model the reasoning of human experts working in the relevant field. Most importantly, understanding human intelligence remains an exciting and still open scientific problem.

Modern cognitive science, or the science of intelligent systems [Luger, 1994], originated with the invention of the digital computer, although, as noted in Chapter 1, the discipline had earlier forerunners. Their list begins with Aristotle, Descartes, and Boole, and includes such modern theorists as Turing; McCulloch and Pitts, the founders of neural network models; and John von Neumann, one of the first advocates of the concept of artificial life. These studies became a science only when it became possible to test theoretical conjectures by experiment, that is, with the advent of computers. Finally, we can ask: "Is there a comprehensive theory of intelligence?" And further: "Can the theory of intelligent systems help in creating artificial intelligence?"

The following sections briefly describe how psychology, epistemology, and sociology have influenced research in AI.

16.2.1. Limitations of psychology

Early cognitive research dealt with human solving of logical problems. Such systems learned to play simple games, to plan, and to carry out concept learning [Feigenbaum and Feldman, 1963], [Newell and Simon, 1972], [Simon, 1981]. While working on the Logic Theorist system (see Section 12.1), Newell and Simon began to compare their computational approaches with the search strategies used by humans. Their raw data were "thinking-aloud" protocols, descriptions by people of their own thoughts while working on a problem, such as the proof of a theorem. Newell and Simon compared these protocols with the behavior of a computer program working on the same problem. The researchers found striking similarities and interesting differences across both tasks and subjects.

These early projects established the methodology that cognitive science would employ over the following decades:

1. On the basis of data from people solving a given class of problems, a representation scheme and a corresponding search strategy for solving the problem are designed.

2. The behavior of the computer model solving the problem is observed.

3. Human subjects working on the same problem are observed, and measurable parameters of their solution process are recorded: "thinking-aloud" protocols, eye movements, and intermediate results.

4. The human and computer solutions are compared and analyzed.

5. The computer model is revised for the next cycle of experiments.

This empirical methodology is described in the Turing Award lecture by Newell and Simon, a fragment of which serves as the epigraph to this chapter. An important part of cognitive science is the use of experiments to validate the architecture of a problem solver, be it a production system, a connectionist network, or an architecture based on the interaction of distributed agents.

In recent years a fundamentally new dimension has been added to this methodology. Now we can "take apart" and study not only programs but also humans and other life forms in the process of problem solving. A variety of new imaging technologies has expanded the toolset for studying activity in the cerebral cortex. These include magnetoencephalography (MEG), which records the magnetic fields produced by groups of neurons. Unlike electric fields, magnetic fields are not distorted by the bone and skin of the skull, so sharper images can be obtained.

A second technology is positron emission tomography (PET). A radioactive substance, typically oxygen-15, is injected into a blood vessel. When a particular region of the brain is active, sensitive detectors register more of the radioactive agent there than when the region is at rest. Comparing images of the active and resting states helps detect functional localization at a resolution of about 1 cm [Stytz and Frieder, 1990].

Another method of analyzing the nervous system is functional magnetic resonance imaging (fMRI), an approach that grew out of the more standard technique of nuclear magnetic resonance (NMR). Like PET, it compares images of active and resting brain regions to identify functional localization.
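The subtraction idea shared by PET and fMRI can be illustrated in miniature. The sketch below (synthetic data and invented numbers, not a real neuroimaging pipeline) subtracts a resting image from an active image of the same region and keeps the voxels whose difference exceeds a noise threshold:

```python
# Subtraction imaging in caricature: voxels whose active-minus-resting
# signal exceeds a threshold are flagged as functionally localized.

import random

def localize(active, resting, threshold):
    hits = []
    for r, (row_a, row_r) in enumerate(zip(active, resting)):
        for c, (a, b) in enumerate(zip(row_a, row_r)):
            if a - b > threshold:
                hits.append((r, c))
    return hits

size = 8
resting = [[random.gauss(100, 2) for _ in range(size)] for _ in range(size)]
active = [row[:] for row in resting]
for r, c in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    active[r][c] += 15          # a small "functionally active" region

print(localize(active, resting, threshold=10))
# [(2, 2), (2, 3), (3, 2), (3, 3)]
```

Real analyses must of course handle registration, physiological noise, and statistical significance, which is precisely where the noise-separation algorithms discussed next come in.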

Software algorithms developed by Barak Pearlmutter and his colleagues [Pearlmutter and Parra, 1997], [Tang et al., 1999, 2000a, 2000b] have contributed further to the localization of brain function and are closely related to the methods listed above. These techniques make it possible to extract the complex noise patterns that often appear in the output of neuroimaging devices. This is an important step, since signals such as eye movements, breathing, and heartbeat are mixed with the neuron firing patterns under study.

The results of recent research in cognitive neuroscience [Squire and Kosslyn, 1998], [Shephard, 1998], [Gazzaniga, 2000] have significantly improved our understanding of the role of neural components in intelligent activity. Although analysis and criticism of these results is beyond the scope of this book, let us touch on several important issues.

In the area of perception and attention there is the binding problem. Researchers such as Anne Treisman [Treisman, 1993, 1998] note that perception depends on distributed neural codes responsible for relating the parts and properties of objects, and ask what mechanism "binds" the information belonging to a particular object and allows us to distinguish that object from others.

In the field of visual search, the question of...
