Artificial intelligence as a science

Lecture



In recent years there has been a genuine revolution in both the content and the methodology of work in artificial intelligence. Papers that build on existing theories, rather than announce fundamentally new discoveries, are now far more common; their claims rest on rigorous theorems or solid experimental evidence rather than on intuition; and the validity of their conclusions is demonstrated on real practical applications rather than on toy examples.

The emergence of artificial intelligence was partly the result of efforts to overcome the limitations of existing scientific fields such as control theory and statistics, but artificial intelligence has now come to embrace these fields.

In one of his works, David McAllester expressed this idea as follows. In the early period of the development of artificial intelligence, it seemed plausible that new forms of symbolic computation, such as frames and semantic networks, would make much of classical theory obsolete.

This led to a kind of self-isolation, in which artificial intelligence became largely separated from the rest of computer science. That isolationism has now been overcome. It has been recognized that machine learning should not be separated from information theory, that reasoning under uncertainty should not be isolated from stochastic modeling, that search should not be considered apart from classical optimization and control, and that automated reasoning should not be treated as independent of formal methods and statistical analysis.

From the point of view of methodology, artificial intelligence has finally and firmly turned to the scientific method. To be accepted, hypotheses must now be tested in rigorous empirical experiments, and the significance of the results must be confirmed by statistical analysis. Moreover, experiments can now be replicated thanks to the Internet and to shared repositories of test data and code.

Speech recognition illustrates this pattern well. In the 1970s a wide variety of architectures and approaches were tried. Many of them were rather ad hoc and fragile, demonstrated only on a few specially selected examples. In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the field.

Two features of HMMs account for this, and they reflect the state of the art described above. First, HMMs are based on a rigorous mathematical theory, which allows speech researchers to build on mathematical results accumulated in other fields over several decades. Second, the models are generated by training on large corpora of real speech data. This guarantees robust performance, and in rigorous blind tests HMMs have steadily improved their scores.
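To give a concrete, if minimal, feel for the mathematics behind HMMs, the sketch below implements the standard forward algorithm, which computes the likelihood of an observation sequence under a model. The two-state model and all of its probabilities are invented for illustration, not taken from any real speech system.

```python
import numpy as np

# Toy HMM: 2 hidden states, 3 possible observation symbols.
# Every number below is an invented illustration.
A = np.array([[0.7, 0.3],       # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # B[i, k] = P(observation k | state i)
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution

def forward(observations):
    """Return P(observations | model) via the forward algorithm."""
    alpha = pi * B[:, observations[0]]  # alpha[i] = P(first obs, state i)
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]   # one transition step, then emission
    return alpha.sum()

print(forward([0, 1, 2]))  # likelihood of the symbol sequence 0, 1, 2
```

In a real recognizer the same recurrence runs over acoustic feature vectors and many more states, but the underlying theory is exactly this kind of well-understood probabilistic machinery.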

Speech recognition technology, and the related field of handwriting recognition, are already making the transition to widespread industrial and consumer applications.

Neural networks follow the same trend. Much of the work on neural networks in the 1980s was done in an attempt to map out the scope of what could be achieved and to understand how neural networks differ from "traditional" techniques. Using improved methodology and firmer theoretical foundations, researchers in this field have reached a level of understanding at which neural networks can be compared directly with corresponding techniques from statistics, pattern recognition, and machine learning, so that the most promising technique can be applied to each application.

As a result of these developments, so-called data mining technology, the analysis of data for hidden patterns, emerged and became the basis of a new, rapidly growing branch of the information industry.

The wider community's acquaintance with Judea Pearl's book Probabilistic Reasoning in Intelligent Systems led to a new acceptance of probability theory and decision theory in artificial intelligence, following the resurgence of interest in the topic sparked by Peter Cheeseman's article "In Defense of Probability".

The formalism of Bayesian networks was developed to represent uncertain knowledge efficiently and to reason rigorously with such knowledge. This approach overcame many of the problems of the probabilistic reasoning systems of the 1960s and 1970s; it now dominates AI research on reasoning under uncertainty and on expert systems.

The approach allows learning from experience, and it combines the best of classical artificial intelligence and neural networks.
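As a minimal sketch of what such a network looks like in practice, here is the textbook rain/sprinkler example, with the joint distribution factored along the network structure and a posterior computed by enumeration. The structure and all probability values are the standard toy illustration, not material from this lecture.

```python
# Bayesian network: Rain -> Sprinkler, and Rain, Sprinkler -> WetGrass.
# All probability values are the standard textbook toy numbers.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {          # P(Sprinkler | Rain)
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_wet = {                # P(WetGrass = True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    """Joint probability, factored along the network's edges."""
    p_w = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (p_w if wet else 1.0 - p_w)

# Posterior P(Rain | WetGrass = True) by summing out the sprinkler.
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(numerator / denominator)  # about 0.36
```

The point of the factorization is that each variable needs a conditional table only over its parents, which is what makes rigorous probabilistic reasoning tractable at scale.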

In the work of Judea Pearl, as well as Eric Horvitz and David Heckerman, the idea of normative expert systems was developed. These are systems that act rationally, according to the laws of decision theory, and do not try to imitate the thought steps of human experts.

The Microsoft Windows operating system includes several normative diagnostic expert systems used for troubleshooting.
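The decision-theoretic principle behind such systems is simply to choose the action with the highest expected utility under the system's current beliefs. The sketch below is a hypothetical diagnostic scenario; the faults, actions, and all numbers are invented and have nothing to do with any actual Windows component.

```python
# Acting rationally per decision theory: pick the action that maximizes
# expected utility. Faults, actions, and all numbers are invented.
P_fault = {"driver": 0.6, "cable": 0.3, "hardware": 0.1}  # belief over faults

utility = {  # utility[action][fault] = payoff of the action given the true fault
    "reinstall_driver": {"driver": 10, "cable": -2, "hardware": -2},
    "replace_cable":    {"driver": -1, "cable": 10, "hardware": -1},
    "call_support":     {"driver": 3,  "cable": 3,  "hardware": 8},
}

def expected_utility(action):
    return sum(P_fault[f] * utility[action][f] for f in P_fault)

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # the rational recommendation
```

A system built this way recommends whatever action is rational given its beliefs, rather than trying to reproduce the chain of reasoning a human expert would follow.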

Similar gentle revolutions have occurred in the fields of robotics, computer vision, and knowledge representation. A better understanding of the research problems, and of the properties that make them complex, combined with increasingly sophisticated mathematical machinery, has made it possible to form realistic research agendas and to adopt more reliable methods.

In many cases, however, formalization and specialization have also led to fragmentation: topics such as machine vision and robotics are increasingly detached from the "mainstream" of work on artificial intelligence. These disparate fields can again be unified on the basis of a common view of artificial intelligence as the science of designing rational agents.
