Possible errors in the results of artificial intelligence systems

Lecture



A neural network processing information can be compared to a black box. Experts do, of course, understand in general terms how a neural network processes data. The problem is that training is not a fully predetermined process, so the output can sometimes be entirely unexpected. At the heart of it all lies deep learning, which has already solved a number of important problems, including image processing, speech recognition, and machine translation. It is possible that neural networks will eventually be able to diagnose diseases at an early stage, make sound trading decisions on the stock exchange, and perform hundreds of other tasks important to humans.

But first we need ways to better understand what happens inside a neural network as it processes data. Otherwise it is difficult, if possible at all, to predict the errors of systems with a weak form of AI. And such errors are certain to occur. This is one of the reasons why Nvidia's self-driving car is still being tested.

People already use mathematical models to make choices easier for themselves, for example to identify a reliable borrower or to find an employee with the right experience for a particular job. In general, such models, and the processes that rely on them, are relatively simple and transparent. But the military, commercial companies, and scientists now use far more complex systems, whose "decisions" are not based on the results of one or two models. Deep learning differs fundamentally from the usual principles of computer operation. According to Tommi Jaakkola, a professor at MIT, this problem is becoming increasingly pressing. "Whatever you do, whether deciding on an investment, trying to make a diagnosis, or choosing a target on the battlefield, it should not depend on a black-box method," he says.
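
To make the contrast concrete, below is a minimal sketch of the kind of "simple and transparent" model described above: a logistic-regression credit scorer built with scikit-learn. The feature names and training data are invented for illustration; the point is that the learned decision rule is just a weighted sum whose coefficients can be read off directly.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical applicant features; names and values are made up.
    feature_names = ["income", "debt_ratio", "years_employed"]
    X = np.array([
        [60_000, 0.20, 5],
        [25_000, 0.65, 1],
        [80_000, 0.10, 8],
        [30_000, 0.55, 2],
        [55_000, 0.30, 4],
        [20_000, 0.70, 0],
    ], dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan repaid, 0 = default

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # The whole "decision process" is visible: one weight per feature,
    # each stating how that feature pushes the score up or down.
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name}: {coef:+.4f}")

A loan officer, or a regulator, can inspect those three coefficients directly; nothing comparable exists for the deep systems discussed next.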

This is understood not only by scientists but also by officials. Starting next summer, the European Union will introduce new rules for developers and suppliers of automated decision-making systems. Representatives of such companies will be required to explain to users how a system works and on what basis its decisions are made. The problem is that this may simply not be possible. The basic principles of neural network operation are easy enough to explain, but few can say exactly what happens inside one while it processes complex information. Even the creators of such systems cannot explain everything from start to finish, because the processes that take place in a neural network during processing are extremely complex.
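
What could an explanation of "on what basis decisions are made" look like in practice? One common model-agnostic approach is permutation importance: shuffle each input feature in turn and measure how much the model's accuracy drops. The sketch below, using scikit-learn on synthetic data, is only an illustration of the idea; it reports which inputs matter to the model, not what happens inside the black box.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                   # synthetic data, 4 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # label depends on features 0 and 1

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and see how much the score degrades.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance {imp:.3f}")

Here features 0 and 1 should come out with the largest importances, matching how the synthetic labels were generated; for a real system, such a report is a partial explanation at best.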

Never before have humans built machines whose operating principles are not fully understood even by their creators, and which differ so much from the way people themselves process information. So can we expect normal interaction with machines whose behavior is unpredictable?


Artwork by artist Adam Ferriss, created with Google Deep Dream

In 2015, a research team at Mount Sinai Hospital in New York used deep learning to process a database of patient records. The database held information on thousands of patients, with hundreds of data fields for each person: test results, dates of doctor visits, and so on. The result was the Deep Patient program, trained on the records of 700,000 people. The results the program showed were unusually good: for example, it was able to predict the emergence of certain diseases at an early stage in a number of patients.
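
The article does not describe Deep Patient's architecture, so the following is only a hedged sketch of the general setup it describes: a neural network trained on rows of patient records to predict a later diagnosis. The data here is synthetic, and the network size and every parameter are assumptions made for illustration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)
    n_patients, n_fields = 1000, 50                 # stand-ins for the real record counts
    X = rng.normal(size=(n_patients, n_fields))     # test results, visit data, ...
    y = (X[:, :5].sum(axis=1) > 0).astype(int)      # synthetic "develops disease" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", net.score(X_test, y_test))

After training, everything the model "knows" lives in net.coefs_: thousands of floating-point weights, with no human-readable reason attached to any single prediction. That opacity is exactly what the next paragraph runs into.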

However, some of the results turned out to be strange. For example, the system began to diagnose schizophrenia remarkably well. Yet even for experienced psychiatrists, diagnosing schizophrenia is a difficult problem, while the computer handled it with ease. Why? No one can explain it, not even the system's creators.

From the beginning, AI developers were divided into two camps. The first held that a machine must be programmed so that every process occurring in the system can be seen and understood. The second believed that a machine should learn on its own, taking in data from as many sources as possible and then processing it independently. In effect, supporters of this view proposed that each neural network should be "its own boss."

All of this remained pure theory until computers became powerful enough for experts in artificial intelligence and neural networks to start putting their ideas into practice. Over the past ten years a huge number of those ideas have been implemented: excellent services have appeared that translate texts from one language to another, recognize speech, process video streams in real time, work with financial data, and optimize production processes.

But the problem is that almost no machine learning technology is very transparent, even to specialists. With "manual" programming the situation is much simpler. Of course, one cannot say that future systems would
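
The difference the paragraph points to can be shown directly. A hand-coded rule can be audited line by line; a trained network's equivalent "rule" is only matrices of numbers. The rule below is invented purely for illustration.

    def approve_loan_by_hand(income: float, debt_ratio: float) -> bool:
        """Hand-written logic: every condition is explicit and auditable."""
        return income > 40_000 and debt_ratio < 0.4

    # The learned counterpart (e.g. the MLP sketched earlier) stores its
    # logic as weight matrices such as net.coefs_[0] with shape (50, 64):
    # 3,200 numbers, none of which maps to a condition a specialist
    # could point to and explain.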

