Concept of rationality

Lecture



A rational agent is one that does the right thing; more formally, it is an agent for which every entry in the table defining the agent function is filled in correctly. Obviously, doing the right thing is better than doing the wrong thing, but what is meant by "doing the right thing"?

As a first approximation, we can say that the right action is the one that makes the agent most successful, so some way of measuring success is required. The performance measure, together with a description of the environment and of the agent's sensors and actuators, provides a complete specification of the task facing the agent. With these components in hand, we can define more precisely what is meant by the word "rational".
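The four components of such a task specification can be sketched as a simple record. This is an illustrative sketch only; the class and field names are assumptions, not taken from the original text:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpecification:
    """Hypothetical container for the four components named above."""
    performance_measure: str                      # criteria for success
    environment: str                              # description of the environment
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

# Example: the vacuum-cleaner agent discussed later in this lecture
vacuum_task = TaskSpecification(
    performance_measure="one point per clean square per time step",
    environment="two squares, A and B, each possibly dirty",
    actuators=["Left", "Right", "Suck", "NoOp"],
    sensors=["current location", "dirt at current location"],
)
```

Writing the specification down as an explicit structure makes it easy to see when one of the four components has been left undefined.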


Performance measures

A performance measure embodies the criterion for successful agent behavior. When placed in an environment, an agent generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to pass through a sequence of states.

If that sequence of states is the desired one, then the agent has performed well. Of course, there is no single fixed measure suitable for all agents. One could ask the agent for its subjective opinion of how satisfied it is with its own performance, but some agents would be unable to answer, while others would delude themselves.

We therefore insist on objective performance measures, and, as a rule, these are supplied by the designer who builds the agent. Consider the vacuum-cleaner agent described in the previous section. We might propose to measure its performance by the amount of dirt cleaned up in a single eight-hour shift. With a rational agent, of course, what you ask for is what you get.

A rational agent can maximize this performance measure by cleaning up the dirt, then dumping it all back on the floor, then cleaning it up again, and so on. A more suitable performance measure would therefore reward the agent for keeping the floor clean. For example, one point could be awarded for each clean square at each time step (perhaps with a penalty for the electricity consumed and the noise generated).
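A minimal sketch of this kind of performance measure, assuming one point per clean square per time step plus an optional penalty when the agent moves (all function names and constants here are illustrative):

```python
def score_step(num_clean, moved=False, move_penalty=0):
    """Reward for one time interval: +1 per clean square,
    minus an optional penalty when the agent moved that step."""
    return num_clean - (move_penalty if moved else 0)

def lifetime_score(history, move_penalty=0):
    """Total reward over the agent's lifetime.
    history is a list of (num_clean_squares, moved) pairs, one per step."""
    return sum(score_step(n, m, move_penalty) for n, m in history)

# Two steps with both squares clean, one of which involved a move:
# with no penalty the total is 4; with a one-point penalty it is 3.
```

Note that the measure scores states of the environment (clean squares), not the agent's behavior directly, in line with the design rule given below.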

As a general rule, it is better to design performance measures according to what one actually wants to achieve in the environment, rather than according to how, in the designer's opinion, the agent should behave.

Selecting a performance measure is not always easy. For example, the notion of "clean floor" discussed above is based on average cleanliness over time. Yet the same average cleanliness can be achieved by two different agents, one of which works steadily but slowly, while the other cleans energetically from time to time but takes long breaks.

It may seem that deciding which of these is preferable is a fine point of economics, but in fact it is a deep philosophical question with far-reaching implications. Which is better: a reckless life of highs and lows, or a safe but monotonous existence? Which is better: an economy in which everyone lives in moderate poverty, or one in which some want for nothing while others can barely make ends meet? We leave these questions as an exercise for the inquisitive reader.


Rationality

What is rational at any given time depends on the four factors listed below.

  • The performance measure that defines the criterion of success.
  • The agent's prior knowledge of the environment.
  • The actions that the agent can perform.
  • The agent's percept sequence to date.

Given these factors, we can formulate the following definition of a rational agent.

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Consider the example of a simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if it is not; a partial tabulation of this agent function is shown in the table. Is this a rational agent? The answer is not so simple! First we must specify what the performance measure is, what is known about the environment, and what sensors and actuators the agent has. Let us make the assumptions listed below.

  • The performance measure awards one point for each clean square at each time step, over an agent "lifetime" of 1000 time steps.
  • The "geography" of the environment is known in advance, but the distribution of dirt and the initial location of the agent are not. Clean squares stay clean, and sucking cleans the current square. The Left and Right actions move the agent left and right, except when this would take the agent outside the environment, in which case the agent remains where it is.
  • The only available actions are Left, Right, Suck (suck up dirt), and NoOp (do nothing).
  • The agent correctly perceives its location and whether that location contains dirt.
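Under these assumptions the agent function can be written down directly. The following sketch, a simplified two-square version in which all names are illustrative rather than taken from the original text, implements the agent and scores it over a lifetime:

```python
def reflex_vacuum_agent(location, dirty):
    """Agent function: suck if the current square is dirty,
    otherwise move to the other square."""
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def simulate(dirt, location="A", steps=1000):
    """Run the agent and return its total score:
    one point per clean square at each time step."""
    score = 0
    for _ in range(steps):
        score += sum(1 for sq in ("A", "B") if not dirt[sq])
        action = reflex_vacuum_agent(location, dirt[location])
        if action == "Suck":
            dirt[location] = False
        elif action == "Right" and location == "A":
            location = "B"
        elif action == "Left" and location == "B":
            location = "A"
    return score
```

Running `simulate` from any combination of initial dirt and location shows that no other agent function could score higher under these particular assumptions.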

The authors argue that under these circumstances the agent is indeed rational; its expected performance is at least as high as that of any other agent.

One can easily see that the same agent would be irrational under different circumstances. For example, once all the dirt is cleaned up, the agent will oscillate needlessly back and forth; if the performance measure includes a penalty of one point for each movement left or right, the agent will fare poorly.

A better agent for this case would do nothing as long as it is sure that all the squares remain clean. If clean squares can become dirty again, the agent should check them from time to time and re-clean them as necessary. And if the geography of the environment is unknown, the agent may need to explore it rather than stick to squares A and B.
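With a movement penalty in force, a better agent keeps a little internal state: once it has seen both squares clean, it chooses NoOp instead of oscillating. A hypothetical sketch, assuming the same two-square world and action names as above:

```python
def make_stateful_vacuum_agent():
    """Return an agent that remembers which squares it has seen clean
    and stops moving once it believes the whole floor is clean."""
    known_clean = set()

    def agent(location, dirty):
        if dirty:
            known_clean.discard(location)   # belief was wrong: square is dirty
            return "Suck"
        known_clean.add(location)
        if known_clean >= {"A", "B"}:
            return "NoOp"                   # all squares clean: avoid the penalty
        return "Right" if location == "A" else "Left"

    return agent
```

If clean squares can become dirty again, the agent would also need to expire entries in `known_clean` after some time, so that it periodically re-checks the squares.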


Knowledge, learning and autonomy

We need to be careful to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.

Consider the following example: one day a gentleman is walking along the Champs-Élysées in Paris and sees an old friend on the other side of the street. There is no traffic nearby and he is not in a hurry, so, being a rational agent, he starts to cross the road. Meanwhile, at an altitude of 10,000 metres, the cargo door falls off a passing airliner, and before the unfortunate man can reach the other side of the street, he is flattened. Was it irrational of this gentleman to decide to cross the street? It is highly unlikely that his obituary would read: "Victim of an idiotic attempt to cross the street."

This example shows that rationality is not the same as perfection. Rationality maximizes expected performance, while perfection maximizes actual performance.

Retreating from the demand for perfection is not just a matter of being fair to agents; it is a matter of facing reality. If we require an agent to perform the action that turns out to be best after the fact, then designing an agent to meet this specification becomes impossible (at least until we can improve the efficiency of time machines or fortune-tellers' crystal balls).

Our definition of rationality therefore does not require omniscience, because the rational choice depends only on the percept sequence formed up to the present moment. We must also make sure we have not inadvertently allowed the agent to engage in activities that are decidedly unintelligent. For example, if an agent does not look both ways before crossing a busy road, then its percept sequence so far cannot tell it that a huge truck is approaching at high speed. Does our definition of rationality say the agent may now cross the road? Far from it! First, the agent would not be rational in crossing given such an uninformative percept sequence: the risk of an accident from crossing without looking is too great. Second, a rational agent should choose the "look" action before stepping into the road, because looking maximizes expected performance. Performing actions in order to modify future percepts (sometimes called information gathering) is an important part of rationality.

A second example of information gathering is the exploration that a vacuum-cleaner agent must undertake in an initially unknown environment.

Our definition requires a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent's initial configuration may reflect some prior knowledge of the environment, but as the agent gains experience this knowledge may be modified and augmented. There are extreme cases in which the environment is completely known in advance. In such cases, the agent need not perceive or learn; it simply acts correctly from the start.

Of course, such agents are highly fragile. Consider the humble dung beetle. After digging its nest and laying its eggs, it rolls a ball of dung from the nearest dung heap to plug the entrance to the nest. If the ball of dung is removed from its grasp just before it is used, the beetle continues its task and pantomimes plugging the nest with the nonexistent ball, never noticing that the ball is missing. Evolution has built an assumption into the beetle's behavior, and whenever that assumption is violated, unsuccessful behavior results.

Slightly more intelligent is the sphex wasp. The female sphex digs a burrow, goes out and stings a caterpillar, drags it to the burrow, enters the burrow once more to check that all is well, then drags the caterpillar inside and lays her eggs. The caterpillar serves as a food source as the eggs develop. So far so good, but if an entomologist moves the caterpillar a few inches away while the sphex is doing its check, the wasp reverts to the "drag the caterpillar" step of its plan and continues to execute the plan unchanged, even after dozens of caterpillar-moving interventions. The sphex is unable to learn that its innate plan is failing, and therefore does not change it.

In successful agents, the task of computing the agent function is divided among three periods: some of the computation is done at design time by the agent's designers; more is done when the agent deliberates over its next action; and, as the agent learns from experience, still more computation is done to decide how to modify its behavior.

To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say the agent lacks autonomy. A rational agent should be autonomous: it should learn whatever it can in order to compensate for partial or incorrect prior knowledge. For example, a vacuum-cleaning agent that learns to predict where and when additional dirt will appear will certainly do better than one that cannot.

As a practical matter, an agent is seldom required to be completely autonomous from the very beginning: when the agent has had little or no experience, it would have to act randomly unless the designer gave it some assistance. So, just as evolution provides animals with enough built-in reflexes to survive long enough to learn for themselves, it would be reasonable to provide an artificial intelligent agent with some initial knowledge as well as the ability to learn. After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge. Hence the incorporation of learning allows one to design a single simple rational agent that will succeed in a vast variety of environments.
created: 2014-09-22
updated: 2021-03-13