
Models and methods for solving problems in artificial intelligence systems

Lecture



  • Classification of task representations
  • Logical models
  • Network models
  • Production models
  • Scripts
  • Intelligent interface
  • Classification of levels of understanding
  • Methods for solving problems
  • Problem solving by state-space search
  • Solving problems by the method of reduction
  • Solving problems by deductive choice
  • Problem solving using non-monotonic and probabilistic logics


Classification of task representations.

Logical models.

The formulation and solution of any problem is always tied to its "immersion" in a suitable subject domain. Thus, when solving the task of scheduling the machining of parts, we draw into the subject domain specific machines, parts, and time intervals, as well as the general concepts "machine", "part", "machine type", etc. All the objects and events that form the basis of a shared understanding of the information needed to solve a problem are called the subject domain. Mentally, the subject domain is thought of as consisting of real or abstract objects, called entities.

The entities of the subject domain stand in certain relations (associations) to one another, which can themselves be considered entities and included in the subject domain. Between entities there are also various similarity relations; a set of similar entities forms a class of entities, which is itself a new entity of the subject domain.

Relations between entities are expressed by judgments. A judgment is a mentally conceivable situation that may or may not hold for the entities in question. In a language (formal or natural), judgments correspond to sentences. Judgments and sentences can likewise be treated as entities and included in the subject domain.

Languages intended for describing subject domains are called knowledge representation languages. The universal knowledge representation language is natural language. However, using natural language in machine knowledge representation systems runs into great difficulties because of its inherent irregularities, ambiguities, presuppositions, and so on. The main obstacle, though, is the absence of a formal semantics of natural language with sufficiently effective operational support.

For representing mathematical knowledge, mathematical logic has long used logical formalisms, chiefly the predicate calculus, which has a clear formal semantics and operational support in the sense that inference mechanisms have been developed for it. The predicate calculus was therefore the first logical language used for the formal description of subject domains connected with solving applied problems.

Subject domain descriptions made in logical languages are called (formal) logical models.

Network models

We introduce several definitions. By an entity we mean an object of arbitrary nature. If the object exists in the real world, it is called a P-entity. In the knowledge base it corresponds to a description whose completeness is determined by the information the intelligent system (IS) has about that P-entity; such a representation in the knowledge base is called an M-entity. Note that there may be M-entities with no corresponding P-entities in the world surrounding the IS. Such M-entities are abstract objects produced inside the knowledge base by operations such as generalization.

The division into two kinds of entities makes it possible to use in network models ideas first formulated in the theory of semiotic models and in situational control based on them. A semiotic model of a problem domain will be understood as a set of procedures that map the P-entities and their connections, fixed in the problem domain by a knowledge engineer, into a set of related M-entities in the knowledge base. The way interconnected P-entities are interpreted will be called denotative semantics, and the way interconnected M-entities are interpreted, connotative semantics.

A P-entity, relative to the M-entity corresponding to it in the knowledge base, is called the denotation or referent of that M-entity; the M-entity, relative to the original P-entity, is its designator: a name, label, identifier, etc. The designator is the simplest element of the network model. It belongs to the class of terminal objects of the network model. A terminal object is an M-entity that cannot be decomposed into simpler entities; the remaining M-entities are called derived objects, or derived M-entities.

The list of terminal objects that can form classes or types is fixed when the IS is designed. They can be integers, real numbers, identifiers, strings, lists, etc. The semantics of terminal objects is determined by the set of admissible procedures that operate on them, for example: arithmetic operations on numbers, comparison of strings or identifiers, input-output operations with the necessary representation conversions, and so on.

Production models.

Productions, along with frames, are the most popular means of knowledge representation in IS. On the one hand, productions are close to logical models, which makes it possible to organize effective inference procedures over them; on the other hand, they reflect knowledge more visually than classical logical models do. They lack the rigid restrictions typical of logical calculi, which makes it possible to vary the interpretation of the elements of a production.

In the general case, a production is understood to be an expression of the following form:

(i); Q; P; A => B; N.

Here i is the name of the production, by which it is distinguished from the whole set of productions. The name can be a lexeme reflecting the essence of the production (for example, "buying a book" or "setting the lock code") or the production's ordinal number in the set stored in the system's memory.

The element Q characterizes the sphere of application of the production. Such spheres are easily distinguished in human cognitive structures: our knowledge is "laid out on shelves". One shelf holds knowledge about how to cook food, another about how to get to work, etc. Partitioning knowledge into separate spheres saves time when searching for the needed knowledge. The same partitioning into spheres in the knowledge base of an IS is useful when production models are used to represent knowledge.

The main element of the production is its core A => B. The interpretation of the core may vary and depends on what stands to the left and right of the sequencing sign =>. The usual reading of the core is IF A, THEN B; more complex core constructions allow an alternative on the right-hand side, for example IF A, THEN B1, ELSE B2. The sequencing sign can be interpreted in the usual logical sense, as the logical consequence of B from a true A (if A is not a true expression, nothing can be said about B). Other interpretations of the core are possible as well; for example, A may describe a condition that must hold for the action B to be performed.

The element P is the applicability condition of the production core. Usually P is a logical expression (as a rule, a predicate). When P is true, the production core is activated; when P is false, the core cannot be used. For example, in the production "MONEY AVAILABLE; IF YOU WANT TO BUY X, THEN PAY ITS PRICE AND HAND THE CHECK TO THE SELLER", if the applicability condition of the core is false, i.e. there is no money, the core of the production cannot be applied.

The element N describes the postconditions of the production. They are actualized only if the core of the production has fired. The postconditions describe the actions and procedures that must be performed after B has been carried out. For example, after a certain item is bought in a shop, the count of such items in the shop's inventory must be reduced by one. The execution of N need not occur immediately after the core fires.

If a set of productions is stored in the system's memory, they form a production system. A production system must provide special production-control procedures, by means of which productions are actualized and the choice is made of which of the actualized productions to execute.
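The production mechanism described above can be sketched in code. The following is only an illustrative sketch: the class layout, the dictionary-based working memory, the shop example, and the "first applicable rule wins" control strategy are assumptions for the purposes of the example, not a standard formulation.

```python
# A minimal production-system sketch: each production carries a name (i),
# a sphere Q, an applicability precondition P, a core A => B, and a
# postcondition N, as in the text above.

class Production:
    def __init__(self, name, sphere, precondition, condition, action, postcondition=None):
        self.name = name                    # i: distinguishes the production
        self.sphere = sphere                # Q: sphere the production belongs to
        self.precondition = precondition    # P: applicability test for the core
        self.condition = condition          # A: left side of the core
        self.action = action                # B: right side of the core
        self.postcondition = postcondition  # N: executed after the core fires

def run(productions, memory, sphere, max_cycles=100):
    """Fire applicable productions of one sphere until none applies."""
    for _ in range(max_cycles):
        fired = False
        for p in productions:
            if p.sphere == sphere and p.precondition(memory) and p.condition(memory):
                p.action(memory)
                if p.postcondition:
                    p.postcondition(memory)
                fired = True
                break  # simple control strategy: first applicable rule wins
        if not fired:
            break
    return memory

# Example modeled on the "buying a book" production from the text.
memory = {"money": 10, "price": 7, "bought": False, "stock": 3}
buy = Production(
    name="buying a book",
    sphere="shopping",
    precondition=lambda m: m["money"] >= m["price"],   # P: money available
    condition=lambda m: not m["bought"],               # A: want to buy
    action=lambda m: m.update(money=m["money"] - m["price"], bought=True),  # B: pay
    postcondition=lambda m: m.update(stock=m["stock"] - 1),  # N: reduce inventory
)
run([buy], memory, "shopping")
print(memory["bought"], memory["money"], memory["stock"])  # True 3 2
```

The control procedure here is deliberately trivial; in real production systems the choice among several applicable productions (conflict resolution) is itself a substantial problem.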

A number of IS use combinations of network and production knowledge representation models. In such models, declarative knowledge is described in the network component and procedural knowledge in the production component; one then speaks of a production system operating over a semantic network.

Scripts.

Stereotyped knowledge describing well-known standard situations of the real world plays a special role in knowledge representation systems. Such knowledge makes it possible to recover information omitted from the description of a situation, to predict the appearance of new facts that can be expected in the situation, and to establish the meaning of the situation's origin from the standpoint of a broader situational context.

Different models are used to describe stereotyped knowledge; among them, scripts are the most common. A script is a formalized description of a standard sequence of interrelated facts defining a typical situation of the subject domain. These can be sequences of actions or procedures describing how the script's actors achieve their goals (for example, lunch at a restaurant, a business trip, an airplane flight, entering a university). In IS, scripts are used in procedures for understanding natural language texts, behavior planning, training, decision making, change management, etc.

Intelligent interface

Suppose a text arrives at the input of an IS. We say that the IS understands the text if it gives answers, correct from a person's point of view, to any questions about what is said in the text. By "person" is meant a specific expert charged with assessing the system's ability to understand. This introduces a measure of subjectivity, because different people may understand the same text differently.

Classification of levels of understanding

Existing IS exhibit five basic levels of understanding and two levels of meta-understanding.

The first level is characterized by a scheme in which the system generates answers to questions solely on the basis of the direct content of the input text. If, for example, the text entered into the system is: "At eight in the morning, after breakfast, Petya went to school. At two o'clock he returned home. After lunch he went for a walk", then at the first level of understanding the system must be able to answer questions like "When did Petya go to school?" or "What did Petya do after lunch?". In the linguistic processor, morphological, syntactic, and semantic analysis of the text and of the questions about it takes place. The output of the linguistic processor is an internal representation of the text and questions, with which the inference unit can work; using special procedures, this unit forms the answers. In other words, understanding already at the first level requires the IS to have certain means of representing data and of drawing inferences over those data.

The second level: here, tools for logical inference over the information contained in the text are added. These are various text logics (temporal, spatial, causal, etc.) capable of deriving information not explicitly present in the text. For our example, at the second level it is possible to form correct answers to questions like "Which happened earlier: Petya's leaving for school or his lunch?" or "Did Petya go for a walk after returning from school?". Only by building the temporal structure of the text can the IS answer such questions.

The IS scheme realizing the second level of understanding contains one more knowledge base. It stores the laws pertaining to the temporal structure of events, their possible spatial organization, causal dependence, etc., and the logical unit possesses all the tools needed to work with such pseudo-physical logics.

The third level: to the second-level tools are added rules for supplementing the text with the system's knowledge about the environment. In IS this knowledge is usually logical in nature and is recorded in the form of scripts or procedures of some other type. At the third level of understanding, an IS should give correct answers to questions like "Where was Petya at ten in the morning?" or "Where did Petya return from at two in the afternoon?". To do this, it must know what the process "being at school" means, in particular that this process is continuous and that the subject participating in it is always "at school".

The IS scheme realizing third-level understanding does not outwardly differ from the second-level scheme. However, the logical unit must provide means not only for purely deductive inference but also for inference over scripts.

The three levels of understanding listed are implemented in all practically working IS. The first level and part of the second are part of a variety of natural language communication systems.

The next two levels of understanding are only partially implemented in existing IS.

The fourth level: instead of the text alone, it uses an extended text, which is generated only when there are two channels for receiving information. Over one channel the text is fed to the system; over the other, additional information missing from the text. In human communication, the role of the second channel is usually played by vision. Intelligent robots endowed with vision likewise have more than one communication channel.

The visual channel makes it possible to register the state of the environment "here and now" and to inject the observed information into the text. The system becomes capable of understanding texts containing words that refer directly to the situation in which the text is generated. At the lower levels of understanding it is impossible to understand, for example, the text: "Look what Petya did! He should not have taken it!". With a visual channel the process of understanding becomes possible.

With fourth-level understanding, an IS can answer questions like "Why shouldn't Petya have taken it?" or "What did Petya do?". If a question submitted to the system corresponds to the third level, the system issues the required answer. If the answer requires additional information lying outside the text itself, the internal representation of the text and question is passed to the unit that correlates the text with the actual situation of its generation, available to the IS through the visual or some other channel registering the situation of the external world.

The fifth level: to answer at this level, the IS uses information about the specific subject who is the source of the text, as well as general information stored in the system's memory that relates to communication (knowledge about the organization of communication, about the goals of its participants, about the norms of participation in communication). The theory corresponding to the fifth level is the so-called theory of speech acts.

It has been noted that any phrase not only denotes some phenomenon of reality but also unites in itself three actions: locution, illocution, and perlocution. Locution is the speaking itself, i.e. the actions the speaker performs to express a thought. Illocution is an action performed by speaking: a question, an inducement (an order or request), an assertion. Perlocution is the action by which the speaker tries to produce some effect on the listener: to flatter, to surprise, to persuade, etc. A speech act can be defined as the minimal meaningful (or purposeful) unit of speech behavior; each speech act consists of a locutionary, an illocutionary, and a perlocutionary act.

For the fourth and fifth levels of understanding, results on the nonverbal components of communication and on the psychological principles underlying communication are of interest. In addition, the rules for supplementing the text include inference rules based on knowledge about the particular subject of communication, if the system possesses such knowledge. For example, the system may trust a given subject, taking the text he generates to be true; or it may distrust him and understand the text while correcting it in accordance with its knowledge of the subject who produced it. Knowledge of this kind must rest on psychological theories of communication, which are as yet insufficiently developed.

For example, suppose the text "Nina promised to come soon" arrives at the system's input. If the system has no information about Nina, it can turn to the knowledge base and use some normative information for the temporal index "soon"; from it one can learn that, with a high degree of certainty, "soon" means no more than half an hour. But the system may have special information about the particular Nina mentioned in the input text. In that case, having obtained the needed information from the knowledge base, the system can prepare, for example, for Nina most likely arriving no earlier than in an hour.

The first meta-level: at this level the content of the knowledge base changes. It is replenished with facts known to the system and contained in the texts entered into it. Different IS differ in the nature of the rules for generating facts from knowledge. For example, in systems intended for expertise in pharmacology, these rules rest on methods of inductive inference and pattern recognition. The rules may be based on the principles of probability, fuzzy inference, and the like. But in all cases the knowledge base turns out to be a priori incomplete, and in such IS difficulties arise in finding answers to queries; in particular, non-monotonic inference becomes necessary in the knowledge bases.

The second meta-level: at this level metaphorical knowledge is generated. The rules used for generating knowledge at the metaphorical level are special procedures relying on inference by analogy and by association. The currently known schemes of inference by analogy use, as a rule, the Leibniz diagram, which reflects only a special case of analogical reasoning. Schemes of associative reasoning are poorer still.

If we consider the levels and meta-levels of understanding from the standpoint of IS architecture, we observe a successive accretion of new units and a growing complexity of the procedures they implement. At the first level, a linguistic processor with a knowledge base concerning only the text itself suffices. At the second level, a logical inference procedure appears in this processor. At the third level, a knowledge base about the environment becomes necessary. The appearance of a new information channel, operating independently of the original one, characterizes the fourth level; besides the procedures connected with this channel's operation, procedures appear that tie together the results obtained over the two channels and integrate the information received through each of them. At the fifth level, diverse methods of inference over knowledge and data are developed, and models of individual and group behavior become important. At the meta-levels, new procedures for manipulating knowledge arise that did not exist at the lower levels of understanding. This process is open-ended. Understanding in full is, apparently, an unattainable dream, but understanding at the level of people's "everyday understanding" is quite achievable in IS.

There are other interpretations of the phenomenon of understanding. One can, for example, assess the level of understanding by the system's ability to explain the result obtained. Here there is possible not only the level of explanation, when the system explains what it did, for example, on the basis of the text entered into it, but also the level of justification (argumentation), when the system justifies its result by showing that it does not contradict the system of knowledge and data at its disposal. Unlike explanation, justification is always tied to the body of facts and knowledge determined by the current moment of the system's existence; a text entered for understanding may in some states be accepted by the system as true and in others as false. Besides explanation and justification, one more function connected with understanding texts is possible: vindication. To vindicate something means to show that the derived assertions do not contradict the system of norms and values embedded in the IS. Existing IS of the expert-system type are, as a rule, capable of giving explanations and only partially of giving justifications; the procedures of justification and vindication have not yet been implemented in full.

Methods for solving problems.

The functioning of many IS is goal-directed (autonomous intelligent robots are an example). A typical act of such functioning is solving the problem of planning a path to a desired goal from some fixed initial situation. The result of solving the problem should be an action plan: a partially ordered set of actions. Such a plan resembles a script in which the relations between vertices are of the types "goal-subgoal", "goal-action", "action-result", etc. Any path in this script leading from the vertex corresponding to the current situation to any of the goal vertices determines an action plan.

The search for an action plan arises in an IS only when it encounters a nonstandard situation for which no previously known set of actions leading to the desired goal exists. All problems of constructing an action plan can be divided into two types, corresponding to different models: planning in the state space (the SS-problem) and planning in the task space (the PR-problem).

In the first case, a certain space of situations is considered given. The description of a situation includes the state of the external world and the state of the IS, characterized by a number of parameters. Situations form certain generalized states, and the actions of the IS or changes in the external environment change which states are currently actualized. Among the generalized states, initial states (usually one) and final (goal) states are distinguished. The SS-problem consists in finding a path leading from the initial state to one of the final states. If, for example, an IS is designed to play chess, the generalized states are the positions arising on the chessboard; the initial state may be the position fixed at the current moment of the game, and the goal positions the set of mate and drawn positions. Note that in chess a direct enumeration of the goal positions is impossible: mate and drawn positions are described in a language different from the language describing states by the arrangement of pieces on the squares of the board. This is precisely what makes the search for an action plan in chess difficult.

In planning in the task space the situation is somewhat different. The space is formed by introducing on the set of tasks relations of the types "part-whole", "task-subtask", "general case-special case", etc. In other words, the task space reflects the decomposition of tasks into subtasks (of goals into subgoals). The PR-problem consists in finding a decomposition of the original task into subtasks that leads to tasks whose solution the system already knows. For example, suppose an IS knows how to compute sin x and cos x for any value of the argument and how to perform division. If the IS needs to compute tan x, the solution of the PR-problem is the decomposition tan x = sin x / cos x (except for x = π/2 + kπ).
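The tan x decomposition can be written directly as a tiny task-reduction program. This is an illustrative sketch: the table of elementary tasks and the `solve` function are assumptions made for the example.

```python
import math

# The composite task "tan" is reduced to the elementary tasks "sin",
# "cos", and division, which the system already knows how to solve.
elementary = {"sin": math.sin, "cos": math.cos}

def solve(task, x):
    if task in elementary:             # elementary task: solve directly
        return elementary[task](x)
    if task == "tan":                  # reduction: tan x = sin x / cos x
        return solve("sin", x) / solve("cos", x)
    raise ValueError(f"no decomposition known for task {task!r}")

print(abs(solve("tan", 1.0) - math.tan(1.0)) < 1e-12)  # True
```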

Problem solving by state-space search.

Representing problems in the state space presupposes specifying a number of descriptions: of the states, of the set of operators and their effects on transitions between states, and of the goal states. State descriptions may be symbol strings, vectors, two-dimensional arrays, trees, lists, etc. Operators transform one state into another; sometimes they are represented as productions A => B, meaning that state A is transformed into state B.

The state space can be represented as a graph whose vertices are labeled with states and whose arcs are labeled with operators.

Thus the problem of finding a solution to the task <A, B> in state-space planning is represented as the problem of finding a path from A to B in a graph. Usually the graphs are not given explicitly but are generated as needed.

Blind and directed path-search methods are distinguished. Blind search comes in two kinds: depth-first and breadth-first. In depth-first search, each alternative is explored to the end without regard to the remaining alternatives. The method is bad for "tall" trees, since one can easily slip past the needed branch and spend much effort exploring "empty" alternatives. In breadth-first search, all alternatives at a fixed level are explored before moving to the next level. This method can turn out worse than depth-first search when all paths in the graph leading to the goal vertex lie at roughly the same depth. Both blind methods are very time-consuming, which is why directed search methods are needed.
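The two blind methods can be sketched as follows. The graph, start, and goal below are illustrative assumptions; in a real IS the successors would be generated by operators rather than stored in a table.

```python
from collections import deque

def depth_first(graph, start, goal):
    """Explore one alternative to the end before considering the others."""
    stack, seen = [(start, [start])], set()
    while stack:
        state, path = stack.pop()          # LIFO: dive into the latest branch
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in graph.get(state, []):
            stack.append((nxt, path + [nxt]))
    return None

def breadth_first(graph, start, goal):
    """Explore all alternatives of one level before going deeper."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        state, path = queue.popleft()      # FIFO: finish the current level
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first(graph, "A", "E"))  # ['A', 'B', 'D', 'E'] - fewest arcs
```

Breadth-first search returns a path with the fewest arcs; depth-first search returns whichever path its branch ordering reaches first.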

The branch-and-bound method. Of the unfinished paths formed during the search, the shortest is selected and extended by one step. The new unfinished paths obtained (as many as there are branches at the given vertex) are considered alongside the old ones, and again the shortest of them is extended by one step. The process repeats until a goal vertex is first reached; that solution is remembered. Then, among the remaining unfinished paths, those longer than or equal to the completed path are discarded, and the rest are extended by the same algorithm for as long as their length remains less than that of the completed path. In the end, either all unfinished paths are eliminated, or among them a completed path shorter than the previous one is formed; the latter then takes over the role of the benchmark, and so on.
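The procedure just described can be sketched with a priority queue that always yields the cheapest unfinished path. The weighted graph is an illustrative assumption.

```python
import heapq

def branch_and_bound(graph, start, goal):
    """Extend the cheapest unfinished path; prune paths that cannot beat
    the best complete path found so far."""
    best = None                        # best (cost, path) completed so far
    frontier = [(0, [start])]          # unfinished paths, cheapest first
    while frontier:
        cost, path = heapq.heappop(frontier)
        if best is not None and cost >= best[0]:
            continue                   # longer than or equal to the benchmark
        node = path[-1]
        if node == goal:
            best = (cost, path)        # new benchmark
            continue
        for nxt, w in graph.get(node, []):
            if nxt not in path:        # avoid cycling through a vertex twice
                heapq.heappush(frontier, (cost + w, path + [nxt]))
    return best

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(branch_and_bound(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

Note that the direct path A-C-D (cost 5) and A-B-D (cost 6) are both pruned once the benchmark of cost 3 is known.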

Moore's shortest-path algorithm. The initial vertex X0 is labeled 0. Suppose that at the current step the algorithm has obtained the set X(xi) of child vertices of a vertex xi. All previously obtained vertices are struck out of it; the remaining vertices receive a label one greater than the label of xi, and pointers are drawn from them to xi. Then, among the labeled vertices that have not yet served as pointer targets, the vertex with the smallest label is chosen and its child vertices are constructed. The labeling is repeated until a goal vertex is obtained.

Dijkstra's algorithm for finding minimum-cost paths generalizes Moore's algorithm by introducing arcs of variable length.

The Doran-Michie algorithm of low-cost search. It is used when the cost of the search is large compared with the cost of the optimal solution. In this case, instead of choosing the vertices least distant from the start, as in the algorithms of Moore and Dijkstra, the vertex with the smallest heuristic estimate of the distance to the goal is chosen. With a good estimate a solution can be obtained quickly, but there is no guarantee that the path will be minimal.

The algorithm of Hart, Nilsson, and Raphael. The algorithm combines both criteria: the cost g(x) of the path to the vertex and the estimated cost h(x) of the path from the vertex to the goal enter the additive evaluation function f(x) = g(x) + h(x). Provided that h(x) ≤ h*(x), where h*(x) is the actual distance to the goal, the algorithm guarantees that the optimal path is found.
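The evaluation function f(x) = g(x) + h(x) can be sketched as follows. The grid world, the Manhattan heuristic, and all names are illustrative assumptions; the heuristic never overestimates here, so the condition h(x) ≤ h*(x) holds.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Hart-Nilsson-Raphael search: expand the vertex with minimal
    f(x) = g(x) + h(x)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest known g(x) per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# 3x3 grid with unit-cost moves; Manhattan distance as the estimate h.
def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 3 and 0 <= y + dy < 3:
            yield (x + dx, y + dy), 1

goal = (2, 2)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
path = a_star((0, 0), goal, neighbors, h)
print(len(path) - 1)  # 4 - the optimal number of moves
```

With h(x) = 0 the scheme degenerates into Dijkstra's algorithm; with g(x) ignored it becomes the Doran-Michie strategy, which no longer guarantees optimality.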

Graph path-finding algorithms also differ in the direction of search. There are forward, backward, and bidirectional search methods. Forward search proceeds from the initial state and is used, as a rule, when the goal state is specified implicitly. Backward search proceeds from the goal state and is used when the initial state is specified implicitly and the goal state explicitly. Bidirectional search requires a satisfactory solution of two problems: when to change the direction of search, and how to optimize the "meeting point". One criterion for the first problem is to compare the "width" of the search in the two directions: the direction that narrows the search is chosen. The second problem arises because the forward and backward paths may pass each other, and the narrower the search, the more likely this is.

Solving problems by the method of reduction.

This method gives good results because problem solving often has a hierarchical structure. However, it is not necessary to require that the main task and all its subtasks be solved by the same methods. Reduction is useful for representing the global aspects of a problem, while for solving more specific subproblems the state-space planning method is preferable. The state-space method can be considered a special case of planning by reduction, since each application of an operator in the state space reduces the initial problem to two simpler ones, one of which is elementary. In the general case, the reduction of the original problem does not come down to forming two subtasks of which at least one is elementary.

Search in planning in the task space consists in successively reducing the initial task to ever simpler ones until only elementary tasks are obtained. A partially ordered set of such tasks constitutes the solution of the original problem. The division of a task into alternative sets of subtasks is conveniently represented by an AND/OR graph. In such a graph every vertex, except the terminal ones, has either conjunctively connected child vertices (an AND-vertex) or disjunctively connected ones (an OR-vertex). In the particular case where there are no AND-vertices, the graph is a state-space graph. Terminal vertices are either final (corresponding to elementary tasks) or dead ends. The initial vertex (the root of the AND/OR graph) represents the original task. The purpose of searching an AND/OR graph is to show that the initial vertex is solvable. Solvable vertices are: the final vertices; AND-vertices all of whose child vertices are solvable; and OR-vertices at least one of whose child vertices is solvable. The solving graph consists of solvable vertices and indicates how the initial vertex is solved. The presence of dead-end vertices leads to unsolvable vertices: dead ends are unsolvable, as are AND-vertices at least one of whose child vertices is unsolvable and OR-vertices all of whose child vertices are unsolvable.
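The solvability rules above translate directly into a recursive check. The graph below, with one dead-end subtask, is an illustrative assumption.

```python
def solvable(graph, elementary, vertex):
    """A vertex is solvable if it is a final (elementary) terminal,
    an AND-vertex with all children solvable, or an OR-vertex with
    at least one solvable child."""
    kind, children = graph.get(vertex, (None, []))
    if not children:                       # terminal: final or dead end
        return vertex in elementary
    if kind == "and":
        return all(solvable(graph, elementary, c) for c in children)
    return any(solvable(graph, elementary, c) for c in children)  # OR-vertex

graph = {
    "root": ("or", ["plan1", "plan2"]),
    "plan1": ("and", ["t1", "dead_end"]),  # unsolvable: one subtask is a dead end
    "plan2": ("and", ["t1", "t2"]),        # solvable: both subtasks elementary
}
elementary = {"t1", "t2"}
print(solvable(graph, elementary, "root"))  # True, via plan2
```

The recursion mirrors the definition exactly: `all(...)` for AND-vertices, `any(...)` for OR-vertices, and membership in the set of elementary tasks for terminals.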

The algorithm of Chang and Slagle. It is based on transforming an arbitrary AND/OR graph into a special OR graph, each OR-branch of which has AND-vertices only at its end. The transformation treats an arbitrary AND/OR graph as a formula of propositional logic and converts that formula into disjunctive normal form. Such a transformation makes it possible to then apply the algorithm of Hart, Nilsson, and Raphael.

The key-operator method. Let the task <A, B> be given, and suppose it is known that an operator f must necessarily be part of the solution of this task. Such an operator is called key. Suppose that applying f requires a state C, and that the result of its application is f(C). Then the AND-vertex generates three child vertices: <A, C>, <C, f(C)>, and <f(C), B>, of which the middle one is an elementary task. Key operators are then matched to the tasks <A, C> and <f(C), B> as well, and the reduction procedure is repeated as long as possible. As a result, the initial task <A, B> is divided into an ordered set of subtasks, each of which is solved by the state-space planning method.

Alternatives are possible in the choice of key operators, so in the general case an AND/OR graph arises. In most tasks it is possible not to single out the key operator itself but only to indicate a set containing it. In that case, for the task <A, B> the difference between A and B is computed, and the operator that reduces this difference is assigned to it; that operator is taken as the key one.

The planning method of the General Problem Solver (GPS). GPS was the first and best-known model of a planner. It was used to solve problems of integral calculus, logical inference, grammatical analysis, and others. GPS combines two basic search principles: means-ends analysis and recursive problem solving.

In each search cycle GPS solves, in a rigid sequence, three types of standard task: transform object A into object B; reduce the difference D between A and B; apply operator f to object A. The solution of the first task determines the difference D; of the second, a suitable operator f; of the third, the condition C required for its application. If C does not differ from A, the operator f is applied; otherwise C is put forward as a new goal and the cycle repeats, starting with the task "transform A into C". On the whole, the GPS strategy performs a backward search, from the given goal B to the means C required to achieve it, using the reduction of the original task ⟨A, B⟩ to the tasks ⟨A, C⟩ and ⟨C, B⟩.
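The cycle just described can be caricatured in a few lines. In this toy sketch the "difference" is a numeric gap, each operator reduces the difference by a fixed amount and has a condition of application; the domain, the operator table and the relevance test are all illustrative assumptions, not the actual GPS machinery:

```python
# Means-ends analysis on numbers: pick the operator most relevant to the
# remaining difference, check its condition of application, and recurse.

OPS = [
    # (name, amount of difference it reduces, condition of application)
    ("add10", 10, lambda s: s % 10 == 0),
    ("add1",  1,  lambda s: True),
]

def solve(state, goal, depth=30):
    """Return a list of operator names transforming state into goal."""
    if depth == 0 or state > goal:
        return None                      # dead end
    if state == goal:
        return []                        # no difference left
    for name, step, pre in OPS:          # try the biggest reduction first
        if goal - state >= step and pre(state):
            rest = solve(state + step, goal, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

print(solve(0, 21))  # ['add10', 'add10', 'add1']
```

Note how the failed attempt to apply `add10` from state 20 (it would overshoot the goal) forces backtracking to the weaker operator, mirroring the recursive subgoaling of GPS.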

Note that GPS tacitly assumes the differences to be independent of one another, whence the guarantee that reducing some differences will not lead to an increase in others.

3. Planning with inference. Such planning assumes that states are described as well-formed formulas (wffs) of some logical calculus, and that operators are described either as wffs or as rules for transforming some wffs into others. Representing operators as wffs makes it possible to build deductive planning methods; representing them as transformation rules gives planning methods with elements of deductive inference.

The deductive planning method of the QA3 system. GPS did not justify the hopes placed on it, mainly because of its unsatisfactory representation of tasks. An attempt to rectify the situation led to the creation of the question-answering system QA3. The system is designed for an arbitrary subject area and is able, by logical inference, to answer the question: can state B be reached from state A? The resolution principle is used as the method of automatic deduction. To direct the logical inference, QA3 applies various strategies, mainly of a syntactic nature, that take into account the peculiarities of the resolution-principle formalism. Experience with QA3 showed that inference in such a system is slow and overly detailed, which is not typical of human reasoning.

The STRIPS production method. In this method an operator is represented by a production P, A => B, where P, A and B are sets of wffs of the first-order predicate calculus: P expresses the conditions of applicability of the core A => B, and B contains the list of wffs to be added and the list of wffs to be excluded, i.e. the postconditions. The method repeats the GPS method, with the difference that the standard tasks of determining differences and applying suitable operators are solved on the basis of the resolution principle. A suitable operator is selected, as in GPS, on the basis of means-ends analysis. This combined planning method made it possible to confine logical inference to a single description of the state of the world, leaving the generation of new such descriptions to the heuristic "from the goal to the means of achieving it".
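The effect of a STRIPS-style operator on a state is easy to show concretely. Here a state is a set of ground facts and an operator carries a precondition set, an add list and a delete list; the blocks-world facts and names are illustrative assumptions:

```python
# Apply a STRIPS-style operator: check preconditions, then remove the
# delete list and insert the add list to obtain the successor state.

def applicable(state, op):
    return op["pre"] <= state            # all preconditions hold in state

def apply_op(state, op):
    """Return the successor state; the input state is left unchanged."""
    assert applicable(state, op)
    return (state - op["delete"]) | op["add"]

pickup_A = {
    "pre":    {"ontable(A)", "clear(A)", "handempty"},
    "delete": {"ontable(A)", "clear(A)", "handempty"},
    "add":    {"holding(A)"},
}

state = {"ontable(A)", "clear(A)", "handempty", "ontable(B)", "clear(B)"}
print(sorted(apply_op(state, pickup_A)))
# ['clear(B)', 'holding(A)', 'ontable(B)']
```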

The method of productions using macro-operators [Fikes et al., 1973]. Macro-operators are generalized solutions of problems obtained by the STRIPS method. Using macro-operators reduces the search for a solution; however, it raises the problem of simplifying the applied macro-operator, which consists in identifying the part of it required for a given difference and excluding the unnecessary operators.

Solving the problems of deductive choice

In deductive models of knowledge representation and processing, the problem to be solved is written as a statement of a formal system: the goal is a statement whose validity must be established or refuted on the basis of the axioms (general laws) and inference rules of the formal system. The first-order predicate calculus is used as the formal system.

In accordance with the rules established in the formal system, the final statement (the theorem) obtained from the initial system of statements (axioms, premises) is assigned the value TRUE if each premise (axiom) is also assigned the value TRUE.

An inference procedure is a procedure that derives from a given group of expressions a different expression.

Usually predicate logic employs a formal method of proving theorems, which admits machine implementation; but proof in a non-axiomatic way is also possible: by forward inference or by backward inference.

The resolution method is used as a general-purpose (formal) method for proving theorems.

To apply this method, the initial group of given logical formulas must be transformed into a certain normal form. This conversion is carried out in several stages, which form part of the inference engine.
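Once the formulas are in clausal normal form, a single resolution step is simple to state in code. In this sketch a clause is a frozenset of literals and `~p` denotes the negation of `p`; the clause contents are illustrative assumptions:

```python
# One resolution step on propositional clauses, plus a tiny refutation:
# deriving the empty clause establishes that the premises are contradictory.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (possibly none)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

# Refute { p }, { ~p or q }, { ~q }:
kb = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]
step1 = resolve(kb[0], kb[1])[0]     # resolvent: {'q'}
step2 = resolve(step1, kb[2])[0]     # resolvent: the empty clause
print(step2 == frozenset())          # True -> contradiction derived
```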

Problem solving using non-monotonic logics, probabilistic logics.

The data and knowledge one has to deal with in AI systems are seldom absolutely accurate and reliable. The uncertainty inherent in knowledge can be diverse, and a wide range of formalisms is used to describe it. Consider one type of uncertainty in data and knowledge: their imprecision. We shall call a statement imprecise if its truth (or falsehood) cannot be established with certainty. The fundamental concept in constructing models of imprecise inference is probability; therefore all the methods described below are connected with the probabilistic concept.

A model for operating with imprecise data and knowledge includes two components: a language for representing imprecision and a mechanism for inference with imprecise knowledge. To build the language one must choose a form for representing imprecision (for example, a scalar, an interval, a distribution, a linguistic expression, a set) and provide for the possibility of assigning a measure of imprecision to every statement.

Mechanisms for operating with imprecise statements can be divided into two types. The first comprises mechanisms of an "attached" nature: the recalculation of measures of imprecision accompanies, as it were, the process of inference conducted on the precise statements themselves. To develop an attached model of imprecise inference in a rule-based system, one must specify recalculation functions that make it possible to compute: (a) the measure of imprecision of a rule's antecedent (its left-hand side) from the measures of imprecision of its constituent statements; (b) the measure of imprecision of a rule's consequent (its right-hand side) from the measures of imprecision of the rule and of its premise; (c) the combined measure of imprecision of a statement from the measures obtained via different rules.

Introducing a measure of imprecision brings something fundamentally new into the inference process: the possibility of combining the strength of several pieces of evidence confirming or refuting the same hypothesis. In other words, when measures of imprecision are used, it makes sense to derive the same statement in different ways (with subsequent combination of the imprecision values), which is completely meaningless in traditional deductive logic. Combining evidence requires a combination function, which plays a central role in the model. It should be noted that, despite the "attached" nature of inference mechanisms of this type, their implementation in knowledge bases influences the overall inference strategy: on the one hand, a hypothesis must be derived in all possible ways so as to take into account all the evidence relevant to it; on the other hand, the repeated influence of the strength of the same evidence must be prevented.
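The three recalculation functions (a)-(c) and the combination function can be illustrated with certainty-factor-style measures in [0, 1]. The min / product / noisy-or choices below are common conventions, assumed here for the sketch, not the only possible ones:

```python
# Recalculation functions for an "attached" model of imprecise inference.

def antecedent_measure(premise_cfs):
    # (a) conjunction of the premises: take the weakest
    return min(premise_cfs)

def consequent_measure(rule_cf, premise_cf):
    # (b) attenuate the rule's own certainty by that of its premise
    return rule_cf * premise_cf

def combine(cf1, cf2):
    # combination function: merge two independent derivations of the
    # same statement (probabilistic "noisy-or")
    return cf1 + cf2 - cf1 * cf2

p = antecedent_measure([0.8, 0.9])   # certainty of the rule's left side
c = consequent_measure(0.7, p)       # certainty passed to the conclusion
print(round(combine(c, 0.5), 2))     # merged with independent evidence 0.5
# 0.78
```

Note that `combine` is commutative and associative, so the order in which independently derived pieces of evidence arrive does not affect the final measure, but feeding in the same evidence twice would wrongly inflate it, which is exactly the double-counting problem mentioned above.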

Mechanisms of the second type for operating with imprecise statements are characterized by inference schemes specifically oriented to the imprecision-representation language used. As a rule, each inference step is accompanied by a recalculation of the measures of imprecision determined by a relation on the set of statements (the relation may be an elementary logical connective, regardless of whether it is a fragment of a rule). Thus, mechanisms of the second type are applicable not only to knowledge expressed in the form of rules. At the same time, for them, as for mechanisms of the "attached" type, one of the main problems is that of combining evidence.

