The structure of the semantic neural network extracting real-time meaning from the text

Lecture




A semantic neural network can be regarded as a realization of the algebra of logic. The operations of logic algebra are represented in such a network by individual neurons performing disjunction, conjunction, and inversion; the values of the subject variables take the form of gradient values processed by the neural network; and the sequence in which the operations are applied is determined by the structure of connections between neurons. Individual neurons in such a network represent the elementary concepts (predicates) of the meaning being processed, and the connections between neurons represent the elementary relations between those concepts.
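This correspondence between neurons and logic operations can be sketched as follows. This is a minimal illustration, assuming gradient truth values in [0, 1] and the usual min/max/complement interpretation of the fuzzy operations, which the lecture does not fix explicitly:

```python
# Sketch: neurons as logic-algebra gates over gradient truth values in [0.0, 1.0].

def conjunction(*inputs):
    """AND-neuron: fires only as strongly as its weakest input."""
    return min(inputs)

def disjunction(*inputs):
    """OR-neuron: fires as strongly as its strongest input."""
    return max(inputs)

def inversion(x):
    """NOT-neuron: inverts the gradient truth value."""
    return 1.0 - x

# The structure of connections fixes the order of operations:
# here, (a OR b) AND (NOT c).
a, b, c = 0.9, 0.2, 0.25
print(conjunction(disjunction(a, b), inversion(c)))  # 0.75
```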

When a semantic neural network is implemented on sequential digital computers, the problem arises of synchronizing a huge number, on the order of several million, of concurrent data-processing processes. To solve this problem, the concepts of synchronized and unsynchronized neurons are introduced. Unsynchronized neurons process their input data continuously and output their results continuously. Synchronized neurons also output their result continuously, but they process input data only in certain time quanta; the moment of activation is determined by a special synchronization input.
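The distinction between the two neuron kinds can be sketched like this. The class names and the sync flag are illustrative assumptions, not part of the original design:

```python
# Sketch of the two neuron kinds: an unsynchronized neuron recomputes its
# output on every call, while a synchronized neuron samples its inputs only
# when a sync pulse arrives and holds the result between pulses.

class UnsyncNeuron:
    def __init__(self, fn):
        self.fn = fn

    def output(self, *inputs):
        return self.fn(*inputs)           # processes continuously

class SyncNeuron:
    def __init__(self, fn):
        self.fn = fn
        self.state = 0.0                  # last latched result

    def output(self, *inputs, sync=False):
        if sync:                          # process only on the sync pulse
            self.state = self.fn(*inputs)
        return self.state                 # output is always available

n = SyncNeuron(min)                       # a synchronized conjunction
print(n.output(1.0, 1.0))                 # 0.0 -- no pulse yet
print(n.output(1.0, 1.0, sync=True))      # 1.0 -- latched on the pulse
print(n.output(0.0, 1.0))                 # 1.0 -- inputs ignored between pulses
```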

Natural-language text is an ordered stream of characters. Understanding the meaning of a text is a process that unfolds in real time: characters are processed sequentially, one after another, in the order determined by their position in the text. As new information arrives, the meaning of the already-processed data is merely refined, usually without reprocessing the data already received. A neural network does not perceive and process symbols directly, so methods are needed to represent symbolic information formally in the network, to transform the input symbol sequence into this formal representation, and to process the symbolic information in formalized form. A method is also needed for outputting the processing results in a form useful to their recipient, for example as text, that is, as an ordered stream of characters.

The input stream of characters can be transformed into states of the neural network by receptors. A receptor converts an external input into a signal that is then applied to the dendrites of neurons. Receptors that recognize the same type of external influence, such as text symbols, are organized into specialized receptor layers. The input character stream is fed to a layer of receptor neurons, where each receptor recognizes exactly one character of the input alphabet and ignores all others. When its symbol is recognized, a gradient value corresponding to the level of successful recognition, for example the value of "logical truth", is set at the receptor's output. Individual characters are recognized as text arrives from the input stream.
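A receptor layer of this kind amounts to a one-hot encoding of the current character. A minimal sketch, with an assumed toy alphabet:

```python
# Sketch of a receptor layer: one receptor per alphabet character; each
# receptor outputs "logical truth" (1.0) only when the incoming character
# is its own, and "false" (0.0) otherwise.

class ReceptorLayer:
    def __init__(self, alphabet):
        self.alphabet = alphabet

    def recognize(self, char):
        # one-hot gradient vector over the alphabet
        return {c: (1.0 if c == char else 0.0) for c in self.alphabet}

layer = ReceptorLayer("abcdm")
print(layer.recognize("m"))  # only the 'm' receptor fires
```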

If only one symbol is presented to the receptors per unit of time, then only one receptor neuron takes the logical value "true" while all the others remain "false"; consequently, only one character of the text is recognized per cycle. If necessary, several different characters can be applied to the receptor layer simultaneously, in which case several different receptors are activated at once.

Text-processing results can be output from the neural network by effectors. An effector converts axon signals into actions on the external environment. Effectors that implement the same type of action are organized into specialized effector layers. Thus, text can be output as a symbol sequence by an effector layer in which each neuron corresponds to one output symbol of the alphabet. In this case, only one effector per unit of time is activated to a gradient level, which corresponds to one symbol printed on the output terminal. Several concepts can also be aggregated by a single effector. For example, when an adjective is being output, the effector corresponding to "adjective" is in the active state while the one corresponding to "noun" is passive. In such a scheme, simultaneous activation of several effectors corresponding to different concepts is possible.
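The single-symbol-per-step behavior of such an effector layer can be sketched as picking the most strongly activated effector. The `emit` helper is a hypothetical name for illustration:

```python
# Sketch of an effector layer: each effector corresponds to one output
# symbol; per time step, the most strongly activated effector "prints"
# its symbol.

def emit(activations):
    """activations: dict symbol -> gradient level; returns the printed symbol."""
    symbol, level = max(activations.items(), key=lambda kv: kv[1])
    return symbol if level > 0.0 else None  # nothing printed if all passive

print(emit({"a": 0.0, "b": 0.9, "c": 0.1}))  # b
```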

Note that when two neural subnets are connected sequentially, the same neurons can serve as receptors for one subnet and as effectors for the other. This allows the process of extracting meaning from text to be divided into several relatively simple stages. Each stage is implemented by a separate subnet realized as a layer of neurons; the output of one subnet is simultaneously the input of the next.

In the meaning-extraction subnet, an individual neuron denotes an elementary concept at the processing stage to which its sublayer belongs. Elementary concepts are any natural-language concepts with a complete meaning: a symbol, a syllable, a word, a phrase, a sentence, a paragraph, the whole text. Different processing stages correspond to different levels of aggregation of elementary concepts, for example: symbol, syllable, word, phrase, and so on. When the corresponding concept is present in the analyzed text, the neuron takes the value "true"; when it is absent, "false".

As text analysis unfolds over time and new data arrives, waves of activity appear in the neural network, spreading from receptors to effectors. It can be postulated that one front of such a wave corresponds to the meaning of the text received by the network immediately before that front began. The end of text processing can then be taken as the moment when the wave front generated by the last character of the text reaches the effector layer. The result of meaning extraction is thus obtained on the effector layer in the form of the effectors' states.

After the text has been analyzed in the processing layer, the result is formed dynamically in the effector layer as the cumulative state of all neurons of that layer. Each neuron of the effector layer represents some elementary notion of the extracted meaning. For example, a neuron representing the concept "feminine noun" is activated when the wave front corresponding to the end of input of words such as "mom" or "car" (feminine nouns in the original Russian) reaches the effector layer.

The meaning-extraction layer can be constructed in various ways. Consider the linear tree as one way of building a neural network that extracts meaning from text. To simplify the synchronization of parallel processes, and to ease implementation on existing hardware, both synchronized and unsynchronized neurons are used. A processing layer built on the principle of a synchronized linear tree contains synchronized neurons performing conjunction and unsynchronized neurons performing disjunction and negation. The neurons are connected as a set of intersecting trees whose roots face the receptors and whose tops face the effectors. The processing layer is divided into sublayers, each carrying one front of the processing wave.

In a simplified synchronized linear tree that extracts individual words from the input character stream, all neurons perform a synchronized conjunction. The linear tree consists of sublayers, each corresponding to one front of the processing wave. The neurons of the first sublayer correspond to the first letter of a word, those of the second to the second letter, and so on; the total number of sublayers equals the maximum number of letters in a word. The first sublayer consists of neurons recognizing the first letter, the second sublayer of neurons recognizing the first two letters, the third the first three letters, and so forth. Each neuron has one input connection from the neuron of the previous sublayer corresponding to the preceding letters of the word, and one input connection from the receptor-layer neuron corresponding to the current letter.

The synchronization inputs of all neurons receive the same signal, "new symbol", which is not shown in the figure. The "new symbol" sync signal is produced by the receptor layer when the receptors have successfully recognized a newly arrived symbol. At the moment the synchronizing pulse arrives, all neurons of the simple linear tree process their input signals. Since these inputs were produced by other synchronized neurons during the previous clock cycle, the effect of a processing wave arises in the tree. Note that recognizing four words with a total of 4 + 4 + 6 + 6 = 20 characters required only 10 conjunction neurons, 6 receptors, and 4 effectors.
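The economy comes from words sharing prefix neurons, so the linear tree behaves like a trie. The following sketch counts conjunction neurons as distinct non-empty prefixes; the example words are illustrative stand-ins, not the words of the original figure:

```python
# Sketch: the synchronized linear tree as a prefix trie. Each conjunction
# neuron recognizes one prefix, so words sharing a prefix share neurons.

def conjunction_neurons(words):
    """Every distinct non-empty prefix needs one conjunction neuron."""
    prefixes = set()
    for w in words:
        for i in range(1, len(w) + 1):
            prefixes.add(w[:i])
    return prefixes

words = ["cat", "car", "carton", "cartel"]        # 3 + 3 + 6 + 6 = 18 letters
neurons = conjunction_neurons(words)
receptors = set("".join(words))                   # one receptor per letter
print(len(neurons), "conjunction neurons")        # 9, versus 18 letters in total
print(len(receptors), "receptors,", len(words), "effectors")
```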

The network presented so far lacks functions for determining types or classes of objects, such as "noun", "masculine gender", "feminine gender", and the like. Such type-determination functions are easily realized with an aggregating sublayer consisting of unsynchronized neurons performing disjunction and, where necessary, inversion.
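An aggregating type neuron can be sketched as a disjunction over the word neurons belonging to that type. The word activations and type membership below are hypothetical examples:

```python
# Sketch of an aggregating sublayer: an unsynchronized disjunction neuron
# fires when any word neuron of its type has fired.

def type_neuron(word_activations, members):
    """OR over the outputs of the word neurons belonging to one type."""
    return max(word_activations.get(w, 0.0) for w in members)

# Hypothetical word-level outputs after the wave front reaches the top:
words = {"truth": 1.0, "bitter": 0.0}
print(type_neuron(words, members=["truth", "car"]))  # the "noun" type fires: 1.0
```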

The outputs of all neurons representing complete concepts are combined by an unsynchronized neuron performing a disjunction. If needed, a synchronization signal is taken from its output, which assumes the value of logical truth whenever another recognized concept has formed at the output of the linear tree. Likewise, the outputs of all neurons of the same type are combined by an unsynchronized disjunction neuron responsible for recognizing that type.

Sublayers of unsynchronized neurons performing disjunction are placed between sublayers of synchronized neurons performing conjunction (Fig. 3). The result is a multilayer structure in which each wave-front sublayer is followed by an aggregation sublayer. Where necessary, unsynchronized neurons performing inversion are placed in the aggregation sublayer.

The synchronization input of each synchronized linear tree is connected to the sync output of the previous linear tree. For the current linear tree, each preceding network acts as a receptor layer and each subsequent network as an effector layer. Note that the object types formed by the type-recognition layer are processed by the next tree in the same way as ordinary words: linear trees handle stable combinations of types just as they handle stable combinations of words, for example "adjective + noun" or "bitter truth". The presence of unsynchronized disjunction sublayers makes it possible to obtain several solutions simultaneously.

This feature is an advantage of the proposed network structure, because it allows all possible extractions of meaning from the input character stream to be obtained simultaneously. It also ensures sequential processing of concepts whose level of abstraction rises with each linear tree: the first tree processes symbols, the second morphemes, the third words, the fourth phrases, and so on.
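The cascade of trees can be pictured as a pipeline in which each stage consumes the previous stage's output. This is only a conceptual analogy, not the authors' implementation; the stage functions are hypothetical stand-ins for linear trees:

```python
# Conceptual sketch of cascaded linear trees: the output of each stage is
# the input of the next, so the level of abstraction rises step by step.

def run_pipeline(stream, stages):
    """Feed a character stream through a chain of processing stages."""
    data = list(stream)
    for stage in stages:
        data = stage(data)          # each stage consumes the previous result
    return data

split_words = lambda chars: "".join(chars).split()  # "characters -> words"
count_words = lambda words: len(words)              # "words -> an aggregate"
print(run_pipeline("the bitter truth", [split_words, count_words]))  # 3
```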

Different states of the input symbol stream correspond to different states of the network. The meaning extracted from the processed part of the text within one time slice is the instantaneous state of all linear trees; this instantaneous state comprises a snapshot of the set of neurons, the set of connections between them, and the set of internal neuron states.

The structure of the semantic neural network considered here allows the input character stream to be processed in real time thanks to a high degree of parallelism in its computations. The latency of the processing result depends on the number of serially connected processing layers, not on the total number of neurons in the network.
created: 2014-09-23
updated: 2021-01-10


