7: Information Processes

Lecture



The concept of an information process

The world is immersed in information processes thanks to fields, among which the physical fields known to science are apparently a minority compared to those still unknown. But fields are only carriers of information, just as horses are carriers of riders, who in turn are themselves carriers (of clothes, bacteria, knowledge, ranks, etc.). The traditional concept of an information process presents it as the transfer of information by a carrier field.

Consider an elementary information process, one-way communication (shown as arrows connecting the source to the consumer of information through intermediary structures): source (meaning1 in latent symbols) → encoder (message as code-signs) → transmitter (message as a signal) → medium (signal plus interference) → receiver (message as codes) → decoder (meaning2 in symbols) → consumer. (The brackets indicate the form of the information at the output of the preceding structure and at the input of the next.) This process is fundamental because it is contained in ever more complex intersystem and intrasystem information processes (two-way, interactive, hierarchical, multiply connected, cellular, circular, ring, etc.). That is why we explore it.
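
A minimal sketch of this one-way chain, written as code. All function names, the bit-level encoding and the "bit-flip" interference model are illustrative assumptions of the sketch, not the author's formalism; it only shows how meaning2 can differ from meaning1 once interference enters the medium.

    import random

    def encode(meaning: str) -> str:
        """Encoder: turn the source text (symbols) into material codes (here, bits)."""
        return ''.join(f'{byte:08b}' for byte in meaning.encode('utf-8'))

    def transmit(code: str) -> list:
        """Transmitter: put the code onto a carrier (here, +1/-1 signal levels)."""
        return [1 if bit == '1' else -1 for bit in code]

    def medium(signal: list, flip_probability: float = 0.01) -> list:
        """Medium: interference occasionally inverts a signal element."""
        return [-s if random.random() < flip_probability else s for s in signal]

    def receive(signal: list) -> str:
        """Receiver: recover code-signs from the (possibly distorted) signal."""
        return ''.join('1' if s > 0 else '0' for s in signal)

    def decode(code: str) -> str:
        """Decoder: turn received codes back into symbols ("meaning2")."""
        data = bytes(int(code[i:i + 8], 2) for i in range(0, len(code), 8))
        return data.decode('utf-8', errors='replace')

    meaning1 = "understanding"
    meaning2 = decode(receive(medium(transmit(encode(meaning1)))))
    print(meaning1, '->', meaning2)   # with interference, meaning2 may differ from meaning1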

The information process begins with the source presenting a part of its internal information (meaning1) at the request of the consumer or on the source's own initiative. "Meaning1" exists initially in the hidden (intangible) form of symbols. (There are many interpretations of the concept of "symbol." Here we follow the interpretations of A.F. Losev (the symbol as an "undeployed sign," an "ideal construction of a thing") and E. Cassirer (the symbol as "sensory perception of the ideal") only because we have no better term for the hidden signs, unknown to us, in which ideal meanings are stored.) If the information process manifests itself in material-energy form, this form first appears in the encoder, which converts intangible symbols into material codes (a sign interpretation of "meaning1") and forms a message from these codes. In other words, the internal ideal information of the source is manifested as external material-energy information at the coding stage. In energy-free (non-force) information processes the coding stage is hidden; the structures of the encoders and of the messages they form are, frankly, unknown to science. Let us suggest that there may simply be no need for encoders (and coding) there: the symbols perform the functions of codes directly, without intermediate transformations.

Let us return to ordinary material-energy processes. The coded message arrives at the transmitter, whose task is to form a signal as the cooperation of a material-energy carrier and the message (the "horse and rider" tandem). For example, when the electromagnetic field is used as a carrier, the transmitter contains a generator of oscillations (waves) of that field and a modulator, whose input receives the coded message from the output of the encoder. As a result of the joint operation of the generator and the modulator, a signal appears at the transmitter output carrying the message: field oscillations modulated by the codes. In non-force information processes the signals are energy-free or low-energy (imperceptible), if the carrier of the message is an information field.
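
A hedged sketch of the generator-plus-modulator cooperation just described: a sinusoidal carrier (the "horse") is amplitude-modulated by the coded message (the "rider"). The carrier frequency, sample rate and message bits are assumed example values.

    import math

    message_bits = '10110'
    carrier_freq = 5.0          # carrier cycles per bit interval (assumed)
    samples_per_bit = 40

    signal = []
    for k, bit in enumerate(message_bits):
        amplitude = 1.0 if bit == '1' else 0.3      # the code changes the carrier amplitude
        for n in range(samples_per_bit):
            t = (k * samples_per_bit + n) / samples_per_bit
            signal.append(amplitude * math.sin(2 * math.pi * carrier_freq * t))

    print(len(signal), 'signal samples; the peak amplitude in each bit interval carries the message')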

The medium is the region of space-time between the transmitter and the receiver where the signal encounters interference, the disturbing effects of the medium on signals. Interference is of varied nature, and it is impossible to avoid its distorting effect on signals completely ("see sections 7.3, 7.5"). In information theory the medium is often identified with the concept of a communication line, and the combination of transmitter, communication line and receiver is called a communication channel.

The signal distorted by interference arrives at the receiver input; the receiver's task is to restore, as far as possible, the code-signs of the original message for their subsequent decoding. However, the recovered codes are most likely not the ones in the original message, and not only because of interference in the communication line ("see section 7.4"). Therefore, at the output of the decoder, which transforms the received code-signs into information symbols, a "meaning2" is formed that does not coincide with the "meaning1" of the initial information sent by the source (meaning2 ≠ meaning1). The problem of meaning is aggravated by the fact that the consumer not only discovers the meaning of the source information but is also able to bring his own meaning into the decoded "meaning2".

From the communication scheme presented, whose boundary structures are the source and the consumer of information, and whose boundary information phenomena are the initial and final meanings, it follows that communication as a process is the act of bringing the consumer to an understanding of the information source, or, more briefly, the establishment of the consumer's understanding of the source. Communication as a result is the consumer's understanding of the source. In two-way communication mutual understanding is established: in such a process each of the two interacting sources is also a consumer of information. In interactive communication among several sources of information, the information process "splits" into the corresponding number of one-way or two-way communication subprocesses with the establishment of individual or collective understanding (mutual understanding).

Hence the problem of communication is the hermeneutic problem of understanding. Complete (undistorted) understanding, the purpose of communication, is possible only given a) the adequacy of the codes to the symbols, b) the adequacy of the messages carried by the signals to the coded messages, c) the adequacy of the received messages to the transmitted ones and, finally, d) the adequacy of the meanings of the received and the sent information.

These statements are to some extent at variance with the generally accepted notion of communication as mere connection (transfer). Perhaps such a notion is due to the polysemy recorded in dictionaries: in particular, more than a dozen Russian words correspond to the English word "communication," including "connection" (as a process). This view may also have migrated from cybernetics and the theories of information and communication, which are firmly oriented toward the procedural role of communication as the transfer of messages. The philosophical-hermeneutic concept of communication presented above seems to us wider than its procedural interpretation.

Thus, the information process is the establishment of the consumer's understanding of the source of information, or the establishment of mutual understanding between sources. Understanding is a communicative process for which reliability and speed are important.

Coding

The first converter in the scheme of the information process is the encoder. (Encoders are included in the connecting (communication) structures of both natural and artificial systems, acquiring the most bizarre and often latent forms, at which we can only guess.) "What you sow, you reap": for the information process this means that how you encode information at the beginning determines how it will be understood at the end. It is important that the encoder language be consistent with the source language (preserve the informational diversity of the source), be understandable to the transmitter (coordinated with it) and at the same time be simple and concise. A simple code is simple to implement; a concise code is needed to ensure the speed of the information process. The conciseness of the code (the brevity of its code combinations in messages) is extremely important for "mortal" systems that exist in finite time. Their life depends on the duration of information processes: the speed of development in a hostile environment, competitiveness, speed of reaction to disturbances, adaptability. The more concise the coded messages in the information process, the faster the process and the more viable the system; the longer the messages, the more vulnerable the system. On the other hand, the simplicity of the code and the brevity of the coded messages contradict each other.

Example 1. In living systems, from the biological cell upward, there is a tendency toward increasing complexity of codes (in terms of alphabet size): from the genetic code (four nucleotide signs) to the protein code (20 amino acid signs), from protein to tissue codes, from tissue to organ codes, and further up to the communication languages of populations. This is natural: the more complex the system, the richer its internal information and the more complex the code that represents this information. A simple code can encode a complex text, but it will require unreasonably long code combinations. It is enough to try composing an intelligible text in a two-letter alphabet of "a" and "b" to verify this. But "words should be cramped, thoughts spacious," and "brevity is the sister of talent." Therefore alphabetic and, even more so, hieroglyphic languages use complex codes to create moderately brief messages about the complex objects and phenomena of the surrounding world.

At each level of the hierarchy of biosystems the problem of code selection arises. Suppose that for an encoder a choice must be made among four types of integer codes: unary (sign 1), binary (signs 0, 1), quaternary (signs 0, 1, 2, 3) and decimal (signs 0 ... 9). (The code signs can in principle be arbitrary; here we use those generally accepted in the theory of numerical codes.) Let us represent the number "seven" in each code: 1111111 in unary, 111 in binary, 13 in quaternary, 7 in decimal. A decimal digit is more informative than the signs of the other codes, so fewer digits are needed to encode the same information in decimal than in simpler codes whose signs are less informative. But the decimal encoder is more complicated in structure than the other three: its implementation requires structures that can reliably recognize ten signs. A binary encoder needs a structure that confidently recognizes only two signs (in a modern computer, a flip-flop with two stable states; in a nanocomputer, a microparticle with two spin states, left and right). And the unary encoder has to recognize only one sign - nothing could be simpler!
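
An illustrative sketch of the same comparison in code: the same number written in codes of different bases. The longer the representation, the simpler the alphabet the encoder must recognize.

    def to_base(n: int, base: int) -> str:
        if base == 1:                      # unary: n repetitions of the single sign "1"
            return '1' * n
        digits = ''
        while n:
            digits = str(n % base) + digits
            n //= base
        return digits or '0'

    for base in (1, 2, 4, 10):
        code = to_base(7, base)
        print(f'base {base:2}: {code:>8}  (length {len(code)}, alphabet size {base})')
    # base 1: 1111111 (length 7) ... base 10: 7 (length 1)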

Thus, the simpler the code, the longer the code combinations, and the longer it takes to transmit them, the longer the information process. For biosystems struggling for survival it is of fundamental importance that intrasystem information processes proceed as quickly as possible: a late response of the system to environmental disturbances is deadly. But for the speed of vital processes biosystems have to pay with the complexity of their encoders, which, in turn, is dictated by the need to preserve information (hereafter "preservation" is understood as the identity of the code mapping to the primary information being represented).

To overcome this contradiction, nature, when creating its encoders, "solved, without knowing it," optimization problems:

  1. for the genetic encoder, the quaternary code was chosen, the most concise of the triad of optimal integer codes known in computer science (binary, ternary and quaternary); a numerical sketch of this criterion is given after the list (strictly speaking, of the integer codes the ternary code is optimal, while the other two are quasi-optimal, slightly inferior to the ternary code by the multiplicative optimality criterion "alphabet size × conciseness"; the theoretically optimal code base is e ≈ 2.718281828459...);
  2. for the remaining encoders it "chose" codes that maximize the speed of the processes for a given level of information preservation.
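
The numerical sketch promised above uses the standard "radix economy" cost model, alphabet size multiplied by message length; this cost model is an assumption of the sketch, a common textbook reading of the criterion, not necessarily the author's exact formula.

    import math

    def digits_needed(value: int, base: int) -> int:
        """Number of base-`base` digits needed to write `value`."""
        d = 0
        while value:
            value //= base
            d += 1
        return max(d, 1)

    N = 10**6                              # number of distinct messages to encode (assumed)
    for base in (2, 3, 4, 10):
        length = digits_needed(N - 1, base)
        print(f'base {base:2}: {length:2} digits, cost (base * length) = {base * length}')
    print('the real-valued optimum of this cost lies at base e =', round(math.e, 4))
    # base 3 gives the lowest cost; bases 2 and 4 are quasi-optimal, base 10 is far worse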

Are the "fast" imaginative, intuitive, sub- and supra-conscious codes acceptable as solutions to the second optimization problem - the languages of parapsychological communication in the dialogues "person - person" and "person - information field," with code symbols in the form of gestalts (integral images) and with messages in the form of sequences of associative gestalts? After all, a gestalt is more informative than any numerical code.

Example 2. In the frequency dictionaries of natural sublanguages (colloquial, journalistic, literary, scientific-technical) and, therefore, in the language as a whole, the highest frequencies of use correspond to the shortest parts of speech: prepositions, conjunctions, articles, particles, pronouns, interjections. Perhaps this effect is a legacy of the animal genesis of man. Indeed, in the languages of the fauna short "word forms," in particular interjections, also dominate in frequency, which is confirmed by zoopsychology. Thus, in linguistics the imperative of communication speed can be traced (at a certain acceptable level of preservation of the encoded information).

Example 3. The "genetic code" of modern computers is the binary code, which is inferior in speed to the natural genetic code. Did the design engineers not know this, and did they turn out to be less clever than nature? They knew it all, but they had far less time to choose a code than nature had. They had to choose the code that was simplest to implement in the pulse technology of the twentieth century and well known in discrete mathematics, and that turned out to be the quasi-optimal binary code. (The optimal ternary code was implemented in only two computers in the entire history of computing technology (USSR, Japan). But the ternary encoder proved so unreliable (in recognizing three states), and ternary arithmetic so undeveloped, that the idea of a ternary computer had to be abandoned until better times.) In the binary code understandable to a computer, texts, decimal numbers, special characters (linguistic, mathematical), images, sounds, etc. are encoded. In general, the artificial "second-generation" gene technologies are still very far from the analogous natural technologies. Imagining 10-sign, 27-sign, 33-sign codes and encoders is a nightmare for computer developers!

Example 4. In communication technology, so-called effective (economical) codes (Morse code, Shannon–Fano codes, alphabetic keyboards, etc.) are used to encode texts. The length of a code combination of an effective code depends on the frequency with which the letters of the alphabet and punctuation marks are used in the text: the higher the frequency, the shorter the combination. The goal of effective coding is still the same communication speed; a code is the more effective, the shorter the messages it encodes.
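
A hedged sketch of effective coding: frequent symbols receive shorter code combinations. Huffman coding is used here as a stand-in for the Shannon–Fano-type codes named above (the principle is the same); the sample phrase and function name are illustrative assumptions.

    import heapq
    from collections import Counter

    def huffman_code(text: str) -> dict:
        """Build a variable-length binary code: frequent characters get shorter codes."""
        freq = Counter(text)
        heap = [(f, i, {ch: ''}) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)        # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            merged = {ch: '0' + code for ch, code in c1.items()}
            merged.update({ch: '1' + code for ch, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2]

    text = "speed of communication matters"
    codes = huffman_code(text)
    for ch, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
        print(repr(ch), code)                      # more frequent characters, shorter codes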

Example 5. On the other hand, a code must have some redundancy (for example, through the repetition of code signs or the insertion of separators between information signs) in order for the transmitted message to withstand the destructive influence of interference in the communication channel (error-correcting coding). The concept of the information redundancy of messages (in the above sense) is not a sophisticated technical invention but a reasonable imitation of the usual manifestations of redundancy, be it a volcanic eruption or a politician's speech, a turn of a wheel or a paradigm shift, a thought or a word. Noise-resistant coding lengthens the code combinations and thereby reduces the speed of communication, but it increases its reliability. Hence noise-resistant and effective coding are mutually contradictory.
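
A minimal sketch of noise-resistant coding by redundancy: each bit is repeated three times and restored by majority vote, so the message becomes three times longer (slower) but survives most single flips. The channel model of independent bit flips is an illustrative assumption.

    import random

    def encode3(bits: str) -> str:
        return ''.join(b * 3 for b in bits)                 # message becomes 3x longer

    def noisy(bits: str, p: float = 0.05) -> str:
        return ''.join(b if random.random() > p else '10'[int(b)] for b in bits)

    def decode3(bits: str) -> str:
        triples = [bits[i:i + 3] for i in range(0, len(bits), 3)]
        return ''.join('1' if t.count('1') >= 2 else '0' for t in triples)

    message = '1011001110'
    received_plain = noisy(message)                         # unprotected: every flip is an error
    received_coded = decode3(noisy(encode3(message)))       # redundant code usually survives
    print(message, received_plain, received_coded)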

From the above examples it follows that the art of coding information consists in choosing optimal strategies within a circle of mutual contradictions in the choice of the code base, the coding methods and the requirements imposed on the encoder.

Separate attention should be paid to the problem of the consistency of the encoder language with the source language. This problem is insoluble when the encoder language is discrete and the source language is continuous. Signs are fundamentally discrete, while symbols need not be. If so, the initial information (its meaning) may be subject to distortion starting from the coding stage (discreteness as a negative side of rational knowledge is discussed in Topic 6 (Section 6.3)), and at the subsequent stages other distorting factors come into effect.

Coding of information is an obligatory subprocess of any information process - field interactions, the language practices of nature, contemplation, actions, etc. The universality of coding determines its philosophical objectivity, regardless of the nature of the information process, especially since it is during coding that ideal internal information, materializing, manifests itself as external information; i.e., the coding mechanism works at the junction of the "subtle" (hidden) and "gross" (manifested) worlds.

Transmission

The "packaged" code enters the transmitter, where it is converted into a message ready for transmission in the form of a signal. From the transmitter output the signal enters the communication line, the propagation medium, where it is inevitably affected by interference.

Example 6. Internal noise is characteristic of any object. It is caused by the natural heat exchange of the elements of the object, the chaotic motion of charges during the ionization of atoms and molecules (thermal emission, injection, etc.), and the quantum nature of radiation. External noise is caused by similar processes in the propagation medium, as well as by interference of artificial origin. Noise is harmful to communication, and there are constant attempts to reduce it. However, our possibilities here are limited. We cannot reduce the temperatures of the participants in the information process and of the environment to absolute zero (-273.15 °C), at which heat exchange and the presence of free electrons would be excluded. (According to the third law of thermodynamics (W. Nernst), absolute zero temperature is unattainable.) After all, even the relict radiation of the universe, which is as old as the universe itself, has not cooled below -270 °C.
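
A hedged numerical illustration of why thermal noise cannot be eliminated: the available noise power of a resistive element is P = kTB (the Johnson–Nyquist formula); the bandwidth value below is an assumed example.

    k = 1.380649e-23          # Boltzmann constant, J/K
    bandwidth = 1e6           # receiver bandwidth, Hz (assumed)
    for T in (300.0, 77.0, 2.7):          # room temperature, liquid nitrogen, relict background
        print(f'T = {T:5.1f} K: noise power = {k * T * bandwidth:.3e} W')
    # the noise power vanishes only at T = 0 K, which the third law of thermodynamics forbids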

Inside every complex system, internal information processes take place that interfere with the external information processes in which the system participates. Thus a human being, as a complex system, is constantly subject to intrasystem interference at the level of tissues and organs, which exchange information with one another regardless of the will of their owner, even during sleep. The brain, as a complex system, constantly "makes noise," hindering the perception of information.

Besides objective natural interference, intentional (organized) interference of subjective origin can act on the information process. The security of information is constantly put to the test ("see section 7.5"). There are also other negative factors (besides interference) associated with the propagation medium. It is known that in N-dimensional macrospace the strength of the field interactions of objects is inversely proportional to the (N-1)-th power of the distance between them. In our three-dimensional macrospace (N = 3) this law reduces to the inverse-square law. (This law holds for the gravitational, electromagnetic and other fields acting in N-dimensional macrospace. In the microspace of atoms it is violated.) Signals in information communication channels also obey it: the signal power is inversely proportional to the square of the distance from the transmitter. In addition, any signal "gets bogged down" (loses power, including through distortion of its frequency spectrum) in the medium to a degree that depends on the nature of the medium (solid, air, water, organic, wired, etc.).
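
A hedged sketch of the distance law just described: in N-dimensional macrospace the signal weakens as 1/r^(N-1), which for N = 3 is the inverse-square law. The transmitter power and distances are assumed example values.

    def received_power(p_tx: float, r: float, n_dim: int = 3) -> float:
        """Power falling off as 1/r**(N-1), per the law stated in the text."""
        return p_tx / r ** (n_dim - 1)

    p_tx = 10.0                            # transmitter power, watts (assumed)
    for r in (1.0, 10.0, 100.0, 1000.0):
        print(f'r = {r:6.0f} m: {received_power(p_tx, r):.2e} W')
    # in three-dimensional space, doubling the distance cuts the received power to a quarter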

Besides the limitation imposed on the quality of communication by interference and the nature of the medium, the communication channel has another fundamental limitation: the finiteness of the spatio-temporal parameters of the transmitter and the receiver.

Example 7. The nature of the uncertainty relations in physics and its technical applications lies, by and large, in the spatio-temporal finiteness of physical objects. Such, for example, are the uncertainty relation in quantum physics (the impossibility of simultaneously exact measurement of the position and momentum of an elementary particle) and the uncertainty relation in echolocation (the impossibility of simultaneously exact measurement of the coordinates and velocity of the located object). In general, in echolocation systems (radars, sonars, bats, dolphins, etc.) absolutely exact measurement of coordinates (range, azimuth and elevation angle (height)) would require transmitters of infinite power, parabolic antennas with an infinitely large aperture, and absolutely noiseless receivers with an infinitely wide passband. Not all of these hypothetical devices are required together, but even one of them is enough to plunge designers into mystical horror. Another example: in mathematical statistics the unknown true mean value of a random variable is estimated by a confidence interval built on a limited sample of this variable. The limited size of the sample is due to the spatio-temporal framework of the collection of statistical data, be it in natural science or in social-humanitarian studies.
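
A hedged illustration of the statistical example above: a confidence interval for the unknown true mean, built from a finite sample. The sample values and the normal-approximation 95% quantile (1.96) are assumptions of the sketch.

    from statistics import mean, stdev
    from math import sqrt

    sample = [4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0, 4.6, 5.4]   # finite sample (assumed)
    m, s, n = mean(sample), stdev(sample), len(sample)
    half_width = 1.96 * s / sqrt(n)                               # normal approximation
    print(f'true mean lies in [{m - half_width:.3f}, {m + half_width:.3f}] with ~95% confidence')
    # the interval shrinks only as 1/sqrt(n): the finiteness of the sample keeps it open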

The transmission of a message in a communication channel is finite in time. Any finite process has a beginning and an end in the form of certain jumps (in the material world, from "non-being into being" (the beginning) or, conversely, from "being into non-being" (the end)). At the moments of ideal instantaneous jumps, when the duration of the jump is zero, the steepness of the process is infinite. This means that the frequency spectrum of such a process includes an infinite spectral component (frequency is the inverse of duration), and consequently the spectrum width of a process finite in time is infinite, which is unrealistic. If the spectrum width of the signal is limited to some finite maximum frequency inherent in the real frequency spectrum of the transmitter, then the signal, strictly speaking, has no initial and final jumps and is therefore infinite in time, which is also unrealistic. In other words, processes that are simultaneously finite in duration and in spectrum are unnatural in nature. But since both the duration and the spectrum of signals in material-energy communication channels are limited and finite, owing to the spatio-temporal finiteness of the channel parameters, exact reproduction of finite messages is impossible by any signals in any communication channel. Let us accept this proposition, which follows from the well-known Kotelnikov–Shannon theorem, as the signal uncertainty principle, and accept that even in an ideal (interference-free) communication channel the coded messages carried by signals are distorted in comparison with the same messages at the transmitter input (on distortions in the receiver, "see section 7.4").
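
In standard notation (an assumption of this sketch, not the author's own formulas), the duration–bandwidth trade-off and the Kotelnikov–Shannon sampling condition referred to above can be written as

\[
\Delta t \,\Delta f \;\gtrsim\; 1 ,
\]

so a signal cannot be strictly finite in duration and in spectrum at once, and

\[
f_s \;\ge\; 2 f_{\max}, \qquad
x(t) \;=\; \sum_{k=-\infty}^{\infty} x\!\left(\frac{k}{2 f_{\max}}\right)\operatorname{sinc}\bigl(2 f_{\max} t - k\bigr),
\]

where exact reconstruction of x(t) requires an infinite series of samples, unattainable in a channel with finite spatio-temporal parameters.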

It seems important for philosophical understanding to compare things that at first glance are incomparable: the signal uncertainty principle and the logic of communication. If a signal, in its duration and spectrum, does not contradict its own realizability, then it is incomplete in one of the senses (spectral or temporal) or in both at once. If the signal is complete in these senses, it is contradictory in its reality, for it cannot be really infinite simultaneously, or even separately, in time and/or in spectrum. This analogy between physics and logic (between the incompleteness and the contradictoriness of the signal, on the one hand, and of mathematical logic, on the other) suggests that the known physical uncertainty relations, which stem from the empirical realities of finite space-time and are generalized in the common scientific language of mathematics by Gödel's theorem on the incompleteness of arithmetic logic, lift the veil over the latent quasi-infinite "logic" of the Universe, given to us only partially in its finite (discrete) incompleteness and apparent contradictoriness. From the foregoing it also follows that the nature of the "logic" of the Universe is presumably informational and continual (continuous).

Reception and Decoding

The signal undergoes distortions in the receiver no smaller than those in the transmitter and the medium, for the probability of an absolutely exact (ideal) coincidence of the amplitude-phase-frequency characteristics of the transmitter and the receiver (or at least of an insignificant degradation of these characteristics in the receiver), as a guarantee of error-free reception of signals, is close to zero. As a result, the transmitted signal and the same signal as received never coincide. Thus messages finite in duration and spectrum are distorted, according to the signal uncertainty principle, twice: first in the transmitter, then in the receiver. (It is useful for scholars in the humanities to get used to the intrusion of down-to-earth scientific and technical terminology into the "existential empyrean." The principles of communication are universal, whether they concern radio communication, telepathy or intersubjective acts.)

To the spectral-temporal distortions of signals are added distortions and losses caused by the finite signal-to-noise ratio at the receiver input. The point is that the detection of a signal against a background of noise is possible only if the signal exceeds a detection threshold characteristic of receivers of any nature - electrical, electromagnetic, chemical, optical, vibrational, psychic, social, digital, analog, etc. Part of the signal, or even the whole signal, may turn out to be below the threshold and, accordingly, be lost to the receiver. The information carried by this signal will be distorted or lost to the consumer.

The threshold effect is objectively caused by the masking of weak (subthreshold) signals by noise and by external and intrasystem interference. To detect subthreshold signals one would have to increase the sensitivity of the receiver, lowering the detection threshold to the expected signal level, and consequently "let" noise and other interference into the terminal device. This is fraught with false triggerings of terminal devices by noise (interference). The latter is no less dangerous than missing signals, so it is advisable to increase the sensitivity of the receiver only up to a certain limit, which depends on the admissible probability of false triggerings at a given noise (interference) level. This limit is the signal detection threshold. Wishing to avoid false signals caused by interference, the consumer coarsens the receiver input: he raises the detection threshold, thereby degrading the receiver's sensitivity. As a result, together with the exclusion of noise and interference, subthreshold information is also lost. In complex systems with comparatively large internal energy-information noise (highly intelligent systems), the relative share of subthreshold information turns out to be larger than in simple low-noise (non-intelligent) systems, and complex systems lose more subthreshold information than simple ones. Such is the price of complexity and intelligence, which leads to the insensitivity of complex systems to the information of the "subtle worlds" - the information that may be accessible to our "lesser brethren" and only to rare humans.
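
A hedged Monte-Carlo sketch of this threshold trade-off: raising the detection threshold suppresses false alarms caused by noise but loses weak (subthreshold) signals. The noise level, signal level and threshold values are assumptions of the sketch.

    import random

    random.seed(1)
    noise_sigma, signal_level, trials = 1.0, 1.5, 20000

    for threshold in (0.5, 1.5, 2.5, 3.5):
        false_alarms = sum(random.gauss(0, noise_sigma) > threshold for _ in range(trials))
        misses = sum(signal_level + random.gauss(0, noise_sigma) <= threshold
                     for _ in range(trials))
        print(f'threshold {threshold}: false-alarm rate {false_alarms / trials:.3f}, '
              f'miss rate {misses / trials:.3f}')
    # a low threshold lets noise trigger the terminal device; a high one loses subthreshold information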

More general than the detection threshold is the concept of the threshold of signal discrimination. After all, detection is essentially nothing other than the discrimination (recognition) of signals and interference. The discrimination threshold is a manifestation of a very wide spectrum of philosophical relations: difference and identity, essential and inessential, definite and indefinite, discrete and continuous, finite and infinite. If thresholds of discrimination (detection) did not exist, information would not exist either, for it could not be distinguished from interference and disinformation, nor one signal from another. Communication as the transfer of informational diversity would be impossible, since the receiver would lack threshold criteria for selecting (recognizing) different signals. It is important to note that these thresholds are set by the consumer of information, for only the consumer decides what information matters to him.

Example 8. Just as the concepts of code and signal are philosophically broad and significant, so too is the concept of threshold. Thus one of the most important procedures of the development of systems - the selection of valuable information - is impossible without the limitation of diversity imposed by the selection criteria, which are physically realized through the thresholds of signal detection and discrimination. Thresholds protect the system by limiting the diversity of its input influences (disturbances) to the required value (in accordance with Ashby's law of requisite variety). Thresholds make it possible to separate significant information from insignificant, to recognize an information element among other information elements.

Example 9. The message carried by a signal is its variable (diverse) component, in contrast to the carrier itself, the constant (uniform) component of the signal. It is the variety that is informative. Communication as the establishment of understanding is the transfer of diversity. With regard to the signal this means that its variable, asymmetric components are informative, since the transferred diversity resides precisely in them. The constant, symmetric component, possessing no diversity, is therefore uninformative and, as insignificant for the consumer, is cut off in the receiver when the informative variable component significant to the consumer is detected. (Detection in the receiver is the inverse of modulation in the transmitter, i.e. demodulation, in which the cooperation of the carrier and the message in the received signal is destroyed. After detection the message is "free" of the carrier.) What diversity, what information can we speak of while observing on an oscilloscope screen a field carrier in the form of oscillations of constant amplitude and frequency, or while staring at a blank sheet of paper? At most we can register the presence of the carrier itself, whose diversity amounts to the two states "present - absent." A switch or a coin has the same diversity of states. This is clearly not enough to regard the switch and the coin - or, for that matter, the bare carriers of signals - as informationally developed systems.

So, the signal with its message is distorted and weakened in all structures of the communication channel: the transmitter, the communication line (medium) and the receiver. Moreover, the negative factors affect both the carrier and the message indiscriminately - they are indifferent to the "addressee." As a result, an objective need arises to restore and amplify the transmitted signals on the receiving side of the communication channel to a level above the threshold of signal discrimination (also known as the decoder response threshold). In other words, the receiver should be an amplifier of the signal and, as far as possible, a restorer of the original message.

The phenomenon of amplification is known in chemistry (catalysis, fermentation), biology (reproduction, growth), psychology (the development of intelligence), technology (communication, control), etc. Amplification of a signal does not mean only an increase in its amplitude or power - that is merely the particular physical aspect of the amplification of a signal propagating in space-time. The synergetic aspect of amplification is the growth of diversity (self-organization) and of intelligence (self-learning). Technologically, amplification can consist not only in increasing power but also in extracting (filtering) a latent signal from noise, latent information from information noise and disinformation, the essential from the inessential, the significant from the insignificant, the useful from the harmful, etc. In this sense we are dealing not only with amplification but also with the restoration of the message. In what follows we combine both receiver functions under the conventional concept of amplification, giving it a philosophical status (by analogy with code, signal and threshold).

If it is the diversity (complexity) and/or intelligence of a system that is amplified, then the potential limit of amplification is determined by the informational power (informativeness) of the system's "power source," namely by the diversity and intelligence of the environment. The general principle of amplification: a small amount of energy, used to transfer information, controls large masses and large amounts of energy. What is important here is that the energy carrier carries the information that initiates control - otherwise control fails. Information is the source of any control. In control theory the general principle of amplification is called the cybernetic principle of control.

Topic 2 (Section 2.2) gives an example of optimal filtering, well known in radio engineering, where it is often necessary to detect subthreshold signals hidden in noise. But subthreshold signals have to be dealt with not only by radio receivers.
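
A hedged sketch of optimal (matched) filtering: a known template is correlated with a noisy observation, lifting a signal hidden below the noise level above the decision threshold. The waveform, noise level and signal position are assumed values of the sketch.

    import random

    random.seed(2)
    template = [random.choice((-1, 1)) for _ in range(64)]        # known signal shape
    observation = [random.gauss(0, 1.0) for _ in range(200)]      # noise alone
    start, amplitude = 100, 0.8                                   # signal weaker than the noise
    for i, s in enumerate(template):
        observation[start + i] += amplitude * s                   # bury a subthreshold copy

    def matched_filter(x, h):
        """Correlate the observation with the known template (the optimal filter)."""
        return [sum(x[i + j] * h[j] for j in range(len(h)))
                for i in range(len(x) - len(h) + 1)]

    out = matched_filter(observation, template)
    peak = max(range(len(out)), key=out.__getitem__)
    print('correlation peak at sample', peak)     # expected at (or very near) sample 100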

Example 11. Each of us has experienced the recall of long-buried memory traces, and inspired creativity is familiar to many people, when information needed to solve a problem, usually inaccessible and hidden in information noise, reaches the creator by unknown paths. Does the self-tuning of mediums for interaction with subthreshold information mean precisely a concentrated internal tuning of a mental optimal filter? Do not geniuses, visionaries, telepaths, yogis, etc. possess this ability, perceiving the subliminal signals of the metaphysical information field of the Universe? The mechanisms of such perception are still unknown and are often ignored by the scientific community even in the face of obvious facts. Perhaps psychologists should look for the answer in technical communication and location systems, where these mechanisms have long been implemented? After all, the problem of the loss of subthreshold information is perhaps one of the most important in psychology. We believe that the effect of optimal filtering is essential for understanding non-standard information processes associated with the psychological perception of latent information, with penetration into the unconscious.

Hence a point of contact with I. Kant's apriorism, according to which in every act of cognition the cognizing subject possesses in advance certain forms, categories and archetypes that existed before this act, which give meaning to the cognition of the object and ensure its understanding. In this, Kantian apriorism "finds mutual understanding" with optimal filtering, seemingly so distant from it. Truly, Kant's wisdom is for all times!

The reception of information is followed by decoding, which transforms the received code-signs into symbols understandable to the consumer. Accordingly, decoding is not only the restoration of the signs distorted by the communication line but also the translation of explicit signs into implicit symbols. The problem of turning explicit signs into implicit symbols belongs to the same domain as the transformation of symbols into signs before coding. This is the already familiar problem of the physical bases of the mutual transformation of the material and the ideal, the transcendental and the empirical, the problem of the replication of internal information into external information and of the inverse transformation of external information into internal, the problem of the meeting of the "gross" and "subtle" worlds. From the standpoint of the attributive approach, code-signs are products of the material manifestation of the information field (the carrier of symbols) in physical fields, if by the very nature of the information process external information must be manifested and its carrier, accordingly, must be energetic. Then the decoding of codes into symbols should be the reverse process, i.e. the virtualization of physical fields in the latent structure of the information field. But the original meaning of the message, laid down at the very beginning of the information process, may be unknown at its end. In this case the consumer, translating the received external information (after decoding it into "meaning2" - "see section 7.1") into his own internal information, is forced to bring in his own meaning (directly or through "meaning2"). With the non-energetic nature of the information process, when coding becomes an implicit symbolic interpretation of the source within the same information field, the need for all subsequent stages of the process, including decoding, simply disappears. This hypothesis is of interest for further philosophical comprehension.

Information Security

Watch out!

Kozma Prutkov

By information security we mean both "the protection of information from man" and "the protection of man from information."

Accordingly, we will not confuse the security of information with its noise immunity. Interference cannot be avoided - this has been stated repeatedly above. The objective increase in the level of natural and artificial interference with the intensification of information metabolism in developing systems leads to a rise in the selection thresholds (the "passing score") of valuable information, as a result of which the signals carrying it are, to an ever greater extent, lost or, at best, remain subthreshold: unrecognized, not amplified in the receiver and poorly amenable to optimal filtering. Signals that exceed the detection (recognition) threshold may be significantly distorted, so that harmful disinformation (falsehood) or information noise may reach the consumer instead of useful information. Information is difficult (if at all possible) to protect from unintentional and intentional interference when it is transmitted not only in space but also in time - during storage. In general, in no information process is the absolute noise immunity of external information guaranteed, or even aimed at.

Measures to protect information from the human factor are known, but man is inventive in his attempts to violate the security of information. And most often he succeeds in this, just as a criminal manages to find loopholes in legislation.

There is an entropic tendency that is dissipative with respect to external information but, paradoxically, favorable to internal information in the aspect of maintaining the evolutionary potential of developing systems. After all, a "moderate" dissipation (scattering) of external information is a stimulus for its potential generation, while accumulation without scattering is dangerous for development as a whole. Indeed, moderate scattering of information is as useful for its creation as it is beneficial for the organism to moderately break down digestible organic substances, making room for an active material-energy metabolism. Zero entropy of a system would mean the end of its development, of its freedom. Not only the artist but any developing ("creative") system must be free. We believe that this regularity is justified not only from a speculative but also from a practical point of view. If information were not scattered, the world would long ago have drowned in informational garbage, as the human world is now drowning in physical and chemical garbage. The creation and the scattering of information, as objectively interrelated processes, are together one of the consequences of the law of conservation of information.

The task of determining the measure of "moderation" in the scattering of information can be interpreted as the problem of harmony between the generation and the scattering of information. Such harmony can be understood by analogy with balanced (weighted) medical treatment, which aims at a positive effect at an acceptable level of inevitable negative "side" effects, or by analogy with the aesthetic harmony of the "golden section." In terms of this ratio, harmony means making an optimal decision by one of the well-known criteria of optimality (Bayes, Pareto, Neyman–Pearson, etc.) according to the chosen goal. Examples of possible goals:

  • protection of information against scattering and distortion by interference;
  • protection of the consumer from disinformation and information noise.

Each of the approaches - "protection of information" and "protection from information" - requires independent in-depth research in all aspects of the existence of information, including the social aspect.

Example 12. One of the human rights legally enshrined in a democratic society is the right of access to objective information. Information should be available to citizens if its openness does not threaten the security of the individual, society or the state. But to the same extent that information is available to everyone, it is also exposed to threats from "interested parties" whose aim is the malicious use of information or its falsification, destruction or damage. The goals can be very different, but the result is the same: a violation of the aforementioned right of law-abiding citizens. The problem of information security is always relevant. Nowadays only the lazy do not discuss this problem, especially in computer and network applications. It is therefore unrealistic to cover all the diverse aspects of the problem of information security, especially since the problem did not arise today (Rastorguev S.P., Dmitrievsky N.N. "The Art of Protecting and 'Undressing' Programs", 1991; Tiley E. "Computer Security", 1997; Gukhman V.B., Tyurin E.I. "Fundamentals of Data Protection in Microsoft Office", 2005; and others). Data entered into a computer are of interest not only for primary but also for secondary analysis, when researchers are forced to refer again and again to the stored information and to previous results of its analysis. This means that data entry operators, together with researchers, should ensure data integrity regardless of the data's access category. Otherwise the data may be accidentally lost through negligence, destroyed with malicious intent or distorted for the same reasons. The problem of data integrity also has an ethical meaning for the researcher, associated with respect for one's own work and the work of the colleagues who helped collect the data and enter them into the computer.

The terms "security" and "integrity" are often distinguished in the specialized literature on information protection: the security of information (data) is associated with protection from unauthorized access, while data integrity is associated with protection against distortion and accidental deletion. But from the standpoint of an "attack" on data such a distinction seems quite conditional; therefore by the term "information security" we mean joint measures to ensure data integrity and to protect information from unauthorized access. (Data may be physically accessible while the information contained in them is cryptographically encrypted; yet it is not difficult to distort such information, destroy it, or copy it without authorization for subsequent cryptanalysis (decryption).)
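
A hedged sketch of one elementary integrity-control measure: a cryptographic checksum detects accidental or malicious distortion of stored data. The sample data and the choice of SHA-256 are assumptions of the illustration; by itself this does not prevent unauthorized access, it only reveals that the data have been altered.

    import hashlib

    def fingerprint(data: bytes) -> str:
        """Cryptographic digest of the data, kept separately from the data themselves."""
        return hashlib.sha256(data).hexdigest()

    original = b"research data set, version 1"
    stored_digest = fingerprint(original)

    tampered = b"research data set, version 2"      # a single changed character
    print('intact:  ', fingerprint(original) == stored_digest)   # True
    print('tampered:', fingerprint(tampered) == stored_digest)   # False -> integrity violated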

Example 13. According to the current information legislation, the objectives of protecting information from a person and protecting a person from information are:

  • prevention of leakage, theft, loss, distortion, falsification of information;
  • prevention of threats to the security of the individual, society, state;
  • prevention of unauthorized actions to destroy, modify, distort, copy, block information;
  • preventing other forms of unlawful interference with information resources and information systems, ensuring the legal regime of documented information as an object of property;
  • protection of the constitutional rights of citizens to preserve personal secrets and confidentiality of personal data available in information systems;
  • maintaining state secrets, confidentiality of documented information in accordance with the law;
  • ensuring the rights of subjects in information processes and in the development, production and use of information systems, technologies and means to ensure them.

Note: protecting a person from information, in addition to the above goals, also means protecting him from information expansion - the avalanche-like development of information and communication technologies (ICT), which, while providing unprecedented opportunities, simultaneously make our civilization and every person dependent on the normal functioning of ICT in all spheres of life ("see topic 8"). And this, in turn, creates the unhealthy temptation to create an "information weapon" no less dangerous than a nuclear missile, but cheaper and more covert.

Information security problems in the considered aspects acquire a philosophical meaning.

