Algorithm Theory

Lecture



The theory of algorithms is a branch of computer science that studies the general properties and regularities of algorithms and the various formal models of their representation. The problems of the theory of algorithms include the formal proof of the algorithmic unsolvability of problems, the asymptotic analysis of the complexity of algorithms, the classification of algorithms into complexity classes, and the development of criteria for the comparative evaluation of the quality of algorithms. Together with mathematical logic, the theory of algorithms forms the theoretical foundation of the computational sciences. [1] [2]

Contents

  • 1 The emergence of the theory of algorithms
  • 2 Models of computation
  • 3 The Church–Turing thesis and algorithmically unsolvable problems
  • 4 The current state of the theory of algorithms
    • 4.1 Analysis of the complexity of algorithms
    • 4.2 Complexity classes
  • 5 Mathematical applications of the theory of algorithms
  • 6 See also
  • 7 Literature
  • 8 References
  • 9 Notes

The emergence of the theory of algorithms

The development of the theory of algorithms begins with Kurt Gödel's proof of the incompleteness theorems for formal systems that include arithmetic, the first of which was proved in 1931. The conjecture, arising in connection with these theorems, that many mathematical problems (in particular, the derivability problem in the predicate calculus) cannot be resolved algorithmically made it necessary to standardize the concept of an algorithm. The first standardized versions of this concept were developed in the 1930s in the works of A. Turing, A. Church and E. Post. The Turing machine, the Post machine and Church's lambda calculus that they proposed turned out to be equivalent to one another. Building on Gödel's work, S. Kleene introduced the concept of a recursive function, which also turned out to be equivalent to the above.

One of the most successful standardized variants of the concept of an algorithm is the notion of a normal algorithm, introduced by A. A. Markov. It appeared roughly ten years after the works of Turing, Post, Church and Kleene, in connection with the proof of the algorithmic undecidability of a number of algebraic problems.

A considerable contribution to the theory of algorithms was also made by D. Knuth, A. Aho and J. Ullman. One of the best-known works on the subject is the book Introduction to Algorithms (published in Russian as Algorithms: Construction and Analysis) by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein.

Models of computation

  • Turing machine - an abstract executor (abstract computing machine). It was proposed by Alan Turing in 1936 to formalize the concept of an algorithm. A Turing machine is an extension of the finite state machine and, according to the Church–Turing thesis, is able to imitate any other executor (by means of its transition rules) that implements a step-by-step computation process in which each step is quite elementary. A minimal simulator sketch is given after this list.
  • Lambda calculus — a computation is taken to be a pair consisting of a λ-expression and its argument, and evaluation is the application of the first member of the pair to the second. This makes it possible to separate a function from what it is applied to. More generally, a computation is taken to be a chain beginning with an initial λ-expression, followed by a finite sequence of λ-expressions, each obtained from the previous one by β-reduction, that is, by the substitution rule.
  • Combinatory logic - the interpretation of computation is similar to that of the λ-calculus, but there are important differences (for example, the fixed-point combinator Y has a normal form in combinatory logic but not in the λ-calculus). Combinatory logic was originally developed to study the nature of paradoxes and to build conceptually clear foundations of mathematics; the notion of a variable was excluded entirely, which helped clarify the role and place of variables in mathematics.
  • Register Machines
    • The RAM machine is an abstract machine that models a computer with random-access memory. It is this model of computation that is most often used in the analysis of algorithms.
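
The following sketch (a minimal one-tape simulator written in Python; the state names, the blank symbol and the example transition table are assumptions made purely for illustration) shows how elementary a single Turing-machine step is: read a symbol, look up a rule, write, move, change state.

    # A minimal one-tape Turing machine simulator (illustrative sketch).
    # The transition table maps (state, symbol) -> (new_state, new_symbol, move),
    # where move is -1 (left), +1 (right) or 0 (stay).

    def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=10_000):
        tape = dict(enumerate(tape))          # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            state, new_symbol, move = transitions[(state, symbol)]
            tape[head] = new_symbol
            head += move
        cells = [tape[i] for i in sorted(tape)]
        return "".join(cells).strip(blank)

    # Example machine: move right to the first blank, write '1', halt.
    # On a unary number it computes n -> n + 1.
    increment = {
        ("q0", "1"): ("q0", "1", +1),
        ("q0", "_"): ("halt", "1", 0),
    }

    print(run_turing_machine(increment, "111"))   # -> "1111"

Running the example table on the unary string "111" produces "1111"; every step of the computation is a single table lookup, write and head move.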

The Church–Turing thesis and algorithmically unsolvable problems

Main articles: Church–Turing thesis, Turing machine

Alan Turing conjectured (the statement now known as the Church–Turing thesis) that any algorithm in the intuitive sense of the word can be represented by an equivalent Turing machine. Refining the concept of computability on the basis of the Turing machine (and other equivalent notions) opened up the possibility of rigorously proving the algorithmic unsolvability of various mass problems (that is, problems of finding a single method for solving a whole class of problems whose conditions can vary within certain limits). The simplest example of an algorithmically unsolvable mass problem is the applicability problem for an algorithm, also called the halting problem. It consists in the following: it is required to find a general method that, for an arbitrary Turing machine (given by its program) and an arbitrary initial state of the machine's tape, would determine whether the machine halts in a finite number of steps or runs indefinitely.
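
The classical diagonal argument behind this result can be compressed into a short sketch (in Python; halts is the hypothetical decider whose existence is assumed only to be refuted, and paradox is an illustrative name, not an established API):

    # Suppose a total function halts(program_text, input_data) always answered
    # correctly whether the given program halts on the given input.

    def paradox(program_text):
        if halts(program_text, program_text):   # ask the decider about self-application
            while True:                         # if the answer is "halts", loop forever
                pass
        return "done"                           # if the answer is "loops", halt at once

    # Applying paradox to its own source text yields a contradiction:
    # if halts says it halts, it loops forever; if halts says it loops, it halts.
    # Therefore no such total, always-correct halts() can exist.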

During the first decade of the history of the theory of algorithms, unsolvable mass problems were found only within the theory itself (these include the applicability problem described above) and within mathematical logic (the derivability problem in the classical predicate calculus). It was therefore believed that the theory of algorithms was a borderline area of mathematics of no importance for such classical fields as algebra or analysis. The situation changed after A. A. Markov and E. L. Post in 1947 established the algorithmic unsolvability of the word problem, known in algebra, for finitely generated and finitely presented semigroups (the so-called Thue problem). Subsequently, the algorithmic undecidability of many other “purely mathematical” mass problems was established. One of the most famous results in this field is the algorithmic undecidability of Hilbert's tenth problem, proved by Yu. V. Matiyasevich.

The current state of the theory of algorithms

At present, the theory of algorithms develops mainly in three directions.

  • The classical theory of algorithms studies the formulation of problems in terms of formal languages, introduces the concept of a decision problem, and classifies problems by complexity classes (P, NP, etc.).
  • The theory of asymptotic analysis of algorithms considers methods for obtaining asymptotic estimates of the resource consumption or running time of algorithms, in particular for recursive algorithms. Asymptotic analysis makes it possible to estimate how an algorithm's demand for resources (for example, running time) grows as the amount of input data increases.
  • The theory of practical analysis of computational algorithms addresses the problems of obtaining explicit complexity functions, interval analysis of such functions, finding practical criteria for the quality of algorithms, and developing methods for selecting rational algorithms.

Analysis of the complexity of algorithms

Main article: Computational complexity theory

The purpose of analyzing the complexity of algorithms is to find the optimal algorithm for solving a given problem. The criterion of optimality is taken to be the complexity of the algorithm, understood as the number of elementary operations that must be performed to solve the problem by means of that algorithm. The complexity function is the relation connecting the algorithm's input data with the number of elementary operations performed.

The complexity of an algorithm can depend on the input data in different ways. For some algorithms the complexity depends only on the amount of data; for others, on the values of the data; and in some cases the order in which the data arrive can also affect it. The complexity of many algorithms may depend, to one degree or another, on all of the factors listed above.
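
As a small illustration (a sketch in Python, with insertion sort chosen only as a convenient example), the number of elementary operations can be counted directly and seen to depend on the order of the input, not only on its size:

    # Count comparisons made by insertion sort; the count depends on the input order,
    # not only on its length: about n on sorted data, about n^2 / 2 on reversed data.

    def insertion_sort_comparisons(a):
        a = list(a)
        comparisons = 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                comparisons += 1
                if a[j - 1] <= a[j]:
                    break
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))          # sorted input: n - 1 comparisons
    print(insertion_sort_comparisons(range(n, 0, -1)))   # reversed input: n(n - 1)/2 comparisons

On already sorted data the count is n - 1; on data in reverse order it is n(n - 1)/2, so the same algorithm on inputs of the same size can require very different amounts of work.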

One of the simplified kinds of analysis used in practice is asymptotic analysis of algorithms. Its goal is to compare the time and other resources spent by different algorithms intended for solving the same problem on large amounts of input data. The estimate of the complexity function used in asymptotic analysis, called the complexity of the algorithm, makes it possible to determine how quickly the complexity grows as the data volume increases. Asymptotic analysis of algorithms uses the notation of mathematical asymptotic analysis. The main complexity bounds are given below.

The basic bound on the complexity function f(n) of an algorithm is the bound $\Theta$. Here n is the size (length) of the input data. We say that the complexity of the algorithm satisfies

  $f(n) = \Theta(g(n))$

if, for $g(n) > 0$ and $n > 0$, there exist positive constants $c_1$, $c_2$ and $n_0$ such that

  $c_1\, g(n) \le f(n) \le c_2\, g(n)$

for all $n > n_0$; in other words, one can find $c_1$ and $c_2$ such that, for sufficiently large n, f(n) lies between $c_1\, g(n)$ and $c_2\, g(n)$.

In this case the function g(n) is said to be an asymptotically tight bound for f(n), since by definition f(n) differs from g(n) by no more than a constant factor (see asymptotic equality). For example, for the heapsort sorting method the complexity bound is

  $f(n) = \Theta(n \log n)$, i.e. $g(n) = n \log n$.

From $f(n) = \Theta(g(n))$ it follows that $g(n) = \Theta(f(n))$.

It is important to understand that $\Theta(g(n))$ is not a single function but a set of functions describing growth up to a constant factor.

$\Theta$ gives both an upper and a lower bound on the growth of the function. It is often necessary to consider these bounds separately. The bound $f(n) = O(g(n))$ is an upper asymptotic bound on the complexity of the algorithm. We say that $f(n) = O(g(n))$ if

  there exist $c > 0$ and $n_0$ such that $0 \le f(n) \le c\, g(n)$ for all $n > n_0$.

In other words, the notation $f(n) = O(g(n))$ means that f(n) belongs to the class of functions that grow no faster than the function g(n), up to a constant factor.

The bound $f(n) = \Omega(g(n))$ gives a lower asymptotic bound on the growth of the function f(n) and defines the class of functions that grow no slower than g(n), up to a constant factor. $f(n) = \Omega(g(n))$ if

  there exist $c > 0$ and $n_0$ such that $0 \le c\, g(n) \le f(n)$ for all $n > n_0$.

For example, $\Omega(n \log n)$ denotes the class of functions that grow no slower than $n \log n$; this class contains all polynomials of degree greater than one, as well as all exponential functions with a base greater than one. The equality $f(n) = \Theta(g(n))$ holds if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.

Asymptotic analysis of algorithms has not only practical but also theoretical significance. For example, it has been proved that any sorting algorithm based on pairwise comparison of elements requires $\Omega(n \log n)$ time to sort n elements (a compressed derivation is sketched below).
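
The bound follows from a counting (decision-tree) argument; the following compressed derivation is a standard sketch, not tied to any particular algorithm. A comparison sort must distinguish all $n!$ possible orderings of the input, so its decision tree has at least $n!$ leaves, while a binary tree of height $h$ has at most $2^h$ leaves. Hence

  $2^h \ge n! \;\Longrightarrow\; h \ge \log_2 n! = \sum_{k=1}^{n} \log_2 k \ge \frac{n}{2} \log_2 \frac{n}{2} = \Omega(n \log n),$

so the worst-case number of comparisons, and therefore the running time, is $\Omega(n \log n)$.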

An important role in the development of asymptotic analysis of algorithms was played by A. Aho, J. Ullman, J. Hopcroft.

Complexity classes

Main article: Complexity class

Within the framework of the classical theory, problems are classified by complexity classes (P, NP, exponentially hard, and so on). Class P consists of problems that can be solved in time polynomial in the size of the input data on a deterministic machine (for example, a Turing machine), while class NP consists of problems that can be solved in polynomial time on a non-deterministic machine, that is, a machine whose next state is not always uniquely determined by its previous states. The operation of such a machine can be pictured as a process that branches at every ambiguity: the problem is considered solved if at least one branch of the process arrives at the answer. Another definition of the class NP: it contains the problems whose solution, given an additional hint of polynomial length (a certificate), can be verified in polynomial time. In particular, the class NP includes all problems whose solutions can be checked in polynomial time. Class P is contained in class NP. A classic example of an NP problem is the traveling salesman problem.
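
To make the certificate-based definition concrete, the following sketch (in Python; the function name, the distance matrix and the budget are illustrative assumptions) checks a proposed traveling salesman route in polynomial time: verifying a hint is easy even though finding one is not known to be.

    # Verify a traveling salesman certificate in polynomial time (illustrative sketch).
    # dist is a symmetric matrix of pairwise distances; tour is the proposed order of cities.

    def verify_tsp_certificate(dist, tour, budget):
        n = len(dist)
        if sorted(tour) != list(range(n)):        # each city visited exactly once
            return False
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return cost <= budget                     # checking the bound takes O(n) steps

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(verify_tsp_certificate(dist, [0, 1, 3, 2], budget=21))   # True: 2 + 4 + 3 + 9 = 18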

Since class P is contained in class NP, the membership of a particular problem in class NP often reflects only our current understanding of how to solve that problem and is not conclusive. In general, there is no reason to believe that a P solution cannot be found for a given NP problem. The question of the possible equivalence of the classes P and NP (that is, of the possibility of finding a P solution for every NP problem) is considered by many to be one of the central open questions of the modern theory of the complexity of algorithms; no answer has been found so far. The very posing of the question of the equivalence of P and NP is possible thanks to the notion of NP-complete problems. NP-complete problems form a subset of the NP problems and are distinguished by the property that every NP problem can be reduced to them in one way or another. It follows that if a P solution is found for an NP-complete problem, then a P solution will be found for all problems of class NP. An example of an NP-complete problem is the satisfiability problem for Boolean formulas in conjunctive normal form (SAT).

Studies of the complexity of algorithms made it possible to take a fresh look at the solution of many classical mathematical problems and to find solutions for a number of such problems (multiplication of polynomials and matrices, solving linear systems of equations, etc.) that require fewer resources than traditional ones.
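
One classical example of such a result is Karatsuba's multiplication method, which replaces the four half-size products of schoolbook multiplication with three, reducing the cost from $\Theta(n^2)$ to roughly $\Theta(n^{1.585})$ digit operations. A minimal sketch for non-negative integers (illustrative, not an optimized implementation):

    # Karatsuba multiplication (sketch): three recursive half-size products instead of four,
    # giving O(n^log2(3)) ~ O(n^1.585) digit operations instead of the schoolbook O(n^2).

    def karatsuba(x, y):
        if x < 10 or y < 10:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        high_x, low_x = x >> m, x & ((1 << m) - 1)
        high_y, low_y = y >> m, y & ((1 << m) - 1)
        z0 = karatsuba(low_x, low_y)
        z2 = karatsuba(high_x, high_y)
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2   # cross terms
        return (z2 << (2 * m)) + (z1 << m) + z0

    print(karatsuba(12345, 6789) == 12345 * 6789)   # True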

Mathematical applications of the theory of algorithms

  1. Research of mass problems.
  2. Applications to the foundations of mathematics: constructive semantics.
  3. Applications to mathematical logic: analysis of formalized languages of logic and arithmetic.
  4. Computable analysis.
  5. Numbered structures.
  6. Applications to probability theory: the definition of a random sequence.
  7. Applications to information theory: an algorithmic approach to the concept of the amount of information.
  8. Estimates of the complexity of solving individual problems.
  9. The influence of the theory of algorithms on algorithmic practice. [3]

See also

  • List of the main sections of the theory of algorithms
  • Incompleteness of mathematics
  • Computation model
  • Computation strategy
  • Semantics (programming)
  • Lambda calculus
  • Lambda calculus with types
  • Combinatorial logic
  • Applicative computing systems
  • Categorical abstract machine
  • Supercombinators

Literature

  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms (Russian edition: Algorithms: Construction and Analysis). 2nd ed. M.: Williams, 2006. 1296 p. ISBN 0-07-013151-1.
  • Donald Knuth. The Art of Computer Programming, vol. 1: Fundamental Algorithms (Russian edition: The Art of Programming, Volume 1. Basic Algorithms). 3rd ed. M.: Williams, 2006. 720 p. ISBN 0-201-89683-4.
  • A. N. Kolmogorov. Information Theory and Theory of Algorithms. M.: Nauka, 1987. 304 p.
  • Markov A. A., Nagorny N. M. Theory of Algorithms. 2nd ed. M.: FAZIS, 1996.

References

  • Algorithm Theory
  • Mini-encyclopedia on complexity theory and combinatorial algorithms
  • Lectures on complexity theory and combinatorial algorithms
