Supercomputer

Lecture



Supercomputer is a fairly flexible and very broad term. In the common understanding, a supercomputer is a machine far more powerful than any computer available on the market. Some engineers jokingly call any computer whose mass exceeds one ton a supercomputer. And although most modern supercomputers really do weigh more than a ton, not every heavy machine deserves the prefix "super": the Mark I and ENIAC were also heavyweights, yet they are not considered supercomputers even for their time.

The pace of technological progress is so fast that today's supercomputer will be inferior to a home computer in 5-10 years. The term "supercomputing" appeared as early as the 1920s, and the term "supercomputer" in the 1960s, but it became widespread largely thanks to Seymour Cray and his supercomputers Cray-1 and Cray-2. Seymour Cray himself preferred not to use the term and called his machines simply computers.

The Cray-1 is considered one of the first supercomputers; the first machine was delivered in 1976. Its processor contained a set of registers that was huge for the time, divided into groups, each with its own function: a block of address registers responsible for addressing memory, a block of vector registers, and a block of scalar registers. The machine's peak performance was about 160 million floating-point operations per second. It used 32-bit instructions at a time when its contemporaries were only beginning to move from 8-bit to 16-bit ones.

Figure: Cray-1 computer assembly

Figure: Cray-2 computer

Cray's computers were used by government organizations and by industrial and scientific research centers. For many, the word "supercomputer" simply meant a machine built by Seymour Cray.

Cray had many competitors, but few of them succeeded: in the 1990s most of these firms went bankrupt, and the supercomputer niche was taken over by computing giants such as IBM. Cray's own company, Cray Inc., is still one of the leading manufacturers of supercomputers.

From the very beginning, supercomputers were tied to the need to process large data sets quickly and to perform complex mathematical and analytical calculations. The first supercomputers therefore differed little in architecture from conventional computers; they were simply many times more powerful than standard workstations. Initially, supercomputers were equipped with vector processors instead of the usual scalar ones. By the 1980s they had moved to several vector processors working in parallel, but this path of development proved impractical, and supercomputers switched to parallel scalar processors. The difference between scalar and vector processing is sketched below.
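Here is a minimal C sketch of that difference, using the classic SAXPY operation (y = a*x + y). A scalar processor issues one instruction per element, while a vector machine applies a single instruction to a whole vector register. The block size of 64 matches the Cray-1 vector register length; the inner loop only models what real vector hardware does in a single instruction.

#include <stddef.h>

/* Scalar processing: one multiply-add per loop iteration,
 * each element handled by a separate instruction sequence. */
void saxpy_scalar(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Vector-style processing: data is consumed in fixed-size blocks,
 * the way a vector machine loads a whole vector register
 * (64 elements on the Cray-1) and applies one instruction to it. */
void saxpy_vector_style(size_t n, float a, const float *x, float *y)
{
    const size_t VLEN = 64;                 /* Cray-1 vector register length */
    size_t i = 0;
    for (; i + VLEN <= n; i += VLEN)
        for (size_t j = 0; j < VLEN; j++)   /* models one "vector op" */
            y[i + j] = a * x[i + j] + y[i + j];
    for (; i < n; i++)                      /* scalar remainder loop */
        y[i] = a * x[i] + y[i];
}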

Massively parallel processors became the basis of the supercomputer: thousands of processor elements combined into a single powerful computing platform (a rough programming model is sketched below). Most of these parallel processors were based on the RISC architecture. RISC (Reduced Instruction Set Computing) means computing with a reduced instruction set; by this term processor manufacturers mean the concept that simpler instructions execute faster. This approach reduces the cost of manufacturing processors while at the same time increasing their performance.
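As a rough model of massive parallelism, the sketch below splits one scalar computation across several POSIX threads standing in for processor elements. The worker count and workload are illustrative; a real massively parallel machine would use thousands of elements connected by a dedicated interconnect.

#include <pthread.h>
#include <stdio.h>

#define N_ELEMENTS 1000000
#define N_WORKERS  8            /* stand-ins for processor elements */

static double data[N_ELEMENTS];
static double partial[N_WORKERS];

/* Each "processor element" runs the same scalar code
 * on its own contiguous slice of the array. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long chunk = N_ELEMENTS / N_WORKERS;
    long lo = id * chunk;
    long hi = (id == N_WORKERS - 1) ? N_ELEMENTS : lo + chunk;

    double sum = 0.0;
    for (long i = lo; i < hi; i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[N_WORKERS];

    for (long i = 0; i < N_ELEMENTS; i++)
        data[i] = 1.0;          /* illustrative workload */

    for (long id = 0; id < N_WORKERS; id++)
        pthread_create(&threads[id], NULL, worker, (void *)id);

    double total = 0.0;
    for (long id = 0; id < N_WORKERS; id++) {
        pthread_join(threads[id], NULL);    /* wait for each element */
        total += partial[id];               /* combine partial sums  */
    }
    printf("sum = %f\n", total);
    return 0;
}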

The idea of creating a RISC processor came from IBM. Back in the 1970s, its engineers noticed that many architectural and functional features of processors went unused by software developers; moreover, early compilers for high-level languages did not use many instructions of standard processors. As a consequence, programs produced by such compilers used processor resources inefficiently, and the processor often ran idle. Executing complex commands as sequences of simple processor instructions proved far more efficient than building dedicated complex instructions into the processor, as the example below illustrates.
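To make the idea concrete, here is how one C assignment might be compiled for the two kinds of machines. The CISC and MIPS-style RISC instruction sequences in the comments are illustrative sketches, not the output of any real compiler.

int a, b, c;

void assign(void)
{
    a = b + c;

    /* A hypothetical CISC machine could encode this as a single
     * complex memory-to-memory instruction:
     *
     *     add  a, b, c         ; read b and c, add, write a
     *
     * A RISC machine (MIPS-style mnemonics, purely illustrative)
     * builds it from simple instructions, each cheap to decode
     * and fast to execute:
     *
     *     lw   $t0, b          ; load b into a register
     *     lw   $t1, c          ; load c into a register
     *     add  $t2, $t0, $t1   ; register-to-register add
     *     sw   $t2, a          ; store the result into a
     */
}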

The need for powerful computing grew rapidly, but supercomputers were too expensive, and an alternative was required. They were displaced by clusters; today such powerful clusters are themselves called supercomputers.

A cluster is a set of servers united in a network and working on a single task. Such a group of servers delivers performance many times higher than the same number of servers working separately. A cluster also provides high reliability: the failure of one server does not bring the whole system to an emergency stop, but only slightly reduces its performance, and a failed server can be replaced without shutting the system down. Nor is there any need to pay a huge sum up front, as with a supercomputer: a cluster can be grown gradually, which spreads the enterprise's costs over time.

Server clustering is implemented in software. A cluster manager is installed on the main server and controls all the other nodes of the cluster; client software is installed on the remaining servers. A minimal sketch of programming such a group of machines follows.
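One common way to program "many servers working on one task" is MPI, the standard message-passing interface for clusters. MPI is not itself the cluster manager described above, but the minimal sketch below shows the same pattern: every node computes a partial result on its own slice of the work, and the process with rank 0 plays the role of the main node that combines them. The program name and numbers are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of nodes  */

    /* Each node works on its own slice of one shared task:
     * here, summing the integers 1..1000000. */
    long long local = 0;
    for (long long i = 1 + rank; i <= 1000000; i += size)
        local += i;

    /* Partial results are combined on rank 0,
     * which acts as the "main" node of the cluster. */
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %lld\n", total);

    MPI_Finalize();
    return 0;
}

With Open MPI, for example, the program could be compiled with mpicc and launched across the nodes listed in a host file: mpirun -np 4 --hostfile hosts ./sum.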

Supercomputers differ from servers, which are designed for the prompt processing of requests, and from mainframes, which also offer high performance but are built to serve many users simultaneously. A supercomputer is typically used to run a single program that demands massive resources, such as a weather simulation or the calculation of a production process.

