MISD (Multiple Instruction stream, Single Data stream)
This page was last modified on 20 December 2016, at 14:58.
MISD architecture (Multiple Instruction stream, Single Data stream) is a type of parallel computing architecture in which many functional units perform different operations on the same data. It is one of the classes of computer systems in Flynn's classification.[1]
Pipeline architectures belong to this type, although not everyone agrees, because the data differ after processing in each pipeline stage. Fault-tolerant computers that redundantly execute the same instructions in order to detect and correct errors (so-called replication) can also be attributed to MISD. Few MISD computer architectures exist, because MIMD and SIMD are usually more appropriate for common data-parallel techniques: they offer better scaling and better use of computing resources than MISD.
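The replication idea mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not any particular machine's mechanism: several redundant "instruction streams" (here, plain functions, one of them deliberately faulty) are applied to the same single datum, and a majority vote masks the error.

```python
from collections import Counter

def misd_vote(x, units):
    """Apply several redundant functional units to the same single
    datum and return the majority result, masking a faulty unit."""
    results = [unit(x) for unit in units]
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(units) // 2:
        raise RuntimeError("no majority: too many faulty units")
    return winner

# Three redundant units computing x*x; one is faulty (bit-flips its result).
units = [lambda x: x * x,
         lambda x: x * x,
         lambda x: (x * x) ^ 1]  # hypothetical faulty unit

print(misd_vote(7, units))  # majority of [49, 49, 48] -> 49
```

Real fault-tolerant systems vote in hardware and on every instruction, but the structure is the same: multiple instruction streams, one data stream.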
An example of a MISD architecture is the systolic array of wave processors, first described by Hsiang Tsung Kung and Charles E. Leiserson. In a conventional systolic array, parallel input data pass through a network of hard-wired processor nodes which, resembling the human brain, combine, process, merge and sort the input data into a derived result.
Systolic arrays are usually programmed for a specific operation, such as multiply-accumulate, to perform large-scale parallel integration, convolution, correlation, matrix multiplication or data sorting. A systolic array typically consists of a large number of primitive compute nodes on an integrated circuit that can be configured, in hardware or software, for a specific application. The nodes are, as a rule, fixed and identical, while the interconnects are programmable. More general wave processors, in contrast, use sophisticated, individually programmable nodes that may or may not be monolithic, depending on the size of the array and design parameters. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name "systolic" was borrowed from medical terminology.[2]
The main advantage of systolic arrays is that all the data operands and intermediate results are contained in (pass through) the processor array. There is no need to resort to external buses, main memory or internal caches during each operation, as there is with conventional machines. The sequential limits on parallel speedup given by Amdahl's law also do not apply in the usual way, because data dependencies are handled implicitly by the programmable interconnect between nodes.
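The data flow described above can be made concrete with a small software simulation. The sketch below models an output-stationary systolic array computing C = A × B: each cell holds one element of the result, values of A march rightward along rows and values of B march downward along columns (each row and column entering with a one-tick stagger), and every cell multiply-accumulates the pair of values passing through it on each clock tick. No cell ever touches a shared memory; operands only move between neighbours, which is exactly the property the paragraph above describes.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Row i of A enters from the left delayed by i ticks; column j of B
    enters from the top delayed by j ticks.  Each cell (i, j) consumes
    the pair of values flowing past it, accumulates their product into
    its stationary result C[i][j], and forwards them right and down.
    """
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    a_reg = [[0] * m for _ in range(n)]  # A value held by cell (i, j)
    b_reg = [[0] * m for _ in range(n)]  # B value held by cell (i, j)
    for t in range(n + m + k):           # enough ticks to drain the array
        # Sweep bottom-right to top-left so every cell reads its
        # neighbour's register from the *previous* clock tick.
        for i in reversed(range(n)):
            for j in reversed(range(m)):
                a_in = a_reg[i][j - 1] if j > 0 else (
                    A[i][t - i] if 0 <= t - i < k else 0)
                b_in = b_reg[i - 1][j] if i > 0 else (
                    B[t - j][j] if 0 <= t - j < k else 0)
                C[i][j] += a_in * b_in
                a_reg[i][j] = a_in
                b_reg[i][j] = b_in
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The staggered injection guarantees that A[i][p] and B[p][j] meet at cell (i, j) on tick i + j + p, so each cell accumulates exactly the dot product it is responsible for.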
Systolic arrays are therefore very good at artificial intelligence, image processing, pattern recognition, computer vision, and other tasks that living brains solve easily. Wave processors in general can also be effective in machine learning, by implementing a custom neural network directly in hardware.
Despite the characteristics that argue for classifying systolic arrays as MISD, their classification is somewhat controversial. Since the input is usually a vector of independent values, a systolic array is certainly not SISD. Since the input values are merged and aggregated into the result(s) and do not retain their independence, as they would in the processing units of a SIMD vector machine, the array cannot be attributed to that type either. Nor can the array be classified as MIMD, because MIMD is, by and large, a collection of smaller SISD or SIMD computers.
Finally, because the data set is modified as it passes through the array from node to node, the multiple nodes do not operate on the same data, which makes the MISD classification incorrect as well. Moreover, a systolic array fails to qualify as MISD for the same reason it fails to qualify as SISD: the input is a vector, not a single value, although one could argue that this input vector is a single data set. This objection can, however, be set aside: systolic arrays are routinely offered as the classic example of a MISD architecture in textbooks on parallel computing and in engineering classes. If the array is considered as an indivisible structure, it can be classified as SFMuDMeR (Single Function, Multiple Data, Merged Result(s)).
Flynn's Classification
Flynn's taxonomy is a general classification of computer architectures by the presence of parallelism in the instruction and data streams. It was proposed by Michael Flynn in 1966[3] and extended in 1972.[4][5]
| | Single Instruction | Multiple Instruction | Single Program | Multiple Programs |
|---|---|---|---|---|
| Single Data | SISD | MISD | | |
| Multiple Data | SIMD | MIMD | SPMD | MPMD |
Because the taxonomy uses parallelism as its main criterion, Flynn's taxonomy is the classification of parallel computing systems most commonly cited in the technical literature.[5][6] In most cases, parallel computing systems are assigned to the SIMD or MIMD classes. SISD machines are thus not considered parallel at all, while the existence of MISD computer architectures remains an open question, since even the main example (the systolic array) cannot strictly be considered to fit the definition of this type.
To better understand Flynn's classification, which distinguishes the SIMD, MIMD and SISD types from the MISD architectures considered in this article, some characteristics of these categories are presented below.
It is also worth noting that the most popular classes (SIMD and MIMD) are subdivided by how memory appears from the programmer's point of view.[7] One thus distinguishes systems with shared memory (SM) and with distributed memory (DM).
The qualifier "from the programmer's point of view" is needed because there are computing systems in which memory is physically distributed across the nodes of the system, yet all the processors see it as a single shared global address space.
- **SISD** (Single Instruction stream over a Single Data stream) is the traditional von Neumann computer with a single processor that executes one instruction after another sequentially, working on a single data stream. Pipelined, superscalar and VLIW processors belong to this type.
- **SIMD** (Single Instruction stream, Multiple Data stream) is one of the most common types of parallel computers. This class includes vector processors, conventional processors when executing vector-extension instructions, and array processors. Here a single processor loads one instruction together with a set of data for it, and performs the operation described in that instruction on the whole data set simultaneously.
- **SM-SIMD (shared-memory SIMD)** is the subclass of SIMD with shared memory from the programmer's point of view. It includes vector processors.
- **DM-SIMD (distributed-memory SIMD)** is the subclass of SIMD with distributed memory from the programmer's point of view. It includes matrix processors, a separate subspecies with a large number of processors.
- **MISD** is, in effect, a hypothetical class, since no real systems representing this type yet exist. Some researchers assign pipelined computers to it. The systolic array discussed above is an example whose membership in MISD is contested because of how its input is represented.
- **MIMD** (Multiple Instruction stream, Multiple Data stream) is the other common type of parallel computer. It includes multiprocessor systems in which the processors handle multiple streams of data. This typically covers traditional multiprocessor machines, multicore and multithreaded processors, and computer clusters.
- **SM-MIMD (shared-memory MIMD)** is the subclass of MIMD with shared memory from the programmer's point of view. It covers multiprocessor machines and multicore processors with shared memory. Such multiprocessors are easy to program, and support for SMP (symmetric multiprocessing) has long been present in all major operating systems. However, these computers scale poorly: increasing the number of processors in the system places a heavy load on the common bus.
- There is also the **DSM-MIMD (distributed shared memory MIMD)** subclass, in which memory is visible to the programmer as a single address space but may be physically distributed across the nodes. Here each processor has its own local memory and accesses other memory locations through a high-speed interconnect. Since access times to different areas of shared memory are not the same, such systems are called NUMA (Non-Uniform Memory Access). A problem arises: each processor must see the changes that other processors make to memory. To solve it, ccNUMA (cache-coherent NUMA) and nccNUMA (non-cache-coherent NUMA) designs emerged. NUMA systems scale better than ordinary multiprocessors, which allows massively parallel computing systems to be built in which the number of processors reaches the thousands.
- **DM-MIMD (distributed-memory MIMD)** is the subclass of MIMD with distributed memory from the programmer's point of view. It includes multiprocessor computers with distributed memory as well as computer clusters such as Beowulf and Networks of Workstations. The local memory of one processor is not visible to the others; each processor has its own task, and if it needs data from the memory of another processor, it communicates by message passing. Like NUMA systems, these machines scale well.
- **SPMD** (Single Program, Multiple Data) is a system in which all processors of a MIMD machine run the same single program, each processing a different block of the data.
- **MPMD** (Multiple Programs, Multiple Data) is a system in which either a) one processor of the MIMD machine runs a master program and the others run subordinate programs launched by the master (the "master/slave" or "master/worker" principle), or b) different nodes of the MIMD machine run different programs that process the same data set in different ways (the "coupled analysis" principle); for the most part they work independently of each other, but communicate from time to time to move to the next step.
The classification of a particular computer depends on the researcher's point of view. For example, a pipelined machine can be assigned to any of the four basic types.[5] For the same reason, some assign systolic arrays to MISD while others dispute their membership in that class. Classification also depends on the level of integration under consideration, although for MISD the divergence of views does not go that far.
Links
- ↑ MISD — Wikipedia (Russian).
- ↑ MISD — Wikipedia.
- ↑ Flynn, M. J. Very high-speed computing systems. Proc. IEEE 54(12): 1901–1909. 1966.
- ↑ Flynn, M. J. Some Computer Organizations and Their Effectiveness. IEEE Transactions on Computers, 21(9): 948–960. 1972.
- ↑ Padua, 2012, pp. 689–697.
- ↑ Valentin Sedykh. Multiprocessing today. 2004.
- ↑ "Overview of recent computers: the main classes of architectures"