MIMD


In computing, MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches. MIMD machines can be of either shared memory or distributed memory categories. These classifications are based on how MIMD processors access memory. Shared memory machines may be of the bus-based, extended, or hierarchical type. Distributed memory machines may have hypercube or mesh interconnection schemes.

Contents

  • 1 Examples
  • 2 Shared memory model
    • 2.1 Bus-based
    • 2.2 Hierarchical
  • 3 Distributed memory
    • 3.1 Hypercube interconnection network
    • 3.2 Mesh interconnection network
  • 4 See also
  • 5 References

Examples

An example of MIMD system is Intel Xeon Phi, descended from the Larrabee microarchitecture.[1] These processors have multiple processing cores (up to 61 as of 2015) that can execute different instructions on different data. NVIDIA graphics cards fit the MIMD model, whereas the AMD/ATI cards more closely resemble the SIMD model and have a larger number of simpler processors.

Most parallel computers, as of 2013, are MIMD systems.[2]

Shared memory model

The processors are all connected to a "globally available" memory, via either software or hardware means. The operating system usually maintains its memory coherence.[3]

From a programmer's point of view, this memory model is better understood than the distributed memory model. Another advantage is that memory coherence is managed by the operating system and not the written program. Two known disadvantages are: scalability beyond thirty-two processors is difficult, and the shared memory model is less flexible than the distributed memory model.[3]
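As a concrete illustration (not part of the original article), the following minimal Python sketch mimics the shared memory MIMD idea: two workers execute different instruction streams at the same time over a single "globally available" data structure. The data, the worker functions, and the use of a lock as a software stand-in for coherence are all assumptions made for the example.

```python
import threading

# One "globally available" memory shared by every worker.
shared = {"values": list(range(8)), "sum": None, "max": None}
lock = threading.Lock()  # software stand-in for keeping the shared results consistent

def sum_worker():
    # One instruction stream: compute a sum over the shared data.
    total = sum(shared["values"])
    with lock:
        shared["sum"] = total

def max_worker():
    # A different instruction stream, running concurrently on the same data.
    largest = max(shared["values"])
    with lock:
        shared["max"] = largest

threads = [threading.Thread(target=sum_worker), threading.Thread(target=max_worker)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["sum"], shared["max"])  # 28 7
```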

There are many examples of shared memory multiprocessors: UMA (uniform memory access), COMA (cache-only memory access), and NUMA (non-uniform memory access).[4]

Bus-based

MIMD machines with shared memory have processors which share a common, central memory. In the simplest form, all processors are attached to a bus which connects them to memory. This means that every machine with shared memory shares a specific CM (central memory) and a common bus system for all the clients.

For example, if we consider a bus with clients A, B, C connected on one side and P, Q, R connected on the opposite side, any one of the clients will communicate with the others by means of the bus interface between them.
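A toy model of that picture in Python (hypothetical, with the names A, B, C, P, Q, R taken from the example above): every transfer between clients is posted to one shared queue that stands in for the common bus, so all traffic is serialized through a single interface.

```python
import queue
import threading

bus = queue.Queue()  # the single common bus: every transfer crosses it

def send(src, dest, payload):
    # A client can reach any other client only through the bus interface.
    bus.put((src, dest, payload))

def bus_arbiter(delivered):
    # One transfer at a time is taken off the bus, mimicking bus arbitration.
    while True:
        src, dest, payload = bus.get()
        if src is None:          # sentinel: stop the arbiter
            break
        delivered.append(f"{src} -> {dest}: {payload}")

delivered = []
arbiter = threading.Thread(target=bus_arbiter, args=(delivered,))
arbiter.start()

send("A", "P", "read request")
send("Q", "B", "data reply")
bus.put((None, None, None))
arbiter.join()
print(delivered)  # ['A -> P: read request', 'Q -> B: data reply']
```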

Hierarchical

MIMD machines with hierarchical shared memory use a hierarchy of buses (as, for example, in a "fat tree") to give processors access to each other's memory. Processors on different boards may communicate through inter-nodal buses. Buses support communication between boards. With this type of architecture, the machine may support over nine thousand processors.

Distributed memory

In distributed memory MIMD machines, each processor has its own individual memory location. Each processor has no direct knowledge about other processors' memory. For data to be shared, it must be passed from one processor to another as a message. Since there is no shared memory, contention is not as great a problem with these machines. It is not economically feasible to connect a large number of processors directly to each other. A way to avoid this multitude of direct connections is to connect each processor to just a few others. This type of design can be inefficient because of the added time required to pass a message from one processor to another along the message path. The amount of time required for processors to perform simple message routing can be substantial. Systems were designed to reduce this time loss, and hypercube and mesh are two of the popular interconnection schemes.
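The following Python sketch (an illustrative toy, not taken from the article) shows the message-passing idea: each process keeps its data in private memory, and the only way to share a value is to send it explicitly over a pipe.

```python
from multiprocessing import Pipe, Process

def producer(conn):
    # This process's private memory; no other process can read it directly.
    local_data = [1, 2, 3, 4]
    # Sharing happens only through an explicit message.
    conn.send(sum(local_data))
    conn.close()

def consumer(conn):
    # The value arrives as a message, not through a shared address space.
    total = conn.recv()
    print("received:", total)  # received: 10
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=producer, args=(child_end,))
    c = Process(target=consumer, args=(parent_end,))
    p.start()
    c.start()
    p.join()
    c.join()
```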

Examples of distributed memory multicomputers include MPP (massively parallel processors) and COW (clusters of workstations). The former is complex and expensive: many super-computers coupled by broad-band networks. Examples include hypercube and mesh interconnections. COW is the "home-made" version for a fraction of the price.[4]

Hypercube interconnection network

In an MIMD distributed memory machine with a hypercube system interconnection network containing four processors, a processor and a memory module are placed at each vertex of a square. The diameter of the system is the minimum number of steps it takes for one processor to send a message to the processor that is the farthest away. So, for example, the diameter of a 2-cube is 2. In a hypercube system with eight processors, with each processor and memory module placed at a vertex of a cube, the diameter is 3. In general, for a system that contains 2^N processors, with each processor directly connected to N other processors, the diameter of the system is N. One disadvantage of a hypercube system is that it must be configured in powers of two, so a machine must be built that could potentially have many more processors than are really needed for the application.
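Because hypercube nodes can be labelled with N-bit addresses in which neighbours differ by exactly one bit, the routing distance between two nodes is simply the Hamming distance between their labels. The short Python sketch below (illustrative only; the labelling convention is an assumption) reproduces the diameters quoted above.

```python
def hypercube_hops(src: int, dst: int) -> int:
    """Routing steps between two hypercube nodes.

    Nodes are labelled 0 .. 2**N - 1, and two nodes are directly linked
    when their labels differ in exactly one bit, so the hop count is the
    Hamming distance between the labels (number of set bits in the XOR).
    """
    return bin(src ^ dst).count("1")

def hypercube_diameter(n_dims: int) -> int:
    # The farthest pair of nodes differ in every one of the N bits.
    return hypercube_hops(0, 2 ** n_dims - 1)

print(hypercube_diameter(2))  # 2-cube, 4 processors: diameter 2
print(hypercube_diameter(3))  # 3-cube, 8 processors: diameter 3
```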

Mesh interconnection network

In an MIMD distributed memory machine with a mesh interconnection network, processors are placed in a two-dimensional grid. Each processor is connected to its four immediate neighbors. Wraparound connections may be provided at the edges of the mesh. One advantage of the mesh interconnection network over the hypercube is that the mesh system need not be configured in powers of two. A disadvantage is that the diameter of the mesh network is greater than that of the hypercube for systems with more than four processors, so choosing between the two largely comes down to comparing their diameters, and hence their worst-case routing delays, for a given number of processors (see the sketch below).
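A small Python comparison (illustrative; it assumes a roughly square grid of side ceil(sqrt(P))) makes the diameter gap concrete: a 2-D mesh's diameter grows with the square root of the processor count, while a hypercube's grows only with its logarithm.

```python
import math

def mesh_diameter(n_procs: int, wraparound: bool = False) -> int:
    """Diameter of a (roughly square) 2-D mesh of n_procs processors.

    Assumes a side x side grid with side = ceil(sqrt(n_procs)); wraparound
    links (a torus) halve the worst-case distance in each dimension.
    """
    side = math.isqrt(n_procs)
    if side * side < n_procs:
        side += 1
    per_dim = side // 2 if wraparound else side - 1
    return 2 * per_dim

for p in (16, 64, 256):
    hypercube = p.bit_length() - 1  # hypercube diameter = log2(p) when p is a power of two
    print(p, "mesh:", mesh_diameter(p), "hypercube:", hypercube)
# 16  mesh: 6   hypercube: 4
# 64  mesh: 14  hypercube: 6
# 256 mesh: 30  hypercube: 8
```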

See also

  • SMP
  • NUMA
  • Flynn's taxonomy
  • SPMD
  • Superscalar
  • Very long instruction word

References

  1. http://perilsofparallel.blogspot.gr/2008/09/larrabee-vs-nvidia-mimd-vs-simd.html
  2. http://software.intel.com/en-us/articles/mimd
  3. Ibaroudene, Djaffer. "Parallel Processing, EG6370G: Chapter 1, Motivation and History." Lecture slides, St. Mary's University, San Antonio, Texas, Spring 2008.
  4. Andrew S. Tanenbaum (1997). Structured Computer Organization (4th ed.). Prentice-Hall. pp. 559–585. ISBN 978-0130959904.

