
Earth Simulator

The Earth Simulator (ES; 地球シミュレータ, Chikyū Shimyurēta), developed by the Japanese government's initiative "Earth Simulator Project", was a highly parallel vector supercomputer system for running global climate models to evaluate the effects of global warming and problems in solid earth geophysics. The system was developed for the Japan Aerospace Exploration Agency, the Japan Atomic Energy Research Institute, and the Japan Marine Science and Technology Center (JAMSTEC) in 1997. Construction started in October 1999, and the site officially opened on March 11, 2002. The project cost 60 billion yen.

Built by NEC, ES was based on their SX-6 architecture. It consisted of 640 nodes with eight vector processors and 16 gigabytes of computer memory at each node, for a total of 5120 processors and 10 terabytes of memory. Two nodes were installed per 1 metre × 1.4 metre × 2 metre cabinet. Each cabinet consumed 20 kW of power. The system had 700 terabytes of disk storage (450 for the system and 250 for the users) and 1.6 petabytes of mass storage in tape drives. It was able to run holistic simulations of global climate in both the atmosphere and the oceans down to a resolution of 10 km. Its performance on the LINPACK benchmark was 35.86 TFLOPS, which was almost five times faster than that of its predecessor, ASCI White.

ES was the fastest supercomputer in the world from 2002 to 2004. Its capacity was surpassed by IBM's Blue Gene/L prototype on September 29, 2004.

ES was replaced by the Earth Simulator 2 (ES2) in March 2009.[1] ES2 is an NEC SX-9/E system, and has a quarter as many nodes, each of 12.8 times the performance (3.2× clock speed, four times the processing resource per node), for a peak performance of 131 TFLOPS. With a delivered LINPACK performance of 122.4 TFLOPS,[2] ES2 was the most efficient supercomputer in the world at that point. In November 2010, NEC announced that ES2 topped the Global FFT, one of the measures of the HPC Challenge Awards, with a performance of 11.876 TFLOPS.[3]

Contents

  • 1 System overview
    • 1.1 Hardware
      • 1.1.1 System configuration
      • 1.1.2 Construction of CPU
      • 1.1.3 Processor Node (PN)
      • 1.1.4 Interconnection Network (IN)
      • 1.1.5 Processor Node (PN) Cabinet
    • 1.2 Software
      • 1.2.1 Operating system
      • 1.2.2 Mass storage file system
      • 1.2.3 Job scheduling
      • 1.2.4 Programming environment
    • 1.3 Facilities
      • 1.3.1 Protection from natural disasters
      • 1.3.2 Lightning protection system
      • 1.3.3 Illumination
      • 1.3.4 Seismic isolation system
    • 1.4 Performance
      • 1.4.1 LINPACK
      • 1.4.2 Computational performance of WRF on Earth Simulator
  • 2 See also
  • 3 References
  • 4 External links

System overview

Hardware

The Earth Simulator (ES for short) was developed as a national project by three governmental agencies: the National Space Development Agency of Japan (NASDA), the Japan Atomic Energy Research Institute (JAERI), and the Japan Marine Science and Technology Center (JAMSTEC). The ES is housed in the Earth Simulator Building (approx. 50 m × 65 m × 17 m). The upgrade of the Earth Simulator was completed in March 2009. The renewed system (Earth Simulator 2 or ES2) uses 160 nodes of NEC's SX-9E.

System configuration

The ES is a highly parallel vector supercomputer system of the distributed-memory type, and consists of 160 processor nodes connected by a Fat-Tree Network. Each processor node is a shared-memory system consisting of 8 vector-type arithmetic processors and a 128-GB main memory system. The peak performance of each arithmetic processor is 102.4 GFLOPS. The ES as a whole thus consists of 1280 arithmetic processors with 20 TB of main memory and a theoretical peak performance of 131 TFLOPS.
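
As a quick sanity check on these figures, the short C sketch below (a back-of-the-envelope calculation only, taking the node count, per-processor peak and per-node memory from the text above) reproduces the totals of 1280 arithmetic processors, roughly 131 TFLOPS of peak performance and 20 TB of main memory.

    /* Back-of-the-envelope check of the ES2 figures quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const int nodes = 160;
        const int aps_per_node = 8;
        const double gflops_per_ap = 102.4;  /* peak per arithmetic processor */
        const double gb_per_node = 128.0;    /* main memory per node */

        printf("total APs : %d\n", nodes * aps_per_node);               /* 1280 */
        printf("peak      : %.3f TFLOPS\n",
               nodes * aps_per_node * gflops_per_ap / 1000.0);          /* 131.072 */
        printf("memory    : %.0f TB\n", nodes * gb_per_node / 1024.0);  /* 20 */
        return 0;
    }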

Construction of CPU

Each CPU consists of a 4-way super-scalar unit (SU), a vector unit (VU), and a main memory access control unit on a single LSI chip. The CPU operates at a clock frequency of 3.2 GHz. Each VU has 72 vector registers, each of which has 256 vector elements, along with 8 sets of six different types of vector pipelines: addition/shifting, multiplication, division, logical operations, masking, and load/store. Vector pipelines of the same type work together on a single vector instruction, and pipelines of different types can operate concurrently.
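
To give a feel for how these pipelines are used, the generic C loop below is the classic shape of code a vectorizing compiler maps onto a vector unit: the load/store pipelines stream the operands into 256-element vector registers while the multiply and add pipelines process them. This is an illustrative sketch, not code from the NEC compiler documentation.

    /* A DAXPY-style loop, y[i] = a*x[i] + y[i]. On a vector CPU the compiler
     * turns this into vector loads, a vector multiply, a vector add and a
     * vector store, handling up to 256 elements per vector instruction. */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        static double x[N], y[N];
        const double a = 2.5;

        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

        for (int i = 0; i < N; i++)   /* no loop-carried dependence: vectorizable */
            y[i] = a * x[i] + y[i];

        printf("y[N-1] = %.1f\n", y[N - 1]);   /* 2.5 * 1023 + 1 = 2558.5 */
        return 0;
    }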

Processor Node (PN)

The processor node is composed of 8 CPUs and 10 memory modules.

Interconnection Network (IN)

The RCU is directly connected to the crossbar switches and controls inter-node data communication at a 64 GB/s bidirectional transfer rate for both sending and receiving data. Thus the total bandwidth of the inter-node network is about 10 TB/s (160 nodes × 64 GB/s).

Processor Node (PN) Cabinet

Two processor nodes are packaged per cabinet; each cabinet consists of a power supply part, 8 memory modules, and a PCI box with 8 CPU modules.

Software

Below is a description of the software technologies used in the operating system, job scheduling, and programming environment of ES2.

Operating system

The operating system running on ES is SUPER-UX, developed for NEC's SX Series supercomputers. The SX series are vector supercomputers designed, manufactured, and marketed by NEC. SUPER-UX is an operating system based on UNIX System V that incorporates functions from BSD and SVR4.2MP and, in addition, strengthens the functions necessary for a supercomputer. SUPER-UX, the Berkeley Software Distribution (BSD) and SVR4.2MP are Unix-based operating systems.

Mass storage file system

If a large parallel job running on 640 PNs reads from or writes to one disk installed in a PN, each PN accesses the disk in sequence and performance degrades terribly. Although local I/O, in which each PN reads from or writes to its own disk, solves the problem, it is very hard work to manage such a large number of partial files. ES therefore adopts staging and a Global File System (GFS) that offers high-speed I/O performance.
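
The staging scheme and GFS are specific to ES, but the underlying trade-off is generic: either many processes contend for one disk, or each process writes its own partial file and thousands of files must then be managed. As a hedged illustration of a common middle ground, the sketch below uses standard MPI-IO (not the ES GFS interface; the file name and sizes are arbitrary) so that every process writes its own block into a single shared file at a disjoint offset.

    /* Each MPI process writes its block into one shared file at its own
     * offset: a single output file, but no serialization through a single
     * writer. Generic MPI-IO, not ES-specific. */
    #include <mpi.h>

    #define COUNT 1024   /* doubles written per process (arbitrary) */

    int main(int argc, char **argv)
    {
        int rank;
        double buf[COUNT];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < COUNT; i++)
            buf[i] = rank + i * 1e-6;   /* some per-process data */

        MPI_File_open(MPI_COMM_WORLD, "result.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Disjoint byte offset per rank; the collective call lets the
         * library coordinate and merge the requests. */
        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }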

Job scheduling

ES is basically a batch-job system. Network Queuing System II (NQSII) is introduced to manage the batch jobs. ES has two types of queues: the S batch queue is designed for single-node batch jobs, and the L batch queue is for multi-node batch jobs. The S batch queue is aimed at pre-runs or post-runs of large-scale batch jobs (making initial data, processing simulation results, and other processes), and the L batch queue is for production runs. Users choose the appropriate queue for their jobs. Two strategies are adopted:

  1. The nodes allocated to a batch job are used exclusively for that batch job
  2. The batch job is scheduled based on elapsed time instead of CPU time

Strategy 1 makes it possible to estimate the job termination time and makes it easy to allocate nodes for the next batch jobs in advance. Strategy 2 contributes to efficient job execution: the job can use its nodes exclusively, and the processes in each node can be executed simultaneously. As a result, a large-scale parallel program can be executed efficiently. PNs of the L-system are prohibited from accessing the user disk to ensure sufficient disk I/O performance; therefore the files used by a batch job are copied from the user disk to the work disk before the job execution. This process is called "stage-in". It is important to hide this staging time in the job scheduling. The main steps of the job scheduling are summarized as follows:

  1. Node allocation
  2. Stage-in (copies files from the user disk to the work disk automatically)
  3. Job escalation (rescheduling for an earlier estimated start time, if possible)
  4. Job execution
  5. Stage-out (copies files from the work disk to the user disk automatically)

When a new batch job is submitted, the scheduler searches for available nodes (Step 1). After the nodes and the estimated start time are allocated to the batch job, the stage-in process starts (Step 2). The job then waits until the estimated start time after the stage-in process has finished. If the scheduler finds an earlier start time than the estimated one, it allocates the new start time to the batch job; this process is called "job escalation" (Step 3). When the estimated start time has arrived, the scheduler executes the batch job (Step 4). The scheduler terminates the batch job and starts the stage-out process after the job execution has finished or the declared elapsed time is over (Step 5). To execute a batch job, the user logs into the login-server, submits the batch script to ES, and waits until the job execution is done. During that time the user can see the state of the batch job using a conventional web browser or user commands. Node scheduling, file staging, and other processing are handled automatically by the system according to the batch script.
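
The toy C sketch below restates that lifecycle in code form. It is purely illustrative: the node count and times are invented and no real NQSII interface is used; it simply walks one job through the five steps, including an escalation to an earlier start time.

    /* Toy model of the batch-job lifecycle described above:
     * allocate nodes -> stage-in -> (possible escalation) -> run -> stage-out.
     * All values are invented; this is not NQSII code. */
    #include <stdio.h>

    typedef struct {
        int    nodes;          /* nodes allocated exclusively to the job */
        double est_start;      /* estimated start time (hours)           */
        double elapsed_limit;  /* declared elapsed-time limit (hours)    */
    } BatchJob;

    int main(void)
    {
        BatchJob job = { .nodes = 64, .est_start = 12.0, .elapsed_limit = 6.0 };

        printf("Step 1: allocated %d nodes, estimated start %.1f h\n",
               job.nodes, job.est_start);
        printf("Step 2: stage-in (user disk -> work disk)\n");

        double earlier_slot = 10.5;          /* slot found after stage-in */
        if (earlier_slot < job.est_start) {  /* job escalation */
            job.est_start = earlier_slot;
            printf("Step 3: escalated to earlier start %.1f h\n", job.est_start);
        }

        printf("Step 4: job runs for at most %.1f h on its exclusive nodes\n",
               job.elapsed_limit);
        printf("Step 5: stage-out (work disk -> user disk)\n");
        return 0;
    }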

Programming environment

Programming model in ES

The ES hardware has a 3-level hierarchy of parallelism: vector processing in an AP, parallel processing with shared memory in a PN, and parallel processing among PNs via the IN. To bring out the high performance of ES fully, you must develop parallel programs that make the best use of this parallelism. The 3-level hierarchy of parallelism of ES can be used in two manners, called hybrid and flat parallelization. In hybrid parallelization, the inter-node parallelism is expressed by HPF or MPI and the intra-node parallelism by microtasking or OpenMP, so you must consider the hierarchical parallelism when writing your programs. In flat parallelization, both inter- and intra-node parallelism can be expressed by HPF or MPI, and it is not necessary to consider such complicated parallelism. Generally speaking, hybrid parallelization is superior to flat in performance, and vice versa in ease of programming. Note that the MPI libraries and the HPF runtimes are optimized to perform as well as possible in both hybrid and flat parallelization.
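
As an illustration of the hybrid style, the minimal C sketch below (not taken from ES documentation; the array size and decomposition are arbitrary) uses MPI for the inter-node level and OpenMP for the intra-node level: each MPI process would map to a PN and its OpenMP threads to the APs within that node.

    /* Hybrid parallelization sketch: one MPI process per node (inter-node),
     * OpenMP threads inside each process (intra-node). Illustrative only. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000   /* local array size per process (arbitrary) */

    int main(int argc, char **argv)
    {
        int rank, size, provided;
        static double a[N];
        double local_sum = 0.0, global_sum = 0.0;

        /* Request thread support because OpenMP threads coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Intra-node parallelism: threads share the node's memory. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < N; i++) {
            a[i] = (double)(rank + 1);
            local_sum += a[i];
        }

        /* Inter-node parallelism: combine the partial sums across processes. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.1f (from %d processes)\n", global_sum, size);

        MPI_Finalize();
        return 0;
    }

In the flat style the same computation would simply run one MPI process per AP and drop the OpenMP directive.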

Languages

Compilers for Fortran 90, C and C++ are available. All of them have an advanced capability of automatic vectorization and microtasking. Microtasking is a sort of multitasking originally provided for Cray supercomputers, and the same function is used for intra-node parallelization on ES. Microtasking can be controlled by inserting directives into source programs or by using the compiler's automatic parallelization. Note that OpenMP is also available in Fortran 90 and C++ for intra-node parallelization.

Parallelization

Message Passing Interface (MPI)

MPI is a message passing library based on the MPI-1 and MPI-2 standards, and provides high-speed communication capability that fully exploits the features of the IXS and shared memory. It can be used for both intra- and inter-node parallelization. An MPI process is assigned to an AP in flat parallelization, or to a PN that contains microtasks or OpenMP threads in hybrid parallelization. The MPI libraries are designed and optimized carefully to achieve the highest communication performance on the ES architecture in both parallelization manners.
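
The minimal C example below sketches the flat style, with one MPI process per AP exchanging data around a ring; the neighbour pattern and message size are arbitrary illustrative choices rather than an ES-specific communication scheme.

    /* Flat-parallelization sketch: every process exchanges one value with
     * its ring neighbours using MPI_Sendrecv. Illustrative only. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double send_val, recv_val;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;         /* neighbour to send to      */
        int left  = (rank - 1 + size) % size;  /* neighbour to receive from */
        send_val = (double)rank;

        /* Combined send/receive avoids deadlock in the ring exchange. */
        MPI_Sendrecv(&send_val, 1, MPI_DOUBLE, right, 0,
                     &recv_val, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %.0f from rank %d\n", rank, recv_val, left);

        MPI_Finalize();
        return 0;
    }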

High Performance Fortran (HPF)

The principal users of ES are considered to be natural scientists who are not necessarily familiar with parallel programming, or who rather dislike it. Accordingly, a higher-level parallel language is in great demand. HPF/SX provides easy and efficient parallel programming on ES to meet this demand. It supports the specifications of HPF 2.0, its approved extensions, HPF/JA, and some unique extensions for ES.

Tools

- Integrated development environment: PSUITE

PSUITE is an integrated development environment that integrates various tools for developing programs that run on SUPER-UX. Because PSUITE allows these tools to be used through a GUI and coordinates them with one another, programs can be developed more efficiently and easily than with earlier development methods.

- Debug support

SUPER-UX provides strong debugging support functions to aid program development.

Facilities

Features of the Earth Simulator building

Protection from natural disasters

The Earth Simulator Center has several special features that help to protect the computer from natural disasters or occurrences. A wire nest hangs over the building, which helps to protect it from lightning. The nest itself uses high-voltage shielded cables to release lightning current into the ground. A special light propagation system utilizes halogen lamps, installed outside of the shielded machine room walls, to prevent any magnetic interference from reaching the computers. The building is constructed on a seismic isolation system, composed of rubber supports, that protects the building during earthquakes.

Lightning protection system

Three basic features:

  • Four poles at both sides of the Earth Simulator Building compose a wire nest to protect the building from lightning strikes.
  • Special high-voltage shielded cable is used as the inductive wire, which releases the lightning current into the earth.
  • Ground plates are laid at a distance of about 10 meters from the building.

Illumination

Lighting: light propagation system inside a tube (255 mm diameter, 44 m (49 yd) length, 19 tubes). Light source: halogen lamps of 1 kW. Illumination: 300 lx at the floor on average. The light sources are installed outside of the shielded machine room walls.

Seismic isolation system

11 isolators (1 ft height, 3.3 ft diameter, 20-layered rubbers) support the bottom of the ES building.

Performance

LINPACK

The new Earth Simulator system, which began operation in March 2009, achieved a sustained performance of 122.4 TFLOPS and a computing efficiency (*2) of 93.38% on the LINPACK Benchmark (*1).

  • *1 LINPACK Benchmark

The LINPACK Benchmark is a measure of a computer's performance and is used as a standard benchmark to rank computer systems in the TOP500 project. LINPACK is a program for performing numerical linear algebra on computers.
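
For a rough idea of what the benchmark measures, the toy C sketch below solves a small dense system Ax = b by Gaussian elimination with partial pivoting. The real benchmark (HPL) performs the same kind of factorization, but blocked and distributed over many processors on a matrix sized to fill most of the machine's memory.

    /* Toy dense solve Ax = b by Gaussian elimination with partial pivoting,
     * the kind of computation the LINPACK benchmark times (at tiny scale). */
    #include <stdio.h>
    #include <math.h>

    #define N 4

    int main(void)
    {
        double a[N][N] = {{4, 2, 1, 3}, {2, 5, 2, 1}, {1, 2, 6, 2}, {3, 1, 2, 7}};
        double b[N] = {10, 10, 11, 13};
        double x[N];

        /* Forward elimination with partial pivoting. */
        for (int k = 0; k < N; k++) {
            int p = k;
            for (int i = k + 1; i < N; i++)
                if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
            for (int j = 0; j < N; j++) {   /* swap rows k and p */
                double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
            }
            double t = b[k]; b[k] = b[p]; b[p] = t;
            for (int i = k + 1; i < N; i++) {
                double m = a[i][k] / a[k][k];
                for (int j = k; j < N; j++) a[i][j] -= m * a[k][j];
                b[i] -= m * b[k];
            }
        }

        /* Back substitution. */
        for (int i = N - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < N; j++) s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }

        for (int i = 0; i < N; i++) printf("x[%d] = %.4f\n", i, x[i]);
        return 0;
    }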

  • *2 Computing efficiency

Computing efficiency is the ratio of sustained performance to peak computing performance. Here, it is the ratio of 122.4 TFLOPS to 131.072 TFLOPS (122.4 / 131.072 ≈ 0.9338, i.e. 93.38%).

Computational performance of WRF on Earth Simulator

WRF (Weather Research and Forecasting Model) is a mesoscale meteorological simulation code which has been developed under a collaboration among US institutions, including NCAR (National Center for Atmospheric Research) and NCEP (National Centers for Environmental Prediction). JAMSTEC has optimized WRFV2 on the Earth Simulator (ES2), renewed in 2009, and measured its computational performance. As a result, it was successfully demonstrated that WRFV2 can run on ES2 with outstanding and sustained performance.

The numerical meteorological simulation was conducted by using WRF on the Earth Simulator for the earth's hemisphere with the Nature Run model condition. The model spatial resolution is 4486 by 4486 horizontally with a grid spacing of 5 km, and 101 levels vertically. Mostly adiabatic conditions were applied with a time integration step of 6 seconds. A very high performance on the Earth Simulator was achieved for high-resolution WRF. While the number of CPU cores used is only 1% of that of the world's fastest-class system, Jaguar (CRAY XT5) at Oak Ridge National Laboratory, the sustained performance obtained on the Earth Simulator is almost 50% of that measured on the Jaguar system. The peak performance ratio on the Earth Simulator is also a record-high 22.2%.

See also

  • Supercomputing in Japan
  • Attribution of recent climate change
  • NCAR
  • HadCM3
  • EdGCM

References

  1. ^ "Japan's Earth Simulator 2 open for business" 1 March 2009 
  2. ^ "Earth Simulator update breaks efficiency record" 5 June 2009 
  3. ^ ""Earth Simulator" Wins First Place in the HPC Challenge Awards" 17 November 2010 
  • Sato, Tetsuya 2004 "The Earth Simulator: Roles and Impacts" Nuclear Physics B Proceedings Supplements 129: 102 doi:101016/S0920-56320302511-8 

External links

  • The Earth Simulator Center
  • The Earth Simulator Overview
  • The Earth Simulator Center System info
  • The Earth Simulator Research Results Repository
  • Time Magazine: 2002 Best Inventions
  • Ultrastructure Simulations
Records
Preceded by: ASCI White (7.226 teraflops)
World's most powerful supercomputer: March 2002 – November 2004
Succeeded by: Blue Gene/L (70.72 teraflops)

Coordinates: 35°22′51″N 139°37′34.8″E / 35.38083°N 139.626333°E
