This petascale supercomputer, built by IBM, was deployed at the Lawrence Livermore National Laboratory in 2012. It quickly replaced the K Computer as the world’s fastest, benchmarking 16 petaflops, and, running entirely on Linux, it shattered records for highest sustained performance at 10 petaflops. For the first time, a model of the electrophysiology of the human heart was able to run at near-realtime speed. Based on the Blue Gene/Q design, it sported over a million processor cores and a staggering 1 PB of memory, and in January 2013 it became the first supercomputer to use more than one million computing cores for a single application. It later dropped to number three on the TOP500 supercomputer list, displaced by Tianhe-2 and Titan.
The IBM Roadrunner is a supercomputer built for the Los Alamos National Laboratory; it is the world’s second-fastest supercomputer and was the first to achieve petaflop performance. A unique system built with off-the-shelf parts, it reached 1.026 petaflops on May 25, 2008. Costing $133 million, it is also the fourth most energy-efficient supercomputer in the world. The system is a hybrid of IBM PowerXCell 8i and AMD Opteron processors and runs Red Hat Enterprise Linux and Fedora. Occupying 6,000 square feet, it went operational in 2008. The Department of Energy uses it to simulate the aging of nuclear materials, as well as to crunch data for the science, financial, automotive, and aerospace industries. Its hybrid processor design makes it one of the most distinctive supercomputers ever built. It remained the world’s fastest until it was knocked to second place by the Cray XT5-HE Jaguar in 2009.
Blue Gene/L, IBM’s newest supercomputer, based on the original Blue Gene project, took the title of world’s fastest supercomputer from the Earth Simulator System in November 2004. With a sustained performance of 70.72 teraflops, it will be used for numerous high-performance computing applications. It has also spawned a commercial version, the eServer Blue Gene Solution, capable of 5.72 teraflops. IBM is currently building a full Blue Gene/L system for the Department of Energy’s NNSA/Lawrence Livermore National Laboratory in California.
Blue Gene/L’s most remarkable feature is its size. The completed system will take up space equivalent to half a tennis court, much smaller than most supercomputers, and will consume significantly less power, at 1.6 megawatts. The 32,000-processor prototype knocked the Earth Simulator off the top of the list; the machine delivered to Lawrence Livermore will have twice that many processors. It is valued at about $100 million.
NEC Corporation today announced the completion of its delivery of the ultra high-speed vector parallel computing system known as “the Earth Simulator,” to the Earth Simulator Center. The system is slated to begin operation on March 11, 2002.
The Earth Simulator was developed by the Earth Simulator Research and Development Center, which is a collaborative organization of the National Space Development Agency of Japan , Japan Atomic Energy Research Institute, and Japan Marine Science and Technology Center.
The Earth Simulator system was installed in the simulator building at the Yokohama Institute for Earth Sciences (Yokohama, Kanagawa). It is the world’s fastest supercomputer, configured with 640 nodes (64 GFLOPS/node, 5,120 CPUs in total), each of which consists of eight vector processors (8 GFLOPS/CPU), and it achieves a peak performance of 40 TFLOPS (40 trillion floating-point operations per second).
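The quoted figures are internally consistent; a quick arithmetic sketch (variable names are illustrative, not from any official source):

```python
# Sanity-check the Earth Simulator figures quoted above.
GFLOPS_PER_CPU = 8          # each vector processor: 8 GFLOPS
CPUS_PER_NODE = 8           # eight vector processors per node
NODES = 640

node_peak_gflops = GFLOPS_PER_CPU * CPUS_PER_NODE    # 64 GFLOPS per node
total_cpus = CPUS_PER_NODE * NODES                   # 5,120 CPUs in total
peak_tflops = node_peak_gflops * NODES / 1000        # 40.96, quoted as "40 TFLOPS"
```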
The Earth Simulator will create a “virtual planet Earth” on the computer through its capability to process the vast volumes of data sent from satellites, buoys, and other worldwide observation points. The system will help analyze and predict environmental changes on the Earth by simulating global-scale environmental phenomena such as global warming, the El Niño effect, atmospheric and marine pollution, torrential rainfall, and other complex environmental effects. It will also provide an outstanding research tool for explaining terrestrial phenomena such as tectonics and earthquakes.
IBM knocked NEC off the pedestal for fastest supercomputer in 2004 with the Blue Gene/L.
On December 6, 1999, IBM announced a new $100 million exploratory research initiative to build a supercomputer 500 times more powerful than the world’s fastest computers.
The new computer — nicknamed “Blue Gene” by IBM researchers — was designed to be capable of more than one quadrillion operations per second (one petaflop). This level of performance would make Blue Gene 1,000 times more powerful than the Deep Blue machine that beat world chess champion Garry Kasparov in 1997.
Blue Gene’s massive computing power was initially used to model the folding of human proteins, making this fundamental study of biology the company’s first computing “grand challenge” since the Deep Blue experiment. Learning more about how proteins fold is expected to give medical researchers a better understanding of diseases, as well as potential cures.
Deep Blue is at heart a massively parallel, RS/6000 SP-based computer system that was designed to play chess at the grandmaster level. In May 1997, the IBM supercomputer played a fascinating match with the reigning World Chess Champion, Garry Kasparov.
In 1985, a Carnegie Mellon doctoral student named Feng-hsiung Hsu began to develop a chess-playing computer called “ChipTest.” Twelve years and hundreds of checkmates later, ChipTest has evolved into what is now widely considered to be the greatest chess-playing computer ever constructed — Deep Blue. And this year, with improved capacity and a wealth of new chess knowledge, Deep Blue comes to the chessboard with more speed and power than ever before.
The origins of Deep Blue
The IBM Deep Blue project began when Hsu and Murray Campbell (Hsu’s classmate at Carnegie Mellon) joined IBM Research in 1989. It started as an effort to explore how parallel processing could be used to solve complex computing problems. The Deep Blue team at IBM — Hsu, Campbell, Joe Hoane, Jerry Brody and C.J. Tan — saw in this a classic research challenge: how to develop a chess-playing computer that could compete with the best chess players in the world.
Over the past few years, the team designed a chess-specific processor chip capable of examining and evaluating two to three million positions per second. The team joined this special-purpose hardware with IBM’s PowerParallel SP computer to increase its searching capabilities several hundred-fold.
The current iteration of the Deep Blue computer is a 32-node IBM RS/6000 SP high-performance computer, which utilizes the new Power Two Super Chip (P2SC) processors. Each node of the SP employs a single Micro Channel card containing 8 dedicated VLSI chess processors, for a total of 256 chess processors working in tandem. Deep Blue’s programming code is written in C and runs under the AIX operating system. The net result is a scalable, highly parallel system capable of calculating 100 to 200 billion moves within three minutes, which is the time allotted to each player’s move in classical chess.
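Those system-level numbers hang together; a back-of-the-envelope sketch (figures from the description above, names illustrative):

```python
# Back-of-the-envelope check of the Deep Blue figures quoted above.
NODES = 32
CHESS_CHIPS_PER_NODE = 8
total_chips = NODES * CHESS_CHIPS_PER_NODE    # 256 chess processors

MOVE_TIME_S = 3 * 60                          # three minutes per move
positions_low = 100e9                         # low end of "100 to 200 billion"

rate_low = positions_low / MOVE_TIME_S        # ~0.56 billion positions/second
per_chip_low = rate_low / total_chips         # ~2.2 million positions/second per chip
```

The per-chip rate that falls out of this arithmetic is on the order of a few million positions per second, matching the search rate of the chess chips themselves.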
In 1988, Cray Research introduced the Cray Y-MP®, the world’s first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops.
Supercomputers are the fastest type of computer. They are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation; weather forecasting, for example, requires a supercomputer. Other uses include animated graphics, fluid dynamics calculations, nuclear energy research, and petroleum exploration.
The Cray-2 vector supercomputer was released in 1985 by Cray Research as the successor to the Cray X-MP. At the time of its release it was the fastest computer in the world, bumping the X-MP off the top spot, and it was capable of 1.9 GFLOPS. The first Cray-2 had more physical memory than all previous Cray machines.
Developed mainly for the U.S. Department of Defense and the Department of Energy, it was used for nuclear weapons research and oceanographic work, and was also used by NASA and several universities. Because of its liquid cooling, the Cray-2 was nicknamed “Bubbles”. It was later surpassed as the world’s fastest by the Cray Y-MP in 1988.
The Connection Machine was the first commercial computer designed expressly to work on simulating intelligence and life. A massively parallel supercomputer with 65,536 processors, it was the brainchild of Danny Hillis, conceived while he was a graduate student under Marvin Minsky at the MIT Artificial Intelligence Lab. At its height, there were 70 installations of the Connection Machine around the world.
Departing from the conventional computer architecture of the time, it was modeled on the structure of a human brain: rather than relying on a single powerful processor to perform calculations one after another, data was distributed over tens of thousands of processors, all of which could perform calculations simultaneously. The structures for communication and transfer of data between processors could change as needed depending on the nature of the problem, making the mutability of the connections between processors more important than the processors themselves — hence the name “Connection Machine”.
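The data-parallel style described above — one operation applied across many elements at once rather than one after another — survives in modern array programming. A toy analogy (not Connection Machine code; NumPy’s arrays stand in for the processor grid):

```python
# Toy illustration of the data-parallel style: each array element plays
# the role of one processor, and a single operation updates all of them
# in one step instead of looping element by element.
import numpy as np

cells = np.arange(8)             # one value per "processor": 0..7
neighbors = np.roll(cells, 1)    # every processor reads its neighbor's value
updated = cells + neighbors      # all eight additions happen in one step
```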
First launched in 1982, this system was capable of 500 megaflops and was the first multiprocessor supercomputer. It ran the company’s first UNIX-based operating system, UNICOS. The descendant of the Cray-1, it was built by Cray Research. By 1986, the system’s four-processor X-MP/48 model had a theoretical peak performance of 800 megaflops. Its speed was more than double that of comparable machines, and the X-MP was Cray’s most successful machine.