Driving R&D through supercomputing
When Titan is available to users in 2013, it will likely be the fourth time ORNL has hosted the world's fastest computer. The first was the Oak Ridge Automatic Computer and Logical Engine (ORACLE), created in the 1950s in partnership with Argonne National Laboratory to tackle problems in nuclear physics, radiation effects, and reactor shielding.
In 1995, ORNL acquired the Intel Paragon XP/S 150. At 150 GFlops (billion floating point calculations per second), it used 3,000 processors to deliver world-record speed in research areas including climate and groundwater modeling.
In 2000, ORNL became the first DOE laboratory to break the teraflop barrier (one trillion calculations per second), and sprinted through generations of increasingly powerful machines culminating in Jaguar, a petaflop (quadrillion calculations per second) computer in 2010. Jaguar, today with 300,000 processors, has generated world-class R&D across multiple disciplines including materials science, combustion, nuclear physics and astrophysics, biology, climate, and energy.
Of course, computing speed does not translate into great science without software—algorithms to perform the calculations and applications that convert scientific problems into computer simulations—and ORNL has an impressive history of software development, too. In the 1960s, ORNL theorists Dean Oen and Mark Robinson used computer modeling to discover ion channeling. The discovery was critical to understanding ion implantation in a variety of applications including integrated circuit fabrication, and it provides a compelling example of scientific discovery through computer simulation.
In the 1980s, ORNL made numerous breakthroughs in parallel computing, research that anticipated the future of high performance computing. ORNL's Parallel Virtual Machine (PVM) software, with more than 400,000 users, became the worldwide standard for clustering computers into a virtual supercomputer. In the 1990s, ORNL teamed with other national labs and IBM to develop an ultrafast data storage system, now the standard for supercomputers across the nation. That same decade, Jack Dongarra of UT/ORNL led the development of the Message Passing Interface as well as linear algebra algorithms now used on virtually all supercomputers.
Since 2000, progress at ORNL has continued at a blistering pace. Titan's hardware will be more than 10,000 times more powerful than that of the machines of 2000, and its software more than a thousand times more capable, a more than million-fold increase in capability in little more than a decade. This spectacular advance in performance is unprecedented, and it is likely to be followed by another factor of a thousand or more over the next decade.
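A quick back-of-the-envelope sketch shows how these figures combine (the 10,000× and 1,000× factors are the ones quoted above; treating them as independent multipliers is an assumption for illustration):

```python
# Back-of-the-envelope check of the combined capability gain since 2000.
# When hardware and software gains are independent, they multiply.
hardware_gain = 10_000   # quoted hardware speedup (assumed factor)
software_gain = 1_000    # quoted software/algorithm speedup (assumed factor)

combined = hardware_gain * software_gain
print(f"combined gain: {combined:,}x")  # prints "combined gain: 10,000,000x"
```

On these assumptions the combined gain is ten million-fold, comfortably above the "more than a million-fold" figure in the text.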
So what does this mean for science?
This issue of the Review provides a glimpse of that future, from the enabling hardware role of Titan's graphics processing units (GPUs), to crucial software advances at the ORNL Center for Accelerated Application Readiness, to transformative applications including virtual reactors, extreme nuclei, climate modeling, biofuels, and materials physics.
Titan will drive R&D in two ways. First, it will open up a set of problems that are impractical to attack with current supercomputing resources. Second, it will accommodate greater complexity in existing computer models, improving their accuracy and fidelity. As a result, we anticipate advances across broad reaches of science and technology including climate, seismology, combustion, materials, nuclear physics, biology, fluid dynamics, and fusion and fission energy.
These advances herald an era in which computer simulation stands alongside theory and experiment as a third leg of science and engineering, accelerating both discovery and design. ORNL is providing leadership for this revolution by fielding the most capable machines, developing the enabling software, and applying supercomputing across a broad spectrum of science and technology challenges.
Associate Director for
Science and Technology Partnerships
Oak Ridge National Laboratory