Multicore microprocessor architectures promise petaflops levels of performance - 1,000 times more powerful than today's teraflops computers - by the end of the decade. What steps should the industry take today to gain a competitive advantage from petascale computing tomorrow?

In December 1996, Intel broke the teraflops barrier, building the first computer to execute more than one trillion floating point operations per second. Today, a teraflops computer won't even crack the bottom rung of the Top500 Supercomputer Sites list of the world's most powerful computers. Assuming a continuation of historic trends, the Top500 organization expects to see a petaflops-level system atop the list by around 2009 and predicts the list will be fully populated by such systems by around 2015. Given that today's most powerful system exceeds 250 teraflops, it doesn't take much of a crystal ball to see that petascale computers - systems delivering half a petaflops or more - will be a reality in the very near future.
That's good news for exploration and production (E&P). Readers of this magazine know even better than I do the value of increased computing performance. With more powerful and cost-effective computing, oil and gas companies can analyze acquired data more rapidly and in ways never before possible. They can build higher-fidelity, more granular models, explore new physics and perform more sophisticated analysis, obtaining better results than ever before. This can enable new methods of asset exploitation, accelerate recovery and production cycles, increase yields, reduce risk and enhance profitability. It's no wonder that 10% of the computers on the November 2005 Top500 list are used for geophysics computing, and that number is growing.
But converting petascale potential into achieved business value is not automatic. To benefit from petascale computing, it is important to understand where petascale capacity is coming from and take the right steps to prepare for it.
The multicore inflection point
Parallel processing has been central to computing performance for more than two decades (see Figure 2). In the 1980s, massively parallel computers based on off-the-shelf microprocessors freed supercomputing from the constraints of special-purpose CPUs, water-cooled data centers and multi-million-dollar budgets. The 1990s saw increasing degrees of parallelism within the microprocessor, beginning with instruction-level parallelism and moving on to data-level and thread-level parallelism.
This evolution paved the way for the shift to multicore architecture that's occurring across the computing industry today. Multicore processors have two or more execution engines, or cores, within a single processor. Each processor plugs into a single socket, but the operating system perceives each core as a discrete processor with a full set of execution resources. Many of today's processors feature dual-core architectures. Intel has more than a dozen dual- and multicore projects on the drawing board and anticipates a future of many-core processors.
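To make that concrete, here is a minimal sketch - not taken from any Intel toolkit, and with a purely illustrative worker function - of what it means for the operating system to expose each core as a schedulable processor: a threaded C program asks the OS how many logical processors are online and starts one worker per core, leaving placement to the scheduler.

```c
/* Minimal sketch: one worker thread per core the OS reports.
 * The worker body is a placeholder; compile with -pthread. */
#include <stdio.h>
#include <unistd.h>    /* sysconf() */
#include <pthread.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld running - the OS may schedule it on any core\n", id);
    return NULL;
}

int main(void)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* logical processors visible to the OS */
    if (cores < 1)  cores = 1;                   /* keep the example bounded */
    if (cores > 64) cores = 64;
    printf("OS reports %ld logical processors\n", cores);

    pthread_t threads[64];
    for (long i = 0; i < cores; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (long i = 0; i < cores; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```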
Multicore processors can perform more work than single-core processors within a given clock cycle, enabling performance to scale dramatically. Hence, the rapid advance toward petascale computing. Intel's processor roadmap envisions putting a teraflop of performance on a single chip in the 2011 timeframe.
Multicore processing provides other important advantages. The traditional single-core approach achieved performance increases largely by boosting clock speeds. Unfortunately, higher clock frequencies also brought increased power consumption and heat. Multicore architectures minimize power consumption and heat dissipation. They also increase resource flexibility, offering the ability to assign different cores to different functions. Rather than relying on one large, heat-generating, power-sucking core, multicore architectures can activate just the cores whose functions are needed at any given time, powering down the others and enabling the processor to use only as much power as is needed. Specialized cores can be tailored to tasks such as image processing, speech recognition and communications processing, increasing system flexibility.
Paving the way
Petascale computing rests on a foundation of multicore architectures and extreme parallelism, which means that applications must be parallelized in order to exploit petascale performance. This presents less of a challenge for seismic processing applications, which can be easily divided into regions that can be manipulated independently and in parallel (a simple sketch of this kind of decomposition follows below). Even so, applications that run well on 64 processors may not scale to hundreds or thousands of cores. The challenge is greater still for other geophysics workloads, some of which will require significant rethinking in order to benefit from multicore architectures and petascale computing. Yet the petascale opportunity is too great to ignore, and the risk of losing ground to competitors who make the multicore transition is too real to sit this one out.
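As an illustration of that kind of decomposition - a hypothetical sketch, not drawn from any particular seismic code - the C fragment below splits a data volume into regions that carry no dependencies on one another and processes them in parallel with an OpenMP loop; filter_region() stands in for the real per-region kernel, and the region counts are arbitrary.

```c
/* Hypothetical sketch of region-by-region parallelism.
 * Build with an OpenMP-enabled compiler flag (e.g. -fopenmp);
 * without it the pragma is ignored and the loop runs serially. */
#include <stddef.h>

#define NREGIONS    64      /* illustrative region count */
#define REGION_SIZE 4096    /* illustrative samples per region */

/* Placeholder per-region kernel: each region is processed
 * independently, so regions can run in any order, in parallel. */
static void filter_region(float *region, size_t n)
{
    for (size_t i = 0; i < n; i++)
        region[i] *= 0.5f;  /* stands in for real signal processing */
}

void process_volume(float volume[NREGIONS][REGION_SIZE])
{
    /* Each iteration touches only its own region, so the loop
     * distributes cleanly across however many cores are available. */
    #pragma omp parallel for
    for (int r = 0; r < NREGIONS; r++)
        filter_region(volume[r], REGION_SIZE);
}
```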
Instead, E&P leaders should take the lead in preparing their organizations for multicore architectures and petascale computing. Envision how a thousand-fold increase in performance could impact the company's business, prioritize the processes that would benefit most and unite them in an integrated workflow. Explore new usage models. What will petascale performance enable a company to do in a new and different way, rather than just faster and better?
As microprocessor companies migrate to multicore architectures, the software development environment becomes a critical differentiator. Given that many codes are not written to take advantage of extreme parallelism, it's crucial to talk with platform and processor vendors about their programming environments and what capabilities they provide for parallelizing and optimizing software. For example, Intel recently began shipping five new tools that make it easier to develop distributed and parallel applications, including high-performance compilers, math libraries, a message passing interface (MPI) library, and debugging and performance analysis tools for clusters and parallel applications. Intel's research and development budget exceeds US $4 billion annually and includes numerous projects to advance the state of the art in parallelizing compilers and software development environments.
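For readers who haven't worked with MPI, the sketch below shows the basic rank/size pattern any MPI library supports; it is a generic illustration rather than Intel-specific code, and the final reduction simply stands in for combining per-node results. Compile with an MPI wrapper compiler such as mpicc.

```c
/* Generic MPI sketch: each process (rank) would work on its own
 * slice of the data; a reduction combines the per-rank results. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* processes in the job */

    /* Placeholder "result": in a real code this would be the output
     * of this rank's share of the computation. */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined result from %d ranks: %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```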
It's also important to work with the other businesses and organizations that will influence success in the new world of multicore and petascale computing. How are the developers of off-the-shelf software packages preparing to exploit multicore architectures and petascale computing performance? How well are colleges and universities training their students to understand and exploit extreme parallelism?
Extending Moore's Law
Mark Twain once wrote that the reports of his death had been greatly exaggerated. One can say the same thing of Moore's Law, the observation made by Intel co-founder Gordon Moore in 1965. Moore noted that the number of transistors on a chip had been doubling with each generation and predicted that it could continue to do so for at least a few more - and that the doubling would bring a transformative spiral of increasing performance and falling prices.
The extension of Moore's Law through multicore processing affords the E&P industry new opportunities for algorithm advances and enhanced abilities to support acquisition and analysis, exploration and recovery. There's plenty of work to be done to capitalize on the multicore paradigm shift, but E&P companies are not in it alone. In addition to developing robust multicore platforms and software environments, companies like Intel are committed to helping E&P companies achieve the benefits of multicore architectures. Experts who understand the energy industry and its technology challenges can help optimize new algorithm development for multicore architectures, design parallelization strategies for existing workloads and help companies prepare to gain competitive advantage from petascale computing.