Larger dataset sizes and growing computational demands have driven a wave of developments to meet digital oilfield needs, powered by techniques that help companies better understand what is happening in the subsurface as exploration efforts progress.

E&P companies are finding data solutions by turning to systems developed by companies such as NVIDIA. Technical experts from several companies, including Petrobras, recently participated in Hart Energy’s three-part “Big Data and the Cloud” webinar series. The second installment in the series focused on how to make more accurate prospect decisions in less time with hybrid compute environments.

Nowadays, “it’s not enough just to have good seismic data. You need to have the content to be able to make better drilling decisions and well path locations. So you have data from electromagnetic surveys to help correlate that,” said Ty McKercher, an HPC solution architect from NVIDIA. “And every time you add another data type or a different data type or a different discipline, it introduces delays in the system.”

That’s where the graphics processing unit (GPU) system steps in, reducing delays and allowing diverse teams to collaborate and view that data. Such systems are affordable and let companies apply existing knowledge through a familiar tool, McKercher continued. GPUs are being used not only for reverse-time migration but also for wave-equation migration, Kirchhoff time and depth migration, and surface multiple elimination.

Placing a Sandy Bridge system and a GPU hybrid system side by side, McKercher pointed out how the GPU hybrid fares better. The Sandy Bridge configuration (84 central-processing-unit [CPU] servers) fills two racks, draws 25.2 kW and delivers 58 teraflops (TFLOPS, or trillions of floating-point operations per second) for an initial cost of US $600,000. The GPU hybrid system (34 servers plus 136 GPUs) fills one rack, draws 25.5 kW and delivers 323 TFLOPS for an initial cost of $400,000, making it roughly 5.5 times faster.

“You can use [fewer] servers and therefore use less power and cooling. You get a lower cost of ownership. It’s not just the initial cost that’s less; it’s the cost over time. And that allows you to have a faster return on investment,” McKercher said. “If you have alternative cooling solutions, you can fill that rack with even more resource, and the peak performance per wattage would go even higher.”

He noted that it’s important for application codes to let GPUs and CPUs cooperate so tasks can be performed simultaneously by the two. For example, data filtering can run on the CPU while computations run on the GPU.
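The overlap McKercher described can be expressed with CUDA streams, which let the host keep working while a kernel runs on the GPU. The following is a minimal sketch of that idea, not code shown in the webinar; the kernel, the filtering routine and the batch names are placeholders.

```cpp
// Sketch only: overlap CPU-side data filtering with GPU computation using a
// CUDA stream. image_kernel and filter_on_cpu are illustrative stand-ins.
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

__global__ void image_kernel(float* d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * d[i];                     // stand-in for migration math
}

static void filter_on_cpu(std::vector<float>& block) {
    for (float& v : block) v = (v > 1.0f) ? v : 0.0f;  // stand-in for trace filtering
}

int main() {
    const int n = 1 << 20;
    std::vector<float> batchA(n, 2.0f), batchB(n, 0.5f);

    float* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Launch GPU work on batch A asynchronously...
    // (pinned cudaMallocHost buffers would let the copy itself overlap as well)
    cudaMemcpyAsync(d_buf, batchA.data(), n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    image_kernel<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);

    // ...while the CPU filters batch B at the same time.
    filter_on_cpu(batchB);

    cudaStreamSynchronize(stream);   // wait for the GPU before reusing the buffer
    cudaFree(d_buf);
    cudaStreamDestroy(stream);
    printf("CPU filtering and GPU imaging overlapped for one batch pair\n");
    return 0;
}
```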

Paulo Souza of Petrobras’ geophysical technology group spoke on hybrid computing for seismic processing, pointing out more benefits of using GPUs. These included a large register file with up to 36 terabytes per second (TB/s) of bandwidth on the K10; shared memory at 3 TB/s; hardware dynamic load balancing within a single GPU; hardware interpolation; and vector gather/scatter support.
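One reason those on-chip resources matter for seismic kernels is that a thread block can stage a tile of the wavefield in shared memory once and then reuse it for every stencil read. The sketch below illustrates that pattern with a generic 1-D three-point stencil; it is not Petrobras code, and the kernel, sizes and coefficients are illustrative.

```cpp
// Minimal shared-memory stencil sketch: each block stages its tile (plus halo
// cells) once, then all neighbor reads come from fast on-chip shared memory.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void stencil1d(const float* in, float* out, int n) {
    __shared__ float tile[256 + 2];                  // block tile plus two halo cells
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + 1;

    if (gid < n) tile[lid] = in[gid];                // stage the interior once
    if (threadIdx.x == 0)                            // left halo
        tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1)               // right halo
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;
    __syncthreads();

    if (gid < n)                                     // neighbors come from shared memory
        out[gid] = 0.25f * tile[lid - 1] + 0.5f * tile[lid] + 0.25f * tile[lid + 1];
}

int main() {
    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    stencil1d<<<n / 256, 256>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    printf("smoothed %d samples using a shared-memory tile\n", n);

    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```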

Petrobras started using GPUs in 2006 after moving away from mainframes. Now, GPUs make up more than 90% of the company’s processing power, Souza said.

When using reverse-time migration on GPUs to image complex structures, Souza said, the velocity field is read once per job. Groups of GPUs are used to process a group of shots, one shot at a time per group, and the group’s shots are stacked in memory before going to disk about every three to six hours.
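In host-side pseudo-form, that job layout might look like the sketch below. The model loading, per-shot migration and disk flush are stand-in stubs rather than Petrobras routines, and the flush counter stands in for the three-to-six-hour timer Souza mentioned.

```cpp
// Host-side sketch of the reverse-time-migration job layout described above:
// the velocity field is read once per job, a group of GPUs migrates its shots
// one at a time, and the stacked image stays in memory between periodic flushes.
#include <vector>
#include <cstdio>

struct Shot { int id; };

// Stub: in a real job this would read the velocity model from disk once.
std::vector<float> load_velocity_model(size_t cells) {
    return std::vector<float>(cells, 2500.0f);       // constant 2500 m/s placeholder
}

// Stub: stands in for one shot migrated by the GPU group, returning a partial image.
std::vector<float> migrate_shot(const Shot& s, const std::vector<float>& vel) {
    return std::vector<float>(vel.size(), static_cast<float>(s.id));
}

// Stub: stands in for the periodic write of the stacked image to disk.
void flush_to_disk(const std::vector<float>& image) {
    std::printf("flushed stacked image of %zu cells\n", image.size());
}

int main() {
    const size_t cells = 1000;
    const auto velocity = load_velocity_model(cells);            // read once per job

    std::vector<Shot> shots_for_this_group = {{1}, {2}, {3}, {4}};
    std::vector<float> stacked(cells, 0.0f);

    int shots_since_flush = 0;
    for (const Shot& s : shots_for_this_group) {                  // one shot at a time
        const auto partial = migrate_shot(s, velocity);
        for (size_t i = 0; i < cells; ++i) stacked[i] += partial[i];  // stack in memory
        if (++shots_since_flush == 2) {                            // stand-in for the 3-6 hour timer
            flush_to_disk(stacked);
            shots_since_flush = 0;
        }
    }
    flush_to_disk(stacked);                                        // final write
    return 0;
}
```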

Souza also spoke about how to overlap computation and communication between GPUs, which requires a minimum bandwidth of 1.6 gigabytes per second (GB/s). The process involves breaking the border data into small pieces and starting a pipeline to overlap four communication stages.

For example, Souza demonstrated how five tasks run at once. “We have the GPU calculating the bulk of the model. We are moving data from the GPU to the host. Also, we are sending the data to the neighbor, receiving the data from the neighbor, and copying the data from the GPU. It is all done using 1.6 GB/s.”
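A rough CUDA sketch of that pipeline uses one stream for the interior update and a second stream for the border exchange, with the neighbor send and receive faked as a host memcpy where MPI (or a similar interconnect delivering the roughly 1.6 GB/s he cited) would sit in a real cluster. All kernel, buffer and size names are illustrative, not Petrobras code.

```cpp
// Sketch of overlapping interior computation with the four halo-exchange stages:
// GPU -> host, send to neighbor, receive from neighbor, host -> GPU.
#include <cuda_runtime.h>
#include <cstring>
#include <cstdio>

__global__ void interior_kernel(float* field, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) field[i] += 1.0f;                     // stand-in for the wavefield update
}

int main() {
    const int interior = 1 << 20, border = 1 << 16, pieces = 4;
    const int piece = border / pieces;

    float *d_interior, *d_border;
    cudaMalloc(&d_interior, interior * sizeof(float));
    cudaMalloc(&d_border, border * sizeof(float));
    cudaMemset(d_interior, 0, interior * sizeof(float));
    cudaMemset(d_border, 0, border * sizeof(float));

    // Pinned host staging buffers so the async copies can truly overlap.
    float *h_out, *h_in;
    cudaMallocHost(&h_out, border * sizeof(float));
    cudaMallocHost(&h_in,  border * sizeof(float));

    cudaStream_t compute, comm;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&comm);

    // Bulk of the model is computed on the compute stream...
    interior_kernel<<<(interior + 255) / 256, 256, 0, compute>>>(d_interior, interior);

    // ...while the border is exchanged in small pieces on the comm stream.
    for (int p = 0; p < pieces; ++p) {
        float* d_piece     = d_border + p * piece;
        float* h_piece_out = h_out    + p * piece;
        float* h_piece_in  = h_in     + p * piece;

        // Stage 1: copy this piece of the border from GPU to host.
        cudaMemcpyAsync(h_piece_out, d_piece, piece * sizeof(float),
                        cudaMemcpyDeviceToHost, comm);
        cudaStreamSynchronize(comm);

        // Stages 2-3: send to / receive from the neighbor (faked as a host memcpy).
        std::memcpy(h_piece_in, h_piece_out, piece * sizeof(float));

        // Stage 4: copy the received piece back down to the GPU.
        cudaMemcpyAsync(d_piece, h_piece_in, piece * sizeof(float),
                        cudaMemcpyHostToDevice, comm);
    }

    cudaStreamSynchronize(comm);
    cudaStreamSynchronize(compute);
    printf("interior update and %d-piece halo exchange overlapped\n", pieces);

    cudaFreeHost(h_out); cudaFreeHost(h_in);
    cudaFree(d_interior); cudaFree(d_border);
    cudaStreamDestroy(compute); cudaStreamDestroy(comm);
    return 0;
}
```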

By using GPUs, Petrobras saw gains of up to 10 times in performance per price and per watt over traditional architectures.

Contact the author, Velda Addison, at vaddison@hartenergy.com.