Ask anyone in geophysics and geology and they will tell you that visualization is extremely important to their job, yet what they refer to is not so much the renderings but the actual workflows and how they interact with the data. And that is part of the challenge — previous-generation visualization toolkits often allowed limited interaction with the data, and often in surprisingly unintuitive ways.

For example, users have been required to view and interact with their respective seismic data, models and reservoir grids in quite diverse ways. Modeling workflows typically allow the end-user to think and interact in terms of faults, geo-bodies and other polygonal shapes, yet almost all interpretation is done using rectangular boxes and probes, which is counterintuitive and inefficient. One could argue that interpretation and modeling are entirely different workflows and therefore call for different approaches to interaction and visualization, but more often the true answer lies in the limitations of technology — only now are approaches emerging that remove these artificial barriers between visualization, interaction and the appropriate (and often complex) workflows. Would it not be appropriate for end-users to interact with seismic data, models and reservoir grids with equal freedom and flexibility, so that the implicit knowledge in the different data relationships can be explored to its fullest?

Figure 1. Prestack for Interpreters enables the user to simultaneously view and interact with the poststack data (as served by Petrel natively) in combination with inline and cross line prestack common midpoint gathers. (All images courtesy of Hue)

“Blame the workflows”

Many vendors who promote visualization tend to ignore the importance of end-user workflows, instead promoting visual rendering features or scalable visualization technology as the “omni-cure” for all problems. It is important to understand that the choice of applications (or solution framework) and the choice of visualization solution need to go hand in hand.

For example, it is fully feasible to make a single-user, single-workstation application scale up in terms of visualization performance. In some cases, that might be the most appropriate thing to do, but only if visualization is the main bottleneck to overall workflow efficiency. This also implies that all other computations (e.g., post-processing, calculation of attributes, simulations and so on) are not the real bottlenecks. As any end-user will tell you, that is rarely the case.

The visualization revolution

To tell a tale from our own experience as a visualization toolkit vendor: HueSpace1 was the first commercially available toolkit for cluster-based volume visualization. It was introduced to the market in 2003 as an integral part of the GigaViz product from Schlumberger Information Solutions. HueSpace1 relied on the CPU for all computation and visualization, and its benefit was that it allowed end-users to interactively view very large datasets. Unfortunately, that was all it did: beyond cluster-based scalability, HueSpace1 offered end-users little else.

Three years later, after a significant investment in research and development, HueSpace2 debuted at the 2006 Society of Exploration Geophysicists meeting as the first commercial toolkit to fully exploit general-purpose GPU computing (GPGPU) for visualization and real-time, interactive computing. Because HueSpace2 was designed recently, we were able to think outside the box and take advantage of the latest advances in graphics processing unit (GPU) and CPU technology. GPUs, in the form of NVIDIA and ATI graphics cards, now outperform CPUs for parallel tasks such as visualization and signal processing. Where HueSpace1 initially focused on using multiple CPUs for plain-vanilla scalability, HueSpace2 uses (multi-)GPU compute power to address the fundamental challenge of visualization and computation: offering end-users intuitive, interactive workflows.

Figure 2. Prestack gathers can also be volume-rendered in a probe along with the poststack section.

Innovative prestack workflows

The HueSpace2 framework/toolkit bears little resemblance to the architecture of other existing toolkits. One of the ways it differs is that HueSpace2 assumes that data — and the relationships between data — are multidimensional in nature. Headwave (formerly Finetooth Inc.) uses the HueSpace2 technology for prestack/poststack quality control, prestack interpretation and related workflows, providing the end-user instant access to terabyte-sized prestack datasets. Prestack datasets are multidimensional by nature. The Headwave platform is able to present the actual seismic data in multiple ways (wiggle traces, 3-D prestack/poststack linkage and 4-D comparisons) and switch between different domains. The applications also maintain header information and all the metadata so that end-users and algorithms can use the metadata and the inherent data relationships as part of the workflows.
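
To make the idea concrete, the following is a minimal sketch in Python. It is not Headwave's or HueSpace2's actual data model or API; the header fields, array sizes and function names are hypothetical. The point is simply that once traces carry their headers, gathers and domain changes become queries on the metadata rather than fixed file layouts.

import numpy as np

# Hypothetical stand-in for a prestack dataset: trace samples stored next to the
# header values that describe them, so selections are queries on the headers.
n_traces, n_samples = 1000, 500
rng = np.random.default_rng(0)

headers = np.zeros(n_traces, dtype=[("inline", "i4"), ("xline", "i4"), ("offset", "f4")])
headers["inline"] = rng.integers(100, 110, n_traces)
headers["xline"] = rng.integers(200, 220, n_traces)
headers["offset"] = rng.uniform(0.0, 3000.0, n_traces)
samples = rng.standard_normal((n_traces, n_samples)).astype("f4")

def cmp_gather(inline, xline):
    """Return one common-midpoint gather, traces sorted by offset."""
    idx = np.flatnonzero((headers["inline"] == inline) & (headers["xline"] == xline))
    return samples[idx[np.argsort(headers["offset"][idx])]]

def common_offset_section(offset_min, offset_max):
    """Return all traces in an offset band, i.e. a crude change of sort domain."""
    idx = np.flatnonzero((headers["offset"] >= offset_min) & (headers["offset"] < offset_max))
    return samples[idx], headers[idx]

gather = cmp_gather(105, 210)                       # one gather, e.g. for wiggle display
near, near_hdr = common_offset_section(0.0, 500.0)  # the same data viewed in another domain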

Figure 1 shows Headwave’s Prestack for Interpreters plug-in for the Petrel Workflow Tools 2007 release. Here the user is able to simultaneously view and interact with the poststack data (as served by Petrel natively) in combination with inline and/or cross line prestack common midpoint gathers. The gathers are rendered in 3-D as 2-D cross sections that are linked to the poststack cross section. As the interpreter moves the prestack gathers along the poststack cross section, they are immediately updated and ready for further detailed analysis.
As an alternative, the prestack gathers can be volume-rendered in a probe along with the poststack section and all other data (grids, faults, attributes) in the Petrel project (Figure 2).

Headwave thus uses the HueSpace2 toolkit to load and serve data to the end-user. The prestack and poststack datasets are stored in a multidimensional, compressed format (with support for multiple sort orders). When a user pans through or interacts with the data, the GPUs decompress the data on the fly. Normally, accessing arbitrary parts of a terabyte-plus prestack dataset in real time would require hundreds of cluster nodes; the HueSpace2 engine handles this on a single PC workstation, even at full 32-bit resolution. Our benchmarking of wavelet compression/decompression shows that one GPU does the work of 200 CPUs.
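
A rough sketch of this access pattern follows. It is illustrative only: HueSpace2's wavelet codec runs on the GPU, whereas here zlib on the CPU stands in for it, and the brick layout and function names are our own. The essential idea is that the volume is stored as independently compressed bricks, and only the bricks a view actually touches are decompressed.

import zlib
import numpy as np

BRICK = 64  # brick edge length in samples (hypothetical choice)

def compress_volume(volume):
    """Split a 3-D volume into bricks and compress each brick independently."""
    store = {}
    nz, ny, nx = volume.shape
    for k in range(0, nz, BRICK):
        for j in range(0, ny, BRICK):
            for i in range(0, nx, BRICK):
                brick = np.ascontiguousarray(volume[k:k+BRICK, j:j+BRICK, i:i+BRICK])
                store[(k, j, i)] = (brick.shape, zlib.compress(brick.tobytes()))
    return store

def read_region(store, z0, z1, y0, y1, x0, x1):
    """Decompress only the bricks that intersect the requested sub-volume."""
    out = np.zeros((z1 - z0, y1 - y0, x1 - x0), dtype="f4")
    for (k, j, i), (bshape, blob) in store.items():
        if k >= z1 or j >= y1 or i >= x1:
            continue
        if k + bshape[0] <= z0 or j + bshape[1] <= y0 or i + bshape[2] <= x0:
            continue
        brick = np.frombuffer(zlib.decompress(blob), dtype="f4").reshape(bshape)
        zs, ys, xs = max(k, z0), max(j, y0), max(i, x0)
        ze, ye, xe = min(k + bshape[0], z1), min(j + bshape[1], y1), min(i + bshape[2], x1)
        out[zs-z0:ze-z0, ys-y0:ye-y0, xs-x0:xe-x0] = brick[zs-k:ze-k, ys-j:ye-j, xs-i:xe-i]
    return out

volume = np.random.standard_normal((128, 128, 128)).astype("f4")
store = compress_volume(volume)
slab = read_region(store, 0, 128, 60, 68, 0, 128)  # e.g. the slab behind one displayed section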

Interpreting workflows
Earlier we addressed the need for more intuitive workflows and why visualization toolkits play a role in enabling them.

Traditionally, probes used in interpretation workflows are box-shaped or, at best, limited by two defined surfaces such as seismic horizons. As every interpreter knows, geo-bodies and subsurface regions of interest take any shape or form. Being limited to box-shaped probes makes interpretation harder and more time-consuming. Ideally, any polygonal shape should be able to define a body or probe/region of interest, and there should be no distinction between “polygons” and “voxels.”

Removing these obstacles opens up interesting applications. First, it is possible to use any modeled or tracked surface or body as a probe or region of interest. Second, the identified probes or regions of interest can be rendered in a multitude of ways, e.g., as surfaces or volumes, since in effect they are “both” polygons and voxels. The tracked or modeled bodies can also be extracted (“what you see is what you get”) so that they can be reused as true geo-bodies for the remainder of the interpretation and modeling workflows. One could also imagine other geo-body-driven workflows where attributes are only computed for certain bodies or shapes (Figure 3). This is almost reminiscent of good old “coloring with pen and paper” (and having a pair of scissors handy) and is likely more intuitive both for the pen-and-paper generation and the Game Boy generation.

Figure 3. Volume visualization of tracked geo-bodies using HueSpace2. Each of the geo-bodies is a separate object and can be interacted with independently or in combination.
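
A small Python sketch of the same idea follows; it is illustrative only and not the HueSpace2 API, and the horizon shapes and sizes are invented. A probe is simply a boolean voxel mask bounded by two interpreted surfaces instead of a box, and the same mask can drive rendering or restrict an attribute computation to the body it encloses.

import numpy as np

nz, ny, nx = 200, 150, 150
rng = np.random.default_rng(1)
amplitude = rng.standard_normal((nz, ny, nx)).astype("f4")

# Two hypothetical horizons (depth index as a function of map position).
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
top = (60 + 10 * np.sin(xx / 25.0)).astype("i4")
base = top + 30 + (5 * np.cos(yy / 20.0)).astype("i4")

# Boolean probe: every voxel between the two surfaces belongs to the body.
z = np.arange(nz)[:, None, None]
probe = (z >= top[None, :, :]) & (z <= base[None, :, :])

# Attribute restricted to the probe, e.g. RMS amplitude of the enclosed voxels only.
rms_inside = float(np.sqrt(np.mean(amplitude[probe] ** 2)))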

Computation and visualization unite

True workflow efficiencies are achieved when the gap between visualization and processing is bridged transparently for the end-user. It used to be that generating certain attributes was so time- and resource-intensive that the attribute cubes would take days to output. Currently, it is possible to design workflows in which algorithms such as segmentation, connectivity analysis, volume deformation and processing are applied interactively as part of the visualization pipeline.
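
As a toy example of one such interactive step (the library choice and parameters are ours, not HueSpace2's), a threshold-and-connectivity pass can turn bright amplitudes into labeled bodies quickly enough to be re-run each time the user drags a threshold slider.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
# Smoothed random field standing in for a seismic attribute volume.
amplitude = ndimage.gaussian_filter(rng.standard_normal((128, 128, 128)), sigma=3)

def connected_bodies(volume, threshold):
    """Label connected regions above a threshold and return label ids sorted by size."""
    labels, n = ndimage.label(volume > threshold)
    sizes = np.bincount(labels.ravel())[1:]   # voxel count per body (skip background)
    order = np.argsort(sizes)[::-1]
    return labels, order + 1, sizes[order]

# Re-running this on each slider move keeps the displayed bodies in sync with the threshold.
labels, ids, sizes = connected_bodies(amplitude, threshold=np.percentile(amplitude, 97))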

As another example, it is now often much faster to generate a “visual attribute on demand” than to retrieve it from disk. New workflows can therefore be designed that avoid unnecessary and tedious work for the end-users. Instead, end-users can focus on the task at hand and immediately see results regardless of the underlying effort at the toolkit and application level.
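
The sketch below illustrates the idea with a hypothetical example of our own (not the toolkit's API): rather than precomputing and storing an envelope cube, the instantaneous amplitude is computed only for the section the user is currently viewing.

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
cube = rng.standard_normal((300, 200, 200)).astype("f4")   # (samples, inlines, crosslines)

def envelope_of_inline(cube, inline):
    """Instantaneous amplitude (trace envelope) of a single inline section."""
    section = cube[:, inline, :]                 # only the data actually on screen
    return np.abs(hilbert(section, axis=0))      # Hilbert transform along the time axis

attr_slice = envelope_of_inline(cube, inline=42)  # recomputed whenever the user moves the slice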

Over the next few years, geophysicists and geologists will see a shift in the usability of their subsurface applications. Next-generation data processing, interpretation, modeling and simulation applications will be able to take advantage of combined visualization and compute capabilities similar to those described in this article. We believe this combination will significantly improve end-user workflows, making them more intuitive and easier to use.