At its most basic, the major task of the upstream segment of the oil and gas industry is to use well and seismic data, both static and dynamic, to predict reservoir volumes and performance. The workflow and supporting tools that enable this process are deeply integrated at the beginning and concluding stages, specifically the areas of seismic interpretation and, to a lesser degree, reservoir simulation. Between those workflow stages, the current reservoir modeling process is highly specialized, poorly integrated and limits the number of fields that can benefit from reservoir simulation. As illustrated in Figure 1, this part of the workflow is referred to as the "valley of death."

The west side of the valley

Currently in our industry, there is a highly integrated workflow that starts with seismic volumes and well data and ends with a set of interpreted surfaces and faults. Driven by the 3-D seismic explosion of the 1990s, literally thousands of users of this technology have emerged, contributing to a major increase in productivity while enhancing success in finding and delineating fields. The interpretation process can provide 2-D surface models of the pertinent geologic features and general areas of likely hydrocarbon occurrence based on seismic attributes or direct hydrocarbon indicators.

Surfaces, however, do not produce hydrocarbons, and reservoir simulation generally requires a full 3-D model of the reservoir and its petrophysical attributes. Seismic attributes provide valuable conditioning for property modeling and are commonly used in the placement of development wells; however, they are not directly usable in the reservoir simulation process. While the resulting seismic and well data interpretations are the foundations on which the 3-D earth model is built, current integrated workflows stop at this point and require users to export data into application-specific formats in order to "cross the valley."

The east side of the valley

At the other end of the workflow is a prediction of future production from the field: rates of oil, gas, water and pressure for a given development scenario over time. These data are required by thousands of users and form the basis on which drilling programs, facilities design and the overall development plan for the reservoir are determined. There are several methods used today to predict performance, from simple decline curves to reservoir simulation. The most robust of these methods, in particular for the development design and early production phases of a field, is reservoir simulation.

With the advent of parallel reservoir simulation running on relatively inexpensive Windows and Linux clusters, the old limitations of multi-day run times on high-cost, high-maintenance hardware have been reduced significantly. Modern Windows graphical user interfaces make setting up a reservoir simulation run much smoother. However, the workflow bottleneck that causes 3-D models to take weeks and sometimes months to build still limits the number of assets to which a company can bring the power of reservoir simulation.

The valley exposed

Between these two major workflows is reservoir modeling, currently a complex, non-integrated process. As illustrated in Figure 1, the process requires moving out of the integrated interpretation environment via ASCII files of one form or another, and then another ASCII transfer to move back into performance prediction.

Reservoir modeling is currently a guru-focused process practiced in our industry by only hundreds of specialists. While this is an area with interesting and often complex science, the main factor causing this major productivity break is non-integrated, complicated software packages with little concept of data or model management. Some companies "cross" the valley by filling it with people, both staff and consultants, to make up for the complexity and non-integrated nature of the process. Other companies simply limit the number of properties for which they build reservoir models and thereby limit the number of properties that can use reservoir simulation.

Designing a next-generation earth modeling system that effectively crosses the valley requires:

* direct integration with a fully integrated interpretation system;
* building models that can be run directly in today's reservoir simulators; and
* modern, robust management and sharing of the resulting models and the data from which they were built.

These capabilities, while difficult to add to an existing software package, provide a strong base from which all the necessary functional tools can then be built. A new software platform promises to bridge the gap.

Features of an integrated reservoir modeling system

To create a package that can meet the challenge of fording the valley of death, several features are required that are not found in current reservoir modeling packages. In many cases, it is only recently that commercial hardware has been able to support these requirements.

"Common everything." Reservoir modeling requires integrating a great number of well and seismic data types in addition to interpretations derived from those data. To make this process less daunting and more rapid, it is critical that one viewing technology enables the display and manipulation of this wide range of objects. Figure 2 is an example of such a 3-D viewer, in this case displaying two different 3-D depth seismic volumes, one set of interpreted horizons and faults, and a series of wells with several well logs and a set of interpreted tops.

While it is important to display and manipulate this wide variety of data types, it is equally important to have this display draw from the same shared database that is supporting the rest of the asset team. Sharing this database will eliminate the need to change data structures for reservoir modeling, as well as eliminate the "slope" on the west side of the valley. With a shared database, updates of new well data and new interpretations will be immediately available to visualize and change the model as needed. Log curve and seismic attribute data can be directly read into property modeling methods without any data movement or reformatting.

Assisted framework building. Traditionally, creating a fault and horizon network is a manually intensive process that often takes many days. It is important to provide automated tools to shorten structural framework and stratigraphic grid building. The user selects the fault surfaces to be used for forming a sealed network of faults (Figure 3), and the software automatically forms a network of fault surfaces that cut and truncate each other as appropriate. The user provides quality control and editing where necessary. Within the framework, generation of the stratigraphic grid honoring complex fault systems is accomplished with little user input beyond specification of the desired grid azimuth and cell size.
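To give a feel for how little input the areal layout actually needs, here is a minimal sketch in Python (hypothetical code, not the product's algorithm) that positions cell corners from just an origin, azimuth and cell size; the structural framework of faults and horizons then shapes the grid vertically:

```python
import math

def areal_grid_nodes(x0, y0, azimuth_deg, dx, dy, nx, ny):
    """Lay out (nx+1) x (ny+1) cell-corner locations for an areal grid
    rotated to the requested azimuth (degrees clockwise from north)."""
    az = math.radians(azimuth_deg)
    ix, iy = math.sin(az), math.cos(az)    # unit vector along the grid I axis
    jx, jy = math.cos(az), -math.sin(az)   # unit vector along the grid J axis
    return [[(x0 + i * dx * ix + j * dy * jx,
              y0 + i * dx * iy + j * dy * jy)
             for j in range(ny + 1)]
            for i in range(nx + 1)]

# The user supplies only an origin, azimuth and cell size (values here
# are illustrative); everything else comes from the framework.
corners = areal_grid_nodes(x0=450_000.0, y0=6_780_000.0,
                           azimuth_deg=35.0, dx=100.0, dy=100.0,
                           nx=50, ny=80)
```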

Models ready for simulation. While building reservoir models for reservoir simulation use is not the only driver for reservoir modeling, it is the most common one. Reservoir modeling remains in the critical path for reservoir simulation, and it is important that there be a two-way link between modeling and simulation that does not change or distort the models themselves. The best way to accomplish this is to use a modeling topology suitable for direct use in today's reservoir simulators: corner-point grids. Figure 4 shows the 3-D viewer with both a reservoir property model and the results of a reservoir simulation run at time zero.
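To make the corner-point idea concrete, the sketch below (a simplified illustration in the spirit of the widely used COORD/ZCORN convention, not any vendor's internal format) hangs eight corner depths per cell on straight pillar lines, which lets cells displace across faults while remaining directly loadable by common simulators:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CornerPointGrid:
    """Minimal corner-point grid: pillars plus per-cell corner depths."""
    nx: int
    ny: int
    nz: int
    coord: np.ndarray   # (ny+1, nx+1, 6): (x, y, z) at top and base of each pillar
    zcorn: np.ndarray   # (nz, 2, ny, 2, nx, 2): 8 corner depths per cell
    actnum: np.ndarray  # (nz, ny, nx) booleans flagging active cells

    def cell_corners(self, i, j, k):
        """Eight corner depths of cell (i, j, k); corners on a shared
        pillar may differ between neighboring cells, which is how
        fault displacement is stored."""
        return self.zcorn[k, :, j, :, i, :]
```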
The common practice today is that the geologist builds a model at a scale fine enough to capture all significant reservoir heterogeneity, and the reservoir engineer then upscales this model to make reservoir simulation tractable. The challenge is to preserve enough of the geologic heterogeneity to properly reflect flow properties. While upscaling will remain an important tool for many reservoirs (see the sketch after this list), there are two techniques that minimize the need for it:

1. Optimized geologic-scale models. Much of the run time in conventional reservoir simulation is wasted on non-optimal reservoir models. Problems such as non-neighbor connections, small sliver cells and non-orthogonality are the main contributors. If geologic models are produced with a minimum of these problems, upscaling may not be necessary to achieve acceptable simulation performance.

2. Geologic models built with flow modeling in mind. In today's industry, it is not uncommon for geologists and engineers to work directly together. The modeling software should therefore enable a common grid design that meets the needs of both disciplines. This requires building a grid whose density varies: fine cells where detail is needed, around wells or in zones of strong heterogeneity, and coarse cells in aquifers or areas of homogeneous geology. This would allow one grid to be used for both reservoir modeling and simulation.
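Where upscaling is still used, even the crudest averaging illustrates the heterogeneity problem described above. The sketch below (illustrative Python, not a flow-based upscaler) coarsens a permeability column by arithmetic and harmonic means; a wide gap between the two warns that a coarse cell is hiding a flow barrier:

```python
import numpy as np

def upscale_perm(k_fine, factor):
    """Coarsen a 1-D permeability column by simple averaging.

    The arithmetic mean bounds flow parallel to the layers, the
    harmonic mean bounds flow across them; when they diverge, simple
    upscaling is destroying geologic heterogeneity.
    """
    k = np.asarray(k_fine, dtype=float).reshape(-1, factor)
    arithmetic = k.mean(axis=1)
    harmonic = factor / (1.0 / k).sum(axis=1)
    return arithmetic, harmonic

# A thin shaly streak (0.1 md) inside good sand (500 md): the harmonic
# mean collapses, showing how one coarse cell can mask a barrier.
arith, harm = upscale_perm([500, 500, 0.1, 500], factor=4)
print(arith, harm)   # ~[375.0] vs ~[0.4]
```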

Workflow benefits

With the valley of death crossed, asset teams will be able to gain significant benefits from having a rapid, integrated set of reservoir modeling tools.

Incorporation of multiple interpretation scenarios. In many ways, reservoir modeling is simply an extension of the interpretation process. Interpreters commonly carry several potential structural interpretations for a field, but the difficulty and time-consuming nature of building a reservoir framework generally limit teams to modeling just the most likely case. This tends to mask the uncertainty around the model and limits the ability to see the effect of different interpretations on reservoir flow.

Assisted/automated well placement. Traditionally, well planning has been a tedious manual procedure. The well path tool finds a set of targets and plans paths for the wells back to platform locations, given geometric and spacing constraints, and reports the candidate well locations along with the number of drilling days. The wells are optimized against an objective function together with one or more data filters. Figure 5 shows the oil producer and gas injector locations with their planned paths to surface platforms. These were generated literally in a matter of minutes using the constraints shown in the table. The data filters ensured that the gas injectors are located high in the structure above the gas-oil contact and the oil producers are located in the oil zone.
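The article does not describe the tool's internals, but the idea of ranking targets with an objective function and data filters can be sketched as follows (a toy greedy selection in Python; the field names, score and spacing rule are assumptions for illustration only):

```python
import numpy as np

def pick_targets(cells, role, goc, owc, n_wells, min_spacing):
    """Rank candidate cells by an objective score and apply data filters.

    `cells` is a NumPy structured array with fields x, y, depth, score
    (score might be hydrocarbon pore volume). Gas injectors are kept
    above the gas-oil contact; oil producers in the oil leg between
    the gas-oil and oil-water contacts.
    """
    if role == "gas_injector":
        keep = cells["depth"] < goc                     # high in the structure
    else:
        keep = (cells["depth"] >= goc) & (cells["depth"] < owc)
    ranked = np.sort(cells[keep], order="score")[::-1]  # best score first
    targets = []
    for c in ranked:                                    # greedy spacing check
        if all(np.hypot(c["x"] - t["x"], c["y"] - t["y"]) >= min_spacing
               for t in targets):
            targets.append(c)
        if len(targets) == n_wells:
            break
    return targets
```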

Optimized field development. In most E&P companies today, reservoir modeling and simulation are carried out only at the major decision points in the life of a field, and only for the largest of these fields. Because of the tedious nature of present-day reservoir modeling, it is difficult to update the models at the speed of a drilling program; models and the reservoir simulations built on them are updated only at the end of major drilling programs, when there is sufficient time to rebuild them. With reservoir modeling integrated into both the interpretation and flow modeling workflows, updates can take place at the pace of the drilling program, and the models and simulations can be used to influence the locations of subsequent wells.

What's next?

With the valley filled, asset team productivity will continue to increase during the next several years. This pace will slow only when the limits of traditional surface modeling and corner-point gridding begin to impede the workflows. The next step-change will occur when a new topology is developed that lets geoscientists truly interpret seismic volumes, with surfaces becoming secondary features. The same unstructured topology could also support property modeling and reservoir simulation. This may sound far-fetched today, but it is an active and promising area of research. With tools like this, asset teams will not only cross the last integration frontier but will fly over it.

For more information about Landmark's DecisionSpace PowerModel, visit www.lgc.com.