Because permeability in shale plays is largely created rather than encountered, connecting reservoir models to engineering data has become essential. Unlike conventional reservoirs, in which significant darcy-scale permeability exists in situ before drilling and completion, most of the effective permeability in a shale well is created during hydraulic fracture stimulation, which is an engineering activity. Hydrocarbon flow occurs within the effectively stimulated reservoir volume, and good shale reservoir characterization techniques should help identify rock with sufficient reservoir quality and physical properties to be productively stimulated.
Using its 3-D modeling software, CRYSTAL, as a platform, SIGMA has developed an integrated reservoir characterization workflow tailored for shale plays and tied to production data through finite element flow simulation. This workflow directly estimates reservoir properties such as total organic carbon (TOC), porosity, resistivity, brittleness, and natural fracture density based on correlations with well logs, formation micro-imager (FMI) logs, and core data. When the proper reservoir properties and thresholds are used, the resulting Shale Capacity model is an accurate representation of the rock volume that can potentially contribute to production when effectively stimulated. This workflow was validated with production data in a finite element flow simulation on a Marcellus shale project in Pennsylvania.
Marcellus case study
This case study covers a 104-sq-km (40-sq-mile) area in Pennsylvania with pre- and post-stack 3-D seismic data coverage, six wells with production data, four pilot holes with substantial log suites, and one FMI log. In this area, the Marcellus shale is at least 152 m (500 ft) thick and has an upper and lower member divided by a laterally continuous carbonate called the Cherry Valley formation. Most of the horizontal wells have been landed near the base of the Lower Marcellus formation just above the carbonate Onondaga formation.
Significant production variation has been observed in the field, with up to fourfold variation in initial production rates and estimated ultimate recoveries for wells within 5 km (3 miles) of one another. Because well and completion designs as well as stratigraphic target zones have been held fairly constant in the area, reservoir characterization was recommended to understand the geologic heterogeneity and shed light on the production variation.
Seismically driven workflow
When available, 3-D seismic data provide vital information about key reservoir properties. However, a rigorous treatment of the seismic data is required to yield useful information. In many cases, operators have struggled to realize value from their seismic data due to limitations in signal-to-noise ratio, resolution, or lack of fold and aperture in seismic acquisition. The first step in any reservoir modeling workflow is to quality-control (QC) seismic data and maximize seismic resolution.
Seismic data must be enhanced with care taken to preserve relative amplitude characteristics and improve resolution without creating data that do not exist within the spectrum of the original seismic data. The details of the enhancement algorithms used are beyond the scope of this article, but adaptive gradient filtering and broadband spectral inversion have become standard steps in the workflow. Improvements to the seismic data early in the workflow reap benefits in subsequent steps and have a material impact on the efficacy of the final model (Figure 1).
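As an illustration of that constraint, the sketch below is a hypothetical QC check (not one of the enhancement algorithms themselves): it compares the amplitude spectra of an original and an enhanced trace and flags frequencies where energy appears only in the enhanced version. The sample interval, noise floor, and synthetic traces are assumptions.

```python
# Hypothetical QC sketch: verify that an "enhanced" trace stays within the
# bandwidth of the original data rather than inventing frequencies.
import numpy as np

def amplitude_spectrum(trace, dt=0.002):
    """Return frequencies (Hz) and the normalized amplitude spectrum of a trace."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    return freqs, spec / spec.max()

def check_bandwidth(original, enhanced, dt=0.002, noise_floor=0.05):
    """Flag frequencies where the enhanced trace has energy but the original
    is effectively at the noise floor -- a sign of created data."""
    f, orig_spec = amplitude_spectrum(original, dt)
    _, enh_spec = amplitude_spectrum(enhanced, dt)
    suspect = (orig_spec < noise_floor) & (enh_spec > noise_floor)
    return f[suspect]          # frequencies that should be reviewed

# Toy example: the "enhanced" trace deliberately adds 90-Hz energy
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 0.002)
original = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
enhanced = original + 0.2 * np.sin(2 * np.pi * 90 * t)
print(check_bandwidth(original, enhanced))
```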
Once the seismic data have been enhanced, several types of attributes are created to thoroughly examine the information available in the data. Experience has shown that a single seismic attribute rarely tells the whole story needed for reservoir understanding, unconventional or not. Curvature attributes are generated to help understand the distribution of folds, faults, and natural fractures. Spectral attributes emphasize geologic features at different frequencies and energy levels to highlight information not seen in the full stack of data. Acoustic and elastic inversions provide yet another view of the data, highlighting stratigraphy and rock physical properties.
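As one concrete example of the attribute calculations, the sketch below computes a simple mean-curvature attribute from a gridded horizon using second derivatives, a common way to highlight folds, faults, and fracture-prone flexures. It is not the vendor's algorithm, and the grid spacing and synthetic surface are illustrative.

```python
# Illustrative horizon-based curvature from second derivatives of a gridded surface.
import numpy as np

def mean_curvature(z, dx=25.0, dy=25.0):
    """Approximate mean curvature of a horizon grid z(y, x), in 1/length units."""
    zy, zx = np.gradient(z, dy, dx)        # first derivatives (axis 0 = y, axis 1 = x)
    zyy, zyx = np.gradient(zy, dy, dx)     # second derivatives
    zxy, zxx = np.gradient(zx, dy, dx)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

# Synthetic anticline as a stand-in for a picked horizon
x, y = np.meshgrid(np.linspace(-1000, 1000, 81), np.linspace(-1000, 1000, 81))
z = -2500 + 50 * np.exp(-(x**2 + y**2) / 4e5)
print(mean_curvature(z).max())
```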
Neural nets
Many of the seismic attributes and inversions used in the workflow produce several attributes each. The resulting list of attributes could be quite large, possibly approaching or even exceeding 100. To make sense of this large dataset and conditionally correlate it to reservoir properties from well logs, machine learning algorithms employing both fuzzy logic and supervised neural nets are used.
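A minimal sketch of the supervised step, using scikit-learn's MLPRegressor as a stand-in for the proprietary neural net: seismic attributes extracted at well locations are regressed against a log-derived property such as TOC, and the trained model then predicts that property in every grid cell. All array names, shapes, and values are illustrative assumptions.

```python
# Sketch only: a generic supervised neural net mapping seismic attributes
# at well locations to a well-log property, then applied to the whole grid.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# attrs_at_wells: (n_samples, n_attributes) attributes sampled along well paths
# toc_log: (n_samples,) TOC derived from log and core correlations
rng = np.random.default_rng(1)
attrs_at_wells = rng.normal(size=(500, 12))
toc_log = 2.0 + attrs_at_wells[:, 0] - 0.5 * attrs_at_wells[:, 3] + 0.1 * rng.normal(size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(attrs_at_wells, toc_log)

# Predict TOC directly in every geocellular grid cell from its own attributes
attrs_grid = rng.normal(size=(10_000, 12))     # one row per grid cell
toc_model = model.predict(attrs_grid)
```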
This is a significant and important departure from traditional reservoir modeling and mapping workflows. Traditional modeling and mapping algorithms – inverse distance-squared, kriging, co-kriging, sequential Gaussian simulation, and Boolean algorithms, for example – use relatively sparse well data to estimate unknown points that may be spatially remote from those data, thus biasing the quality of the result to the proximity of well data. By using a proprietary neural net, unknown points are estimated directly from seismic data located in the very same node or grid cell.
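To make the contrast concrete, the snippet below shows an inverse-distance-squared estimate of a property at a single grid cell; a cell far from all wells is dominated by whichever well happens to be least distant, whereas the attribute-driven approach above estimates the property from the seismic data collocated with that cell. Coordinates and values are made up.

```python
# Proximity-biased estimation: inverse distance squared at one grid cell.
import numpy as np

def idw_estimate(cell_xy, well_xy, well_values, power=2.0):
    """Inverse-distance-weighted estimate of a property at a single cell."""
    d = np.linalg.norm(well_xy - cell_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.sum(w * well_values) / np.sum(w)

wells = np.array([[0.0, 0.0], [1500.0, 300.0], [2200.0, -900.0]])
toc_at_wells = np.array([3.1, 2.4, 4.0])
print(idw_estimate(np.array([5000.0, 5000.0]), wells, toc_at_wells))  # remote cell
```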
Well control
The principle behind the neural net is elegant in its simplicity. The algorithm seeks the dataset with the best conditional correlation to a reservoir property from well log data and then uses that correlation set to estimate properties directly from the seismic data. The human brain seeks to do the very same thing with a few seismic attributes, but the neural net allows a much larger dataset to be considered. In practice, all available attributes are ranked by their correlation to a given well property, with the highest correlation normalized to 100%. The human interpreter must then select a subset (usually six to 15) of the properties that have high correlations to well data and also have a physical explanation for their correlation to the property. Finally, 3-D models are created using the derived conditional correlation and are QC'd against blind well data. The blind well or wells are excluded entirely from the modeling and are the ultimate benchmark for the success of the modeling. The neural net operates on a fully structured 3-D geocellular depth grid.
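The sketch below illustrates the ranking and blind-well steps under those assumptions: each candidate attribute is scored by its correlation with the target well property (normalized so the best attribute scores 100%), and a held-out well provides an error measure for the final model. Function and array names are hypothetical.

```python
# Sketch: rank candidate attributes by correlation to a well property,
# and score the final model against a blind well.
import numpy as np

def rank_attributes(attr_table, target, names):
    """attr_table: (n_samples, n_attrs) attributes sampled at well locations;
    target: (n_samples,) well-log property. Returns (name, score) high to low."""
    corrs = np.array([abs(np.corrcoef(attr_table[:, i], target)[0, 1])
                      for i in range(attr_table.shape[1])])
    scores = 100.0 * corrs / corrs.max()           # best attribute scores 100%
    order = np.argsort(scores)[::-1]
    return [(names[i], float(scores[i])) for i in order]

def blind_well_rmse(predicted, measured):
    """RMS mismatch between the model along a blind well and its measured log."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(measured)) ** 2)))

# Toy usage: 200 samples of five candidate attributes versus a TOC log
rng = np.random.default_rng(3)
attrs = rng.normal(size=(200, 5))
toc = 3.0 + 0.8 * attrs[:, 2] + 0.2 * rng.normal(size=200)
print(rank_attributes(attrs, toc, ["curv_max", "spec_30hz", "ai_inv", "ei_inv", "coherence"]))
```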
The 3-D models are created sequentially in order of increasing complexity, starting with gamma ray, then density, TOC, resistivity, brittleness, porosity (and sometimes closure stress), and finally natural fracture density. In this area of the Marcellus, modeling has shown TOC, brittleness, porosity, and natural fracture density to be the most significant reservoir properties, so they were used in the Shale Capacity calculation.
The Shale Capacity model (Figure 2) was created by applying minimum thresholds to the TOC, brittleness, porosity, and natural fracture density and then combining them into one attribute whose values highlight grid cells meeting the minimum requirements for high-confidence productivity. The thresholds are cumulative, so if any of the attributes fails to measure up, the entire grid cell fails.
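A minimal sketch of that cumulative-threshold logic: a grid cell contributes to the Shale Capacity volume only if every property clears its cutoff. The cutoff values below are placeholders, not the thresholds used in the study.

```python
# Sketch: combine property volumes into a single pass/fail Shale Capacity flag.
import numpy as np

def shale_capacity_flag(toc, brittleness, porosity, frac_density,
                        cutoffs=(2.0, 0.4, 0.06, 0.5)):
    """Return a boolean 3-D volume: True where all four properties pass."""
    toc_min, brit_min, phi_min, frac_min = cutoffs
    return ((toc >= toc_min) &
            (brittleness >= brit_min) &
            (porosity >= phi_min) &
            (frac_density >= frac_min))

# Example on small random volumes sharing the geocellular grid shape
rng = np.random.default_rng(2)
shape = (50, 50, 20)
flag = shale_capacity_flag(rng.uniform(0, 6, shape),
                           rng.uniform(0, 1, shape),
                           rng.uniform(0, 0.12, shape),
                           rng.uniform(0, 1, shape))
print(flag.mean())     # fraction of cells meeting all cutoffs
```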
Once built, the model was analyzed against known production in the wells in the field. Some qualitative relationships were immediately apparent, but a more quantitative relationship was established in a finite element fluid flow simulator. Using the model as input to the flow simulator, a history match was quickly established without any nonphysical perturbation of the input data (Figure 3). This validated that the workflow built a model representing the volume of rock contributing to production in the area, one that can be used for reliable reserve forecasting and well planning.