A paradigm shift from individual computing to group computing is imminent. How should this change be managed? Part one of a two-part series.

Innovations by both horizontal and vertical information technology (IT) providers are moving the E&P industry away from computing focused on individuals working at discrete tasks and toward computing aimed at groups or teams collaborating on complex, multi-faceted E&P processes. While it might seem simple, this move challenges many of our assumptions, some of which are deeply entrenched. It will require changes in whom we invest in computing for, how we deploy digital technology, how we measure its effectiveness in improving operations and who owns the problem in the first place. This is a move no one will want to miss.

The mindset

Most IT advancements over the past decade have played out at the individual user level, or within a single discipline. For example, computerization at the individual level has helped geoscientists quadruple (or more) the number of potential prospects they can generate, has led to increasing exploration success rates, and has enabled reservoir engineers to history-match field simulations in record time and with greater accuracy (see Figure 1). Individuals working within their own disciplines have achieved many of these advancements, and this has fostered a way of thinking about technical computing that focuses on the individual.

This mindset naturally carries over to how oil companies manage their technical IT. Let's consider the following three points.

Strategy. The way most companies govern investments in technical systems still revolves around individual users. This approach focuses on finding the most cost-effective means of accomplishing discrete tasks, such as seismic interpretation or reservoir simulation, and choosing software and workstation hardware on that basis. For example, many companies purchase UNIX workstations for their geophysicists and high-end Windows NT computers for their reservoir engineers. Technical strategy in such cases may be dictated simply by the number of professionals on staff and the type of applications required.

Cost justification. Cost justification also tends to focus on the way these systems improve the performance of individuals. For example, countless studies have documented how much time a single user needs to search for, reformat and load data. Therefore, reducing an individual's data cycle time is an explicit goal of IT investments. We know what we are paying for, what we hope to achieve and roughly how to measure it.

Ownership. With this relatively narrow description of IT, it is natural to assign ownership to the R&D or IT department. R&D in particular tends to be aligned by discipline, so it is straightforward to assign IT strategy this way. The important distinction, however, is that IT strategy decisions are typically made by corporate or staff groups. Consequently, line management sees IT as a "given" in the overall mix of resources it can allocate to achieve business results.

In this context, technical computing is part of the design work at each stage of the upstream business. Figure 2 shows a breakdown of total costs by stage (finding, developing and lifting) and by work type. As shown, the majority of cost at each stage is for execution of that process, whether that means drilling wells, building facilities or operating artificial lift. Each of these execution stages is preceded by design work, in which plays are mapped, well locations are selected, wells and facilities are engineered and production is monitored. This design category includes the cost of the geoscientists and engineers, and of the computing tools they use to do their work. These tools include hardware and data infrastructure, applications and associated support services.

In summary, the industry's established approach to technical computing strategy looks something like this:
Q. What is it you are trying to automate?
A. Individuals, working within their disciplines.
Q. How do you justify the investment?
A. Individual productivity and examination of others' practices.
Q. How do you deploy it?
A. Install the software, send the user to training and possibly provide additional support on site.
Q. How do you measure it?
A. It is hard to measure, except at the level of individual productivity (time savings).
Q. Who owns it?
A. The IT department takes care of most of it.

IT suppliers have reinforced this perspective. Almost all E&P applications are written for one person or one technical domain - seismic interpretation software for the geophysicist, well log correlation and mapping software for the geologist, fluid flow simulation software for the reservoir engineer, wellbore design software for the drilling engineer and so on. Landmark has been the champion of application integration, but even our integration is mostly aimed at linking individuals rather than "group computing." Similarly, most of the hardware is built for individuals. Workstations resemble typewriters with television sets perched on top - clearly aimed at the individual. IT suppliers have innovated within this framework to a point where there is powerful technology supporting virtually every E&P discipline. Equipping either a processing geophysicist or a geophysicist engaged in structural interpretation is no problem, but the toolkits are distinct and integrated (if at all) after the fact.

The shift

Recent advances in technology, however, are challenging this notion that the individual is the main "user" of E&P computing. IT suppliers point instead to a future dominated by group or team computing, and an IT mandate focused more on processes. Horizontal innovations that favor a team/process-based approach include:

* improvements in visualization technology that allow entire teams to analyze and review results together within a common computing environment;
* increases in communication bandwidth that enable distributed groups to collaborate in real time by way of Web-based networks, while working on separate computers; and
* advancements in database architecture that promote easier storage of and access to diverse types of information, linking those data to a common thread such as an oil or gas asset. This makes it easier for multi-disciplinary teams to share the same database.

Vertical suppliers are supporting this trend with innovations in application architecture. E&P applications are moving away from monolithic programs that support specific workflows and toward smaller, more flexible components that are far better integrated. This architecture allows groups to combine work flexibly as opportunities for greater efficiency arise. Critically, many E&P applications have been re-designed to run on a thin-client basis, so that high-bandwidth communications are not even needed. This allows groups to interact from remote locations with minimal infrastructure investment. And operational drilling and production data are being made available to technical professionals as they are acquired, encouraging new uses of information.

Any one of these innovations would lead to some changes in the computing agenda for E&P, but the package leads to a discontinuity in potential impact. For example, the ability to run sophisticated technical applications over a phone line is a breakthrough (vertical), but without better database integration (horizontal) or better-integrated applications (vertical), its impact would be small. Taken together, however, these seemingly disparate and unconnected innovations allow us to offer a fundamentally different computing environment. In this environment, teams can access information, analyze it and share intermediate and final results, working as individuals linked from different geographic locations or simultaneously in the same room. This scene is already being played out in other verticals such as the automotive industry, health care and the financial industry. The E&P industry did not invent group computing, but it would be folly to ignore it.