Technology has evolved to the point where utility computing is now a strategic part of oil and gas companies' information technology solutions. Although utility computing has been talked about for years, it has only recently come into its own, and companies increasingly understand the value of the model. Under utility computing, a company rents central processing units (CPUs) the same way it buys any other utility service - you pay only for what you use.
For the customer, this means that oil and gas companies of any size now have access to huge amounts of computing resources on a short-term basis. For example, 3-D seismic data processing - the most compute-intensive activity currently run in the industry - can now be handled by even the smallest oil and gas exploration companies. Smaller geoscience companies with limited compute resources can take on larger contracts to process and interpret geophysical data by supplementing their own environments with utility compute power. Another example is reservoir simulation: with easy access to a very large compute cluster, smaller companies can perform more detailed and precise reservoir simulations of a kind previously available only to companies that owned and maintained large internal compute farms.
When evaluating utility computing, there are important factors to take into account, starting with the application type. Seismic processing, for example, generally "scales" very well across hundreds or even thousands of "loosely coupled" processors, usually connected via Ethernet - although a traditional Kirchhoff depth migration has very different compute requirements from a full wave equation depth migration. Traditional reservoir simulators, in contrast, rarely scale beyond a few tens of CPUs, yet require a much faster interconnect, such as InfiniBand, between the CPUs.
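A very simplified scaling model makes the contrast concrete. The sketch below applies Amdahl's law with assumed serial/communication fractions (the 0.05% and 5% figures are illustrative, not measurements of any particular code) to show why a loosely coupled seismic job keeps benefiting from thousands of CPUs while a tightly coupled simulator levels off at a few tens:

```python
# Sketch: why application type matters when sizing a utility compute request.
# Uses a simple Amdahl's-law model; the serial fractions are assumptions for illustration.

def speedup(serial_fraction: float, cpus: int) -> float:
    """Ideal Amdahl's-law speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

workloads = {
    "seismic migration (loosely coupled)": 0.0005,   # assumed ~0.05% serial work
    "reservoir simulation (tightly coupled)": 0.05,  # assumed ~5% serial/communication overhead
}

for name, serial in workloads.items():
    print(name)
    for cpus in (16, 64, 256, 1024, 4096):
        print(f"  {cpus:5d} CPUs -> ~{speedup(serial, cpus):6.0f}x speedup")
```

On this simplified model the seismic job is still gaining at 4,096 CPUs, while the simulator's speedup flattens near 20x no matter how many processors are added - which is why interconnect speed, rather than raw CPU count, is the limiting factor for tightly coupled work.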
To determine whether a utility computing model is the right strategic move, a company needs to consider the following questions:

What is the cost of purchasing new systems for a specific project?
The purchase price is only part of the cost. Has the company thought about additional space, electrical and cooling requirements? Will extra staff be needed to administer the systems? Will the new equipment actually be used at full capacity?
How big is the data set?
Is this project cost-effective to keep in house, or does it exceed your current capacity, making it worthwhile to take advantage of immediate access to a vendor's high-performance computing resources?
Does this project involve confidential processing that must remain in-house?
The scope of work must be something that can safely be outsourced to a vendor. Make sure the vendor has the security environment your applications require.
Can the data be transferred to the vendor securely?
The answer depends on both the confidentiality and the volume of the data. Is it practical to transmit the data over the network, or do you need to ship, or even hand-carry, an encrypted disk array? A back-of-the-envelope estimate, like the one sketched after this list, usually settles the matter.
What is the value to my project of getting processing results faster?
Immediate access to effectively unlimited resources means a faster time to results and enables more iterations and refinement of the data model.
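On the data-transfer question, a rough estimate of network transfer time versus shipping is often enough to decide. The sketch below uses assumed figures (a 20 TB survey, 70% effective link utilization, a two-day courier) purely for illustration:

```python
# Sketch: network transfer versus shipping an encrypted disk array.
# All figures are assumptions for illustration, not vendor specifications.

def network_transfer_days(dataset_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Days to move dataset_tb terabytes over a link_mbps link at the given efficiency."""
    bits = dataset_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)   # effective throughput in bits/second
    return seconds / 86_400

dataset_tb = 20.0   # assumed size of a 3-D seismic survey
for link_mbps in (100, 1_000, 10_000):
    days = network_transfer_days(dataset_tb, link_mbps)
    print(f"{dataset_tb:.0f} TB over {link_mbps:>6,} Mb/s: ~{days:.1f} days")

print(f"{dataset_tb:.0f} TB by courier on encrypted disks: ~2 days (assumed)")
```

On these assumed numbers, anything much below a sustained gigabit per second favors the disk array, while faster links favor the network - provided the confidentiality requirements allow network transfer at all.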
Once these questions have been answered, a customer should contact a vendor for a quote to run the work on the vendor's systems. While discussing the processing job, customers should keep in mind that, in order for vendors to keep their prices competitive and profitable, the compute environment may not be exactly what the customer has experienced in the past. Some flexibility may be required when determining a minimum configuration request.
Another factor that can have a large influence on the price a customer can expect to pay is time. Just as with any other "utility," it is important for the service provider to be able to predict demand over a given timeframe. For example, a customer who can guarantee 1 million CPU hours over the next 12 months can expect to pay less than a customer who demands immediate access to 1,000 CPUs for the next 5 days. It is important to note, however, that a customer may not know the application's exact behavior until a pilot has been completed with the vendor to benchmark processing times. During these benchmarks, customers should test the application with different levels of compute power to gauge the time benefits.
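To see how much predictability can be worth, consider the two demand patterns above priced at hypothetical rates (the per-CPU-hour figures below are invented placeholders, not actual vendor pricing):

```python
# Sketch: committed versus on-demand utility compute demand.
# The hourly rates are invented placeholders, not actual vendor pricing.

committed_rate = 0.05   # assumed $/CPU-hour for a 12-month, 1M CPU-hour commitment
on_demand_rate = 0.15   # assumed $/CPU-hour for immediate, short-notice access

committed_hours = 1_000_000          # 1 million CPU hours spread over 12 months
burst_hours = 1_000 * 24 * 5         # 1,000 CPUs for 5 days = 120,000 CPU hours

print(f"Committed:       {committed_hours:>9,} CPU-h -> ${committed_hours * committed_rate:>9,.0f}")
print(f"On-demand burst: {burst_hours:>9,} CPU-h -> ${burst_hours * on_demand_rate:>9,.0f}")
print(f"Premium paid per CPU-hour for the unpredictable burst: {on_demand_rate / committed_rate:.1f}x")
```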
It is not only oil companies that can benefit from the utility computing model. Service companies can use utility computing in several ways. One of the most obvious is what is known as "peak shaving." It is uneconomical and impractical for a service company to build its own data center to meet an expected "peak" demand that may represent only a single contract or even a single job. A more reasonable approach is to build the center to cover, say, 70% of expected peak and have utility computing contracts in place for the other 30%. This reduces the capital outlay for equipment and the ongoing expense of running the data center, and it manages the risk posed by the ever-shortening useful life cycle of computer and network hardware.
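As a simple illustration of the peak-shaving idea, the sketch below sizes in-house capacity to 70% of expected peak, as in the example above; the monthly demand profile is invented for illustration:

```python
# Sketch: sizing in-house capacity to ~70% of expected peak and "peak shaving"
# the rest with a utility computing contract. The demand profile is invented.

monthly_demand = [120, 150, 140, 300, 520, 610, 580, 400, 220, 180, 160, 130]
# assumed demand per month, in thousands of CPU-hours

in_house_capacity = 0.70 * max(monthly_demand)   # build the data center to ~70% of peak
overflow = [max(0.0, d - in_house_capacity) for d in monthly_demand]

print(f"In-house capacity:      {in_house_capacity:.0f}k CPU-hours/month")
print(f"Utility overflow:       {sum(overflow):.0f}k CPU-hours over the year")
print(f"Months needing utility: {sum(1 for o in overflow if o > 0)} of 12")
```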
Another area where we are seeing demand for this model is the development of new algorithms, in both seismic processing and reservoir simulation. Experience has shown that an algorithm that "scales" very well on a few compute nodes may show no performance gain once CPU counts climb into the hundreds. Outside of major universities, most companies do not have access to supercomputing environments. The utility model gives even the smallest company access to systems with thousands of CPUs on which to test its new ideas, at a fraction of the price of purchasing such equipment.
In most cases, it is hard to beat the value of utility computing. With the utility computing model, customers no longer have to keep an internal compute infrastructure running 24/7. They no longer have to provide network connectivity, space, environmental controls and power systems. They do not have to pay for labor to support the systems, nor do they have to worry about refreshing their hardware every 18 to 36 months. With an in-house computing infrastructure, customers incur all of these costs whether or not the systems are being used to their full capacity. In the utility computing world, by contrast, customers pay only for what they use and get access to the most advanced, highest-performance systems available.
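A rough total-cost comparison illustrates the point. Every figure in the sketch below is an assumed placeholder (hardware price, facilities, staff, utilization and the utility rate), intended only to show how under-utilization drives up the effective cost of owned systems:

```python
# Sketch: effective cost per CPU-hour actually used, in-house versus utility.
# Every figure is an assumed placeholder, not a quoted price.

hardware = 900_000              # assumed purchase price of an in-house cluster
facilities_per_year = 120_000   # assumed space, power and cooling
staff_per_year = 150_000        # assumed administration labor
utilization = 0.40              # assumed fraction of capacity actually used
capacity_per_year = 2_000_000   # assumed CPU-hours of capacity per year
years = 3                       # assumed useful life before a hardware refresh

in_house_total = hardware + years * (facilities_per_year + staff_per_year)
cpu_hours_used = years * capacity_per_year * utilization
print(f"In-house: ~${in_house_total / cpu_hours_used:.2f} per CPU-hour actually used")

utility_rate = 0.10             # assumed $/CPU-hour, paid only for hours used
print(f"Utility:  ~${utility_rate:.2f} per CPU-hour used")
```

At higher utilization the in-house figure drops, which is exactly the trade-off the questions earlier in this article are meant to expose.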