Is grid computing the next great technology likely to explode on an unsuspecting energy industry?

Computer hardware is cheap these days, but owning that hardware is not. According to the Gartner Group, total cost of ownership for computer systems is eight to 10 times the cost of the hardware systems. Wouldn't it be great to cut this cost substantially and even have this hardware make money for the company?
In most enterprises, only about 10% of available CPU cycles are used - a fact driving enterprises to deploy the next generation of cost-reducing architectures: grid computing.
Grid computing technology, originally developed to deliver supercomputing power for large scientific projects, comes in two flavors: compute grids (using distributed CPUs) and data grids (using distributed data sets). Although grids promise dramatic returns on investment from shifting enterprise computing workloads around the globe or to external partners, most enterprises are starting with smaller, local deployments. These cluster or campus grids have a single data center and use shared file systems and the cheapest available hardware and software.
Grid computing also is called distributed computing, utility computing or P2P (peer-to-peer) architecture. Whatever it's called, it's coming, and its benefits will be significant. Combined with Web services, it is a recipe for transforming the enterprise IT infrastructure, dramatically lowering cost of ownership and making distributed, failure-tolerant systems more viable.
The basic idea is to take advantage of idle capacity, whether across the data center, around the world or at different times of the day. Grid computing has primarily focused on solving the systems-management challenges of distributed computing, such as security, authentication and policy management across heterogeneous platforms and organizations. Utility computing has focused on developing provisioning technology and the business model for on-demand, pay-as-you-go infrastructure. And P2P efforts have drawn attention to the potential for leveraging idle resources to handle huge computing tasks.
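To make the idle-capacity idea concrete, consider a toy scheduler that routes work to whichever nodes report themselves as mostly idle. This is only a sketch; the node names, utilization figures and threshold below are invented for illustration and do not describe any particular grid product.

```python
# Toy scheduler illustrating the idle-capacity idea. Node names and
# utilization figures are hypothetical placeholders.

nodes = {
    "dc-houston-01": 0.12,    # fraction of CPU currently busy
    "dc-houston-02": 0.85,
    "partner-site-01": 0.07,
}

tasks = ["seismic-tile-001", "seismic-tile-002", "seismic-tile-003"]

IDLE_THRESHOLD = 0.20  # treat nodes below 20% utilization as available


def assign_tasks(nodes, tasks, threshold=IDLE_THRESHOLD):
    """Spread tasks round-robin across the nodes that are mostly idle."""
    idle = [name for name, busy in nodes.items() if busy < threshold]
    if not idle:
        raise RuntimeError("no idle capacity available")
    return {task: idle[i % len(idle)] for i, task in enumerate(tasks)}


if __name__ == "__main__":
    for task, node in assign_tasks(nodes, tasks).items():
        print(f"{task} -> {node}")
```

The same principle scales from a single data center to nodes spread across time zones: work flows to wherever cycles happen to be idle.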
Cost of ownership
How does grid technology reduce the cost of ownership and provide return on investment (ROI)? Think about the implications of being able to handle higher peaks when necessary while paying only a connection fee the rest of the time. Users can bank the savings and buy more computing capacity when they really need it. Studies show as much as four times the computing capacity can be accessed through a utility for roughly the same cost. No longer will users need to estimate and predict their computing needs and hope for an ROI; with a utility, they can defer the cost until the latest possible moment and then cash in for maximum productivity.
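A back-of-envelope comparison shows the shape of that argument. Every figure below is a hypothetical placeholder, not a number from the studies mentioned above; only the structure of the calculation matters.

```python
# Hypothetical cost-per-useful-CPU-hour comparison: owned cluster vs. utility.
# All dollar figures and rates are invented for illustration.

owned_annual_cost = 400_000       # $ per year to own and operate a 100-CPU cluster
owned_cpus = 100
hours_per_year = 8_760
owned_utilization = 0.10          # ~10% of available cycles actually used

useful_hours = owned_cpus * hours_per_year * owned_utilization
owned_cost_per_useful_hour = owned_annual_cost / useful_hours

utility_fee = 50_000              # $ flat annual connection fee
utility_rate = 0.50               # $ per CPU-hour, paid only when used
utility_cost_per_useful_hour = (utility_fee + utility_rate * useful_hours) / useful_hours

print(f"Owned:   ${owned_cost_per_useful_hour:.2f} per useful CPU-hour")
print(f"Utility: ${utility_cost_per_useful_hour:.2f} per useful CPU-hour")
```

Under these assumed numbers, the owned cluster works out to several times the cost per hour of work actually done, which is the gap the utility model is meant to close.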
Is it ready for prime time?
In Houston, TruGrid Inc. is commercializing grid computing to support the global energy industry. TruGrid's alliance partners, SpaceCycles.Net and the Computing Power Clearinghouse, provide capacity-on-demand hardware and software, along with a trading exchange that lets its metro-area grids function in the familiar utility business model, like electricity, gas or telephone service. TruGrid's metro grids feature broadband, private, secure networks, thousands of high-performance CPUs and terabytes of enterprise storage.
Ride the Hog
TruGrid's first metro-area grid, christened the Houston Open Grid (Hog), is an aggregation of private computing sites, or nodes, within the city's business community, interconnected by a private, secure, broadband gigabit network owned by Phonoscope Communications. These computing sites may be professionally maintained data centers or corporations whose leaders have chosen to make certain technology assets available to others for a fee. The Hog nodes are designed to act as a single, coherent, high-performance computer system, providing users with computing resources at a fraction of the cost of owning the systems. Hog can supply thousands of CPUs when they are needed, and the customer pays only for what is used. Hog is open and available to anyone who needs its resources, and it is managed and maintained by TruGrid.
With TruGrid's business model and strategic agreements with grid node providers, the company can take hundreds or even thousands of underused CPUs from any organization and install them at one or more secure grid node sites for hosting. This allows that organization to offload the expense of managing and supporting the hardware while maintaining secure access to its CPUs. When the CPUs are not being used by the organization, TruGrid can make them available to others on Hog at an agreed-upon price per CPU. This generates revenue that in some cases can offset the cost of the outsourcing and hosting services.
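A toy model of that offset might look like the following. Every rate and utilization figure here is invented for illustration and does not reflect TruGrid's actual pricing.

```python
# Hypothetical hosting-offset model: an organization's CPUs are hosted at a
# grid node, and the hours it does not use are rented to other Hog users.
# All numbers are invented placeholders, not real rates.

hosted_cpus = 500
hours_per_month = 730
hosting_fee_per_cpu_month = 40.0   # $ the owner pays for hosting and support
owner_utilization = 0.10           # share of hours the owner itself uses
rental_rate_per_cpu_hour = 0.10    # $ paid by other grid users
rentable_fraction = 0.60           # portion of idle hours actually sold

hosting_cost = hosted_cpus * hosting_fee_per_cpu_month
idle_hours = hosted_cpus * hours_per_month * (1 - owner_utilization)
rental_revenue = idle_hours * rentable_fraction * rental_rate_per_cpu_hour

print(f"Monthly hosting cost:   ${hosting_cost:,.0f}")
print(f"Monthly rental revenue: ${rental_revenue:,.0f}")
print(f"Net cost to owner:      ${hosting_cost - rental_revenue:,.0f}")
```

With these assumed rates, the rental revenue comes close to covering the hosting fee, which is the "in some cases can offset" scenario described above.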
Using a grid with an ASP
TruGrid is teaming up with an unnamed oil and gas software provider to establish Hog as a vehicle for deploying and delivering on-demand application services via an application service provider (ASP) portal. By using Hog, this software provider can minimize or eliminate investments in expensive cluster hardware to support its seismic processing applications.
These applications can scale instantly with data requirements and customer demand. The on-demand model can supply thousands of CPUs for deployment in Houston and globally through corporate networks or across the Internet. Customers worldwide can have access to state-of-the-art applications running on as many CPUs as they can afford.
Under the joint project, a customer logs in to the portal via the Internet to order data through the Oil Data Bank service as well as the appropriate application service. Oil Data Bank has direct, private, secure connections to the software vendor and to TruGrid's computing resources through the Phonoscope fiber-optic network. SpaceCycles.Net hosts the application.
Because of the network's high bandwidth, data relocation is unnecessary. The application processes the specified data from the Oil Data Bank data management site and returns the results to the Oil Data Bank site once the job or project is finished. The customer then can have the data stored at Oil Data Bank or have it transferred back to the customer's own site. This commercial service can save thousands of dollars and weeks of processing time.
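Traced end to end, the workflow looks roughly like the sketch below. None of these function names correspond to a real TruGrid, Oil Data Bank or SpaceCycles.Net interface; they simply follow the steps a job takes through the portal.

```python
# Hypothetical walk-through of the portal workflow described above.
# Function names and identifiers are illustrative only, not a real API.

def order_data(dataset_id: str) -> str:
    """Customer orders a dataset through the Oil Data Bank service."""
    return f"databank://{dataset_id}"


def run_processing(data_ref: str, cpus: int) -> str:
    """The hosted application processes the data in place on the grid; the
    high-bandwidth network means the data never leaves the Oil Data Bank site."""
    return f"results-for-{data_ref}-on-{cpus}-cpus"


def deliver_results(results: str, keep_at_databank: bool) -> str:
    """Results stay at Oil Data Bank or are transferred back to the customer."""
    return "stored at Oil Data Bank" if keep_at_databank else "transferred to customer site"


if __name__ == "__main__":
    data_ref = order_data("gulf-of-mexico-survey-42")
    results = run_processing(data_ref, cpus=2_000)
    print(deliver_results(results, keep_at_databank=True))
```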
Something to think about
The convergence of grid, P2P and utility computing efforts, along with Web services, promises to revolutionize the oil industry's data centers by lowering costs, eliminating server underutilization, reducing management complexity and providing additional business continuity benefits.
Computing capacity as a utility is not a new concept, but enterprises are just starting to deploy grid and utility computing products and services.
Grid computing represents a change in the way energy companies manage their computing infrastructure. The key to its success is the level of trust corporate leaders have in the reliability and security of hardware and software resources outside their company walls and firewalls.