The path to open source adoption. (Images courtesy of dGB Software)
Outside of software developers and information technology (IT) experts, open source remains somewhat enigmatic, even mysterious. People sort of “get it,” but not quite — despite being pervasive. How pervasive? “The use of open source beyond Linux is pervasive, used by almost three-quarters of organizations and spanning hundreds of thousands of projects,” said Dr. Anthony Picardi, senior vice president of global software research at IDC, providers of market research data and advisory services focused on the technology industry. “Although open source will significantly reduce the industry opportunity over the next 10 years, the real impact of open source is to sustain innovations in mature software markets, thus extending the useful life of software assets and saving customers money.”
In an August 2006 study IDC discovered that the open source software phenomenon has spread far beyond Linux with viral intensity. The study analyzed surveys from more than 5,000 developers in 116 countries and discovered that developers worldwide are rapidly increasing their use of open source. Mark Driver, vice president-research, Gartner Group (an IDC competitor), said, “You can try to avoid open source, but it’s probably easier to get out of the IT business altogether. By 2011, at least 80% of commercial software will contain significant amounts of open source code.”
Everyday examples of open source include:
• Firefox. Since its initial release in 2004, the Firefox Web browser has been downloaded more than 500 million times.
• OpenOffice. A productivity suite that provides nearly all the functionality of Microsoft Office.
• Apache and MySQL. Respectively, the most widely used Web server and database on the Internet.
In the geosciences world, examples include:
• Madagascar. An open-source seismic processing system.
• GMT. An open-source mapping package.
• OpendTect. An open-source software system designed for seismic analysis, visualization, and interpretation.
• GRASS. An open-source geographical information system (GIS).
But what exactly is open source? What is the promise? And why has it proven to be successful in the realm of geologists, geophysicists, petroleum engineers, petrophysicists, and rock physicists?
Let’s start with a good working definition of open source, created by The Open Source Initiative: “Open source is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in.” Let’s examine each of these key qualities.
Flexible + reusable = agile
“In the future, instead of striving to be right at a high cost, it will be more appropriate to be flexible and plural at a lower cost. If you cannot accurately predict the future, then you must flexibly be prepared to deal with various possible futures.” – Edward de Bono, author, Mensa member, and creator of Lateral Thinking and Six Thinking Hats.
Open source systems flourish in an environment rife with flexibility and reusability. Fortunately, these two properties of software — flexibility and reusability — are at the heart of Agile Software Development (ASD), a software methodology that emerged in the late 1990s. The ability to design and integrate new “parts” quickly is part of open source’s DNA and ethos.
It was once thought that if a vendor “owns” a customer’s workflow (through the vendor’s proprietary software), they own that customer. (This was earlier referred to as “predatory vendor lock-in.”) Nowadays, vendors must provide customers unbridled value by supplying them tools to design their own workflow in a way that best serves their organization, strategy, and key stakeholders.
Agile Development and open source not only speed up the concept-to-customer path; by distributing a problem globally, more minds swarm on the solution and development costs shrink.
From its inception, the pioneers behind ASD held the mantra of “creating software in a lighter, faster, more people-centric way.” They created the Agile Manifesto, with principles like:
• Customer satisfaction by rapid, continuous delivery of useful software;
• Working software delivered frequently (weeks rather than months);
• Working software as the principal measure of progress;
• Even late changes in requirements welcomed;
• Close, daily cooperation between business people and developers;
• Face-to-face conversation as the best form of communication (co-location);
• Projects built around motivated individuals who should be trusted;
• Continuous attention to technical excellence and good design;
• Simplicity;
• Self-organizing teams; and
• Regular adaptation to changing circumstances.
Bert Bril, dGB Earth Sciences co-founder and head of research and development, pointed out, “Agile methods are great in finding out the real needs of the users and sponsors. And, contrary to the prototyping-first approaches, we create the final tools directly. For people used to the practices of the traditional vendors, this is amazing.”
Distributed peer review
“All of the facts belong to the problem, not to the solution.” — Ludwig Wittgenstein
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”— Buckminster Fuller, the creator of the geodesic dome, the second Mensa President, and author of Critical Path.
Today, several vendors use peer reviews for their projects with noble intentions: to make certain that the project is exposed to group scrutiny before being presented to the client. However, these peer review sessions are adapted from a centuries-old academic system that carries negative traits: hidden politics, and the chance for some to flaunt their intellectual muscle among their peers.
Exploring new ideas during face-to-face peer reviews is forbidden and deemed as “getting off track.” The individual defending his or her project work must stand alone, countering a barrage of tough questions with extemporaneous answers while the clock ticks. But very few people possess the strength of character to effectively fend off public aggression and “gang tackles” from groups tacitly dedicated to preserving the status quo, all while attempting to solve the problem at hand.
But what happens if you are solving the wrong problem in the first place? General problem-solving processes consist of four steps:
• Acknowledging or recognizing the existence of a problem;
• Formulating the problem;
• Deriving the solution to the problem; and
• Implementing the solution.
Although all four steps are equally important and interconnected, step 2 — formulating the problem — is emphasized least, most often due to time and resource constraints. Historically, educational systems place excessive emphasis on the third step — deriving the solution — to solve pre-packaged problems. However, when the problem formulation step is widely distributed, at a global scale, you will get a much richer result: a vast pool of problem statements. Ian Mitroff, Harold Quinton Distinguished Professor of Business Policy at the University of Southern California, goes so far as to say that, “We may say that something is a problem if and only if there is more than one way of stating it!”
By formulating the problem well, you solve the right problem sufficiently. Why provide a precise solution to the wrong problem when a sufficient solution to the right problem may produce better-than-expected results? This is very important in environments of extreme change such as software development.
Unlike the academic peer review model, the distributed peer review helps build respect among its members by challenging assumptions and sacred cows behind the prevailing problem-solving process. People within distributed peer review communities feel safer and are more motivated to collaborate on ideas, present their arguments, and share in the credit.
Building whitebox software
“One of the biggest business advantages of using open source code is the ability to tap into software that iterates quickly and transparently, incorporating a range of new ideas.” – InformationWeek, September 2008
Transparency encourages companies to tackle problems head-on rather than expending energy by hiding them. On a larger scale, the most effective economic and political systems are marked by transparency: you can see the laws and regulations, see how they’re enforced, and see the outcomes. Open source provides natural transparency in every release of open source code because, unlike proprietary code, users have total access to it. End-users can see it; therefore, the vendor harbors no secrets.
Consequently, the most successful open source end-users and companies also become transparent by articulating their needs to the vendor. A natural win/win emerges: those who embrace transparency gain power.
Continuity + stability
For major E&P companies, continuity and stability are key issues that decide whether or not a software product will be adopted. In the past, major E&P companies have been hesitant to acquire software from small vendors due to the perceived risk of the vendor’s longevity.
Continuity, however, is not an issue with open source models because the software will survive even if the parent company doesn’t. Many complex open source systems have proven to be more stable than alternative commercial systems because of the large developer community that helps to debug and test software quickly. Smaller companies put more emphasis on costs and value for money, and compared to proprietary commercial systems, systems like OpendTect are tens to hundreds of times less expensive. For research and development users, this helps to stimulate and rapidly test new ideas.
During the rise of the Internet, we have witnessed a phenomenon: anything connected to a network will eventually be commoditized. This startling trend has far-reaching consequences for companies based in proprietary-centered technologies. On the other hand, a company’s minds and its ecosystem cannot be replicated due to their flexibility. This is the promise of open source.
Conclusion
To recap, open source software provides more flexible solutions for clients, and it escapes the predatory vendor lock-in practice. Given the global trends, we are at the early stages of rampant growth of open source solutions in the geosciences.