Open Source Cloud Bits

The Role of Open Source in an Infrastructure as a Service (IaaS) Stack

Last week I got into a nice discussion on Twitter regarding the role of open source in an infrastructure as a service (IaaS) stack.  With open source cloud stacks from Eucalyptus, Cloud.com, Abiquo and others competing against proprietary source solutions from Enomaly, VMware and others, this can get fairly confusing quickly.

For clarity, here is my position on open source vs. proprietary source in this aspect of the market: both have a role to play, and inherently neither is better or more advantaged than the other. However, when you get into the details there are factors that might favor one model over the other in specific cases. I will look at this from the perspective of the service providers and enterprises who use cloud stacks. In a future post I may touch on factors that vendors should consider when choosing between open source and closed source models.

For service providers, margins are critical.  Any increase in capital and operating costs must enable a corresponding increase in value provided in the market.  Amazon and Google have the scale and ability to build a lot of capabilities from scratch, trading a short-term increase in R&D against a long-term decrease in operating costs.

While some cloud providers may attempt to match the low-cost giants on pricing, they know that they need to differentiate in some other material way (e.g. performance, customer service, etc.).   For these providers, the more “free open source” technology that they can leverage, the lower their operating costs may be.

This low-cost focus must permeate their decision making, from the physical infrastructure (commodity servers, JBOD/DAS storage, etc.) to the hypervisor (Xen or KVM vs. VMware), to the cloud provisioning/automation layer, and more. Open source CMDBs, monitoring tools (e.g. Nagios) and other technologies are often found in these environments.
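
As a small, purely illustrative sketch of the kind of open-source building block such a provisioning or monitoring layer sits on top of, here is a minimal example that uses libvirt's Python bindings to list the KVM guests on a host. The connection URI (qemu:///system) and the assumption that the libvirt-python package is installed are mine, not something from the original post.

```python
# Minimal sketch: enumerate KVM guests via libvirt's Python bindings.
# Assumes libvirt-python is installed and a local qemu/KVM hypervisor
# is reachable at qemu:///system (hypothetical setup for illustration).
import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
}

def list_guests(uri="qemu:///system"):
    """Return (name, state) for every guest defined on the host."""
    conn = libvirt.openReadOnly(uri)
    try:
        guests = []
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            guests.append((dom.name(), STATE_NAMES.get(state, "other")))
        return guests
    finally:
        conn.close()

if __name__ == "__main__":
    for name, state in list_guests():
        print(f"{name}: {state}")
```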

There are trade-offs, of course. Open source can often be more difficult to use, lack key functionality, or suffer from poor support – all of which increase costs in material and often unintended ways (note that proprietary solutions can have many of the same issues, and do more often than most people realize).

Other service providers may target the enterprise and focus on highly-differentiated offerings (though I really haven't seen much differentiation yet, at least at the IaaS level). For these providers, the benefits of enterprise-grade storage (EMC, NetApp, HP), VMware's HA and fault-tolerance capabilities, and other features gained from tools from HP, IBM, BMC and other vendors may well be worth the increase in cost. And make no mistake, the cost increase from using these technologies can be quite substantial.

Newer vendors, such as Enomaly, are having some success despite their closed-source nature (Enomaly started as open source but changed models in 2009). Further, even when a provider uses a solution from Cloud.com or Abiquo, both of which have open source models, they will often choose to pay for premium editions in order to get functionality or support not available via open source. In reality, anybody serious about this market will want a mix of open-source (though not necessarily free) and closed-source technologies in their environment.

In the enterprise, the story is a bit different. If you're already paying VMware for an all-you-can-eat enterprise license agreement (ELA), the marginal cost to use vSphere in your private cloud is zero. KVM and Xen are not less expensive in this case. The same is true for tools from HP, IBM, BMC and others.
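
To make that marginal-cost argument concrete, here is a back-of-the-envelope sketch. Every figure and the function name are invented purely for illustration; they are not from the post or from any vendor price list.

```python
# Back-of-the-envelope marginal license cost of adding hosts to a
# private cloud. All numbers are hypothetical, for illustration only.

PER_HOST_LICENSE = 3_500   # assumed list price per host without an ELA

def marginal_license_cost(new_hosts, under_ela):
    """License cost of adding new_hosts to the private cloud."""
    if under_ela:
        # The all-you-can-eat ELA is already paid (a sunk cost), so
        # additional hosts add no new license spend.
        return 0
    return new_hosts * PER_HOST_LICENSE

print(marginal_license_cost(50, under_ela=True))    # 0
print(marginal_license_cost(50, under_ela=False))   # 175000
```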

The primary question, then, is whether or not these are the right solutions.  Does BMC have a better answer for private clouds than Eucalyptus?  Is IBM CloudBurst better than Abiquo for development and test?

Open source for open source’s sake is not rational.

In addition, focusing only on the economics of open source misses what might be the bigger value – risk reduction. Closed-source projects can go under, either because the developer goes out of business or because an acquirer decides to take the product off the market. This happens all the time. For large and well-established technologies, the risk of abandonment is generally lower. VMware, HP and EMC are not going anywhere soon.

Open source projects, in contrast, can always be continued.  The cost may fall to those dependent on the project, but at least you get the option.  Not so with closed source – especially if the solution is killed by its owner.

Most buyers can get source code escrow terms that give them access to the source for a product in the event of bankruptcy or similar situations.  In 20 years I have not seen a source escrow addendum include a trigger to release the code if the developer stops or slows investing in it.  Today your vendor might have 20 top-tier developers delivering on a roadmap.  What if in 3 years they have only 4 folks maintaining the current code line and making minor updates?  Can I get the source code then?  Typically not.

There’s another issue that often gets overlooked.  Even if you have a source escrow agreement, that doesn’t mean that the code deposits are being made on a regular basis.  It also doesn’t mean that the code is well-commented or that accurate build scripts are included such that a person of “commercially reasonable” skill can take over the code and move it forward.  I have seen this situation happen more than once, including recently, and it’s quite a shock to learn that your vaunted supplier has been careless, lazy, or even deliberately misleading about their source code responsibilities.

CloudBzz Recommendations

1.  Insist on open source (or at least full source access – not escrow) when one or more of the following situations exist:

- the supplier is small or thinly funded (VCs can and do pull the plug even after many million$ have been invested)
- the capability/functionality provided by the technology is strategically important to you, especially when investment must be maintained to remain leading-edge in a fast-moving and intensely competitive market
- migration to a different technology would be very costly and disruptive

2.  Consider closed-source/proprietary solutions when at least two of the following factors are present (a rough scoring sketch follows the list):

- the functionality provided by the software is not core to your competitive positioning in the market
- replacement costs (particularly internal change costs) are moderate or low
- the functionality and value are so much higher than open source alternatives that you're willing to take the risk
- the technology is so widely deployed and successful that the risk of abandonment is very low
- the costs are low enough not to make your offering uncompetitive or your internal environment unaffordable
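
For illustration only, here is a rough sketch that encodes the two recommendations above as simple predicates. The factor names and the threshold rules are my paraphrase of the bullets, not a formal model from the post.

```python
# Rough, illustrative encoding of the two CloudBzz recommendations.
# Factor names and thresholds paraphrase the bullet lists above.

def insist_on_open_source(supplier_small_or_thinly_funded,
                          strategically_important,
                          migration_costly_and_disruptive):
    """Recommendation 1: insist on open source (or full source access)
    when one or more of these risk factors is present."""
    return any([supplier_small_or_thinly_funded,
                strategically_important,
                migration_costly_and_disruptive])

def closed_source_acceptable(not_core_to_positioning,
                             replacement_cost_moderate_or_low,
                             functionality_clearly_superior,
                             widely_deployed_and_successful,
                             cost_remains_competitive):
    """Recommendation 2: consider closed source when at least two of
    these mitigating factors are present."""
    factors = [not_core_to_positioning,
               replacement_cost_moderate_or_low,
               functionality_clearly_superior,
               widely_deployed_and_successful,
               cost_remains_competitive]
    return sum(factors) >= 2

# Example: a thinly funded supplier of a strategically important layer.
print(insist_on_open_source(True, True, False))                    # True
print(closed_source_acceptable(False, True, True, False, False))   # True
```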

Balancing risk, capability and control is very difficult – even more so in a young and emerging market like cloud computing.  The decisions made in haste today can have a profound impact on your success in the future – especially if you are a cloud service provider.

While open source can be a very potent source of competitive advantage, it should not be adopted purely on philosophical grounds. If you do adopt closed source, especially at the core stack level, work aggressively to manage your exposure and ensure that those “unforeseen events” don’t leave you high and dry.


More Stories By John Treadway

John Treadway is a Vice President at Cloud Technology Partners and has over 20 years of experience delivering technology and business solutions to domestic and global enterprises across multiple industries and sectors. As a senior enterprise technology and services executive, he has a successful track record of leading strategic cloud computing and data center initiatives. John is responsible for technology IP at Cloud Technology Partners, and is actively involved with client projects and strategic alliances. John is also an active blogger in the cloud computing space and authors the CloudBzz blog.
