Vendor Lock-in and Cloud Computing

Does lock-in simply come with the territory or can it be avoided?

IT vendor lock-in is as old as the IT industry itself. Some may even argue that lock-in is unavoidable when using any IT solution, regardless of whether we use it “on premise” or “as a service”. To determine whether this is the case, we examine traditional lock-in and the expected impact of cloud computing.

Vendor lock-in is seen as one of the potential drawbacks of cloud computing. One of Gartner’s research analysts recently published a scenario in which lock-in and standards even surpass security as the biggest objection to cloud computing. Despite efforts like Open Systems and Java, we have managed to get ourselves locked in with every technology generation so far. Will the cloud be different, or is lock-in just a fact of life we need to live with? Wikipedia defines vendor lock-in as:

In economics, vendor lock-in, also known as proprietary lock-in, or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.

Let’s examine what lock-in means in practical terms when using IT solutions and how cloud computing would make this worse or better. For this we look at four dimensions of lock-in:

Horizontal lock-in: This restricts the ability to replace a product with a comparable or competitive product. If I choose solution A (for example a CRM solution or a development platform), then I will need to migrate my data and/or code, retrain my users and rebuild the integrations with my other solutions if I want to move to solution B. This is a bit like buying a Prius: I cannot then drive a Volt. But it would be nice if I could use the same garage, charging cable, GPS, etc. when I switch.

Vertical lock-in: This restricts choice in other levels of the stack and occurs if choosing solution A mandates the use of database X, operating system Y, hardware vendor Z and/or implementation partner S. To prevent this type of lock-in the industry embraced the idea of open systems, where hardware, middleware and operating systems could be chosen more independently. Before this time, hardware vendors often sold specific solutions (like CRM or banking) that ran only on their specific hardware and OS and could only be obtained in their entirety from them. So a bit like today’s (early-market) SaaS offerings, where everything needs to be obtained from one vendor.

Diagonal (or inclined) lock-in: This is the tendency of companies to buy as many applications as possible from one provider, even if its solutions in some of those areas are less desirable. Companies picked a single vendor to make management, training and especially integration easier, but also to be able to demand higher discounts. This trend led to large, powerful vendors, which in turn caused higher degrees of lock-in. For now we call this voluntary form of lock-in diagonal lock-in (although “inclined” - a synonym for diagonal - may describe it better).

Generational lock-in: This last one is as inescapable as death and taxes and is an issue even if there is no desire to avoid horizontal, vertical or diagonal lock-in. No technology generation, and thus no IT solution or IT platform, lives forever (well, maybe with the exception of the mainframe). The first three types of lock-in are not too bad if you had a good crystal ball and picked the right platforms (e.g. Windows and not OS/2) and the right solution vendors (generally the ones that turned out to become the market leaders). But even such market leaders at some point reach end of life. Customers want to be able to replace them with the next generation of technology without that being prohibitively expensive or even impossible because of technical, contractual or practical lock-in.

The impact of cloud computing on lock-in
How does cloud computing, with incarnations like SaaS (software as a service), PaaS (platform as a service) and IaaS (infrastructure as a service), impact the above? In the consumer market we see people using a variety of cloud services from different vendors, for example Flickr to share pictures, Gmail to read email, Microsoft to chat, Twitter to tweet and Facebook to … (well, what do they do on Facebook?), all seemingly without any lock-in issues. Many of these consumer solutions now even offer integration with each other. Based on this one might expect that using IT solutions “as a service” in an enterprise context also leads to less lock-in. But is this the case?

Horizontal: For the average enterprise, moving from one SaaS solution to another is not so different from moving from one traditional software application to another, provided they have agreed upfront whether and how their data can be transferred. What does help is that SaaS in general seems easier and faster to implement, and that it is not necessary for the company to have two sets of infrastructure available when migrating.
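To make that data-transfer agreement concrete, it helps to normalize any vendor export into a vendor-neutral format before (or instead of) importing it into the next solution. Below is a minimal sketch in Python; the JSON field names (“AcmeContactName” etc.) and the mapping are entirely hypothetical, standing in for whatever a real SaaS export API would return:

```python
import csv
import io
import json

def export_to_neutral_csv(vendor_json, field_map):
    """Normalize a vendor-specific JSON export into a vendor-neutral CSV.

    vendor_json: JSON string, as a (hypothetical) SaaS export API might return it.
    field_map: maps vendor-specific field names to our own canonical names.
    """
    records = json.loads(vendor_json)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(field_map.values()))
    writer.writeheader()
    for rec in records:
        # Missing vendor fields become empty cells rather than errors.
        writer.writerow({ours: rec.get(theirs, "") for theirs, ours in field_map.items()})
    return buf.getvalue()

# Example: a fictional CRM export with vendor-specific field names.
sample = json.dumps([
    {"AcmeContactName": "Jane Doe", "AcmeContactMail": "jane@example.com"},
    {"AcmeContactName": "John Roe", "AcmeContactMail": "john@example.com"},
])
mapping = {"AcmeContactName": "name", "AcmeContactMail": "email"}
print(export_to_neutral_csv(sample, mapping))
```

The point is not the ten lines of code but the discipline: if the canonical schema is yours, switching from solution A to solution B is a new field map, not a new migration project.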


For PaaS the situation is very different, especially if the development language is proprietary to the PaaS platform. In that case the lock-in is almost absolute and comparable to the lock-in companies may have experienced with proprietary 4GL platforms, with the added complexity that with PaaS the underlying infrastructure is also locked in (see under vertical).

Horizontal lock-in for IaaS may actually be less severe than lock-in to traditional hardware vendors, as virtualization - typical for any modern IaaS implementation - isolates workloads from underlying hardware differences. Provided customers do not lock themselves in to a particular hypervisor vendor, they should be able to move their workloads relatively easily between IaaS providers (hosting companies) and/or internal infrastructure. A requirement for this is that the virtual images can easily be converted and carried across, a capability that several independent infrastructure management solutions now offer. Even better would be the ability to move full composite applications (more about this in another post).
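Keeping that portability in practice usually means coding against a provider-neutral interface rather than any one provider’s API. A minimal sketch, assuming two hypothetical hosting companies (the provider classes, image IDs and return values below are invented for illustration):

```python
from abc import ABC, abstractmethod

class IaasProvider(ABC):
    """Provider-neutral interface; real providers need far more operations."""

    @abstractmethod
    def launch(self, image_id: str, size: str) -> str:
        """Launch a VM from a portable image; return an instance handle."""

class ProviderA(IaasProvider):
    # Hypothetical hosting company A.
    def launch(self, image_id, size):
        return f"a-instance:{image_id}:{size}"

class ProviderB(IaasProvider):
    # Hypothetical hosting company B.
    def launch(self, image_id, size):
        return f"b-instance:{image_id}:{size}"

def deploy(provider: IaasProvider, image_id: str, size: str = "small") -> str:
    # Application code depends only on the neutral interface, so switching
    # providers is a one-line change rather than a rewrite.
    return provider.launch(image_id, size)

print(deploy(ProviderA(), "web-frontend"))
print(deploy(ProviderB(), "web-frontend"))
```

This is the same adapter idea that the independent infrastructure management tools mentioned above apply at the image level: the workload definition stays yours, and each provider is just one implementation behind it.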

Vertical: For SaaS and PaaS, vertical lock-in is almost by definition part of the package, as the underlying infrastructure comes with the service. The good news is that the customer does not have to worry about these underlying layers. The bad news is that if the customer is worried about the underlying layers, there is nothing they can do. If the provider uses exotic databases, dodgy hardware or has its datacenter in less desirable countries, all the customer can do is decide not to pick that provider. They could consider contracting upfront for exceptions, but this will in almost all cases increase the cost considerably, as massive scale and standardization are essential to the business model of real SaaS providers.

On the IaaS side we see less vertical lock-in, simply because we are already at a lower level, but ideally our choice of IaaS server provider should not limit our choice of IaaS network or IaaS storage provider. For storage, the lesson we learned the hard way during the client/server era – for enterprise applications, logic and data need to be close together to get any decent performance – still applies. As a result, the storage service almost always needs to be procured from the same IaaS provider as used for processing. On the network side most IaaS providers offer a choice of network providers, as they have their datacenters connected to several networks (either at their own location or at one of the large co-location facilities).

Diagonal or inclined: The tendency to buy as much as possible from one vendor may be even stronger in the cloud than in traditional IT. Enterprise customers try to find a single SaaS shop for as many applications as possible. Apart from the desire for out-of-the-box integration, an often overlooked reason for this is that customers need to regularly audit the delivery infrastructure and processes of their SaaS providers, something which is simply unfeasible if they end up with hundreds of SaaS vendors.

For similar reasons we see customers wanting to buy PaaS from their selected SaaS or IaaS vendor. As a result, vendors are trying to deliver all flavors, whether they are any good in a given area or not. A recent example is the statement from a senior Microsoft official that Azure and Amazon are likely to become more similar, with the first offering IaaS and the second likely to offer some form of PaaS soon.

In my personal view, it is questionable whether such vertical cloud integration should be considered desirable. The beauty of the cloud is that companies can focus on what they are good at and do that very well. For one company this may be CRM, for another it is financial management or creating development environments, and for a third it may be selling books - um, strike that - hosting large infrastructures. Customers should be able to buy from the best in each area. CFOs do not want to buy general ledgers from CRM specialists, and sales people certainly do not want it the other way around. Similar considerations apply to buying infrastructure services from a software company or software from an infrastructure hosting company. At the very least this is because developers and operators are different types of people, which no amount of “devops training” will change (at least not during this generation).

Generational: As with any new technology generation, people seem to feel this may be the final one: “Once we have moved everything to the cloud, we will never move again.” Empirically this is very unlikely - there always is a next generation, we just don’t know what it is (if we did, we would try to move to it now). The underlying thought may be: “Let the cloud vendors innovate their underlying layers, without bothering us.” But vendor lock-in is exactly what would prevent customers from reaping the benefits of cloud suppliers innovating those underlying layers. Let’s face it, not all current cloud providers will be innovative market leaders in the future. If we are unlucky and pick the wrong ones, the last thing we want to be is locked in. In today’s market, picking winning stocks or lotto numbers may be easier than picking winning cloud vendors (and even at stock picking we are regularly beaten by not very academically skilled monkeys).

Conclusion
My goal for this post was to define lock-in, understand it in a cloud context and argue that it should be avoided while we still have a chance (while 99% of all business systems are not yet running in the cloud). Large-scale vertical integration is typical of immature markets – be it early-day cars, computers or now clouds. As markets mature, companies specialize again in their core competencies and find their proper (and profitable) place in a larger supply chain. The lock-in table at the end, where I use the number of padlocks to indicate the relative lock-in of traditional IT versus SaaS, PaaS and IaaS, is meant for discussion and improvement rather than as an absolute statement. In fact, our goal should be to reduce lock-in considerably for these new platforms. In a later post I will discuss some innovative cross-cloud portability strategies to prevent lock-in when moving large numbers of solutions into the cloud. Stay tuned.

PS Not that I think for a minute that my blogs have any serious stopping power, but do not let the above stop you from moving suitable applications into the cloud today. It’s a learning experience that we will all need as this cloud thing gets serious for serious enterprise IT (and I am absolutely sure it will, as the percentage of suitable applications grows larger every day). Just make sure you define an exit strategy for each first, as all the industry analysts will tell you. In fact, even for traditional IT it was always a good idea to have an exit strategy first (you did not really think these analysts came up with something new, did you?).

This blog originally was published at ITSMportal.com on July 14, 2010

More Stories By Gregor Petri

Gregor Petri is a regular expert or keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, contributing author to the Dutch “Over Cloud Computing” book, member of the Computable expert panel and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
