By Gregor Petri
July 16, 2010 04:45 PM EDT
IT vendor lock-in is as old as the IT industry itself. Some may even argue that lock-in is unavoidable when using any IT solution, regardless of whether we use it "on premise" or "as a service". To determine whether this is the case, let's examine traditional lock-in and the expected impact of cloud computing.
Horizontal lock-in: This restricts the ability to replace a product with a comparable or competitive product. If I choose solution A (say, a CRM solution or a development platform), then I will need to migrate my data and/or code, retrain my users and rebuild the integrations to my other solutions if I want to move to solution B. This is a bit like cars: if I buy a Prius, I cannot drive a Volt. But it would be nice if I could use the same garage, charging cable, GPS, etc. when I switch.
Vertical lock-in: This restricts choice in other levels of the stack and occurs if choosing solution A mandates the use of database X, operating system Y, hardware vendor Z and/or implementation partner S. To prevent this type of lock-in the industry embraced the idea of open systems, where hardware, middleware and operating systems could be chosen more independently. Before then, hardware vendors often sold specific solutions (like CRM or banking) that ran only on their specific hardware and OS and could be obtained in their entirety only from them. This is a bit like today's (early market) SaaS offerings, where everything needs to be obtained from one vendor.
Generational lock-in: This last one is as inescapable as death and taxes and is an issue even if there is no desire to avoid horizontal, vertical or diagonal lock-in. No technology generation, and thus no IT solution or IT platform, lives forever (well, maybe with the exception of the mainframe). The first three types of lock-in are not too bad if you had a good crystal ball and picked the right platforms (e.g., Windows and not OS/2) and the right solution vendors (generally the ones that turned out to become the market leaders). But even such market leaders at some point reach end of life. Customers want to be able to replace them with the next generation of technology without it being prohibitively expensive, or even impossible, because of technical, contractual or practical lock-in.
For PaaS the situation is very different, especially if the development language is proprietary to the PaaS platform. In that case the lock-in is almost absolute and comparable to the lock-in companies may have experienced with proprietary 4GL platforms, with the added complexity that with PaaS the underlying infrastructure is locked in as well (see under vertical).
Horizontal lock-in for IaaS may actually be less severe than lock-in to traditional hardware vendors, as virtualization - typical of any modern IaaS implementation - isolates workloads from underlying hardware differences. Provided customers do not lock themselves in to a particular hypervisor vendor, they should be able to move their workloads relatively easily between IaaS providers (hosting companies) and/or internal infrastructure. A requirement for this is that the virtual images can be easily converted and carried across, a capability that several independent infrastructure management solutions now offer. Even better would be the ability to move full composite applications (more about this in another post).
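The portability idea above can be sketched as a thin abstraction layer: if a workload is described in provider-neutral terms, switching IaaS providers becomes a re-deploy rather than a rewrite. A minimal toy sketch in Python (the provider classes, image names and output formats here are hypothetical illustrations, not any real cloud API):

```python
# Toy sketch: a provider-neutral workload description that can be
# handed to different (hypothetical) IaaS back ends without change.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    image: str       # provider-neutral image reference
    cpus: int
    memory_gb: int

class ProviderA:
    """Hypothetical IaaS provider; maps the neutral spec to its own format."""
    def deploy(self, w: Workload) -> str:
        return f"providerA:{w.name}:{w.cpus}vcpu/{w.memory_gb}GB"

class ProviderB:
    """A second hypothetical provider with the same deploy contract."""
    def deploy(self, w: Workload) -> str:
        return f"providerB:{w.name}:{w.cpus}vcpu/{w.memory_gb}GB"

workload = Workload(name="crm-app", image="app-server-v1", cpus=2, memory_gb=4)

# Moving between providers is just a re-deploy of the same neutral spec.
print(ProviderA().deploy(workload))  # providerA:crm-app:2vcpu/4GB
print(ProviderB().deploy(workload))  # providerB:crm-app:2vcpu/4GB
```

In practice the hard part is the image conversion and data migration hidden behind such an interface, which is exactly what the independent infrastructure management tools mentioned above try to solve.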
Vertical: For SaaS and PaaS, vertical lock-in is almost by definition part of the package, as the underlying infrastructure comes with the service. The good news is that the customer does not have to worry about these underlying layers. The bad news is that if the customer is worried about the underlying layers, there is nothing he can do. If the provider uses exotic databases, dodgy hardware or has his datacenter in less desirable countries, all the customer can do is decide not to pick that provider. He could consider contracting upfront for exceptions, but this will in almost all cases increase the cost considerably, as massive scale and standardization are essential to the business model of real SaaS providers.
On the IaaS side we see less vertical lock-in, simply because we are already at a lower level, but ideally our choice of IaaS server provider should not limit our choice of IaaS network or IaaS storage provider. For storage, the lesson we learned the hard way during the client/server era - for enterprise applications, logic and data need to be close together to get any decent performance - still applies. As a result the storage service almost always needs to be procured from the same IaaS provider as the processing. On the network side most IaaS providers offer a choice of network providers, as they have their datacenters connected to several network providers (either at their own location or at one of the large co-location facilities).
Diagonal or inclined lock-in: The tendency to buy as much as possible from one vendor may be even stronger in the cloud than in traditional IT. Enterprise customers try to find a single SaaS shop for as many applications as possible. Apart from the desire for out-of-the-box integration, an often overlooked reason for this is that customers need to regularly audit the delivery infrastructure and processes of their SaaS providers, something that is simply unfeasible if they ended up with hundreds of SaaS vendors.
For similar reasons we see customers wanting to buy PaaS from their selected SaaS or IaaS vendor. As a result vendors are trying to deliver all flavors, whether they are any good in that area or not. A recent example is the statement from a senior Microsoft official that Azure and Amazon were likely to become more similar, with the first offering IaaS and the second likely to offer some form of PaaS soon.
In my personal view, it is questionable whether such vertical cloud integration should be considered desirable. The beauty of the cloud is that companies can focus on what they are good at and do that very well. For one company this may be CRM, for another it is financial management or creating development environments, and for a third it may be selling books - um, strike that - hosting large infrastructures. Customers should be able to buy from the best in each area. CFOs do not want to buy general ledgers from CRM specialists, and salespeople certainly do not want it the other way around. Similar considerations apply to buying infrastructure services from a software company or software from an infrastructure hosting company. At the very least this is because developers and operators are different types of people, which no amount of "DevOps training" will change (at least not during this generation).
Generational: As with any new technology generation, people seem to feel this may be the final one: "Once we have moved everything to the cloud, we will never move again." Empirically this is very unlikely - there is always a next generation, we just don't know what it is (if we did, we would try to move to it now). The underlying thought may be: "Let the cloud vendors innovate their underlying layers, without bothering us." But vendor lock-in is exactly what would prevent customers from reaping the benefits of cloud suppliers innovating their underlying layers. Let's face it, not all current cloud providers will be innovative market leaders in the future. If we were unlucky and picked the wrong ones, the last thing we want to be is locked in. In today's market, picking winning stocks or lotto numbers may be easier than picking winning cloud vendors (and even at stock picking we are regularly beaten by not very academically skilled monkeys).
My goal for this post was to try to define lock-in, understand it in a cloud context and agree that it should be avoided while we still have a chance (99% of all business systems are not yet running in the cloud). Large-scale vertical integration is typical of immature markets, be it early-day cars, computers or now clouds. As markets mature, companies specialize again in their core competencies and find their proper (and profitable) place in a larger supply chain. The lock-in table at the end, where I use the number of padlocks to indicate the relative locking of traditional IT versus SaaS, PaaS and IaaS, is meant more for discussion and improvement than as an absolute statement. In fact our goal should be to reduce lock-in considerably for these new platforms. In a later post I will discuss some innovative cross-cloud portability strategies to prevent lock-in when moving large numbers of solutions into the cloud - stay tuned.
PS: Not that I think for a minute that my blogs have any serious stopping power, but do not let the above stop you from moving suitable applications into the cloud today. It's a learning experience that we will all need as this cloud thing gets serious for serious enterprise IT (and I am absolutely sure it will, as the percentage of suitable applications is growing every day). Just make sure you define an exit strategy for each application first, as all the industry analysts will tell you. In fact, even for traditional IT it was always a good idea to have an exit strategy first (you did not really think those analysts came up with something new, did you?).
This blog was originally published at ITSMportal.com on July 14, 2010.