Databases in the Cloud

Cloud computing is such a fascinating topic because, to anybody who can get enough distance from the marketing hype, it really represents a significant departure from the way the software and hardware industries have operated in recent decades. In fact, cloud computing is to the traditional IT industry what the Internet has been to the music and film industries: an unwelcome and very threatening development dictating completely different business models.

The basic technical premises behind cloud computing have been known for many years. Already in the 1980s, as the number of computers steadily increased, people realized that many of them sat idle most of the time. The result was the first cluster management tools capable of moving simple jobs between machines. From there, the advances in networking during the 1990s brought grid computing, and the large computer farms behind the Web then brought what we now call cloud computing. On the software side, a similar development took place over the years, going from monolithic designs to multi-tier architectures and from tightly coupled systems to service-oriented architectures. Add virtualization, and all the basic tools for building computing clouds are in place.

Whether Software as a Service (SaaS) or Platform as a Service (PaaS), there are many examples out there of services that work well, make economic sense to all parties involved, and are destined to grow both in the number of users and in the functionality they provide. This, however, does not seem to be the case for databases. At least not for databases containing important data.

This is an interesting observation since, technically, nothing prevents databases from residing in the cloud. To understand the complex relationship between databases and the cloud, one needs to understand the chain of problems that must be solved before a database with important data can reside there. These problems are:

- legal aspects of where the data resides

- long-term custody guarantees

- trust in the cloud

If the data residing in a database is of any real value to anybody beyond a small group of individuals, it is likely that many regulations impose a wide variety of constraints on where the data can be, who can look at it, and whether it can be moved anywhere. For instance, in federal countries, local governments often have legislation requiring that the data be stored within the region. At a larger scale, it seems unlikely that a country would agree to have government data stored in a different country. In the private sector, many software development outsourcing efforts have failed because of the difficulty of providing realistic test data without giving any confidential information away. And if the database stays where it is, it is unlikely that the software stack built on top of it will move to the cloud.
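To make that test-data problem concrete, here is a minimal sketch of the kind of masking step such outsourcing projects need before data can leave the premises. The 'customers' table, its columns, file names, and masking rules are all invented for illustration; real projects would follow their own anonymization rules.

```python
import hashlib
import sqlite3

def pseudonymize(value: str, salt: str = "test-export") -> str:
    """Replace an identifying value with a stable, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

src = sqlite3.connect("production.db")   # confidential data stays here
dst = sqlite3.connect("test_export.db")  # only this copy leaves the premises

dst.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT, region TEXT)")
for cid, name, email, region in src.execute(
        "SELECT id, name, email, region FROM customers"):
    # Keep non-identifying fields (region) intact so tests stay realistic;
    # mask everything that identifies a person.
    dst.execute("INSERT INTO customers VALUES (?, ?, ?, ?)",
                (cid, pseudonymize(name),
                 pseudonymize(email) + "@example.com", region))
dst.commit()
```

Even a step this simple is hard to get right at scale, which is exactly why so many outsourcing efforts stumble on it.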

Assuming there is a cloud in the vicinity that fulfills all the locality requirements, the next hurdle is the legal custody guarantees imposed on data. Important and relevant data must, by law, remain available and searchable for long periods of time, often many decades. Clouds cannot provide such guarantees today, and the IT industry has never dealt with such time horizons before.

Finally, even if there is a cloud in the proper place that guarantees it will stay there for the next 50 years, the question that remains is whether it can be trusted to do so. What happens to the data if the cloud simply disappears? Replication makes the location problem even more difficult, and it certainly does not help to reduce the cost of the cloud. Nor does it solve the problem of a company simply shutting down the service. Without very strong, enforceable guarantees, of the kind found in other branches of industry that are critical to the economy, there will not be enough trust to move databases with important data into a commercial cloud.

Does this imply that we will never see databases in the cloud? Not at all. However, the clouds where important databases will reside might be different from the commercial ones that are attracting so much attention these days.

First, the clouds where databases may live very comfortably will be private clouds. Governments, for instance, are likely to own (or contract) such clouds to offer cloud services to the public sector. Second, community clouds linking the private clouds of partner companies are also likely to be common, since they spread the costs among several participants while still giving access to more resources than any one of them directly owns. Being federations of private clouds, they are easier to protect, to organize under well-defined contractual agreements, and to tailor to the particular application by using, e.g., application-aware networks. Third, public clouds will be used not necessarily for storing the data but for scalability and processing purposes in all those cases where parts of the data can be safely brought into the open. For instance, a company can keep the confidential data within its private cloud but place databases with copies of the publicly available data on a public cloud. By keeping the master copy of the data, the company takes advantage of the public cloud for scalability but can make sure all regulations are followed in house using conventional solutions.
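That third pattern can be made concrete with a minimal sketch, assuming an invented 'documents' table with an is_public flag; the SQLite files here merely stand in for the private master database and a cloud-hosted replica.

```python
import sqlite3

private_db = sqlite3.connect("private_master.db")  # authoritative in-house copy
public_db = sqlite3.connect("public_replica.db")   # stands in for a cloud-hosted DB

public_db.execute("DROP TABLE IF EXISTS documents")
public_db.execute(
    "CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

# Replicate only the publicly releasable subset; confidential rows never leave.
for row in private_db.execute(
        "SELECT id, title, body FROM documents WHERE is_public = 1"):
    public_db.execute("INSERT INTO documents VALUES (?, ?, ?)", row)
public_db.commit()
```

Because the in-house database remains the master, regulatory checks continue to run internally as before, while read-heavy public traffic scales out on the cloud replica.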

The challenges of bringing databases into the cloud are both technical and regulatory. Until the regulatory problems are solved, and that may take a long time, the key to putting databases into the cloud will be infrastructures that give users flexibility and complete control over the databases and the data inside them, regardless of the type of cloud used. If users can easily create copies of their databases, or parts of them, and place those in the cloud; can guarantee that they have within their premises a consistent copy of the data at all times; and can take advantage of the cloud to reduce the costs of provisioning, scaling out, and adding functionality to their data management systems, then databases will move to the cloud. If these chores are not handled by automatic tools, the overhead, costs, and risks involved will be too high to justify moving enterprise-class databases to the cloud.
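As one illustration of what such automatic tooling must do, here is a minimal sketch of a consistency check, continuing the invented 'documents' schema from the sketch above: after each sync, fingerprint the rows on both sides and refuse to proceed if the cloud copy has diverged from the on-premises master.

```python
import hashlib
import sqlite3

def digest(conn: sqlite3.Connection, query: str) -> str:
    """Hash all rows returned by a query, in a deterministic order."""
    h = hashlib.sha256()
    for row in conn.execute(query):
        h.update(repr(row).encode())
    return h.hexdigest()

master = sqlite3.connect("private_master.db")
replica = sqlite3.connect("public_replica.db")

# The on-premises master stays authoritative; the cloud copy must match
# the public subset exactly, or the sync is flagged for repair.
in_house = digest(master, "SELECT id, title, body FROM documents "
                          "WHERE is_public = 1 ORDER BY id")
in_cloud = digest(replica, "SELECT id, title, body FROM documents ORDER BY id")

if in_house != in_cloud:
    raise RuntimeError("cloud copy has diverged from the on-premises master")
```

Real tooling would add incremental syncing and audit trails on top, but the principle is the same: the user always holds a verifiable, consistent copy within their own premises.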

More Stories By Maximilian Ahrens

Ahrens is an expert on, and frequent speaker at, international conferences on service-oriented architecture and virtualization. Before co-founding Zimory, he served as a project manager and research scientist at the innovation development entity of Deutsche Telekom Laboratories, where he was responsible for infrastructure and enterprise IT projects spanning multiple divisions of the Deutsche Telekom group; this made him an expert on enterprise IT and business processes. Before Deutsche Telekom, he led several business process reengineering projects for major German companies. Ahrens received his degree in computer science and business administration from Technische Universität Berlin.
