The Path to the Intelligent Cloud

The next generation of cloud computing will be very different than the IaaS/PaaS/SaaS offerings we know today

Let's face it: right now the cloud is pretty immature. The level of automation and management in these environments is analogous to the early assembly lines, but it won't stay this way for long. This is not the industrial revolution; this one moves at a wicked fast pace. Before we know it, the next generation of cloud computing will be upon us, and it will be very different than the IaaS/PaaS/SaaS offerings we know today.
For one, it will be intelligent. That is, the cloud will be content aware, and its network connections will act like mycelial hyphae: what one hypha learns will become available to the entire cloud. Whereas the current cloud is focused on scalability and elasticity, the next incarnation of the cloud will focus on redundancy, resiliency and collaboration. The discussion regarding public, private or hybrid will become moot as the cloud simply becomes a system of nodes, with some nodes participating fully and others not participating at all. Nodes will contribute to the cloud on a controlled basis. Some organizations will host their own nodes while others will pay service providers to host their nodes for them.
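To make the hyphae analogy a little more concrete, here is a minimal Python sketch of what node-level knowledge sharing might look like in such a cloud. Everything in it (the Node class, the participating flag, the sync pass) is a hypothetical illustration, not any existing platform's API.

    # Hypothetical sketch: nodes that propagate what they learn to their peers,
    # the way hyphae move nutrients through a mycelial network.
    class Node:
        def __init__(self, name, participating=True):
            self.name = name
            self.participating = participating  # nodes contribute on a controlled basis
            self.knowledge = {}                 # content this node has learned

        def learn(self, key, value):
            self.knowledge[key] = value

        def sync(self, peers):
            """Push everything this node knows to its participating peers."""
            if not self.participating:
                return
            for peer in peers:
                if peer.participating:
                    peer.knowledge.update(self.knowledge)

    # One node learns something; after a sync pass every participating node knows it.
    a, b, c = Node("a"), Node("b"), Node("c", participating=False)
    a.learn("hot-path", "cache tier saturates at 80% utilization")
    a.sync([b, c])
    print(b.knowledge)  # {'hot-path': 'cache tier saturates at 80% utilization'}
    print(c.knowledge)  # {} -- non-participating nodes opt out

In a real system the sync pass would be an ongoing, decentralized exchange (a gossip protocol is one common way to do it), but the point is the same: what one node learns becomes available to the whole.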
However, the bigger issue is not when this will occur, but why it must occur. It must occur because we are learning that no matter how much the cost of compute resources comes down, it will never be low enough to make hosting the zettabytes we're interested in cost-effective on our own. The cloud today is teaching us a valuable lesson: content is king! Once we squeeze all the inefficiency and underutilization out of our data centers, there will be little cost savings left to derive from our own cloud infrastructure, but that won't stop the machine once it's started. Like any other successful ecosystem, once started, it consumes its foundation and then feeds externally to survive. This pattern is how small companies become large corporations. This pattern is how small republics become big government. The cloud as an ecosystem is no different, and it feeds on content. When it has consumed all the content we can provide about our own organizations, it will start to feed on external content. We are already starting to see this occur under the guise of "Big Data".
The current focus on what cloud computing is amounts to a mere distraction, fostered by a market that is organically moving toward the culmination of the intelligent cloud. That doesn't undermine the effort underway, however; it is a critical component of reaching the intelligent-cloud outcome. That is, the consolidation of siloed compute stacks onto converged infrastructure is a critical first step toward the node architecture of the intelligent cloud. But the lack of discussion of and focus on application rationalization will have the profound effect of limiting forward progress. Moreover, the limited tooling for inventorying applications and understanding the dependencies between application components forces the application rationalization process to rely heavily on human knowledge engineering.
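As a rough illustration of what that knowledge engineering has to produce, here is a minimal Python sketch of a component dependency inventory and the kind of rationalization questions it answers. The component names and the two rules at the end are made-up examples, not the output of any real tool.

    from collections import defaultdict

    # Hypothetical inventory: component -> components it depends on.
    dependencies = {
        "order-ui":      ["order-api"],
        "order-api":     ["billing-db", "customer-db"],
        "legacy-report": ["billing-db"],
        "old-intranet":  [],
        "billing-db":    [],
        "customer-db":   [],
    }

    # Invert the graph to see who depends on what.
    dependents = defaultdict(set)
    for component, deps in dependencies.items():
        for dep in deps:
            dependents[dep].add(component)

    # Shared components need careful rationalization; true orphans are easy wins.
    for component in dependencies:
        users = dependents.get(component, set())
        if not users and not dependencies[component]:
            print(f"{component}: no dependents, no dependencies -- retirement candidate")
        elif len(users) > 1:
            print(f"{component}: shared by {sorted(users)} -- rationalize with care")

Building and maintaining that inventory is exactly the work that, absent better tooling, falls to the people who know the applications.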
Until the tools market for application rationalization matures, it is imperative that organizations get serious about building their Configuration Management Databases (CMDBs) and following IT Service Management (ITSM) processes. Failure to do so will significantly limit the upside that cloud computing can provide to the business. Sure, executives will be thrilled with the immediate cost reductions, but when was the last time anyone's CEO said, two years later, "Don't worry, Bill, you saved us $2 million two years ago; you're still golden in my book"? The immediate cost savings from infrastructure consolidation, SaaS outsourcing and Big Data analytics will be short-lived, and the CEO will be asking when they can finally start to sunset some of those proprietary application stacks and move their applications onto that costly cloud infrastructure. Without the tools and without the ITSM foundations, the answer is going to be that it will require a big up-front spend to gain efficiencies and further cost savings in the future.
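For those who haven't lived with one, a CMDB at its simplest is a set of configuration items (CIs) and the relationships between them, which is precisely what that sunset conversation needs. The Python sketch below is hypothetical; the field names and the migration_impact helper follow no particular ITSM product.

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationItem:
        ci_id: str
        ci_type: str                # e.g. "application", "server", "database"
        owner: str
        depends_on: list = field(default_factory=list)

    cmdb = {
        "crm-app": ConfigurationItem("crm-app", "application", "sales-it",
                                     depends_on=["crm-db", "app-server-01"]),
        "crm-db": ConfigurationItem("crm-db", "database", "dba-team"),
        "app-server-01": ConfigurationItem("app-server-01", "server", "infra-team"),
    }

    def migration_impact(ci_id):
        """Everything that has to move, or be re-pointed, along with this CI."""
        ci = cmdb[ci_id]
        return [ci_id] + [item for dep in ci.depends_on for item in migration_impact(dep)]

    print(migration_impact("crm-app"))
    # ['crm-app', 'crm-db', 'app-server-01']

Without records like these, answering "what will it take to sunset this stack?" starts from zero every time, and that is where the big up-front spend comes from.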


More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles at multiple businesses, including CTO, Chief Architect and Founder/CEO. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. He holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
