And the Killer App for Private Cloud Computing Is

Automating components is easy. It’s automating processes that’s hard

The premise that you cannot realize an automated, on-demand data center unless your infrastructure is comprised solely of Infrastructure 2.0 components is, in fact, wrong. The capabilities of modern Infrastructure 2.0 hardware, such as a standards-based API that automation systems can leverage, certainly make the task simpler, but they are not the only way components can be automated. In fact, “legacy” infrastructure has been automated for years using other mechanisms that can certainly be incorporated into the dynamic data center model.

When it’s time to upgrade or purchase new solutions, components enabled with standards-based APIs should certainly be considered before those without, but there’s no reason a hybrid data center replete with both legacy and dynamic infrastructure components cannot be automated in such a way as to form the basis for a “private cloud.” The notion that you must have a homogeneous infrastructure is not only unrealistic, it’s also indicative of a too-narrow focus on individual components rather than on the systems and processes that make up data center operations.

In “The Case Against Private Clouds” Bernard Golden blames the inability to automate legacy infrastructure for a yet-to-occur failure in private cloud implementation:

The key to automating the bottom half of the chart -- the infrastructure portion -- is to use equipment that can be configured remotely with automated measures. In other words, the equipment must be capable of exposing an API that an automated configuration system can interact with. This kind of functionality is the hallmark of up-to-date equipment. Unfortunately, most data centers are full of equipment that does not have this functionality; instead they have a mishmosh of equipment of various vintages, much of which requires manual configuration. In other words, automating much of the existing infrastructure is a non-starter.

The problem here is the claim that legacy infrastructure requires manual configuration and that automating most of the infrastructure is therefore a “non-starter.” In other words, if you have “legacy” infrastructure in your data center, you can’t build a private cloud because there’s no way to automate its configuration and management.

Identity Management Systems (IDMS) focused on provisioning and process management solved this particular problem long ago, as did the plethora of automation- and scripting-focused vendors that provide automation technology for network and systems management tasks. CMDB (Configuration Management Database) technology, too, has some capabilities around automating the configuration of network-focused devices that could easily be extended to cover a wider variety of network and application network infrastructure.

Any network or systems administrator worth their salt can whip up a script (PowerShell, bash, Korn, whatever) that automatically SSHes into a remote network device or system and launches another script to perform task X, Y, or Z. This is not rocket science; it isn’t even very hard. We’ve been doing it for as long as we’ve had networked systems that needed management.
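As a minimal sketch of that kind of script-driven automation, here is what wrapping a remote command over SSH might look like in Python (the hostname, user, and command are purely hypothetical; a real script would also handle keys and timeouts):

```python
import subprocess

def run_remote(host, command, user="admin", dry_run=False):
    """Run a command on a remote device over SSH -- the kind of
    script-driven automation 'legacy' gear has supported for years."""
    ssh_cmd = ["ssh", f"{user}@{host}", command]
    if dry_run:
        # Dry-run mode: return the command that would be executed.
        return " ".join(ssh_cmd)
    result = subprocess.run(ssh_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{host}: {result.stderr.strip()}")
    return result.stdout

# Example: inspect a (hypothetical) legacy switch without touching it.
print(run_remote("switch01.example.com", "show running-config", dry_run=True))
```

Nothing here requires a standards-based API on the device; it only requires that the device accept a remote login, which is exactly the point.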

What is hard, and what’s going to make “private” clouds difficult to implement, is orchestration and management. That’s hard, and largely immature at this stage, because you’re automating processes (i.e., orchestration), not individual systems.

That’s really what’s key to a cloud implementation, not the automation of individual components in the network and application infrastructure.


AUTOMATION IS EASY. ORCHESTRATION IS WHAT’S HARD.


Anyone can automate a task on an individual data center component. But automating a series of tasks, i.e., a process, is much more difficult, because it requires not only an understanding of the process but also, essentially, integration. And integration of systems, whether on the software side of the data center or on the network and application network side, is painful. “Integration” should be a four-letter word; though it isn’t considered one, it’s often vocalized with the same tone and intention as someone invoking an ancient curse.

But I digress. The point is not that integration is hard (everyone knows that) but that the integration and collaboration of components that comprise the automation of processes, i.e., orchestration, is what makes building a “private cloud” difficult.
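The difference is easy to see in code. A sketch of what automating a *process* adds on top of automating components: ordered steps, shared state, and compensation when a later step fails. The step names below are purely illustrative, not any particular product's workflow:

```python
# Orchestration sketch: sequencing component-level tasks and rolling back
# completed steps when a later one fails. Automating any single step is
# easy; coordinating them as a process is where the difficulty lives.

def orchestrate(steps, context):
    """Run (name, do, undo) steps in order; undo completed steps on failure."""
    done = []
    try:
        for name, do, undo in steps:
            do(context)
            done.append((name, undo))
    except Exception:
        # Compensate in reverse order before re-raising.
        for name, undo in reversed(done):
            undo(context)
        raise
    return context

log = []
steps = [
    ("provision_vm", lambda c: log.append("vm up"),      lambda c: log.append("vm down")),
    ("configure_lb", lambda c: log.append("lb pool"),    lambda c: log.append("lb unpool")),
    ("update_dns",   lambda c: log.append("dns record"), lambda c: log.append("dns removed")),
]
orchestrate(steps, {})
```

Each lambda stands in for a component-level automation (an API call or an SSH script); the hard part the orchestrator owns is ordering, state, and what happens when step three fails after steps one and two succeeded.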

Management and orchestration solutions that can easily integrate both legacy infrastructure and Infrastructure 2.0, via standards-based APIs as well as traditional “hacks” requiring secure remote access and remote execution of scripts, are the “killer app” for “private cloud computing.”
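That hybrid layer can be sketched as a common interface with two transports behind it: Infrastructure 2.0 gear gets an API call, legacy gear gets a remote script. The class and method names here are illustrative assumptions, not any real product's API:

```python
# One orchestration-facing interface, two transports behind it.

class ApiDevice:
    """Infrastructure 2.0: configured via a standards-based API."""
    def __init__(self, host):
        self.host = host
    def apply(self, config):
        # In practice: an HTTP call to the device's management API.
        return f"POST https://{self.host}/config <- {config}"

class LegacyDevice:
    """Legacy gear: configured via secure remote script execution."""
    def __init__(self, host):
        self.host = host
    def apply(self, config):
        # In practice: SSH in and run a script, as described earlier.
        return f"ssh {self.host} 'apply-config {config}'"

def apply_everywhere(devices, config):
    """The orchestration layer doesn't care which transport it drives."""
    return [d.apply(config) for d in devices]

fleet = [ApiDevice("adc01"), LegacyDevice("switch07")]
results = apply_everywhere(fleet, "vlan 42")
```

The orchestration logic stays identical whether the fleet is homogeneous or a mishmosh of vintages, which is precisely why a hybrid data center is not a non-starter.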

It’s already been done in the identity management space (IDMS). It’s already been done in the business and application space (BPM). It should be no surprise that it will, eventually, be “done” in the infrastructure world. Folks watching the infrastructure and cloud computing space just have to stop looking at only two layers of the stack and broaden their view a bit to realize that the answer isn’t going to be found solely within the confines of the infrastructure. Like the cloud model and the applications it hosts, the answer is going to be found in a collaborative effort involving components, systems, and people.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
