
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait

The introduction of the newest member of the cloud computing buzzword family is “IT as a Service.”

Infrastructure 2.0 ≠ cloud computing ≠ IT as a Service. There is a difference between Infrastructure 2.0 and cloud. There is also a difference between cloud and IT as a Service. But they do go together, like a parfait. And everybody likes a parfait…

The introduction of the newest member of the cloud computing buzzword family is “IT as a Service.” It is understandably causing some confusion because, after all, isn’t that just another way to describe “private cloud”? No, actually it isn’t. There’s a lot more to it than that, and it’s very applicable to both private and public models. Furthermore, equating “cloud computing” to “IT as a Service” does both as big a disservice as making synonyms of “Infrastructure 2.0” and “cloud computing.” These three [ concepts | models | technologies ] are highly intertwined and in some cases even interdependent, but they are not the same.

In the simplest explanation possible: Infrastructure 2.0 enables cloud computing, which enables IT as a Service.

Now that we’ve got that out of the way, let’s dig in.

ENABLE DOES NOT MEAN EQUAL TO

One of the core issues seems to be the rush to equate “enable” with “equal.” There is a relationship between these three technological concepts, but they are in no wise equivalent, nor should they be treated as such. As with SOA, the differences between them revolve primarily around the level of abstraction and the layers at which they operate. Not the layers of the OSI model or the technology stack, but the layers of a data center architecture.

Let’s start at the bottom, shall we?

INFRASTRUCTURE 2.0

At the very lowest layer of the architecture is Infrastructure 2.0. Infrastructure 2.0 is focused on enabling dynamism and collaboration across the network and application delivery network infrastructure. It is the way in which traditionally disconnected (from a communication and management point of view) data center foundational components are imbued with the ability to connect and collaborate. This is primarily accomplished via open, standards-based APIs that provide a granular set of operational functions that can be invoked from a variety of programmatic methods such as orchestration systems, custom applications, and via integration with traditional data center management solutions. Infrastructure 2.0 is about making the network smarter both from a management and a run-time (execution) point of view, but in the case of its relationship to cloud and IT as a Service the view is primarily focused on management.

Infrastructure 2.0 includes the service-enablement of everything from routers to switches, from load balancers to application acceleration, from firewalls to web application security components to server (physical and virtual) infrastructure. It is, distilled to its core essence, API-enabled components.
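
To make “API-enabled components” a little more concrete, here’s a minimal sketch in Python. The load balancer object and its operations are entirely hypothetical stand-ins, not any real product’s API; the point is simply that each operational function is granular and individually invokable by an orchestration system, a custom script, or a management console.

```python
# A minimal, hypothetical illustration of an "API-enabled" infrastructure
# component. The class and its methods are stand-ins, not a real product API;
# the point is that each operational function is granular and individually
# invokable.

from dataclasses import dataclass, field


@dataclass
class LoadBalancerAPI:
    """Stand-in for an API-enabled load balancer component."""
    pools: dict = field(default_factory=dict)
    virtual_servers: dict = field(default_factory=dict)

    def create_pool(self, name: str) -> None:
        # One granular operational function...
        self.pools[name] = []

    def add_member(self, pool: str, address: str, port: int) -> None:
        # ...and another; each can be invoked independently.
        self.pools[pool].append((address, port))

    def create_virtual_server(self, name: str, vip: str, pool: str) -> None:
        self.virtual_servers[name] = {"vip": vip, "pool": pool}


# An orchestration system, custom application, or management console invokes
# these granular calls in whatever order local policy requires.
lb = LoadBalancerAPI()
lb.create_pool("app_pool")
lb.add_member("app_pool", "10.0.0.10", 80)
lb.add_member("app_pool", "10.0.0.11", 80)
lb.create_virtual_server("app_vs", "192.0.2.1", "app_pool")
print(lb.virtual_servers)
```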

CLOUD COMPUTING

Cloud computing is the closest to SOA in that it is about enabling operational services in much the same way as SOA was about enabling business services. Cloud computing takes the infrastructure layer services and orchestrates them together to codify an operational process that provides a more efficient means by which compute, network, storage, and security resources can be provisioned and managed. This, like Infrastructure 2.0, is an enabling technology. Alone, these operational services are generally discrete and are packaged up specifically as the means to an end – on-demand provisioning of IT services.

Cloud computing is the service-enablement of operational services and also carries along the notion of an API. In the case of cloud computing, this API serves as a framework through which specific operations can be accomplished in a push-button like manner.
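
If it helps, here’s a hedged sketch of that push-button framing: one cloud-layer call that codifies an operational process by stitching together several lower-layer provisioning steps. Every function name below is an invented placeholder, not any provider’s actual API.

```python
# A hypothetical sketch of a cloud-layer "operational service": a single
# push-button call that codifies an operational process by orchestrating
# several granular infrastructure calls. All names are invented placeholders.

def provision_compute(size: str) -> dict:
    return {"type": "vm", "size": size, "id": "vm-001"}      # placeholder call

def provision_storage(gb: int) -> dict:
    return {"type": "volume", "gb": gb, "id": "vol-001"}     # placeholder call

def configure_network(vm_id: str) -> dict:
    return {"vm": vm_id, "vip": "192.0.2.1"}                 # placeholder call

def provision_application(size: str = "small", storage_gb: int = 20) -> dict:
    """The operational service the cloud API exposes: one call, many steps."""
    vm = provision_compute(size)
    volume = provision_storage(storage_gb)
    network = configure_network(vm["id"])
    return {"compute": vm, "storage": volume, "network": network}

print(provision_application())
```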

IT as a SERVICE

At the top of our technology pyramid (it is probably obvious by now that we have been building toward the “pinnacle” of IT by laying increasingly focused layers atop one another) we have IT as a Service. IT as a Service, unlike cloud computing, is designed to be consumed not only by other IT-minded folks but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but also those services that are more, well, businessy, such as identity management and access to resources.

IT as a Service builds on the services provided by cloud computing, often exposed as a “cloud framework” or a “cloud API,” which provide the means by which resources can be provisioned and managed. Now that sounds an awful lot like “cloud computing,” but the abstraction is a bit higher than what we expect with cloud. Even with a cloud computing API we are still interacting fairly directly with operational and compute-type resources. With IT as a Service we’re still provisioning, primarily, infrastructure services, but we are doing so at a much higher layer and in a way that makes it easy for both business and application developers and analysts to do so.

An example is probably in order at this point.

THE THREE LAYERS IN THE ARCHITECTURAL PARFAIT


Let us imagine a simple “application” which itself requires only one server and which must be available at all times.

That’s the “service” IT is going to provide to the business.

In order to accomplish this seemingly simple task, there’s a lot that actually has to go on under the hood, within the bowels of IT.

LAYER ONE

Consider, if you will, what fulfilling that request means. You need at least two servers and a load balancer to keep it available at all times, you need compute and some storage behind them, and you need – albeit unknown to the business user – firewall rules to ensure the application is only accessible to those whom you designate. So at the bottom layer of the stack (Infrastructure 2.0) you need a set of components that match these functions, and they must all be enabled with an API (or, at a minimum, be able to be automated via traditional scripting methods). Now the actual task of configuring a load balancer is not just a single API call. Ask RackSpace, or GoGrid, or Terremark, or any other cloud provider. It takes multiple steps to authenticate and configure – in the right order – that component. The same is true of many components at the infrastructure layer: the APIs are necessarily granular enough to provide the flexibility necessary to be combined in such a way as to be customizable for each unique environment in which they may be deployed. So what you end up with is a set of infrastructure services that comprise the appropriate API calls for each component based on the specific operational policies in place.
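
As a rough illustration (with made-up endpoints and payloads, not any particular provider’s API), that “right calls, in the right order, wrapped up as a reusable infrastructure service” idea might look something like this:

```python
# A hypothetical sketch of the layer-one point: configuring even one component
# is an ordered, authenticated sequence of granular calls, and that sequence
# gets wrapped up as a reusable infrastructure service. Endpoints and payloads
# are invented, not any specific provider's API.

def authenticate(user: str, secret: str) -> str:
    # Placeholder for an authentication call that returns a session token.
    return "token-abc123"

def api_call(token: str, action: str, **params) -> dict:
    # Placeholder for one granular, authenticated call to a component.
    return {"action": action, "params": params, "status": "ok"}

def make_app_available(token: str, app: str, members: list) -> list:
    """Infrastructure service: the right calls, in the right order, per policy."""
    pool = f"{app}_pool"
    steps = [api_call(token, "create_pool", pool=pool)]
    for address in members:
        steps.append(api_call(token, "add_member", pool=pool, address=address))
    steps.append(api_call(token, "create_virtual_server", app=app, pool=pool))
    steps.append(api_call(token, "add_firewall_rule", app=app, allow="designated users only"))
    return steps

token = authenticate("ops", "secret")
for step in make_app_available(token, "simple_app", ["10.0.0.10", "10.0.0.11"]):
    print(step["action"], step["status"])
```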

LAYER TWO

At the next layer up you’re providing even more abstract frameworks. The “cloud API” at this layer may provide services such as “auto-scaling” that require a great deal of configuration and registration of components with other components. There’s automation and orchestration occurring at this layer of the IT Service Stack, as it were, that is much more complex but more narrowly focused than at the previous infrastructure layer. It is at this layer that the services become more customized and able to provide business- and customer-specific options. It is also at this layer where things become more operationally focused, with the provisioning of “application resources” comprising, perhaps, the provisioning of both compute and storage resources. This layer also lays the foundation for metering and monitoring (because you want to provide visibility, right?), which essentially overlays, i.e. makes a service of, multiple infrastructure resource monitoring services.
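
Here’s a hedged sketch of what an “auto-scaling” service at this layer might look like; the monitoring overlay, thresholds, and function names are invented purely for illustration and simply drive the layer-one services sketched earlier.

```python
# A hypothetical sketch of a layer-two "auto-scaling" service: it watches an
# overlay of monitoring data and drives the layer-one infrastructure services
# in response. Thresholds and function names are invented for illustration.

def current_load(app: str) -> float:
    # Placeholder for the metering/monitoring overlay (aggregated metrics).
    return 0.85

def provision_instance(app: str) -> str:
    # Placeholder for a layer-one compute provisioning service.
    return f"{app}-instance-new"

def register_with_load_balancer(app: str, instance: str) -> None:
    # Placeholder for a layer-one load balancer service.
    print(f"registered {instance} in pool {app}_pool")

def auto_scale(app: str, scale_out_threshold: float = 0.8) -> None:
    """Operational service exposed by the cloud layer: scale out on high load."""
    if current_load(app) > scale_out_threshold:
        instance = provision_instance(app)
        register_with_load_balancer(app, instance)

auto_scale("simple_app")
```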

LAYER THREE

At the top layer is IT as a Service, and this is where systems become very abstracted and get turned into the IT King “A La Carte” Menu that is the ultimate goal according to everyone who’s anyone (and a few people who aren’t). This layer offers an interface to the cloud in such a way as to make self-service possible. It may not be Infrabook or even very pretty, but as long as it gets the job done, cosmetics just enhance the value of what already exists. IT as a Service is the culmination of all the work done at the previous layers to fine-tune services until they are at the point where they are consumable – in the sense that they are easy to understand and require no real technical understanding of what’s actually going on. After all, a business user or application developer doesn’t really need to know how the server and storage resources are provisioned, just what sizes they come in and how much they’re going to cost.

IT as a Service ultimately enables the end-user – whoever that may be – to easily “order” IT services to fulfill the application-specific requirements associated with an application deployment. That means availability, scalability, security, monitoring, and performance.
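
To illustrate the difference in abstraction, here’s a minimal sketch of the self-service “order” itself. The catalog, sizes, and prices are invented, and all the real work happens in the layers underneath.

```python
# A hypothetical sketch of the layer-three view: the consumer "orders" an IT
# service from an a la carte menu by size and sees a cost, with no visibility
# into the layer-one and layer-two work underneath. Catalog entries, sizes,
# and prices are invented purely for illustration.

CATALOG = {
    "small":  {"instances": 2, "storage_gb": 20,  "monthly_cost": 150},
    "medium": {"instances": 4, "storage_gb": 100, "monthly_cost": 450},
}

def order_it_service(app_name: str, size: str) -> dict:
    """Self-service order: availability, scalability, security, and monitoring
    are implied by the offering, not configured by the person ordering."""
    offering = CATALOG[size]
    # Under the hood this would invoke the cloud-layer services sketched above.
    return {"app": app_name, "size": size, "monthly_cost": offering["monthly_cost"]}

print(order_it_service("simple_app", "small"))
```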

A DYNAMIC DATA CENTER ARCHITECTURE

One of the first questions that should come to mind is: why does it matter? After all, one could cut out the “cloud computing” layer and go straight from infrastructure services to IT as a Service. While that’s technically true, it eliminates one of the biggest benefits of a layered and highly abstracted architecture: agility. By presenting each layer to the layer above as services, we are effectively employing the principles of a service-oriented architecture and separating the implementation from the interface. This provides the ability to modify the implementation without impacting the interface, which means less downtime and very little – if any – modification in the layers above the layer being modified. At the lowest level, this translates into vendor agnosticism and the ability to avoid vendor lock-in. If two components, say a Juniper switch and a Cisco switch, are both enabled with the means by which they can be exposed as services, then it becomes possible to swap the two at the implementation layer without requiring the changes to trickle upward through the interface and into the higher layers of the architecture.

It’s polymorphism applied to data center operations rather than a single object’s operations, to put it in a developer’s terms. It’s SOA applied to a data center rather than an application, to put it in an architect’s terms.
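
For the developers in the audience, here is a tiny sketch of that polymorphism analogy, with illustrative (not vendor-accurate) method names: the higher layer programs against the service interface, and the vendor implementation behind it can be swapped without changes trickling upward.

```python
# A hypothetical sketch of the polymorphism analogy: higher layers program
# against a service interface, and the vendor-specific implementation behind
# it can be swapped without changes trickling upward. Method names are
# illustrative only, not real vendor APIs.

from abc import ABC, abstractmethod


class SwitchService(ABC):
    """The interface the layers above consume."""

    @abstractmethod
    def create_vlan(self, vlan_id: int) -> str: ...


class CiscoSwitch(SwitchService):
    def create_vlan(self, vlan_id: int) -> str:
        return f"cisco: vlan {vlan_id} configured via vendor-specific calls"


class JuniperSwitch(SwitchService):
    def create_vlan(self, vlan_id: int) -> str:
        return f"juniper: vlan {vlan_id} configured via vendor-specific calls"


def provision_network_segment(switch: SwitchService, vlan_id: int) -> str:
    # The higher layer never changes, whichever implementation is plugged in.
    return switch.create_vlan(vlan_id)


print(provision_network_segment(CiscoSwitch(), 100))
print(provision_network_segment(JuniperSwitch(), 100))
```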

It’s an architectural parfait and, as we all know, everybody loves a parfait, right?


Related blogs & articles:

Follow me on Twitter View Lori's profile on SlideShare friendfeed icon_facebook

AddThis Feed Button Bookmark and Share

 

 

Read the original blog entry...

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
