A Recipe for Cloud Design

Isn’t it time you started creating the infrastructure designs your business consumers want?

The deployment of infrastructure systems to support applications has been a challenge since we first developed a choice beyond the venerable mainframe. To some it’s a simple formula: take a server, toss in a little network and storage, bake for a few weeks, and you’re done. If you’re having a few friends over and need a little more, simply add a few more servers. What the heck, hardware is cheap and gets cheaper every year... the advent of Cloud makes it even easier to maintain that philosophy.

Unfortunately that mentality is pervasive in much of IT, not to mention the business world. It suffers from two fundamental flaws: the first concerns design effectiveness, and the second concerns operational sustainability.

Let’s talk about the latter flaw first, since it is more obvious and by now most readers are familiar with it. Much of the past decade in IT was spent optimizing our data centers. Why? Because for far too many years we simply threw hardware at the problem, until suddenly – in the post-dot-com era of slashed IT budgets – we realized we had thousands upon thousands of servers that were very poorly utilized and cost far more to support than to procure. Enter server consolidation, virtualization, and the seemingly never-ending IT Optimization / Transformation programs. We will come back to this a little later.

Now let's discuss effective design. As it's the season to cook and consume far too much food, it seems fitting to employ the “recipe” analogy. To the casual observer, cooking seems like an extraordinarily simple exercise: mix the ingredients, cook for the proper time at the proper temperature, and voila – the meal is done (if only it were that simple... think back to all the times you took a bite and immediately wished you hadn’t). How about the first time you tried to cook something by yourself? Suddenly it didn’t seem quite so simple – who knew there were a dozen different kinds of chocolate chips, or that it really does make a difference when you use a 10x10 pan instead of the recommended 8x14?

There are so many choices to make when selecting the ingredients, and even more when you discover there are also multiple cooking methods. The permutations are more than most minds can grasp, and you start to understand a little better why it costs so much to dine in a five-star restaurant. A good chef does not arbitrarily select ingredients or a cooking method; the chef carefully considers the audience, the budget, and the amount of time available. Are you cooking for the President and his family, for 30 important guests, or for your local Cub Scout pack? Each scenario imposes different criteria and demands decisions throughout the meal preparation process, any one of which, if made poorly, may ruin the outcome. Dinner will be late, bad, or replaced by hastily ordered pizzas.

Is IT design any different? There are many, many varieties of servers, storage, and networking gear, each with different cost and performance characteristics. So why do many organizations behave as if they are cooking for the Cub Scouts on a campout, using low-cost ingredients and cooking to feed 100 people? The answer lies in all three of the concepts we’ve discussed:

1. Hardware is cheap, and Cloud makes it appear even cheaper

2. We don’t understand the users (who’s coming to the party)

3. We don’t know how many we’re cooking for, so let’s adopt an approach that allows us to serve as many as possible (quantity is more important than quality).

As amusing as this analogy may be, it describes a vicious cycle that we’ve been trapped in for some time. If our users don’t like the result at the end of step 3, we simply give them more resources... see step 1. Does that make our business partners happy? Probably not. Does it ensure we’ll be perpetually optimizing our IT landscape? Definitely. Is it any wonder that our business partners often opt to “go out for dinner”? Despite the many benefits of Cloud Computing, it does not fix this problem – it is the IT equivalent of fast food. What we gain in speed of service is (today) lost in quality, and I’ll politely suggest that long-term dining in the Cloud may have an adverse effect on our IT waistlines.

Ok, ok, enough with the food analogy – you get the point. The fact of the matter is that, with our current approach to designing IT systems, leveraging the Cloud as a delivery mechanism simply transfers the inefficiencies of that approach to the Cloud provider, who has greater economies of scale and can thus deliver them at a lower cost. What are we to do?

The correct approach starts with understanding the workload demand from the beginning and characterizing it in terms of something we call Quality of Experience (QoE). This is a composite metric, based loosely on the relative importance of performance, cost, and efficiency to the intended workload. If that was as clear as mud, imagine you have 100 points to assign across those attributes; you can distribute the points any way you want so long as the total adds up to 100. Critical revenue-generating systems probably get 80 points or more for performance, whereas an employee expense-reporting application likely gets 70+ points on the cost scale.
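
To make that concrete, here is a minimal sketch of the 100-point allocation in Python. The class name, attribute names, and example workloads are illustrative assumptions on my part, not a prescribed model:

    # A minimal sketch of the 100-point QoE allocation described above.
    # Names and example workloads are illustrative, not a prescribed model.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class QoE:
        performance: int  # points for raw speed and responsiveness
        cost: int         # points for low acquisition and run cost
        efficiency: int   # points for resource utilization

        def __post_init__(self):
            total = self.performance + self.cost + self.efficiency
            if total != 100:
                raise ValueError(f"QoE points must total 100, got {total}")

    trading = QoE(performance=85, cost=5, efficiency=10)    # revenue-critical
    expenses = QoE(performance=10, cost=75, efficiency=15)  # cost-sensitive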

Now that we have a business understanding of the workload, we need to look at the various solutions (patterns, reference architectures) available and apply our design rules to select the one that best matches the desired QoE and workload characteristics. What are design rules? They are the decisions we make when we take an abstract pattern and determine which hardware to fulfill it with, how to scale it to meet the anticipated peak demand, how to make it highly available (if necessary), and how to recover if there’s a fault. Good organizations take it a step further and apply still more rules governing what types of monitoring and reporting will be added to the solution, and top-notch groups with flexibility in their run-time environment will also determine when and how a workload can be dynamically allocated more resources... or have them taken away by a higher-priority workload.
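
One way to picture such rules as code is a function that maps a QoE point allocation to a candidate reference architecture. The pattern names and thresholds below are hypothetical, chosen only to illustrate the idea:

    # A sketch of design rules as code: inspect a QoE point allocation
    # (performance/cost/efficiency summing to 100) and nominate a
    # reference architecture. Pattern names and thresholds are hypothetical.
    def select_pattern(performance: int, cost: int, efficiency: int) -> str:
        assert performance + cost + efficiency == 100
        if performance >= 80:
            return "dedicated-bare-metal-ha"  # redundant, latency-optimized
        if cost >= 70:
            return "shared-virtualized-pool"  # cheapest acceptable tier
        if efficiency >= 50:
            return "autoscaling-elastic"      # scales down when idle
        return "standard-virtualized"         # balanced default

    print(select_pattern(85, 5, 10))   # -> dedicated-bare-metal-ha
    print(select_pattern(10, 75, 15))  # -> shared-virtualized-pool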

If this sounds complicated, it’s because it is – and this is the fundamental reason most IT shops say, “The heck with it, let’s just cook for the Cub Scouts” and adopt very rigid infrastructure deployment options. Doing it properly takes too long, and requires someone with a wide range of skills and considerable experience to participate in each and every project.

It doesn’t have to be that way any more.

What we’ve done is take a step back and look at this process with the discipline of an engineer. In doing so we made two critical observations:

• A small percentage of projects actually require in-depth, manual design... the vast majority simply repeat something that has been done before, with minor variations.

• The majority of complex decisions and permutations an experienced designer (architect or engineer) makes can be codified into software rules.

That realization led us to develop our Blueprint platform: a flexible design suite that works from patterns and reference architectures and applies your (or our) design rules to generate mass-producible Blueprint designs specific to the workload requirements (QoE). Even the rules for run-time execution can be codified, whether the workload belongs in a traditional environment or in a private/public/hybrid Cloud. Isn’t it time you started creating the infrastructure designs your business consumers want, while also freeing up your best people to work on the projects that actually require their skills?
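
Continuing the hypothetical sketches above (and emphatically not the actual Blueprint platform), generating a mass-producible design might look like applying codified rules to a workload profile and emitting a complete, repeatable design document:

    # A hypothetical illustration of rules-driven design generation,
    # reusing select_pattern from the earlier sketch. The sizing rule,
    # field names, and thresholds are invented for illustration only.
    def generate_blueprint(name: str, peak_tps: int,
                           performance: int, cost: int, efficiency: int) -> dict:
        servers = max(2, -(-peak_tps // 500))  # ceil(peak_tps / 500), min 2 for HA
        return {
            "workload": name,
            "pattern": select_pattern(performance, cost, efficiency),
            "servers": servers,
            "monitoring": "enhanced" if performance >= 80 else "standard",
            "auto_failover": performance >= 80,  # only the critical tier
        }

    print(generate_blueprint("order-entry", peak_tps=4000,
                             performance=85, cost=5, efficiency=10))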

More Stories By James Houghton

James Houghton is Co-Founder & Chief Technology Officer of Adaptivity. In his CTO capacity Jim interacts with key technology providers to evolve capabilities and partnerships that enable Adaptivity to offer its complete SOIT, RTI, and Utility Computing solutions. In addition, he engages with key clients to ensure successful leverage of the ADIOS methodology.

Most recently, Houghton was the SVP Architecture & Strategy Executive for the infrastructure organization at Bank of America, where he drove legacy infrastructure transformation initiatives across 40+ data centers. Prior to that he was the Head of Wachovia’s Utility Product Management, where he drove the design, services, and offering for SOA and Utility Computing for the technology division of Wachovia’s Corporate & Investment Bank. He has also run leading-edge consulting practices at IBM Global Technology Services and Deloitte Consulting.
