Understanding the Impact of Your Workload on Your Cloud Infrastructure

Deploying dynamic and scalable websites

Enterprises are quickly realizing that their future success depends on their ability to adapt their business to the cloud. That realization, however, comes with new questions and concerns about executing an effective cloud-based strategy. The growth of the OpenStack community has made it possible for hosting providers and businesses to create or use Amazon-like public and private clouds, but it is clear that the cloud is not a one-size-fits-all solution. One prime factor that dictates the success of a cloud computing strategy is the particular workload an enterprise is tackling. From DevOps to rapidly deploying dynamic and scalable websites, an enterprise's workload should dictate its cloud architecture.

A given workload affects many elements of the cloud, particularly the architecture of the underlying infrastructure. Examining specific workload use cases makes clear how integral that architecture is to meeting workload requirements.

The first element to consider in the architecture of cloud infrastructure is computing power. The number and speed of compute nodes within a cloud configuration dictate how quickly processes can be executed. This matters most when assessing a workload: the computing power required to develop a web app pales in comparison to that required for Big Data analysis. Large-scale data analysis is well within the purview of a well-constructed cloud, but the architecture must be designed for it from the outset.
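
As a rough illustration, that gap can be expressed as a simple sizing exercise. The flavor names and resource figures below are illustrative assumptions, not recommendations:

    # Illustrative sizing: the same cloud must serve very different workloads.
    FLAVORS = {
        "web-app-dev": {"vcpus": 2,  "ram_gb": 4,   "nodes": 2},   # build/test boxes
        "big-data":    {"vcpus": 32, "ram_gb": 128, "nodes": 16},  # analysis cluster
    }

    def aggregate(workload):
        """Total vCPUs and RAM a workload class asks of the cloud."""
        f = FLAVORS[workload]
        return f["vcpus"] * f["nodes"], f["ram_gb"] * f["nodes"]

    for name in FLAVORS:
        vcpus, ram = aggregate(name)
        print(f"{name}: {vcpus} vCPUs, {ram} GB RAM")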

The next integral ingredient in a cloud's architecture is storage. There are several types of storage that vary in availability, resiliency and transactional performance. Amazon's Simple Storage Service (S3) provides a multi-tenant object storage environment, while block storage, like Amazon EBS, provides a persistent storage target. Typically, an enterprise architecture would require a multi-tier SAN architecture providing enough IOPS (input/output operations per second) for both the storage of the VMs and the transactional block storage. As flash storage has matured, it has become possible to collapse that typical storage architecture, running virtual machine operating systems and persistent transactional data on the same tier.
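
The difference between the two storage types is easiest to see in code. Below is a minimal sketch using the AWS boto3 SDK; the bucket name, instance ID and sizes are placeholders:

    import boto3

    # Object storage: write an object to an S3 bucket (multi-tenant, addressed by key).
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="reports/q1.csv", Body=b"id,value\n1,42\n")

    # Block storage: create a persistent EBS volume, wait for it to become
    # available, then attach it to an instance as an ordinary disk device.
    ec2 = boto3.client("ec2")
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",
                      Device="/dev/sdf")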

Another variable worth pointing out is data access speed. A system may have ample storage capacity, but the speed at which stored data can be accessed is its own factor in designing infrastructure for a particular workload.

The last vector for consideration is density. In many data centers space is readily available, but that is not always the case. Compact, energy-efficient hardware takes up less room in a data center, saving space and presumably cost. Dense hardware, however, tends to be more expensive, so the decision hinges on the cost per square foot versus the cost of the denser hardware. One must also consider power density per square foot, which varies widely from data center to data center. This determination has to be made case by case, based on each facility's costs and constraints. Less dense solutions also tend to be less power-efficient, adding another cost to the analysis.
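
A back-of-the-envelope comparison shows how that trade-off plays out. Every figure below is an illustrative assumption, not vendor data:

    # Annual cost of a 10-node deployment: dense vs. standard hardware.
    def annual_cost(hw_cost_per_node, nodes, sqft_per_node, cost_per_sqft_year,
                    watts_per_node, cost_per_kwh):
        hardware = hw_cost_per_node * nodes / 3            # 3-year amortization
        space    = nodes * sqft_per_node * cost_per_sqft_year
        power    = nodes * watts_per_node * 24 * 365 / 1000 * cost_per_kwh
        return hardware + space + power

    dense    = annual_cost(12000, 10, 0.5, 300, 450, 0.10)
    standard = annual_cost( 8000, 10, 2.0, 300, 600, 0.10)
    print(f"dense: ${dense:,.0f}/yr   standard: ${standard:,.0f}/yr")

With these particular numbers the standard gear still comes out ahead; shift the cost per square foot or the power rate and the answer flips, which is exactly why the determination is case by case.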

Dissecting DevOps
DevOps is a term that has gained considerable attention in recent years as enterprises acknowledge the interdependence of IT operations and software development teams. DevOps aficionados look to cloud technology as a means to more closely align the two groups' respective goals, which have tended to be fundamentally at odds.

Supporting DevOps means creating a cloud environment that allows developers to self-service launch the build and test virtual machines needed to create the artifacts used in a continuous delivery pipeline. Such a pipeline requires that the main code base (often called the trunk or mainline) stay constantly in the "green" state, executing with no fatal errors. One of the fundamental keys to creating that pipeline is rapidly rebuilding and unit testing any changed code. Some development shops rebuild on every code check-in by every developer, while others take a less aggressive approach and build every ten minutes or on the hour.
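
A minimal sketch of that rebuild-on-check-in loop follows. It polls a Git repository's trunk and reruns the build and unit tests whenever the head revision changes; the make targets and the polling interval are assumptions for illustration:

    import subprocess
    import time

    def head_revision(repo="."):
        """Return the current commit hash of the checked-out trunk."""
        out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=repo,
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def build_and_test(repo="."):
        """Rebuild and unit test; True means the trunk is still 'green'."""
        build = subprocess.run(["make", "build"], cwd=repo)
        tests = subprocess.run(["make", "test"], cwd=repo)
        return build.returncode == 0 and tests.returncode == 0

    last = None
    while True:
        rev = head_revision()
        if rev != last:  # a new check-in landed on trunk
            status = "green" if build_and_test() else "red"
            print(f"{rev[:8]}: pipeline is {status}")
            last = rev
        time.sleep(60)   # or poll every ten minutes / on the hour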

The success of a continuous build environment depends largely on a fast, well-orchestrated infrastructure. In this use case, an IT manager will seek out a cloud architecture that launches and kills virtual machines (VMs) quickly and includes highly accessible storage. Depending on the specifics, this could call for a cloud that delivers a large number of IOPS, so that VMs launch quickly and the workload runs without storage bottlenecks.
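
On an OpenStack cloud, that launch-and-kill cycle might look like the sketch below, written against the openstacksdk library. The cloud name, image, flavor and network are placeholders:

    import openstack

    conn = openstack.connect(cloud="build-cloud")

    # Launch a short-lived build VM and block until it is ACTIVE.
    server = conn.create_server(
        name="ci-builder-01",
        image="ubuntu-22.04",
        flavor="m1.large",
        network="private",
        wait=True,
    )
    print(f"{server.name} is up at {server.private_v4}")

    # ... run the build/test job on the VM ...

    # Kill the VM the moment the job ends so the capacity is freed.
    conn.delete_server(server.id, wait=True)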

Deploying Dynamic and Scalable Websites
There's arguably no greater beneficiary of cloud computing than a company that repeatedly launches similar websites. Take, for example, a media company that delivers entertainment content across its platforms. Critical to this company's success is delivering existing and new content through rapidly changing websites, powered by innovative applications that provide interactive experiences and build a loyal user base. For this type of workload, developers require automated provisioning and flexible storage and compute options, because different launches make different demands: a user-generated-content (UGC) contest demands greater storage, while an MMORPG requires a compute-intensive environment.

These requirements often vary in scope but are consistent in frequency, so it is vital to eliminate repetitive, time-consuming tasks such as installing and configuring commonly used website software like databases and web servers. Well-made templates can be reused, and when consistency is maintained automatically, system administrators can focus on higher-value tasks rather than repairs. Where other workloads may have a narrow scope, deploying dynamic and scalable websites efficiently requires elasticity and flexibility across the compute, storage and data access elements.
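
One lightweight way to capture that reuse is a parameterized provisioning template. The sketch below renders cloud-init user data for each new site launch; the package list and template variables are illustrative assumptions:

    from string import Template

    # One template, reused for every similar site launch.
    USER_DATA = Template("""#cloud-config
    packages: [nginx, mysql-server]
    runcmd:
      - systemctl enable --now nginx
      - mysqladmin create ${db_name}
    """)

    def render_site(db_name):
        """Render identical, repeatable user data for a new site."""
        return USER_DATA.substitute(db_name=db_name)

    print(render_site("contest_ugc"))  # same template, different launch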

Approaching High-Performance Computing: Animation
One interesting HPC application of cloud technologies is animation rendering. Over the years the animation industry has used various computer hardware and software technologies to automate steps in the production process. Because many of these steps require high-performance computing with significant CPU and IOPS capabilities, animation shops have often relied on purpose-built hardware and software systems for their peak capacity. With the advent of server virtualization, high-speed solid state drives (SSDs) and standards-based cloud platforms, animators are taking a closer look at the benefits of cloud technology. For these workloads to be efficient and effective in the cloud, high-powered computing must be coupled with high IOPS, since virtual machines must be launched and deprovisioned rapidly to serve short-lived but CPU-intensive tasks.
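
The shape of that workload, many short-lived CPU-bound tasks, can be sketched with a simple frame-rendering pool. Here render_frame is a hypothetical stand-in for a real renderer invocation:

    from multiprocessing import Pool

    def render_frame(frame):
        # Placeholder for a CPU-intensive render step; a real farm would launch
        # a VM (or container) per batch, run the renderer, then deprovision it.
        return f"frame_{frame:04d}.exr"

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per available core by default
            outputs = pool.map(render_frame, range(240))  # a 10-second shot at 24 fps
        print(f"rendered {len(outputs)} frames")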

Designing an infrastructure around a particular workload requires a comprehensive understanding of that workload's basic functions. While optimizing an infrastructure for a particular workload can present some up-front hurdles, the long-run efficiency and potential cost savings are significant, as managers can focus resources on the most impactful elements of their architecture.

More Stories By Christopher Aedo

Christopher Aedo is senior director of technical operations at Morphlabs, where he oversees the technology and operations side. He found his niche early in his career while helping a global accounting firm move its information systems from an IBM mainframe to a distributed network of Novell and SCO Unix servers. He is currently focused on making it easy for technology groups to move their infrastructure and applications from bare-metal or virtualized servers into public and private clouds.
