By Phil Whelan
May 4, 2014 08:00 AM EDT
David Rubinstein's post "Industry Watch: Be resilient as you PaaS" makes a very good point about underutilized hardware in IT data centers.
Millions of enterprise workloads remain in data centers, where servers are 30% to 40% underutilized, and that's if they're virtualized. If not, they're only using 5% to 7% of capacity.
The reason for this?
Take, for example, servers that were spun up for a project two years ago and never decommissioned - just sitting there, waiting for a new workload that will never come. And because the cost of blades and racks has fallen, cheap hardware has led to a kind of data center sprawl.
So is IT buying new hardware for each new project and not recycling? Is this because projects do not clearly die? Do we need "time of death" announcements mandated on IT projects?
Late last year, the US Government Accountability Office reported that the federal government spends 70% of its IT budget on the care of legacy systems. That's $56 billion! How many of those legacy systems are rusty old projects with low utilization that are over-consuming resources?
Who wants those old servers? They might only be 2 years "old," but each new project has its own requirements and trying to retrofit old hardware to shiny new projects is probably undesirable. A project has a budget and specifications. A project needs X number of machines, with Y gigabytes of RAM and numerous other requirements. Can second-hand hardware fit the bill? If it can, can we find enough of it? Nobody wants a Frankenstein's monster of a cluster or the risk that it would be difficult to grow the cluster in the future. Better to go with new servers.
IaaS provides a path to a solution. It makes it easier to provision and decommission resources, and virtual machines can be sized to fit the task. Even if specific new hardware is required to meet the SLA (Service Level Agreement) demands of a new project, it can later be recycled back into the pool of available resources.
But does IaaS go far enough?
IaaS provides infrastructure to be used by projects. But as David Rubinstein points out, even virtualized machines are still 30% to 40% underutilized. This is because virtual machines are provisioned as a best guess of what might be required by the software intended to run on them. The business does the math on the traffic it hopes to receive, adds some padding to be safe, and passes the numbers to Operations, who provision the machines. Once a machine is provisioned, those resources are locked in.
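As a back-of-the-envelope illustration of how that padding locks in idle capacity, consider the following sketch. The traffic figures, cores-per-request ratio and padding factor are all hypothetical, chosen only to show the arithmetic:

```python
# Illustrative only: how best-guess provisioning locks in idle capacity.
# All numbers below are hypothetical, not taken from any real workload.

def provisioned_capacity(expected_peak_rps, rps_per_core, padding=0.5):
    """Cores the business asks Operations for: expected peak plus safety padding."""
    return expected_peak_rps / rps_per_core * (1 + padding)

def utilization(average_rps, expected_peak_rps, rps_per_core, padding=0.5):
    """Fraction of the provisioned cores that the average load actually uses."""
    cores = provisioned_capacity(expected_peak_rps, rps_per_core, padding)
    return (average_rps / rps_per_core) / cores

# The business hopes for 3,000 req/s at peak, each core handles 100 req/s,
# but average traffic turns out to be 900 req/s.
print(round(utilization(900, 3000, 100), 2))  # 0.2
```

With these (made-up) numbers the machines sit at 20% utilization, which makes the 30-40% figure quoted above look almost generous.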
Virtualized or not, adding servers to or removing them from a cluster can be logistically difficult if the application is not designed with scale in mind. When it comes to legacy IT, it is probably better to leave things as they are than to invest more resources, and take on more risk, in a project that is receiving no engineering attention.
While IaaS is an essential building block of the cloud, it is not the solution. Pioneers in the age of DevOps have shifted the mindset from pets (knowing every machine by its cute name) to cattle (anonymous machines in named clusters). But even on the other side of this transition we are still focused on machines and operating systems.
Configuration management helps us to wrangle and make sense of our cattle. It whips our machines into some kind of uniform shape with the intention of moving them all in the same direction, but not without considerable sweat and tears. Snowflake servers that did not receive that last vital update, for reasons unknown, bring disease to the herd.
In Andrew C. Oliver's InfoWorld post "The platform-as-a-service winner is ... Puppet", he makes a good point that IT organizations are also "not a beautiful or unique snowflake".
Whatever you're doing with your IT infrastructure, someone else is probably doing the exact same thing. If what you're doing is so damn unique, then you're probably adding needless layers of complexity and you should stop.
Is configuration management the answer? It enables every organization to build beautiful unique snowflakes, but all the business actually wants is an igloo.
Servers are getting thinner. There is a shift from caring about the overhead of operating systems to focusing on the processes themselves. CoreOS is a great example of this, and there is also boot2docker on the development side. We will no doubt see many more flavors of this in the next few years.
If we encapsulate the complexity of software applications and services in containers then the host operating system becomes much simpler, less of a concern and more standardized. If we can find an elegant way to spread the containers across our cluster then we only need to provision a cluster of thin standardized servers to house them.
Why Containers? Why not virtual machines or hardware?
Filling a cluster with virtual machines, where each virtual machine is dedicated to a single task, is wasteful. It's like trying to utilize the space in a jar by filling it with marbles, when we could be filling it with sand. Depending on the relationship between your hardware and your virtual machines, it might even be like filling it with oranges.
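The jar analogy maps onto a classic bin-packing problem. Here is a small first-fit sketch of it - the host size and per-task memory numbers are made up, and no real scheduler is this naive, but it shows why many small containers pack tighter than one fixed-size VM per task:

```python
# A sketch of why fine-grained containers pack better than one-VM-per-task.
# First-fit-decreasing bin packing: hosts are jars, workloads are what we pour in.
# All names and numbers are illustrative, not from any real scheduler.

def first_fit(workloads_gb, host_gb):
    """Return how many hosts a first-fit-decreasing pass needs."""
    hosts = []  # free capacity remaining on each host
    for w in sorted(workloads_gb, reverse=True):
        for i, free in enumerate(hosts):
            if w <= free:
                hosts[i] -= w  # workload fits on an existing host
                break
        else:
            hosts.append(host_gb - w)  # open a new host
    return len(hosts)

workloads = [0.5, 1, 1.5, 2, 2, 3, 4, 6]  # per-task memory needs in GB
# Marbles: every task gets a fixed 8 GB VM, packed onto 16 GB hosts.
print(first_fit([8] * len(workloads), 16))  # 4
# Sand: containers sized to each task, packed onto the same 16 GB hosts.
print(first_fit(workloads, 16))             # 2
```

Same eight workloads, same hosts: the fixed-size-VM route needs twice as many machines as the right-sized-container route.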
Containers are also portable - and not just in a deployment sense. In a recent talk on "Google Compute Engine and Docker", Marc Cohen from Google spoke about how they migrate running GAE instances from one datacenter to another, while only seeing a flicker of transitional downtime. vSphere also has similar tools for doing this with virtual machines. Surely, it is only a matter of time before we see this commonly available with Linux containers.
We need clusters; not machines, not operating systems. We need clusters that support containers. Maybe we need "Cluster-as-a-Service." Actually, maybe we don't need any more "-as-a-Service" acronyms.
Just as we have stopped caring about the unique names of individual servers, we should stop caring about the details of a cluster. When I have a machine with 16GB of RAM, do I need to care whether it's 2x8GB DIMMs or 4x4GB? No, I only need to know that this machine gives me "16GB of RAM". Eventually we will view clusters in the same way.
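That aggregate view can be sketched in a few lines. The node names and sizes are invented, and the capacity check deliberately ignores fragmentation across nodes - just as "16GB of RAM" ignores the DIMM layout:

```python
# Sketch: viewing a cluster the way we view a machine's RAM - as one capacity
# figure, not a layout. Node names and sizes are made up for illustration.

nodes = {"node-a": 16, "node-b": 32, "node-c": 16}  # GB of RAM per node

def total_memory(nodes):
    """The one number a user of the cluster should need to know."""
    return sum(nodes.values())

def fits(nodes, request_gb):
    """Capacity check against the pool, ignoring which nodes supply it."""
    return request_gb <= total_memory(nodes)

print(total_memory(nodes))  # 64
print(fits(nodes, 48))      # True
```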
Now that we can fill jars with sand instead of oranges, we can be less particular about the size and shape of the jars we choose to build our cluster. Throwing away old jars is easier when we can pour the sand into a new jar.
I believe we always should start with "why." Why am I writing a script to run continuous integration and ensure I always have the latest version of HAProxy installed on all the frontend servers? Why are we buying more RAM and installing Memcached servers?
The "why" is ultimately a business reason - which is a long way from the command-line prompt.
Todd Underwood's excellent talk on "PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability" highlights the SLA as the contract made with the business, ensuring that the Operations team delivers the throughput, response time and uptime that the business expects. Beyond that, the team has free rein in how it meets the SLA.
Too many SysAdmins and Operations Engineers are stuck in the mindset of machines and operating systems. They have spent their careers thinking this way and honing their skills. They focus too much on the "what" and the "how", not on the "why".
At some level these skills are vital. The harder you push, the more likely it is that you will find a problem further down the stack and have to roll up your sleeves and get dirty.
Operations teams are responsible for every level of the stack, from the metal upwards. Even in the age of "the cloud" and outsourcing, an Operations team that does not feel ultimately responsible for its own stack will have issues at some point. Netflix, for example, runs on Amazon's cloud infrastructure and may actually understand it better than Amazon does. They take responsibility for their stack very seriously, and because of this they are able to provide an amazingly resilient service at incredible scale.
Focusing solely on machines and operating systems does not scale, and too many IT departments find it difficult to transcend this. Netflix is a good example here too. When you have tens of thousands of machines, you are not going to worry about keeping them all in sync, or about nursing individual machines that are having issues, unless the problem is widespread. Use virtualization, create golden images, and if a machine is limping - shoot it. This is how Netflix operates. And to ensure that their sharpshooters are always at the top of their game, Netflix uses tools like Chaos Monkey, purposely putting wolves amongst the sheep.
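The "shoot it" approach boils down to replace, don't repair. A toy version of that loop might look like this - the image name, instance names and health check are all hypothetical, and a real setup would call a cloud provider's API rather than shuffle strings:

```python
# A toy version of the "if a machine is limping, shoot it" loop: rather than
# debugging a sick instance, replace it with a fresh clone of the golden image.
# The image name and fleet below are hypothetical, not any real cloud API.

import itertools

GOLDEN_IMAGE = "ami-frontend-v42"  # hypothetical golden image name

def replace_if_limping(fleet, is_healthy, counter=itertools.count(1)):
    """Return a new fleet with unhealthy instances swapped for fresh clones."""
    new_fleet = []
    for instance in fleet:
        if is_healthy(instance):
            new_fleet.append(instance)
        else:
            # Don't fix it in place -- terminate and relaunch from the image.
            new_fleet.append(f"{GOLDEN_IMAGE}-replacement-{next(counter)}")
    return new_fleet

fleet = ["web-1", "web-2", "web-3"]
# Suppose web-2 fails its health check:
print(replace_if_limping(fleet, lambda i: i != "web-2"))
```

Chaos Monkey then closes the loop by randomly making instances "limp" in production, so the replacement path is exercised constantly rather than only during real failures.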
DevOps is a movement that aims to liberate IT from its legacy mindset. It steps back to look at the bigger picture: to view the business's needs and address those that span the currently siloed Dev and Ops teams. This may come through fancy new orchestration and CI tools, or through organizational changes.
Tools and infrastructure are evolving so quickly that the best tool today will not be the best tool tomorrow. It is difficult to keep up.
Todd Underwood said that Operations Engineers, or rather Site Reliability Engineers, at Google are always working just outside of their own understanding. This is the best way to keep moving forward. I take from this that if you fully understand the tools you are using, you are probably moving too slowly and falling behind. Just do not move so quickly that you jeopardize operational safety.
As long as we focus on "why", we can keep building our stack around that purpose instead of around our favorite naming convention, our favorite operating system and favorite toolset.
PaaS is currently the best tool for orchestrating the container layer above the cluster layer. It manages the sand in your jars. ActiveState's PaaS solution, Stackato, also provides a way to manage the human side of your infrastructure. By integrating with your LDAP server and providing organizational and social features, it lets you see your applications as applications, rather than as infrastructure.
As Troy Topnik points out in a recent post, IaaS is not required to run Stackato, although an IaaS will make managing your cluster easier as you grow.
But will an IaaS help with managing your resources at the application level? No. Will it help identify wasted resources? No. Will it put you directly in touch with the owners of the applications? No. But PaaS will.
The reasons for waste in IT organizations are many. One reason is the way resources are provisioned for projects. The boundaries of a project are defined too far down the stack, even though a project may simply be defined by a series of processes, message queues and datastores. Another reason for waste is bad utilization of resources from the outset - not building the infrastructure where resources can be shared. A third reason for waste is lack of visibility into where waste is occurring.
Lack of visibility comes from not being able to see and reason about the relationships between machines and projects. Machine responsibilities may be cataloged by IT, but will that catalog be visible to everyone involved? Without PaaS, mapping machines to processes and projects remains a documentation task.
If redundant applications are not visible to the entire organization, then those accountable for announcing "time of death" are less likely to make that call. Old projects that should be end-of-lifed will continue to over-consume their allocated resources - resources that were allocated in the hope of what the project might one day become.
Nobody likes waste. Resource consolidation through newer architectures and solutions like PaaS is a way to tackle this. PaaS brings more visibility to your IT infrastructure. It exposes resources at the application level. It spreads resources evenly across your cluster in a way that enables you to scale up and down with ease and utilizes all your hardware efficiently.
Source: ActiveState, where this post was originally published.