Crunching the Numbers in Search of a Greener Cloud

All of that hardware must be powered and cooled, and all of those offices must be lit

Although sometimes portrayed as a big computer in the sky, the reality of cloud computing is far more mundane. Clouds run on physical hardware, located in data centres, connected to one another and to their customers via high-speed networks. All of that hardware must be powered and cooled, and all of those offices must be lit. Whilst many data centre operators continue to make welcome strides toward increasing the efficiency of their buildings, machines and processes, these advances remain a drop in the ocean next to the environmental implications of choices made about power source. With access to good information, might it be possible for users of the cloud to make choices that save themselves money, whilst at the same time saving (a bit of) the planet?

Greenpeace has consistently drawn attention to the importance of energy choices in evaluating the environmental credentials of data centres, with 2011's How Dirty Is Your Data? report continuing to polarise arguments after more than a year. The most efficient modern data centres deploy an impressive arsenal of tricks to save energy (and therefore money), and to burnish their green credentials. They use the most efficient modern processors, heat offices with waste server heat, cool servers with water from the toilets and the sea, or keep air conditioning costs low by opening the building when it's cool outside. But analysis from London's Mastodon C suggests that these efforts, although laudable, typically trim only a few percentage points from a data centre's environmental impact. According to Mastodon C CEO and co-founder Francine Bennett, a whopping 61% of a data centre's environmental footprint can be attributed to choosing dirty power sources like coal. Efficient data centre design is to be welcomed, but we shouldn't make the mistake of assuming that efficient data centres are necessarily green data centres. The corollary is also true (green data centres are not necessarily efficient ones), but if the figures are to be believed it has less serious consequences for the planet.

Dirty – and finite – power sources such as oil, coal, and gas remain the mainstay of power generation in most countries. According to figures from the Energy Information Administration in the United States, 37% of US energy consumption in 2010 was from 'oil and other liquids,' 21% was from coal, 9% was nuclear, 25% was gas, 1% was liquid biofuels, and only 7% was from renewables. More recent data suggests little change in the US's spread of energy sources, although other countries are less reliant on coal. 2009 statistics (page 7) from the International Energy Agency suggest that coal accounts for 19.7% of consumption amongst OECD countries. More worryingly, although coal accounts for only 21% of consumption in the US, it has a disproportionate impact upon carbon emissions (a metric for which the US sits at or near the top of the table). Looking at 2010's figures for carbon dioxide emissions directly attributable to power generation, coal's 21% contribution to the consumption figure is responsible for 80% of the emissions total. By 2012 that had improved a little, to a mere 78%. Every small move away from coal has a large downstream effect on carbon emissions.

Energy-related carbon dioxide emissions attributable to generation of electricity

So data centres should just stop using coal then, right? That's certainly what Greenpeace wants. But the picture is, of course, not quite that simple. Data centres require significant up-front investment, often years before the first customer pays anyone any money. Grants, incentives, and inward investment programmes may all lead data centre builders to choose otherwise odd locations for their new facilities. Data centre operators need power that is predictable, reliable, and affordable. They often simply draw most of that power from the utility grid, which will get its energy from a variety of suppliers. Offsets from planting a few trees or selling electricity generated by the windmills on your roof do nothing significant to compensate for the megawatts you're sucking down from your closest coal-fired power station. As Amazon's James Hamilton noted last week, data centres often want or need to be situated within easy reach of population centres. Bandwidth matters, so much so that it sometimes makes business sense to pay for cooling a data centre in a desert. Renewables such as solar, wind, and biofuels are good for carbon emissions, but can have other less welcome consequences as carbon-capturing forests and food-producing farmland are cleared to make way for solar arrays, windmills and oil palm plantations. Geothermal power is abundant, clean and almost free, but often a long way from prospective customers, and tainted by (unfair) association with geological instability. No one wants their data centre engulfed by a lava flow.

Data centres are big investments, amortised over many years. Their locations are selected for a whole host of reasons, of which the greenness of the electricity supply is only one. Some data centre providers will make much of their greenness, and may even see a business opportunity to charge a premium price that helps their customers feel good about themselves. Others say as little as possible, either because they don’t think we’ll like the truth or because (they say) no one is asking them the question.

But many users of these data centres have more room for manoeuvre. They have a choice, and maybe they just need enough information to let them exercise that choice wisely.

Some jobs will always need to be kept close, down the fattest, shortest, fastest pipe you can find. In low latency trading, for example, the speed of light presents a bottleneck. Other jobs might need to run in (or avoid) specific geographies. European data protection rules, financial and healthcare regulations in many countries, and most governments’ sensitivity about clandestine snooping on their activities are all reasons that have been used to place data in one place rather than another. A third class of jobs might need to run on one cloud rather than another. They’re optimised to utilise the features of a particular cloud provider, or they require an operating system or libraries or granular controls that only certain providers support. But even in each of these cases, there is often an element of choice. More than one data centre is easily accessible to a Wall Street trader. More than one cloud provider satisfies US/European Safe Harbor Provisions. Almost every significant cloud infrastructure provider offers mechanisms to choose one of their data centres over another. And then there’s the (far larger?) class of jobs that could run anywhere they can find a Windows or Linux virtual machine. For them, the choices are many and varied. And in a big data context, where a single job might spin up thousands of machines, those choices have real – measurable – environmental implications.

CO2 emissions vary by location… and time of day. Image © Mastodon C.

And that's where some of the work being done by Mastodon C comes in. By gathering real data on climate (responsible for around 20% of a data centre's environmental footprint), power source (up to 61%) and server power usage, and adding educated estimates regarding efficiency initiatives inside the data centre, the company can tell you the greenest place to run a compute job right now. Unseasonably cold in Singapore this week? Send your jobs to Asia. Sun visits Dublin for the day? Maybe avoid Ireland until the inevitable happens.
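To make the kind of arithmetic involved concrete, here is a simplified sketch, not Mastodon C's actual model: the regions, grid intensity figures and cooling overheads below are invented placeholders. The idea is simply to rank locations by combining the carbon intensity of the local grid with a climate-driven cooling overhead and the job's estimated power draw.

```python
# Illustrative sketch only: all figures below are invented placeholders,
# not Mastodon C's data or model.
GRID_INTENSITY_KG_PER_KWH = {   # assumed carbon intensity of the local grid
    "us-east (Virginia)": 0.55,
    "us-west (Oregon)": 0.15,
    "eu-west (Dublin)": 0.45,
    "ap-southeast (Singapore)": 0.50,
}

COOLING_OVERHEAD = {            # assumed extra power for cooling, driven by climate
    "us-east (Virginia)": 1.5,
    "us-west (Oregon)": 1.2,
    "eu-west (Dublin)": 1.15,
    "ap-southeast (Singapore)": 1.7,
}

def job_emissions_kg(region, server_kw=0.3, hours=10, servers=100):
    """Rough CO2 estimate for a batch job: draw x time x servers x cooling overhead x grid intensity."""
    energy_kwh = server_kw * hours * servers * COOLING_OVERHEAD[region]
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[region]

greenest = min(GRID_INTENSITY_KG_PER_KWH, key=job_emissions_kg)
print(greenest, round(job_emissions_kg(greenest), 1), "kg CO2")
```

Even with made-up numbers, the shape of the result matches the argument above: the grid's power source dominates, and the same job can carry a several-fold difference in emissions depending on where it runs.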

Cloud developers are creatures of habit. They’ll take default settings. They’ll send jobs to the same Region they used last time. And all of that means they tend to use Amazon… and they tend to use Amazon’s US-EAST region, in Virginia.

Mastodon C offers a web tool to display current figures on the CO2 emissions attributable to servers in different data centres around the world. Today, the tool shows figures for Iceland’s Greenqloud and IaaS giant Amazon, but even that offers some useful insight. As Francine Bennett notes, the vast majority (possibly 70%) of Amazon jobs run in the company’s Virginia data centre. When Virginia’s cool (which it rarely is during the summer months), this data centre’s not that bad, but when temperatures begin to rise only sun-drenched Dublin (erm…) and monsoon-gripped Singapore score more poorly on the emissions scale. Amazon’s Oregon data centre costs exactly the same as Virginia, but emissions are typically far lower. So if latency isn’t a principal concern (and it often isn’t for a big data job that’s left to get on with churning through a pile of data in an S3 bucket), and your data is already going to be processed in the United States, why not send it to green Oregon by default, instead of soot-stained Virginia?
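Overriding that default is, in practice, a one-line change. A minimal sketch using boto3 follows; the AMI ID is a placeholder, and the region names are the modern AWS identifiers for Oregon and Virginia:

```python
import boto3

# Default habit: us-east-1 (Virginia). Greener alternative at the same price: us-west-2 (Oregon).
ec2 = boto3.client("ec2", region_name="us-west-2")   # pin the job to Oregon explicitly

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI ID for the job's machine image
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=10,                 # e.g. a batch of workers chewing through data in S3
)
print([i["InstanceId"] for i in response["Instances"]])
```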

Amazon's most expensive facility, in Brazil, is even greener than Oregon, but the price puts a lot of potential customers off. So much so that spot prices for the site are often remarkably low. So if your compute jobs are amenable to running (and being killed from time to time) on a spot instance, São Paulo is also worth a look.
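For workloads that can tolerate interruption, a spot request in the São Paulo region might look something like this hedged sketch, again with boto3; the AMI ID and bid price are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="sa-east-1")   # Sao Paulo

response = ec2.request_spot_instances(
    SpotPrice="0.10",            # placeholder maximum bid, in USD per hour
    InstanceCount=5,
    Type="one-time",             # fine for batch jobs that can simply be re-run if interrupted
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",   # placeholder AMI ID
        "InstanceType": "m5.large",
    },
)
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```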

Greenqloud and AWS, of course, are only part of the cloud infrastructure picture. Bennett says that the company is keen to include similar data for other significant cloud providers such as Rackspace and Microsoft. Rather than predict data centre efficiency figures as they have done for Amazon, Bennett says they would prefer to work with the cloud providers directly, and to incorporate actual measurements from inside the data centres into the model.

Mastodon C is also about to release an API to the model behind the pretty UI, which developers (or cloud management companies like RightScale) can then incorporate into their own code. Why couldn't a big data job simply place itself in the greenest location at run-time?
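Something along those lines is easy to imagine once such an API exists. The endpoint, response format and region list below are entirely hypothetical; this is a sketch of the idea, not Mastodon C's actual API:

```python
import requests
import boto3

# Hypothetical endpoint and response shape; the real API may look nothing like this.
CARBON_API = "https://api.example.com/v1/emissions"   # placeholder URL

def greenest_region(candidates):
    """Ask a (hypothetical) carbon API for current gCO2/kWh per region and pick the lowest."""
    scores = requests.get(CARBON_API, params={"regions": ",".join(candidates)}).json()
    return min(candidates, key=lambda r: scores[r])

region = greenest_region(["us-east-1", "us-west-2", "eu-west-1", "sa-east-1"])
ec2 = boto3.client("ec2", region_name=region)
# ...launch the big data job in whichever region is greenest right now...
```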

The environment is not the only consideration in deciding where to send compute jobs. But if tools like Mastodon C's can shine an accurate light on the financial and environmental costs of different data centres, then it seems inevitable that people will begin to pay attention. Not the corporate CIO in his big BMW, perhaps, at least not immediately. But the hipster founders of the next Facebook, the next Zynga, and the next Google, with their Teslas and Nests? Surely they'd be quick to embrace the means to get their computing done just as fast, just as cheaply, but greener?

Finally, there’s the subtext hidden between all the graphs and statistics that Mastodon C can show. Carbon emissions from data centres fluctuate with oil prices, the weather, and more. And those fluctuations mean that the price a data centre owner pays to run a given server for a given time fluctuates too. But, as a customer, you don’t see those price fluctuations. You pay your $0.64 to run a virtual machine in Amazon’s Virginia data centre, regardless of whether they’ve had to turn the aircon on or not. It’s 33°C there as I type, so they probably have.

At what point – if ever – would a data centre provider consider reflecting some of this variation in the actual price they charge? Would it be a transparent, fair, and honest way to pass on their true costs, or an unpredictable nightmare that would make any sort of long-term planning impossible?

You often have a choice about where you do your computing. Habit and laziness perhaps mean you don’t always exercise that choice, but maybe a visit to Mastodon C’s web dashboard will be enough to make you place your next cloud job somewhere other than the default.

What do you think? Are carbon footprints and temperature graphs and the rest something that cloud customers can and should concern themselves with? Do our small actions matter, or is it easier to just leave all of this to the people who run big data centres?

Image of Nesjavellir by Flickr user Lydur Skulason

About the Author

Paul Miller works at the interface between the worlds of Cloud Computing and the Semantic Web, providing the insights that enable you to exploit the next wave as we approach the World Wide Database.

He blogs at www.cloudofdata.com.
