Understanding Windows Azure

Part 2: A look inside the Windows Azure datacenters

To understand Windows Azure and the Azure Services Platform, it's necessary to understand how Microsoft's datacenters work. This article provides an overview of how Microsoft designs its datacenters and why the Generation 4 Datacenters are so revolutionary.

The Building of Datacenters
Microsoft has been building datacenters for a long time. One of the best-known services Microsoft offers is Windows Update, which delivers updates all over the world as part of a content delivery network. But this is not the only product Microsoft's datacenters are known for: others include Windows Live Messenger, Hotmail and Windows Live ID. Windows Live Messenger is one of the largest instant-messaging services, and Hotmail is one of the most widely used e-mail services. With Windows Live ID, Microsoft authenticates millions of users every day for Hotmail, Messenger and numerous other services. As you can see, Microsoft has plenty of experience building datacenters, but until Windows Azure it never sold that capacity as a product.

Microsoft's G4 - Generation 4 - Datacenters
Microsoft Research did a great job of improving Microsoft's datacenters, especially how they are built. Microsoft calls this Generation 4, or G4. These datacenters have an industrial design: components are standardized, which lowers costs and lets vendors work from templates when designing servers for Microsoft. Generation 4 Datacenters are essentially built in containers - the same shipping containers used for freight. This design has major advantages. Imagine a datacenter needs to be relocated: Microsoft would only need a few trucks and a piece of property, and the relocation would be almost done. Another advantage is that server vendors such as HP or Dell know exactly what the server racks should look like when fitting them into a container. If a datacenter needs to grow, additional containers are simply added to the existing ones. In addition, Microsoft standardized the tools for the cooling system so that local maintenance workers can easily be trained on them. It's important to note that a Generation 4 Datacenter isn't merely a containerized server room: Microsoft improves the entire life cycle of how the datacenter is built and operated, which brings additional benefits such as faster time-to-market and reduced costs.

How Microsoft Datacenters Help Protect the Environment
The term "Green IT" has been around for a while. Microsoft takes it seriously and tries to minimize the energy consumption of its datacenters - not only to lower energy and cooling costs but also to protect the environment. With the Generation 4 Datacenters, Microsoft tries to build the containers from environmentally friendly materials and to take advantage of "ambient cooling." The latter reduces the amount of energy that must be spent cooling the server systems by exploiting the datacenter's surroundings, such as outside air. There are a couple of best practices and articles available on what Microsoft does to build environmentally friendly datacenters; I have included some links at the end of the article.
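Datacenter cooling efficiency is commonly summarized with the Power Usage Effectiveness (PUE) metric: total facility power divided by IT equipment power. The article doesn't quote Microsoft's figures, so the numbers below are purely illustrative; a minimal sketch:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt goes to the servers);
    traditional datacenters often sit near 2.0, meaning cooling and
    power distribution consume as much energy as the IT load itself.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers for illustration only (not Microsoft's data)
print(pue(2000, 1000))  # 2.0  - half the power is overhead
print(pue(1250, 1000))  # 1.25 - ambient cooling shrinks the overhead
```

Ambient ("free") cooling improves PUE by letting outside air do work that chillers would otherwise consume electricity to perform.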

For an overview of Microsoft's datacenter design, there is a video that explains how Generation 4 Datacenters are built.

Security in Microsoft's Datacenters
Microsoft has a long tradition of building datacenters and operating systems. For decades, Microsoft has had to face hackers, viruses and other malware attacking its operating systems. Perhaps more than other vendors, Microsoft learned from these attacks and built a comprehensive approach to security. The document I refer to at the end of this article describes Microsoft's strategy for a safe cloud computing environment. Microsoft established an online services security and compliance team that focuses on implementing security in its applications and platforms. Microsoft's key assets for a safe and secure cloud computing environment are its commitment to trustworthy computing and to privacy: Microsoft works with a "privacy by default" approach.

To secure its datacenters, Microsoft holds datacenter security certifications from various organizations such as the ISO/IEC and the British Standards Institution. Furthermore, Microsoft uses the ISO/IEC 27001:2005 framework for security, which is built around the four steps "Plan, Do, Check, Act."

If you want to go deeper into this topic, I recommend reading "Securing Microsoft's Cloud Infrastructure."

What Happens with the Virtual Machines?
Figure 1 explains exactly what goes on in a Windows Azure datacenter. I found this information on David Lemphers's blog, where he gave an overview of what happens in the datacenter. First, the server starts and downloads a maintenance OS. This OS talks to a service called the "Fabric Controller," which is in charge of overall platform management; it instructs the server to create a host partition with a host VM. Once this is done, the server restarts and boots the host VM natively. The host VM is configured to run in the datacenter and to communicate securely with other VMs. The services we use don't run in the host VM; instead, one or more guest VMs run within the host VM. Since the introduction of the VM Role, every guest VM holds a diff store that records the changes made to the virtual machine; the standard image itself is never modified. Each host VM can contain several guest VMs.
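The diff-store behavior described above is essentially copy-on-write: reads fall through to the shared standard image unless a VM has written its own version, and writes never touch the base. A minimal sketch of that idea in Python (the class and method names are my own illustration, not Azure APIs):

```python
class GuestVM:
    """Toy model of a guest VM with a diff store (copy-on-write).

    The base image is shared between VMs and never modified; every
    write lands in the per-VM diff store, and reads check the diff
    store before falling back to the base image.
    """

    def __init__(self, base_image):
        self._base = base_image   # shared, read-only standard image
        self._diff = {}           # this VM's changes only

    def read(self, path):
        if path in self._diff:
            return self._diff[path]
        return self._base[path]

    def write(self, path, data):
        self._diff[path] = data   # base image stays untouched


base = {"/etc/config": "default"}
vm1 = GuestVM(base)
vm2 = GuestVM(base)

vm1.write("/etc/config", "custom")
print(vm1.read("/etc/config"))    # custom  (from vm1's diff store)
print(vm2.read("/etc/config"))    # default (shared base unchanged)
print(base["/etc/config"])        # default (image never modified)
```

Because each guest VM only stores its delta, many VMs can share one standard image, which keeps provisioning fast and storage usage low.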

Resources

•   •   •

This article is part of the Windows Azure series on Cloud Computing Journal. The series was originally posted on Codefest.at, the official blog of the Developer and Platform Group at Microsoft Austria. You can see the original series here.

More Stories By Mario Meir-Huber

Mario Meir-Huber studied Information Systems at the University of Linz. He worked in the IT sector for several years before founding CodeForce, an IT consulting and services company, together with Andreas Aschauer. Since the advent of Cloud Computing, he has been passionate about the technology. He speaks about Cloud Computing at international events and conferences and writes for industry-leading magazines on the subject. He is a Cloud Computing expert in various independent IT organizations and has written a book covering all topics of the Cloud. You can follow Mario on Twitter (@mario_mh) or read his blog at http://cloudvane.wordpress.com.
