



Windows Server 2012 – New Advanced Features

In this article I would like to share the new features of Windows Server 2012 that particularly grabbed my attention. It's not a full list of the new features, which you can find on Microsoft's official site; it's more a summary of the more advanced and intriguing ones.

Live migrations

Windows Server 2008 R2 supported live migration, but only when the virtual hard disk stayed in the same location, i.e., on shared storage such as a SAN. What Windows Server 2012 brings to the scene is the ability to move a virtual machine outside a cluster environment to any other Hyper-V host, and even to move several machines at the same time. All you need is a shared folder accessible from both locations, and you can then move a virtual machine's storage (storage migration) to a new location. Windows Server 2012 even offers a "shared nothing" live migration: migrating a running virtual machine from one host to another even when the two hosts share no storage at all; only a network connection between them is required.

Minimum bandwidth

In a typical virtualization infrastructure, multiple virtual machines share the same physical network card. Under heavy load, one virtual machine can monopolize the bandwidth, leaving too little for the rest. In Windows Server 2008 R2 you could set a maximum bandwidth for each virtual machine, so that none could consume more than its allocation even when it needed to. That was inefficient, however, whenever the other virtual machines didn't actually need the leftover bandwidth. Setting a minimum bandwidth in Windows Server 2012 lets you specify how much bandwidth each virtual machine needs in order to function, and these constraints are enforced only when virtual machines' bandwidth needs conflict. If there is free bandwidth, any virtual machine may use it until other virtual machines that are below their minimum bandwidth need it.
Let's say we have a 1 Gigabit Ethernet card. We set the minimum bandwidths for Virtual Machine (VM) 1, VM2, and VM3 to 500 Mbps, 300 Mbps, and 200 Mbps respectively (the sum can't exceed the card's total bandwidth). In a quiet moment for VM2 and VM3, VM1 uses 700 Mbps of the available bandwidth while VM2 and VM3 use 100 Mbps each. Then VM2 starts processing a transaction and needs all the bandwidth it can get. VM2 first occupies the free 100 Mbps, but because it still needs more and is below its 300 Mbps minimum, VM1 (which is above its own minimum) has to give VM2 another 100 Mbps.
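The arbitration described above can be sketched as a small allocator. This is a toy model of the idea, not Hyper-V's actual QoS algorithm; the function name and the two-phase scheme are my own simplification:

```python
def allocate_bandwidth(capacity, minimums, demands):
    """Toy arbitration: under contention every VM is guaranteed its minimum;
    spare capacity goes to VMs that still want more, pro rata."""
    # Phase 1: each VM gets the smaller of its demand and its guaranteed minimum.
    alloc = {vm: min(demands[vm], minimums[vm]) for vm in demands}
    spare = capacity - sum(alloc.values())
    # Phase 2: split the spare capacity among VMs whose demand is not yet met.
    unmet = {vm: demands[vm] - alloc[vm] for vm in demands if demands[vm] > alloc[vm]}
    total_unmet = sum(unmet.values())
    for vm, want in unmet.items():
        alloc[vm] += min(want, spare * want // total_unmet)
    return alloc

minimums = {"VM1": 500, "VM2": 300, "VM3": 200}  # sums to the 1000 Mbps NIC
# VM2's transaction arrives: it now demands its full 300 Mbps minimum,
# so VM1 must fall back from 700 to 600 Mbps.
print(allocate_bandwidth(1000, minimums, {"VM1": 700, "VM2": 300, "VM3": 100}))
```

Running the example reproduces the article's numbers: VM1 ends up with 600 Mbps, VM2 with its guaranteed 300 Mbps, and VM3 keeps its 100 Mbps.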

Network virtualization

Network virtualization allows you to run multiple virtual networks, possibly with identical IP address schemes, on top of the same physical network. It is especially useful for cloud service providers, but it can also serve ordinary businesses, for example when HR or payroll traffic must be completely separated from all other traffic. It also lets you place virtual machines wherever you need them, regardless of the physical network, even in the cloud. To make this possible, each virtual machine has two addresses for each network adapter. One, the Customer Address, is used for communication with the other virtual machines and hosts in its virtual network. The other, the Provider Address, is used only for communication on the physical network. Because each client or department gets its own address mappings, the provider knows which traffic belongs to which client or department, and each tenant's traffic is completely isolated from all other traffic on the physical network.
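Conceptually, the host keeps a policy table mapping each tenant's Customer Address to a Provider Address. The sketch below illustrates why two tenants can reuse the same IP scheme; the table contents and function name are hypothetical, not Hyper-V's internal representation:

```python
# Hypothetical policy table: (tenant, Customer Address) -> Provider Address.
# Two tenants can reuse the same CA because the physical network
# only ever routes on the PA.
POLICY = {
    ("TenantA", "10.0.0.5"): "192.168.1.10",
    ("TenantA", "10.0.0.6"): "192.168.1.11",
    ("TenantB", "10.0.0.5"): "192.168.2.10",  # same CA as TenantA, no clash
}

def provider_address(tenant, customer_addr):
    """Look up the Provider Address the physical fabric actually routes on."""
    return POLICY[(tenant, customer_addr)]

print(provider_address("TenantA", "10.0.0.5"))  # 192.168.1.10
print(provider_address("TenantB", "10.0.0.5"))  # 192.168.2.10
```

The same Customer Address, 10.0.0.5, resolves to a different Provider Address per tenant, which is what keeps the two virtual networks isolated on one physical network.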

Resource metering

Resource metering makes capacity planning easier by collecting information about a virtual machine's resource use over a period of time. Furthermore, Windows Server 2012 introduces the concept of resource pools, which group multiple virtual machines that belong to one specific client or serve one specific function, so the metrics are collected on a per-client or per-function basis. This technique is helpful for IT budgeting and for billing customers. The metrics usually collected are: average CPU use (over a selected period of time), average memory use, minimum memory use, maximum memory use, maximum disk allocation, incoming network traffic, and outgoing network traffic.
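The aggregation behind those metrics is straightforward: averages over the period for CPU and memory, extremes for memory and disk, and totals for traffic. A minimal sketch, with hypothetical sample field names of my own choosing:

```python
def summarize(samples):
    """Aggregate per-interval measurements into the metrics listed above.
    Each sample is a dict of hypothetical per-interval readings."""
    n = len(samples)
    return {
        "avg_cpu_mhz":       sum(s["cpu_mhz"] for s in samples) / n,
        "avg_memory_mb":     sum(s["memory_mb"] for s in samples) / n,
        "min_memory_mb":     min(s["memory_mb"] for s in samples),
        "max_memory_mb":     max(s["memory_mb"] for s in samples),
        "max_disk_alloc_mb": max(s["disk_alloc_mb"] for s in samples),
        "inbound_mb":        sum(s["net_in_mb"] for s in samples),   # totals, not averages
        "outbound_mb":       sum(s["net_out_mb"] for s in samples),
    }

samples = [
    {"cpu_mhz": 100, "memory_mb": 512,  "disk_alloc_mb": 2048, "net_in_mb": 10, "net_out_mb": 5},
    {"cpu_mhz": 200, "memory_mb": 1024, "disk_alloc_mb": 2048, "net_in_mb": 20, "net_out_mb": 5},
]
print(summarize(samples))
```

To meter a whole resource pool rather than one VM, you would feed in the combined samples of every VM in the pool, which is exactly the per-client rollup the feature is for.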

Dynamic Host Configuration Protocol (DHCP)

A rogue DHCP server is a fake server connected to the network that answers DHCP client requests with incorrect addressing information. Active Directory protects the DHCP service by preventing other DHCP servers from operating on the network until they are authorized. However, this does not apply to non-Windows DHCP servers, which can still connect to the network and hand out addresses. Windows Server 2012 limits this threat by letting you specify which ports may have DHCP servers attached; if an intruder connects to any other port, its fake DHCP server packets are dropped.
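The filtering rule amounts to: drop DHCP server-side messages arriving on any port not on the trusted list, while leaving client messages alone. A minimal sketch, with hypothetical port names and frame fields (the real feature operates at the switch-port level, not in Python):

```python
TRUSTED_PORTS = {"port1"}  # hypothetical: only port1 may host a DHCP server

# Message types that only a DHCP *server* sends; clients send DISCOVER/REQUEST.
DHCP_SERVER_MSGS = {"OFFER", "ACK", "NAK"}

def filter_frame(ingress_port, frame):
    """Drop DHCP server replies that arrive on a port not marked as trusted."""
    if frame.get("dhcp_msg") in DHCP_SERVER_MSGS and ingress_port not in TRUSTED_PORTS:
        return None  # dropped: likely a rogue DHCP server
    return frame    # forwarded unchanged

print(filter_frame("port7", {"dhcp_msg": "OFFER"}))     # None (rogue reply dropped)
print(filter_frame("port1", {"dhcp_msg": "OFFER"}))     # forwarded
print(filter_frame("port7", {"dhcp_msg": "DISCOVER"}))  # client traffic passes anywhere
```

Note that client messages such as DISCOVER pass on any port; only server replies from untrusted ports are suppressed, which is what neutralizes a rogue server without breaking legitimate clients.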

Snapshots

Snapshots are mainly used when you need point-in-time recovery in case of an error. For example, before applying a service pack on a production server, you may want a safety net in case something bad happens: you take a snapshot before the service pack installation and, if needed, recover the server from it. But what if you don't need it? You've monitored the server for a while and everything seems normal after the patch, so the snapshot can go. In Windows Server 2008 R2, however, you couldn't just get rid of it; you had to pause the virtual machine for a while, making it inaccessible. Windows Server 2012 has a new feature, called Hyper-V Live Merge, which allows you to release the snapshot while the machine continues to run.

Stay tuned to Monitis for our future articles on Windows Server 2012. We will take a deeper look into these and some more advanced new features.



More Stories By Hovhannes Avoyan

Hovhannes Avoyan is the CEO of Monitis, Inc., a provider of on-demand systems management and monitoring software to 50,000 users spanning small businesses and Fortune 500 companies.

Prior to Monitis, he served as General Manager and Director of Development at prominent web portal Lycos Europe, where he grew the Lycos Armenia group from 30 people to over 200, making it the company's largest development center. Prior to Lycos, Avoyan was VP of Technology at Brience, Inc. (based in San Francisco and acquired by Syniverse), which delivered mobile internet content solutions to companies like Cisco, Ingram Micro, Washington Mutual, Wyndham Hotels, T-Mobile, and CNN. Prior to that, he served as the founder and CEO of CEDIT ltd., which was acquired by Brience. A 24-year veteran of the software industry, he also runs Sourcio cjsc, an IT consulting company and startup incubator specializing in Web 2.0 products and open-source technologies.

Hovhannes is a senior lecturer at the American University of Armenia and has been a visiting lecturer at San Francisco State University. He is a graduate of Bertelsmann University.
