The Benefits of Virtualization

What does the latest Sandy Bridge mean for virtualization in the central office?

Those familiar with deploying virtual machines (VMs) know that, to ensure predictable performance, VMs must be tied to dedicated physical resources. As the demand for data-intensive virtualized and cloud solutions continues to grow, more powerful server platforms will be required to deliver that performance without multiplying hardware infrastructure for every VM.

The new Intel Sandy Bridge series (Intel's Xeon E5-2600 processor family) is ideally suited to enabling more powerful and efficient virtualized solutions for high-throughput, processing-intensive communications applications. This latest dual-processor architecture features a higher core count and improved I/O and memory performance, allowing more virtual machines to run on a single physical platform. Virtualization can be extremely memory-intensive, as more VMs typically require more total system memory, and to keep performance predictable and VMs easy to manage, each one usually needs at least one dedicated physical processor core. The Sandy Bridge E5-2600 architecture lets an individual physical server support a greater number of virtualized appliances, consolidating hardware for lower operational costs, preventing VM sprawl, and simplifying the transition to the cloud with opportunities to scale up over multiple cores.
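
As a rough illustration of the core-per-VM pinning described above, the following sketch uses the libvirt Python bindings to dedicate a physical core to one of a VM's virtual CPUs. The domain name and the eight-CPU host topology are hypothetical, and the connection URI assumes a local KVM/QEMU hypervisor managed by libvirt.

    import libvirt

    # Connect to the local hypervisor (assumes KVM/QEMU managed by libvirt).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("media-gateway-vm")  # hypothetical domain name

    # Pin vCPU 0 to physical core 2. The cpumap holds one boolean per host
    # CPU; an eight-CPU host is assumed here.
    dom.pinVcpu(0, (False, False, True, False, False, False, False, False))
    conn.close()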

The Benefits of Virtualization
Modern carrier-grade platforms comprise unprecedented amounts of processing, memory, and network I/O resources. For developers, though, these resources also come with a mandate: make the most effective use of modern platforms through scaling and other techniques. Through the intelligent use of carrier-class virtualization, developers can create highly scalable platforms and often eliminate unnecessary over-provisioning of resources for peak usage.

Recent advances in multicore processors, cryptography accelerators, and high-throughput Ethernet silicon make it possible to consolidate onto a single private cloud what previously required multiple specialized server platforms. 4G wireless deployments, HD-quality video to all devices, the continuing transition to VoIP technologies, increased security concerns, and power efficiency requirements are all driving the need for more flexible solutions.

By deploying a private cloud with virtual machine infrastructure, one's hardware becomes a pool of resources available to be provisioned as needed. The control plane, data plane, and networking can all share the same pool of common hardware.

Deployments can be easily upgraded by simply adding physical resources to the managed pool. Additionally, migrating VM instances from one compute node to another, as Figure 1 shows, can be non-disruptive.
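
For a sense of how such a non-disruptive move looks in practice, here is a minimal sketch using the libvirt Python bindings, assuming a KVM/QEMU-based compute pool; the node hostnames and the domain name are hypothetical.

    import libvirt

    # Source and destination compute nodes (hostnames are hypothetical).
    src = libvirt.open("qemu+ssh://node-a/system")
    dst = libvirt.open("qemu+ssh://node-b/system")

    dom = src.lookupByName("session-border-ctrl")  # hypothetical domain name
    # VIR_MIGRATE_LIVE copies memory pages while the guest keeps running,
    # so the instance moves between nodes without a service outage.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)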

Many telecom solutions require multiple hardware platforms simply because they are made up of applications that run on different operating systems. In a private cloud deployment, multiple operating systems can run on the same physical hardware, eliminating this requirement.

A private cloud enables running instances (VMs assigned to a specific function) to be tailored to different workload environments. For example, a dedicated service level can be assigned to each instance, and as demand rises or falls, additional instances can be spawned or decommissioned as necessary. Because each process workload can be tailored to moment-in-time demand (see Figure 2), the practice of over-provisioning all resources for a "peak workload" can go by the wayside: as resources are no longer needed, they are simply returned to the pool for use by other instances that may need to be spawned, as the sketch below illustrates.
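
The spawn/decommission pattern reduces to a simple control loop. The following is a minimal, hypothetical illustration; the Pool class is a stand-in for whatever VM management API a given private cloud exposes.

    # Minimal scaling loop illustrating the spawn/decommission pattern.
    class Pool:
        def __init__(self, capacity):
            self.capacity = capacity        # total instances the pool can host
            self.instances = []             # currently running instances

        def spawn(self):
            if len(self.instances) < self.capacity:
                self.instances.append(object())  # placeholder for a VM handle

        def decommission(self):
            if self.instances:
                self.instances.pop()        # resource returns to the shared pool

    def rebalance(pool, demand, per_instance_capacity):
        """Grow or shrink the instance count to track moment-in-time demand."""
        needed = -(-demand // per_instance_capacity)   # ceiling division
        while len(pool.instances) < needed:
            pool.spawn()
        while len(pool.instances) > needed:
            pool.decommission()

    pool = Pool(capacity=16)
    rebalance(pool, demand=900, per_instance_capacity=250)
    print(len(pool.instances))   # -> 4 instances for the current demand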

Virtual machines make more efficient use of hardware resources by letting multiple instances share the same physical hardware, maximizing the use of those resources and increasing the work per watt of power consumed compared with traditional infrastructure.

VMs also allow for 1+1 and N+1 redundancy through the use of multiple virtual instances running on fewer independent hardware nodes, such as AdvancedTCA single-board computers (SBCs). By reducing the physical node count needed to achieve the same uptime goals, less power is consumed overall (see Figure 3).

AdvancedTCA and Virtualization
For private clouds running VM infrastructure, choosing AdvancedTCA chassis with SBCs for the compute node (the most common core element in any private cloud) makes sense because of their commonality, variety, manageability, and ease of deployment.

Network switches with Layer 3 functionality are the glue that holds the private cloud together. The selection of AdvancedTCA switches will depend largely on the internal and external bandwidth required for each compute node: video streaming or deep packet inspection, for example, typically requires far more bandwidth (and thus higher-bandwidth switches) than SMSC messaging.

The last necessity is also one of the most critical: shared storage. For an instance to be launched on or migrated to any physical node, all nodes must have access to the same storage. In private cloud infrastructure, a high-performance SAN and a cluster file system often supply this access. Connectivity options typically include Fibre Channel, SAS, and iSCSI. iSCSI, with link speeds of up to 10 Gbps, is the least intrusive option to implement on each node, as the SAN can be connected to the AdvancedTCA fabric switches to provide storage connectivity to every node.
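
On each compute node, attaching the shared iSCSI storage typically amounts to a discovery and a login against the SAN portal. The sketch below wraps the standard open-iscsi command-line tool from Python; the portal address and target IQN are hypothetical.

    import subprocess

    # Discover and log in to iSCSI targets on the SAN via the fabric switches.
    # The portal address and target IQN below are hypothetical.
    portal = "10.0.0.50"
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True)
    subprocess.run(
        ["iscsiadm", "-m", "node",
         "-T", "iqn.2012-01.com.example:atca-san.lun0",
         "-p", portal, "--login"],
        check=True)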

To avoid consuming excessive fabric bandwidth for storage connectivity in high-throughput environments, direct-attached SAS or Fibre Channel links connected externally to each node via rear transition modules (RTMs) are a viable option. With multiple manufacturers now making AdvancedTCA blade-based SANs as well as NEBS-certified external SANs, many options are available to meet the SAN requirements of a carrier-grade private cloud.

How Sandy Bridge Processors Optimize AdvancedTCA Platforms
The new Intel Xeon processor E5 family, based on the Sandy Bridge microarchitecture, substantially improves how well software applications run on AdvancedTCA platforms. It supports innovative networking through 40-gigabit Ethernet, and its features enable advanced virtualization and cloud computing techniques.

Intel Xeon E5-2600 series CPUs contain up to eight cores, each running up to 55 percent faster than its Xeon 5600 predecessor, and can therefore deliver much higher server performance to the enterprise market. Furthermore, new enterprise servers can support 32 GB dual in-line memory modules (DIMMs), so memory capacity across 24 slots can increase from 288 GB to 768 GB (24 slots x 32 GB). E5-based AdvancedTCA compute blades, with more limited board real estate, are expected to support up to 256 GB in 16 VLP RDIMM slots at launch, roughly a 40 percent increase over prior blades.

Greater power efficiency is another key benefit. The E5 family provides up to a 70 percent performance gain per watt over previous generation CPUs. Communications OEMs can develop power-efficient dual processor blades for service providers that fully meet or beat AdvancedTCA power specifications.

But the real game-changer lies in the E5-2600's integrated I/O, which allows designers to reduce latency significantly and increase bandwidth. AdvancedTCA's 40G fabric has been backplane-ready since 2010 in anticipation of an updated PICMG specification release, and since then solution providers have sought ways to eliminate bottlenecks and utilize as much of the fabric as possible. Now that Intel has integrated PCI Express 3.0, with 40 lanes aboard each Xeon processor and QuickPath Interconnect (QPI) links connecting the CPUs, I/O bottlenecks are reduced, throughput is increased, and I/O latency is cut by up to 30 percent. A standard dual Xeon E5-2600 configuration offers up to 80 lanes of PCIe Gen3, which provides 200 percent more throughput than previous-generation architectures.
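
A back-of-the-envelope calculation shows where the aggregate numbers come from; the per-lane rates used below are the standard PCIe figures (Gen3: 8 GT/s with 128b/130b encoding; Gen2: 5 GT/s with 8b/10b, or 0.5 GB/s per lane).

    # Rough aggregate PCIe bandwidth for a dual E5-2600 node.
    gen3_lane_gb_per_s = 8 * 128 / 130 / 8   # ~0.985 GB/s per lane, per direction
    lanes = 40 * 2                           # 40 lanes per socket, two sockets
    total = lanes * gen3_lane_gb_per_s
    print(f"{lanes} lanes x {gen3_lane_gb_per_s:.3f} GB/s ~ {total:.0f} GB/s per direction")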

The overall result is much higher I/O throughput. New AdvancedTCA blades will now be able to deliver more than 10 Gb/s per node. This is a critical milestone for wireless video applications that service providers are so hungry to launch. Greater overall performance and higher performance per watt are significant by themselves, but having enough I/O capacity to match the processor capabilities makes for even greater advances in application throughput.

More Stories By Austin Hipes

Austin Hipes currently serves as the director of field engineering for NEI. In this role, he manages field applications engineers (FAEs) supporting sales design activities and educating customers on hardware and the latest technology trends. Over the last eight years, Austin has been focused on designing systems for network equipment providers requiring carrier grade solutions. He was previously director of technology at Alliance Systems and a field applications engineer for Arrow Electronics. He received his Bachelor’s degree from the University of Texas at Dallas.


