@CloudExpo: Article

Building Your Private Cloud

Essential network considerations

Today's typical broadband virtual private network (VPN) connections to cloud applications will prove insufficient for tomorrow's cloud infrastructure services.

The reason is that infrastructure workloads demand more from the network than software services.

While broadband network services fit the user-to-machine cloud model for Software as a Service (SaaS) applications, the network needs to be upgraded in three key areas for machine-to-machine cloud infrastructure services (Infrastructure as a Service, or IaaS):

  • Capacity and scalability
  • Security and encryption
  • Bandwidth on-demand

Let's take a look at why your network will need to incorporate each of these emerging requirements for IaaS.

Capacity and Scalability
The first requirement is the most obvious: the workload size under infrastructure services is orders of magnitude larger than the amount of network traffic generated by software services. Cloud workloads start with virtual machine (VM) and storage mobility.

As business-critical server applications like email, Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) move to the cloud, they are typically deployed as VMs rather than on dedicated physical servers. Today, organizations can take advantage of the advanced processing features of an average server to house as many as 15 VMs per physical server - each with its own operating system and application.

This logical partitioning can increase the server's efficiency from the standard 15-30 percent range to upwards of 90 percent. Once the server is virtual, workload balancing to alleviate hot spots and avoid application performance degradation can now be done electronically by moving VMs over the network to alternate servers. Ideally, this workload balancing is done while the application is "live" for uninterrupted availability and elimination of complex server restarting.
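The consolidation arithmetic above can be sketched as a quick calculation. The 20 percent starting utilization and 90 percent target below are illustrative assumptions, not figures from any specific deployment:

```python
import math

def hosts_after_consolidation(n_servers, avg_utilization, target_utilization=0.90):
    """Estimate how many virtualized hosts can absorb the aggregate load
    of n_servers dedicated physical servers running at avg_utilization each.
    Illustrative capacity-planning arithmetic only; real sizing must also
    account for memory, I/O, and failover headroom."""
    total_load = n_servers * avg_utilization  # load in "full host" units
    return math.ceil(total_load / target_utilization)

# 15 dedicated servers idling at 20% utilization could be consolidated
# onto 4 hosts driven toward a 90% utilization target.
print(hosts_after_consolidation(15, 0.20))  # 4
```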

In the cloud, a virtualized server is called a VM "instance." Each instance is contracted from the cloud provider with a certain amount of CPU, memory and storage resources, and instances range considerably in size. Amazon Web Services (AWS) instances vary from Small, with 1.7 GB of memory and 160 GB of storage, to Quad Extra Large, with 68 GB of memory and 1,690 GB of storage. These numbers could soon go higher, as VMware recently announced support for monster-sized VMs with up to 1 TByte of memory.

In addition to server instances, many cloud firms are now providing cloud-based storage services, ranging from corporate services like Amazon's Simple Storage Service (S3) to consumer-oriented, easy-to-use cloud storage provided by Dropbox. Let's not forget Apple's new iCloud service, which promises 5 GBytes of free storage for not only music and photos, but also books, videos and even business-oriented information like applications, documents, contacts, calendars and email. Clearly, storage has proven to be an early "killer app" for the cloud, and it's a market that Taneja Group estimates at $4B today, growing to $14B by 2014.

The need to offer a network with larger capacity that can easily scale becomes apparent as the industry moves beyond using cloud storage services for modest bandwidth-intensive applications to more demanding enterprise-class needs.

Consumer Class Cloud Use Cases:

  • Business files
  • Music
  • Photos
  • Video

Enterprise Class Cloud Use Cases:

  • Disaster recovery
  • VM workload migration
  • Storage virtualization
  • Virtualized data centers

The need to offer a secure, reliable, high-performance connection to the cloud becomes much more critical to enterprise success. The reason for this is simple - enterprise cloud customers only have so much time in the day to move their mission-critical data, and therefore require the right connection and the ability to tune that connection based on their specific needs.

As the cloud business evolves from software services - cloud-based applications that transfer small amounts of data to cloud storage - to infrastructure services with more mission-critical, larger file-size requirements, the standard Internet connection will no longer suffice. Instead, we need a different network architecture. IaaS applications like storage, and new use cases like VM mobility, require technology with greater bandwidth capacity and scalability to get their workflows accomplished in a reasonable amount of time.

Today's cloud IaaS users are not coping well with existing network restrictions, which may force them to ship their information via truck instead of transferring it electronically. And truck transfers introduce security concerns as well as obviously long latency.

Let's see why these typical VM and storage workloads impact the network.

The chart above maps VM and storage workload sizes against different bandwidth deployments, to show the time in days to accomplish the migration.

The .52 TByte case on the bottom reflects a "small" instance of VMs and storage, 10 GB of memory and 2 GB of storage, and a use case to move 10 instances. The 25 TByte use case on the top of the chart scales up in this example to 500 VMs of larger VM instances.

As the figure shows, even a small job - an occasional VM move to change server vendor platforms, for example - cannot be accomplished within a day on most corporate networks. These relatively small infrastructure jobs - moving VMs and associated storage totaling .52 TBytes - would take multiple eight-hour days at typical Internet speeds, or more than one workday on a typical corporate 40 Mbps network. These workload times are "best case," as retransmissions and network delays due to the packet loss and latency often seen on shared Internet links would greatly expand VM and storage transfer times.

Unplanned VM moves, such as emergency workload balancing when a critical application hits a server capacity threshold, may require immediate, large doses of bandwidth to resolve the crisis in a timely manner. There are also predictable peak workload times, such as a holiday season, when applications may be moved to the cloud to take advantage of a highly scalable server environment. The model shows that typical job sizes for these workloads - 1.25 TBytes and 10.5 TBytes - require roughly 1 Gbps links to complete in a day.

Finally, the bulk workload use case example for moving critical applications live during a data center change could involve many Terabytes of data, and with a relatively short time frame for completion. These larger jobs like a 25 TByte bulk VM migration would take multiple days even with a 1 Gbps network connection, further illustrating the need for more scalability and capacity in the cloud network.
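The chart's arithmetic can be reproduced with a simple back-of-the-envelope calculation. Decimal units and a fully dedicated, loss-free link are assumed here, so real transfers will only take longer:

```python
def transfer_days(workload_tbytes, link_mbps, utilization=1.0):
    """Days to move a workload of the given size over a link of the
    given speed. Assumes decimal units (1 TByte = 10**12 bytes) and no
    retransmissions; packet loss and latency only add to this."""
    bits = workload_tbytes * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400  # seconds per day

for tb, mbps, label in [(0.52, 40,   "small VM move, 40 Mbps corporate link"),
                        (1.25, 1000, "planned rebalancing, 1 Gbps"),
                        (10.5, 1000, "peak-season migration, 1 Gbps"),
                        (25,   1000, "bulk data center move, 1 Gbps")]:
    print(f"{label}: {transfer_days(tb, mbps):.1f} days")
```

Even the "small" .52 TByte job exceeds a working day at 40 Mbps, while the 25 TByte bulk move needs well over two days of continuous 1 Gbps throughput.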

Next, we'll see why network connections to fulfill the promise of cloud-based enterprise-class infrastructure services will also need to be secure and on-demand.

Security and Encryption
In addition to more flexible bandwidth, cloud services need to address a wide array of security concerns, from storage security for data at rest to network service security for data in flight. Enterprises considering cloud deployments have many other concerns related to security such as data recovery, reliability, physical location, network access, performance and network latency.

Public IP networks tend to offer few guarantees for service uptime, quality of service and latency. For example, Amazon assumes 80 percent network utilization for data transfers in its Import/Export calculations, which we can attribute to the typical congestion, retransmission and latency characteristics of shared network connectivity. These "best effort" networks force enterprises to compromise and settle for less-than-ideal levels of packet loss and network latency that greatly affect the performance of infrastructure applications. In addition, enterprises using public IP networks for critical infrastructure processes may be at risk of a denial-of-service attack, which could have severe business availability implications.

With modern, carrier-grade Ethernet and Packet Optical networking architectures, enterprises can comfortably drive as much as 95 percent network utilization for increased throughput, along with better access performance, scalability, availability and lower network latency. A predictable and secure network is essential for enterprise mission-critical infrastructure networking applications.
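The gap between an 80 percent best-effort link and a 95 percent carrier-grade link can be quantified with a quick sketch; the 10 TByte job below is a hypothetical example:

```python
def effective_hours(tbytes, gbps, utilization):
    """Hours to move tbytes over a gbps link at a given sustained
    utilization (decimal units; illustrative arithmetic only)."""
    return tbytes * 1e12 * 8 / (gbps * 1e9 * utilization) / 3600

shared = effective_hours(10, 1, 0.80)     # best-effort IP, ~80% useful
dedicated = effective_hours(10, 1, 0.95)  # carrier Ethernet, ~95% useful
print(f"{shared:.1f} h vs {dedicated:.1f} h")  # 27.8 h vs 23.4 h
```

The same physical 1 Gbps link finishes the job roughly four hours sooner at carrier-grade utilization, before even counting the avoided retransmissions.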

Many organizations also face regulatory compliance and intellectual property protection requirements for their data networking. For example, network-level encryption services are increasingly important in health care, government, financial services and other industries dependent on their ability to protect their sensitive data.

Encryption services address data protection requirements by making the data in flight unintelligible in case the connection is compromised. Today's encryption services offer line-speed encryption in a compact size, and feature the added benefit of providing complete end-to-end management of encrypted services where key management is separated from network management. This separation is a critical element in allowing service providers to offer encryption services that still enable enterprises to control their own encryption keys.

Encryption of data in-flight between the organization and a cloud provider ensures secure transfer while maintaining network performance, latency and bandwidth level.

Bandwidth On-Demand
While network services need to be scalable and secure, they also need to be affordable.

We've discussed the need for network scalability and capacity for infrastructure services in the first section. Under local area network (LAN) conditions, VM migrations are usually not a problem. When moving across metro or long distances, however, we need dynamic network scalability to provide the throughput and other characteristics necessary for transferring large VM and storage workloads. Deploying higher-capacity bandwidth circuits is possible, but the industry-standard 3- or 5-year bandwidth contracts are not economically viable for the variable workload demands, like VM migrations, typical of cloud infrastructure services.

We need to do some math to see why the connection speeds used for cloud-based user-to-machine traffic need to be at fundamentally different levels when applied to machine-to-machine traffic for server and storage services.

At Amazon Web Services, the company provides a simple chart to determine how long it will take to transfer data to the Amazon cloud, taking into account the volume of data to be sent and the available bandwidth, assuming standard Internet connections from T1 (1.544 Mbps) through 1 GbE. When the transfer time exceeds a recommended threshold, Amazon suggests physically shipping data on storage devices via its AWS Import/Export service.

According to Amazon's chart, it would take 82 days to transfer 1 TByte of information over a T1 network service, which means anything above 100 GBytes should be physically shipped rather than electronically transferred. (To put this in perspective, 100 GBytes is about the size of a 2004-era laptop disk - not a lot of information by today's standards.) A T1 service simply does not provide enough bandwidth for many workload transfers.

On the other end of the scale, Amazon estimates that sending 1 TByte over a 1 GbE network would take less than one day (similar to the calculations shown in the chart discussed in the first section). For transfers exceeding 60 TBytes over a 1 GbE network, Amazon again recommends using its Import/Export physical transport service. Even with a 1 GbE network, there are still some serious limitations on cloud data transfer. (Keep in mind that multiple-day electronic data transfers dramatically increase the probability of something going wrong - which would extend the job even longer.)
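Amazon's published figures can be roughly reconstructed from its stated 80 percent utilization assumption. Reading "TByte" as a binary terabyte (2^40 bytes) is our assumption here; it happens to reproduce the 82-day number:

```python
def amazon_style_days(tbytes, line_rate_bps, utilization=0.80):
    """Reconstruct an Import/Export-style transfer-time estimate:
    80% sustained line utilization, with sizes read as binary
    terabytes (2**40 bytes) - an assumption on our part that
    approximately reproduces the published 82-day figure."""
    bits = tbytes * 2**40 * 8
    return bits / (line_rate_bps * utilization) / 86400

print(f"1 TByte over T1:    {amazon_style_days(1, 1.544e6):.0f} days")  # ~82
print(f"1 TByte over 1 GbE: {amazon_style_days(1, 1e9):.2f} days")      # well under a day
print(f"60 TBytes, 1 GbE:   {amazon_style_days(60, 1e9):.1f} days")     # multiple days
```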

Providing "on-demand" bandwidth to accomplish this workload makes it more affordable for cloud use cases like workload mobility, availability and collaboration. For example, a cloud service backbone could scale to a 10 Gbps network and enable more than 30 TBytes to be transferred in a day, easily addressing the bulk VM migration use case, and then scale down once the migration is over.

Amazon's new Direct Connect service is a response to this need and could be a forerunner to more cloud service providers moving to new cloud networking architectures that respond to the growing amount - and importance - of the information in the cloud. Direct Connect provides a direct 1 or 10 Gbps connection to an Amazon cloud data center billed on an hourly basis. For Amazon cloud users, this new network service could provide the scalability and extra capacity to move large workloads back and forth from the cloud while paying only for time used on the network.

Dynamic networking can also be implemented with intelligent edge devices that can change an application's allocation of existing bandwidth. A steady-state configuration may give equal bandwidth to each connected application. When a bandwidth-hungry workload, such as a VM migration, needs a connection, the edge device can dynamically reallocate bandwidth assignments so the migration gets the bandwidth it needs to complete in a timely manner.
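A minimal sketch of such an edge device's reallocation policy follows. The class, method names and the 80 percent boost fraction are hypothetical illustrations, not any specific vendor's API:

```python
class EdgeBandwidthManager:
    """Sketch of an intelligent edge device's bandwidth reallocation
    policy (hypothetical API, for illustration only)."""

    def __init__(self, link_mbps, apps):
        self.link_mbps = link_mbps
        self.apps = list(apps)

    def steady_state(self):
        """Equal share for every connected application."""
        share = self.link_mbps / len(self.apps)
        return {app: share for app in self.apps}

    def boost(self, app, fraction=0.8):
        """Give one bandwidth-hungry workload (e.g. a VM migration)
        most of the link; the remaining apps split what is left."""
        alloc = {a: self.link_mbps * (1 - fraction) / (len(self.apps) - 1)
                 for a in self.apps if a != app}
        alloc[app] = self.link_mbps * fraction
        return alloc

mgr = EdgeBandwidthManager(1000, ["email", "crm", "vm-migration"])
print(mgr.steady_state())         # ~333 Mbps each
print(mgr.boost("vm-migration"))  # 800 Mbps to the migration, 100 each to the rest
```

After the migration completes, calling `steady_state()` again models the return to the former equal allocation.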

Carrier networks have the potential to dramatically increase performance by adding incremental new bandwidth end-to-end, charging for the premium bandwidth only when used. Then, after the workload task is accomplished, the premium bandwidth could be automatically reduced to the former steady state level.

Many service providers are looking to these new designs that can accommodate the ebb and flow of IT workload between enterprise and cloud data centers.

Summary
Cloud IaaS services offer IT management many options that increase their agility and decrease the time to deploy new solutions. Today's private enterprise networks are already prepared to address cloud application access, but as noted above, this all changes in a cloud infrastructure services model.

Virtualization of servers breaks the physical boundaries of workload balancing. The desire for policy-driven and automated workload balancing between private and cloud data centers requires a more scalable, secure and on-demand backbone network. Now, a more flexible, secure and dynamic network can extend the virtual data center, breaking down the data center walls by connecting enterprise data centers and cloud resources.

This new enterprise IT architecture - the IT architecture of the future - will feature virtualized data center capacity enabled with a carrier class, on-demand network backbone designed for cloud infrastructure services.

More Stories By Jim Morin

Jim Morin is a Product Line Director working in Ciena’s Industry Marketing segment. He is responsible for developing and communicating solutions and the business value for Ciena’s enterprise data center networking and cloud networking opportunities. Prior to joining Ciena in 2008 he held roles in business development and product management for several high technology storage and networking companies in Minneapolis.

Jim holds an MBA from the University of St. Thomas and a BA from the University of Notre Dame. He recently served on the Commission on the Leadership Opportunity in US Deployment of the Cloud (CLOUD2).

