Cloud Infrastructure: Maybe the NBN Isn't So Crazy After All

Gigabit Service Sounds Hare-Brained, but Infrastructure Improvement Is Critical

Cloud Computing is often compared to utilities such as water and electricity, an oversimplification in that it doesn't account for data integrity, SLA granularity, or where specific services lie along the SaaS/PaaS/IaaS continuum. But the bandwidth that underlies all Cloud Computing services can be compared in this way.

Just as water pressure would drop in an area if everyone turned all their faucets on full blast, computer networks bog down when too many people demand too much individual bandwidth. The same principle applies to freeway jams, which can be so fierce and unrelenting in megacities such as Los Angeles that traffic engineers employ wave theory to explain them.

At the local level, at least, the Internet actually can be thought of as a series of "tubes," which can carry only so much material. The late Sen. Ted Stevens wasn't as clueless as folks made him out to be. He understood that Internet performance could be severely degraded at the local level, whether attributed to slow transmission rates or latency.

Overbooking as Engineering
Oversubscription is the villain here. A business, rather than technical, analogy might be an airline overbooking its flights based on historical data showing that not everyone booked on a flight will show up for it.

But airlines typically overbook by a factor of 10 to 20 percent, and sometimes 50 percent. Internet service providers might oversubscribe by a factor of 100:1 (i.e., by 10,000 percent), 1000:1 (i.e., by 100,000 percent), or even 10,000:1.
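To make those ratios concrete, here's a minimal Python sketch (my own illustration, not from any source quoted here) relating an oversubscription ratio to the percentage figures above and to the worst-case bandwidth each user actually gets:

```python
# Illustrative arithmetic only: relate an oversubscription ratio to the
# percentage figure quoted above and to the guaranteed per-user share.

def oversubscription_percent(ratio: float) -> float:
    """A 100:1 ratio means 100x the capacity is sold, i.e. 10,000 percent."""
    return ratio * 100

def guaranteed_share_mbps(sold_mbps: float, ratio: float) -> float:
    """Worst-case bandwidth per user if everyone transmits at once."""
    return sold_mbps / ratio

for ratio in (100, 1000, 10_000):
    pct = oversubscription_percent(ratio)
    share = guaranteed_share_mbps(1000, ratio)  # assume a 1Gbps (1000Mbps) product
    print(f"{ratio:>6}:1 -> {pct:>10,.0f}%  ({share:.2f}Mbps worst case per user)")
```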

My thinking on this was prompted by Australia's National Broadband Network (NBN) initiative, which promises to deliver gigabit/second service to almost all citizens of the country. The oversubscription behind this plan seemed insane to me. On this topic, I heard from an engineer, Phillip Jaenke from Cleveland, Ohio (@RootWyrm on Twitter), who's been working on high-speed networks for almost 20 years.

He said we could start examining this problem by looking at an oversubscription rate of 100:1, or "1000:10; that is, for every 1000Mbps (one gigabit) of bandwidth, we will guarantee that all users uploading and downloading at the same time can get 10Mbps of bandwidth. To support 5000 users from a single POP, you need 50,000Mbps (50 gigabits per second) of bandwidth."

"This means putting each POP onto a SONET backbone built around multiple OC-192's to maintain an oversubscription rate of 100:1. To get down to OC-48's requires an oversubscription rate of 10,000:10 (1000:1)."

A SONET is a Synchronous Optical Network, the type of optical transport widely used by providers in the US and Canada. An OC-192 is an optical carrier circuit that delivers roughly 10 gigabits per second (a 9.953Gbps line rate). It's equal to four of the more-common OC-48 lines, or more than 6,400 T-1 lines.

So, according to Phillip's numbers, you would need five or six of these puppies--at a cost of more than $1 million each--to deliver 10Mbps service to 5000 users. The OC-192s alone would work out to roughly $1,000 per person.
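Running the equivalences and the cost through the same sketch (the roughly $1 million per line is the figure above; the line rates are standard; six lines gives about $1,200 per person, while the "about $1,000" above assumes five):

```python
import math

OC192_MBPS = 9953.28     # standard SONET line rates (published figures)
OC48_MBPS = 2488.32
T1_MBPS = 1.544

print(f"OC-192 = {OC192_MBPS / OC48_MBPS:.0f} OC-48s "
      f"= {OC192_MBPS / T1_MBPS:,.0f} T-1 lines")    # 4 OC-48s, ~6,446 T-1s

USERS = 5000
DEMAND_MBPS = 50_000                 # from the POP example above
COST_PER_OC192 = 1_000_000           # "more than $1 million each"

lines = math.ceil(DEMAND_MBPS / OC192_MBPS)                                    # 6
print(f"{lines} OC-192s -> ${lines * COST_PER_OC192 / USERS:,.0f} per user")  # $1,200
```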

This example has an oversubscription rate of 100:1. If everyone tried to max out their bandwidth at once, each user's service would drop to the guaranteed 10Mbps floor, just 1 percent of the promised gigabit.

The NBN Amps This Up
The Australian NBN plan calls for delivery rates that are 100X higher than our example. So, either the lines will cost $100,000 per person, or oversubscription rates will be 10,000:1. My guess is that the gigabit promise will be scaled back to 100Mbps (as it was originally, according to what I've read), and that in reality almost no one will need an effective speed higher than 10Mbps.
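The scaling itself is just multiplication; a quick sketch of those two options, carrying over the illustrative figures from the example above:

```python
# A 100x higher delivery rate means either 100x the line cost per user
# or 100x the oversubscription ratio, holding everything else constant.
SCALE = 100                      # gigabit promise vs. the 10Mbps example
cost_per_user = 1_000            # from the OC-192 example above
ratio = 100                      # the example's 100:1

print(f"Option 1: fully provision -> ${cost_per_user * SCALE:,} per user")   # $100,000
print(f"Option 2: same lines      -> {ratio * SCALE:,}:1 oversubscription")  # 10,000:1
```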

Streaming live video requires at most 10Mbps, although serial downloaders (who stockpile content to watch later) will chew up whatever bandwidth is available.

Phillip, always the engineer, told me he "agrees with the Australian NBN. For one thing, using the movie streaming example, let's talk about what happens in three years or sooner, when people start streaming Blu-Ray with Dolby DTS audio. The requirements then jump tremendously: 48Mbit/s."
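To see what that 48Mbit/s figure does to a guaranteed floor, here's one more small sketch (the per-stream rates are the ones quoted above; the floor values are illustrative):

```python
# How many simultaneous streams fit inside a guaranteed bandwidth floor?
STREAM_MBPS = {"10Mbps HD stream": 10, "48Mbps Blu-ray stream": 48}

for floor_mbps in (10, 100, 1000):   # example floor, scaled-back NBN, full gigabit
    counts = ", ".join(f"{floor_mbps // rate} x {name}"
                       for name, rate in STREAM_MBPS.items())
    print(f"{floor_mbps:>5}Mbps floor: {counts}")
```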

He pointed out that "the NBN won't be implemented tomorrow. It's a network that's designed for the next 10 years or so."

Here's the Crux of the Matter
More important, "it's not such a bad idea because it forces infrastructure improvements, something that is lacking in every market. Projects like NBN and Metro Ethernet programs force the core infrastructure to upgrade."

He notes that the rapid growth of cable modems and DOCSIS in the US forced the nation's backbone to be upgraded. Had they not taken off, our backbone "would most likely still be collections of 45Mbps DS3 lines, with a few long-run OC-12s (622Mbps lines) at peering points."

Cloud Computing Needs It
He also told me that continuous infrastructure improvement is even "more important for Cloud infrastructure. Amazon Web Services and the like can't work if they can't get sufficient bandwidth to their facilities, regardless of how much bandwidth their customers have."

Perhaps the "10Mbps is enough" argument will some day rank with the "640K of RAM is enough" argument in its short-sightedness. On the other hand, computer memory has followed a Moore's Law on steroids price/performance curve over the past 30 years. I doubt optical networks will follow the same.

Engineers will put people on the moon if you give them the resources to do it. I would just caution politicians worldwide not to promise the moon as they lecture and legislate the future of Internet service delivery.

More Stories By Roger Strukhoff

Roger Strukhoff (@IoT2040) is Executive Director of the Tau Institute for Global ICT Research, with offices in Illinois and Manila. He is Conference Chair of @CloudExpo & @ThingsExpo, and Editor of SYS-CON Media's CloudComputing BigData & IoT Journals. He holds a BA from Knox College & conducted MBA studies at CSU-East Bay.
