Hyperscale Computing Driving Small-Scale Designs

Are mid-scale offerings soon to be obsolete?

Multi-million-user social networks, cloud hosting, Internet search, and Big Data problems such as meteorology, complex physics, and business informatics all share one basic need: each requires an incredibly large, complex, and varied computing platform. A common requirement across these systems is to "optimize the unit cost of computing." At this degree of hyperscale computing, the network, systems, software, facility, and maintenance add up to tens or hundreds of millions of dollars per project, and optimizing a single element, or coordinating several, can save the business millions. A good example of this holistic approach is Facebook's OpenCompute project, which produced infrastructure that is 38% more energy efficient and 24% less expensive to build.

Much as racing technology from Indy, F1, and NASCAR ends up in passenger vehicles, the hyperscale computing innovations we're seeing in juggernauts like Facebook will end up as line-item part numbers from vendors, available to everyone. The timing couldn't be better: solid-state drives are becoming affordable, and most enterprises are ramping up private cloud initiatives within their firms.

In a hyperscale design, premium computing constructs (like those seen in blade systems) are normally abandoned in favor of stripped-down commodity designs that do the job at a fraction of the price. Because of the size of the deployment, rewriting an application to take advantage of the commodity compute fabric, or moving a task that was done in purpose-built hardware into custom software (e.g., disaster fault recovery), becomes cost-effective; essentially, the decreased investment in hardware easily funds the software investment (a sketch of this hardware-to-software trade follows the list below). So what design elements are being abandoned in favor of hyperscale computing?

Examples of the complex, monolithic design elements being abandoned:

  • Premium storage array networks with expensive optical connectivity and recovery features are being replaced with a mix of locally attached and network-attached storage, eliminating the heavy burden on the storage network
  • Dedicated compute, management, and storage networks are being replaced, favoring virtual LANs that reduce cabling and network costs
  • High cost per port network switching is being replaced, favoring commodity network components
  • High cost per socket blade systems are being replaced, favoring commodity compute components
  • Devices for monitoring and management are being replaced, favoring software tools and thoughtfully architected applications
  • Hot-swappable devices for high availability are being replaced, favoring streamlined hardware configuration
  • Redundant power supplies are being eliminated
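
To make the hardware-to-software trade concrete, here is a minimal sketch (hypothetical, in Python; nothing in it comes from the article) of fault recovery done in software: rather than relying on a RAID controller or hot-swappable drives, the application replicates each write across several commodity nodes and reads from whichever replica is healthy.

```python
import random

# Hypothetical in-memory stand-ins for commodity storage nodes; real
# nodes would be network endpoints that can fail at any time.
class Node:
    def __init__(self):
        self.data = {}
        self.up = True

    def write(self, key, value):
        if not self.up:
            raise IOError("node down")
        self.data[key] = value

NODES = [Node() for _ in range(5)]
REPLICAS = 3  # each object lands on 3 of the 5 nodes

def put(key, value):
    """Replicate a write across commodity nodes in software,
    tolerating individual node failures instead of buying
    redundant hardware."""
    written = 0
    for node in random.sample(NODES, len(NODES)):
        try:
            node.write(key, value)
            written += 1
        except IOError:
            continue  # failed node: the remaining replicas carry the data
        if written == REPLICAS:
            break
    return written  # caller can alert if written < REPLICAS

def get(key):
    """Read from the first healthy replica that holds the key."""
    for node in NODES:
        if node.up and key in node.data:
            return node.data[key]
    raise KeyError(key)

# Example: survive one dead node.
NODES[0].up = False
put("user:42", b"profile blob")
assert get("user:42") == b"profile blob"
```

The design choice is the same one the list above describes: durability moves from premium hardware into a few dozen lines of application logic.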

The best visualization of this unit-cost-of-computing design is the original Google platform from 1998, which racked individual components without even purchasing machine cases.

Previously, building a well-optimized hyperscale compute fabric required a full staff of hardware, network, application, systems, and facilities engineers to drive out the costs. Today, there are firms using hyperscale designs to create private cloud solutions affordable for the small and medium-sized business market, or for business units within large firms. Companies working in this space aim to build the highest-performance-per-IOPS private cloud, delivering highly scalable infrastructure.

Ideally, this architecture comes as a single unit that combines converged networking, a mix of local and network-attached storage, and management software in a small form factor. A handful of innovative vendors offer these solutions today. Customers adopting them enjoy an extremely low cost of entry: a minimally configured system can run a base level of virtual machines as a private, dedicated system, with the ability to scale as needed. Hyperscale designs also work well in large deployments running hundreds of thousands of virtual machines.

The mid-scale cloud market, comprising tens of thousands of virtual machines, is also an interesting space. Currently, mid-market integrated private cloud offerings require large upfront costs plus ongoing operational costs: dedicated staff to manage and maintain the complicated compute, storage, and networking, on top of expensive per-socket and per-port hardware. Before buying, customers in this space should ask vendors for the cost per VM and the cost per terabyte of storage, and determine what skills are required to maintain such an infrastructure. At this point, the mid-scale solutions look obsolete: evolving hyperscale formats require lower cost commitments and deliver high price/performance, with compute, network, and storage working in concert.
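
Those per-unit questions reduce to simple arithmetic. A back-of-the-envelope sketch (all figures below are invented for illustration, not vendor data):

```python
# Hypothetical figures for illustration only -- substitute real quotes.
capex = 1_200_000          # upfront hardware + software ($)
opex_per_year = 300_000    # staff, power, support ($/yr)
years = 3                  # amortization window
vms = 10_000               # virtual machines hosted
storage_tb = 500           # usable storage (TB)

total_cost = capex + opex_per_year * years
print(f"cost per VM: ${total_cost / vms:,.0f}")        # $210
print(f"cost per TB: ${total_cost / storage_tb:,.0f}")  # $4,200
```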

On the application side, hyperscale architecture is a natural platform for applications designed to leverage its key features: horizontal scalability (for high throughput and increased performance) and redundancy (for high availability and fault tolerance). The earlier architectures described above took a different approach to performance and reliability: data access performance and high availability relied on premium storage array networks with expensive optical connectivity and recovery features, while compute performance relied on high-cost-per-socket blade systems and high-cost-per-port network switches.

The service orientation and "assumed failure" approach to cloud applications puts the burden of performance and reliability assurance on the application architecture. By constructing applications as collections of loosely coupled services, greater performance can be achieved by distributing and replicating services horizontally across commodity compute, network, and storage components. High availability is achieved the same way: application services are replicated across the hyperscale environment, and a failover mechanism redirects traffic to mirrored services when a failure is detected.
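
A minimal sketch of that failover pattern, assuming invented service endpoints (the URLs and names below are illustrative only): the caller knows several replicas of a logical service and fails over to a mirror when the current one is unreachable.

```python
import urllib.request
import urllib.error

# Hypothetical replica endpoints for one logical service.
PRICING_REPLICAS = [
    "http://pricing-a.internal:8080/quote",
    "http://pricing-b.internal:8080/quote",
    "http://pricing-c.internal:8080/quote",
]

def call_with_failover(urls, timeout=2):
    """Try each replica in turn; the first healthy one wins.
    'Assumed failure' design: any replica may be down at any time."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # replica down; fail over to the next mirror
    raise RuntimeError(f"all replicas failed: {last_error}")
```

Because every replica is assumed to be failure-prone, no single commodity component has to be reliable; the aggregate is.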

This synergy between hyperscale architecture and applications designed for it brings several additional benefits. On the performance side, system monitoring software can be configured to detect business-policy-driven performance thresholds and automatically scale services out or in based on those policies. A similar strategy works for high availability: should the number of redundant backup services fall below a threshold, additional backups can be launched before service disruption becomes a danger. Another benefit is the ease with which applications and platform components can be patched or replaced without service disruption. Finally, the same mechanism that applies patches and replaces platform components makes it easy to test and launch new features in line with the company's business strategy.
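
As a sketch of such policy-driven scaling (the thresholds and the reconcile function are hypothetical, not any specific product's API), a monitor can compare observed latency and healthy spares against business policy and compute the replica count to target:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Hypothetical business policy: keep latency under 200 ms and
    # always hold at least 2 spare backup replicas.
    max_latency_ms: float = 200.0
    min_spare_replicas: int = 2
    min_replicas: int = 2
    max_replicas: int = 64

def desired_replicas(policy, current, latency_ms, healthy_spares):
    """Return the replica count the next reconcile pass should target."""
    target = current
    if latency_ms > policy.max_latency_ms:
        target = current * 2  # scale out under load
    elif latency_ms < policy.max_latency_ms / 2:
        target = max(current - 1, policy.min_replicas)  # contract when idle
    # High-availability policy: top up spares before disruption is possible.
    if healthy_spares < policy.min_spare_replicas:
        target += policy.min_spare_replicas - healthy_spares
    return min(max(target, policy.min_replicas), policy.max_replicas)

# Example: 4 replicas, 250 ms latency, only 1 healthy spare -> target 9.
print(desired_replicas(Policy(), 4, 250.0, 1))
```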

In conclusion, organizations across a wide variety of markets need robust, high-density servers at an affordable entry price. A hyperscale architecture combined with well-designed applications gives enterprises a powerful tool for running an agile business: staying ahead of the competition and exploiting new business opportunities as they arise.

More Stories By Lee Thompson

Lee Thompson is passionate about using cutting-edge technology to automate businesses. He was one of the key architects of E*TRADE FINANCIAL, using technology to make financial services products affordable for everyone. Lee currently brings his broad experience to Morphlabs as Chief Technology Officer, and to dev2ops.org, where he is a contributor.
