


@CloudExpo: Article

Hyperscale Computing Driving Small-Scale Designs

Are mid-scale offerings soon to be obsolete?

Multi-million-user social networks, cloud hosting, Internet search, and Big Data problems such as meteorology, complex physics, and business informatics all share one basic need: each requires an incredibly large, complex, and varied computing platform. A common requirement across these systems is to "optimize the unit cost of computing." At this degree of hyperscale computing, the network, systems, software, facility, and maintenance together add up to tens or hundreds of millions of dollars per project, and optimizing a single element, or coordinating several, can save the business millions. A good example of this holistic approach is Facebook's Open Compute Project, which the company reports is 38% more energy efficient and 24% less expensive to build than its previous facilities.

Similar to the automobile industry, where racing technology from Indy, F1, and NASCAR ends up in passenger vehicles, the hyperscale compute innovations we're seeing in juggernauts like Facebook will end up as line-item part numbers from vendors, available to everyone. The timing couldn't be better: solid-state drives are becoming affordable, and most enterprises are ramping up private cloud initiatives within their firms.

In a hyperscale design, premium computing constructs (like those seen in blade systems) are normally abandoned in favor of stripped-down commodity designs that do the job at a fraction of the price. Because of the size of the deployment, rewriting an application to take advantage of the commodity compute fabric, or moving a task that was done in purpose-built hardware into custom software (e.g., disaster fault recovery), becomes cost-effective. Essentially, the decreased investment in hardware easily funds the software investment. So what design elements are being abandoned in favor of hyperscale computing?

Examples from the complex monolithic systems being abandoned:

  • Premium storage area networks (SANs) with expensive optical connectivity and recovery features are being replaced with a mix of locally attached and network-attached storage, eliminating the heavy burden on the storage network
  • Dedicated compute, management, and storage networks are being replaced by virtual LANs that reduce cabling and network costs
  • High-cost-per-port network switching is being replaced by commodity network components
  • High-cost-per-socket blade systems are being replaced by commodity compute components
  • Dedicated monitoring and management devices are being replaced by software tools and thoughtfully architected applications
  • Hot-swappable devices for high availability are being replaced by streamlined hardware configurations
  • Redundant power supplies are being eliminated
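The hardware-funds-software economics behind these trade-offs can be made concrete with a back-of-envelope sketch. Every dollar figure below is a hypothetical assumption, chosen only to illustrate the shape of the calculation, not vendor pricing:

```python
# Hypothetical model of the trade described above: cheaper commodity nodes
# fund a one-time software investment (e.g., fault recovery in software).
# All figures are illustrative assumptions.

def unit_cost(hardware_per_node, software_investment, nodes):
    """Cost per compute node once one-time software work is amortized."""
    return hardware_per_node + software_investment / nodes

# Premium blade design: expensive nodes, little custom software required.
premium = unit_cost(hardware_per_node=25_000, software_investment=0,
                    nodes=10_000)

# Commodity design: cheap nodes, but the application is rewritten to
# handle recovery in software (a one-time engineering cost).
commodity = unit_cost(hardware_per_node=8_000, software_investment=20_000_000,
                      nodes=10_000)

print(f"premium:   ${premium:,.0f} per node")    # $25,000 per node
print(f"commodity: ${commodity:,.0f} per node")  # $10,000 per node
```

At deployment sizes in the thousands of nodes, the one-time software cost amortizes away and the commodity design wins comfortably; at small scale the same software investment can dominate, which is why this approach began with hyperscale operators.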

The best visualization of this unit-cost-of-computing design is Google's original 1998 platform, which racked individual components on bare trays rather than purchasing machine cases.

Previously, creating an optimized hyperscale compute fabric required a full staff of hardware, network, application, systems, and facilities engineers to drive out the costs. Today, firms are using hyperscale designs to create private cloud solutions affordable for the small and medium-sized business market, or for business units within large firms. Companies working in this space aim to deliver the best price-performance per IOPS with highly scalable infrastructure solutions.

Ideally, this architecture comes as a single unit that combines converged networking, a mix of local and network-attached storage, and management software in a small form factor. A handful of innovative vendors offer such solutions today. Customers adopting them enjoy an extremely low cost commitment: a minimally configured system can run a base level of virtual machines in a private, dedicated system, with the potential to scale as needed. Hyperscale designs also work well in large-scale deployments running hundreds of thousands of virtual machines.

The mid-scale cloud market, comprising tens of thousands of virtual machines, is also an interesting space. Currently, mid-market integrated private cloud offerings require large upfront costs, plus ongoing operational costs for the dedicated staff needed to manage and maintain complicated compute, storage, and networking, on top of expensive per-socket and per-port hardware. Buyers in this space should certainly ask vendors for the cost per VM and the cost per terabyte of storage before they purchase, and determine what skills are required to maintain such an infrastructure. At this point, mid-scale solutions look obsolete: evolving hyperscale formats require lower cost commitments and deliver high price-performance, with compute, network, and storage working in concert.
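The cost-per-VM and cost-per-terabyte questions a buyer should ask can be reduced to simple arithmetic. A minimal sketch, where every input value is a hypothetical assumption rather than a quote from any vendor:

```python
# Hypothetical fully loaded cost comparison for private cloud offerings.
# All input values are illustrative assumptions.

def cost_per_vm(capex, annual_opex, staff_cost_per_year, years, vm_count):
    """Fully loaded cost per VM over the system's planned lifetime."""
    total = capex + years * (annual_opex + staff_cost_per_year)
    return total / vm_count

def cost_per_tb(capex_storage, usable_tb):
    """Up-front storage cost per usable terabyte."""
    return capex_storage / usable_tb

# Mid-scale offering: big up-front spend plus dedicated operations staff.
mid_scale = cost_per_vm(capex=2_000_000, annual_opex=150_000,
                        staff_cost_per_year=400_000, years=3, vm_count=10_000)

# Hyperscale-style offering: lower entry cost, management done in software.
hyperscale = cost_per_vm(capex=600_000, annual_opex=100_000,
                         staff_cost_per_year=100_000, years=3, vm_count=10_000)

print(f"mid-scale:  ${mid_scale:,.2f} per VM")   # $365.00 per VM
print(f"hyperscale: ${hyperscale:,.2f} per VM")  # $120.00 per VM
print(f"storage:    ${cost_per_tb(450_000, 1_500):,.2f} per TB")
```

The point of the exercise is not the specific numbers but the habit: staff and operational costs must appear in the denominator-adjusted total, or the mid-scale offering's true cost per VM is badly understated.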

Turning to application considerations, hyperscale architecture is a natural platform for applications designed to leverage its key features: horizontal scalability (for high throughput and increased performance) and redundancy (for high availability and fault tolerance). The earlier monolithic architectures described above took a different approach to performance and reliability. Data access performance and high availability relied on premium storage area networks with expensive optical connectivity and recovery features; compute performance relied on high-cost-per-socket blade systems and high-cost-per-port network switches.

The service orientation and "assumed failure" approach of cloud applications puts the burden of performance and reliability assurance on the application architecture. Constructing applications as collections of loosely coupled services allows greater performance through distributing and replicating services horizontally across commodity compute, network, and storage components. High availability is achieved the same way: application services are replicated across the hyperscale environment, and a failover mechanism redirects work to mirrored services when a service failure is detected.
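The "assumed failure" pattern can be sketched in a few lines. The replica names and the health check below are hypothetical placeholders; a real system would probe replicas over the network and coordinate state through a distributed service rather than in-process objects:

```python
# Minimal sketch of service replication with failover on failure detection.
# Replica records and the is_healthy probe are illustrative placeholders.

class ReplicatedService:
    def __init__(self, replicas):
        self.replicas = list(replicas)   # mirrored service instances
        self.active = self.replicas[0]   # replica currently serving traffic

    def is_healthy(self, replica):
        # Placeholder health check; a real system would issue a network
        # probe with a timeout instead of reading a local flag.
        return replica.get("healthy", False)

    def call(self, request):
        # Try the active replica first; on failure, fail over to a mirror.
        candidates = [self.active] + [r for r in self.replicas
                                      if r is not self.active]
        for replica in candidates:
            if self.is_healthy(replica):
                self.active = replica
                return f"{replica['name']} handled {request}"
        raise RuntimeError("all replicas failed")

svc = ReplicatedService([
    {"name": "svc-a", "healthy": False},  # active replica has failed...
    {"name": "svc-b", "healthy": True},   # ...so work fails over here
])
print(svc.call("GET /orders"))  # svc-b handled GET /orders
```

Note that the caller never learns svc-a failed; the failover is absorbed inside the service layer, which is exactly the burden the "assumed failure" approach places on application architecture.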

Several additional benefits flow from this synergy between hyperscale architecture and applications designed to leverage it. On the performance side, system monitoring software can be configured to detect business-policy-driven performance thresholds and automatically scale or contract services accordingly. A similar strategy works for high availability: should the number of redundant backup services fall below a threshold, additional backups can be launched before there is any danger of service disruption. Without going into exhaustive detail, another hyperscale benefit is the ease with which applications and platform components can be patched and replaced without service disruption. Finally, the same mechanism that applies patches and replaces platforms makes it easy to test and launch new features in line with the company's business strategy.
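A policy engine of this kind reduces to a small control loop. The sketch below uses assumed policy values and is not modeled on any particular monitoring product:

```python
# Sketch of business-policy-driven scaling and high-availability checks.
# The three thresholds are illustrative policy values, not recommendations.

SCALE_UP_UTIL = 0.80    # add capacity above 80% utilization
SCALE_DOWN_UTIL = 0.30  # contract below 30% utilization
MIN_BACKUPS = 2         # never run with fewer than 2 standby replicas

def scaling_decision(utilization, backups):
    """Return the actions a monitoring loop would take on this tick."""
    actions = []
    if utilization > SCALE_UP_UTIL:
        actions.append("scale_out")
    elif utilization < SCALE_DOWN_UTIL:
        actions.append("scale_in")
    if backups < MIN_BACKUPS:
        # Launch replacements before service disruption becomes possible.
        actions.append("launch_backup")
    return actions

print(scaling_decision(utilization=0.92, backups=1))  # ['scale_out', 'launch_backup']
print(scaling_decision(utilization=0.50, backups=3))  # []
```

The key design point is that the thresholds encode business policy, not infrastructure detail, so the same loop can be retuned as priorities change without touching the platform underneath.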

In conclusion, organizations across a wide variety of markets need robust, high-density servers at an entry price affordable to businesses of all sizes. A hyperscale architecture combined with well-designed applications gives enterprises a powerful tool for operating an agile business, staying ahead of the competition, and exploiting new business opportunities.

More Stories By Lee Thompson

Lee Thompson is passionate about using cutting-edge technology to automate businesses, and was one of the key architects of E*TRADE FINANCIAL, using technology to make financial services products affordable for everyone. Lee currently brings his broad experience to Morphlabs as Chief Technology Officer, and to dev2ops.org, where he is a contributor.


