Hyperscale Computing Driving Small-Scale Designs

Are mid-scale offerings soon to be obsolete?

Multi-million-user social networks, cloud hosting, Internet search, and Big Data problems such as meteorology, complex physics, and business informatics all share one basic need: each requires an incredibly large, complex, and varied computer platform. A common requirement across these systems is to "optimize the unit cost of computing." At this degree of hyperscale computing, the network, systems, software, facility, and maintenance add up to tens or hundreds of millions of dollars per project, and optimizing a single element, or coordinating several, can save the business millions. A good example of this holistic approach is Facebook's Open Compute Project, which made its data centers 38% more energy-efficient and 24% less expensive to build.

Just as the racing technology from Indy, F1, and NASCAR ends up in passenger vehicles, the hyperscale compute innovations we're seeing in juggernauts like Facebook will end up as line-item part numbers from vendors, available to everyone. The timing couldn't be better: solid-state drives are becoming affordable, and most enterprises are ramping up private cloud initiatives within their firms.

In a hyperscale design, premium computing constructs (like those seen in blade systems) are normally abandoned in favor of stripped-down commodity designs that do the job at a fraction of the price. Because of the size of the deployment, rewriting an application to take advantage of the commodity compute fabric, or moving a task once done in purpose-built hardware into custom software (e.g., disaster fault recovery), becomes cost-effective. Essentially, the decreased investment in hardware easily funds the software investment. So what design elements are being abandoned in favor of hyperscale computing?

Examples of the complex, monolithic design elements being abandoned:

  • Premium storage array networks with expensive optical connectivity and recovery features are being replaced with a mix of locally attached and network-attached storage, eliminating the heavy burden on the storage network
  • Dedicated compute, management, and storage networks are being replaced, favoring virtual LANs that reduce cabling and network costs
  • High cost per port network switching is being replaced, favoring commodity network components
  • High cost per socket blade systems are being replaced, favoring commodity compute components
  • Devices for monitoring and management are being replaced, favoring software tools and thoughtfully architected applications
  • Hot-swappable devices for high availability are being replaced, favoring streamlined hardware configuration
  • Redundant power supplies are being eliminated

The best visualization of this unit-cost-of-computing design is Google's original platform from 1998, which integrated individual parts without even purchasing machine cases.

Previously, creating an optimized hyperscale compute fabric required a full staff of hardware, network, application, systems, and facilities engineers to drive out the costs. Today, firms are using hyperscale designs to create private cloud solutions affordable for small to medium-sized businesses, or for business units within large firms. Companies working in this space aim to deliver the best price/performance per IOPS through highly scalable private cloud infrastructure.

Ideally, this architecture comes as a single unit combining converged networking, a mix of local and network-attached storage, and management software in a small form factor. A handful of innovative vendors offer these solutions today. Customers adopting this type of solution enjoy an extremely low cost of entry: a minimally configured system can run a base level of virtual machines in a private, dedicated system, with the potential to scale as needed. Hyperscale designs also work well in large-scale deployments running hundreds of thousands of virtual machines.

The mid-scale cloud market, comprising tens of thousands of virtual machines, is also an interesting space. Currently, mid-market integrated private cloud offerings require large upfront costs, plus ongoing operational costs for dedicated staff to manage and maintain the complicated compute, storage, and networking, in addition to the expensive per-socket and per-port hardware. Buyers in this space should certainly ask vendors for the cost per VM and the cost per terabyte of storage before they purchase, and should determine what skills are required to maintain an infrastructure of that kind. At this point, the mid-scale solutions look obsolete: evolving hyperscale formats require lower cost commitments and deliver high price/performance, coupled with compute, network, and storage cooperation.
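The cost-per-VM and cost-per-terabyte questions above reduce to simple total-cost-of-ownership arithmetic. A minimal sketch, with purely illustrative figures (the dollar amounts and VM counts are assumptions, not vendor quotes):

```python
# Hypothetical TCO comparison between a mid-scale integrated offering and a
# hyperscale-style commodity build. All figures are illustrative assumptions.

def cost_per_vm(capex, annual_opex, years, vm_count):
    """Total cost of ownership spread across the VMs it supports."""
    return (capex + annual_opex * years) / vm_count

def cost_per_tb(capex, annual_opex, years, usable_tb):
    """Total cost of ownership spread across usable storage."""
    return (capex + annual_opex * years) / usable_tb

# Assumed numbers over a 3-year horizon for a 10,000-VM footprint:
mid_scale = cost_per_vm(capex=2_000_000, annual_opex=400_000, years=3, vm_count=10_000)
hyperscale = cost_per_vm(capex=900_000, annual_opex=150_000, years=3, vm_count=10_000)

print(f"mid-scale:  ${mid_scale:,.2f} per VM")   # → mid-scale:  $320.00 per VM
print(f"hyperscale: ${hyperscale:,.2f} per VM")  # → hyperscale: $135.00 per VM
```

The point is not the specific numbers but that the buyer can, and should, force every quote into this common denominator before comparing offerings.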

Turning to application considerations: hyperscale architecture is a natural platform for applications designed to leverage its key features - horizontal scalability (for high throughput and increased performance) and redundancy (for high availability and fault tolerance). Earlier architectures, as noted above, took a different approach to performance and reliability. Data access performance and high availability relied on premium storage array networks with expensive optical connectivity and recovery features, while compute performance relied on high-cost-per-socket blade systems and high-cost-per-port network switches.

The service orientation and "assumed failure" approach to cloud applications puts the burden of performance and reliability assurance on the application architecture. By constructing applications as a collection of loosely coupled services, greater performance can be achieved by distributing and replicating services horizontally across commodity compute, network, and storage components. High availability can also be achieved in a similar fashion by replicating application services across the hyperscale environment and introducing a failover mechanism to mirrored services upon service failure detection.
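The "assumed failure" failover pattern described above can be sketched in a few lines. This is a minimal illustration, not a production client; the endpoint addresses and the transport stub are hypothetical:

```python
# Sketch of client-side failover across replicated service instances:
# try replicas in random order and return the first successful response.

import random

class ReplicatedService:
    def __init__(self, endpoints):
        self.endpoints = list(endpoints)

    def call(self, request, call_fn):
        """Try each replica in random order; return the first success."""
        errors = []
        for endpoint in random.sample(self.endpoints, len(self.endpoints)):
            try:
                return call_fn(endpoint, request)
            except ConnectionError as exc:
                errors.append((endpoint, exc))  # replica failed; try the next
        raise RuntimeError(f"all replicas failed: {errors}")

# Usage with a stub transport in which one replica is down:
def fake_transport(endpoint, request):
    if endpoint == "10.0.0.2:8080":
        raise ConnectionError("replica down")
    return f"{endpoint} handled {request}"

svc = ReplicatedService(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print(svc.call("GET /orders", fake_transport))
```

Randomizing the replica order doubles as crude load spreading; a real system would layer health checks and backoff on top, but the core idea - detect failure, route to a mirror - is exactly this loop.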

It's important to note several additional benefits of this synergy between the hyperscale architecture and applications designed to leverage it. From a performance standpoint, system monitoring software can easily be configured to detect business-policy-driven performance thresholds and automatically scale or contract services based on such policies. A similar strategy can be established for high availability: should the number of redundant backup services fall below a certain threshold, additional backup services can be launched before any danger of service disruption is reached. Without going into exhaustive detail, another hyperscale benefit is the ease with which applications and platform components can be patched and replaced without service disruption. Finally, the same mechanism by which patches are applied and platforms are replaced makes it easy to test and launch new features in line with the company's business strategy.
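The policy-driven scale-out/scale-in decision above amounts to comparing a monitored metric against thresholds each evaluation cycle. A minimal sketch, assuming CPU utilization as the metric and illustrative threshold values:

```python
# One evaluation cycle of a policy-driven autoscaler: compare a monitored
# metric against business-policy thresholds and adjust the replica count.
# The thresholds and doubling/halving strategy are assumptions for illustration.

def desired_replicas(current, cpu_utilization,
                     scale_up_at=0.75, scale_down_at=0.30,
                     min_replicas=2, max_replicas=100):
    """Return the replica count for the next cycle under the given policy."""
    if cpu_utilization > scale_up_at:
        return min(current * 2, max_replicas)   # scale out under load
    if cpu_utilization < scale_down_at:
        return max(current // 2, min_replicas)  # contract when idle
    return current                              # within the policy band

print(desired_replicas(current=8, cpu_utilization=0.90))  # → 16
print(desired_replicas(current=8, cpu_utilization=0.10))  # → 4
print(desired_replicas(current=8, cpu_utilization=0.50))  # → 8
```

The `min_replicas` floor is the same mechanism as the high-availability policy in the text: it guarantees a minimum number of redundant instances regardless of load.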

In conclusion, organizations across a wide variety of markets require robust, high-density servers at an entry price affordable to businesses of all sizes. A hyperscale architecture combined with well-designed applications gives enterprises a powerful tool to operate an agile business, staying ahead of the competition and exploiting new business opportunities to their advantage.

More Stories By Lee Thompson

Lee Thompson is passionate about using cutting-edge technology to automate businesses, and was one of the key architects of E*TRADE FINANCIAL, using technology to price financial services products affordable for everyone. Lee currently brings his broad experience to Morphlabs as Chief Technology Officer, and to dev2ops.org, where he is a contributor.

