Hyperscale Computing Driving Small-Scale Designs

Are mid-scale offerings soon to be obsolete?

Multi-million-user social networks, cloud hosting, Internet search, and Big Data problems such as meteorology, complex physics, and business informatics all share one basic need: each requires incredibly large, complex, and varied computer platforms. A common requirement across these systems is to "optimize the unit cost of computing." At this degree of hyperscale computing, the network, systems, software, facility, and maintenance add up to tens or hundreds of millions of dollars per project, and optimizing a single element, or coordinating multiple elements, can save the business millions. A good example of this holistic approach is Facebook's Open Compute Project, which made the company's data center 38% more energy efficient and 24% less expensive to build.

Much as racing technology from Indy, F1, and NASCAR ends up in passenger vehicles, the hyperscale compute innovations we're seeing in juggernauts like Facebook will end up as line-item part numbers from vendors, available to everyone. The timing couldn't be better: solid-state drives are becoming affordable, and most enterprises are ramping up private cloud initiatives within their firms.

In a hyperscale design, premium computing constructs (like those seen in blade systems) are normally abandoned in favor of stripped-down commodity designs that do the job at a fraction of the price. Because of the size of the deployment, rewriting an application to take advantage of the commodity compute fabric, or moving a task once done in purpose-built hardware into custom software (e.g., disaster fault recovery; see the sketch after the list below), becomes cost-effective. Essentially, the savings on hardware comfortably fund the software investment. So what design elements are being abandoned in favor of hyperscale computing?

Examples of the complex monolithic designs being abandoned:

  • Premium storage area networks with expensive optical connectivity and recovery features are being replaced with a mix of locally attached and network-attached storage, eliminating the heavy burden on the storage network
  • Dedicated compute, management, and storage networks are being replaced, favoring virtual LANs that reduce cabling and network costs
  • High-cost-per-port network switching is being replaced, favoring commodity network components
  • High-cost-per-socket blade systems are being replaced, favoring commodity compute components
  • Devices for monitoring and management are being replaced, favoring software tools and thoughtfully architected applications
  • Hot-swappable devices for high availability are being replaced, favoring streamlined hardware configurations
  • Redundant power supplies are being eliminated
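
To make the recovery-in-software point concrete, here is a minimal Python sketch of failover logic moving out of hardware. The replica addresses and the send callable are illustrative placeholders rather than any vendor's actual API; the point is that recovery a premium array or blade chassis once handled in hardware becomes a few lines of application code running across commodity nodes.

```python
import random
import time

# Illustrative only: three interchangeable replicas on commodity nodes.
REPLICAS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def call_with_failover(send, request, replicas=REPLICAS, retries_per_node=2):
    """Try replicas in random order, retrying briefly before moving on.

    `send` is any callable of the form send(node, request) -> response;
    it stands in for whatever transport the application actually uses.
    """
    for node in random.sample(replicas, len(replicas)):
        for attempt in range(retries_per_node):
            try:
                return send(node, request)
            except ConnectionError:
                time.sleep(0.1 * (attempt + 1))  # back off, then try again
    raise RuntimeError("all replicas failed")
```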

The best visualization of this kind of unit-cost-of-computing design is the Google platform from 1998, which integrated individual parts without even the purchase of machine cases.

Previously, creating an optimized hyperscale compute fabric required a full staff of hardware, network, application, systems, and facilities engineers to drive out the costs. Today, firms are using hyperscale designs to create private cloud solutions affordable for the small and medium-sized business market, or for business units within large firms. Companies working in this space aim to create the highest performance per IOP in a private cloud solution, delivering highly scalable infrastructure.

Ideally, this architecture comes as a single unit that combines converged networking, a mix of local and network-attached storage, and management software in a small form factor. A handful of innovative vendors offer these solutions today. Customers adopting them enjoy an extremely low cost commitment: a minimally configured system can run a base level of virtual machines in a private, dedicated system, with the potential to scale as needed. Hyperscale designs also work well in large-scale deployments running hundreds of thousands of virtual machines.

The mid-scale cloud market, running tens of thousands of virtual machines, is also an interesting space. Currently, mid-market integrated private cloud offerings require large upfront costs, plus ongoing operational costs for dedicated staff to manage and maintain the complicated compute, storage, and networking, in addition to the expensive per-socket and per-port hardware. Buyers in this space should certainly be asking vendors for the cost per VM and the cost per terabyte of storage before they purchase, as well as determining the skills required to maintain such an infrastructure. At this point, the mid-scale solutions look obsolete: evolving hyperscale formats require lower cost commitments and deliver high price/performance along with coordinated compute, network, and storage.
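
As a back-of-the-envelope illustration of those two questions, consider the figures below. They are invented for arithmetic's sake, not vendor quotes, but the calculation is exactly what a buyer should run against any real proposal.

```python
# Hypothetical mid-scale private cloud quote (all figures illustrative).
upfront_hardware = 1_200_000        # dollars
annual_staff_and_support = 400_000  # dollars per year
years = 3
vm_capacity = 10_000                # virtual machines
usable_storage_tb = 2_000           # terabytes

total_cost = upfront_hardware + annual_staff_and_support * years
print(f"cost per VM: ${total_cost / vm_capacity:,.0f}")        # cost per VM: $240
print(f"cost per TB: ${total_cost / usable_storage_tb:,.0f}")  # cost per TB: $1,200
```

Running the same numbers against a hyperscale-style offering makes the comparison, and the obsolescence argument, easy to test.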

On the application side, hyperscale architecture is a natural platform for applications designed to leverage its key features: horizontal scalability (for high throughput and increased performance) and redundancy (for high availability and fault tolerance). The earlier monolithic architectures described above took a different approach to performance and reliability. Data access performance and high availability relied on premium storage area networks with expensive optical connectivity and recovery features; compute performance relied on high-cost-per-socket blade systems and high-cost-per-port network switches.

The service orientation and "assumed failure" approach of cloud applications puts the burden of performance and reliability on the application architecture. By constructing applications as collections of loosely coupled services, greater performance can be achieved by distributing and replicating services horizontally across commodity compute, network, and storage components. High availability can be achieved in similar fashion by replicating application services across the hyperscale environment and failing over to mirrored services when a failure is detected.
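
Here is a minimal sketch of that detect-and-failover pattern in Python. Heartbeat bookkeeping stands in for whatever monitoring transport a real system would use; the class, timeout, and names are illustrative assumptions, not a specific product's mechanism.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat => mirror assumed failed

class MirroredService:
    """Route requests only to mirrors heard from recently (assumed failure)."""

    def __init__(self, mirrors):
        self.last_heartbeat = {m: time.monotonic() for m in mirrors}

    def record_heartbeat(self, mirror):
        # Called whenever a mirror checks in over the monitoring channel.
        self.last_heartbeat[mirror] = time.monotonic()

    def pick_mirror(self):
        # Failover is implicit: dead mirrors simply stop being chosen.
        now = time.monotonic()
        for mirror, seen in self.last_heartbeat.items():
            if now - seen < HEARTBEAT_TIMEOUT:
                return mirror
        raise RuntimeError("all mirrors presumed failed")
```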

Several additional benefits flow from this synergy between hyperscale architecture and applications designed to leverage it. On the performance side, system monitoring software can easily be configured to detect business-policy-driven performance thresholds and automatically scale services out or in based on those policies. A similar strategy works for high availability: should the number of redundant backup services fall below a threshold, additional backups can be launched before there is any danger of service disruption. Without going into exhaustive detail, another hyperscale benefit is the ease with which applications and platform components can be patched and replaced without service disruption. Finally, the same mechanism by which patches are applied and platforms are replaced makes it easy to test and launch new features in line with the company's business strategy.
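
The policy-driven scaling described above reduces to a small reconciliation loop. The thresholds in this Python sketch are invented for illustration; in practice they would encode the business policies mentioned, and a monitoring system would call the function periodically and act on the difference.

```python
# Illustrative policy constants, not real business thresholds.
SCALE_UP_AT = 0.80    # scale out above 80% average utilization
SCALE_DOWN_AT = 0.30  # scale in below 30%
MIN_BACKUPS = 2       # replace fallen backups before redundancy is exhausted

def reconcile(avg_utilization, instances, backups):
    """Return the desired instance and backup counts for one policy pass."""
    desired = instances
    if avg_utilization > SCALE_UP_AT:
        desired = instances + 1
    elif avg_utilization < SCALE_DOWN_AT and instances > 1:
        desired = instances - 1
    desired_backups = max(backups, MIN_BACKUPS)  # top up redundancy first
    return desired, desired_backups

print(reconcile(avg_utilization=0.92, instances=4, backups=1))  # (5, 2)
```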

In conclusion, organizations across a wide variety of markets require robust servers with high-density performance at an entry price affordable to businesses of all sizes. A hyperscale architecture combined with well-designed applications gives enterprises a powerful tool to operate an agile business, stay ahead of the competition, and exploit new business opportunities.

More Stories By Lee Thompson

Lee Thompson is passionate about using cutting-edge technology to automate businesses, and was one of the key architects of E*TRADE FINANCIAL, using technology to make financial services products affordable for everyone. Lee currently brings his broad experience to Morphlabs as Chief Technology Officer, and to dev2ops.org, where he is a contributor.

