The Dirty Truth About Efficiency in Hyperconvergence By @ACConboy | @CloudExpo #Cloud

The discussion on efficiency can be extended through every single part of a vendor's architectural and business choices

BusinessDictionary defines efficiency as a comparison of what is actually produced or performed with what could be achieved with the same consumption of resources (money, time, labor, design, etc.). For example: the designers needed to revise the product specifications because the complexity of its parts reduced the efficiency of the product.

In technology today, we constantly hear "efficiency" used as a marketing term by people who have never actually looked under the hood of the technology in question at how the architecture really works. Efficiency has become a buzzword, invoked without any real evaluation of whether the product in question does what it was intended to do better than the alternatives on the market. There are far too many vendors saying, "trust us, ours is the most efficient..."

Sadly, a quote from Paul Graham all too often comes to mind when dealing with vendors and their architectural choices in new and rapidly growing market segments such as Hyperconvergence:

"In a rapidly growing market, you don't worry too much about efficiency. It's more important to grow fast. If there's some mundane problem getting in your way, and there's a simple solution that's somewhat expensive, just take it and get on with more important things."

Understand that when the technology at hand is a Hyperconverged infrastructure (HCI), the importance of this term "Efficiency" cannot be overstated. Bear in mind that what a hyperconverged vendor (or cloud vendor, or converged vendor) is actually doing is taking responsibility for all of the architectural decisions and choices that their customers would have made for themselves in the past. All of the decisions around which parts to use in the solution, how many parts there should be, how best to utilize them, what capabilities the end product should have, and what the end user should be able to do (and what - in their opinion - they don't need to be able to do) with the solution.

The choices behind the different approaches you see in the HCI market today can have profound effects on the efficiency (and resulting cost) of the end product. All too often, shortcuts are taken that radically decrease efficiency in the name of expediency (see the Paul Graham quote above). Like cloud and flash before it, the HCI space is seen as a 'land grab' by several vendors, and getting to market trumps how they get there. For some vendors, those fundamental decisions take a back seat to getting their sales and marketing machines moving.

The IO Path
One great example of technology moving forward is SSD and flash. Used properly, they can radically improve performance and reduce power consumption. However, several HCI vendors use SSD and flash as an essential buffer to hide very slow IO paths between virtual machines, VSAs (just another VM), and the underlying disks - creating what amounts to a Rube Goldberg machine of an IO path, one that consumes 4 to 10 disk IOs or more for every IO the VM needs done - rather than using flash and SSD as proper tiers with a QoS-like mechanism that automatically puts the right workloads in the right place at the right time, with the flexibility to move those workloads fluidly between tiers.

Any architecture that REQUIRES flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in IO speeds best described as glacial, the vendor is hardly being efficient in its use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
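To make the amplification point concrete, here is a minimal back-of-envelope sketch. The 4-10x amplification range comes from the article; the specific workload figures (2,000 front-end IOPS, a 6x path) are hypothetical examples, not measurements of any particular vendor.

```python
def backend_iops(vm_iops: float, amplification: float) -> float:
    """Disk IOs the underlying storage must service for a given
    front-end VM workload and IO-path amplification factor."""
    return vm_iops * amplification

# A direct IO path (amplification ~1x) vs. a layered VSA path (say, 6x):
# a VM pushing 2,000 IOPS generates 2,000 back-end IOs/s on the direct
# path, but 12,000 back-end IOs/s through the amplified path.
direct = backend_iops(2000, 1.0)
layered = backend_iops(2000, 6.0)
print(f"direct path: {direct:.0f} IOs/s, layered path: {layered:.0f} IOs/s")
```

That back-end multiple is exactly the load a flash buffer ends up absorbing - which is why the architecture feels fast with flash on and glacial with it off.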

Disk Controller Approaches
In other cases, rather than handing disk subsystems - with SAN flexibility built in at the block level - directly to production VMs, vendors choose to simply virtualize a SAN controller and pull the legacy SAN and storage protocols up into the servers as a separate VM. This creates several IO-path loops, with IOs passing multiple times through VMs on the local and adjacent systems, maintaining and magnifying the overhead of storage protocols (and their foibles). This approach of using storage controller VMs (sometimes called VSAs, or Virtual Storage Appliances) also consumes significant CPU and RAM that could otherwise power additional virtual machines in the architecture. Many vendors have gone this route because of the forced lock-in and lack of flexibility of legacy virtualization approaches.

Essentially, the VSA is a shortcut solution to the 'legacy of lock-in' problem. In one case I can think of, the VSA running on each server (or node) in a vendor's architecture BEGINS its RAM consumption at 16GB and 4 vCores per node, then grows based on how much additional feature implementation, IO loading, and maintenance it has to do (see above on the IO path). With a different vendor, the VSA reserves over 43GB per node on the entry-point offering, and over 100GB of RAM per node on the most common platform - a three-node cluster reserving 300GB of RAM just for IO-path overhead. An average SMB customer could run their entire operation in just the CPU and RAM these VSAs consume. This approach may make sense for a large enterprise that won't miss the consumed resources because the features the VSA offers outweigh the loss, but that is very much not the case for the average SMB customer. Again, not a paragon of the efficiency required in today's SMB and mid-market IT environments.
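The arithmetic behind that claim is worth spelling out. This sketch totals VSA overhead across a cluster using the per-node figures quoted above; the three-node cluster size is the article's example, and the helper function name is mine.

```python
def cluster_overhead(nodes: int, ram_gb_per_node: float,
                     vcores_per_node: int = 0) -> dict:
    """Total resources a cluster reserves for its VSAs - capacity
    that is unavailable for running actual customer workloads."""
    return {"ram_gb": nodes * ram_gb_per_node,
            "vcores": nodes * vcores_per_node}

# Vendor A: VSA starts at 16GB RAM + 4 vCores per node.
# Across three nodes: 48GB RAM and 12 vCores gone before any VM boots.
print(cluster_overhead(3, 16, 4))

# Vendor B: over 100GB RAM reserved per node on the common platform.
# A three-node cluster burns 300GB of RAM just on IO-path overhead.
print(cluster_overhead(3, 100))
```

For scale: 300GB of RAM is more than many SMBs provision for their entire virtualized environment.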

This discussion of efficiency can be extended through every single part of a vendor's architectural and business choices - from hardware, hypervisor, management, and disk to the way they run their business. All of these choices have a dramatic impact on the capabilities, performance, cost, and usability of the final product. As technology consumers in the modern SMB datacenter, we need to look beyond the marketeering to truly vet the choices being made for us.

The really disheartening part is when the vendors in question choose to hide their choices, bury them in overly technical manuals, or claim that a given 'new' technical approach is a panacea for every use case - without mentioning that the 'new' approach is really just a marketing rename (read: buzzword) of something slightly modified from what has been around for decades and is already widely known to be appropriate only for very specific uses. Even worse, some simply come out of the gate with the arrogance of "Trust us, our choices are best."

At the end of the day, it is always best to think of efficiency first, in all the areas it touches within your environment and the proposed architecture. Think of efficiency for what it really is, consider how it impacts use, and ask your vendor for specifics. You may well be surprised by the answers you get to the real questions.

More Stories By Alan Conboy

A 20-year industry veteran, Alan Conboy is a technology evangelist who specializes in designing, prototyping, selling, and implementing disruptive storage and virtualization technologies targeted at the SMB and mid-market. He was a first mover in the x86/x64 hyperconvergence space and one of the first 30 people ever certified by SNIA.
