The Dirty Truth About Efficiency in Hyperconvergence By @ACConboy | @CloudExpo #Cloud

The discussion on efficiency can be extended through every single part of a vendor's architectural and business choices

The business dictionary defines efficiency as the comparison of what is actually produced or performed with what can be achieved with the same consumption of resources (money, time, labor, design, etc.). For example: the designers needed to revise the product specifications because the complexity of its parts reduced the efficiency of the product.
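
Expressed as a ratio, that definition is easy to reason about. A minimal sketch in Python, with hypothetical numbers purely for illustration:

    def efficiency(actual_output, achievable_output):
        """Ratio of what is actually produced to what the same
        resources could have produced (1.0 = perfectly efficient)."""
        return actual_output / achievable_output

    # A product whose part complexity limits it to 70 units out of an
    # achievable 100 is running at 70% efficiency.
    print(efficiency(70, 100))  # 0.7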

In technology today we constantly hear efficiency used as a marketing term by folks who have never actually looked under the hood of the technology in question to see how its architecture really works. Efficiency is thrown around as a buzzword without any real evaluation of what it means in context: how well does the product actually do what it was intended to do, compared to the alternatives in the market? There are far too many vendors saying "trust us, ours is the most efficient..."

Sadly, a quote from Paul Graham all too often comes to mind when dealing with vendors and their architectural choices in new and rapidly growing market segments such as Hyperconvergence:

"In a rapidly growing market, you don't worry too much about efficiency. It's more important to grow fast. If there's some mundane problem getting in your way, and there's a simple solution that's somewhat expensive, just take it and get on with more important things."

Understand that when the technology at hand is hyperconverged infrastructure (HCI), the importance of the term "efficiency" cannot be overstated. Bear in mind that what a hyperconverged vendor (or cloud vendor, or converged vendor) is actually doing is taking responsibility for all of the architectural decisions and choices that their customers would have made for themselves in the past: which parts to use in the solution, how many parts there should be, how best to utilize them, what capabilities the end product should have, and what the end user should be able to do (and what, in their opinion, they don't need to be able to do) with the solution.

Likewise, the choices behind the different approaches you see in the HCI market today can have profound effects on the efficiency (and resulting cost) of the end product. All too often, shortcuts are taken that radically decrease efficiency in the name of expediency (see the Paul Graham quote above). Like cloud and flash before it, the HCI space is seen as a 'land grab' by several vendors, and getting to market trumps how they get there. With some vendors, those fundamental decisions take a back seat to getting their sales and marketing machines moving.

The IO Path
One great example of technology moving forward is SSD and flash. Used properly, they can radically improve performance and reduce power consumption. However, several HCI vendors use SSD and flash as an essential buffer to hide the very slow IO path between virtual machines, VSAs (just another VM), and the underlying disks, creating what amounts to a Rube Goldberg machine of an IO path: one that consumes 4 to 10 disk IOs or more for every IO the VM needs done. The alternative is to use flash and SSD as proper tiers, with a QoS-like mechanism in place to automatically put the right workloads in the right place at the right time, and the flexibility to move those workloads fluidly between tiers. Any architecture that REQUIRES flash just to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in IO speeds best described as glacial, then the vendor is hardly being efficient in its use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
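
To make the cost of that amplification concrete, here is a rough back-of-the-envelope sketch. The 4x and 10x factors come from the paragraph above; the raw backend IOPS figure is an assumption for illustration only:

    # Rough model of IO amplification in a buffered, VSA-style IO path,
    # where each guest IO fans out into several backend disk IOs.
    def effective_guest_iops(raw_backend_iops, amplification):
        """Guest-visible IOPS once every guest IO costs 'amplification' backend IOs."""
        return raw_backend_iops / amplification

    raw = 20_000  # assumed aggregate backend disk IOPS for a node
    for amp in (1, 4, 10):
        print(f"{amp}x amplification -> {effective_guest_iops(raw, amp):,.0f} guest IOPS")
    # 1x  -> 20,000 (direct block path)
    # 4x  ->  5,000
    # 10x ->  2,000 (why a flash buffer becomes mandatory)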

Disk Controller Approaches
In other cases, rather than handing disk subsystems with SAN flexibility built in at the block level directly to production VMs, you see vendors choosing to simply virtualize a SAN controller and pull the legacy SAN and storage protocols up into the servers as a separate VM. This causes several IO path loops, with IOs having to pass multiple times through VMs in the system and in adjacent systems, maintaining and magnifying the overhead of storage protocols (and their foibles). Likewise, this approach of using storage controller VMs (sometimes called VSAs, or Virtual Storage Appliances) often consumes significant CPU and RAM that could otherwise power additional virtual machines in the architecture. Many vendors have gone this route because of the forced lock-in and lack of flexibility caused by legacy virtualization approaches.
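
A crude way to picture those loops is to total the latency of every hop an IO makes. The hop lists and microsecond figures below are purely illustrative assumptions, not measurements of any vendor's product:

    # Illustrative latency budget for one write IO: a VSA-style path
    # versus a direct block path. All microsecond figures are assumed.
    vsa_path_us = {
        "guest VM -> hypervisor": 20,
        "hypervisor -> VSA via storage protocol": 100,
        "VSA -> local disk subsystem": 150,
        "VSA -> replica VSA on adjacent node": 300,
        "acknowledgements back up the stack": 120,
    }
    direct_path_us = {
        "guest VM -> hypervisor": 20,
        "hypervisor block layer -> disk": 150,
    }
    for name, path in (("VSA path", vsa_path_us), ("direct path", direct_path_us)):
        print(f"{name}: ~{sum(path.values())} microseconds per IO")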

Essentially, the VSA is a shortcut solution to the 'legacy of lock-in' problem. In one case I can think of, the VSA running on each server (or node) in a vendor's architecture BEGINS its RAM consumption at 16GB and 4 vCores per node, then grows based on how much additional feature implementation, IO loading, and maintenance it has to do (see above on the IO path). With a different vendor, the VSA reserves over 43GB of RAM per node on the entry-point offering, and over 100GB per node on the most common platform: a 3-node cluster reserving 300GB of RAM just for IO path overhead. An average SMB customer could run its entire operation in just the CPU and RAM these VSAs consume. This approach may make sense for a large enterprise that won't miss the consumed resources because the features the VSA offers outweigh the loss, but that is very much not the case for the average SMB customer. Again, not a paragon of the efficiency required in today's SMB and mid-market IT environments.
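
Using only the figures quoted above, that overhead is simple to total up. The per-VM sizing used for comparison is an assumption about a typical SMB workload:

    # VSA resource overhead across a cluster, using figures from the text.
    nodes = 3
    vsa_ram_gb_per_node = 100   # reserved per node on the vendor's common platform
    cluster_overhead_gb = nodes * vsa_ram_gb_per_node
    print(f"RAM consumed by VSAs alone: {cluster_overhead_gb} GB")   # 300 GB

    typical_smb_vm_gb = 8       # assumed RAM for a typical SMB server VM
    print(f"SMB-sized VMs displaced: {cluster_overhead_gb // typical_smb_vm_gb}")  # 37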

This discussion on efficiency can be extended through every single part of a vendor's architectural and business choices, from hardware, hypervisor, management, and disk to the way it runs its business. All of these choices have dramatic impacts on the capabilities, performance, cost, and usability of the final product. As technology consumers in the modern SMB datacenter, we need to look beyond the marketeering and truly vet the choices being made for us.

The really disheartening part is when the vendors in question choose to hide their choices, bury them in overly technical manuals, or claim that a given 'new' technical approach is a panacea for every use case, without mentioning that the 'new' approach is really just a marketing rename (read: buzzword) of a slightly modified version of something that has been around for decades and is already widely known to be appropriate only for very specific uses. Even worse, they simply come out of the gate with the arrogance of "Trust us, our choices are best."

At the end of the day, it is always best to think of efficiency first, in all the areas it touches within your environment and the proposed architecture. Think of it for what it really is, consider how it impacts use, and ask your vendor for specifics. You may well be surprised by the answers you get to the real questions.

More Stories By Alan Conboy

A 20-year industry veteran, Alan Conboy is a technology evangelist who specializes in designing, prototyping, selling, and implementing disruptive storage and virtualization technologies targeted at the SMB and mid-market. He was a first mover in the x86/x64 hyperconvergence space and is one of the first 30 people ever certified by SNIA.
