
Measuring Cloud Storage Performance: Blocks vs. Files

What are some good reasons to adopt cloud storage?

Cost, durability and flexibility.

So let me talk about performance, instead.

Look at this graph:

[Figure: provider bandwidth measurements]

As part of our daily testing, we take routine performance measurements across a broad swath of cloud storage providers. These checks confirm that the various CloudArray subsystems are performing as they should, and they give us the data to make optimization decisions. In this particular test, we measure transfer rates at various buffer sizes. We “fill the pipe” by queueing up multiple streams of data simultaneously, initiating each transfer as soon as the previous one finishes, so that latency doesn’t skew the data.
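A minimal sketch of that measurement approach, assuming a generic `upload` callable standing in for a provider's PUT request (the function name and defaults here are illustrative, not CloudArray's actual harness):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_bandwidth(upload, buffer_size, n_transfers=64, n_streams=8):
    """Measure sustained transfer rate for a given buffer size.

    `upload` is a callable that sends one buffer to the provider
    (a stand-in for a real PUT request). Running several streams
    at once keeps the pipe full, so per-request latency doesn't
    dominate the measurement.
    """
    payload = b"\x00" * buffer_size
    per_stream = n_transfers // n_streams

    def one_stream():
        # Issue transfers back-to-back: each starts as soon as
        # the previous one on this stream completes.
        for _ in range(per_stream):
            upload(payload)

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        for _ in range(n_streams):
            pool.submit(one_stream)
    elapsed = time.monotonic() - start
    return per_stream * n_streams * buffer_size / elapsed  # bytes/sec
```

Sweeping `buffer_size` from a few kilobytes up to a few megabytes yields the bandwidth curves shown in the graphs.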

This particular provider reaches its peak bandwidth at 256K transfer size, and is actually transferring at 50% of peak at 32K.

But look at this graph:

[Figure: Microsoft file size distribution data]

This data comes from a Microsoft study published at FAST 2007. It describes the distribution of file sizes in a file system; interestingly, although the mean file size does creep up over the years of the study, the shape of the distribution changes very little. The key takeaway: in a typical file system, roughly 80% of the files are smaller than 32K.
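You can check that claim against your own data with a quick script like this (a simple sketch; the 32K threshold matches the study's figure):

```python
import os

def small_file_fraction(root, threshold=32 * 1024):
    """Fraction of files under `threshold` bytes in a directory tree."""
    small = total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip files that vanish or can't be stat'ed
            total += 1
            if size < threshold:
                small += 1
    return small / total if total else 0.0
```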

We can see that a naive system that simply maps files on the file system directly to objects in the cloud will spend 80% of its file transfers in the bottom half of the bandwidth curve, achieving less than 50% of the peak available bandwidth. And that ignores latency: our hypothetical naive system would have to stream files out at a perfectly synchronized pace to achieve even that theoretical maximum.
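The effect is easy to quantify. Given a file-size mix and a bandwidth-versus-size curve, the aggregate throughput of a naive mapper is total bytes moved divided by total time spent, which weights the small-transfer end of the curve heavily (the curve and mix below are invented for illustration, loosely shaped like Provider A's):

```python
def effective_bandwidth(size_mix, bw_at_size):
    """Expected throughput of a naive file-to-object mapper.

    `size_mix`: list of (transfer_size_bytes, fraction_of_files).
    `bw_at_size`: callable mapping transfer size -> bandwidth (MB/s).
    Weighting by time (bytes / bandwidth), not by file count, gives
    the aggregate rate: total bytes moved / total time spent.
    """
    total_bytes = sum(size * frac for size, frac in size_mix)
    total_time = sum(size * frac / bw_at_size(size) for size, frac in size_mix)
    return total_bytes / total_time

# Made-up curve shaped like Provider A's: linear ramp, flat past 256K.
def provider_a(size):
    return min(10.0, 10.0 * size / (256 * 1024))  # peak 10 MB/s (invented)

# Made-up mix with 80% of files at or below 32K, per the Microsoft data.
mix = [(8 * 1024, 0.5), (32 * 1024, 0.3), (1024 * 1024, 0.2)]
```

With these numbers the naive mapper lands at roughly half of the provider's peak bandwidth, even with latency ignored.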

Our Provider A actually does pretty well with small transfers, rising to peak bandwidth relatively rapidly. What about a different provider?

[Figure: provider bandwidth measurements, two providers]

What abysmal performance, right? Provider B’s bandwidth climbs to barely 20% of Provider A’s even at 512K. Forget about issues with small writes: what reason would anybody have for picking Provider B? Could any cost or durability benefit be enough to justify performance penalties this big?

But let’s zoom out and take a bigger picture view:

[Figure: provider bandwidth measurements, large I/O]

Oops. This graph tells an entirely different story. There’s a real performance benefit to using Provider B, assuming that you are transferring large chunks of data.

Every cloud storage provider has different characteristics, even if the APIs are similar. The role of a cloud storage gateway is to smooth over the differences and provide a predictable solution for storing data in the cloud. That’s what CloudArray does: it aggregates lots of small writes into large-block transfers, absorbs transient failures and network faults, and generally works to manage and optimize cloud storage utilization.
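The core aggregation idea can be sketched in a few lines. This is a toy model, not CloudArray's implementation; a real gateway also tracks block addresses, handles cache eviction, and retries faults:

```python
class WriteCoalescer:
    """Toy sketch of small-write aggregation: buffer incoming writes
    and flush them to the cloud as one large object once a target
    transfer size is reached.
    """
    def __init__(self, upload, target_size=1024 * 1024):
        self.upload = upload          # callable: sends one large buffer
        self.target_size = target_size
        self.pending = []
        self.pending_bytes = 0

    def write(self, data):
        self.pending.append(data)
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.target_size:
            self.flush()

    def flush(self):
        if self.pending:
            # One large PUT instead of many small ones keeps the
            # transfer at the fat end of the bandwidth curve.
            self.upload(b"".join(self.pending))
            self.pending = []
            self.pending_bytes = 0
```

With a 1MB target, three hundred 4K writes become just two cloud transfers instead of three hundred.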

If a gateway vendor tells you that they won’t work with a particular cloud storage provider because its performance doesn’t meet their SLAs, then the simple fact is that their gateway isn’t doing its job. The more complicated fact is that naive file-to-object mappers will always be fundamentally flawed when dealing with real-world business data: that data is housed within file systems, file systems are designed to talk to storage subsystems, and storage subsystems are designed to talk to disk controllers.

Anybody who’s worked in the enterprise storage array business can tell you about the small write problem in RAID: here it is again, written in the clouds. You’d pay a penalty for writing small pieces of data to a RAID volume, if not for the years of work that have gone into storage systems that smooth out performance without sacrificing reliability. Odds are that many users have never heard of the small write problem, much less tuned their software to it or tried to plan their file systems around optimizing their storage arrays.
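For readers who haven't met it, the RAID-5 small write problem comes down to simple I/O counting: a write smaller than a stripe forces a read-modify-write of parity (four disk I/Os), while a full-stripe write computes parity from the new data alone. A sketch of the arithmetic:

```python
def raid5_io_cost(write_bytes, chunk_size, n_data_disks):
    """Disk I/Os needed to service a write on RAID-5.

    A sub-stripe write needs read-modify-write: read old data +
    read old parity + write new data + write new parity = 4 I/Os.
    A full-stripe write computes parity from the new data alone:
    n_data_disks + 1 writes, no reads.
    """
    stripe = chunk_size * n_data_disks
    if write_bytes >= stripe:
        full_stripes = write_bytes // stripe
        return full_stripes * (n_data_disks + 1)
    return 4  # read-modify-write for a sub-stripe write

# On a 4+1 array with 64K chunks: a 4K write costs 4 I/Os to move
# 4K of data; a 256K full-stripe write costs 5 I/Os to move 256K.
```

The shape of the penalty is the same one the bandwidth graphs show for cloud objects: small transfers pay a fixed cost that large transfers amortize away.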

But that’s exactly what they’ll need to do with their cloud storage, unless they use CloudArray. What’s our secret sauce? Actually, it’s no secret: we’re a block storage device, and we can tune and perfect our transfers to match your provider. That means we minimize time-to-durability and maximize the effectiveness of our cache. And that’s why we can produce graphs like the ones above.

We don’t make recommendations or publish our performance test results because ultimately, the choice of cloud storage provider should be a business decision, based on business factors like cost, durability, location, and a host of others. CloudArray’s architecture and capabilities make it possible for our customers to make those decisions for themselves, while being assured of getting the best performance.


– John Bates, CTO


Footnote/Mathematical aside: a careful reader will note that I discussed 80% of transfers, not 80% of transferred data. In fact, depending on the total number of files and the size distribution of the upper 20%, small files may account for less than 1% of the total used capacity. And therefore, in this admittedly unrealistic model that disregards latency, a system with a low ratio of file count to total capacity, on a provider with a performance profile like A’s, would pay only a minor small-write penalty.

But that just serves to strengthen my point: why should any of this matter to you, the user? A system with a higher ratio, or one on a provider whose bandwidth is skewed toward large transfers, can wind up spending 60% of its time transferring 10% of its data. Why should it be up to you to calculate your file system’s size distribution and match it to the right cloud storage?
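The footnote's arithmetic is easy to sandbox with a toy calculation (every number below is invented for illustration):

```python
# Made-up mix: 80% of files are 16K, 20% are 4M.
small_count_frac, small_size = 0.80, 16 * 1024
large_count_frac, large_size = 0.20, 4 * 1024 * 1024

small_bytes = small_count_frac * small_size      # bytes per "average" file
large_bytes = large_count_frac * large_size
data_frac_small = small_bytes / (small_bytes + large_bytes)

# Suppose small transfers run at 0.1 MB/s and large at 10 MB/s,
# a bandwidth-skewed provider like B.
time_small = small_bytes / 0.1
time_large = large_bytes / 10.0
time_frac_small = time_small / (time_small + time_large)

# With these numbers, small files hold about 2% of the data but
# consume about 61% of the transfer time.
```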
