Zero Hour for Tier Zero Storage

How can OEMs reduce cost but deliver the random IOPS performance that their customers need?

Over the last three years, the market has been abuzz with the news that deduplication technology was going to change the economics of flash-based storage systems forever. In fact, just recently, in his article "War Between SSDs and HDDs Will Escalate Through 2016," industry analyst Ben Woo of Neuralytix, Inc. noted:

"In the next two to five years, the only way flash-based storage vendors can challenge HDD-based storage systems on price is by way of data efficiency. The cost per unit of storage ($/GB) of HDDs is still 1/10the cost of NAND flash. However, data efficiency technologies (such as deduplication and/or compression) from a variety of vendors are showing data efficiency ratios that are over 10:1. The cost of SSD storage media is now coming in line with the cost of high-end HDDs."

But the fact is, the performance of the all-flash storage systems that provide dedupe has been fairly lackluster to date, particularly for random writes. It's hard to find a system on the market capable of handling more than 150,000 random 4K write IOPS. Vendors that do report faster numbers (and who claim to have dedupe) are often turning the dedupe feature off for published performance results (or worse, performing post-process write allocations that shorten the life of the flash). At the same time, some of these flash vendors are telling the industry that they're seeing 10:1 data reduction rates when applying optimization. When you factor in the cost savings from that deduplication rate, the jump in price between a midrange-performance flash array (with dedupe) and a high-end array (without dedupe) is extraordinary.
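To put those claims in concrete terms, a quick back-of-the-envelope calculation is sketched below; the raw $/GB price is a hypothetical stand-in of my own, and only the 10:1 reduction ratio comes from the vendor claims above.

    # Back-of-the-envelope illustration only: the raw $/GB price is hypothetical;
    # the 10:1 reduction ratio is the figure vendors are claiming.
    raw_cost_per_gb = 10.00          # assumed raw flash capacity price, $/GB
    data_reduction_ratio = 10.0      # 10:1 dedupe/compression claim
    effective_cost_per_gb = raw_cost_per_gb / data_reduction_ratio
    print(f"Effective cost: ${effective_cost_per_gb:.2f}/GB")   # -> $1.00/GB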

Effective Cost of Today's Flash Storage Systems Across 4K IOPS Random Write Thresholds

How can OEMs reduce cost but still deliver the random IOPS performance that their customers need? The roadmaps from many vendors suggest this is possible, but as usual the answer comes down to how hard the technology is to build or buy. Storage engineers assure me that, with enough time and resources, a vendor delivering a high-performance flash product can build its own deduplication by implementing block reference counting and fine-grained thin provisioning within its architecture, and combining those with a modern, high-performance duplicate advisory index. Several vendors are building solutions that will be able to hit these numbers today, and at least one component supplier even offers a ready-to-run device mapper driver for storage systems based on Linux.
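As a rough illustration of how those ingredients fit together, here is a toy sketch of my own (not any vendor's implementation): a duplicate advisory index keyed by block fingerprint, a thin mapping from logical addresses to shared physical blocks, and per-block reference counts so a block is freed only when its last reference goes away.

    import hashlib

    class DedupeStore:
        """Toy model of dedupe over thin provisioning; real systems do this
        on media, with crash consistency and a far more compact index."""

        def __init__(self):
            self.index = {}        # fingerprint -> physical block (duplicate advisory index)
            self.refcounts = {}    # physical block -> reference count
            self.blocks = {}       # physical block -> data (stand-in for thin-provisioned media)
            self.logical_map = {}  # logical block address -> physical block
            self.next_pbn = 0

        def write(self, lba, data):
            fingerprint = hashlib.sha256(data).hexdigest()
            pbn = self.index.get(fingerprint)
            if pbn is None:                    # new content: allocate a physical block
                pbn = self.next_pbn
                self.next_pbn += 1
                self.blocks[pbn] = data
                self.index[fingerprint] = pbn
                self.refcounts[pbn] = 0
            self._release(lba)                 # unmap whatever this LBA pointed to before
            self.logical_map[lba] = pbn
            self.refcounts[pbn] += 1

        def read(self, lba):
            return self.blocks[self.logical_map[lba]]

        def _release(self, lba):
            pbn = self.logical_map.pop(lba, None)
            if pbn is None:
                return
            self.refcounts[pbn] -= 1
            if self.refcounts[pbn] == 0:       # last reference dropped: free the block
                del self.blocks[pbn]
                self.index = {f: p for f, p in self.index.items() if p != pbn}

Two logical writes of the same 4K payload end up sharing one physical block, which is where the space savings behind a 10:1 reduction ratio comes from when workloads carry duplicate data.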

Nonetheless, vendors have been less than forthcoming about the performance of their currently shipping products. Either they will have to step up with their own performance assessments, or it will be left to IT organizations to measure performance themselves. Either way, we really need to standardize on a common evaluation tool and share the results. The open source Flexible IO Tester (fio) utility, developed and maintained by Jens Axboe at Fusion-io, is an ideal candidate. There are several tests that should be considered when using this tool.

  1. Use the libaio engine to measure both sequential and random reads and writes as well as mixed workloads.
  2. Analyze workloads with a range of queue depths from 1 to 1024 and run the tool with varying numbers of simultaneous jobs.
  3. Test IOPS across multiple block sizes from 4K to 128K and understand how system performance varies with different-sized IO requests (a sketch of such a sweep follows this list).
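Below is a minimal sketch of what such a sweep might look like; it is an assumption-laden example, not a prescribed test plan. It assumes fio is installed, and /dev/sdx is a hypothetical device under test (note that writing to a real device this way is destructive).

    import itertools
    import subprocess

    DEVICE = "/dev/sdx"                       # hypothetical device under test
    QUEUE_DEPTHS = [1, 32, 256, 1024]         # span the 1-1024 range above
    BLOCK_SIZES = ["4k", "64k", "128k"]
    PATTERNS = ["read", "write", "randread", "randwrite", "randrw"]

    for qd, bs, rw in itertools.product(QUEUE_DEPTHS, BLOCK_SIZES, PATTERNS):
        subprocess.run([
            "fio",
            f"--name=sweep-{rw}-{bs}-qd{qd}",
            "--ioengine=libaio",              # asynchronous engine, per item 1
            "--direct=1",                     # bypass the page cache
            f"--rw={rw}",
            f"--bs={bs}",
            f"--iodepth={qd}",
            "--numjobs=4",                    # also vary simultaneous jobs
            f"--filename={DEVICE}",
            "--runtime=300",
            "--time_based",
            "--group_reporting",
            "--output-format=json",
        ], check=True)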

In addition, there are several parameters to keep in mind when running the tests to prove you're seeing real numbers, not cooked ones.

  1. To prevent deduplication from giving the implementation an unfair performance advantage, fio allows evaluators to test vendor systems with dedupe turned on while generating purely random data (data that does not dedupe).
  2. Write tests should be performed first so as to fill the disk with data (thinly provisioned systems can actually appear to run faster than the underlying storage on read tests if you don't).
  3. Tests should be run over long durations so that the evaluator develops a good understanding of how performance might change over time (a sketch of such a run follows this list).
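Here is one way those precautions could be combined into a single honest run; again, this is a sketch under assumptions (fio installed, /dev/sdx as a hypothetical device under test), not a definitive methodology.

    import subprocess

    DEVICE = "/dev/sdx"            # hypothetical device under test

    COMMON = [
        "fio",
        "--ioengine=libaio",
        "--direct=1",
        f"--filename={DEVICE}",
        "--refill_buffers",        # fresh random buffer contents on every write,
                                   # so dedupe gets no unearned advantage
        "--group_reporting",
    ]

    # 1. Write first: fill the (possibly thin-provisioned) capacity with real data.
    subprocess.run(COMMON + [
        "--name=precondition-fill",
        "--rw=write",
        "--bs=128k",
        "--iodepth=32",
    ], check=True)

    # 2. Measure steady state over a long window to catch performance drift.
    subprocess.run(COMMON + [
        "--name=steady-state-randwrite",
        "--rw=randwrite",
        "--bs=4k",
        "--iodepth=256",
        "--numjobs=4",
        "--runtime=3600",          # one hour; longer is better for spotting drift
        "--time_based",
    ], check=True)

The preconditioning pass writes the full device before any measurement, and refill_buffers keeps every write buffer filled with fresh random data so a dedupe layer cannot quietly absorb the workload.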

What Data Optimization for Tier 0 Means to the Industry
High-end systems have tremendous value. They address workloads that the lower end just can't touch, at a cost per IOP that is far lower than that of traditional spinning disk. With the arrival of next-generation solutions with proper deduplication, cost/GB drops dramatically, broadening their applicability in the enterprise.

Effective Cost of Today's Flash Storage Systems vs Next Generation Dedupe Reference Architecture Across 4K IOPS Random Write Thresholds

In the past, deduplication couldn't scale to meet the performance demands of Tier 0 storage. This meant that the cost of a storage system capable of delivering even 175,000 random write IOPS was effectively six times that of a system that could deliver 150,000. With these next-generation deduplication systems becoming available, that cost penalty for high-performance storage goes away. Top-tier vendors have no excuse for withholding these capabilities from their customers. The results above are proof that newer systems can deliver true high-end performance and the cost savings of data reduction at the same time. Once these technologies land in the market, it should be possible to buy Tier 0 storage at Tier 1 prices. As always, trust, but verify.

More Stories By Louis Imershein

As Senior Director of Product Strategy at Permabit Technology Corporation, Louis Imershein is responsible for product evolution and strategic planning for the Albireo family of products. He has 22 years of technical leadership experience in product management, software development and support. Prior to joining Permabit, Imershein was a Senior Product Marketing Manager for the Sun Microsystems Data Management Group. He has a Bachelor's degree in Biological Science from the University of California, Santa Cruz.
