Breaking the Storage Array Lifecycle

Cloud storage appliances can grow to a virtually unlimited storage capacity without the need to ever upgrade

Anyone who purchases storage arrays is familiar with the many advantages of modular storage systems and storage area networks. However, they may also be familiar with one of the less desirable attributes of storage arrays: the typical three- to five-year lifecycle that forces decommissions and mandates upgrades on a regular basis. With many organizations expanding their storage needs by 20-60% annually [1], outgrowing the capacity of existing arrays is a regular occurrence, making upgrade cycles a fact of life.

Although decommissioning and upgrading a storage array may not appear all that daunting, the process includes a number of cumbersome aspects:

  • The migration process from old to new storage arrays can last months, creating an overlap during which both the old and the new storage array must be maintained. In some enterprise environments, the migration can cost thousands of dollars per terabyte [2]
  • When purchasing a new storage array, its sizing is based on anticipated future growth, resulting in an initial capacity "over-purchase" and an underutilized storage array for most of its life cycle. The result: pre-payment for capacity that may not be needed over the next two years
  • Software and architectural changes associated with each storage array upgrade may compromise stability or, at a minimum, require staff retraining on new policies and features

The Economics of Storage Array Replacement
Why decommission arrays at all? Why not simply expand storage capacity by adding new storage arrays instead of replacing them? With storage densities and capacities growing every year and cost per gigabyte dropping, the economics provide the answer.

Consider a three-year-old, 10TB storage array that costs $50,000 to purchase and has a total annual maintenance cost (vendor maintenance fee, administration, floor space, power, cooling, etc.) of $25,000 per year. Now suppose that today the IT manager purchases a new 25TB storage array for $50,000. Thanks to improved storage density, it may carry the same total annual maintenance cost of $25K per year.

If the manager chooses to keep both arrays active, he would now have data residing on the old storage array at $2.50/GB in annual maintenance cost and data residing on the new storage array at only $1/GB in annual maintenance cost (with plenty of capacity to spare). This economic inefficiency makes a pretty strong business case for decommissioning the old array and moving all data to the new array as quickly as possible.
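To make the comparison concrete, here is a minimal sketch of the per-GB arithmetic above (a toy Python calculation; the figures come from the example in the text, not from any vendor):

```python
# Per-GB annual maintenance cost for the two arrays in the example.

def annual_cost_per_gb(capacity_tb: float, annual_maintenance: float) -> float:
    """Annual maintenance cost per GB for an array of the given capacity."""
    return annual_maintenance / (capacity_tb * 1000)  # 1 TB = 1,000 GB

old_array = annual_cost_per_gb(capacity_tb=10, annual_maintenance=25_000)
new_array = annual_cost_per_gb(capacity_tb=25, annual_maintenance=25_000)

print(f"Old 10TB array: ${old_array:.2f}/GB/year")  # $2.50/GB/year
print(f"New 25TB array: ${new_array:.2f}/GB/year")  # $1.00/GB/year
```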

Is there any hope of breaking this storage array cycle?

There is, with on-demand cloud storage. Cloud storage substantially changes the economics and deployment of data storage, and it makes some aspects of the storage array life cycle disappear entirely:

  • The pay-as-you-go cloud model eliminates the need to pre-purchase or over-purchase capacity
  • Cost per GB drops over time as cloud providers continue to lower costs every 6-12 months
  • Storage capacity is virtually limitless, so there is never a need to decommission or upgrade any hardware

The area chart below illustrates the economic inefficiencies of the traditional deployment model. While most organizations consume storage at a near-linear rate over time, the required initial pre-purchase of capacity results in a disproportionate up-front investment. The blue area on the chart shows how total cost of ownership (TCO) increases over time with traditional storage deployment - note the spikes at every purchase cycle (Q1 Y1 and Q1 Y4). The green bars on the chart illustrate how TCO increases with a pay-as-you-go cloud storage model. Even in this simplistic case, where the TCO of both models is identical after three years, you can easily spot the inefficiency of traditional storage, which requires pre-payment for unused capacity.
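To reproduce the shape of that comparison, here is a toy quarterly TCO model in Python under similar assumptions: a single up-front array purchase versus quarterly cloud billing on consumed capacity, with the cloud rate calibrated so both models tie after three years. The growth rate and calibration are illustrative assumptions, not figures from the article:

```python
# Toy three-year TCO model: up-front array purchase vs. pay-as-you-go cloud.

QUARTERS = 12                # three-year horizon
ARRAY_PRICE = 50_000         # up-front array cost, from the earlier example
GROWTH_TB_PER_QTR = 1.0      # assumed near-linear consumption

# Calibrate the cloud rate so cumulative cloud spend equals the array
# price after three years (the "identical TCO" case described above).
cloud_rate_per_tb_qtr = ARRAY_PRICE / sum(
    GROWTH_TB_PER_QTR * q for q in range(1, QUARTERS + 1)
)

trad_total = cloud_total = used_tb = 0.0
for q in range(1, QUARTERS + 1):
    used_tb += GROWTH_TB_PER_QTR
    if q == 1:               # the purchase spike at Q1 Y1
        trad_total += ARRAY_PRICE
    cloud_total += used_tb * cloud_rate_per_tb_qtr
    print(f"Q{q:2d}: traditional=${trad_total:>7,.0f}  cloud=${cloud_total:>7,.0f}")
```

Traditional TCO jumps to $50,000 in the first quarter and stays flat, while the cloud model ramps gradually to the same total - the gap between the two curves is the pre-payment inefficiency the chart highlights.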

This is a very basic analysis. In practice, there are many additional costs that factor into traditional storage deployments including:

  • Administrative costs of the migrations from old arrays to new arrays
  • Potential overlap where both arrays must be maintained simultaneously
  • Potential downtime and productivity loss during the migration process

Furthermore, the analysis does not capture some of the additional cost savings of cloud storage that include:

  • Economies of scale in pricing and administration as a result of leveraging large multi-tenant cloud environments
  • Price per GB erosion over time

Factoring in these additional costs of traditional storage and additional savings of cloud storage may be enough to convince most organizations that it is time to introduce cloud storage into their existing IT infrastructure. However, when it comes to deploying a cloud storage strategy, many companies don't know where or how to begin.

Introducing Cloud Storage into the Organization: A Hybrid Approach

Replacing traditional storage with cloud storage will break the traditional storage array life cycle, but a complete "forklift" replacement may be overkill. While cloud storage may not necessarily meet all on-premise storage needs, it can still augment existing storage infrastructure. A hybrid local-cloud storage environment can streamline storage operations and can even extend the life cycle of traditional storage through selective data offload.

A more conservative approach might be to identify data suitable for cloud storage, such as secondary copies, backups, off-site data and/or archives. Interestingly, archives are often stored on traditional onsite storage to keep them easily accessible for compliance requirements. By some reports, such as one from ESG, archive data is expected to grow approximately 56% per year.

With hundreds of thousands of petabytes of archives to store over the next few years, the benefits of offloading archives or infrequently accessed data from traditional storage are numerous. In fact, transitioning this data to cloud storage can extend the life cycle of storage arrays beyond the typical 3-5 year time frame. Imagine a 6-10 year storage array life cycle instead: that would cut capital investment in storage infrastructure in half and introduce a significantly more efficient just-in-time, pay-as-you-go model.
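As a quick sanity check on that claim, consider the $50K array from the earlier example annualized over a hypothetical 5-year versus 10-year service life:

```python
# Doubling an array's service life halves its annualized capital cost.

ARRAY_PRICE = 50_000  # from the earlier example

for lifecycle_years in (5, 10):
    print(f"{lifecycle_years}-year lifecycle: "
          f"${ARRAY_PRICE / lifecycle_years:,.0f}/year in capital cost")
# 5-year lifecycle: $10,000/year; 10-year lifecycle: $5,000/year
```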

How can businesses leverage tiers of cloud storage in a manner that integrates seamlessly into an existing storage infrastructure?

Connecting the Storage Infrastructure to the Cloud
Given that most storage consumed by businesses is either block-based or NAS-based, an on-premise cloud storage appliance or gateway that presents cloud storage over a block or NAS protocol can greatly simplify deployment. When choosing a solution, keep in mind that, unlike NAS, block-based solutions have the advantage of supporting both block-level access and any file system protocol layered on top. Block-based iSCSI solutions support petabytes of storage and provide thin provisioning, caching, encryption, compression, deduplication and snapshots, matching the feature sets of sophisticated SAN storage arrays. These solutions can readily reside alongside existing SANs and are available in both software and hardware form factors.

Cloud storage appliances can grow to a virtually unlimited storage capacity without the need to ever upgrade, eliminating many administrative burdens and risks of the storage array life cycle. Since cloud storage is pay-as-you-go, cost adjustments occur automatically, eliminating the economic inefficiencies of the storage array life cycle.

In combination with a cloud storage gateway or appliance, businesses should also consider storage tiering software. Auto-tiering software can be found in storage virtualization solutions, data classification solutions, and even in some hypervisor solutions. Businesses that choose an auto-tiering framework can immediately begin to extend the life cycle of their existing storage arrays and leverage the benefits of cloud storage by selectively offloading infrequently used data.
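To illustrate the tiering idea (and not any particular vendor's product), here is a conceptual Python sketch of age-based offload, assuming Amazon S3 as the cloud tier via boto3; the bucket name, mount point and one-year threshold are hypothetical, and real auto-tiering software does this transparently at the block or file level:

```python
# Conceptual age-based tiering: copy files untouched for a year to S3.

import os
import time
import boto3

ARCHIVE_BUCKET = "example-archive-tier"  # hypothetical bucket name
AGE_THRESHOLD = 365 * 24 * 3600          # one year, in seconds

s3 = boto3.client("s3")

def offload_cold_files(local_dir: str) -> None:
    """Upload files not accessed within the threshold to the cloud tier."""
    now = time.time()
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            if now - os.stat(path).st_atime > AGE_THRESHOLD:
                key = os.path.relpath(path, local_dir).replace(os.sep, "/")
                s3.upload_file(path, ARCHIVE_BUCKET, key)
                # A production tier would verify the upload, then replace
                # the local copy with a stub or reclaim the space.
                print(f"offloaded {path} -> s3://{ARCHIVE_BUCKET}/{key}")

offload_cold_files("/mnt/primary-storage")  # hypothetical mount point
```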

References

  1. IDC: Unstructured data will become the primary task for storage
  2. Hitachi Data Systems: Reducing Costs and Risks for Data Migrations

About the Author

Nicos Vekiarides is the Chief Executive Officer & Co-Founder of TwinStrata. He has spent over 20 years in enterprise data storage, both as a business manager and as an entrepreneur and founder in startup companies.

Prior to TwinStrata, he served as VP of Product Strategy and Technology at Incipient, Inc., where he helped deliver the industry's first storage virtualization solution embedded in a switch. Prior to Incipient, he was General Manager of the storage virtualization business at Hewlett-Packard. Vekiarides came to HP with the acquisition of StorageApps where he was the founding VP of Engineering. At StorageApps, he built a team that brought to market the industry's first storage virtualization appliance. Prior to StorageApps, he spent a number of years in the data storage industry working at Sun Microsystems and Encore Computer. At Encore, he architected and delivered Encore Computer's SP data replication products that were a key factor in the acquisition of Encore's storage division by Sun Microsystems.


