
@CloudExpo: Blog Feed Post

Storage in Cloud Is Not the Center of the Universe

The age of the super-scale single storage array is over

In a previous post, I touched on the need for APIs to manage storage in cloud environments.  In this post, I’ll talk about how the way storage is deployed in cloud environments has to change.

For the last 10 years, the advent of Storage Area Networks (SANs) has created a storage-centric view of the world, with storage at the centre and the “planets” – networking and servers – wrapped around it like some pre-Copernican view of the universe.  Over time, SANs have grown ever larger, with some organisations deploying huge Fibre Channel fabrics.  As we’ve seen today, EMC continues to perpetuate that view with the release of the VMAX 40K, a 4PB monster of a storage array in the best traditions of the central SAN-based model.

However, the world has changed.  Storage is no longer the centre of the IT universe, but merely a player within it.  Just as it came as a shock to those in power in the 1500s when Copernicus proposed the sun was at the centre of the universe, so it will happen with IT and storage – especially in cloud environments.

A Bit of History

SANs evolved in the days before (x86) virtualisation, when everyone deployed physical servers.  The storage in each server was isolated, and the server chassis was the limiting factor on expanding storage capacity.  Copper SCSI cable limitations meant storage and server needed to be close together, so expanding the storage for a single server could mean re-racking and downtime.  Storage Area Networks, with optical fibre as the interconnect, allowed storage to be centralised.  Resources were now held centrally and so sharable by all servers; they were no longer tied to physical distance, as optical fibre could be run for hundreds of metres; and they were scalable, as arrays could be grown simply by adding more disk to the shared pool.  It’s also worth remembering that the first storage arrays of the 1990s were built with far less reliable drive hardware than we have today.  As a consequence, the arrays were over-engineered to provide the high level of availability that centralisation required.

Consolidation can go too far.  Placing all storage resources into one or a small number of arrays increases the impact of the following:

  • Change Control – upgrading microcode or making other physical changes has a wider impact and can be harder to get approved unless maintenance windows are well structured.
  • Failure – the failure of a single array has ever greater consequences as arrays scale and support more servers.
  • Complexity – large arrays benefit from scale in both capacity and performance, but they are more complex to manage (hence the introduction of auto-tiering technology), especially from a performance perspective.
  • Lifecycle – as arrays grow, the effort of migrating data on and off them at the beginning and end of their lifecycle results in additional cost and wasted resources.
There is clearly a “sweet spot” in terms of array size, purely from the manageability angle.
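The failure point above can be made concrete with a toy calculation.  This is a hedged sketch, not from the original post: the numbers and the `blast_radius` helper are illustrative assumptions, and it simply assumes servers are spread evenly across arrays.

```python
# Toy model of failure "blast radius" as storage is consolidated
# into fewer, larger arrays.  All figures are illustrative.

def blast_radius(total_servers: int, num_arrays: int) -> float:
    """Servers affected when a single array fails, assuming
    servers are distributed evenly across the arrays."""
    return total_servers / num_arrays

# 400 servers behind one monolithic array vs. eight smaller nodes:
print(blast_radius(400, 1))  # every one of the 400 servers is impacted
print(blast_radius(400, 8))  # impact shrinks to 50 servers per failure
```

The same consolidation that makes one big array efficient also concentrates the impact of any single failure or change window – which is the manageability "sweet spot" argument in miniature.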

Virtualisation & Cloud

The shared model works well with physically separate servers.  Virtualisation, however, has changed the server landscape: where we once had hundreds of physical servers in the data centre, consolidation ratios of 10:1 or 20:1 are now mainstream, and they can be even higher in cloud environments, where greater consolidation is required.  Previously the server-to-storage relationship was many to one; today some vendors are pushing architectures with, in effect, a one-to-one relationship.  Deploying a single storage array for every server may be a little extreme, but what we are seeing is a move away from the centralised model to one of scalable node-based storage, where capacity can be added to an existing complex of arrays.  In addition, data management intelligence is moving into the hypervisor: VMware now manages Storage vMotion requests and dynamic data placement with Storage DRS, offloading the “heavy lifting” to the array via VAAI.  Array-based technologies such as remote replication are no longer needed.
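The node-based model described above can be sketched in a few lines.  This is a hypothetical illustration – the `StorageNode` and `ScaleOutPool` classes are my own invention, not any vendor's API – showing the key property: capacity grows by joining nodes to a pool, not by scaling up one central array.

```python
from dataclasses import dataclass

@dataclass
class StorageNode:
    name: str
    capacity_tb: int

class ScaleOutPool:
    """A pool whose capacity scales out by adding nodes."""
    def __init__(self) -> None:
        self.nodes: list[StorageNode] = []

    def add_node(self, node: StorageNode) -> None:
        # Expansion is incremental: a new node joins the existing
        # pool, with no forklift upgrade of a central array.
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> int:
        return sum(n.capacity_tb for n in self.nodes)

pool = ScaleOutPool()
pool.add_node(StorageNode("node-1", 100))
pool.add_node(StorageNode("node-2", 100))
print(pool.capacity_tb)  # 200 – grown simply by adding a node
```

Contrast this with the scale-up model, where the same growth means buying headroom up front in one large frame and migrating everything off it at end of life.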

What this means is that we’re seeing storage hardware increasingly used as a pure IOPS generator.  In cloud solutions, storage needs to be lean and cheap, whilst still being reliable.  What it doesn’t need is a long list of expensive extras.

The Storage Architect Take

The age of the super-scale single storage array is over.  Storage consolidation through SANs is no longer needed, and cloud deployments are better served by node-based scale-out storage solutions.  Although most intelligence is moving into the hypervisor, the ability to move data seamlessly from one array to another is still a requirement.  Four petabytes in a single array isn’t needed by 90% of organisations, and those who do need that level of capacity probably won’t deploy it in a single array.  It’s time to move on.
