
@CloudExpo: Blog Feed Post

Like “API,” Is “Storage Tier” Redefining Itself?

There is an interesting bit in high-tech that isn’t much mentioned but happens pretty regularly

There is an interesting bit in high-tech that isn’t much mentioned but happens pretty regularly – when a good idea is adapted and moved to new uses, raising it a bit in the stack or revising it to keep up with the times. The quintessential example of this phenomenon is the progression from “subroutines” to “libraries” to “frameworks” to “APIs” to “Web Services”. The progression is logical and useful, but the assembler and C programmers who were first stuffing things into reusable subroutines could not have foreseen the entire spectrum of what their “useful” idea was going to become over time. I had the luck of developing in all of those stages. I wrote assembly routines right before they were no longer necessary for everyday development, and wrote web services/SOA routines for the first couple of years they were around.


YES INDEED, THIS IS A STORAGE BLOG

I think we see that happening in storage and don’t even realize it yet, which is kind of cool, because we all get a ring-side seat – if we know where to look.

When the concept of tiering first came around – I am not certain if it was first introduced with HSM or ILM; someone can weigh in on that distinction in the comments – it was aimed at the difference in performance between disks and arrays of disks. The point was that your more expensive disk wasn’t necessary for everyday tasks. And it was a valid point. Tiering has become a part of most large organizations’ storage management plans, just because it makes sense.

But the one truth about technology over the last twenty or thirty years is that it absolutely does not stand still. The moment you think it has plateaued, something new comes along from left field and changes the landscape. Storage has seen no shortage of this process, with the disks that were being used at the time tiering was introduced being replaced by SAS and SATA, then eventually SATA II. The interesting thing about these changes is that the reliability and access speed differences have gone down as a percentage since the days of SCSI vs. ATA. The disks just keep getting more reliable and faster. And with RAID everywhere, you get increased reliability through data redundancy. Though the amount of reliability you gain is dependent upon the level of RAID you choose, that’s relatively common knowledge at this point, so we won’t get too deep into it here.
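The RAID reliability gain is easy to sketch with back-of-the-envelope math. This is a toy model: the failure probability below is a hypothetical, assumed-independent annual figure, not a measured one, and real drive failures in an array are correlated.

```python
# Toy model: why mirroring (RAID 1) boosts reliability.
# Assumes independent drive failures, which real arrays only approximate.
p_fail = 0.03  # assumed 3% chance a single drive fails in a year

single_drive_loss = p_fail   # one copy: a single failure loses data
raid1_loss = p_fail ** 2     # mirrored pair: both copies must fail

print(round(single_drive_loss, 6))  # 0.03
print(round(raid1_loss, 6))         # 0.0009 -> far less likely
```

The exact numbers are made up; the point is the shape of the math – redundancy turns an additive risk into a multiplicative one.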

Image Courtesy of www.clickonF5.org


BRING ON THE CHANGE!

And then the first bombshell hit: SSD. The performance difference between SSD and hard disk is astounding and very real. It’s not something so close that you could choose to implement the slower technology (as is true among hard disks); if you need the performance level of SSD for a given application, there are very few options but to bite the bullet and buy SSD. But it’s fast. It’s very fast. And prices are coming down.

Now the second bombshell hits: Cloud Storage. It’s immense. It’s very immense. And with a Cloud Storage Gateway, it looks like all your other storage – or at least all your other NAS storage. Companies like Cirtas and Nasuni are making cloud usable with local caches and interfaces to cloud providers. Some early reports like this one from Storage Switzerland claim that they make access “as fast as local storage”, but I’ll wager that’s untrue, simply because the cache IS local storage; everything else has to go out through your WAN link. By definition that means the aggregate is slower than local disk access unless every file operation is a cache hit – mathematically, I think that would be highly improbable. But even so, if they speed up cloud storage access and make it enterprise friendly, you now have a huge – potentially unlimited – place to store your stuff. And if my guess is right (it is a guess – I have not tested at all, and don’t know of any ongoing testing), our WOM product should make these things perform like LAN storage, due to the combination of TCP optimizations, compression, and in-flight de-duplication reducing the burden on the WAN.
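The cache-hit argument can be made concrete as a weighted average. The latency figures here are illustrative assumptions, not measurements of any product:

```python
LOCAL_MS = 5.0   # assumed local cache / NAS access time
WAN_MS = 80.0    # assumed round trip to the cloud provider

def effective_latency_ms(hit_rate: float) -> float:
    """Average access time for a cloud storage gateway with a local cache."""
    return hit_rate * LOCAL_MS + (1.0 - hit_rate) * WAN_MS

# Only a 100% hit rate matches local disk; even 95% is noticeably slower.
print(effective_latency_ms(1.0))            # 5.0
print(round(effective_latency_ms(0.95), 2)) # 8.75
```

With these (assumed) numbers, a 95% hit rate still leaves the gateway well behind pure local disk – which is the whole point of the “as fast as local storage” skepticism above.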


AND THAT’S WHERE IT GETS INTERESTING

So your hard disks are so close in performance and reliability – particularly after taking RAID into account – that the old definitions have blurred. You can have tier one on SATA II disks. No problem – lots of small and medium-sized orgs DO have such an arrangement.

But that implies that what used to be “tier one” and “tier two” have largely merged, the line between them blurring – just in time for these two highly differentiated technologies to take hold. I have a vision of the future where high-performance, high-volume sites use SSD for more and more of tier one, RAIDed SAS and/or SATA drives for tier two, and cloud storage for backup/replication/tier three. Then tiers have meaning again – tier one is screaming fast, tier two is the old standby, combining fast and reliable, and tier three is cloud storage (be it public or private, others can argue that piece out)…
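That three-tier vision can be expressed as a simple placement rule. Everything here – the tier names, the 30-day threshold, the inputs – is a hypothetical illustration, not any vendor’s actual policy:

```python
def place(days_since_access: int, iops_critical: bool) -> str:
    """Pick a tier for a piece of data under the three-tier model."""
    if iops_critical:
        return "tier1-ssd"         # screaming fast
    if days_since_access <= 30:
        return "tier2-raid-disk"   # the old standby: fast and reliable
    return "tier3-cloud"           # backup / replication / archive

print(place(1, iops_critical=True))     # tier1-ssd
print(place(10, iops_critical=False))   # tier2-raid-disk
print(place(365, iops_critical=False))  # tier3-cloud
```

Real tiering engines weigh far more signals than this, but the decision structure – performance need first, then recency – is the same shape.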

And that has implications for both budgeting and architecture. SSD is more expensive. Depending upon your provider and usage patterns, cloud is less expensive (than disk, not tape). That implies a shift of dollars from the low end to the high end of your spending patterns. Perhaps, if you have savvy contract negotiators, it means actual savings overall on storage expenses, but more likely you’re just smoothing the spending out by paying monthly for cloud services instead of “Oh no, we have to buy a new array”.
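The “smoothing” point is just cash-flow arithmetic. The figures below are made-up round numbers chosen to show the shape of the trade, not real quotes:

```python
ARRAY_PRICE = 36_000.0    # assumed lump-sum cost of a new array
CLOUD_MONTHLY = 1_000.0   # assumed steady monthly cloud storage bill
MONTHS = 36               # assumed service life of the array

cloud_total = CLOUD_MONTHLY * MONTHS
print(cloud_total)                 # 36000.0
print(cloud_total == ARRAY_PRICE)  # True: same total, paid monthly instead of all at once
```

Whether cloud ends up cheaper overall depends on your negotiated rates; the structural win is turning a capital spike into a predictable operating expense.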


A BRIGHT, TIERFUL FUTURE

But tiering is a lot more attractive if you have three actual distinct tiers that serve specific purposes. Many organizations will start with tape as the final destination for backups, but I don’t believe they’ll stay there. Backing up to disk has a long history at this point, and if that backup goes to cloud disk you can conceivably keep it for as long as you’re willing to pay, so I suspect that archival will be the primary focus of tape going forward. I don’t predict that tape will die – it is just too convenient and too intertwined to walk away from. And it makes sense for archival purposes: “we have to keep this for seven billion years because of government regulation, but we don’t need it” is a valid use for storage that you don’t pay for by the month and that is stable over longer periods of time.

Of course I think you should throw an ARX in front of all of this storage to handle the tiering for you, but there are other options out there. Something will have to make the placement determination, so find what works best for you.

Not so long ago, I would have claimed that most organizations didn’t need SSD, and only heavily stressed databases would actually see the benefit. These days I’m more sanguine about the prospects. As prices drop, ever more uses for SSDs become apparent. As of this writing they’re running $2 – $2.50 per gig, a lot more than SATA or even SAS, but most companies don’t need nearly as much tier one storage as they do tier two.
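That last point – you need far less tier one than tier two – is what keeps SSD affordable in practice. A quick sketch using the midpoint of the $2 – $2.50/GB range above and an assumed (not quoted) ~$0.10/GB for commodity SATA, with made-up capacity figures:

```python
SSD_PER_GB = 2.25    # midpoint of the $2 - $2.50/GB range above
SATA_PER_GB = 0.10   # assumed commodity SATA price, for contrast

tier1_gb = 2_000     # assumed modest tier-one footprint
tier2_gb = 50_000    # most capacity lives in tier two

print(tier1_gb * SSD_PER_GB)            # 4500.0
print(round(tier2_gb * SATA_PER_GB, 2)) # 5000.0
```

Despite a 22x price gap per gigabyte, the two line items come out comparable once you account for how small tier one actually needs to be.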


WATCH FOR IT

That’s the way I see it falling out, too – prices on SSD will continue to drift down toward SAS/SATA, and you’ll want to back up tier one a lot more (which you should anyway), while cloud storage started out pretty inexpensive and will likely continue to drop while it all gets sorted out.

And like the “subroutine”, you’ll only find traditional hard disks standing alone in the data center for small or very special-purpose uses. Like the subroutine, they will give way to more specialized collections of storage on one end and “inlined” SSDs on the other.

Until the Next Big Thing comes along anyway.

Image compliments of steampunkworkshop.com – it’s a steam USB charger, the next big thing…



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
