
Like “API”, Is “Storage Tier” Redefining Itself?

There is an interesting bit in high-tech that isn’t much mentioned but happens pretty regularly

There is an interesting phenomenon in high-tech that isn’t mentioned much but happens pretty regularly – a good idea gets adapted and moved to new uses, raised a bit in the stack or revised to keep up with the times. The quintessential example of this phenomenon is the progression from “subroutines” to “libraries” to “frameworks” to “APIs” to “Web Services”. The progression is logical and useful, but the assembler and C programmers who first stuffed things into reusable subroutines could not have foreseen the entire spectrum of what their “useful” idea would become over time. I had the luck of developing in all of those stages. I wrote assembly routines right before they were no longer necessary for everyday development, and wrote web services/SOA routines for the first couple of years they were around.


YES INDEED, THIS IS A STORAGE BLOG

I think we see that happening in storage and don’t even realize it yet, which is kind of cool, because we all get a ring-side seat – if you know where to look.

When the concept of tiering first came around – I am not certain whether it was first introduced with HSM or ILM; someone can weigh in on that distinction in the comments – it was aimed at the difference in performance between disks and arrays of disks. The point was that your more expensive disk wasn’t necessary for everyday tasks. And it was a valid point. Tiering has since become a part of most large organizations’ storage management plans, simply because it makes sense.

But the one truth about technology over the last twenty or thirty years is that it absolutely does not stand still. The moment you think it has plateaued, something new comes along from left field and changes the landscape. Storage has seen no shortage of this process, with the disks in use when tiering was introduced being replaced by SAS and SATA, then eventually SATA II. The interesting thing about these changes is that the reliability and access-speed differences have shrunk, as a percentage, since the days of SCSI vs. ATA. The disks just keep getting more reliable and faster, and with RAID everywhere you get increased reliability through data redundancy. The amount of reliability you gain depends upon the level of RAID you choose, but that’s relatively common knowledge at this point, so we won’t get too deep into it here.
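Since we’re talking RAID and reliability anyway, here’s a back-of-the-envelope sketch of how much redundancy buys you. The 3% annual failure rate and the array sizes are made-up illustrative numbers, not vendor specs, and the model ignores rebuild windows and correlated failures:

```python
from math import comb

def survival(n_disks: int, tolerated: int, p_fail: float = 0.03) -> float:
    """Probability an array of n_disks makes it through the year when it
    can tolerate `tolerated` simultaneous disk failures. Assumes
    independent failures and ignores rebuild windows -- deliberately
    rough, not a vendor reliability figure."""
    return sum(
        comb(n_disks, k) * p_fail**k * (1 - p_fail)**(n_disks - k)
        for k in range(tolerated + 1)
    )

for label, n, t in [("RAID 0", 6, 0), ("RAID 1", 2, 1),
                    ("RAID 5", 6, 1), ("RAID 6", 6, 2)]:
    print(f"{label}: {survival(n, t):.4%} chance of surviving the year")
```

Run it and RAID 6 comes out roughly an order of magnitude less likely to lose data than RAID 5 on the same six disks – which is the whole argument for double parity.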

(Image courtesy of www.clickonF5.org)


BRING ON THE CHANGE!

And then the first bombshell hit: SSD. The performance difference between SSD and hard disk is astounding and very real. It’s not a gap so small that you could choose to implement the slower technology anyway (as is true among hard disks); if you need the performance level of SSD for a given application, there are very few options but to bite the bullet and buy SSD. But it’s fast. It’s very fast. And prices are coming down.

Now the second bombshell hits: Cloud Storage. It’s immense. It’s very immense. And with a Cloud Storage Gateway, it looks like all your other storage – or at least all your other NAS storage. Companies like Cirtas and Nasuni are making cloud usable with local caches and interfaces to cloud providers. Some early reports, like this one from Storage Switzerland, claim that they make access “as fast as local storage”, but I’ll wager that’s untrue, simply because the cache IS local storage; everything else has to go out through your WAN link. By definition that means the aggregate is slower than local disk access unless every file operation is a cache hit, which I think is mathematically improbable. But even so, if they speed up cloud storage access and make it enterprise friendly, you now have a huge – potentially unlimited – place to store your stuff. And if my guess is right (it is a guess; I have not tested at all and don’t know of any ongoing testing), our WOM product should make these things perform like LAN storage, thanks to the combination of TCP optimizations, compression, and in-flight de-duplication reducing the burden on the WAN.
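The cache-hit argument is just weighted-average arithmetic. A minimal sketch – the 5 ms local and 50 ms WAN figures are assumptions for illustration, not measurements of any product:

```python
def effective_latency_ms(hit_rate: float,
                         local_ms: float = 5.0,
                         wan_ms: float = 50.0) -> float:
    """Average access time through a cloud gateway cache: hits cost
    local latency, misses pay the WAN round trip. Illustrative only."""
    return hit_rate * local_ms + (1 - hit_rate) * wan_ms

for h in (1.0, 0.95, 0.80):
    print(f"hit rate {h:.0%}: {effective_latency_ms(h):.2f} ms average")
```

Only at a 100% hit rate does the average equal local disk; even 95% hits leaves you measurably slower, which is the point above.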


AND THAT’S WHERE IT GETS INTERESTING

So your hard disks are now so close in performance and reliability – particularly after taking RAID into account – that the importance of the old definitions is blurred. You can have tier one on SATA II disks. No problem; lots of small and medium-sized orgs DO have such an arrangement.

But that implies that what used to be “tier one” and “tier two” have largely merged, the line between them blurring – just in time for these two highly differentiated technologies to take their place. I have a vision of the future where high-performance, high-volume sites use SSD for more and more of tier one, RAIDed SAS and/or SATA drives for tier two, and cloud storage for backup/replication/tier three. Then tiers have meaning again – tier one is screaming fast, tier two is the old standby, combining fast and reliable, and tier three is cloud storage (be it public or private; others can argue that piece out)…

And that has implications for both budgeting and architecture. SSD is more expensive. Depending upon your provider and usage patterns, cloud is less expensive (than disk, not tape). That implies a shift of dollars from the low end to the high end of your spending patterns. Perhaps, if you have savvy contract negotiators, it means actual savings overall on storage expenses, but more likely you’re just smoothing the spending out by paying monthly for cloud services instead of hitting the occasional “Oh no, we have to buy a new array”.
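To put rough numbers on that smoothing effect, a toy comparison – the array price, cloud rate, and capacity below are placeholders, not quotes from any vendor:

```python
# Toy capex-vs-opex comparison. Every number is a made-up placeholder.
array_price = 60_000.0        # hypothetical up-front array purchase
cloud_per_tb_month = 150.0    # hypothetical cloud storage rate
tb_stored = 20.0

monthly_cloud = cloud_per_tb_month * tb_stored
print(f"Cloud bill: ${monthly_cloud:,.0f}/month; the array's sticker "
      f"price equals {array_price / monthly_cloud:.0f} months of it.")
```

Same money either way, roughly – but one is a predictable line item and the other is a budget fight every few years.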


A BRIGHT, TIERFUL FUTURE

But tiering is a lot more attractive if you have three genuinely distinct tiers that serve specific purposes. Many organizations will start with tape as the final destination for backup purposes, but I don’t believe they’ll stay there. Backing up to disk has a long history at this point, and if that backup is going to disk that you can conceivably keep for as long as you’re willing to pay for it, I suspect that archival will be the primary focus of tape going forward. I don’t predict that tape will die; it is just too convenient and too intertwined to walk away from. And it makes sense for archival purposes – “we have to keep this for seven billion years because of government regulation, but we don’t need it” is a valid use for storage that you don’t pay for by the month and that is stable over longer periods of time.

Of course I think you should throw an ARX in front of all of this storage to handle the tiering for you, but there are other options out there; something will have to make the determination, so find what works best for you.
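To be clear about what “the determination” looks like: the sketch below is emphatically not how ARX decides anything, just the shape of the age-based rule any tiering engine has to apply, with thresholds invented for illustration:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Invented thresholds -- tune to your own access patterns.
HOT_DAYS = 7      # touched this week -> tier one (SSD)
WARM_DAYS = 90    # touched this quarter -> tier two (SAS/SATA RAID)

def pick_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Map a file's last-access time to a target tier."""
    now = now or datetime.now(timezone.utc)
    age = now - last_access
    if age <= timedelta(days=HOT_DAYS):
        return "tier1-ssd"
    if age <= timedelta(days=WARM_DAYS):
        return "tier2-disk"
    return "tier3-cloud"

month_old = datetime.now(timezone.utc) - timedelta(days=30)
print(pick_tier(month_old))  # tier2-disk
```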

Not so long ago, I would have claimed that most organizations didn’t need SSD, and that only heavily stressed databases would actually see the benefit. These days I’m more sanguine about the prospects. As prices drop, ever more uses for SSDs become apparent. As of this writing they’re running $2 – $2.50 per gigabyte – a lot more than SATA or even SAS – but most companies don’t need nearly as much tier one storage as they do tier two.
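A quick sizing example shows why that asymmetry matters. The $2.25/GB SSD figure splits the range above; the SATA price and both capacities are invented for illustration:

```python
# Rough tier sizing. SSD $/GB is from the text above; the rest is invented.
ssd_per_gb, sata_per_gb = 2.25, 0.10
tier1_gb, tier2_gb = 2_000, 20_000   # small hot tier, big warm tier

print(f"Tier one (SSD):  ${ssd_per_gb * tier1_gb:>9,.0f}")
print(f"Tier two (SATA): ${sata_per_gb * tier2_gb:>9,.0f}")
```

At a tenth the capacity, the SSD tier lands in the same ballpark as the big SATA tier despite costing over twenty times as much per gig.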


WATCH FOR IT

That’s the way I see it falling out, too – prices on SSD will continue to drive down toward SAS/SATA levels, and you’ll want to back up tier one a lot more – which you should anyway – while cloud storage started out pretty inexpensive and will likely continue to drop while it all gets sorted out.

And like the “subroutine”, you’ll only find traditional hard disks standing alone in the data center for small or very special-purpose uses. Like the subroutine, they will give way to more specialized collections of storage on one end and “inlining” SSDs on the other.

Until the Next Big Thing comes along anyway.

(Image courtesy of steampunkworkshop.com – a steam-powered USB charger, the next big thing…)



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
