Here’s What NOT Using Cloud Storage Is Costing You

Cloud storage presents a very interesting analogy

It’s no secret that doing nothing is often considered a safe bet. The psychology behind inaction is well understood, particularly in IT, where the path of least disruption is usually maintaining the status quo rather than trying something new.

But once in a while, inaction can prove very costly. For instance, would you ignore leaky plumbing in your home? Barring flooding or damage, there may not be much urgency to act, at least until the water bill arrives, at which point you experience a change of heart. But what if the leak existed before you moved into the home and you never realized you were overpaying for water in the first place?

Cloud storage presents a very interesting analogy to the above situation. You may never realize how much unnecessary spending is a part of maintaining traditional storage until you examine some of the cloud-based alternatives.

Take, for instance, a hypothetical organization using 50TB of storage capacity today. Let’s examine the cost of traditional storage versus cloud storage using a few reasonable assumptions (a quick model follows the list):

  • Cost of traditional storage: $1500 per TB for traditional on-prem storage with 25% in annual maintenance. Assume replacement every 3 years
  • Cost of cloud storage: $0.026 per GB per month (using Google Cloud Storage pricing). Assume another 50% for bandwidth (downloads) and puts/gets, for a total of $0.039 per GB per month
  • Starting capacity: 50TB
  • Capacity growth: 30% annually
  • Storage price reduction: 20% annually
  • Administration and physical costs are ignored for now
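
Since the spreadsheet behind the chart isn’t published here, below is a minimal Python sketch of the comparison under one reading of these assumptions. The modeling choices (growth capacity bought incrementally between refreshes, maintenance charged on the purchase value of installed hardware, the 20% price decline applied to both options) are this sketch’s guesses, not figures from the post:

    # Back-of-the-envelope 9-year TCO: traditional vs. cloud storage.
    # Constants come from the assumptions above; the modeling choices
    # (incremental purchases between refreshes, maintenance billed on the
    # purchase value of installed hardware) are this sketch's guesses.
    YEARS = 9
    growth, price_drop = 0.30, 0.20   # annual capacity growth / price decline
    cap0_tb = 50.0                    # starting capacity, TB
    trad_tb_price = 1500.0            # on-prem $/usable TB in year 0
    maint_rate = 0.25                 # annual maintenance, % of purchase price
    refresh = 3                       # hardware replaced every 3 years
    cloud_gb_mo = 0.039               # $/GB/month incl. bandwidth and puts/gets

    trad = cloud = installed = owned_tb = 0.0
    for y in range(YEARS):
        cap = cap0_tb * (1 + growth) ** y
        tb_price = trad_tb_price * (1 - price_drop) ** y
        if y % refresh == 0:
            purchase = cap * tb_price               # full refresh at current prices
            installed = purchase
        else:
            purchase = (cap - owned_tb) * tb_price  # buy only this year's growth
            installed += purchase
        owned_tb = cap
        trad += purchase + maint_rate * installed   # capex plus annual maintenance
        gb_mo = cloud_gb_mo * (1 - price_drop) ** y
        cloud += cap * 1024 * gb_mo * 12            # TB -> GB, 12 months

    print(f"9-year TCO  traditional: ${trad:,.0f}   cloud: ${cloud:,.0f}")

Run with these assumptions, the traditional total comes out at well over double the cloud total, consistent with the gap the chart shows.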

Below is a chart that illustrates the differences in total cost of ownership (TCO) between cloud and traditional storage over the next 9 years:

[Chart: cumulative 9-year TCO, cloud vs. traditional storage]

So how can the gap between cloud and traditional storage be so substantial? Some will argue $1500 per TB is expensive for a storage system, as raw disk can be purchased for $100 per TB from an e-tailer. But raw disk capacity does not make a high-durability, always-on storage system. Most enterprise storage utilizes RAID protection, which raises costs and reduces usable capacity. Furthermore, enterprise storage typically requires multi-site redundancy for disaster recovery. In that light, $1500 per usable TB is a great, if not implausibly good, deal.
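
To make that arithmetic concrete, here is a toy calculation of how raw drive cost compounds into cost per usable TB. Every number below is an illustrative assumption, not a figure from the post:

    # Illustrative only: why $100/TB of raw disk is not $100 per usable TB.
    raw_per_tb = 100.0        # bare drive price from an e-tailer
    raid_efficiency = 0.8     # e.g. RAID-6 with 8 data + 2 parity drives
    sites = 2                 # second full copy at a disaster-recovery site
    system_multiplier = 3.0   # assumed markup for controllers, enclosures,
                              # software and support over bare drives

    usable_tb_cost = raw_per_tb / raid_efficiency * sites * system_multiplier
    print(f"${usable_tb_cost:,.0f} per usable TB")  # -> $750 with these guesses

Even with these rough multipliers, the result lands within striking distance of the $1500 per usable TB assumed above.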

Contrast that with top-tier cloud storage, which comes standard with triple data center redundancy and intra-site redundancy. Cloud storage requires virtually no maintenance or replacement, avoiding the two replacement-cycle “spikes” traditional storage incurs over the period. What’s more eye-popping is that this comparison does not account for the administrative cost savings of cloud storage (doing away with day-to-day tasks such as failure management, maintenance and upgrades), nor for the environmental costs of power, cooling and floor space.

What’s missing in the comparison? A way to deliver cloud storage as a replacement for traditional storage. Cloud-integrated storage provides that route, offering the familiar interfaces and performance of local storage while enabling the cost savings of the cloud.

Next time you are budgeting for data storage, consider the cost of maintaining the status quo.


More Stories By Nicos Vekiarides

Nicos Vekiarides is the Chief Executive Officer & Co-Founder of TwinStrata. He has spent over 20 years in enterprise data storage, both as a business manager and as an entrepreneur and founder in startup companies.

Prior to TwinStrata, he served as VP of Product Strategy and Technology at Incipient, Inc., where he helped deliver the industry's first storage virtualization solution embedded in a switch. Prior to Incipient, he was General Manager of the storage virtualization business at Hewlett-Packard. Vekiarides came to HP with the acquisition of StorageApps where he was the founding VP of Engineering. At StorageApps, he built a team that brought to market the industry's first storage virtualization appliance. Prior to StorageApps, he spent a number of years in the data storage industry working at Sun Microsystems and Encore Computer. At Encore, he architected and delivered Encore Computer's SP data replication products that were a key factor in the acquisition of Encore's storage division by Sun Microsystems.
