Amazon Delivers Cloud Archive Storage with Glacier

Glacier enables AWS customers to store their long-term retention data within Amazon’s existing data centres at a very low cost

At the end of August 2012, Amazon Web Services released their latest service offering – a long-term archive service called Glacier. As a complement to S3, their existing active data access service, Glacier provides long-term storage for “cold” data – information that has to be retained for a long time but doesn’t require frequent access.

What Exactly is Glacier?

Many organisations need to retain data in archive format for extended periods of time, whether for regulatory or compliance purposes or simply as part of their normal business processes. Good examples are medical, healthcare, financial and media (video and audio) data. For many IT departments, backup has typically provided a lazy way of archiving information: access to backups retained for up to 10 years gives a cheap and rudimentary archive service. However, backup isn’t archive (see my recent article on the subject), as an archive provides additional features around data management and security. Glacier enables AWS customers to store their long-term retention data within Amazon’s existing data centres at a very low cost, starting at $0.01/GB per month. The low cost is tempered by rather leisurely access times of three to five hours for data retrieval.

Within Glacier, data is stored in vaults. Up to 1,000 vaults may be created per AWS region, with each vault carrying individual security credentials via Amazon’s IAM (Identity and Access Management) service. Within a vault, data is stored as archives, each consisting of one or more files; if multiple files need to be stored together for consistency purposes, they can be combined into a single archive. An unlimited number of archives can be created, with a limit of 40TB on any single archive.
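As a rough sketch of what working with vaults and archives looks like – assuming the AWS SDK for Java, with an illustrative vault name, region and file – creating a vault and uploading an archive comes down to a handful of calls:

```java
import java.io.File;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.CreateVaultRequest;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;

public class GlacierUploadSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative credentials file and region; substitute your own setup.
        AWSCredentials credentials =
                new PropertiesCredentials(new File("AwsCredentials.properties"));
        AmazonGlacierClient glacier = new AmazonGlacierClient(credentials);
        glacier.setEndpoint("https://glacier.us-east-1.amazonaws.com/");

        // Vaults are created per region, up to 1,000 per account per region.
        glacier.createVault(new CreateVaultRequest().withVaultName("my-archive-vault"));

        // The high-level ArchiveTransferManager handles chunking and tree-hash
        // checksums. The returned archive ID is the only handle Glacier gives
        // back, so it must be recorded in an external index.
        ArchiveTransferManager atm = new ArchiveTransferManager(glacier, credentials);
        UploadResult result = atm.upload("my-archive-vault",
                "2012 medical records archive", new File("records-2012.tar"));
        System.out.println("Archive ID: " + result.getArchiveId());
    }
}
```

That archive ID is the only reference to the data that comes back from the service, a point that becomes important later when we look at what Glacier doesn’t include.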

Data uploaded to Glacier is stored using AES-256 encryption, managed by AWS.  Customers requiring their own encryption are advised to pre-encrypt their data before upload.
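A minimal sketch of such pre-encryption, using standard javax.crypto (the key handling and file names here are illustrative – in practice the key must be managed and retained outside Glacier, or the archive is unrecoverable):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class PreEncryptSketch {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key (older JDKs may need the JCE
        // unlimited-strength policy files). Store this key safely elsewhere.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

        // Encrypt the file before handing it to the Glacier upload.
        try (FileInputStream in = new FileInputStream("records-2012.tar");
             FileOutputStream out = new FileOutputStream("records-2012.tar.enc")) {
            out.write(iv); // prepend the IV so decryption can recover it
            CipherOutputStream cout = new CipherOutputStream(out, cipher);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) cout.write(buf, 0, n);
            cout.close();
        }
    }
}
```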

Amazon are claiming a data “durability” level of 99.999999999% per archive, although I’m not really sure how they define the term “durability” and exactly what that means in terms of data loss.

As mentioned earlier, data retrieval takes between three and five hours per archive. Retrieval requests (or jobs, as they are known in Glacier) are queued asynchronously, and completion can be notified via AWS SNS (Simple Notification Service). Once data is retrieved, it is available for the customer to access for 24 hours. The long retrieval time implies that the majority of Glacier data is stored on tape, with retrieval resulting in a copy to disk for general access. Based on the costs, this also makes sense.
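In SDK terms, retrieval is a two-step dance: initiate a job, then collect its output once it completes. A sketch, again assuming the AWS SDK for Java with illustrative names (a production system would wire the SNS topic to an SQS queue rather than poll as this does):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.DescribeJobRequest;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.JobParameters;

public class GlacierRetrieveSketch {
    static void retrieve(AmazonGlacierClient glacier, String vault,
                         String archiveId, String snsTopicArn) throws Exception {
        // Queue an asynchronous archive-retrieval job.
        String jobId = glacier.initiateJob(new InitiateJobRequest()
                .withVaultName(vault)
                .withJobParameters(new JobParameters()
                        .withType("archive-retrieval")
                        .withArchiveId(archiveId)
                        .withSNSTopic(snsTopicArn)))
                .getJobId();

        // Poll until the job completes (typically 3-5 hours); an SNS/SQS
        // notification would replace this loop in real use.
        while (!glacier.describeJob(new DescribeJobRequest(vault, jobId)).isCompleted()) {
            Thread.sleep(15 * 60 * 1000L); // check every 15 minutes
        }

        // Job output remains downloadable for around 24 hours after completion.
        try (InputStream body = glacier.getJobOutput(new GetJobOutputRequest()
                .withVaultName(vault).withJobId(jobId)).getBody()) {
            Files.copy(body, Paths.get("retrieved-archive.tar"));
        }
    }
}
```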

The Charging Model

Charging for Glacier is more complex than the other AWS offerings and includes the following components:

  • $0.01/GB/month for storage of data
  • Data upload – no charge for data volume
  • Upload and retrieval requests – $0.05 per 1000 requests
  • Archive query commands (list vault contents, get job status, delete objects) – no charge
  • Data retrieval – 5% of the stored archive per month is free, $0.011/GB and upwards after that
  • Data out (moving data outside an AWS region) – $0.12/GB down to $0.05/GB, dependent on volume
  • Moving data to EC2 – no charge
  • Deletion of data less than 90 days old – $0.033/GB

It’s interesting that there is a charge for deleting new data, presumably to encourage users to use the service for its intended purpose. In addition, only 5% of the archive can be retrieved per month without incurring costs (although data out incurs a separate charge), whereas there are no costs for transferring data to EC2. This creates an ecosystem that encourages data to be kept in Glacier, using EC2 as the indexing, search or refresh mechanism.
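To put the headline figures in context, here is a quick back-of-envelope calculation using the list prices above (the 10TB archive size is purely illustrative):

```java
public class GlacierCostSketch {
    public static void main(String[] args) {
        double storedGB = 10 * 1024;              // a 10TB archive
        double storagePerMonth = storedGB * 0.01; // $0.01/GB/month => $102.40
        double freeGBPerMonth = storedGB * 0.05;  // 5% free retrieval => 512 GB
        double earlyDeleteFee = storedGB * 0.033; // deleted < 90 days => $337.92
        System.out.printf("Storage: $%.2f/month; free retrieval: %.0f GB/month; "
                + "early-deletion fee: $%.2f%n",
                storagePerMonth, freeGBPerMonth, earlyDeleteFee);
    }
}
```

At roughly $1,230 a year for 10TB, the storage itself is remarkably cheap; it is the retrieval and early-deletion charges that need watching.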

What’s Not Included

Glacier itself is simply a large storage vault for data. Objects are referenced by 138-character keys. Data access is managed via REST-based APIs, which can also be used through pre-built Java and .NET SDKs. This means there are no facilities within Glacier for some of the most fundamental parts of an archive – notably metadata and indexing capabilities. These need to be developed by users themselves, and as yet I haven’t found anyone offering services that use Glacier as their storage platform. There are a few little quirks to bear in mind too. For instance, vaults are inventoried only on a daily basis, so the inventory could be inconsistent with any external index the user creates.
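The only built-in aid here is the vault inventory job. A sketch of requesting one, assuming the same AWS SDK for Java setup as above, so that an external index can at least be reconciled daily:

```java
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.JobParameters;

public class GlacierInventorySketch {
    // Kick off an inventory-retrieval job so an external index can be
    // checked against what the vault actually holds. The inventory is only
    // refreshed roughly once a day, so it can lag behind recent uploads.
    static String requestInventory(AmazonGlacierClient glacier, String vault) {
        // Once the job completes, GetJobOutput returns a JSON listing of
        // archive IDs, sizes, creation dates and descriptions.
        return glacier.initiateJob(new InitiateJobRequest()
                .withVaultName(vault)
                .withJobParameters(new JobParameters()
                        .withType("inventory-retrieval")
                        .withFormat("JSON")))
                .getJobId();
    }
}
```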

The Architect’s View

Amazon have provided a framework and storage repository that many organisations could use to store their data over the long term. This does not mean that tape is dead – far from it – Glacier itself may well be using tape technology. What Amazon are providing is a data store against which 3rd party developers can create their own archive solutions, in a similar way to that being done for S3 (think Nasuni or Jungledisk). There are already many other cloud archiving solutions available today (see the same recent article), and on its own Glacier doesn’t represent direct competition; rather, it provides another storage platform in which data can be stored. However, there are a few things to consider when using a Glacier-based service:

  • Indexing of data relies entirely on a 3rd party vendor’s indexing system, or needs to be managed by the end user
  • Taking data out of the archive to move elsewhere will incur a cost
  • Refreshing data within the archive will incur a cost

Glacier and the supporting services could therefore represent a significant and unexpected lock-in for customers.

Overall, Glacier does provide a framework against which developers can create new archive services, and that’s a good thing. Cost will be a significant factor for many, and the headline price of $0.01/GB/month certainly sounds attractive. Like the other AWS offerings, I’m sure Glacier will be very successful.

 
