Cloud Computing Expo: Cloud Optimized Storage Solutions

The basic ideology of Cloud Optimized Storage Solutions, as noted in the three previous installments, is to ingest significant amounts of both structured and unstructured content and, operating within the confines of SLAs and tiering, provide this data back to users with acceptable performance.

In the previous three Cloud Optimized Storage Solution (COSS) articles in this series, I've discussed the content being stored, the methods of storing it, and the principles derived from data tiering. Today, I want to jump ahead a bit and discuss how neural networks and heuristics can impact the processing of object and file data for the cloud.

One of the more recent advancements in computing has been the application of heuristics and neural networks. Heuristics has been defined as "…an educational method in which learning takes place through discoveries that result from investigations…" While heuristics has historically been used in products such as anti-virus software, it offers a wealth of capability for a COSS solution. Similarly, neural networks add processing layers and optimizations that learn patterns from underlying statistical data. How do these two technologies apply to COSS?

The basic ideology of COSS, as noted in the previous parts of this paper, is to ingest significant amounts of both structured and unstructured content and, operating within the confines of SLAs and tiering, provide this data back to users with acceptable performance. While this is a fairly reductionist description, it is how the data is allocated to storage that provides the greatest insight into the impact that neural nets and heuristics can potentially have. To illustrate this point, here is a graphical example of file placement within COSS without using heuristics.

As seen below, data is submitted to COSS via an API or other integration point, metadata is calculated for the object based on pre-defined categories of content (e.g., "Movies"), and the content is placed in Tier 1 for faster access and greater availability. Policy is enacted on this movie object such that it is automatically moved from Tier 1 to Tier 2 after a fixed period of time, and again to Tier 3 based on a similar time constraint. Globally, policy is additionally set for compression, encryption, deduplication, and other optimizations, and this is applied to content at rest as well as incoming data. Once data has been moved from tier to tier, there is no real process for retrieving that data and promoting it to a different tier based on access or usage patterns.
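To make the administrator-enforced model concrete, here is a minimal Python sketch of purely time-based tier demotion. The object fields, tier labels, and demotion thresholds are illustrative assumptions for this example, not the behavior of any specific COSS product.

```python
import time
from dataclasses import dataclass, field

# Illustrative, fixed demotion schedule: Tier 1 -> Tier 2 after 30 days,
# Tier 2 -> Tier 3 after 90 days. The thresholds are assumptions.
DEMOTION_SCHEDULE = [("tier1", "tier2", 30 * 86400),
                     ("tier2", "tier3", 90 * 86400)]

@dataclass
class StoredObject:
    key: str
    category: str                 # pre-defined category, e.g. "Movies"
    tier: str = "tier1"           # new content lands on Tier 1 by default
    ingested_at: float = field(default_factory=time.time)

def ingest(key: str, category: str) -> StoredObject:
    """Simulate API ingest: tag with a pre-defined category, place on Tier 1."""
    return StoredObject(key=key, category=category)

def apply_time_based_policy(obj: StoredObject, now: float | None = None) -> StoredObject:
    """Demote purely on elapsed time; nothing ever promotes an object back up."""
    now = now or time.time()
    age = now - obj.ingested_at
    for src, dst, threshold in DEMOTION_SCHEDULE:
        if obj.tier == src and age >= threshold:
            obj.tier = dst
    return obj

# A movie ingested 100 days ago ends up on Tier 3 regardless of how
# often users are actually watching it.
movie = ingest("holiday_video.mp4", category="Movies")
movie.ingested_at -= 100 * 86400
print(apply_time_based_policy(movie).tier)   # tier3
```

The key limitation shows up in the last two lines: access frequency never enters the decision, so a heavily watched file still slides down the tiers on schedule.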



While this example is extremely reductionist, it highlights the particular areas where neural nets and heuristics can be applied to improve both the way data is ingested and how it is maintained across its lifespan (i.e., until deletion). In essence, COSS, under this particular model, is administrator-enforced. Here, then, is an example of data ingest to COSS with neural nets and heuristics enabled:

Almost immediately, it becomes apparent that COSS is taking a more active role in ingest and storage allocation for the file data. Instead of relying on a globally created category (e.g., "Movies"), COSS applies bit-patterning and packet inspection to the data being ingested to determine file composition. Such inspection has several significant implications: less time is spent applying policy enhancements such as deduplication and encryption (which are storage-processor intensive) and more time is spent optimizing content layout and placement within tiers (the default becomes Tier 2, balancing accessibility and performance). Once the data is inspected, it is determined to be of a certain type (e.g., application/octet-stream) and placed in a default tier (Tier 2). COSS recognizes that this data is already in a compressed state and rules out compression and deduplication policies and, potentially, depending on source/API mapping, encryption policies as well. Once data is at rest on Tier 2, COSS watches file access patterns to determine when and how it is being accessed. If statistical trending against that file shows increased access, COSS promotes the file to a higher tier for better performance and access; if trending shows a decline in traffic to that file, COSS can demote it to Tier 2, Tier 3, and so on without affecting surrounding data.
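A rough Python sketch of that heuristic path might look like the following. The magic-byte table, default tier, trend window, and promotion/demotion thresholds are assumptions made for illustration; a real engine would derive these from its learned statistical database rather than hard-code them.

```python
import time
from collections import deque

# A tiny stand-in for bit-pattern/packet inspection: match leading magic
# bytes. Real inspection would be far richer; these signatures are examples.
MAGIC_BYTES = {
    b"\x1f\x8b": "gzip",        # already compressed
    b"PK\x03\x04": "zip",       # already compressed
}
COMPRESSED_TYPES = {"gzip", "zip"}

def inspect(payload: bytes) -> str:
    """Classify incoming data by its content, not by a pre-defined category."""
    for magic, kind in MAGIC_BYTES.items():
        if payload.startswith(magic):
            return kind
    return "application/octet-stream"

def ingest_policies(kind: str) -> dict:
    """Rule out compression/dedup for content that is already compressed."""
    compressed = kind in COMPRESSED_TYPES
    return {"tier": "tier2",              # default landing tier
            "compress": not compressed,
            "deduplicate": not compressed}

class AccessTrend:
    """Track recent accesses and recommend promotion or demotion on the trend."""
    def __init__(self, window_seconds: int = 3600,
                 promote_at: int = 50, demote_at: int = 5):
        self.window = window_seconds
        self.promote_at, self.demote_at = promote_at, demote_at
        self.hits: deque[float] = deque()

    def record_access(self, now: float | None = None) -> None:
        self.hits.append(now or time.time())

    def recommend(self, current_tier: str, now: float | None = None) -> str:
        now = now or time.time()
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()            # drop accesses outside the window
        if len(self.hits) >= self.promote_at:
            return "tier1"                 # hot: promote
        if len(self.hits) <= self.demote_at and current_tier != "tier3":
            return "tier3"                 # cold: demote
        return current_tier
```

In contrast to the time-based example earlier, placement here is driven by what the data is and how it is being used, which is the behavior the heuristic model is meant to provide.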

Implications for Global Implementations

The examples above highlighted policies and actions on a single file or object, but when this is extrapolated to the COSS system at a global level, it becomes a much more powerful tool. In essence, the heuristic database and neural network capabilities can be applied to linked COSS systems for global replication and file/object processing. As patterning is completed against file types and categories are created or refined by the engine, the resulting database can be asynchronously updated to other members of the larger COSS network. This replication would make use of recursive heuristic database updates to ensure consistency across the other COSS members and to ensure that data residing on all COSS members is categorized and tagged appropriately. Additionally, since one of the mechanisms for data protection within COSS is to maintain multiple data replicas for redundancy, this replication also serves to spread the heuristic database itself for protection.
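As a hedged sketch of how such asynchronous database replication could be structured, the snippet below pushes versioned category updates to peer COSS members. The peer names, update format, and transport (a simulated async push) are purely illustrative assumptions; a real system would use its own replication protocol.

```python
import asyncio
import json

# Hypothetical peer endpoints for linked COSS members; purely illustrative.
PEERS = ["coss-us-east.example.net", "coss-eu-west.example.net"]

class HeuristicDB:
    """A trivial stand-in for the locally learned category database."""
    def __init__(self):
        self.categories: dict[str, str] = {}   # pattern signature -> category
        self.version = 0

    def learn(self, signature: str, category: str) -> dict:
        """Record a newly derived category and produce a versioned update."""
        self.version += 1
        self.categories[signature] = category
        return {"version": self.version, "signature": signature, "category": category}

    def apply(self, update: dict) -> None:
        """Apply an update received from a peer, keeping the highest version seen."""
        if update["version"] > self.version:
            self.version = update["version"]
        self.categories[update["signature"]] = update["category"]

async def replicate(update: dict, peer: str) -> None:
    """Placeholder for an async push to one peer; here it only serializes
    the payload instead of using a real replication transport."""
    await asyncio.sleep(0)                      # yield, simulating network I/O
    print(f"-> {peer}: {json.dumps(update)}")

async def broadcast(db: HeuristicDB, signature: str, category: str) -> None:
    update = db.learn(signature, category)
    await asyncio.gather(*(replicate(update, p) for p in PEERS))

asyncio.run(broadcast(HeuristicDB(), "riff-avi-header", "Movies"))
```

Because every update carries a version, a member receiving updates out of order can still converge on a consistent view, which is the consistency goal described above.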

Implications for Heuristic Processing and Control

The additional processing overhead of heuristic analysis introduces an added layer of complexity in implementation and design. Given that COSS is designed to utilize commodity hardware, with the differentiating feature being the actual software "brains," the added performance burden of a heuristic model might seem untenable for basic implementations. However, as recent research has shown, the simple addition of a General-Purpose Graphics Processing Unit (GPGPU) to the COSS hardware to offload these more complex routines would still fit within the commodity-hardware paradigm. By coding to specific GPGPU routines based on NVIDIA's CUDA specifications, for example (as evidenced by the research into WPA key decoding), the heuristic branch paths could be removed from the general storage operation paths handled by the storage system processor. Since each GPGPU typically owns a local, low-latency cache (e.g., GDDR4) and has multiple programmable vector units, the ability to process large sets of data is assured.
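The split between the storage path and the offloaded heuristic path can be sketched in Python as below. The GPGPU kernel itself is abstracted behind a stand-in function (a real node would launch a CUDA kernel there); the node structure, function names, and single-worker executor are assumptions made only to show the separation of the two paths.

```python
from concurrent.futures import ThreadPoolExecutor, Future

def gpgpu_bit_pattern_scan(payload: bytes) -> str:
    """Stand-in for a CUDA kernel launch. Here it is just a trivial CPU check;
    in a real COSS node this call would dispatch to the GPGPU and return the
    inferred content class."""
    return "compressed" if payload[:2] == b"\x1f\x8b" else "unknown"

class IngestNode:
    """Keep the general storage path on the calling thread while heuristic
    branch paths run on a dedicated offload executor (one per node)."""
    def __init__(self):
        self.offload = ThreadPoolExecutor(max_workers=1,
                                          thread_name_prefix="gpgpu-offload")

    def ingest(self, key: str, payload: bytes) -> Future:
        self.write_to_default_tier(key, payload)      # storage path, not blocked
        return self.offload.submit(gpgpu_bit_pattern_scan, payload)

    def write_to_default_tier(self, key: str, payload: bytes) -> None:
        print(f"stored {key} ({len(payload)} bytes) on tier2")

node = IngestNode()
classification = node.ingest("movie.bin", b"\x1f\x8b" + b"\x00" * 64)
print("heuristic result:", classification.result())
```

The point of the sketch is only the shape of the dispatch: the storage processor completes the write immediately, while the heuristic result arrives asynchronously from the offload engine.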

One area that would need to be addressed with the use of GPGPUs for heuristic programming is redundancy. Given that no methodology currently exists to maintain GPGPU functionality across two discrete units in a single system, either the programming path would need to account for multiple GPGPU engines within the general I/O complex or redundancy would need to be designed into the heuristic path itself. In a clustered front-end I/O stack (à la EMC's Atmos), it would be a simple matter of having a GPGPU per individual node member, with the overall software stack processing the heuristic path in a parallel fashion.

More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing and architecting private cloud solutions for commercial customers.
