The Future of Big Data

The true value of Big Data lies in the useful information that can be derived from it

About once every five years, the technology industry blazes a new path of innovation. The PC, the Internet, smart mobility, and social networking have all emerged over the past 20-plus years, delivering new technologies and business ecosystems that have fundamentally changed the world. The latest catalyst is Big Data.

Nearly every major new computing era has had a hot IPO serve as a catalyst for more widespread adoption of the shift. The recent Splunk IPO evokes parallels with Netscape, whose 1995 IPO catalyzed a wave of Internet computing in both B2C and B2B marketplaces and ushered in a plethora of new dot-com businesses. Hundreds of billions of dollars in new value were subsequently created, and business environments changed forever.

Big Data refers to the enormous volume, velocity, and variety of data that exists and has the potential to be turned into business value. The challenge of Big Data is taking inhuman amounts of data and distilling it into information that human brains can use. Most businesses accumulate astronomical amounts of data, and the volume is expanding at an alarming rate. According to IDC, the volume of digital content in the world will grow to 2.7 billion terabytes (2.7 zettabytes) in 2012, up 48% from 2011, and will reach 8 billion terabytes by 2015. [1]
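As a quick sanity check on those projections, the implied 2011 volume and the annualized growth rate through 2015 can be backed out with a few lines of arithmetic (a minimal sketch; the zettabyte conversion and the derived rates are mine, not IDC's):

```python
# Back-of-the-envelope check of the IDC figures cited above.
# 2.7 billion terabytes is 2.7 zettabytes (1 ZB = 1e9 TB).

volume_2012_zb = 2.7          # IDC projection for 2012
growth_2011_to_2012 = 0.48    # "up 48% from 2011"
volume_2015_zb = 8.0          # IDC projection for 2015

# Implied 2011 volume: 2.7 / 1.48, roughly 1.8 ZB
volume_2011_zb = volume_2012_zb / (1 + growth_2011_to_2012)

# Compound annual growth rate implied for 2012-2015:
# (8 / 2.7) ** (1/3) - 1, roughly 44% per year
cagr = (volume_2015_zb / volume_2012_zb) ** (1 / 3) - 1

print(f"Implied 2011 volume: {volume_2011_zb:.1f} ZB")
print(f"Implied 2012-2015 growth: {cagr:.0%} per year")
```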

The data flood, of course, comes from both structured corporate databases and unstructured data from Web pages, blogs, social networking messages, and other sources. For example, there are now countless digital sensors worldwide in industrial equipment, automobiles, electrical meters, and shipping crates. They can measure and communicate location, movement, vibration, temperature, humidity, and even chemical changes in the air. Companies wield data like a weapon. Retailers like Wal-Mart and Kohl's analyze sales, pricing, economic, demographic, and weather data to tailor product selections at particular stores and to time price markdowns. Logistics companies like UPS mine data on truck delivery times and traffic patterns to fine-tune routing.

Today, a whole ecosystem of new businesses is springing up to engage with this new reality: companies that store data, companies that mine data for insight, and companies that aggregate data to make it manageable. But it's an ecosystem that's still emerging, and its exact shape has yet to become clear.

One of the biggest challenges of working with Big Data is assembling it and preparing it for analysis. Different systems store data in different formats, even within the same company. Assembling, standardizing, and cleaning data of irregularities, all without scrubbing it of the information that makes it valuable, is a central challenge of this space.
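To make that concrete, here is a minimal sketch of the kind of normalization step involved. The record layouts, field names, and values are hypothetical, standing in for two internal systems that store the same customer data in different formats:

```python
from datetime import datetime

# Hypothetical example: two internal systems holding the same
# customer record in different formats.
crm_record = {"name": "ADA LOVELACE", "signup": "12/28/2011", "spend": "$1,204.50"}
web_record = {"full_name": "ada lovelace", "signup_date": "2011-12-28", "spend_usd": 1204.5}

def normalize_crm(rec):
    """Map the CRM layout onto a single canonical schema."""
    return {
        "name": rec["name"].title(),
        "signup_date": datetime.strptime(rec["signup"], "%m/%d/%Y").date().isoformat(),
        "spend_usd": float(rec["spend"].lstrip("$").replace(",", "")),
    }

def normalize_web(rec):
    """The web layout is closer to canonical; just tidy the name."""
    return {
        "name": rec["full_name"].title(),
        "signup_date": rec["signup_date"],
        "spend_usd": float(rec["spend_usd"]),
    }

# Both sources now yield identical, analysis-ready records:
# standardized, without losing the values that make them useful.
assert normalize_crm(crm_record) == normalize_web(web_record)
```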

Of course, Hadoop, an open source software framework derived from Google's MapReduce and Google File System (GFS) papers, is being leveraged by several technology vendors to do just that. Hadoop maps tasks across a cluster of machines, splitting them into smaller sub-tasks, before reducing the results into one master calculation. It's really an old grid computing technique given new life in the age of cloud computing.
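As a rough illustration of that pattern, here is a minimal, single-process sketch of the classic MapReduce word count. Hadoop distributes these phases across many machines and handles scheduling and failures; this toy version only shows the map, shuffle, and reduce steps:

```python
from collections import defaultdict

# Toy, single-process illustration of the MapReduce pattern that
# Hadoop distributes across a cluster: map, shuffle, then reduce.

documents = [
    "big data needs big tools",
    "hadoop maps tasks across machines",
    "tools reduce results into one answer",
]

def map_phase(doc):
    """Map: each input split independently emits (key, value) pairs."""
    return [(word, 1) for word in doc.split()]

# Shuffle: group all values emitted for the same key together.
grouped = defaultdict(list)
for doc in documents:                 # in Hadoop, one mapper per input split
    for word, count in map_phase(doc):
        grouped[word].append(count)

# Reduce: collapse each key's list of values into one result.
word_counts = {word: sum(counts) for word, counts in grouped.items()}

print(word_counts)   # {'big': 2, 'data': 1, 'tools': 2, ...}
```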

Hadoop is converging with other technology advances, such as high-speed data analysis made possible by parallel computing, in-memory processing, and lower-cost flash memory in the form of solid state drives. The prospect of being able to process troves of data very quickly, in memory, without time-consuming forays to retrieve information stored on disk drives, is a big advance that will enable companies to assemble, sort, and analyze data much more rapidly.
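A minimal sketch of why that matters is below. The sales data and query are hypothetical, and real in-memory platforms add columnar storage and massive parallelism on top of this basic idea, but the core trade-off, paying the disk-read cost once instead of on every query, is the same:

```python
import csv
import os
import tempfile

# Hypothetical sales rows standing in for a much larger dataset.
rows = [("west", 120), ("east", 340), ("west", 75)]

# Write the data to a file on disk, a stand-in for a warehouse table.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["region", "amount"])
    writer.writerows(rows)

def total_from_disk(region):
    """Disk-bound query: re-read and re-parse the file every time."""
    with open(path, newline="") as f:
        return sum(int(r["amount"]) for r in csv.DictReader(f)
                   if r["region"] == region)

# In-memory approach: pay the load cost once, then answer repeated
# queries against the cached structure without touching storage again.
with open(path, newline="") as f:
    cache = list(csv.DictReader(f))

def total_in_memory(region):
    return sum(int(r["amount"]) for r in cache if r["region"] == region)

assert total_from_disk("west") == total_in_memory("west") == 195
os.remove(path)
```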

For example, T-Mobile is using SAP's HANA to mine data from stores, text messages, and call centers on its 30 million U.S. customers to tailor personalized deals. With the SAP system, what used to take a week can be done in three hours. Organizations that can leverage this capability to make faster and more informed business decisions will have a distinct advantage over competitors.

In a short period of time, Hadoop has transitioned from relative obscurity as a consumer Internet project into the mainstream consciousness of enterprise IT. Hadoop is designed to handle mountains of unstructured data. As it exists today, however, the open source code is a long way from meeting enterprise requirements for security, management, and efficiency without serious customization. Enterprise-scale Hadoop deployments require costly IT specialists capable of stitching together a number of disjointed processes, which currently limits adoption to organizations with substantial IT budgets.

It will take a refined platform to enable Hadoop and its derivatives to fit into the enterprise as a complement to existing data analytics and data warehousing tools from established business process vendors like Oracle, HP, and SAP. At Zettaset, for example, we are focused on making Hadoop much more accessible to enterprises of all sizes by creating a high-availability platform that takes much of the complexity out of assembling and preparing huge amounts of data for analysis. We have aggregated multiple steps into a streamlined, automated process, significantly enhanced security, and are now integrating our software into an appliance that can be racked in the data center and easily managed through a user-friendly GUI.

The true value of Big Data lies in the useful information that can be derived from it. The future of Big Data, therefore, is to do for data and analytics what Moore's Law has done for computing hardware: exponentially increase the speed and value of business intelligence. Whether it is linking geography and retail availability, using patient data to forecast public health trends, or analyzing global climate trends, we live in a world full of data. Effectively harnessing Big Data will give businesses a whole new lens through which to see it.

Reference

  1. IDC, "IDC Predictions 2012: Competing for 2020," December 2011.

About the Author

With more than 25 years of leadership experience in both start-up and established corporations, Jim Vogt brings a wealth of business and technology expertise to his role as president and CEO of Zettaset. Most recently, he served as senior vice president and general manager of the Cloud Services business unit at Blue Coat Systems. Prior to Blue Coat, he served as president and CEO at Trapeze Networks, which was acquired by Belden, Inc. He was also president and CEO at data encryption start-up Ingrian Networks (acquired by SafeNet in April 2008). Prior to his private company posts, Vogt spent 11 years with SynOptics, Bay Networks, and Nortel, where he held several product line and general management roles, including president of Nortel's Small Business Solutions group, vice president and general manager of Bay's workgroup product and distributed network systems divisions, and vice president of product management for Bay's desktop products group.

Jim holds a BS in electrical engineering from the University of Nevada and an MBA from Santa Clara University.
