The Three Vs of Big Data

Data-intensive computing to unify, theorize, experiment, and do simulation at scale

Big Data is a top technology trend for 2012, according to Forrester Research. The Economist has called Big Data a new, game-changing asset, and the Harvard Business Review has termed it a scientific revolution. Why a scientific revolution? Because it is data-intensive computing to unify, theorize, experiment, and simulate at scale.

It is also termed the Fourth Paradigm – “The techniques and technologies for such data-intensive science are so different that it is worth distinguishing data-intensive science from computational science as a new, fourth paradigm for scientific exploration.”



Big Data is when the size of the data itself becomes part of the problem. But Big Data is not just “big”. There are three Vs of Big Data:

  1. Volume – terabytes of records, transactions, tables, and files. A Boeing jet engine spews out 10TB of operational data every 30 minutes of operation, so a four-engine jumbo jet can create 640TB on a single Atlantic crossing. Multiply that by the roughly 25,000 flights flown each day and you get the picture (see the back-of-the-envelope sketch after this list).
  2. Velocity – batch, near-time, real-time, streams. Today’s online ad serving has about 40ms to respond with a decision. Financial services need to compute customer scoring probabilities in close to 1ms. Streamed data, such as movies, must travel at high speed for proper rendering.
  3. Variety – structured, unstructured, semi-structured, and all of the above in a mix. Walmart processes 1M customer transactions per hour and feeds the information into a database estimated at 2.5PB (petabytes). Data sources old and new include RFID, sensors, mobile payments, in-vehicle tracking, etc.
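
As a quick sanity check on the Volume numbers, here is a minimal back-of-the-envelope sketch in Python. The ~8-hour crossing time is an assumption implied by the 640TB figure rather than something stated above, and the daily extrapolation assumes, unrealistically, that every flight is an eight-hour, four-engine crossing.

```python
# Back-of-the-envelope check of the Volume figures quoted above.
# Assumption: an Atlantic crossing takes ~8 hours (implied by the 640TB total).

TB_PER_ENGINE_PER_HALF_HOUR = 10   # 10TB every 30 minutes, per engine
ENGINES = 4                        # four-engine jumbo jet
CROSSING_HOURS = 8                 # assumed flight time

tb_per_crossing = TB_PER_ENGINE_PER_HALF_HOUR * ENGINES * CROSSING_HOURS * 2
print(f"One crossing: {tb_per_crossing}TB")                 # 640TB

FLIGHTS_PER_DAY = 25_000           # flights flown each day (from the post)
eb_per_day = tb_per_crossing * FLIGHTS_PER_DAY / 1_000_000  # TB -> EB
print(f"If every flight looked like this: ~{eb_per_day:.0f}EB per day")
```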

Because of these characteristics, traditional DBMS solutions are inadequate. Hence we have seen the growth of technologies such as Hadoop (an open-source implementation of the MapReduce model that originated at Google), which mostly processes unstructured data in batch mode. New solutions are needed for real-time processing.
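
To make the batch MapReduce model concrete, here is a minimal word-count sketch in plain Python. It only illustrates the map → shuffle → reduce flow; it is not Hadoop's actual API, and the function names are illustrative.

```python
# Illustrative sketch of the MapReduce pattern (not Hadoop's real API).
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs for each word in an unstructured text record."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

if __name__ == "__main__":
    documents = [
        "big data is when the size of the data becomes part of the problem",
        "volume velocity variety are the three vs of big data",
    ]
    pairs = (pair for doc in documents for pair in map_phase(doc))
    print(reduce_phase(shuffle(pairs)))
```

In Hadoop the same map and reduce steps run in parallel across a cluster, with the framework handling the shuffle, data distribution, and fault tolerance.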

See my blog from last year on this subject.

More Stories By Jnan Dash

Jnan Dash is Senior Advisor at EZShield Inc., Advisor at ScaleDB, and Board Member at Compassites Software Solutions. He has lived in Silicon Valley since 1979. Formerly he was Chief Strategy Officer (Consulting) at Curl Inc.; before that he spent ten years at Oracle Corporation, where he was Group Vice President, Systems Architecture and Technology, until 2002. He was responsible for setting Oracle's core database and application server product directions and worked with customers worldwide to translate future needs into product plans. Before Oracle he spent 16 years at IBM. He blogs at http://jnandash.ulitzer.com.
