The Next Big Thing: WeeData

‘Big Data’ has a problem, and that problem is its name.

Dig deep into the big data ecosystem, or spend any time at all talking with its practitioners, and you should quickly start hitting the Vs. Initially Volume, Velocity and Variety, the Vs rapidly bred like rabbits. Now we have a plethora of new V-words, including Value, Veracity, and more. Every new presentation on big data, it seems, feels obligated to add a V to the pile.

Gartner stands by the original three, stating earlier this year that

Big Data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.

(my emphasis)

But by latching onto the ‘big’ part of the name, and reinforcing that with the ‘volume’ V, we become distracted and risk missing the point entirely. The implication from a whole industry is that size matters. Bigger is better. If you don’t collect everything, you’re woefully out of touch. And if you’re not counting in petas, exas, zettas or yottas, how on earth do you live with the shame?

From the outset, though, size was only part of the picture. Streams of data from social networks, traffic management systems or stock control processes raise a lot of challenges because of the speed with which data must be ingested, or the rapidity with which actionable decisions must be taken. Data volumes may only be a few gigabytes or – oh, the embarrassment – megabytes, but the challenge is still very real. Combining data of different types from disparate sources also creates opportunities. Video from traffic cameras, combined with the logs of pressure sensors beneath the roads, weather data from forecasters and historic records, and social networking comments about the smoothness (or otherwise) of the morning commute build a rich picture that no single source can provide. The initial data sets may be large, but the subset that is actually relevant to the problem being studied will often be far smaller. Variety is the challenge here, not volume.
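To make the variety point concrete, here is a minimal sketch in Python, with entirely hypothetical record shapes and field names, of the unglamorous first step such a project faces: three sources describing the same morning commute, none of which even agree on how to write a timestamp.

```python
from datetime import datetime

# Hypothetical records from three unrelated sources, each in its own shape:
# a traffic-camera log, a road-sensor reading, and a social-network comment.
camera_log = {"camera_id": "M4-J19", "timestamp": "2013-07-08T08:15:00",
              "vehicles_per_minute": 42}
sensor_reading = {"sensor": "pressure-0113", "time": 1373271300,
                  "axle_count": 88}
comment = {"user": "@commuter", "posted": "2013-07-08 08:16",
           "text": "M4 crawling again this morning"}

def to_datetime(record):
    """Normalise the three different timestamp conventions to one type."""
    if "timestamp" in record:                    # ISO 8601 string
        return datetime.fromisoformat(record["timestamp"])
    if "time" in record:                         # Unix epoch seconds
        return datetime.utcfromtimestamp(record["time"])
    return datetime.strptime(record["posted"], "%Y-%m-%d %H:%M")

# Only once the sources share a time axis can they be combined at all.
for event in sorted([camera_log, sensor_reading, comment], key=to_datetime):
    print(to_datetime(event).isoformat(), event)
```

None of these records is large; the work is in reconciling their differences. That is the variety challenge in miniature.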

Of all the Vs clamouring to join the initial three, the most compelling to me must surely be Value. What is the value of the insight offered by this data, regardless of how big it is, how fast it’s coming at me, or how many formats it comprises?

The correct interpretation of the right analysis, performed on the optimal set of data. Surely that’s what we should celebrate? Sometimes, certainly, that analysis will be performed on mind-bogglingly huge data stores. But sometimes it won’t, and the results can be just as valuable. Is it better to collect everything and then extract the gigabyte or two of data you actually need, or does it make more sense to know what you’re interested in and devise a collection strategy that gets you the right data in the first place?
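A toy sketch of that trade-off, using a made-up event feed (the field names and thresholds are illustrative, not drawn from any real system): both strategies answer the same question, but one never holds the irrelevant bulk of the feed in the first place.

```python
import random

random.seed(0)

def event_stream(n=100_000):
    """Stand-in for a high-velocity feed; most of it is irrelevant to us."""
    for _ in range(n):
        yield {"road": random.choice(["M4", "A40", "B4009"]),
               "speed_mph": random.gauss(50, 15)}

# Strategy 1: collect everything, then extract the slice you actually need.
everything = list(event_stream())          # the whole feed, held in memory
m4_jams = [e for e in everything
           if e["road"] == "M4" and e["speed_mph"] < 20]

# Strategy 2: decide what you are interested in, and filter on arrival.
m4_jams_lean = [e for e in event_stream()
                if e["road"] == "M4" and e["speed_mph"] < 20]

# Comparable answers, very different storage footprints.
print(len(everything), len(m4_jams), len(m4_jams_lean))
```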

Big Data is impressive stuff. This fledgling market segment is full of interesting companies doing remarkable things. The ability to hold genomes, city traffic systems, internet search logs, or complex financial models in memory and to manipulate them in order to derive insight in real time is stunning and transformative. The switch from highly structured relational databases to schema-less data stores creates new opportunities, and new challenges.
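As a rough illustration of that switch, using SQLite and plain dictionaries as stand-ins rather than any particular big data store: the relational table commits to its columns before the first row arrives, while the schema-less records are free to disagree with one another.

```python
import sqlite3

# A relational table fixes its columns before the first row arrives...
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
db.execute("INSERT INTO readings VALUES ('pressure-0113', 88.0)")
# ...and adding a new field later means migrating every existing row.

# A schema-less store lets each record carry whatever shape it needs.
documents = [
    {"sensor": "pressure-0113", "value": 88.0},
    {"camera": "M4-J19", "vehicles_per_minute": 42, "weather": "rain"},
]
for doc in documents:
    # The flexibility is the opportunity; the absence of a guaranteed
    # shape is the new challenge.
    print(sorted(doc))
```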

Let’s celebrate the value of data volume when it’s appropriate to do so, but let’s not create the patently false impression that big is best.

Ben Kepes and I jokingly used Twitter to announce a new company at last year’s Defrag. WeeData (‘wee’ as in small, not the other kind) was to concern itself with everything that big data’s champions did not, and that’s a very large market indeed. We’re hoping that the people who rushed to sign up as customers, investors, and Board members got the joke…
