The Three ‘ilities’ of Big Data

Part 1: Portability, usability & quality converge to define how well the processing power of Big Data platforms can be harnessed

When talking about Big Data, most people talk about numbers: speed of processing and how many terabytes and petabytes the platform can handle. But deriving deep insights with the potential to change business growth trajectories relies not just on quantities, processing power, and speed, but also on three key 'ilities': portability, usability, and quality of the data.

Portability, usability, and quality converge to define how well the processing power of the Big Data platform can be harnessed to deliver consistent, high-quality, dependable, and predictable enterprise-grade insights.

Portability: Ability to transport data and insights in and out of the system

Usability: Ability to use the system to hypothesize, collaborate, analyze, and ultimately to derive insights from data

Quality: Ability to produce highly reliable and trustworthy insights from the system

Portability
Portability is measured by how easily data sources (or providers) as well as data and analytics consumers (the primary "actors" in a Big Data system) can send data to, and consume data from, the system.

Data sources can be internal systems or data sets, external data providers, or the apps and APIs that generate your data. A measure of high portability is how easily data providers and producers can send data to your Big Data system, and how effortlessly they can connect to the enterprise data system to deliver context.

Analytics consumers are the business users and developers who examine the data to uncover patterns. Consumers expect to be able to inspect their raw, intermediate, or output data, not only to define and design analyses but also to visualize and interpret results. A measure of high portability for data consumers is easy access, both manual and programmatic, to raw, intermediate, and processed data. Highly portable systems enable consumers to readily trigger analytical jobs and receive notifications when data or insights are available for consumption.
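To make this concrete, here is a minimal sketch of what programmatic portability can look like from the consumer side: pushing data in, triggering a job, and polling until insights are available. The host, endpoints, and payload shapes are hypothetical, not any real platform's API.

import time
import requests

# Hypothetical REST API of a Big Data platform; every endpoint and
# payload shape below is illustrative, not a real product's interface.
BASE = "https://bigdata.example.com/v1"

# A data provider pushes a batch of raw records into the system.
records = [{"user_id": 42, "event": "purchase", "amount": 19.99}]
requests.post(BASE + "/datasets/clickstream/records", json=records)

# An analytics consumer triggers an analytical job over that data set...
job = requests.post(BASE + "/jobs", json={"dataset": "clickstream",
                                          "analysis": "daily_revenue"}).json()

# ...and polls until the data or insights are available for consumption.
while True:
    status = requests.get(BASE + "/jobs/" + job["id"]).json()["status"]
    if status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

if status == "SUCCEEDED":
    insights = requests.get(BASE + "/jobs/" + job["id"] + "/output").json()

A highly portable system supports every step of this round trip without manual intervention; a push-based notification (webhook or message queue) would replace the polling loop in practice.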

Usability
The usability of a Big Data system is the largest contributor to its perceived and actual value. That's why enterprises need to consider whether their Big Data analytics investment provides functionality that not only generates useful insights but is also easy to use.

Business users need an easy way to:

  • Request analytics insights
  • Explore data and generate hypotheses
  • Self-serve and generate insights
  • Collaborate with data scientists, developers, and other business users
  • Track and integrate insights into business-critical systems, data apps, and strategic planning processes

Developers and data scientists need an easy way to:

  • Define analytical jobs
  • Collect, prepare, preprocess, and cleanse data for analysis (a minimal sketch follows this list)
  • Add context to their data sets
  • Understand how, when, and where the data was created, how to interpret it, and who created it
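As an illustration of the collect-prepare-preprocess-cleanse step, the sketch below uses pandas; the file names, columns, and cleansing rules are hypothetical stand-ins for rules a real pipeline would encode for its own sources.

import pandas as pd

# Load a hypothetical raw extract; real pipelines read from many sources.
df = pd.read_csv("raw_events.csv")

# Cleanse: drop exact duplicates and rows missing fields the analysis needs.
df = df.drop_duplicates()
df = df.dropna(subset=["user_id", "timestamp"])

# Preprocess: normalize types so downstream jobs see a consistent schema.
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0.0)

# Add context: record where the data came from and when it was prepared,
# so later consumers know how and when this data set was built.
df["source"] = "raw_events.csv"
df["prepared_at"] = pd.Timestamp.now(tz="UTC")

df.to_parquet("clean_events.parquet")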

Quality
The quality of a Big Data system is dependent on the quality of input data streams, data processing jobs, and output delivery systems.

Input Quality: As the number, diversity, frequency, and format of data sources explode, it is critical that enterprise-grade Big Data platforms track the quality and consistency of those sources. This tracking also drives downstream alerts to consumers about changes in the quality, volume, velocity, or configuration of their data streams.
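A minimal sketch of what such tracking might look like, assuming a simple batch interface; the thresholds, required fields, and alerting hook are all hypothetical.

# Required fields every record in this (hypothetical) stream should carry.
EXPECTED_FIELDS = {"user_id", "event", "timestamp"}

def check_batch(batch, expected_volume, tolerance=0.5):
    """Return a list of alert messages for one incoming batch."""
    alerts = []
    # Volume/velocity check: has batch size drifted far from expectations?
    if abs(len(batch) - expected_volume) > tolerance * expected_volume:
        alerts.append("volume drift: got %d, expected ~%d"
                      % (len(batch), expected_volume))
    # Consistency check: are required fields present in every record?
    bad = sum(1 for rec in batch if not EXPECTED_FIELDS <= rec.keys())
    if bad:
        alerts.append("%d records missing required fields" % bad)
    return alerts

batch = [{"user_id": 1, "event": "click", "timestamp": "2014-05-01T10:00:00Z"}]
for alert in check_batch(batch, expected_volume=1000):
    print("ALERT:", alert)  # in practice, notify downstream consumers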

Analytical Job Quality: A Big Data system should track, and notify users about, the quality of the jobs (such as MapReduce or event-processing jobs) that process incoming data sets to produce intermediate or output data sets.
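One way to picture this, as a sketch: wrap each job run, record its outcome, and notify subscribers when a run fails or looks suspicious. The notify hook and the job's return value are hypothetical.

import time
import traceback

def run_tracked(job_name, job_fn, notify):
    """Run one analytical job; record status, duration, and output size."""
    start = time.time()
    try:
        output_rows = job_fn()  # assume the job reports how many rows it wrote
        status = "SUCCEEDED"
    except Exception:
        output_rows, status = 0, "FAILED"
        notify("%s failed:\n%s" % (job_name, traceback.format_exc()))
    # Flag suspicious successes too, e.g. a job that produced no output.
    if status == "SUCCEEDED" and output_rows == 0:
        notify("%s succeeded but produced no output" % job_name)
    return {"job": job_name, "status": status,
            "rows": output_rows, "seconds": time.time() - start}

# A trivial stand-in job; a real one would run MapReduce or event processing.
record = run_tracked("daily_revenue", lambda: 12345, notify=print)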

Output Quality: Quality checks on the outputs of Big Data systems ensure that transactional systems, users, and apps offer dependable, high-quality insights to their end users. The output from Big Data systems needs to be analyzed for delivery predictability, statistical significance, and access according to the constraints of the transactional system.
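As a sketch of one such check, the function below gates delivery of a rate-style insight on sample size and a rough 95% two-sided significance test; the thresholds are illustrative choices, not prescribed by any platform.

import math

def deliverable(successes, trials, baseline_rate, min_trials=1000):
    """Deliver an observed rate only if it is statistically
    distinguishable from the baseline at roughly the 95% level."""
    if trials < min_trials:
        return False  # too little data to trust the insight
    observed = successes / trials
    std_err = math.sqrt(baseline_rate * (1 - baseline_rate) / trials)
    z = (observed - baseline_rate) / std_err
    return abs(z) > 1.96  # ~95% two-sided z-test

# 11.2% observed vs. a 10% baseline over 5,000 trials clears the gate.
print(deliverable(successes=560, trials=5000, baseline_rate=0.10))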

Though we've explored how portability, usability, and quality separately influence the consistency, quality, dependability, and predictability of your data systems, remember that it's the combination of the 'ilities' that determines whether your Big Data system will deliver actionable enterprise-grade insights.

This piece is the first in a three-part series on how businesses can squeeze maximum business value out of their Big Data analysis.

More Stories By Kumar Srivastava

Kumar Srivastava is the product management lead for Apigee Insights and Apigee Analytics products at Apigee. Before Apigee, he was at Microsoft where he worked on several different products such as Bing, Online Safety, Hotmail Anti-Spam and PC Safety and Security services. Prior to Microsoft, he was at Columbia University working as a graduate researcher in areas such as VOIP Spam, Social Networks and Trust, Authentication & Identity Management systems.
