Data Lake: Save Me More Money vs. Make Me More Money By @Schmarzo | @BigDataExpo #BigData

The data lake is a centralized repository for all the organization’s data of interest, whether internally or externally generated

2016 will be the year of the data lake. But I expect that much of the 2016 data lake effort will be focused on activities and projects that save the company more money. That is okay from a foundation perspective, but IT and the business will both miss the bigger opportunity to leverage the data lake (and its associated analytics) to make the company more money.

This blog examines an approach that allows organizations to quickly achieve some “save me more money” cost benefits from their data lake without losing sight of the bigger “make me more money” payoff – by coupling the data lake with data science to optimize key business processes, uncover new monetization opportunities and create a more compelling and differentiated customer experience.

Let’s start by quickly reviewing the concept of a data lake.

The Data Lake
The data lake is a centralized repository for all the organization’s data of interest, whether internally or externally generated. The data lake frees the advanced analytics and data science teams from being held captive to the data volume (detailed transactional history at the individual level), variety (structured and unstructured data) and velocity (real-time/right-time) constraints of the data warehouse. The data lake provides a line of demarcation that supports the traditional business intelligence/data warehouse environment (for operational and management reporting and dashboards) while enabling the organization’s new advanced analytics and data science capabilities (see Figure 1).


Figure 1: The Data Lake

The viability of the data lake was enabled by several factors, including:

  • The development of Hadoop as a scale-out processing environment. Hadoop, inspired by Google’s distributed file system and MapReduce papers, was developed and hardened by internet giants such as Yahoo, eBay and Facebook to store, manage and analyze petabytes of web, search and social media data.
  • The dramatic cost savings of running open source software (Hadoop, MapReduce, Pig, Python, HBase, etc.) on commodity servers, which can yield a 20x to 50x cost advantage over traditional, proprietary data warehousing technologies.
  • The ability to load data as-is, which means that a schema does NOT need to be created prior to loading the data. This supports the rapid ingestion and analysis of a wide variety of structured and unstructured data sources (see the brief schema-on-read sketch below).
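
A minimal schema-on-read sketch, assuming PySpark (one of the many Python-friendly tools that run against a Hadoop-based data lake); the file paths and any inferred fields are hypothetical placeholders, and the point is simply that no schema is declared before the data is loaded:

```python
# Minimal schema-on-read sketch (hypothetical paths; assumes `pip install pyspark`).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Load semi-structured JSON events exactly as they arrive -- no schema was
# defined ahead of time; Spark infers one at read time.
events = spark.read.json("/data-lake/raw/web_clickstream/")

# Load structured CSV extracts the same way, inferring column types.
orders = spark.read.csv("/data-lake/raw/orders/", header=True, inferSchema=True)

# The inferred schemas are available immediately for exploration.
events.printSchema()
orders.printSchema()
```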

The characteristics of a data lake include the following (a brief end-to-end sketch follows this list):

  • Ingest. Capture data as-is from a wide range of traditional (operational, transactional) and new (structured and unstructured) sources
  • Store. Store all your data in one environment for cross-functional business analysis
  • Analyze. Support the analytics and data science to uncover new customer, product, and operational insights
  • Surface. Put customer, product and operational insights in the hands of front-line employees and managers to drive more profitable customer engagement
  • Act. Integrate analytic insights into operational (Finance, Manufacturing, Marketing, Sales Force, Procurement, Logistics) and management (Business Intelligence reports and dashboards) systems
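
The sketch below strings these five steps together with PySpark; it is an illustration only, and every path, table and column name is an assumption rather than a prescribed implementation:

```python
# Illustrative ingest -> store -> analyze -> surface/act flow.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-lake-flow").getOrCreate()

# Ingest: capture raw operational and new data sources as-is.
orders = spark.read.csv("/data-lake/raw/orders/", header=True, inferSchema=True)
reviews = spark.read.json("/data-lake/raw/product_reviews/")

# Store: land everything in one environment (here, Parquet files in the lake).
orders.write.parquet("/data-lake/curated/orders/", mode="overwrite")
reviews.write.parquet("/data-lake/curated/reviews/", mode="overwrite")

# Analyze: derive a simple cross-functional insight per product.
insight = (
    orders.groupBy("product_id").agg(F.sum("amount").alias("revenue"))
    .join(
        reviews.groupBy("product_id").agg(F.avg("rating").alias("avg_rating")),
        on="product_id",
        how="left",
    )
)

# Surface / Act: publish the insight where dashboards and operational
# systems can pick it up.
insight.write.parquet("/data-lake/serving/product_insights/", mode="overwrite")
```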

Data Lake Foundation: Save Me More Money
Most companies today have some level of experience with Hadoop. And many of these companies are embracing the data lake in order to drive costs out of the organization. Some of these “save me more money” areas include:

  • Data enrichment and data transformation for activities such as converting unstructured text fields into a structured format, or creating new composite metrics such as the recency, frequency and sequencing of customer activities (a small sketch of these composite metrics follows this list).
  • ETL (Extract, Transform, Load) offload from the data warehouse. It is estimated that ETL jobs consume 40% to 80% of all data warehouse cycles, so organizations can realize immediate value by moving those jobs off of the expensive data warehouse and onto the data lake.
  • Data archiving, which provides a lower-cost way to archive or store data for historical, compliance or regulatory purposes.
  • Data discovery and data visualization that supports the ability to rapidly explore and visualize a wide variety of structured and unstructured data sources.
  • Data warehouse replacement. A growing number of organizations are leveraging open-source technologies such as Hive, HBase, HAWQ and Impala to move their business intelligence workloads off of the traditional RDBMS-based data warehouse to the Hadoop-based data lake.
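
As a concrete illustration of the enrichment and ETL-offload bullets above, here is a hedged sketch that computes recency and frequency composite metrics per customer on the data lake rather than in the data warehouse; the table layout and column names are assumptions for illustration only:

```python
# Hypothetical recency/frequency enrichment computed on the data lake,
# offloading work that would otherwise consume data warehouse cycles.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rfm-enrichment").getOrCreate()

# Detailed transaction history previously landed in the lake (assumed layout).
txns = spark.read.parquet("/data-lake/curated/transactions/")

rfm = (
    txns.groupBy("customer_id")
        .agg(
            F.max("txn_date").alias("last_txn_date"),  # most recent activity
            F.count("*").alias("frequency"),            # number of transactions
            F.sum("amount").alias("monetary"),          # total spend
        )
        .withColumn(
            "recency_days",
            F.datediff(F.current_date(), F.col("last_txn_date")),
        )
)

# Persist the composite metrics so BI tools and data science teams can reuse them.
rfm.write.parquet("/data-lake/curated/customer_rfm/", mode="overwrite")
```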

These customers are dealing with what I will call “data lake 1.0,” which is a technology stack that includes storage, compute and Hadoop. The savings from these “save me more money” activities can be nice, with a Return on Investment (ROI) typically in the 10% to 20% range. But if organizations stop there, then they are leaving the 5x to 10x ROI projects on the table. Do I have your attention now?

Data Lake Game-changer: Make Me More Money
Leading organizations are transitioning their data lakes to what I call “data lake 2.0,” which includes the data lake 1.0 technology foundation (storage, compute, Hadoop) plus the capabilities necessary to build business-centric, analytics-enabled applications. These additional data lake 2.0 capabilities include data science, data visualization, data governance, data engineering and application development. Data lake 2.0 supports the rapid development of analytics-enabled applications, built upon the Analytics “Hub and Spoke” data lake architecture that I introduced in my blog “Why Do I Need A Data Lake?” (see Figure 2).


Figure 2: Analytics Hub and Spoke Architecture

Data lake 2.0 and the Analytics “Hub and Spoke” architecture support the development of a wide range of analytics-enabled applications (one such application is sketched after the list and note below), including:

  • Customer Acquisition
  • Customer Retention
  • Predictive Maintenance
  • Marketing Effectiveness
  • Customer Lifetime Value
  • Demand Forecasting
  • Network Optimization
  • Risk Reduction
  • Load Balancing
  • “Smart” Products
  • Pricing Optimization
  • Yield Optimization
  • Theft Reduction
  • Revenue Protection

Note: Some organizations (public sector, federal, military, etc.) don’t really have a “make me more money” charter; so for these organizations, the focus should be on “make me more efficient.”
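
As referenced above, here is a hedged sketch of one analytics-enabled application from that list, a customer retention (churn-propensity) model built on lake data with Spark ML; the feature columns, label and paths are assumptions for illustration, not a prescribed implementation:

```python
# Illustrative customer-retention scoring on top of data lake 2.0.
# Feature names, the "churned" label and all paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("retention-propensity").getOrCreate()

# Customer features assembled earlier in the lake (e.g., the recency/frequency
# metrics above joined with churn labels from the CRM system).
data = spark.read.parquet("/data-lake/curated/customer_features/")

assembler = VectorAssembler(
    inputCols=["recency_days", "frequency", "monetary"],
    outputCol="features",
)
prepared = assembler.transform(data)
train, test = prepared.randomSplit([0.8, 0.2], seed=42)

# Simple baseline model: probability that a customer churns.
model = LogisticRegression(featuresCol="features", labelCol="churned").fit(train)

# Score customers and surface the results to operational systems
# (e.g., a retention campaign in the marketing platform).
scores = model.transform(test).select("customer_id", "probability", "prediction")
scores.write.parquet("/data-lake/serving/retention_scores/", mode="overwrite")
```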

Big Data Value Iceberg
The game-changing business value enabled by big data isn’t found in the technology-centric data lake 1.0, which is only the tip of the iceberg. Like an iceberg, the bigger business opportunities are hiding just under the surface in data lake 2.0 (see Figure 3).


Figure 3: Data Lake Value Iceberg

The “Save Me More Money” projects are the typical domain of IT, and that is what data lake 1.0 can deliver. However, if your organization is interested in the 10x-20x ROI “Make Me More Money” opportunities, then it needs to continue aggressively down the data lake path to get to data lake 2.0.
10x-20x ROI projects…do I have your attention now?


More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Hitachi Vantara as CTO, IoT and Analytics.

Previously, as a CTO within Dell EMC’s 2,000+ person consulting organization, he worked with organizations to identify where and how to start their big data journeys. He’s written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow where he teaches the “Big Data MBA” course. Bill also recently completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
