Examining the True Cost of Big Data

As you start on your Big Data journey or project, be sure to ask what exactly the business requires

The good news about the Big Data market is that we generally agree on the definition of Big Data: data of such volume, velocity and variety that a business needs new ways to collect, store, manage and analyze it in order to derive business value - together known as the "4 V's." However, the problem with such a broad definition is that it can mean different things to different people once you start to put real values next to those V's.

Let's be honest: volume means different things to different organizations. To some it is anything above 10 terabytes of managed data in their BI environment; to others it is petabyte scale and nothing less. Likewise, velocity can mean multiple billions of daily records coming into the enterprise from various external and internal networks. When it really comes down to it, each business situation will differ not only in size and speed but also, more important, in the business use case or requirement. A large bank's Big Data problem can be very different from that of an online retailer or an airline. Compare what a hospital is trying to do in collecting and analyzing patient sensor data with a utility provider running a smart grid, or with a telecommunications operator. True, all of it could be categorized as machine-generated or raw data, but the exact type of data will differ, not to mention the volume or growth rate. Probably the one common denominator across all of these industries is that everyone is keeping the data for longer time periods. No one is throwing it away - not even the detailed data.

The Many Cost Factors to Consider
Costs will of course vary depending on the IT budget each organization has allocated, but regardless, how the company directs IT budget dollars to new Big Data initiatives needs consideration. Let's face it, enterprise buyers didn't suddenly come into a bunch of newfound IT assets or budget line items, and the current world economic situation would certainly not suggest so. More likely, existing budgets are being reallocated: instead of spending more on, say, existing traditional data warehouses or appliances, monies are directed to new projects running on open source technologies, including Apache Hadoop, which promises low cost and ease of scale, not to mention a natural fit for managing and analyzing multi-structured data sets. The difficulty then is how you integrate your Hadoop environment, or have it co-exist, with the established BI or DW environment the business has grown to love and rely upon.

Leverage What You Already Have
Let's assume you have a data warehouse or data mart in place today, along with various ETL or data-movement tools and BI dashboard, analytics or reporting tools, and you don't want to disrupt business users - either by degrading performance levels or by forcing them to train on a new set of tools. In fact, you are likely already beholden to strict SLAs around response times for the various business reports and KPIs. At the same time, the business is demanding access to new data sets in order to glean better insights, either by analyzing that data directly or by co-mingling it with existing customer data. This could take the form of web logs, clickstream data or social media data from the various interactive sites the business now leverages and tracks. The promise of improving profit margins and gaining a competitive edge simply cannot be ignored.

As we all know, traditional relational or columnar databases can't handle unstructured data types, so IT needs to roll out a different solution to satisfy the business demands. Evaluations can take many forms but will typically start with which Hadoop distribution, which NoSQL or NewSQL database, and which query access tools to offer in addition to MapReduce. It is no easy task: a large number of technology solutions on the market today claim to run on or with Hadoop, providing MapReduce or SQL-like capabilities, and all satisfy the requirement of managing volumes of unstructured data. Some are more mature than others, some are proven, and not all are low cost. Open source looks very low cost on the surface, but as soon as you require any level of support - and let's face it, once the environment is live and relied upon as business critical, you will - it becomes a line item on your budget. Nor will the Big Data line item be just one line, because it must include all the components required to properly roll out a solution that truly satisfies the business demands. Just like any other IT environment, the obvious pieces include software licensing and support, hardware, skilled dedicated resources, professional services and training, and the dedicated time of business users who provide input on key requirements - the types of reports, queries and analysis - which will naturally change and evolve over time.
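To make the MapReduce piece concrete, here is a minimal, framework-free sketch of the map/reduce pattern applied to raw web logs. The log format, field positions and hit-counting task are all invented for illustration; in production the same logic would typically run through Hadoop Streaming, Hive or Pig rather than on a single machine.

```python
from collections import defaultdict

def map_phase(log_lines):
    # Map step: emit a (url, 1) pair for each well-formed log record.
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 6:        # crude sanity check on the record
            yield fields[5], 1      # request path in this toy log layout

def reduce_phase(pairs):
    # Reduce step: sum the counts emitted for each key.
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

sample = [
    '10.0.0.1 - - [01/Jan/2013:10:00:00] "GET /home HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2013:10:00:01] "GET /cart HTTP/1.1" 200 128',
    '10.0.0.1 - - [01/Jan/2013:10:00:02] "GET /home HTTP/1.1" 200 512',
]
print(reduce_phase(map_phase(sample)))  # {'/home': 2, '/cart': 1}
```

In a SQL-on-Hadoop tool the same job is a one-line GROUP BY, which is exactly the skills trade-off these evaluations need to weigh.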

Big Data Costs Can Quickly Creep Up
In terms of the hardware required to manage the new Big Data set, you may start out with a Hadoop cluster of, say, 10 nodes, which is certainly manageable. But if your data velocity is significant, you can quickly reach 100+ nodes, and now you face a number of other expenses: additional headcount and skilled resources to manage the environment proactively; tools for managing the cluster, including system management and alerting; and potentially add-on software, which varies by business use case but might cover real-time analytics against streaming data for, say, fraud detection or spotting unusual patterns. You may also need a business tool that provides a front-end GUI dashboard to track specific KPIs, or data visualization tools so business users can quickly understand what is going on. Very quickly the costs become less about storage and hardware and more about the software that extracts the most value from this newly collected data set.
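A back-of-envelope calculation shows how fast the node count creeps up. Every figure below is a hypothetical assumption chosen for illustration, not a vendor benchmark:

```python
# Rough cluster-sizing arithmetic; all inputs are assumed values.
daily_ingest_tb    = 2.0    # raw data arriving per day
retention_days     = 365    # the business keeps detailed data all year
replication_factor = 3      # typical HDFS default
compression_ratio  = 5.0    # varies widely by data type and database
usable_tb_per_node = 20.0   # disk each node contributes after overhead

stored_tb = (daily_ingest_tb * retention_days * replication_factor
             / compression_ratio)
nodes = stored_tb / usable_tb_per_node
print(f"~{stored_tb:.0f} TB on disk, ~{nodes:.0f} nodes")  # ~438 TB, ~22 nodes
```

Halve the compression ratio or double the daily ingest and the same arithmetic lands above 40 nodes - and with every node comes the headcount, management tooling and add-on software described above.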

There is no denying that Big Data presents great new opportunities, but reaching a quantifiable ROI in a fast time frame is still a very real challenge. Everyone is talking about Big Data and the innovative technology approaches to tackling it, yet it is still difficult to find many business success stories within any one industry sector. The market is still fairly immature, but the good news is that it's moving at a much faster pace than any other IT initiative today, and our data warehouse and BI forefathers have provided lessons learned over the past two decades.

Big Data Is Big Business but It Comes with Strict Requirements
If we want to examine more closely the main areas of expenditure for a Big Data project, it is probably best to look through the lens of a specific type of business and use case. Let's take a large financial institution with a number of existing traditional data warehouse / BI environments. The business doesn't want to throw any data away (let's face it, regulations don't allow that for a number of years), and it realistically wants to retain specific data sets for ongoing trending and analysis. This includes examining questions such as "what constitutes a low-risk client based on spending behavior patterns over a specific time period, cross-referenced with customer demographics," which will help the institution better target a particular segment of the market.
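The shape of that query is worth seeing. In the sketch below, Python's built-in sqlite3 stands in for whatever SQL engine fronts the data; the table layout, sample rows and the $500 "low-risk" cutoff are all invented for the example.

```python
import sqlite3

# Toy schema: a transaction history plus customer demographics.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (client_id INT, amount REAL, txn_date TEXT);
CREATE TABLE demographics (client_id INT, age INT, region TEXT);
INSERT INTO transactions VALUES
    (1, 120.0, '2013-01-05'), (1, 80.0, '2013-02-10'), (2, 9000.0, '2013-01-07');
INSERT INTO demographics VALUES (1, 45, 'west'), (2, 29, 'east');
""")

# Low-risk clients: average spend under a threshold across the period,
# cross-referenced with demographics, as in the question above.
rows = conn.execute("""
    SELECT t.client_id, d.age, d.region, AVG(t.amount) AS avg_spend
    FROM transactions t
    JOIN demographics d ON d.client_id = t.client_id
    WHERE t.txn_date BETWEEN '2013-01-01' AND '2013-12-31'
    GROUP BY t.client_id, d.age, d.region
    HAVING AVG(t.amount) < 500.0      -- hypothetical low-risk cutoff
""").fetchall()
print(rows)  # [(1, 45, 'west', 100.0)]
```

The point is less the SQL itself than the fact that the SQL-literate analysts the institution already employs can express the question without learning MapReduce.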

Given that the IT budget doesn't allow spending to increase in line with data growth rates, IT needs to seriously reduce costs and so decides to go the route of a Hadoop-based environment, given its promise of low-cost scale and its ability to provide insight into customer patterns by capturing semi-structured and unstructured data. Front-ending the warehouse with a dedicated Hadoop cluster is the preferred architectural approach, but business users still want access to both the Hadoop environment and the existing traditional data warehouse.

Given that we are talking about a financial institution, security and availability quickly rise to the top of the requirements list. At the same time, if business users are to work with that data, SQL query access - and the ability to point the current BI tool at the new data set - is also a requirement. If you can avoid moving large chunks of data on a frequent basis from one environment to the other, you will reduce not only cost but also latency. In an ideal world, being able to leverage the skill sets you already have and avoiding duplication of work is key.
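One way to meet that requirement is to define the new data in place and query it over standard SQL. The sketch below assumes a HiveServer2 endpoint and the third-party PyHive client - both assumptions about the environment, not things the architecture mandates - and the host, schema and paths are hypothetical.

```python
from pyhive import hive  # third-party client: pip install pyhive

conn = hive.connect(host="hadoop-gateway.example.com", port=10000)
cursor = conn.cursor()

# EXTERNAL means Hive reads the files where they already sit in HDFS,
# so no bulk copy into the warehouse is needed before analysts query it.
cursor.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
        ip STRING, ts STRING, url STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/weblogs'
""")

# Ordinary SQL from here on - the same statement a BI tool would issue.
cursor.execute("SELECT url, COUNT(*) AS hits FROM weblogs GROUP BY url")
print(cursor.fetchall())
```

Whether it is Hive or another SQL-on-Hadoop layer, the design goal is the same: the data stays put, and the existing SQL skills and BI tooling carry over.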

Below is a quick table outlining the main cost factors to consider, with a comment against each area on how cost could be reduced.

Big Data on Hadoop Cost Factors and Key Considerations to Drive Down Cost

- Storage: Look at databases that provide data compression to yield storage savings (better than GZip or LZO); see the sketch after this table.
- Hardware (nodes): Granular data compression at the database level will reduce the node count over time.
- Data analytics, skilled resources: Examine technology solutions that provide standard SQL or BI tool access in addition to MapReduce (Pig, etc.).
- Cluster management, skilled resources: Leverage existing dev-ops staff if you deploy a SQL-compliant data environment.
- Security: Look for database solutions that provide built-in security permissions and access controls.
- Availability / DR: Consider a data management environment that doesn't require additional tools for replication.
- Training: Consider solutions where you don't need to retrain or hire all-new resources; leverage what you have (standard SQL-skilled DBAs).
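As a rough illustration of the storage row, here is a tiny measurement of how well repetitive machine-generated records compress using Python's stdlib gzip. The record format is invented, and the point stands that granular, database-level compression can beat general-purpose codecs like this one on such data:

```python
import gzip

# Highly repetitive machine-generated records compress extremely well.
record = b"2013-01-05 10:00:00 sensor=42 status=OK reading=20.5\n"
raw = record * 100_000
packed = gzip.compress(raw)
print(f"raw {len(raw) / 1e6:.1f} MB -> gzip {len(packed) / 1e3:.0f} KB "
      f"({len(raw) / len(packed):.0f}x)")
```

Real ratios depend entirely on the data, but the budgeting logic is simple: every multiple of compression you gain removes disk, and eventually whole nodes, from the hardware line.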

Summary: Consider All Factors and Get Business Buy-in Quickly
Big Data is fundamentally a business problem. If you begin with the question "what is the business trying to achieve by collecting, storing and analyzing this new set of data?", you will start down the right path to realizing business gains. Whether you outsource the initiative or bring in external consultants and vendors to manage the project, the same questions will arise, and by leveraging what you already have - both existing IT environments and existing skills - you will be better able to contain costs.

Furthermore, we all love the promise of new and innovative technologies, including Hadoop and MapReduce, but failing to leverage the tried and tested standards we have come to rely upon doesn't make sense from either a technical or an economic standpoint. As you start on your Big Data journey or project, be sure to ask what exactly the business requires and how you can leverage what you already have today. As we all know, getting business-user buy-in is half the battle in a successful rollout.

More Stories By John Bantleman

John Bantleman, CEO of RainStor, has more than 20 years' experience in the management of software companies. Before RainStor, he transformed LBMS into a $45 million business ahead of its successful NASDAQ flotation in 1997; LBMS's technology is now part of CA's product portfolio. The following year, John was instrumental in the launch of Evolve, and drove the company through to a successful IPO on NASDAQ.

Returning to the UK in 2003, John spent 12 months working on the advisory boards of venture capital organizations such as Apax Partners. He joined RainStor Inc. as Chairman in 2004, became CEO at the start of 2007, and relocated back to the US in 2009 to head up worldwide operations.
