Babies, Big Data, and IT Analytics

Machine learning is a topic that has gone from obscure niche to mainstream visibility over the last few years

Machine learning and IT analytics can be just as beneficial to IT operations as they are for monitoring the vital signs of premature babies, where they identify danger signs too subtle for a human to detect. But to get meaningful results from machine learning, an enterprise must be willing to implement monitoring and instrumentation that gathers data across organizational silos and incorporates business activity.

Machine learning is a topic that has gone from obscure niche to mainstream visibility over the last few years. High-profile software companies like Splunk have tapped into the Big Data "explosion" to highlight the benefits of building systems that use algorithms and data to make decisions and evolve over time.

One recent article on the O'Reilly Radar blog caught my attention by drawing a connection between web operations and medical care for premature infants. "Operations, machine learning, and premature babies" by Mike Loukides describes how machine learning is used to analyze data streamed from dozens of monitors connected to each baby. The algorithms can detect dangerous infections a full day before any symptoms are noticeable to a human.

An interesting point from the article is that the machine learning system is not looking for spikes or irregularities in the data; it is actually looking for the opposite. Babies who are about to become sick stop exhibiting the normal variations in vital signs shown by healthy babies. It takes a machine learning system to detect changes in behavior too subtle for a human to notice.

Mike Loukides then wonders whether machine learning can be applied to web operations. Typical performance monitoring relies on thresholds to identify a problem. "But what if crossing a threshold isn't what indicates trouble, but the disappearance (or diminution) of some regular pattern?" Machine learning could surface symptoms that a human misses because the human is only watching for thresholds to be crossed.
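To make the idea concrete, here is a minimal sketch in Python of alerting on the loss of normal variation rather than on a threshold crossing. The window sizes and the 0.3 ratio are illustrative assumptions, not Loukides' method or any vendor's algorithm.

```python
# Hypothetical sketch: flag the *disappearance* of normal variation
# instead of a threshold breach. Constants are made-up assumptions.
from collections import deque
import statistics

BASELINE_WINDOW = 500      # samples used to learn "normal" variability
RECENT_WINDOW = 50         # samples checked for a loss of variability
MIN_VARIATION_RATIO = 0.3  # alert if recent stdev drops below 30% of baseline

baseline = deque(maxlen=BASELINE_WINDOW)
recent = deque(maxlen=RECENT_WINDOW)

def observe(value: float) -> bool:
    """Feed one metric sample; return True if normal variation has collapsed."""
    baseline.append(value)
    recent.append(value)
    if len(baseline) < BASELINE_WINDOW:
        return False                      # still learning normal behavior
    baseline_std = statistics.pstdev(baseline)
    if baseline_std == 0:
        return False                      # metric is flat by nature
    recent_std = statistics.pstdev(recent)
    return recent_std < MIN_VARIATION_RATIO * baseline_std
```

Fed a per-second latency or heart-rate-style series, a detector like this stays silent when the metric spikes but fires when the signal goes unusually quiet.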

Mike's conclusion sums up much of the state of the IT industry concerning machine learning:

At most enterprises, operations have not taken the next step. Operations staff doesn't have the resources (neither computational nor human) to apply machine intelligence to our problems. We'd have to capture all the data coming off our servers for extended periods, not just the server logs that we capture now, but every kind of data we can collect: network data, environmental data, I/O subsystem data, you name it.

As someone who works for a company that applies a form of machine learning (Behavior Learning for predictive analytics) to IT operations and application performance management, I read this with great interest. I didn't necessarily disagree with his conclusion but tried to pull apart the reasoning behind why more companies aren't applying algorithms to their IT data to look for problems.

There are at least three requirements for companies that want to move ahead in this area:

1. Establish the maturity of your monitoring infrastructure. This is the most fundamental point. If you want to apply machine intelligence to IT operations, you first need to add instrumentation and monitoring. Monitoring products and approaches abound, but you have to gather the data before you can analyze it.

2. Coordinate multiple enterprise silos. Modern IT applications are increasingly complex and may cross multiple enterprise silos such as server virtualization, networking, databases, application development, and middleware. Enterprises must be willing to coordinate among these groups both in gathering monitoring data and in performing cross-functional troubleshooting when there are performance or uptime issues.

3. Incorporate business activity monitoring (BAM). Business activity data provides the "vital signs" of a business. Examples of retail business activity data include the number of units sold, total gross sales, and total net sales for a time period. Knowing the true business impact of an application performance problem requires correlating that business data with the incident, as sketched below. If an outage lasted 20 minutes, how many fewer units were sold? What was the reduction in gross and net sales?
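As a concrete illustration of the last point, here is a hypothetical Python sketch that ties an outage window back to units sold and gross/net sales by comparing it to the same window on prior days. The per-minute record format ("units_sold", "gross", "net") and the compare-to-prior-days baseline are assumptions for illustration only.

```python
# Hypothetical sketch: estimate the business impact of an outage by
# comparing the outage window to the same window on previous days.
from datetime import datetime, timedelta

def window_totals(sales_by_minute, start, minutes):
    """Sum units, gross, and net sales over a window of per-minute records."""
    totals = {"units_sold": 0, "gross": 0.0, "net": 0.0}
    for m in range(minutes):
        record = sales_by_minute.get(start + timedelta(minutes=m), {})
        for key in totals:
            totals[key] += record.get(key, 0)
    return totals

def outage_impact(sales_by_minute, outage_start, outage_minutes, baseline_days=7):
    """Estimate lost sales: same-window average over prior days minus actuals."""
    actual = window_totals(sales_by_minute, outage_start, outage_minutes)
    expected = {key: 0.0 for key in actual}
    for day in range(1, baseline_days + 1):
        past_start = outage_start - timedelta(days=day)
        past = window_totals(sales_by_minute, past_start, outage_minutes)
        for key in expected:
            expected[key] += past[key] / baseline_days
    return {key: expected[key] - actual[key] for key in actual}

# Example: a 20-minute outage starting at 2:00 p.m.
# impact = outage_impact(sales_by_minute, datetime(2012, 3, 1, 14, 0), 20)
# impact["units_sold"] -> estimated units not sold during the outage
```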

An organization that fulfills these requirements can successfully apply analytics and achieve real benefits in IT operations. Gartner has established the ITScore Maturity Model for gauging an organization's sophistication in availability and performance monitoring. Here is the description for Level 5, the top tier:

Behavior Learning engines, embedded knowledge, advanced correlation, trend analysis, pattern matching, and integrated IT and business data from sources such as BAM provide IT operations with the ability to dynamically manage the IT infrastructure in line with business policy.

Applying machine learning to IT operations isn't easy. Most enterprises don't do it because it requires overcoming organizational inertia and gathering data from multiple groups scattered throughout the enterprise. The organizations willing to do this, however, will see tangible business benefits. Just as a hospital can algorithmically detect the failing health of a premature infant, an enterprise willing to use machine learning will see clearly how subtle abnormalities in IT operations can impact revenue.

More Stories By Richard Park

Richard Park is Director of Product Management at Netuitive. He currently leads Netuitive's efforts to integrate with application performance and cloud monitoring solutions. He has nearly 20 years of experience in network security, database programming, and systems engineering. Some past jobs include product management at Sourcefire and Computer Associates, network engineering and security at Booz Allen Hamilton, and systems engineering at UUNET Technologies (now part of Verizon). Richard has an MS in Computer Science from Johns Hopkins, an MBA from Harvard Business School, and a BA in Social Studies from Harvard University.
