It’s All About the Data: #MachineLearning | @CloudExpo #IoT #ML #BigData

The goal of machine learning sounds simple: provide systems with the ability to learn based on the information provided to them

Big Data. Analytics. Internet of Things. Cloud. In the last few years, you cannot have a discussion around technology without those terms entering the conversation. They have been major technology disruptors impacting all aspects of the business. Change seems to occur at breakneck speed and shows no sign of slowing. Today, it appears the one constant in technology is change. Constant change requires constant innovation, which in turn introduces more new technologies. One of the technologies now entering the conversation is machine learning. Gartner identified machine learning as one of the top 10 technology trends for 2016. It is definitely a hot topic.

Everything old is new again
What I find fascinating about machine learning is that its basic tenets harken back to the '70s and '80s, in the early years of artificial intelligence research. The work at that time was constrained by compute capacity and the amount of data available. Neither of those constraints holds any longer, and that is the key that has enabled machine learning to leap forward in recent years. Compute cycles and data are available at levels unimagined just decades ago.

The goal of machine learning sounds simple: provide systems with the ability to learn based on the information provided to them. Simple as it sounds, this runs counter to classic software engineering and has its challenges. Most software development we are familiar with ‘hard codes' the system's behavior based on planned and anticipated user and data interactions - the standard ‘if-then-else' model.
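To make the contrast concrete, here is a minimal sketch of that classic style, using a hypothetical spam filter as the example (the rules and phrases are invented for illustration). Every behavior is anticipated and fixed at the time the code is written:

```python
# A hard-coded 'if-then-else' filter: every behavior is planned in advance.
def is_spam(message: str) -> bool:
    text = message.lower()
    if "free money" in text:
        return True
    elif "act now" in text:
        return True
    else:
        return False

print(is_spam("Act now to claim your free money!"))  # True - but only for rules we wrote
```

No matter how much mail this function sees, it will never handle a case its authors did not anticipate.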

The algorithms required for artificial intelligence and machine learning are much more complex. They need to allow the system to develop its own analytical models based on inputs, and those models are constantly changing as new information is provided. Behavior is then determined by the data and those models. As you can tell from the description, this results in very non-deterministic behavior: the system analyzes, interprets, and reacts based on the information provided, then modifies that behavior as more information and feedback arrive. The analysis and behavior are constantly changing and being refined over time. Imagine developing the test suite for that system! (A topic for future discussion.)
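As a rough illustration of the difference, here is a minimal sketch of a model that refines itself as new batches of data and feedback arrive, using scikit-learn's SGDClassifier and its partial_fit method for incremental training (the feature values and labels below are made up):

```python
# Behavior that is learned, not hard-coded: the model adjusts with each batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# First batch of observations: two features per sample, binary labels.
X_first = np.array([[0.2, 1.1], [1.4, 0.3], [0.1, 0.9]])
y_first = np.array([0, 1, 0])
model.partial_fit(X_first, y_first, classes=np.array([0, 1]))

# New information and feedback arrive later; the model adjusts rather than
# being rewritten.
X_more = np.array([[1.2, 0.4], [0.3, 1.0]])
y_more = np.array([1, 0])
model.partial_fit(X_more, y_more)

print(model.predict(np.array([[1.3, 0.2]])))  # reflects everything seen so far
```

The same input can yield different predictions at different points in the system's life, which is exactly the non-deterministic behavior described above.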

You are already reaping the benefits of machine learning
Do you have a Netflix account? Or Amazon? Both Netflix and Amazon provide a ‘recommended for you' list every time you log in. Both companies have very complex, proprietary algorithms analyzing the huge repository of information about you and all of their members' transactions. Based on that information, they develop models of your expected behavior and present a list of recommendations to you. How you react to those recommendations is also fed back into the algorithms, constantly tweaking and adjusting your behavior model.
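The real Netflix and Amazon algorithms are proprietary and far more sophisticated, but a toy collaborative-filtering sketch (with invented ratings) shows the basic shape: score a user's unseen items by the tastes of similar users.

```python
import numpy as np

# Rows = users, columns = items; values = ratings, 0 = not yet seen.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
])

def recommend(user: int) -> int:
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = ratings @ ratings[user] / norms
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # weight others' ratings by similarity
    scores[ratings[user] > 0] = -np.inf   # never re-recommend items already seen
    return int(np.argmax(scores))         # best unseen item for this user

print(recommend(0))  # -> 2, the one item user 0 hasn't seen, scored by similar users
```

Feeding each user's reaction to a recommendation back in as a new rating is what constantly tweaks the model over time.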

Or how about your smartphone? Think for a moment about the complexity of the simple statement, "Siri, what is the weather forecast for today?" First, the software needs to understand your voice, your accent, and your speech patterns in order to determine the actual words being spoken. If it's not sure, the software asks for clarification, and it learns from that clarification. Each time you use it, your phone gets better at understanding what you are saying. Once it understands the words, it has to process the natural language into something meaningful to the system. This again requires complex algorithms analyzing the information, creating a model, and executing on its interpretation. As with parsing the words, if it's not sure, the software will prompt for clarification. That clarification is fed back into the system that models your way of speaking and the context of the language you use.
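Here is a deliberately oversimplified sketch of that ask-clarify-learn loop (the intents and keywords are hypothetical placeholders; real speech recognition and natural-language pipelines are vastly more complex):

```python
# Each clarification becomes new training signal: the utterance's words are
# added to the chosen intent, so next time no clarification is needed.
intent_keywords = {
    "weather": {"weather", "forecast", "rain"},
    "alarm": {"alarm", "wake", "timer"},
}

def parse_intent(utterance):
    words = set(utterance.lower().split())
    matches = [name for name, kws in intent_keywords.items() if words & kws]
    return matches[0] if len(matches) == 1 else None   # ambiguous or unknown

def handle(utterance, clarified_intent=None):
    intent = parse_intent(utterance)
    if intent is None and clarified_intent is not None:
        # Learn from the clarification the user supplied.
        intent_keywords[clarified_intent].update(utterance.lower().split())
        intent = clarified_intent
    return intent or "please clarify"

print(handle("what's it like outside today"))                              # please clarify
print(handle("what's it like outside today", clarified_intent="weather"))  # weather
print(handle("like outside now?"))   # weather - learned from the clarification
```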

It's all about the data
In a recent TechCrunch article, ‘How startups can compete with enterprises in artificial intelligence and machine learning,' John Melas-Kyriazi refers to data as the ‘fuel we feed into training machine learning models that can create powerful network effects at scale.' I find that a very apt analogy. The complex algorithms and models are the engine of machine learning, but without fuel - the data - the engine won't work very well, if at all. A colleague of mine, John Williams (Chief Strategy Officer at Collaborative Consulting), has for years been fond of saying, "It's all about the data." That could not be more true than in the world of machine learning.

Given the importance of the data to the success of any machine learning implementation, there are some key considerations to take into account:

  • Data Quality - In the world of data, this has always been an important consideration. Data cleansing and scrubbing are already standard practice in many organizations, and they have become critical for machine learning implementations. Putting dirty fuel into even the best engine will bring it to a grinding halt. (A short data-preparation sketch follows this list.)
  • Data Volume - Big Data is tailor-made for machine learning. The more information the algorithms and subsequent models have to work with, the better the results. The key word here is learning. We as individuals learn more as more information is provided to us. This concept is directly applicable in the machine learning world.
  • Data Timeliness - Besides volume, new and timely data is also a consideration. If the machine learning system is trained on a large volume of completely outdated data, the resulting models will not be very useful.
  • Data Pedigree - Where did the data come from? Is it a valid source? The pedigree is less important when using internal systems, as the source is well known, but many machine learning systems will get their data from public sources, or potentially from the many devices in the world of the Internet of Things. Crowd-sourced data (for example, from Waze, a GPS mobile app) requires extra effort to ensure you can trust the information being consumed. Imagine a new kind of cyber-attack: feeding your machine learning system bad data to influence the results. Remember Microsoft's problem with its AI chatbot Tay learning to be a racist?
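As a rough sketch of how the quality, timeliness, and pedigree checks above might look in practice (the column names, cutoff date, and trusted-source list are all invented for illustration; assumes pandas is installed):

```python
import pandas as pd

TRUSTED_SOURCES = {"internal_crm", "verified_partner"}  # hypothetical pedigree whitelist

raw = pd.DataFrame({
    "value":     [10.5, None, 12.1, 11.0, 9.7],
    "timestamp": pd.to_datetime(["2016-06-01", "2016-06-02", "2014-01-15",
                                 "2016-06-03", "2016-06-04"]),
    "source":    ["internal_crm", "internal_crm", "verified_partner",
                  "unknown_feed", "verified_partner"],
})

cutoff = pd.Timestamp("2016-01-01")
clean = (
    raw.dropna(subset=["value"])                           # quality: drop incomplete rows
       .loc[lambda df: df["timestamp"] >= cutoff]          # timeliness: drop stale records
       .loc[lambda df: df["source"].isin(TRUSTED_SOURCES)] # pedigree: trusted sources only
       .drop_duplicates()                                  # quality: no duplicate fuel
)
print(clean)  # only fresh, complete rows from trusted sources remain
```

Volume, the remaining consideration, is less about filtering and more about making sure enough of this clean fuel reaches the models in the first place.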

No technology negates the need for good design and planning
There is no doubt machine learning technology has amazing potential to impact businesses across the spectrum, from healthcare applications such as diagnosing Alzheimer's disease to self-driving cars that were once the realm of science fiction. No technology negates the need for good design and planning, and machine learning is no different. As technologists, it's our responsibility to ensure the proper efforts have been made to supply machine learning implementations with the best fuel possible. Understanding the quality, volume, timeliness, and pedigree needs of these systems can help us navigate this new world of machine learning, leading us to successful execution and, ultimately, providing value back to the business.

More Stories By Ed Featherston

Ed Featherston is VP, Principal Architect at Cloud Technology Partners. He brings 35 years of technology experience in designing, building, and implementing large complex solutions. He has significant expertise in systems integration, Internet/intranet, and cloud technologies. He has delivered projects in various industries, including financial services, pharmacy, government and retail.
