Big Data Highlights from McKinsey: Part 2 - Production, Supply, and Logistics

Big Data will have a large impact on manufacturing value chains

For the second post on the McKinsey report on Big Data, we take a look at the impact of Big Data on production, supply, and logistics. When we discuss Big Data, there's often an implicit assumption that it relates entirely to IT functions. In reality, Big Data will have a large impact on manufacturing value chains.

While the manufacturing sector has historically been a driver of GDP and employment for many developed economies, globalization has made it a global activity with extended supply chains made possible by advances in information and communications technology as well as reduced transportation and market-entry costs. This has resulted in incredibly complex and fragmented webs of globe-spanning supply chains, and increasing country specialization in specific stages of the production process. Manufacturers, in turn, have assembled global production and supply chain networks for cost advantage.

Where does Big Data come in? McKinsey argues that manufacturers will need to leverage large datasets in order to continue achieving substantial productivity gains. The "raw material" is also very much available:

Manufacturing stores more data than any other sector—close to 2 exabytes of new data stored in 2010. This sector generates data from a multitude of sources, from instrumented production machinery (process control), to supply chain management systems, to systems that monitor the performance of products that have already been sold (e.g., during a single cross-country flight, a Boeing 737 generates 240 terabytes of data).  And the amount of data generated will continue to grow exponentially. The number of RFID tags sold globally is projected to rise from 12 million in 2011 to 209 billion in 2021. IT systems installed along the value chain to monitor the extended enterprise are creating additional stores of increasingly complex data, which currently tends to reside only in the IT system where it is generated. Manufacturers will also begin to combine data from different systems including, for example, computer-aided design, computer-aided engineering, computer-aided manufacturing, collaborative product development management, and digital manufacturing, and across organizational boundaries in, for instance, end-to-end supply chain data.
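It is worth pausing on that RFID projection: growing from 12 million tags in 2011 to 209 billion in 2021 implies a compound annual growth rate of roughly 165 percent. A quick sketch of the arithmetic (the figures are McKinsey's; the calculation is ours):

```python
# Compound annual growth rate implied by McKinsey's RFID projection:
# 12 million tags (2011) -> 209 billion tags (2021), a 10-year span.
start = 12e6      # tags sold in 2011
end = 209e9       # projected tags sold in 2021
years = 10

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 165% per year
```

Growth at that rate means the tag market multiplies by more than 2.6x every year, which gives a sense of why the report calls the data growth exponential.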

McKinsey forecasts the biggest applications of Big Data will be in Research and Development, Supply Chain, and Production functions:

  • The collaborative use of data can enhance product lifecycle management, drawing on customer data to achieve better design-to-value and to enable open innovation through better sorting of high-value ideas.
  • Manufacturers can use data to improve demand forecasting and supply planning, but can also use big data to integrate reams of data from various sources in near-real time to adjust production.
  • Big Data can also revolutionize the production process: "digital factory" simulations can uncover optimal production layouts, and sensors sprinkled throughout the supply chain and production process can reduce waste and cut operations and maintenance costs.
  • Marketing and sales/after-sales support can also use sensor data and analytics to make customer service more reliable. As the report notes, "a repair technician can be dispatched before the customer even realizes that a component is likely to fail."
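That last point, dispatching a technician before a component fails, comes down to watching sensor streams for early signs of degradation. A minimal sketch of the idea, using a simple rolling-average threshold (the threshold value, window size, and function names here are hypothetical, purely for illustration, and real systems would use far more sophisticated models):

```python
# Illustrative sketch: flag a degrading component from sensor readings
# using a rolling-average threshold. VIBRATION_LIMIT, WINDOW, and
# needs_dispatch are hypothetical names, not from the McKinsey report.
from statistics import mean

VIBRATION_LIMIT = 1.5  # hypothetical alert threshold (arbitrary units)
WINDOW = 5             # number of recent readings to average

def needs_dispatch(readings, window=WINDOW, limit=VIBRATION_LIMIT):
    """Return True once the rolling average of the most recent
    readings exceeds the alert threshold."""
    if len(readings) < window:
        return False
    return mean(readings[-window:]) > limit

# A gradually degrading component: vibration drifts upward over time.
stream = [1.0, 1.1, 1.0, 1.2, 1.3, 1.5, 1.7, 1.9, 2.1]
alerts = [needs_dispatch(stream[:i + 1]) for i in range(len(stream))]
print(alerts)  # alert fires only after the rolling average crosses 1.5
```

The point is that the alert fires while the machine is still running, i.e., before outright failure, which is what makes proactive dispatch possible.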

However, the report goes on to note that Big Data may pose some cultural challenges for typical manufacturing organizations. One telling anecdote from the report describes oil refineries that rely on managers armed with spreadsheets rather than on algorithms fed data collected directly from machinery. In order to reap the benefits of Big Data, manufacturers will have to develop capabilities and human resources up to the task of making the best use of the vast amount of information potentially available to them.


More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder of Crucial Point and publisher of CTOvision.com.

