An IT Supply-Chain Model, Once More with Feeling

The idea of cloud computing making IT management more similar to supply chain management has been mentioned before; it’s time to take a closer look.

Let’s start by looking at the supply chain in its simplest imaginable form, even simpler than the supply chain at a manufacturing company. Think of a transport company – like Federal Express, DHL or TNT – that transports packages from location A to location B. Within that supply chain there are processes, people and resources needed to get the package from A to B.

The reality today is that many of these distribution companies do not actually come to your door themselves – at least not in every region or town; they use subcontractors and local partners at various points. It would be far too expensive for the delivery firm to have its own trucks and employ its own drivers in every remote country, city and village around the world. (Bear with me, we will get to cloud computing in a minute.) This way, they can still offer you end-to-end service and keep you up to date on minute-by-minute parcel movements around the globe. They provide customers with tracking numbers – or “meta”-information (01001011). They know exactly which trucks are where, and with which packages; as a result they can “outsource” almost every logistical process (the outside arrows in our animated diagram).


figure 1: animated IT supply chain

But IT does not transport packages from A to B (at least I hope that is not what you do all day!). IT meets the demands of the business by providing a steady supply of services. IT does not have trucks or warehouses, but departments such as development, operations and support that work within its supply chain. What an IT supply chain essentially does is take IT resources – like applications, infrastructure and people – and use them to create and deliver services.

Some IT shops have decided not merely to react to demand, but to actively help the users – working with the business – figure out what they should want or need (shown by the arrow marked “innovation” in the diagram). A more recent trend is the introduction of DevOps, a way to closely connect and integrate the demand side with the supply side. This is often done in conjunction with the introduction of agile development processes.

Users typically care about speed, cost and reliability, not about whether IT used its own trucks or someone else’s. Speed - like in many supply chains - is one of the main criteria. Responding faster to customer or user demands reduces cycle time and time-to-market and makes organizations more agile and more competitive. The use of cloud computing in all its incarnations, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), can play an important role in further increasing this speed.

With IaaS, the IT department can significantly speed up the procurement, installation and provisioning of required hardware. Because of its OPEX model, no capital expenditure requests need to be raised, no boxes of hardware need to be unpacked, and no servers need to be installed. Just as in the distribution example above, the organization can rapidly respond to heavily fluctuating demand, extreme growth or demand for new services by using external capacity if and when needed.
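
As a back-of-the-napkin illustration of that elasticity, here is a minimal sketch of reacting to a demand spike with external capacity; the IaaSClient class and its methods are invented for this example, not any particular provider’s API:

```python
# Hypothetical sketch: scaling out on external IaaS capacity on demand.
# IaaSClient and its methods are invented for illustration, not a real
# provider API.

class IaaSClient:
    def __init__(self) -> None:
        self._next_id = 0

    def provision_server(self, cpu: int, ram_gb: int) -> str:
        """Pretend to provision a virtual server; returns its id."""
        self._next_id += 1
        return f"vm-{self._next_id} ({cpu} cpu, {ram_gb} GB)"

    def decommission_server(self, server_id: str) -> None:
        """Pretend to release the server (and stop paying for it)."""
        print(f"released {server_id}")

def handle_demand_spike(client: IaaSClient, extra_servers: int) -> list[str]:
    # No purchase orders, no unpacking boxes: capacity appears in minutes,
    # as operating expense rather than capital expenditure.
    return [client.provision_server(cpu=4, ram_gb=16)
            for _ in range(extra_servers)]

servers = handle_demand_spike(IaaSClient(), extra_servers=5)
print(f"running on {len(servers)} extra servers")
```

The point of the sketch is the shape of the transaction: capacity is a function call, not a procurement project.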

With SaaS the route from determining demand to getting a service up and running is even shorter, because the whole thing is already a service the minute we start looking at it. There is no buying, installing, or configuring of the software; it all runs already at the provider’s site. If you are implementing a solution for ten thousand users across hundreds of departments, the time you save by not having to install a CD is not that significant. Yet large SaaS implementations go live much quicker than traditional on-premises implementations, in many cases for psychological or even emotional reasons: once the solution is already running, users are much more willing to start using it on the spot. Many SaaS providers reinforce this further by specifically designing their software to enable simple “quick starts.”

In those cases where there is no ready-made solution available, PaaS (Platform as a Service) can deliver significant time savings. As soon as the developer has defined the solution, it can be used in production. The PaaS provider – through its PaaS platform – takes care of all the next steps, such as provisioning the servers, loading the databases, granting the users access, etc. Comparing PaaS with IaaS, the big difference is that with PaaS, the provider continues to manage the infrastructure, including tuning, scaling, securing, and so forth. IT operations does not have to worry about allocating capacity, about moving it from test to production or about all the other things operations normally takes care of. And because the PaaS provider has already done this many, many times, it can be done immediately and automatically.
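
To make that division of labor concrete, here is a hedged sketch of what “the platform takes care of the next steps” could look like from the developer’s chair; every function name below is hypothetical, not a real platform SDK:

```python
# Hypothetical sketch of what a PaaS does behind a single "deploy" action.
# All names are invented for illustration, not a real platform SDK.

def provision_servers(app: str) -> None:
    print(f"platform: capacity allocated for {app}")

def load_databases(app: str) -> None:
    print(f"platform: database initialized for {app}")

def grant_access(user: str) -> None:
    print(f"platform: access granted to {user}")

def deploy(app: str, users: list[str]) -> None:
    # The developer's only step; everything below is the provider's job,
    # already automated because it has been done many, many times before.
    provision_servers(app)
    load_databases(app)
    for user in users:
        grant_access(user)

deploy("expense-app", users=["alice", "bob"])
```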

Sound too good to be true?
Well, actually it might be, because – although the above can be faster – it can also mean IT loses control and can no longer assure the other two aspects that users care about: reliability and cost. So, how can these concerns be addressed? In the same way as in the distribution example: by making sure that at all times, IT has all the information (010011001) about “where everything is,” or better, “where everything is running.”

This management system – call it a “cloud-connected management suite” if you like – needs to not only give insight into where things are running and how well they are running, but also allow you to orchestrate the process, move workloads from one provider to another, and help you decide whether to take SaaS or PaaS applications back in house (or move them to a more trusted provider). Ideally it will allow you to dynamically optimize your environment based on the criteria – such as speed, cost and reliability – and constraints – such as compliance, capacity and contracts – that are applicable at that moment in time to your specific business.
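
As a rough sketch of the placement decision such a suite could automate, consider the following; the providers, figures and weights are all invented for illustration:

```python
# Hypothetical sketch of a placement decision: score each provider on the
# criteria (speed, cost, reliability), filter on the constraints (here,
# compliance). All data and weights are invented for illustration.

providers = [
    {"name": "provider-a", "cost": 0.10, "latency_ms": 40, "uptime": 0.999, "compliant": True},
    {"name": "provider-b", "cost": 0.07, "latency_ms": 90, "uptime": 0.995, "compliant": True},
    {"name": "in-house",   "cost": 0.15, "latency_ms": 20, "uptime": 0.990, "compliant": True},
]

def score(p: dict) -> float:
    # Higher uptime is better; lower cost and latency are better. The
    # weights would come from the business priorities of the moment.
    return (1.0 * p["uptime"]) - (2.0 * p["cost"]) - (0.001 * p["latency_ms"])

eligible = [p for p in providers if p["compliant"]]   # constraints
best = max(eligible, key=score)                       # criteria
print(f"run workload at {best['name']}")
```

Re-running the same decision as prices, latencies or compliance rules change is exactly the “dynamic optimization” the suite would perform continuously.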

Clearly this dynamic approach is a long way from the more traditional “If it ain’t broke, don’t change it,” but IT will have to get used to it. Or – even better – embrace this new way of doing things, just like planners at industrial companies did. Today’s global manufacturing would not be as efficient, and such a driver of the world’s prosperity, if manufacturers had not started to optimize their global processes a long time ago.

There are, however, a number of prerequisites to be able to implement such a supply-chain approach in IT. First, we need to achieve fluidity or movability of IT: the ability to take fairly sizable chunks of IT and move them somewhere else with relative ease. On the infrastructure side, virtualization is a major enabler of this. Virtualization containerizes workloads and decouples them from the underlying hardware, thus acting as a much-needed “lubricant.” But to enable true movability, more is needed.


figure 1: animated IT supply chain (repeat)


Many of today’s applications are as intertwined as the proverbial plate of spaghetti. This makes the average datacenter a house of cards, where removing one thing may cause everything else to come crashing down. On the functional side, the use of Service Oriented Architectures can help, but we will also need to apply this thinking on the operational side. A virtual machine model is in many cases too low-level for managing the movement of complete services; management needs to take place at a higher level of abstraction, ideally based on a model of the service.
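
A minimal sketch of what managing at the level of a service model, rather than individual virtual machines, might look like; the classes are hypothetical:

```python
# Hypothetical sketch: a service model as the unit of management, with the
# virtual machines as details inside it. Classes invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    vm_image: str   # the VM is a detail captured inside the model...

@dataclass
class ServiceModel:
    name: str
    components: list[Component] = field(default_factory=list)

    def move_to(self, provider: str) -> None:
        # ...so the whole service, with its dependencies made explicit,
        # is what gets moved, not one VM pulled out of the spaghetti.
        for c in self.components:
            print(f"redeploying {c.name} ({c.vm_image}) at {provider}")

crm = ServiceModel("crm", [Component("web", "web-v3"), Component("db", "pg-v12")])
crm.move_to("provider-b")
```
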
The second hurdle is security. I don’t mean that the security at external providers may be insufficient for the needs of our organization. In fact, the security measures implemented at external providers are often much more advanced and reliable than those inside most enterprises (fear of insufficient security is consistently listed as a top concern by organizations before they use cloud computing, but it rapidly moves down the list of concerns once organizations have hands-on experience with it). The real security inhibitor for the dynamic IT supply chain is that most organizations are not yet able to dynamically grant or block access for a constantly changing set of users, across a fast-moving and changing portfolio of applications running at a varying array of providers. This requires us to rethink how security is approached, treating it more along the lines of “Security as a Service” – an enabler instead of an inhibitor.
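
As an illustrative sketch, assuming a central policy service (hard-coded here for brevity), “Security as a Service” could reduce access decisions to a single, constantly updatable check that works regardless of which provider currently runs the application:

```python
# Hypothetical sketch of "Security as a Service": access is evaluated
# against one central policy instead of per-application, per-provider
# account administration. The policy data is invented for illustration;
# in reality it would be queried from a policy service, not hard-coded.

policy = {
    # (role, application) pairs that are currently allowed.
    ("engineer", "ci-pipeline"),
    ("finance", "erp"),
}

def is_allowed(role: str, application: str) -> bool:
    # The same check applies wherever the application happens to run,
    # so users and applications can change without per-silo rework.
    return (role, application) in policy

print(is_allowed("engineer", "erp"))   # False
print(is_allowed("finance", "erp"))    # True
```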

The third consideration is that any optimization will have to work across the whole supply chain, meaning across all of the different departments and silos that the average large IT organization consists of. For example, it has to look at the total cost of a service, including running, supporting, fixing, upgrading, assuring, and securing it. Likewise, it also has to optimize the speed and the reliability - or at least give visibility into these - across the whole chain.
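
A toy example of such whole-chain costing, with invented figures:

```python
# Hypothetical sketch of whole-chain costing: the total cost of one service
# is the sum of line items owned by different departments and silos.
# All figures are invented for illustration.

service_costs = {
    "running":    1200.0,   # operations
    "supporting":  300.0,   # helpdesk
    "fixing":      150.0,   # incident management
    "upgrading":   200.0,   # development
    "assuring":    100.0,   # monitoring
    "securing":    250.0,   # security
}

# Optimizing any single line item in isolation risks sub-optimization;
# the decision metric is the total across the whole chain.
total = sum(service_costs.values())
print(f"total monthly cost of the service: {total:.2f}")
```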

To prevent sub-optimization (the arch-enemy of real optimization), one needs to understand and connect to much of the existing information and many of the systems in these departments – systems in diverse areas such as helpdesk, project management, security, performance, costing, demand management and data management. IT supply-chain optimization is in its infancy and many start-ups are gearing up to offer some form of cloud management, but it should be clear that offering real optimization requires quite a broad and integrated view of IT.

The end result of adopting a supply-chain approach is that IT becomes more an orchestrator of a supply chain – a broker of services – than a traditional supplier of services. Demand and supply are two sides of the same coin that recur (almost recursively) throughout the chain. Once we close the loop, the supply chain becomes a cycle that constantly improves and becomes more efficient and agile in delivering on the promises the organization makes to its customers – just like an industrial supply chain, but also very much in the spirit of Deming and the original ideas around Service Management.

This post originally appeared at ITSMportal.com.


More Stories By Gregor Petri

Gregor Petri is a regular expert and keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing.” He was also a columnist at ITSM Portal, a contributing author to the Dutch book “Over Cloud Computing,” and a member of the Computable expert panel; his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
