BPM in the Cloud: Disruptive Technology

Why the middleware-in-the-Cloud approach to PaaS is doomed to fail

Sometimes, it feels like ZapThink is the lone voice of reason in the wilderness. We fought to cast SOA as a business-driven architectural approach in the face of massive misinformation from vendors hell-bent on turning SOA into an excuse to buy middleware. And while many organizations are successfully implementing SOA, perhaps with our help, in large part we lost that battle.

Today we’re fighting the corresponding battle over the Cloud. The general battle lines are similar to the SOA days—ZapThink espousing business-driven architecture, while the vendors do their best to spin Cloud as an excuse to buy more gear. But just as the players in the two World Wars were largely the same even though the tactics were quite different, so too with SOA and the Cloud.

The Original BPM Battle Lines
Today, a battle is brewing that promises to reopen some old wounds from the SOA days, this time fought over the new territory of the Cloud. That battle is over Business Process Management (BPM). The fundamental idea behind BPM software is that you need some kind of engine to coordinate interactions among actors and disparate applications into business processes. Those actors may be human, or they may be other applications. To specify the process, you must create some kind of process representation that the process engine can run.

Vendors loved BPM because process engines were a natural add-on to their middleware stacks. Coordinating multiple applications meant creating multiple integrations, and for that you need middleware. Lots of it. And in spite of paying lip service to cross-platform Service compositions that would implement vendor-independent processes, in large part each vendor rolled out a proprietary toolset.

ZapThink, of course, saw the world of SOA-enabled BPM quite differently. In our view, the Service-oriented way of looking at BPM was to free it from the engines and focus on Services: composing them and consuming them. But there was a catch: Services are inherently stateless. The challenge with the Service-oriented approach to BPM was how to maintain state for each process instance in an inherently stateless environment.

We tackled the problem of state head on back in 2006, when we discussed how to maintain state in compositions of stateless Services that implement processes. Then in 2008, we discussed how you could free SOA-based BPM from middleware by maintaining state in message interactions between Services and their consumers. If you got this right, you wouldn’t need a heavyweight process engine to do it for you.
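
To make that concrete, here is a minimal sketch, in Python, of what “state in the messages” can look like. The two-step approval process, field names, and function are all invented for illustration; this is a pattern sketch, not anyone’s actual implementation. The Service keeps nothing in memory between calls, because each reply carries the full process-instance state and the consumer sends it back on the next request.

```python
# Sketch only: a hypothetical stateless "approval" Service whose messages
# carry the process-instance state, so no engine has to remember it.
import json


def approval_service(message: str) -> str:
    """Advance one step of an invented approval process.

    The Service holds no state between calls: everything it needs
    arrives in the message, and everything it decides goes back out.
    """
    instance = json.loads(message)
    step = instance.get("step", "submitted")

    # The "process definition" is just a mapping from the current step
    # to the next one; a real definition could be far richer.
    next_step = {"submitted": "reviewed", "reviewed": "approved"}.get(step)
    if next_step is None:
        raise ValueError(f"process already complete or unknown step: {step}")

    instance["step"] = next_step
    return json.dumps(instance)  # process-instance state travels in the reply


# The consumer threads the state through successive calls; no engine needed.
msg = json.dumps({"process_id": "po-123", "step": "submitted", "amount": 4200})
msg = approval_service(msg)  # -> step: reviewed
msg = approval_service(msg)  # -> step: approved
print(msg)
```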

The ESB vendors largely ignored us (Fiorano Software being a notable exception). Sure, WS-Addressing might have helped, but the vendors chose not to implement this standard consistently, instead relying on threads or other object instances to maintain process instance state. After all, once you fall for the heavyweight, big engine approach to BPM, you’re essentially locked into the platform you have chosen. And vendors love nothing more than customer lock-in.

The New Battle Line in the Cloud
While we have to award victory to the vendors in the SOA-based BPM war, the move to the Cloud offers an entirely new battleground with completely new rules. Today, of course, the vendors (and it seems, everyone else) want to put their software in the Cloud. So it’s a natural consequence that the BPM vendors would seek to move their BPM engines into the Cloud as well, perhaps as part of a PaaS provider strategy. Clearly such a move would be a good bet from the business perspective, as it’s likely that many BPM customers would find value in a Cloud-based offering.

Here’s where the story gets interesting. In order to achieve the elasticity benefit of the Cloud for a distributed application, it’s essential for the application tier to be stateless. The Cloud may need to spawn additional instances to handle the load, and any particular instance may crash. But if the application is to remain highly available and partition tolerant, such a crash mustn’t hose the process that Cloud instance is supporting.

As a result, there is simply no way a traditional BPM engine can run properly in the Cloud. After all, BPM engines’ raison d’être is to maintain process state, but you can’t do that on a Cloud instance without sacrificing elasticity! In other words, all the work the big vendors put into building their SOA-platform-centric BPM engines must now be chucked out the door. The Cloud has changed the rules of BPM.

Hypermedia-Oriented Architecture, Anyone?
ZapThink has the answer. The same answer we had in 2006, only now we must recast the answer for the Cloud: maintain process state information in the messages.

The messages we’re talking about are the interactions between resources on the server and the clients that the process actors are using, what we call representations in the REST context. In other words, it’s essential to transfer application state in representations to the client. This is the reason why REST is “representational state transfer,” by the way.
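
As a hedged illustration of what that transfer looks like in practice (the expense process, field names, and function below are all invented), the server derives each representation entirely from the persistent resource and hands the client everything it needs to continue, so no server-side session ever exists.

```python
# Hypothetical sketch: the server answers each request by building a
# representation that carries the full application state back to the client.
import json


def get_process_representation(resource_record: dict) -> str:
    """Build the representation the server returns to the client.

    The representation is derived entirely from the persistent resource
    record, so the server never creates or consults a per-client session.
    """
    representation = {
        "process_id": resource_record["id"],
        "step": resource_record["step"],
        "data": resource_record["data"],
        # The client now holds all the state it needs to continue the
        # process, which is the "state transfer" in REST.
    }
    return json.dumps(representation)


record = {
    "id": "expense-7781",
    "step": "awaiting_manager_approval",
    "data": {"submitted_by": "jdoe", "amount": 312.50},
}
print(get_process_representation(record))
```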

The vendors, however, are far from shaking in their boots, since ZapThink is once again the lone voice in the wilderness when we talk about REST—and this time, the vendors aren’t the ones responsible for the misinformation. Instead, it’s the community of developers who have miscast REST as a way to build uniform APIs to resources. But that’s not what REST is about. It’s about building hypermedia applications. In fact, REST isn’t about Resource-Oriented Architecture at all. You could say it’s about Hypermedia-Oriented Architecture.

At the risk of extending an already-tired cliché, let’s use the abbreviation HOA. In HOA, hypermedia are the engine of application state. (RESTafarians recognize this phrase as the dreaded HATEOAS REST constraint.) With HATEOAS, hyperlinks dynamically describe the contract between the client and the server in the form of a workflow at runtime. That’s right. No process engine needed! All you need are hyperlinks.
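
Here is a minimal sketch of hyperlinks acting as the process engine, with an invented approval process and made-up URIs: the links offered in each representation are computed from the current state, so the set of links the client sees at any moment is the workflow.

```python
# Which transitions are legal from which step: a trivial, invented process.
TRANSITIONS = {
    "submitted": ["review"],
    "reviewed": ["approve", "reject"],
    "approved": [],
    "rejected": [],
}


def links_for(process_id: str, step: str) -> list:
    """Compute the hypermedia controls the client may follow next."""
    return [
        {"rel": action, "href": f"/processes/{process_id}/{action}", "method": "POST"}
        for action in TRANSITIONS[step]
    ]


def representation(process_id: str, step: str) -> dict:
    """State plus links: everything the client needs, and no process engine."""
    return {
        "process_id": process_id,
        "step": step,
        "links": links_for(process_id, step),
    }


print(representation("po-123", "reviewed"))
# The client simply follows one of the offered links; once the step changes,
# the next representation offers a different set of links. The links *are*
# the workflow.
```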

The power of this idea is obvious once you think through it, since the World Wide Web itself is the prototypical example of such a runtime workflow. You can think of any sequence of clicking links and loading Web pages as a workflow, after all—where those pages may be served up by different resources on different servers anywhere in the world. No heavyweight, centralized process engine in sight.

The New BPM Battle Lines
The big vendors aren’t worried now, because they don’t understand the power of hypermedia. If HOA-based BPM takes off, however, then their whole middleware-in-the-Cloud approach to PaaS is doomed to fail. If they’re smart, they’d better dig their trenches now. After all, they have too much invested in the old way of doing things. HOA-based BPM is potentially a disruptive technology, and they’re facing the innovator’s dilemma.

On the other hand, there is a substantial opportunity for the innovators—new entrants to the BPM marketplace who figure out how to build a Cloud-friendly BPM engine. Think you have what it takes? Here are some pointers for architecting a partition-tolerant, RESTful BPM application.

  • Separate the resources that build process applications from the representations of those resources. Servers are responsible for generating self-descriptive process maps that contain all the context necessary for any actor to work through a process. After all, orchestrations can be resources as well. In other words, the persistence tier doesn’t host a process engine; it hosts a model-driven resource that generates stateless process applications to run on the elastic application tier.
  • The application tier acts as the client for such process representations, and as a server that supports the clients of the process. Keep the application tier stateless by serving application state metadata in representations to the clients of the process. In other words, the application tier processes interactions with clients statelessly. As a result, any application tier instance is interchangeable with any other. If one crashes, you can bootstrap a replacement without affecting processes in progress. This interchangeability is the secret to maintaining elasticity and fault tolerance.
  • Separate UI representations from application state representations. If a client has the state representation, it should be able to fetch the appropriate UI representation from any application tier instance. As a result, the state representations are portable from one client to another. You could begin a process on a laptop and transfer it to a mobile phone, for example, and continue where you left off.
  • Use a lightweight, distributed queuing mechanism to address client uptime issues. If a client (typically a browser) crashes in the middle of a process, you want to be able to relaunch the client and pick up where you left off. But if the client has the only copy of the application state, you have a problem. Instead, allow the client to fetch a cached copy from a queue (a minimal sketch follows this list).
  • For processes that require heavy interactions among multiple actors, follow a peer-to-peer model. Most processes that involve multiple actors call for infrequent interactions between those actors, for example, processes with approval steps. For such processes, support those interactions via the resource state in the persistence tier. However, when you require heavy interactions among actors (imagine a chat window, for example), enable the actors to share a single application instance that initiates a peer-to-peer interaction.
  • Maintain integrity via checksums (see the second sketch after this list). Conventional wisdom states that you don’t want to let the client have too much control, or a bad actor could do real damage. To prevent such mischief, ensure that any invalid request from the client triggers an error response on the application tier. As a result, the worst a hacker can do is screw up their own session. Serves them right!
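
A minimal sketch of the client-recovery idea in the queuing pointer above, with a plain in-memory dictionary standing in for a real lightweight distributed queue or cache (all names are invented): each time the application tier returns a new state representation, it also caches it under the process id, so a relaunched client can fetch the latest copy and resume.

```python
import json
from typing import Dict, Optional

# process_id -> latest state representation handed to the client
_recovery_cache: Dict[str, str] = {}


def publish_state(process_id: str, representation: str) -> None:
    """Cache the latest representation whenever the application tier emits one."""
    _recovery_cache[process_id] = representation


def recover_state(process_id: str) -> Optional[str]:
    """Let a relaunched client fetch its last-known state and resume."""
    return _recovery_cache.get(process_id)


# Normal operation: every response sent to the client is also published.
publish_state("expense-7781", json.dumps({"step": "awaiting_payment"}))

# The browser crashes; the relaunched client asks for its last-known state.
print(recover_state("expense-7781"))  # -> {"step": "awaiting_payment"}
```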
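
And one plausible way (among several) to implement the checksum pointer: have the application tier sign each outgoing state representation with a server-side secret and refuse any incoming representation that fails verification. The key handling and field names below are simplified for illustration.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # illustrative only; never hard-code keys


def sign(state: dict) -> dict:
    """Wrap a state representation with a checksum the client cannot forge."""
    payload = json.dumps(state, sort_keys=True).encode()
    digest = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"state": state, "checksum": digest}


def verify(envelope: dict) -> dict:
    """Reject any state representation a client has tampered with."""
    payload = json.dumps(envelope["state"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["checksum"]):
        raise ValueError("invalid state representation")  # becomes an error response
    return envelope["state"]


envelope = sign({"process_id": "expense-7781", "step": "awaiting_payment"})
verify(envelope)                        # passes
envelope["state"]["step"] = "complete"  # a bad actor meddles with the state...
# verify(envelope) would now raise ValueError: the worst the hacker can do
# is break their own session.
```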
Not much to it, is there? On the one hand, it’s surprising no vendor has stepped up to the plate to build a fully partition-tolerant, RESTful BPM app. But then again, it’s not the technical complexity that’s the problem—it’s the paradigm shift in thinking about the nature of stateful applications in today’s Cloud-ready world. That shift is what makes Cloud-friendly BPM a disruptive technology.

The ZapThink Take
Perhaps I should have named this ZapFlash “RESTful BPM,” because, of course, that’s what it’s about. But that would have introduced unfortunate confusion, since that phrase has already been co-opted by vendors who use it to mean “traditional BPM engines with RESTful APIs.” But that’s the old way of thinking. What this ZapFlash proposes is a different way of thinking about BPM applications altogether.

This paradigm shift, as with all such shifts, is part of a larger trend from old ways of thinking to entirely different approaches to building and consuming applications. The table below illustrates some aspects of this shift:

Old way | New way
Partition intolerant | Partition tolerant
Vertical scalability | Horizontal scalability
ACID | BASE
Process engines | RESTful BPM
State management with objects (threads, session beans, etc.) | State management with HATEOAS
Web Services-based SOA | REST-based SOA
Legacy enterprise apps poorly shoehorned into the Cloud | Next generation enterprise apps built for the Cloud (enterprise SaaS)
Middleware centricity | Cloud centricity
Imperative programming | Declarative programming


It may seem the new way is at the bleeding edge, but in reality, many of today’s Cloud-based companies are already living the new paradigm. Only enterprises and incumbent vendors find themselves struggling with the old way. But that is changing. Stay tuned: ZapThink will continue to lead the way!

Image source: Leszek Leszczynski

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
