Cloud Event Processing: CEP in the Cloud

Observations & Next Steps

Over the past few weeks, I’ve implemented map/reduce using techniques commonly found in Complex Event Processing.  Here’s a summary of what was involved, and what tools would make such a deployment easier.

Getting the Data
One of the first tasks was the creation of an OnRamp – we use OnRamps to get data into our cloud for processing. The OnRamp used in this learning exercise subscribed to Twitter and fed the resulting JSON objects onto the service bus, RabbitMQ in this case. We had to configure RabbitMQ correctly, and the OnRamp had to be aware of, and implement, the semantics required to publish on this bus. It would be easier and more portable if this were abstracted behind some type of OnRamp API; we had abstracted this at Kaskad. In Korrelera, the bus didn’t matter – we could just as easily use direct sockets, JMS, TIBCO, or 29West; the OnRamp didn’t know, and didn’t care. In our TwitYourl example, there’s no way to monitor or manage the OnRamp other than tailing its output and visually inspecting it – there is no central management or operations console.
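
To make this concrete, here is a minimal sketch of what such an OnRamp might look like in Java using the RabbitMQ client library. The broker host, the queue name, and the idea of reading one tweet per line from standard input are all assumptions made for illustration; this is not the actual TwitYourl code.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Minimal OnRamp sketch: reads one JSON tweet per line from standard input
// (standing in for a real Twitter streaming subscription) and publishes each
// message to a RabbitMQ queue. Broker host and queue name are illustrative.
public class TwitterOnRamp {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                              // assumed broker location
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare("tweets", true, false, false, null);  // assumed queue name

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            // One JSON object per line goes straight onto the bus, unmodified.
            channel.basicPublish("", "tweets", null, line.getBytes(StandardCharsets.UTF_8));
        }

        channel.close();
        conn.close();
    }
}
```

In a real deployment the stdin loop would be replaced by a subscription to the Twitter streaming API, and an OnRamp API of the kind described above would hide the queue-declaration and publish details entirely.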

Definition of Services
Although we’ve used Map/Reduce as our first example, the topology doesn’t really matter. What matters is that we created a number of services and then deployed them. In our small example, we wrote a RuleBot that performed the Map function in Map/Reduce. This RuleBot listened for Tweet JSON objects, pulled them apart, found the information we were interested in, chunked it, and then fed it back onto the service bus. Another RuleBot performed the Reduce function – events were pumped into the open source Esper CEP engine, where they could then be queried. Again, the RuleBots had to be aware of the underlying bus’s semantics and could not be managed or monitored in our TwitYourl example.
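
For the curious, here is a rough sketch of the two RuleBots, assuming the classic Esper 5.x client API. The event type, the regex-based hashtag extraction, and the hard-coded sample tweets are illustrative stand-ins rather than the actual TwitYourl code; in the real example both bots would read from and write to the bus.

```java
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Map step: pull hashtags out of the raw tweet text (a real RuleBot would use a
// JSON parser on the full tweet object; the regex here is a stand-in).
// Reduce step: feed HashtagEvent objects into Esper and count per tag over a
// sliding one-minute window.
public class MapReduceRuleBots {

    public static class HashtagEvent {
        private final String tag;
        public HashtagEvent(String tag) { this.tag = tag; }
        public String getTag() { return tag; }
    }

    private static final Pattern HASHTAG = Pattern.compile("#\\w+");

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("Hashtag", HashtagEvent.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // "Reduce": a continuous query grouping hashtags over a 60-second window.
        EPStatement stmt = engine.getEPAdministrator().createEPL(
            "select tag, count(*) as cnt from Hashtag.win:time(60 sec) group by tag");
        UpdateListener listener = (newEvents, oldEvents) -> {
            if (newEvents == null) return;
            for (EventBean row : newEvents) {
                System.out.println(row.get("tag") + " -> " + row.get("cnt"));
            }
        };
        stmt.addListener(listener);

        // "Map": in TwitYourl this input would arrive from the bus; hard-coded
        // sample tweets keep the sketch self-contained.
        String[] tweets = { "Loving #CEP in the #cloud", "#CEP engines scale out" };
        for (String text : tweets) {
            Matcher m = HASHTAG.matcher(text);
            while (m.find()) {
                engine.getEPRuntime().sendEvent(new HashtagEvent(m.group()));
            }
        }
    }
}
```

The EPL statement is the “Reduce”: Esper keeps the 60-second window and the per-tag counts current as events arrive, so the result is continuously updated rather than computed in a batch.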

Deployment to the Cloud
All of this then had to be deployed to the cloud – there are two main components to this. First, we assumed that each node in the cloud was configured correctly. This had to be done by hand – it would have been much easier to have an image containing everything we needed from an infrastructure, or plumbing, point of view that could be deployed to any number of servers via point and click. Second, the services themselves needed to be deployed, and as I’ve already pointed out, those services had to be aware of the bus and could be neither managed nor monitored. All of this had to be done by hand, and log files or console windows had to be examined both operationally and to see the fruits of our labors.

How to Make This Easier
First, we need a tool that will configure and provision any number of nodes in our cloud. There are several vendors with products in this space, and I’m not going to talk about them here (yet). Second, and more importantly, we need an architecture layered on top of the hardware/operating system/ESB/etc. that can accept and deploy services dynamically – an implementation that can be monitored and managed remotely and that lets us manage our solution both physically and at some abstracted level.
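
As a thought experiment, the remote-management surface such a layer might expose could look something like the hypothetical sketch below; none of these types correspond to an existing product API.

```java
// Hypothetical sketch only: the kind of remote-management surface the missing
// layer might expose. Every name here is invented for illustration.
public interface ServiceNode {
    String nodeId();

    // Push a packaged service (a RuleBot or an OnRamp) to this node.
    void deploy(String serviceName, byte[] artifact);

    void undeploy(String serviceName);

    // What an operations console would poll instead of tailing log files.
    Status status();

    interface Status {
        boolean reachable();
        int runningServices();
    }
}
```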

Another Layer of Abstraction

It would be very handy indeed if we could define what was going on in our Event Processing Cloud and then push it out to the cloud. We need the ability to iteratively develop services, test them with live data, and deploy them to a service pool. Service pools define some chunk of work that must be done; RuleBots can join service pools and then be automagically managed by our CEP-based load-balancing tool. OnRamps can be managed. And everything going on can be examined, both physically and from a services point of view. For example, TwitYourl may be running on 100 machines, but the business user really only cares whether the service is available and whether the results can be viewed and utilized.
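
To make the idea concrete, here is a hypothetical sketch of a service pool abstraction; every name in it is invented for illustration and does not correspond to an existing API.

```java
// Hypothetical sketch of the service-pool abstraction described above.
public interface ServicePool {
    String name();                 // e.g. "twityourl-map" (illustrative)

    void join(RuleBot bot);        // the bot becomes eligible for this pool's work
    void leave(RuleBot bot);

    // The business-level question: is the service available, regardless of
    // how many machines currently back it?
    boolean isAvailable();

    interface RuleBot {
        String id();
        void start();
        void stop();
        Health health();           // polled by the CEP-based load-balancing tool
    }

    enum Health { HEALTHY, DEGRADED, FAILED }
}
```

With something like this in place, the load balancer worries about RuleBot health and pool membership, while the business user only ever asks whether the service is available.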

What’s Next?

I’m going to outline the requirements, at a high level, for this command-and-control architecture, and we’re going to re-deploy TwitYourl using this new approach. By doing this, we will be able to compare the ‘old’ way of deploying first-generation CEP-based solutions, which are designed to scale vertically on multiprocessor single machines, with our new Cloud Event Processing approach, which is designed to scale not only vertically but also horizontally, running on many more machines in a public, private, or hybrid cloud. And then we’ll talk about a much better way to look at output than monitoring a console or tailing a log file!

Thanks for following along!

More Stories By Colin Clark

Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to [email protected]
