Beyond REST and SOA: Introducing Agent-Oriented Architecture

Dynamic coupling represents a paradigm shift in how to build and utilize APIs

A question we commonly get at EnterpriseWeb is whether our platform follows REST or not. Representational State Transfer (REST) is an architectural style for distributed hypermedia systems such as the World Wide Web, and is perhaps best known for providing a lightweight, uniform Web-style application programming interface (API) to server-based resources. On the one hand, EnterpriseWeb can both consume and expose any type of interface, including tightly coupled APIs and Web Services as well as RESTful APIs, and the platform has no requirement that customers build distributed hypermedia systems. It would be easy to conclude, therefore, that while EnterpriseWeb supports REST, it is not truly RESTful.

Such a conclusion, however, would neglect the broader architectural context for EnterpriseWeb. The platform builds on top of and extends REST as the foundation for the dynamic, enterprise-class architectural style we call Agent-Oriented Architecture (AOA). EnterpriseWeb's intelligent agent, SmartAlex, leverages RESTful constraints as part of the core functionality of the EnterpriseWeb platform. The resulting AOA pattern essentially reinvents application functionality and enterprise integration, heralding a new paradigm for distributed computing.

The Limitations of REST
One of the primary challenges to the successful application of REST is understanding how to extend REST to distributed hypermedia systems in general, beyond the straightforward interactions between browsers and Web servers. To help clarify this point, Figure 1 below illustrates a simple RESTful architecture. In this example, the client is a browser, and it sends GETs and PUTs or other RESTful queries to URIs that resolve to resources on a server, which responds by sending the appropriate representation back to the client. In addition, REST allows for a cache intermediating between client and server that might resolve queries on behalf of the server for scalability purposes.

Figure 1: Simple RESTful Architecture
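To make the interaction in Figure 1 concrete, here is a minimal sketch of a RESTful exchange using Python's requests library. The URI, resource, and representation are hypothetical; the point is simply the uniform interface and the role a cache can play.

```python
import requests

# Hypothetical resource URI; the server maps it to an order resource.
ORDER_URI = "https://api.example.com/orders/42"

# GET retrieves the current representation of the resource.
response = requests.get(ORDER_URI, headers={"Accept": "application/json"})
order = response.json()

# PUT replaces the resource's state with a new representation.
order["status"] = "shipped"
requests.put(ORDER_URI, json=order)

# An intermediary cache can answer repeat GETs on the server's behalf
# when the response carries caching headers such as Cache-Control.
print(response.headers.get("Cache-Control"))
```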

As an architectural style, however, the point of REST isn't the uniform interface that the HTTP verbs enable. REST is really about hypermedia as the engine of application state - the HATEOAS constraint essential to building hypermedia systems. In Figure 1, we're representing HATEOAS by the interactions between human users and their browsers as people click links on Web pages, thus advancing the application state. The RESTful client (in other words, the browser) maintains application state for each user by showing them the Web page (or other representation) they requested when they followed a given hyperlink.
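As a rough illustration of HATEOAS, the sketch below assumes a hypothetical HAL-style hypermedia representation whose _links section advertises what the client may do next; the client advances application state by following one of those links rather than constructing URIs on its own.

```python
import requests

# Hypothetical entry point; the representation carries the next available links,
# e.g. {"status": "pending", "_links": {"cancel": {"href": "https://api.example.com/orders/42/cancel"}}}
entry = requests.get("https://api.example.com/orders/42",
                     headers={"Accept": "application/hal+json"}).json()

links = entry.get("_links", {})

# The client advances application state by following an advertised link,
# just as a person advances state by clicking a hyperlink in a browser.
if "cancel" in links:
    requests.post(links["cancel"]["href"])
```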

Software clients that do not necessarily have user interfaces may be problematic for REST, but they are a familiar part of the Service-Oriented Architecture (SOA) architectural style, where we call such clients Service consumers. Combining REST and SOA into the architectural style we call REST-Based SOA introduces the notion of an intermediary that presents a Service endpoint and resolves interactions with that endpoint into underlying interactions with various legacy systems. The SOA intermediary in this case exposes RESTful endpoints as URIs that accept GETs, PUTs, etc. from Service consumers, which can be any software client. See Figure 2 below for an illustration of the REST-Based SOA pattern.

Figure 2: REST-Based SOA

Note that adding SOA to REST augments the role of the intermediary. Pure REST allows for simple caching and proxy behavior, while SOA calls for policy-based routing and transformation operations that provide the Service abstraction. SOA also reinforces the notion that the Service consumer can be any piece of software, regardless of whether it has a user interface.
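A minimal sketch of that augmented intermediary role, assuming a hypothetical legacy order system behind a Flask endpoint: the intermediary exposes a RESTful URI to any Service consumer, applies a simple policy check, and translates the request into the legacy call. The legacy and policy functions are stand-ins, not any vendor's implementation.

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

def legacy_lookup(order_id):
    """Stand-in for a call into a legacy order system (hypothetical)."""
    return {"order_id": order_id, "status": "pending"}

def policy_allows(order_id):
    """Stand-in for a policy-based routing decision (hypothetical)."""
    return order_id > 0

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    # The intermediary presents a RESTful endpoint to any Service consumer
    # and resolves the interaction against the legacy system behind it.
    if not policy_allows(order_id):
        abort(403)
    return jsonify(legacy_lookup(order_id))
```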

Even with REST-Based SOA, however, we still have problems implementing HATEOAS: coding our clients so that they are able to gather the metadata they need by following hyperlinks. In other words, how do we apply REST to any hypermedia system, where instead of a browser we have any piece of software as a client? How do we code the software client to know how to follow hyperlinks, where it doesn't know what the hyperlinks are ahead of time or what representations they're supposed to interact with? Humans simply click hyperlinks until they get the representation they want, even if they don't know beforehand how to find it. How do we teach software to automate this process and gather all the metadata it needs by following a sequence of hyperlinks?

Introducing Agent-Oriented Architecture
The answer to these questions is to cast an intelligent agent in the role of SOA intermediary in the REST-Based SOA pattern in Figure 2. Intelligent software agents (or simply intelligent agents when we know we're talking about software) are autonomous programs that have the authority to determine what action is appropriate based upon the requests made of them. In this new, Agent-Oriented architectural pattern, the agent interacts with any resource as a RESTful client, where the agent must be able to automatically follow hyperlinks to gather all the information it requires in order to respond appropriately to any request from the client.

In other words, when following this newly coined AOA architectural style, software clients do not have to comply with HATEOAS (they may, but such compliance is optional). Instead, the agent alone must follow the HATEOAS constraint as it interacts with resources. To achieve this behavior, we must underspecify the intelligent agent: the agent can't know ahead of time what it's supposed to do to respond to any particular request. Instead, it must be able to process any request on demand by fetching related resources that provide the appropriate metadata, data, or code it needs to properly respond to that request with a custom response for each interaction, in real time. Figure 3 below illustrates the basic AOA pattern.

Figure 3: Agent-Oriented Architecture

For each request from any client, regardless of whether it has a user interface, the agent constructs a custom response based on the latest and most relevant information available. In fact, requests to the agent can come from anywhere (i.e., they follow an event-driven pattern). The agent's underspecification means that it doesn't know ahead of time what behavior it must exhibit, but it does know how to find the information it needs in order to determine that behavior - and it does that by following hyperlinks, as per HATEOAS. In other words, the goal-oriented agent resolves URIs recursively in order to gather the information it needs and execute the resulting behavior - a particularly concise example of fully automated HATEOAS in action.
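As a rough sketch of that recursive, goal-oriented resolution - illustrative only, not EnterpriseWeb's actual implementation - the agent below starts from the requested resource and keeps following a hypothetical _links structure (to metadata, policies, or code references) until it has gathered the context it needs to compose a response.

```python
import requests

def resolve(uri, gathered=None, depth=0, max_depth=5):
    """Recursively follow hyperlinks to gather the metadata, data, and
    policy references the agent needs (illustrative, not production code)."""
    gathered = gathered if gathered is not None else {}
    if uri in gathered or depth > max_depth:
        return gathered
    representation = requests.get(uri, headers={"Accept": "application/json"}).json()
    gathered[uri] = representation
    # Hypothetical link structure: each resource advertises related resources
    # as absolute URIs under "_links".
    for link in representation.get("_links", {}).values():
        resolve(link["href"], gathered, depth + 1, max_depth)
    return gathered

def handle_request(request_uri):
    # The underspecified agent decides what to do only after it has
    # resolved the graph of related resources for this particular request.
    context = resolve(request_uri)
    return {"resolved_resources": list(context.keys()), "context": context}
```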

The Benefits of AOA

An earlier Loosely-Coupled newsletter explained that if you follow REST, you're unable to accept out-of-band metadata or business context outside of the hypermedia. Agent-Oriented Architecture, however, solves this problem, because the agent is free to fetch whatever it needs to complete the request: it treats all entities - metadata, data, code, etc. - as resources. In other words, the agent serves as a RESTful client, even when the software client does not. What was out-of-band for REST isn't out-of-band for AOA. Everything is on the table.

The true power of AOA, though, lies in how it resolves the fundamental challenge of static APIs. Whether they be Web Services, RESTful APIs, or some other type of loosely-coupled interface, every approach to software integration today suffers from the fact that interactions tend to break when API contract metadata change.

By adding an intelligent agent to the mix, we're able to resolve differences in interaction context between disparate software endpoints dynamically and in real time. Far more than a traditional broker, which must rely on static transformation logic to resolve endpoint differences, the agent must be able to interpret metadata, as well as policies, rules, and the underlying data themselves, to create real-time interactions that maintain the business context - an example of dynamic coupling, a central principle of AOA.

Dynamic coupling, therefore, represents a paradigm shift in how to build and utilize APIs. Up to this point in time, the focus of both SOA and REST has been on building loosely-coupled interfaces: static, contracted interfaces specified by WSDL and various policy metadata when those interfaces are Web Services, or Internet Media Types and related metadata for RESTful interactions. Neither approach deals well with change. AOA, in contrast, relies upon dynamic coupling that responds automatically to change, since the agent interprets current metadata for every interaction in real time.
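To suggest what dynamic coupling might look like in practice - a sketch under assumptions, with hypothetical metadata documents and URIs, not a description of any product - the caller below re-fetches the target endpoint's current interface description on every interaction and maps its fields accordingly, so a renamed field in the contract changes the mapping rather than breaking the integration.

```python
import requests

def current_contract(metadata_uri):
    """Fetch the endpoint's current interface description (hypothetical format),
    e.g. {"endpoint": "https://target.example.com/orders",
          "field_map": {"customer_name": "custName"}}."""
    return requests.get(metadata_uri).json()

def dynamically_coupled_call(metadata_uri, business_payload):
    # Interpret the contract at interaction time rather than at build time,
    # so a change to the target's field names is absorbed by the mapping.
    contract = current_contract(metadata_uri)
    mapped = {contract["field_map"].get(k, k): v for k, v in business_payload.items()}
    return requests.post(contract["endpoint"], json=mapped)

# Usage (hypothetical registry URI and fields):
# dynamically_coupled_call("https://registry.example.com/contracts/orders",
#                          {"customer_name": "Acme Corp", "quantity": 3})
```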

Icons by http://dryicons.com

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
