Is SOA Non-Trivial?

Exploring key service characteristics

Contract Based
Software in general is required to conform to a defined functional interface, and services are no different in this respect. For example, web service interfaces defined in WSDL (Web Services Description Language) allow the definition of data types, input and output messages, operations, invocation protocols, and even the location of services. WS-Policy may also be used to define additional elements of the interface, such as security policies.
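To make this concrete, here is a minimal sketch of a contract-first interface using the standard JAX-WS annotations; the service name, namespace and operation are invented for the example, and the WSDL (types, messages, operations, binding and address) would be generated from, or mapped onto, this interface.

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Hypothetical account-lookup contract: the annotations carry the contract
// metadata (service name, namespace, operation and parameter names) that
// ends up in the generated WSDL.
@WebService(name = "AccountService", targetNamespace = "http://example.com/accounts/v1")
public interface AccountService {

    // Input message: the business account number.
    // Output message: the current balance in minor currency units.
    @WebMethod(operationName = "getBalance")
    long getBalance(@WebParam(name = "accountNumber") String accountNumber);
}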

Interfaces are, however, very weak on the behavioural aspects (semantics) of services: a service could return structurally correct data to a consumer and still not meet its functional obligations. In my experience, ensuring that services satisfy the behavioural contract is best done through automated test suites. Perhaps one day we will be able to define pre- and post-conditions and have these verified by the service management software, but don't hold your breath.
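As an illustration, here is a small behavioural test sketch (JUnit 4 assumed) against a hypothetical in-memory implementation of the AccountService interface sketched above; the rule it checks, that a newly opened account reports a zero balance, is exactly the kind of semantic obligation the WSDL cannot express.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class AccountServiceContractTest {

    // Hypothetical in-memory implementation, just enough to run the sketch.
    static class InMemoryAccountService implements AccountService {
        private final Map<String, Long> balances = new HashMap<>();
        private int nextNumber = 1;

        String openAccount(String owner) {
            String accountNumber = "ACC-" + nextNumber++;
            balances.put(accountNumber, 0L); // behavioural rule: opening balance is zero
            return accountNumber;
        }

        @Override
        public long getBalance(String accountNumber) {
            return balances.get(accountNumber);
        }
    }

    @Test
    public void newlyOpenedAccountHasZeroBalance() {
        // A structurally valid response is not enough; the behavioural
        // contract says a freshly opened account must report a zero balance.
        InMemoryAccountService service = new InMemoryAccountService();
        String accountNumber = service.openAccount("Alice");
        assertEquals(0L, service.getBalance(accountNumber));
    }
}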

In addition to functional obligations, services also have operational obligations, both to their consumers and to their operators. In other words, service contracts are much more like SLAs and OLAs (service-level and operational-level agreements) than traditional functional interfaces.

Contracts should ideally be kept in the service registry/repository, and where possible in a machine-readable format that service management software can use to help enforce the contracts at runtime. Of course, such runtime enforcement of contracts comes with a necessary performance penalty, but let's not spoil a lovely ideal with picky details... ;-)
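As a rough sketch of what a machine-readable, runtime-enforced contract might look like (all names here are invented for the example), here is a wrapper that times each call against a declared response-time obligation; the timing itself is the performance penalty referred to above.

import java.util.function.Supplier;

// Hypothetical machine-readable contract: the only obligation modelled here
// is a maximum response time in milliseconds, as it might be held in a
// registry/repository entry.
class ServiceContract {
    final long maxResponseMillis;
    ServiceContract(long maxResponseMillis) { this.maxResponseMillis = maxResponseMillis; }
}

public class ContractEnforcer {

    // Wraps a service call, timing it and reporting a breach if the
    // contracted response time is exceeded.
    public static <T> T invoke(ServiceContract contract, Supplier<T> serviceCall) {
        long start = System.nanoTime();
        T result = serviceCall.get();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > contract.maxResponseMillis) {
            System.err.println("Contract breach: " + elapsedMillis + "ms > "
                    + contract.maxResponseMillis + "ms");
        }
        return result;
    }

    public static void main(String[] args) {
        ServiceContract contract = new ServiceContract(200);      // 200 ms obligation
        long balance = invoke(contract, () -> 0L);                // stand-in for a real call
        System.out.println("balance = " + balance);
    }
}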

Composable
Composability is the ability of services to be used in orchestration scenarios by higher-level services or processes. It is a special case of the reusability characteristic in that the services need to be uniform as well as reusable. The primary reason for this requirement is that the orchestrating service will in all probability be built using something like BPEL rather than a conventional programming language, so any variation in service style becomes more difficult to deal with.

This uniformity includes things like:

1. Interface Granularity
I hate the phrase "coarse-grained" because it's almost meaningless in any practical sense; however, services do ideally need to offer a uniform level of granularity in order to be composable.

2. Error Handling
Service consumers need to be able to differentiate between unexpected and generally unmanageable "system" exceptions, and recoverable "business" exceptions that may be retried under different conditions or by supplying different data. In the case of system exceptions, the consumer won't know for sure whether the service operation completed, since, for example, the network connection might have died after the service completed its transaction. For this reason it is desirable that services be idempotent, i.e., safely retryable. Where services are not idempotent, compensating (undo) operations should be offered.
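Here is a minimal sketch of that split, with invented names: a checked "business" exception the consumer can act upon, unchecked failures treated as "system" exceptions whose outcome is unknown, and a retry loop that is only safe because the operation is assumed to be idempotent.

// Hypothetical checked "business" exception: recoverable by the consumer,
// e.g. by supplying different data or retrying later under different conditions.
class InsufficientFundsException extends Exception {
    InsufficientFundsException(String message) { super(message); }
}

public class PaymentClient {

    // Stand-in for a remote, idempotent service operation: a repeated call
    // with the same paymentId must not debit the account twice.
    static void submitPayment(String paymentId, long amount) throws InsufficientFundsException {
        // ... remote invocation would go here ...
    }

    // "System" exceptions (RuntimeException here; transport faults in practice)
    // leave the outcome unknown, so the call is retried. This is safe only
    // because the operation is idempotent.
    static void submitWithRetry(String paymentId, long amount, int attempts)
            throws InsufficientFundsException {
        for (int i = 1; ; i++) {
            try {
                submitPayment(paymentId, amount);
                return;
            } catch (InsufficientFundsException business) {
                throw business;                   // business exception: do not blindly retry
            } catch (RuntimeException system) {
                if (i >= attempts) throw system;  // give up after the last attempt
            }
        }
    }

    public static void main(String[] args) throws InsufficientFundsException {
        submitWithRetry("PAY-123", 5000, 3);
    }
}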

3. Security
Composable services should ideally offer a uniform security model to consumers, including single sign-on and channel or payload encryption.

While it is possible to design such uniformity into your services, the reality is that in a typical enterprise landscape the underlying service platforms and functional interfaces will be quite varied. This is where Enterprise Service Buses come in, providing features such as multi-channel adapters, single sign-on, security adapters, routing and message transformation, as well as world peace (if you believe the vendors).

Abstract
Services must be abstract in the sense that they offer a functional interface that is not tied to any particular underlying implementation of that interface. In other words, they should hide implementation details such as programming language, operating system platform, database structure and internal object model. Abstraction supports other service characteristics such as reusability and extensibility, and reduces coupling between producer and consumer.

The degree of service abstraction that is achieved is often linked to whether the service was designed top down or bottom up.

Top-down services begin with a business domain model and processes that ultimately translate into service operations and types, e.g., WSDL and XML Schema in the case of web services. Such services offer the highest level of abstraction since they are designed without an implementation in mind. However, top-down services require a translation between the interface and the implementation that can sometimes introduce a performance penalty, e.g., translating business keys to database identifiers.
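A small sketch of that translation cost, with hypothetical names: the top-down interface speaks only in business keys (account numbers), so the implementation must map each key to its internal database identifier on every call.

import java.util.HashMap;
import java.util.Map;

public class TopDownAccountService {

    // The extra lookup a top-down interface forces on the implementation:
    // business key (account number) -> internal database identifier.
    private final Map<String, Long> accountNumberToDbId = new HashMap<>();
    private final Map<Long, Long> balancesByDbId = new HashMap<>();

    public TopDownAccountService() {
        accountNumberToDbId.put("ACC-1001", 42L);
        balancesByDbId.put(42L, 150_00L);
    }

    // The contract exposes only the business key; the surrogate key (42) never
    // leaks into the interface, at the price of one extra translation per call.
    public long getBalance(String accountNumber) {
        Long dbId = accountNumberToDbId.get(accountNumber);
        if (dbId == null) {
            throw new IllegalArgumentException("Unknown account: " + accountNumber);
        }
        return balancesByDbId.get(dbId);
    }

    public static void main(String[] args) {
        System.out.println(new TopDownAccountService().getBalance("ACC-1001"));
    }
}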

Bottom-up services begin with an implementation and typically involve the use of toolkits to generate the service interfaces. Such services are closely coupled to their implementations and consumers. However, bottom-up services are enticing: first, they offer the ability to quickly expose existing code as services; second, they allow the use of implementation specifics, such as database identifiers, to improve performance.

These gains offer a false economy and should be strenuously resisted. The long-term gains of well-designed, adaptable services that reflect a business domain far outweigh any short-term performance or time-to-market gains.

Autonomous
A service is autonomous if it has full control over its internal logic. This requires that it has clearly defined and isolated (decoupled) functional and operational boundaries, that it is independent of other services, and that it communicates only via contract-driven messages and policies.

A consumer should exercise no influence over the service other than to execute it and provide input values. The service should have minimal dependency on its execution environment.

Autonomy has benefits for both the service consumer and provider:

  • Consumers are protected in that the service adheres strictly to agreed contracts.
  • Providers have greater reassurance that service-level agreements will be adhered to, and gain deployment flexibility because of the service's independence from its environment.

Extensible
The only constant in life is change, and this is no different for services. Services must be built with this fact in mind: that they will have to adapt to new or changing requirements. Extensibility is the ability of services to adapt while preserving existing consumer contracts.

We would expect the following types of change to services to have no impact on existing consumers:

1. Internal Source Code Changes
Insulating consumers from internal changes is achieved through another service characteristic, abstraction: it is critical that services do not "leak" implementation details into the interface. A typical example of such leakage is the use of underlying database identifiers in the interface instead of meaningful business keys.

2. Interface Extensions
In general, interface changes will break existing code. The exceptions to this are the addition of new operations or certain types of service data extension. Where breaking changes are introduced, a versioning scheme is required to separate the old and the new, while supporting both concurrently. Versioning for web services is typically done via the introduction of major and minor numbers to XML namespaces, as proposed, for example, in http://blogs.iona.com/sos/20070410-WSDL-Versioning-Best-Practise.pdf, or alternatively via a service lookup registry.
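As a rough sketch of the registry-based alternative (the names and the compatibility rule are assumptions for the example), here is a lookup that treats major versions as breaking, so they must match exactly, while a higher minor version is assumed backwards compatible, e.g., because it only adds operations.

import java.util.ArrayList;
import java.util.List;

public class ServiceRegistry {

    // Hypothetical registry entry: a major.minor version plus an endpoint address.
    static class Entry {
        final int major, minor;
        final String endpoint;
        Entry(int major, int minor, String endpoint) {
            this.major = major; this.minor = minor; this.endpoint = endpoint;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    void register(int major, int minor, String endpoint) {
        entries.add(new Entry(major, minor, endpoint));
    }

    // Major versions are breaking and must match exactly; the highest
    // compatible minor version at or above the requested one is returned.
    String lookup(int major, int minor) {
        Entry best = null;
        for (Entry e : entries) {
            if (e.major == major && e.minor >= minor
                    && (best == null || e.minor > best.minor)) {
                best = e;
            }
        }
        if (best == null) throw new IllegalStateException("No compatible service version");
        return best.endpoint;
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register(1, 0, "http://example.com/accounts/v1.0");
        registry.register(1, 2, "http://example.com/accounts/v1.2");
        registry.register(2, 0, "http://example.com/accounts/v2.0");
        System.out.println(registry.lookup(1, 1)); // -> .../v1.2 (compatible minor upgrade)
    }
}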

3. New Consumer Take-on
The take-on of new consumers will increase the load on a service. Services need to ensure that previously agreed consumer service-level agreements (SLAs) are not violated. Achieving this will involve a combination of automated performance regression test suites and runtime SLA monitoring.
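A minimal sketch of such runtime SLA monitoring, with invented names: per-consumer call statistics are recorded so that the compliance seen by existing consumers can be watched as new consumers are taken on.

import java.util.HashMap;
import java.util.Map;

// Hypothetical per-consumer SLA monitor: tracks how many calls met the agreed
// response time, so that taking on a new consumer can be shown not to degrade
// the SLAs already agreed with existing consumers.
public class SlaMonitor {

    static class Stats {
        long total;
        long withinSla;
    }

    private final long agreedMaxMillis;
    private final Map<String, Stats> statsByConsumer = new HashMap<>();

    public SlaMonitor(long agreedMaxMillis) {
        this.agreedMaxMillis = agreedMaxMillis;
    }

    // Record one observed call for a given consumer.
    public void record(String consumerId, long elapsedMillis) {
        Stats stats = statsByConsumer.computeIfAbsent(consumerId, id -> new Stats());
        stats.total++;
        if (elapsedMillis <= agreedMaxMillis) {
            stats.withinSla++;
        }
    }

    // Fraction of calls that met the SLA for this consumer (1.0 if none recorded yet).
    public double compliance(String consumerId) {
        Stats stats = statsByConsumer.get(consumerId);
        return (stats == null || stats.total == 0) ? 1.0 : (double) stats.withinSla / stats.total;
    }

    public static void main(String[] args) {
        SlaMonitor monitor = new SlaMonitor(200);            // agreed: 200 ms per call
        monitor.record("existing-consumer", 150);
        monitor.record("existing-consumer", 250);            // breach as new load arrives
        System.out.printf("compliance = %.2f%n", monitor.compliance("existing-consumer"));
    }
}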

4. Environmental Changes
Services need to ensure that any planned maintenance performed on their underlying execution platform, such as the application of critical patches, does not affect consumers. Services need to run in a cluster so that service instances can be transparently taken out of commission during maintenance periods while other instances continue to support consumer requests. It is also advisable to run automated regression test suites after such maintenance.

Robert Morschel is chief architect at Neptune Software Plc and has extensive experience in distributed software development for companies such as British Telecom, Nomura and Fidelity Investments. He blogs on SOA at soaprobe.blogspot.com.
