
Is SOA Non-Trivial?

Exploring key service characteristics

Contract Based
Software in general is required to conform to a defined functional interface, and services are no different in this respect. For example, web service interfaces defined in WSDL (Web Services Description Language) allow the definition of data types, input and output messages, operations, invocation protocols, and even the location of services. WS-Policy may also be used to define additional elements of the interface, such as security policies.

Interfaces are, however, very weak on the behavioural aspects (semantics) of services. A service could return structurally correct data to a consumer and still not meet its functional obligations. In my experience, ensuring that services satisfy the behavioural contract is best done through automated test suites. Perhaps one day we will be able to define pre- and post-conditions and have them verified by the service management software, but don't hold your breath.
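In the meantime, pre- and post-conditions can at least be enforced inside the service implementation itself. A minimal sketch in Python; the `contract` decorator and the `transfer` operation are hypothetical illustrations, not a real framework:

```python
from functools import wraps

class ContractViolation(Exception):
    """Raised when a pre- or post-condition check fails."""

def contract(pre=None, post=None):
    """Wrap a service operation with behavioural checks: `pre` validates
    the inputs before the call, `post` validates the result afterwards."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ContractViolation(f"pre-condition failed for {fn.__name__}")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ContractViolation(f"post-condition failed for {fn.__name__}")
            return result
        return wrapper
    return decorator

# Hypothetical service operation: a transfer must be for a positive amount,
# and the confirmation it returns must carry a non-empty reference.
@contract(pre=lambda amount: amount > 0,
          post=lambda ref: bool(ref))
def transfer(amount):
    return f"TXN-{amount}"
```

A call like `transfer(-5)` is then rejected before the operation ever runs, which is exactly the behavioural check an interface definition alone cannot express.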

In addition to functional obligations, services also have operational obligations, both to their consumers and to their operators. In other words, service contracts are much more like service-level agreements (SLAs) and operational-level agreements (OLAs) than traditional functional interfaces.

Contracts should ideally be kept in the service registry/repository, where possible in a machine-readable format that service management software can use to help enforce the contracts at runtime. Of course such runtime enforcement comes with a performance penalty, but let's not spoil a lovely ideal with picky details... ;-)

Composable

Composability is the ability of services to be used in orchestration scenarios by higher-level services or processes. It is a special case of the reusability characteristic in that the services need to be uniform as well as reusable. The primary reason for this requirement is that the orchestrating service will in all probability be built using something like BPEL rather than a conventional programming language, so any variation in service style becomes more difficult to deal with.

This uniformity includes things like:

1. Interface Granularity
I hate the phrase "coarse-grained" because it's almost meaningless in any practical sense; nonetheless, services ideally need to offer a uniform level of granularity in order to be composable.

2. Error Handling
Service consumers need to be able to differentiate between unexpected and generally unmanageable "system" exceptions, and recoverable "business" exceptions that may be retried under different conditions or by supplying different data. In the case of system exceptions, the consumer won't know for sure whether the service operation completed, since, for example, the network connection might have died after the service completed its transaction. For this reason it is desirable that services be idempotent, i.e., safe to retry without duplicating their effects. Where services are not idempotent, compensating (undo) operations should be offered.
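The idempotency requirement can be sketched as follows: the consumer supplies a request ID, so that a retry after a system exception cannot debit the account twice. The operation, its parameters, and the in-memory store are all hypothetical:

```python
# Completed requests, keyed by the consumer-supplied request ID. A real
# service would persist this alongside the transaction itself.
_processed = {}

def debit_account(request_id, account, amount, balances):
    """Debit `account` by `amount`, safely retryable via `request_id`."""
    if request_id in _processed:
        return _processed[request_id]   # duplicate retry: replay prior result
    balances[account] -= amount
    _processed[request_id] = balances[account]
    return balances[account]

balances = {"acc-1": 100}
debit_account("req-42", "acc-1", 30, balances)  # first attempt
debit_account("req-42", "acc-1", 30, balances)  # retry after a lost response
# the account is debited only once, leaving a balance of 70
```

The consumer can now retry freely after any system exception; the worst case is receiving the same confirmation twice.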

3. Security
Composable services should ideally offer a uniform security model to consumers, including single sign-on and channel or payload encryption.

While it is possible to design such uniformity into your services, the reality is that in a typical enterprise landscape the underlying service platforms and functional interfaces will be quite varied. This is where Enterprise Service Buses (ESBs) come in, providing features such as multi-channel adapters, single sign-on, security adapters, routing, and message transformation, as well as world peace (if you believe the vendors).
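To make the uniformity point concrete: when every service speaks the same message shape, an orchestrating process can chain them generically, which is essentially what BPEL relies on. A toy Python sketch, in which the service names, fields, and flat pricing are all hypothetical:

```python
# Every "service" takes and returns a plain dict -- the uniform message
# shape -- so the orchestrator can chain them without per-service glue.
def validate_order(message):
    message["valid"] = message.get("quantity", 0) > 0
    return message

def price_order(message):
    message["total"] = message["quantity"] * 10  # flat unit price of 10
    return message

def orchestrate(message, steps):
    """A toy stand-in for a BPEL process: run each step in order."""
    for step in steps:
        message = step(message)
    return message

order = orchestrate({"quantity": 3}, [validate_order, price_order])
# order is {"quantity": 3, "valid": True, "total": 30}
```

If one service instead took positional arguments, or signalled failure differently, the orchestrator would need service-specific glue for it, which is exactly the non-uniformity the article warns against.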

Abstract

Services must be abstract in the sense that they offer a functional interface that is not tied to any particular underlying implementation of that interface. In other words, they should hide implementation details such as programming language, operating system platform, database structure, and internal object model. Abstraction supports other service characteristics such as reusability and extensibility, and reduces coupling between producer and consumer.

The degree of service abstraction that is achieved is often linked to whether the service was designed top down or bottom up.

Top-down services begin with a business domain model and processes that ultimately translate to service operations and types, e.g., in the form of WSDL and XML Schema, in the case of web services. Such services will offer the highest level of abstraction since they are designed without an implementation in mind. However, top-down services require a translation between the interface and implementation that can sometimes introduce a performance penalty, e.g., translating business keys to database identifiers.

Bottom-up services begin with an implementation and typically involve the use of toolkits to generate the service interfaces. Such services are closely coupled to their implementations and consumers. However, bottom-up services are enticing: they offer the ability to quickly expose existing code as services, and they allow the use of implementation specifics such as database identifiers to improve performance.

These gains offer a false economy and should be strenuously resisted. The long-term gains of well-designed, adaptable services that reflect a business domain far outweigh any short-term performance or time-to-market gains.
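The difference shows up in the interface itself. A minimal Python sketch of the top-down style: the consumer only ever sees the business key, and the translation to a database identifier, with its small performance cost, stays internal. All names, keys, and record shapes here are hypothetical:

```python
# Top-down style: the interface speaks business keys; the mapping to the
# internal database identifier is an implementation detail.
_customer_ids = {"CUST-ACME": 1001}  # business key -> internal database id

def _fetch_by_db_id(db_id):
    # stand-in for the real database lookup
    return {"db_id": db_id, "name": "ACME Corp"}

def get_customer(business_key):
    db_id = _customer_ids[business_key]       # the translation step (the cost)
    row = _fetch_by_db_id(db_id)
    return {"name": row["name"]}              # the db_id never leaks out

record = get_customer("CUST-ACME")
# record is {"name": "ACME Corp"} -- no database identifier in the interface
```

A bottom-up toolkit would instead have exposed `_fetch_by_db_id` directly, coupling every consumer to the database's identifier scheme.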

Autonomous

A service is autonomous if it has full control over its internal logic. This requires that it has clearly defined and isolated (decoupled) functional and operational boundaries, that it is independent of other services, and that it communicates only via contract-driven messages and policies.

A consumer should exercise no influence over the service other than to execute it and provide input values. The service should have minimal dependency on its execution environment.

Autonomy has benefits for both the service consumer and provider:

  • Consumers are protected in that the service adheres strictly to agreed contracts.
  • Providers have greater reassurance that service-level agreements can be met, and their services gain deployment flexibility because of their environment independence.

Extensible

The only constant in life is change, and this is no different for services. Services must be built with the fact in mind that they will have to adapt to new or changing requirements. Extensibility is the ability of services to adapt while preserving existing consumer contracts.

We would expect the following types of changes to services to have no impact on existing consumers:

1. Internal Source Code Changes
Insulating consumers from internal changes is achieved through another service characteristic: abstraction, i.e., it is critical that services do not "leak" implementation details into the interface. A typical example of this is the use of underlying database identifiers in the interface instead of meaningful business keys.

2. Interface Extensions
In general, interface changes will break existing code. The exceptions to this are the addition of new operations or certain types of service data extension. Where breaking changes are introduced, a versioning scheme is required to separate the old and the new, while supporting both concurrently. Versioning for web services is typically done via the introduction of major and minor numbers to XML namespaces, such as the scheme proposed at http://blogs.iona.com/sos/20070410-WSDL-Versioning-Best-Practise.pdf, or alternatively via a service lookup registry.
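A sketch of the compatibility rule such a major/minor namespace scheme implies: equal major versions are compatible, provided the provider's minor version is at least the consumer's. The URI shape used here is illustrative, not a standard:

```python
import re

# Parse namespaces of the illustrative form <base>/v<major>.<minor>.
NS_PATTERN = re.compile(r"^(?P<base>.+)/v(?P<major>\d+)\.(?P<minor>\d+)$")

def compatible(consumer_ns, provider_ns):
    """True if the provider can serve the consumer: same base and major
    version, and the provider's minor version is at least the consumer's."""
    c = NS_PATTERN.match(consumer_ns)
    p = NS_PATTERN.match(provider_ns)
    if not (c and p) or c["base"] != p["base"]:
        return False
    if c["major"] != p["major"]:
        return False   # major bump = breaking change
    return int(p["minor"]) >= int(c["minor"])

compatible("http://example.com/orders/v1.0", "http://example.com/orders/v1.2")  # True
compatible("http://example.com/orders/v1.0", "http://example.com/orders/v2.0")  # False
```

A registry can apply exactly this rule at lookup time, routing old consumers to a v1.x endpoint while new consumers take up v2.0.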

3. New Consumer Take-on
The take-on of new consumers will increase the load on a service. Services need to ensure that previously agreed consumer service-level agreements (SLA) are not violated. Achieving this will involve a combination of automated performance regression test suites and runtime SLA monitoring.
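Runtime SLA monitoring can be as simple as tracking per-call latency against an agreed percentile target. A minimal Python sketch, where the 95th-percentile target and the sample latencies are hypothetical:

```python
class SlaMonitor:
    """Record per-call latencies and check them against a p95 target."""

    def __init__(self, p95_target_ms):
        self.p95_target_ms = p95_target_ms
        self.samples = []

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        # Simple nearest-rank 95th percentile over the recorded samples.
        ordered = sorted(self.samples)
        index = max(0, int(len(ordered) * 0.95) - 1)
        return ordered[index]

    def within_sla(self):
        return self.p95() <= self.p95_target_ms

monitor = SlaMonitor(p95_target_ms=200)
for latency in [120, 90, 150, 300, 110, 95, 130, 100, 140, 125]:
    monitor.record(latency)
# one slow outlier (300 ms) does not breach a 200 ms p95 target
```

When a new consumer is taken on, the same monitor run against the increased load shows immediately whether the previously agreed target still holds.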

4. Environmental Changes
Services need to ensure that any planned maintenance that is performed on their underlying execution platform such as the application of critical patches does not affect consumers. Services need to run in a cluster with the ability for service instances to be transparently taken out of commission during maintenance periods while other service instances continue to support consumer requests. It is advisable to run automated regression test suites after such maintenance.

More Stories By Robert Morschel

Robert Morschel is chief architect at Neptune Software Plc and has extensive experience in distributed software development for companies such as British Telecom, Nomura and Fidelity Investments. He blogs on SOA at soaprobe.blogspot.com.

