Top Ten Pitfalls of P2P Integration to Avoid in the Cloud

While integration isn’t necessarily a new problem, the unique challenges of integrating in the cloud require a new approach

While integration isn’t necessarily a new problem, the unique challenges of integrating in the cloud require a new approach. Many enterprises, however, are still using point-to-point (P2P) solutions to address their cloud integration needs.

In order to tackle cloud integration successfully, we need to move beyond P2P integration and avoid repeating the same mistakes. To aid in that effort, here is a list (in no particular order) of the top 10 pitfalls of P2P integration to avoid repeating in the cloud:

1. Building vs. buying: If you have developers with integration experience in your IT department, you can have them build a custom P2P integration in house, rather than buy a packaged solution. Building your own integration, however, typically means that you will also have to manage and maintain a codebase that isn’t central to your business and is difficult to change.

2. Quickfire integrations: Let’s say you need to integrate two systems quickly and hire a developer to work on the project over a couple of days. You notice an improvement in business efficiency and see an opportunity to integrate additional systems. You hire the same developer and expect the same quickfire integrations, but the complexity of the project has increased exponentially. The takeaway? It’s always a good idea to approach integration systematically and establish a plan up front, rather than integrate your systems in an ad hoc P2P fashion.

3. Embedding integrations in your application: Although it might be tempting to embed P2P integrations in your web application, you should be cautious about this approach. It may be fine for really simple integrations, but over time, your integration logic becomes scattered in different web apps. Instead, you should think of integration as a separate tier of your application architecture and centralize this logic.
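To make that concrete, here is a minimal sketch of an integration tier living in its own module, assuming a hypothetical web app that must notify a CRM when an order is created; the module name, endpoint, and payload shape are illustrative only, not any particular product's API:

```python
# integrations.py - a hypothetical, centralized integration tier.
# Web apps call publish_event(); only this module knows how to reach
# downstream systems, so integration logic isn't scattered across apps.
import json
import logging
from urllib import request

log = logging.getLogger("integrations")

# Illustrative endpoint; in practice this would come from configuration.
CRM_ORDERS_URL = "https://crm.example.com/api/orders"

def publish_event(event_type: str, payload: dict) -> None:
    """Route a business event to whichever downstream systems need it."""
    if event_type == "order.created":
        _send_order_to_crm(payload)
    else:
        log.warning("No integration registered for event %s", event_type)

def _send_order_to_crm(order: dict) -> None:
    body = json.dumps(order).encode("utf-8")
    req = request.Request(CRM_ORDERS_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        log.info("CRM accepted order %s (HTTP %s)", order.get("id"), resp.status)
```

A web app would simply call publish_event("order.created", {...}) and stay unaware of the CRM's API; if the CRM changes, only this one module changes.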

4. Creating dependencies between applications: When you integrate applications in a P2P fashion, you create a dependency between them. For example, let’s say you’ve integrated App A and App B. When App A is modified or updated, you will need to change the integration that connects it to App B. You also need to re-test the integration to make sure it works properly. If you add App C to the mix, the number of integrations you have to change and re-test multiplies with every new connection.
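One common way to soften that coupling is to translate each application's payload into a canonical format at the edge, so a change in App A only touches App A's adapter. A rough sketch, with hypothetical field names standing in for the apps' real schemas:

```python
from dataclasses import dataclass

# Canonical record that both sides of the integration agree on.
@dataclass
class Customer:
    customer_id: str
    email: str

def from_app_a(raw: dict) -> Customer:
    """Adapter for App A's payload. If App A renames a field,
    only this function needs to change."""
    return Customer(customer_id=str(raw["custId"]), email=raw["emailAddress"])

def to_app_b(customer: Customer) -> dict:
    """Adapter for App B's expected payload."""
    return {"id": customer.customer_id, "contact_email": customer.email}

def sync(raw_from_a: dict) -> dict:
    """The integration itself never touches either app's native format."""
    return to_app_b(from_app_a(raw_from_a))
```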

5. Assuming everything always works: One of the consistently recurring mistakes of doing quick P2P integrations is assuming that things will not break. The reality is that integrations don’t always work as planned. As you integrate systems, you need to design for errors and establish a course of action for troubleshooting different kinds of errors. Error handling is particularly troublesome when integrating software-as-a-service (SaaS) applications, because you have limited visibility and control over the changes that SaaS vendors make to them.
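As an illustration of what "designing for errors" can mean in practice, here is a rough sketch of defensive calling around a SaaS endpoint: retry transient failures with backoff, fail fast on errors a retry cannot fix, and log everything. The URL, timeout, and retry policy are placeholders rather than recommendations for any particular service:

```python
import time
import logging
import requests

log = logging.getLogger("saas-sync")

def push_record(url: str, record: dict, attempts: int = 3) -> bool:
    """Send one record to a SaaS API, retrying only transient failures."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=record, timeout=10)
        except requests.RequestException as exc:
            log.warning("Attempt %d: network error: %s", attempt, exc)
        else:
            if resp.status_code < 300:
                return True
            if resp.status_code < 500:
                # 4xx: our request is wrong; retrying won't help.
                log.error("Rejected (%d): %s", resp.status_code, resp.text[:200])
                return False
            log.warning("Attempt %d: server error %d", attempt, resp.status_code)
        time.sleep(2 ** attempt)  # crude exponential backoff
    log.error("Giving up after %d attempts", attempts)
    return False
```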

6. It worked yesterday: Just because P2P integration worked for one project does not mean it will work for another. The key is to test each integration you build. Unfortunately, P2P integrations are often built and deployed quickly without sufficient planning or proper testing, increasing the chances for errors. Although it can be difficult and does require a decent amount of effort, testing integrations is absolutely critical.
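Even a very small automated check beats none. Below is a sketch of the kind of test worth writing for every field mapping, using Python's built-in unittest; the map_invoice function and its field names are hypothetical:

```python
import unittest

def map_invoice(source: dict) -> dict:
    """Hypothetical field mapping between two systems."""
    return {
        "invoice_id": source["Id"],
        "total_cents": int(round(float(source["Amount"]) * 100)),
        "currency": source.get("Currency", "USD"),
    }

class MapInvoiceTest(unittest.TestCase):
    def test_happy_path(self):
        out = map_invoice({"Id": "INV-1", "Amount": "19.99"})
        self.assertEqual(out, {"invoice_id": "INV-1",
                               "total_cents": 1999,
                               "currency": "USD"})

    def test_missing_amount_fails_loudly(self):
        with self.assertRaises(KeyError):
            map_invoice({"Id": "INV-2"})

if __name__ == "__main__":
    unittest.main()
```

Tests like these catch mapping regressions cheaply; they complement, rather than replace, end-to-end tests against a sandbox account of the target system.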

7. Using independent consultants: Many companies do not have developers with enough integration expertise on staff, so they hire consultants to resolve their integration issues. The problem with this approach is that you often have limited visibility into what the consultant delivers. If you need to make changes later, you typically have to go back to the same consultant, which is not always possible.

8. Creating single points of failure: As your P2P integration architecture grows in size and complexity, its chances of becoming a single point of failure in your entire network increase as well. Minimizing the potential for single points of failure should be a priority when it comes to integration, but the lack of decoupling in a P2P approach makes it hard to eliminate bottlenecks in your system.
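Decoupling usually means putting a buffer between producer and consumer so that a slow or failing target does not stall everything upstream. The sketch below uses an in-memory queue purely for illustration; a production system would rely on a durable message broker instead:

```python
import queue
import threading
import logging

log = logging.getLogger("worker")
outbox: "queue.Queue[dict]" = queue.Queue()

def enqueue_order(order: dict) -> None:
    """Producers return immediately; delivery happens asynchronously."""
    outbox.put(order)

def deliver(order: dict) -> None:
    # Placeholder for the real call to the downstream system.
    log.info("delivered order %s", order.get("id"))

def worker() -> None:
    while True:
        order = outbox.get()
        try:
            deliver(order)
        except Exception:
            log.exception("delivery failed; re-queueing order")
            outbox.put(order)  # naive retry; a real broker handles this better
        finally:
            outbox.task_done()

threading.Thread(target=worker, daemon=True).start()
```

If the downstream system is down, work waits in the buffer instead of failing the producing application outright.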

9. Black-box solutions: Custom-built P2P solutions are usually black box in nature. In other words, they lack reporting capabilities that tell you what is happening between systems. This makes it very hard to debug problems, measure performance, or find out if things are working properly.
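The cheapest antidote is to instrument every integration call so you at least know what crossed the wire, whether it succeeded, and how long it took. A small, hypothetical decorator-based sketch:

```python
import functools
import logging
import time

log = logging.getLogger("integration-metrics")

def instrumented(name: str):
    """Wrap an integration call with timing and success/failure logging."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                log.info("%s ok in %.0f ms", name, (time.monotonic() - start) * 1000)
                return result
            except Exception:
                log.exception("%s failed after %.0f ms", name,
                              (time.monotonic() - start) * 1000)
                raise
        return wrapper
    return decorator

@instrumented("crm.push_contact")
def push_contact(contact: dict) -> None:
    ...  # the actual integration logic
```

Even this level of visibility turns "is it working?" from guesswork into a log query.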

10. Creating a monster: Quick P2P integrations are relatively manageable when you have two or three systems to connect, but when you start adding other systems, your architecture quickly becomes a complicated mess. And because no two P2P integrations are exactly the same, managing your integrations becomes a major pain. Investing in some design work up front, however, saves you from having to throw away a tangled P2P architecture and start from scratch to find a new solution under pressure. With a well-thought-out design and a simple architecture, you can reduce the management burden and cost associated with integration.
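The "mess" has a predictable shape: with n systems, a full point-to-point mesh needs one integration per pair, n(n-1)/2, while a hub-and-spoke or bus design needs roughly one per system. A few lines make the difference obvious:

```python
def p2p_connections(n: int) -> int:
    """Point-to-point: every pair of systems gets its own integration."""
    return n * (n - 1) // 2

def hub_connections(n: int) -> int:
    """Hub/bus: each system connects once, to the hub."""
    return n

for n in (3, 5, 10, 20):
    print(f"{n:>2} systems: {p2p_connections(n):>3} P2P links vs {hub_connections(n):>2} hub links")
#  3 systems:   3 P2P links vs  3 hub links
#  5 systems:  10 P2P links vs  5 hub links
# 10 systems:  45 P2P links vs 10 hub links
# 20 systems: 190 P2P links vs 20 hub links
```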

More Stories By Ross Mason

Ross Mason is the CTO and Founder of MuleSoft. He founded the open source Mule® project in 2003. Frustrated by integration "donkey work," he set out to create a new platform that emphasized ease of development and re-use of components. He started the Mule project to bring a modern approach, one of assembly, rather than repetitive coding, to developers worldwide. Now, with the MuleSoft team, Ross is taking these founding principles of dead-simple integration to the cloud with Mule iON, the world's first integration platform as a service (iPaaS). Ross holds a BS (Hons) in Computer Science from Bristol, UK and has been named in InformationWeek's Top 10 Innovators & Influencers and InfoWorld's Top 25 CTOs.
