
JavaOne 2009: Open Source Project Stonehenge

Interoperability is more than just talking with each other

Microsoft and Sun recently announced their Open Source Project Stonehenge at the JavaOne conference. Stonehenge is a reference implementation that shows how to bridge the two major development platforms, Java and .NET, using Web Services. This initiative puts the spotlight on heterogeneity and the challenges that come with it.
Interoperability on the platform level is just the starting point of bridging the two worlds. It leads to further challenges down the road and raises several questions:

  • Who needs interoperability?
  • How does it affect team productivity?
  • Is it all about application stacks?
  • How effectively can we diagnose problems?
  • How to calculate TCO: does 1 + 1 = 2 or 3?

Who needs interoperability?
There are different use cases where companies need to think about interoperability:

  • Integrating different systems implemented on different platforms, e.g.: ERP with CRM
  • Integrating 3rd party solutions that only run on a specific platform, e.g.: Enterprise Search Engines
  • Integrating components inherited from acquisitions

The driving factor of interoperability in all these cases is gained productivity. Instead of re-implementing an existing system to bring it onto the platform of choice, it is more productive to integrate with the other platform.

Each platform also has its own strengths in different areas. Microsoft technologies, for instance, provide great flexibility and good tools for implementing end-user applications, whereas the Java platform has proven itself very strong in backend enterprise systems. Leveraging the best of both sides requires integrating these two worlds.
Microsoft and Sun took the first step by providing a reference implementation that shows how to technically integrate .NET and Java using Web Services. This is an important first step – but successful interoperability requires more than technical integration.
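
To make that technical integration concrete, the following is a minimal sketch of a Java client calling a Web Service that could just as well be hosted by WCF on the .NET side. It uses the standard JAX-WS API; the WSDL URL, the namespace and the StockQuote interface are hypothetical stand-ins for illustration, not taken from the Stonehenge code itself.

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hypothetical service endpoint interface matching the WSDL contract
    // published by the .NET side; all names here are illustrative.
    @WebService(targetNamespace = "http://example.org/stocktrader")
    interface StockQuote {
        double getQuote(String symbol);
    }

    public class StockQuoteClient {
        public static void main(String[] args) throws Exception {
            // WSDL assumed to be published by a WCF service on the .NET side.
            URL wsdl = new URL("http://localhost:8000/StockQuoteService?wsdl");
            QName serviceName =
                    new QName("http://example.org/stocktrader", "StockQuoteService");
            Service service = Service.create(wsdl, serviceName);
            // JAX-WS builds a dynamic proxy that speaks plain SOAP - the
            // Java caller never needs to know the endpoint is written in C#.
            StockQuote port = service.getPort(StockQuote.class);
            System.out.println("MSFT: " + port.getQuote("MSFT"));
        }
    }

Because the contract is plain WSDL and SOAP, neither side needs to care what runs behind the other endpoint.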

How does it affect team productivity?
How often have you seen a .NET developer who debugs Java code in Eclipse on Linux? Or how often have you seen a Java developer in front of Visual Studio browsing through C# or VB.NET code?
Cross-platform developers are a rare “species”. A typical cross-platform development team therefore has developers specialized in either Java or .NET. An individual developer most often sees the other platform as a black box – something to be avoided if possible. Web Services allow calling from .NET to Java and the other way around. Debugging is easy on each side individually, but debugging transactions that cross platform boundaries is a big obstacle: it requires a developer who is acquainted with both Visual Studio and Eclipse and who is familiar with both the Java and the .NET code – which is rarely the case.
In a typical heterogeneous team it therefore always takes developers from both sides to analyze transaction flows. This is a tedious manual task: setting the correct breakpoints on each side and in each IDE, then stepping through the code. Debugging through code also only works in a single-user environment, as it is hardly possible to identify which thread in the server-side implementation of a Web Service belongs to which calling client.
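
One pragmatic way to re-establish that missing link between client call and server thread is to tag every outgoing SOAP request with a correlation id that the server side can log. The sketch below uses the standard JAX-WS handler mechanism; the header name and namespace are invented for this example:

    import java.util.Collections;
    import java.util.Set;
    import java.util.UUID;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPEnvelope;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    // Client-side JAX-WS handler that stamps every outgoing request
    // with a correlation id (header name invented for this sketch).
    public class CorrelationHandler implements SOAPHandler<SOAPMessageContext> {

        private static final QName CORRELATION_HEADER =
                new QName("http://example.org/tracing", "correlationId");

        @Override
        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound =
                    (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (outbound) {
                try {
                    SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                    SOAPHeader header =
                            env.getHeader() != null ? env.getHeader() : env.addHeader();
                    String id = UUID.randomUUID().toString();
                    header.addHeaderElement(CORRELATION_HEADER).addTextNode(id);
                    // Log the id on the client so it can later be matched
                    // against the server-side log.
                    System.out.println("outgoing call tagged with txid=" + id);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return true; // continue with the handler chain
        }

        @Override
        public boolean handleFault(SOAPMessageContext ctx) { return true; }

        @Override
        public void close(MessageContext ctx) { }

        @Override
        public Set<QName> getHeaders() { return Collections.emptySet(); }
    }

Registered on the client proxy’s handler chain (via the BindingProvider API), the handler makes every call identifiable on both sides of the wire – no guessing which server thread belongs to which client.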

If team collaboration works well, cross-platform problem analysis is a doable task – but as outlined above it requires at least one resource from each side. Far too often these team members don’t communicate well and play the “Blame Game” when coming across an issue, declaring the problem to be in the implementation on the other platform. This approach negatively affects team productivity by introducing extra resolution cycles, and it increases tension between teams and team members.
These cross-team issues are similar to those we have seen between development and testing teams – two teams that work in different domains without insight into each other’s problem domain. That problem has been solved by providing testers with diagnostics tools that collect more meaningful information during their tests, helping developers quickly identify the root cause of problems. Getting this type of information not only took out the tension but also sped up the overall development cycle.

The logical conclusion therefore is to equip everybody in a heterogeneous team with tools that can collect and visualize the right set of data in order to speed up problem resolution, take out the tension and improve overall team productivity.

Is it all about application stacks?
Integrating the different platforms from an implementation perspective is obviously the mandatory step to allow cross platform communication. This goal has been achieved with Web Services and the correct implementation of Web Service Standards by the different application stack providers.
Development tools like Visual Studio and Eclipse make it easy to create the application code (proxy classes) necessary to call from Java to .NET and vice versa. As long as everything runs fine at runtime, developers on both sides can focus on their implementation without needing to worry about what is going on in the other cross-platform teams.
When problems come up – e.g. a .NET Web Service called from Java returns a cryptic error – it’s not possible for the Java developer to go beyond the error message received in the Eclipse debugger. Tools on each side are very good at debugging, diagnosing and profiling problems on their respective platform. Cross-platform support, however, is missing right now – preventing the Java developer from following the problem to the .NET side.
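
To illustrate how little actually arrives on the Java side, here is a small sketch of handling such a failing call, reusing the hypothetical StockQuote proxy from the earlier example:

    import javax.xml.ws.soap.SOAPFaultException;

    public class FaultDemo {
        // 'StockQuote' is the hypothetical proxy interface from the
        // earlier client sketch.
        static void callAndReport(StockQuote port) {
            try {
                System.out.println(port.getQuote("MSFT"));
            } catch (SOAPFaultException e) {
                // The serialized fault string is all the Java side gets -
                // with WCF defaults often just a generic message, while
                // the real .NET stack trace stays hidden on the other side.
                System.err.println("SOAP fault: " + e.getFault().getFaultString());
            }
        }
    }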

Why do we need cross-platform tools?
Coming back to the example from above: a .NET Web Service call that throws an error or executes slowly can do so for multiple reasons. It could be a bug in the .NET Web Service implementation. It could be a configuration issue in one of the SOAP application stacks causing interoperability issues, or it might be problematic input parameters from the Java side that cause unexpected or slow behaviour on the other side. One approach is to analyze log files from both sides. The problem here is that there is no common log format and no transactional context available that would allow transactional tracing and correlation of log entries.
In order to analyze cross-platform problems we therefore need tools that support all involved platforms. Having this ability enables developers on both sides to better understand the dynamics of the whole system and speeds up problem resolution.
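
Until such tools are in place, the missing transactional context can be approximated by hand. The sketch below is the server-side counterpart to the correlation handler shown earlier: it pulls the (invented) correlation header out of the incoming SOAP message and puts it into SLF4J’s Mapped Diagnostic Context, so that a log pattern containing %X{txid} stamps every entry written while serving that request:

    import java.util.Collections;
    import java.util.Iterator;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPElement;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;
    import org.slf4j.MDC;

    // Server-side counterpart to the client handler shown earlier.
    public class CorrelationLoggingHandler implements SOAPHandler<SOAPMessageContext> {

        private static final QName CORRELATION_HEADER =
                new QName("http://example.org/tracing", "correlationId");

        @Override
        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound =
                    (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (!outbound) {
                try {
                    Iterator<?> it = ctx.getMessage().getSOAPHeader()
                            .getChildElements(CORRELATION_HEADER);
                    if (it.hasNext()) {
                        String id = ((SOAPElement) it.next()).getTextContent();
                        // Every log line on this worker thread now carries
                        // the same id the client logged when making the call.
                        MDC.put("txid", id);
                    }
                } catch (Exception e) {
                    // No SOAP header present: the request stays uncorrelated.
                }
            }
            return true;
        }

        @Override
        public boolean handleFault(SOAPMessageContext ctx) { return true; }

        @Override
        public void close(MessageContext ctx) { MDC.remove("txid"); }

        @Override
        public Set<QName> getHeaders() { return Collections.emptySet(); }
    }

If the .NET service logs the same header value, searching both logs for one txid becomes a poor man’s transactional trace – far from a real cross-platform tool, but enough to correlate log entries.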

How effectively can we diagnose problems?
The first thing in problem diagnosis is to answer the question of whether there is a problem at all. Problems can manifest themselves in different ways:

  • Bad application performance to the end user
  • Errors in log entries of individual components
  • Resource issues in infrastructure impacting system components

When we know that we have a problem we need to figure out where it is. Log files that indicate a problem are almost a best-case scenario, as they at least give an indication of where the problem first surfaced. Problems perceived by end users – e.g. bad application performance or error pages – are harder to track. Where was the time spent? Which component threw the error that made it to the user interface?
Alerts from monitoring individual system components can tell us that we are running high on CPU on certain servers or consuming too much network bandwidth. But which component is responsible for the excessive use of resources? Is it a bug in a component that runs on these servers, or a calling service that makes too many calls to a certain component?

Existing monitoring and diagnostics solutions focus on a particular environment or on single server instances. Application servers usually come with their own diagnostics support and additionally export performance counters that can be picked up by enterprise monitoring solutions, enabling monitoring of the complete infrastructure. These tools are great for analyzing general problems in the infrastructure or standalone problems within a server. All existing tools, however, fall short at analyzing cross-platform issues. There are tools that analyze log files from all the different components and correlate events across log files to identify individual transactions. This is the right way to go, but it relies on having log information that can actually be correlated.
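
Where such correlatable context exists – for example the txid header from the sketches above – even a toy correlator demonstrates the idea. The snippet below merges a Java-side and a .NET-side log file by transaction id; file names and log layout are assumptions for illustration:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy log correlator: groups lines from both platforms by the txid
    // stamped into them, yielding one merged view per transaction.
    public class LogCorrelator {

        private static final Pattern TXID = Pattern.compile("txid=([0-9a-f-]+)");

        public static void main(String[] args) throws Exception {
            Map<String, List<String>> byTx = new TreeMap<>();
            for (String file : List.of("java-side.log", "dotnet-side.log")) {
                for (String line : Files.readAllLines(Path.of(file))) {
                    Matcher m = TXID.matcher(line);
                    if (m.find()) {
                        byTx.computeIfAbsent(m.group(1), k -> new ArrayList<>())
                            .add(file + ": " + line);
                    }
                }
            }
            byTx.forEach((id, lines) -> {
                System.out.println("=== transaction " + id + " ===");
                lines.forEach(System.out::println);
            });
        }
    }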

Too often, problem diagnosis in heterogeneous environments falls back to manual work: collecting all available log information and performance counters. Like any manual task it is not very effective and does not always lead to a successful diagnosis. In order to diagnose problems we require a common way of capturing information from all platforms that participate in executing a transaction. When a transaction has a problem, all this information must be extractable and easily accessible to the developers analyzing it.

How to calculate TCO: does 1 + 1 = 2 or 3?
The tool landscape for Java and .NET is huge. There are many specialized tools that help improve productivity by supporting all stakeholders involved in running an application.
When working in cross-platform environments it’s necessary to ensure good tool support for each platform. Most tools on the market specialize in a single platform, leading to the need for multiple tools in order to get the best support for each. More tools mean more cost – especially when we want to ensure productivity.
Total Cost of Ownership for heterogeneous environments, however, is not defined just by the individual costs of the tools required. In addition to buying the individual tools, there is extra cost involved in integrating them. Getting information from each of the tools is good – but that information only becomes really valuable when it can be integrated in a similar way as our applications are integrated.
The lack of standards makes it very hard to integrate these tools and obtain, across platforms, the value each of them delivers on its own platform.
Tools that support all platforms and that can provide the data collected on each platform in an integrated way will save the costs that would otherwise be spent on integrating individual island solutions.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
