JavaOne 2009: Open Source Project Stonehenge

Interoperability is more than just talking with each other

Microsoft and Sun recently announced their Open Source Project Stonehenge at the JavaOne conference. Stonehenge is a reference implementation that shows how to bridge the two major development platforms, Java and .NET, using Web Services. This initiative definitely puts the spotlight on heterogeneity and the challenges that come with it.
Interoperability on the platform level is just the starting point of bridging the two worlds. It leads to further challenges down the road and raises several questions:

  • Who needs interoperability?
  • How does it affect team productivity?
  • Is it all about application stacks?
  • How effectively can we diagnose problems?
  • How do we calculate TCO: 1 + 1 = 2 or 3?

Who needs interoperability?
There are different use cases in which companies need to think about interoperability:

  • Integrating different systems implemented on different platforms, e.g.: ERP with CRM
  • Integrating 3rd party solutions that only run on a specific platform, e.g.: Enterprise Search Engines
  • Integrating components inherited from acquisitions

The driving factor behind interoperability in all these cases is gained productivity. Instead of re-implementing an existing system in order to bring it onto the platform of choice, it is more productive to integrate with the other platform.

Each platform also has its own strengths in different areas. Microsoft technologies, for instance, provide great flexibility and good tools for implementing end-user applications, whereas the Java platform has proven itself very strong in backend enterprise systems. Leveraging the best of both sides requires integrating these two worlds.
Microsoft and Sun took the first step by providing a reference implementation that shows how to technically integrate .NET and Java using Web Services. This is an important first step – but making interoperability happen successfully requires more than the technical integration.

How does it affect team productivity?
How often have you seen a .NET developer who debugs Java code in Eclipse on Linux? Or how often have you seen a Java developer in front of Visual Studio browsing through C# or VB.NET code?
Cross platform developers are a rare “species”. A typical cross platform development team therefore has developers specialized in either Java or .NET. An individual developer most often sees the other platform as a black box and as something to be avoided if possible. Web Services allow calling from .NET to Java and the other way around. Debugging is easy on each side individually, but it becomes a big obstacle to debug transactions that cross platform boundaries. It requires the developer to be acquainted with both Visual Studio and Eclipse as well as familiar with both the Java and the .NET code – which is rarely the case.
In a typical heterogeneous team it therefore always takes developers from both sides to analyze transaction flows. This is a tedious manual task: setting the correct breakpoints on each side in each IDE and then stepping through the code. Debugging through code also only works in a single-user environment, as it is hardly possible to identify which thread in the server-side implementation of a Web Service belongs to the calling client.
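One pragmatic way to at least match a client-side call to the server-side request that handles it is to stamp every outgoing request with a unique transaction ID. Below is a minimal sketch using a JAX-WS handler on the Java side; the header name, namespace and the assumption that the .NET service reads and logs this header are illustrative and not part of Stonehenge.

```java
import java.util.Set;
import java.util.UUID;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

/**
 * Stamps every outgoing SOAP request with a unique transaction ID header.
 * The (hypothetical) .NET service would log the same header so that a
 * client call can be matched to the server-side request that handled it.
 */
public class TransactionIdHandler implements SOAPHandler<SOAPMessageContext> {

    // Illustrative namespace and header name - not defined by Stonehenge
    private static final QName TX_HEADER =
            new QName("http://example.com/tracing", "TransactionId");

    @Override
    public boolean handleMessage(SOAPMessageContext ctx) {
        boolean outbound =
                (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (outbound) {
            try {
                String txId = UUID.randomUUID().toString();
                SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                SOAPHeader header = env.getHeader();
                if (header == null) {
                    header = env.addHeader();
                }
                header.addHeaderElement(TX_HEADER).addTextNode(txId);
                // Log the ID on the client so it can be searched for later
                System.out.println("Calling service with TransactionId=" + txId);
            } catch (Exception e) {
                throw new RuntimeException("Could not add TransactionId header", e);
            }
        }
        return true; // continue normal processing
    }

    @Override
    public boolean handleFault(SOAPMessageContext ctx) { return true; }

    @Override
    public void close(MessageContext ctx) { }

    @Override
    public Set<QName> getHeaders() { return null; }
}
```

The handler would be added to the handler chain of the generated port via BindingProvider.getBinding().setHandlerChain(...); on the .NET side a hypothetical WCF message inspector would read and log the same header.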

If team collaboration works well, cross platform problem analysis is a doable task – but as outlined above it requires at least one resource from each side. Far too often these team members don’t communicate well and simply play the “Blame Game” when coming across an issue, blaming the problem on the implementation on the other platform. This approach negatively affects team productivity by introducing extra resolution cycles, and it also increases tension between teams and team members.
These cross-team issues are similar to those we have seen between development and testing teams – two teams that work in different domains without insight into each other’s problem domain. That problem has been solved by providing testers with diagnostics tools that collect more meaningful information during their tests, helping developers quickly identify the root cause of problems. Getting this type of information not only took out the tension but also sped up the overall development cycle.

The logical conclusion therefore is to equip all members of a heterogeneous team with tools that can collect and visualize the right set of data to speed up problem resolution, take out the tension and improve overall team productivity.

Is it all about application stacks?
Integrating the different platforms from an implementation perspective is obviously the mandatory step to allow cross platform communication. This goal has been achieved with Web Services and the correct implementation of Web Service Standards by the different application stack providers.
Development tools like Visual Studio and Eclipse make it easy to create the application code (proxy classes) necessary to call from Java to .NET and vice versa. As long as everything runs fine at runtime, developers on both sides can focus on their implementation without needing to worry about what is going on in the other cross platform teams.
When problems come up, e.g.: a .NET Web Service called from Java returns a strange error, it is not possible for the Java developer to go beyond the error message received in the Eclipse debugger. Tools on each side are very good at debugging, diagnosing and profiling problems on their respective platform. Cross platform support, however, is missing right now – preventing the Java developer from following the problem to the .NET side.
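To make the proxy-class step concrete, here is a minimal sketch of what the Java side of such a call could look like. The WSDL URL and the generated class and method names are hypothetical; the point is that when the .NET side fails, the exception below is all the information the Java developer gets.

```java
// Generate Java proxy classes from the (hypothetical) .NET service's WSDL:
//   wsimport -keep -p com.example.stock http://dotnet-host/StockQuoteService.svc?wsdl
//
// The generated proxy is then called like any local Java class. When the
// .NET side fails, the SOAPFaultException below is the end of the line -
// there is no way to step into the .NET implementation from here.

import javax.xml.ws.soap.SOAPFaultException;
import com.example.stock.StockQuoteService; // hypothetical generated service class
import com.example.stock.StockQuote;        // hypothetical generated port interface

public class StockQuoteClient {
    public static void main(String[] args) {
        StockQuote port = new StockQuoteService().getBasicHttpBindingStockQuote();
        try {
            double price = port.getQuote("MSFT");
            System.out.println("MSFT: " + price);
        } catch (SOAPFaultException e) {
            // All the Java developer sees - diagnosing the root cause
            // would require visibility into the .NET side.
            System.err.println("Service returned a fault: " + e.getMessage());
        }
    }
}
```

The same exercise works in the opposite direction, with svcutil generating a .NET proxy from a Java service’s WSDL.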

Why do we need cross platform tools?
Coming back to the example from above: a .NET Web Service that throws an error or that executes slowly can do so for multiple reasons. It could be a bug in the .NET Web Service implementation. It could also be a configuration issue in one of the SOAP application stacks used, causing interoperability issues, or it might be problematic input parameters from the Java side that cause unexpected or slow behaviour on the other side. One approach is to analyze log files from both sides. The problem here is that there is no common log format and no transactional context available that would allow transactional tracing and correlation of log entries.
In order to analyze cross platform problems we therefore need tools that support all involved platforms. Having this ability enables developers on both sides to better understand the dynamics of the whole system and speeds up problem resolution.
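As a rough sketch of what such a transactional context could look like: the server-side counterpart to the client handler shown earlier reads the illustrative TransactionId header and puts it into the logging context. This assumes SLF4J-style MDC on the Java side; the .NET side would need an equivalent mechanism (for example a WCF message inspector writing the ID into its own logging scope) for the correlation to span both platforms.

```java
import java.util.Iterator;
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;
import org.slf4j.MDC;

/**
 * Server-side counterpart to the client handler sketched earlier: reads the
 * illustrative TransactionId header and puts it into the logging context so
 * every log entry written while serving this request carries the same ID.
 */
public class TransactionIdServerHandler implements SOAPHandler<SOAPMessageContext> {

    private static final QName TX_HEADER =
            new QName("http://example.com/tracing", "TransactionId");

    @Override
    public boolean handleMessage(SOAPMessageContext ctx) {
        boolean outbound =
                (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!outbound) {
            try {
                SOAPHeader header =
                        ctx.getMessage().getSOAPPart().getEnvelope().getHeader();
                if (header != null) {
                    Iterator<?> it = header.getChildElements(TX_HEADER);
                    if (it.hasNext()) {
                        String txId = ((SOAPElement) it.next()).getValue();
                        MDC.put("txId", txId); // picked up by the log pattern
                    }
                }
            } catch (Exception e) {
                // Correlation is best effort - never fail the request for it
            }
        } else {
            MDC.remove("txId"); // clean up when the response leaves
        }
        return true;
    }

    @Override
    public boolean handleFault(SOAPMessageContext ctx) { return true; }

    @Override
    public void close(MessageContext ctx) { MDC.remove("txId"); }

    @Override
    public Set<QName> getHeaders() { return null; }
}
```

With a log pattern that includes %X{txId}, every Java log line written while serving the request carries the same ID the client logged, so entries from both sides can be matched without guessing.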

How effectively can we diagnose problems?
The first thing in problem diagnosis is to answer the question whether there is a problem at all. Problems can manifest themselves in different ways:

  • Bad application performance to the end user
  • Errors in log entries of individual components
  • Resource issues in infrastructure impacting system components

When we know that we have a problem we need to figure out where the problem is. Looking at log files that indicate a problem is almost a best case scenario, as it at least gives an indication of where the problem first surfaced. Problems perceived by end users, e.g.: bad application performance or error pages, are harder to track down. Where was the time spent? Which component threw the error that made it to the user interface?
Getting alerts from monitoring individual system components can tell us that we are running high on CPU on certain servers or consuming too much network bandwidth. But which component is responsible for the excessive use of resources? Is it a bug in a component that runs on these servers, or is it a calling service that makes too many calls to a certain component?

Existing monitoring and diagnostics solutions focus on a particular environment or on single server instances. Application servers usually come with their own diagnostics support and additionally export performance counters that can be picked up by enterprise monitoring solutions to monitor the complete infrastructure. These tools are great for analyzing general problems in the infrastructure or standalone problems within a server. All existing tools, however, fall short when it comes to analyzing cross platform issues. There are tools that analyze log files from different components and correlate events across log files to identify individual transactions. This is the right way to go, but it relies on having log information that can be correlated.

Too often, problem diagnosis in heterogeneous environments falls back to manual work: collecting all available log information and performance counters. Like any manual task, it is not very effective and does not always lead to a successful diagnosis. In order to diagnose problems we need a common way of capturing information from all platforms that participate in executing a transaction. When a transaction has a problem, all this information must be extractable and easily accessible to the developers analyzing the problem.
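As an illustration of what such a common way of capturing information could look like, here is a minimal sketch of a shared log-line convention. The pipe-separated field layout is made up for this example; the assumption is that the .NET side writes the same fields so that entries from both platforms can be merged and matched by transaction ID.

```java
import java.time.Instant;

/**
 * Minimal sketch of a shared log-line convention: if both the Java and the
 * .NET side write the same pipe-separated fields, entries can be merged and
 * correlated by transaction ID with nothing more than sorting and grepping.
 * The field layout is illustrative, not an existing standard.
 */
public final class CrossPlatformLog {

    private CrossPlatformLog() { }

    public static void log(String platform, String txId,
                           String component, long durationMillis, String message) {
        // timestamp | platform | txId | component | duration(ms) | message
        System.out.println(String.join("|",
                Instant.now().toString(), platform, txId,
                component, Long.toString(durationMillis), message));
    }
}

// Example of merged entries for one transaction:
//   2009-06-05T10:15:30Z|java|4711-abc|OrderService|120|calling .NET payment service
//   2009-06-05T10:15:30Z|.net|4711-abc|PaymentService|95|payment processed
```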

How do we calculate TCO: 1 + 1 = 2 or 3?
The tool landscape for Java and .NET is a huge one. There are many specialized tools that help improve productivity by supporting all stakeholders involved in running an application.
When working in cross platform environments it is necessary to ensure good tool support for each platform. Most tools on the market are highly specialized in a single platform, leading to the need for multiple tools in order to get the best support for each platform. More tools mean more costs – especially when we want to ensure productivity.
The Total Cost of Ownership for heterogeneous environments, however, is not defined just by the individual costs of the tools it requires. In addition to buying individual tools there is the extra cost of integrating them. Getting information from each of the tools is good – but that information is only really valuable when it can be integrated in a similar way to how our applications are integrated.
The lack of standards makes it very hard to actually integrate these tools and get the combined value that each of them provides individually on a single platform.
Tools that support all platforms and that can provide the data collected on each platform in an integrated way will save the costs that would otherwise be necessary to integrate individual island solutions.
