Four Ways Cloud Has Influenced Application Troubleshooting

The rise of cloud computing has ushered in an era of unprecedented productivity for developers over the past several years. For those who have embraced this new world order, gone are the days of long lead times for hardware procurement and installation, architecture defined by slow-moving hardware upgrades, hardware-constrained scalability and flexibility, and a world where only sys admins have access to the infrastructure. But, as the barriers between development and delivery disappear, new challenges have emerged that can disrupt the lives of developers and slow down delivery of new products and features, giving back some of the efficiency gains that the Software-Defined Data Center (SDDC) created.

Whether you're new to the cloud or you've been around since before cloud was cool, you are likely to see four common challenges emerge that can make troubleshooting your applications in the cloud more difficult. Let's take a closer look at these common pain points first to help build awareness around the challenges, and then I'll offer some suggestions for how to prevent these hurdles from tripping you and your team up when it comes time to unravel an application troubleshooting mystery.

Shifting Ownership
If you're adopting the cloud with limited support from an operations team, or if you're one of the growing number of DevOps or even no-ops teams bridging the operations and development worlds, you'll find that taking responsibility for operating and supporting both your app and your infrastructure, at some level at least, introduces a dynamic you may not have contemplated.

True, the ability to roll your own architecture without the burden of dealing with physical devices is liberating and far more efficient. But, as developer tools, deployment tools, and cloud operations tools become inextricably linked to one another, the old boundaries between who is dev and who is ops become blurred or even get removed altogether. This means the dev team is suddenly an integral part of operations, whether by design or by default, adding yet another responsibility for developers whose chief mandate is often to go faster. The more time you spend in the operations realm, especially in troubleshooting your app or the cloud resources it depends on, the less time you are able to devote to adding new value through code.

Lack of Transparency and Burden of Proof
While it's true that having the full benefits of the cloud available at the press of a button is awesome, you wouldn't be faulted for having a bit of nostalgia about the "good old days" of being able to have a conversation with a real live person down the hall about real physical hardware that's either healthy or isn't (along with the ability to actually lay hands on it). An old familiar refrain when something went wrong with an app in production was for the burden of proof to rest initially with the ops team - prove the hardware is working, the network is healthy, and the SAN hasn't lost disks before making the dev team dig in. Honestly, everyone was just hoping against hope that it was something "easy" in the infrastructure, because when it was the app, that's when things got hard. Well, that script has been reversed with the cloud: now the burden of proof is on the dev team, because what's really hard is finding a problem that originates with someone else's complex, abstracted, virtualized data center.

App returning a 500 error or performing poorly? If you're using something delivered as-a-Service, such as databases, queues, caches and the like, you won't really have any visibility into their health other than the cloud provider's status page and whatever you can directly observe. It's either working correctly and is speedy, or it isn't; if it isn't, life gets a lot murkier. Likewise, servers can be monitored, but you can't really tell why your virtual resource's performance has trailed off if you are the victim of something environmental that's out of your control.
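
One way to make the "is it us or is it them?" question answerable quickly is to probe each as-a-Service dependency from inside your own environment and record what you see. The sketch below is a minimal, hypothetical example in Python; the service names and the bodies of the check functions are placeholders for whatever database, cache, and queue services you actually use.

```python
# A minimal, hypothetical dependency probe: check each as-a-Service dependency
# from inside your own environment and record success and latency, so a 500 or
# a slowdown can be attributed to "our code" vs. "a service we depend on".
# The service names and check bodies below are placeholders.
import time

def check_database():
    # Placeholder: run a trivial query (e.g., "SELECT 1") against your
    # Database-as-a-Service instance and raise on failure.
    pass

def check_cache():
    # Placeholder: issue a ping, or read a known key, against your cache service.
    pass

def check_queue():
    # Placeholder: ask your queue service for its depth or a heartbeat.
    pass

CHECKS = {"database": check_database, "cache": check_cache, "queue": check_queue}

def dependency_health():
    """Return {name: {"ok": bool, "latency_ms": float, "error": str or None}}."""
    report = {}
    for name, check in CHECKS.items():
        started = time.monotonic()
        try:
            check()
            error = None
        except Exception as exc:  # a real probe would catch narrower errors
            error = str(exc)
        report[name] = {"ok": error is None,
                        "latency_ms": (time.monotonic() - started) * 1000,
                        "error": error}
    return report

if __name__ == "__main__":
    for name, status in dependency_health().items():
        print(name, status)
```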

No matter how good the support team is at your favorite cloud provider, it's rare that they will be as responsive to your requests for more information on an issue as your own in-house ops team could be, and they won't be as well versed on your architecture. To varying degrees, you're at the mercy of the cloud provider for consistent, reliable services, and it's also up to them to offer timely insight when issues arise with the services you depend on. Your mileage may vary, of course, as to whether your cloud provider offers this level of communication and transparency. But, if they don't, then the burden of proof rests squarely with you to show that the issue isn't in your app. Quite a reversal of fortunes, isn't it?

Easy Complexity
Compounding the challenge of sorting through infrastructure issues vs. code issues is the simple fact that applications are becoming far more complex and, in many cases, portions of the overall architecture may be transient in nature. Combine complexity with impermanence and you have a recipe for some real Sherlock Holmes-caliber mysteries at times.

The incredible thing about an SDDC is that you can create nearly any kind of architecture required to support your application stack's needs, all relatively easily - if you can dream it, you can build it. Want to cobble together .NET, Java, PHP, Node.js, Ruby, Database-as-a-Service for SQL and NoSQL, Message-Queues-as-a-Service, and Search-as-a-Service? From a cloud deployment perspective, it's been made devilishly easy to deploy and get started. But with that ultra-polyglot approach and a heavy reliance on software-defined services comes a new set of challenges:

  • First, you have a variety of services that are black boxes to you. Each of these services comes with its own set of tricks for gaining insight into performance and availability, but each one may be different in how you monitor and troubleshoot.
  • Learning how to support a variety of different technologies creates drag on your delivery velocity. It's hard enough learning the performance and reliability tricks for a few technologies; trying it for a wide variety can draw focus from the real goals of building new value through software and making the business more successful.
  • Not every monitoring tool can support every technology stack, and the wider you cast the technology net, the harder it can become to support your full stack from a single monitoring tool.
  • If you're using dynamic (transient) resources, such as scale-on-demand servers, you are quite likely to lose critical troubleshooting data unless you've thought about how to preserve the insights that disappear with a server when it's de-provisioned; a minimal log-shipping sketch follows this list.
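
As referenced in the last bullet, one way to keep those insights is to ship logs off the instance as they are written rather than only at teardown. The following is a minimal sketch using Python's standard logging module; the collector address is a stand-in (localhost here so the example runs harmlessly), and in practice it would point at whatever central log endpoint you operate.

```python
# A minimal sketch of shipping logs off the box as they are written, so the
# breadcrumb trail survives when an auto-scaled server is de-provisioned.
# "localhost" is a harmless stand-in; in practice you would point this at the
# central syslog/log collector you actually run.
import logging
import logging.handlers

LOG_COLLECTOR = ("localhost", 514)  # stand-in address for a central collector

def build_logger(name="app"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)

    # Local console output for anyone shelled into the instance.
    logger.addHandler(logging.StreamHandler())

    # Remote handler: every record also leaves the instance as it is written,
    # so nothing is lost when the server is de-provisioned.
    remote = logging.handlers.SysLogHandler(address=LOG_COLLECTOR)
    remote.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(remote)
    return logger

logger = build_logger()
logger.info("instance starting; logs are mirrored to the central collector")
```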

More Frequent Change
Finally, we come to the double-edged sword that brought this all about in the first place: going faster! The increased agility that the cloud brings, especially when coupled with dev tools that are integrated into the delivery cycle (think PaaS environments), has a way of shortening delivery cycle times and increasing the number of releases crammed into a given week, month, and year. This is especially true in organizations that have also adopted agile development practices. Code can flow to production smoothly with greater frequency, and architecture changes can be made far more swiftly and easily. Unfortunately, with more frequent code releases and architecture changes comes more frequent opportunities to break something.

A big part of the movement toward Agile and Lean is also the notion of always moving forward - rather than rolling back a release in the event of an issue, detect problems early and patch them quickly. Enabling this mandate, however, requires two things that are often missing if you are coming from a slower-moving environment or a more traditional hosting model:

  1. Developer visibility into a baseline of behavior telemetry to know what "good" looks like historically
  2. Instant feedback on the health of the application post-release relative to that healthy baseline

Without this, it's hard to know if you've made gains or losses with your release - your users are often your only real barometer.
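
As a concrete illustration of that comparison, the sketch below checks a handful of post-release measurements against a historical baseline and flags regressions. The sample numbers, metric choices, and tolerance values are illustrative only; in practice the baseline would come from your telemetry store.

```python
# A small sketch of "instant feedback vs. a healthy baseline": compare the
# error rate and median response time observed just after a release with the
# historical baseline, and flag the release if either regresses beyond a
# tolerance. All numbers and thresholds here are illustrative.
from statistics import median

def release_regressions(baseline_times_ms, post_release_times_ms,
                        baseline_error_rate, post_release_error_rate,
                        latency_tolerance=1.25, error_tolerance=1.5):
    """Return a list of human-readable regressions (empty list means healthy)."""
    problems = []
    if median(post_release_times_ms) > latency_tolerance * median(baseline_times_ms):
        problems.append("median response time regressed beyond tolerance")
    if post_release_error_rate > error_tolerance * baseline_error_rate:
        problems.append("error rate regressed beyond tolerance")
    return problems

# Illustrative samples; a real baseline would come from your telemetry store.
baseline_times = [120, 135, 128, 140, 122]   # ms, historical
current_times = [180, 210, 175, 190, 205]    # ms, minutes after the release
print(release_regressions(baseline_times, current_times,
                          baseline_error_rate=0.002,
                          post_release_error_rate=0.004))
```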

So... How Do I Code More and Support Less?
There's no denying that the cloud has impacted the lives of many developers, mostly in a very positive way. Of course, with new technologies and capabilities always comes a new set of challenges to overcome. In the case of cloud-hosted applications, this includes the challenge of supporting those applications effectively and efficiently in their new environments, so that the gains in productivity aren't given back in supporting the application.

What can development teams do to adapt to and overcome these challenges?

There are three basic steps that every development team should take to make supporting cloud-based applications easier.

1. Establish Access, Process, and Protocol: The first order of business for helping developers support their cloud-based apps more effectively is giving them safe access to the information and resources they need. Unfortunately, all too often in cloud environments this is an all-or-nothing proposition - full login rights to servers and even potentially full rights to the management portal, or no access at all. Make sure to establish the correct access methods for your developers so that they have the visibility and access they need, without handing over so much control that it increases the likelihood of accidents.

2. Design Supportability Into the Application: Once your application is in production, there are several common questions that you will need to be able to answer at a moment's notice about your application: Is it (and everything it depends on) running? Are users satisfied with the performance? Is anything silently failing and frustrating users without setting off alarms? If something failed, who was impacted, and what caused the issue?

There are also some things that simply cannot be measured and monitored from outside the application, but which speak directly to the health and well-being of your application. To enable you to quickly answer the inevitable questions, consider incorporating the following:

  • If it moves, measure it. Report application metrics and KPIs from within your code in order to see events and data that would otherwise be locked away from you. Some events and metrics only you, the developer, have the power to expose. Knowing how your app behaves at a core level can provide levels of insight that prove invaluable when searching for troubleshooting clues. If you can configure monitoring and alerts for those metrics, even better. We've elaborated on this subject in the article Errors & Logs: Putting the Data to Work.
  • Log often, and log meaningfully. If you only report errors, you will lack the critical insights necessary to help point to the root cause of the error. By logging at, say, info or debug instead of just warn or error, you will have the breadcrumb trail you need to find it. It's impossible to get the state of the system after the fact - you need to have logged it at the time of the event.
  • Centralize your insights. Remembering that life in the cloud can be both quite distributed and quite transient, it's always good to bring everything - logs, errors, custom metrics, and other telemetry - into a central location for normalization, correlation, and continuity. You may need the data, and what it tells you, well beyond the ephemeral life span of your cloud resource. A minimal sketch of these three practices follows this list.
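
Here is a small sketch tying those three ideas together: a KPI emitted from inside the code, breadcrumbs logged at debug/info rather than only at error, and a single logger that a central pipeline can ship off the host. The metric name and the order-processing function are hypothetical examples, not part of any particular product.

```python
# A minimal sketch of the three bullets above: an application KPI reported from
# inside the code, info/debug breadcrumbs (not just errors), and one logger
# that a central pipeline can ship off the host. Names here are hypothetical.
import logging
import time

logging.basicConfig(
    level=logging.DEBUG,  # keep info/debug breadcrumbs, not just warn/error
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("orders")

def record_metric(name, value, **tags):
    # Stand-in for whatever metrics client you use; logging it in a parseable
    # form lets a central pipeline turn it into a time series.
    log.info("metric name=%s value=%.2f tags=%s", name, value, tags)

def process_order(order_id, items):
    started = time.monotonic()
    log.debug("processing order %s with %d items", order_id, len(items))  # breadcrumb
    try:
        total = sum(items)  # placeholder for the real business logic
        log.info("order %s accepted, total=%.2f", order_id, total)
        return total
    finally:
        record_metric("orders.processing_ms",
                      (time.monotonic() - started) * 1000,
                      order_id=order_id)

process_order("A-1001", [19.99, 5.00])
```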

3. Identify Health Baselines Early: Key information like message queue length, average request time, app pool resource utilization, custom metric values, log and error rates, and more can all be charted for your application these days - monitoring and charting isn't just the domain of ops tools any longer. Understand what your app looks like both when healthy and unhealthy, preferably starting as early as pre-production environments, so that you can see how your application changes from release to release, under different loads, and as your architecture evolves. By baselining as far back as dev and QA, you can often catch problems well before they impact customers and send you and your team scrambling.
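
One simple way to start is to learn a baseline per signal and flag readings that fall far outside it. The sketch below does this for a single metric (queue length, purely as an example) using a rolling window and a standard-deviation threshold; both the window size and the threshold are arbitrary starting points you would tune per metric.

```python
# A sketch of establishing a health baseline for one signal (queue length here,
# purely as an example) and flagging readings that drift well outside it. The
# window size and threshold are arbitrary starting points to tune per metric.
from collections import deque
from statistics import mean, pstdev

class Baseline:
    def __init__(self, window=100, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Record a reading; return True if it looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal history first
            mu, sigma = mean(self.samples), pstdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

queue_baseline = Baseline()
for reading in [4, 5, 6, 5, 4, 6, 5, 4, 5, 6, 5, 48]:  # last value spikes
    if queue_baseline.observe(reading):
        print("queue length", reading, "is far outside the learned baseline")
```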

Conclusion
There's no denying the cloud brings incredible capabilities to the lives of developers: speed, agility, flexibility, scalability, and more. As with any new, disruptive technology, new challenges are also par for the course. By applying some basic strategies for application management, monitoring and troubleshooting, you can have all of the advantages of the cloud without giving back the gains during those critical support engagements, and have happier team members and end users as well.

At Stackify, we believe we offer a solution to the issues presented in this article. Learn more at www.stackify.com.

