Legacy Apps Make the Case for the Cloud

We often talk about CloudSwitch moving legacy applications to the cloud in a simple and secure way

We often talk about CloudSwitch moving legacy applications to the cloud in a simple and secure way; this raises the question of what exactly we mean by “legacy.” To be more specific, we mean a broad range of apps—including third-party, custom and customized off-the-shelf applications—basically any application that has been developed in your current environment without being designed specifically for the cloud.

It turns out that these existing applications are very important in cloud computing. When we started building CloudSwitch, we were focused on the hybrid cloud computing model; that is, some components must stay in the data center while other applications and functions can move to the cloud. However, it became apparent that “stretching” applications between the data center and cloud only works for certain types of deployments due to the added latency between the two environments. For this reason, we recommend moving as much of a multi-tier application to the cloud as you can. This allows the application to continue to run with low latencies between the different components. Sounds obvious, but this is where a whole new set of problems arises, and it’s what causes people to start talking about the challenges of moving legacy applications to the cloud.
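To make the latency point concrete, here is a minimal back-of-the-envelope sketch. All the round-trip times and the query count are illustrative assumptions, not measurements: a chatty app tier that issues a few dozen sequential database queries per page is fine at data-center latencies, but the same call pattern “stretched” across a WAN link multiplies into user-visible delay.

```python
# Back-of-the-envelope effect of "stretching" a chatty app tier across a WAN.
# The RTT and query figures below are illustrative assumptions, not benchmarks.

QUERIES_PER_PAGE = 40    # sequential DB round trips to render one page
LAN_RTT_MS = 0.5         # app tier and database in the same data center
WAN_RTT_MS = 30.0        # app tier in the cloud, database left on-premises

lan_delay = QUERIES_PER_PAGE * LAN_RTT_MS
wan_delay = QUERIES_PER_PAGE * WAN_RTT_MS

print(f"same data center:  {lan_delay:.0f} ms of query latency per page")
print(f"stretched to cloud: {wan_delay:.0f} ms of query latency per page")
# same data center:  20 ms of query latency per page
# stretched to cloud: 1200 ms of query latency per page
```

The sequential call pattern is the killer: each round trip pays the full WAN penalty, which is why keeping the tiers co-located (all in the data center, or all in the cloud) matters more than where they run.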

To operate a multi-tier application in the cloud, you need to be able to control the application(s), infrastructure, and operating system, including things like a database tier, middleware, and custom applications. This also means that you have to “cloudify” each of these components. Suddenly you are looking at a lot of work, and potentially facing failure because some of those tiers can’t be modified to run in the cloud.
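As a rough illustration of what that assessment looks like, the sketch below walks a hypothetical three-tier application and flags the components that commonly block a wholesale move. The tier names and blocking reasons are invented for illustration, not drawn from any real assessment tool:

```python
# Hypothetical inventory of a multi-tier app, flagging components that
# commonly block a move to the cloud. Tier names and blocking reasons
# are illustrative assumptions only.

tiers = [
    {"name": "web frontend", "blocker": None},
    {"name": "middleware",   "blocker": "licensed per physical CPU"},
    {"name": "custom app",   "blocker": None},
    {"name": "database",     "blocker": "depends on shared SAN storage"},
]

movable = [t["name"] for t in tiers if t["blocker"] is None]
blocked = [(t["name"], t["blocker"]) for t in tiers if t["blocker"]]

print("can move as-is:", ", ".join(movable))
for name, reason in blocked:
    print(f"needs rework before it can move: {name} ({reason})")
```

The point of the exercise: a single immovable tier can strand the whole application, because splitting it back across the WAN reintroduces the latency problem described above.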

We saw a great example of this when Microsoft’s Azure service first launched. The initial release of Azure allowed application developers to build .NET applications and run them seamlessly on their local machines or in the Azure cloud. However, people trying to use this cloud usually had other applications, databases, and services as part of their solution, and there was no way to run these in Azure. As a result, many workloads could not be moved to Azure: “stretching” the application caused unacceptable latency, and there was no way to connect the Azure deployment to the data center-side applications. Microsoft has since expanded the capabilities of Azure, but there are still many types of applications and services that cannot run in their environment.

Given all the challenges, why is it worth bothering to move legacy applications to the cloud? For most enterprises (as opposed to new ventures and SMBs), legacy apps by definition occupy the majority of the existing IT footprint, far more than newer applications, let alone those designed specifically to run in a cloud. In many of the companies we’ve worked with, legacy apps are well over 75% of the data center footprint, and they’re constantly expanding and driving the need for more capacity. Legacy apps tie up internal processing and storage resources, sometimes continually, sometimes in a “spiky” way to meet occasional massive needs. Their demand for computing power is usually growing (or skyrocketing) and contending with other applications for resources. The enterprise then has to make tough choices about whether to buy more equipment or put up with degraded performance.

By providing access to virtually unlimited resources on demand, the cloud can bring a new level of elasticity and efficiency to a company’s IT environment. Legacy apps are often the best candidates for moving to the cloud, especially in cases where they’re infrequently used, or only need to scale for new releases or for seasonal/marketing-driven events. One of the best use cases for the cloud so far is the ability to offload these resource-consuming apps to a lower-cost cloud infrastructure, freeing IT to focus limited internal resources where they’re needed most.
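A simple worked comparison shows why spiky legacy workloads are attractive offload candidates. All of the figures below (server costs, instance pricing, peak hours) are assumptions chosen for illustration:

```python
# Illustrative cost comparison for a spiky workload: provisioning on-prem
# servers for peak demand vs. renting cloud capacity only when needed.
# All figures (server cost, instance price, peak hours) are assumptions.

PEAK_SERVERS = 20               # servers needed during seasonal peaks
BASELINE_SERVERS = 4            # servers busy the rest of the year
SERVER_COST_PER_YEAR = 6000.0   # amortized hardware + power + space, USD
INSTANCE_PRICE_PER_HOUR = 0.50  # on-demand cloud instance, USD
PEAK_HOURS_PER_YEAR = 500       # hours per year the peak capacity is needed

on_prem = PEAK_SERVERS * SERVER_COST_PER_YEAR
hybrid = (BASELINE_SERVERS * SERVER_COST_PER_YEAR
          + (PEAK_SERVERS - BASELINE_SERVERS)
          * INSTANCE_PRICE_PER_HOUR * PEAK_HOURS_PER_YEAR)

print(f"provision for peak on-prem:     ${on_prem:,.0f}/year")
print(f"baseline on-prem + cloud burst: ${hybrid:,.0f}/year")
# provision for peak on-prem:     $120,000/year
# baseline on-prem + cloud burst: $28,000/year
```

The gap comes entirely from utilization: hardware sized for a 500-hour-a-year peak sits idle the other 94% of the time, while cloud capacity is paid for only while it runs.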

More Stories By John Considine

John Considine is Co-Founder & CTO of CloudSwitch. He brings two decades of technology vision and proven experience in complex enterprise system development, integration and product delivery to CloudSwitch. Before founding CloudSwitch, he was Director of the Platform Products Group at Sun Microsystems, where he was responsible for the 69xx virtualized block storage system, 53xx NAS products, the 5800 Object Archive system, as well as the next generation NAS portfolio.

Considine came to Sun through the acquisition of Pirus Networks, where he was part of the early engineering team responsible for the development and release of the Pirus NAS product, including advanced development of parallel NAS functions and the Segmented File System. He has started and bootstrapped a number of start-ups with breakthrough technology in high-performance distributed systems and image processing. He has been granted patents for RAID and distributed file system technology. He began his career as an engineer at Raytheon Missile Systems, and holds a BS in Electrical Engineering from Rensselaer Polytechnic Institute.
