
@CloudExpo: Blog Post

Dealing with the Cloud's Latent Tendencies

Latency is a function of distance and the number of “hops” across routers

One of the most frequent questions we get when we engage with customers moving applications to the cloud is: what about latency? This question arises because most IT departments have struggled with application performance issues, and the idea of adding a big chunk of latency when integrating the cloud is troubling. Here is how we address this:

1.   Move the whole application to the cloud.  We have been working hard to allow you to move all of the components of your application to the cloud.  With this capability, you are not adding the latency between your data center and the cloud to the interactions between the servers in the cloud.  A simple example is moving both the presentation tier and the database tier to the cloud.  The front-end servers talk directly to the database in the cloud and experience “data center” level latencies (i.e., nearly the same as in your DC).  This often leads to a related question: if I move the whole application, where is the hybridization and integration?  Simple: there is a collection of other services and data that your applications depend on – things like name servers, identity servers, domain controllers, and ancillary databases.  Access to these services is often not latency-sensitive because it is not part of a high-transaction-rate process.

2.   Use a cloud that is “nearby.” Latency is a function of distance and the number of “hops” across routers.  The closer you are to a cloud, the lower the latency.  This is one of the reasons we have architected for multi-cloud support, and are so focused on zero modification of your servers and applications.  If you have the freedom to use a cloud that is “closer” without having to change your configurations, then you can take advantage of resources that make sense to you.  We’re excited to see more players creating and expanding cloud offerings; more clouds in more locations means that we can help customers integrate cloud with their data center infrastructures, taking advantage of lower latency, higher SLAs, and better pricing.  With Amazon opening more regions, Savvis and Terremark supporting more than a dozen data centers each, specialized players like BlueLock focusing on security and compliance, and folks like Microsoft and AT&T getting into the mix, we expect that there will be “nearby” resources available for most companies in the near future.

3.   Take advantage of WAN optimization. You cannot defy the laws of physics: the speed of light is a real limit, so the distance to the cloud you are using determines the minimum possible latency.  Given this, however, there are things that can be done to minimize the impact of latency and bandwidth restrictions.  There are a number of products that help with wide area network (WAN) optimization, and CloudSwitch can take advantage of these in two ways.  The first is that we work with your existing network infrastructure, so if you have optimized links available between your data center and a cloud provider, we can take advantage of them.  The second is something we are working on – integration with these products so they can be deployed alongside CloudSwitch to optimize cloud communications.
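That physical floor is easy to estimate. The sketch below (illustrative numbers only, not from the original post) computes the theoretical best-case round-trip time from distance alone, assuming signals in fiber travel at roughly two-thirds of the vacuum speed of light; real routes add router hops, queuing, and indirect paths on top of this minimum:

```python
# Back-of-the-envelope minimum round-trip time over fiber.
# Assumption: fiber propagation at ~2/3 the vacuum speed of light,
# i.e. about 200 km per millisecond. Real-world latency is always higher.

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case round-trip time in milliseconds."""
    km_per_ms = 300_000 / 1000 * (2 / 3)  # ~200 km/ms in fiber
    return 2 * distance_km / km_per_ms

for km in (100, 1_000, 5_000):
    print(f"{km:>5} km -> at least {min_rtt_ms(km):.1f} ms round trip")
```

A cloud region 1,000 km away can therefore never do better than about 10 ms round trip, which is why a “nearby” cloud matters for chatty, high-transaction-rate workloads.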

The bottom line is that latency is part of your decision process when you determine which applications (or parts of applications) will move to which cloud, and you should test the results early in your cloud evaluations. With CloudSwitch you have good options for dealing with the inherent latency of cloud deployments, so you can successfully integrate the cloud into your IT infrastructure.
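One simple way to test early is to time TCP connection setup to a candidate cloud endpoint, since connection setup takes roughly one network round trip. This is a minimal sketch, not a CloudSwitch or provider tool; the hostname you pass in is whatever endpoint you are evaluating:

```python
# Rough latency probe: median TCP connect time to a host, in milliseconds.
# Connection setup takes roughly one round trip, so this approximates RTT.
import socket
import time

def connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connection-setup time in milliseconds over several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; closed immediately on exit
        timings.append((time.perf_counter() - start) * 1000)
    return sorted(timings)[len(timings) // 2]  # median resists outliers
```

For example, `connect_latency_ms("db.my-cloud-region.example", 443)` (a placeholder hostname) gives a quick first read on whether a region is close enough for your application's tiers.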


More Stories By John Considine

John Considine is Co-Founder & CTO of Cloudswitch. He brings two decades of technology vision and proven experience in complex enterprise system development, integration and product delivery to CloudSwitch. Before founding CloudSwitch, he was Director of the Platform Products Group at Sun Microsystems, where he was responsible for the 69xx virtualized block storage system, 53xx NAS products, the 5800 Object Archive system, as well as the next generation NAS portfolio.

Considine came to Sun through the acquisition of Pirus Networks, where he was part of the early engineering team responsible for the development and release of the Pirus NAS product, including advanced development of parallel NAS functions and the Segmented File System. He has started and bootstrapped a number of start-ups with breakthrough technology in high-performance distributed systems and image processing. He has been granted patents for RAID and distributed file system technology. He began his career as an engineer at Raytheon Missile Systems, and holds a BS in Electrical Engineering from Rensselaer Polytechnic Institute.
