Enterprise Cloud Design and Understanding Latency

Latency will affect the quality of your delivery in the cloud...

Tony Bishop Keynote at Cloud Expo

Enterprise cloud design incorporates multiple dimensions (security, data, service brokering, infrastructure management, etc.), and one of the most critical to understand is the impact of latency. With network vendors now providing 10GigE connections, switches, and fabric, and with demand for bandwidth increasing exponentially, enterprises will buy this equipment.

Design considerations for enterprise clouds must recognize and accommodate applications that can gobble up the ubiquitous, on-demand bandwidth of the cloud delivery model. It is becoming clear that those responsible for the applications in datacenters, AND those responsible for building cloud-like delivery models, should also be concerned about the proximity of collaborating applications and the number of hops critical transactions take.

The importance of this can be better visualized in FEET.

10GigE means that the communication medium will transmit 10 billion bits per second, but what does a billionth of a second mean to a message? In one nanosecond, a signal travels about 11 3/4 inches, almost a foot. A message response time of 40 milliseconds between two virtual machines, a target that would be considered important to achieve, translates to about 39 million feet, roughly 7,451 miles, or about one third of the way around the world. While this seems like plenty of speed to play with, it is important to remember how much wiring goes into a single computer (or network switch) and the time consumed by message translation and protocol switching.
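The feet-and-miles figures above fall out of the speed of light directly; a short sketch makes the arithmetic checkable:

```python
# Distance light travels in a vacuum per unit time -- the upper bound
# on how far any message can move, ignoring switches and protocol work.
C_M_PER_S = 299_792_458   # speed of light, meters per second
INCHES_PER_M = 39.3701
FEET_PER_M = 3.28084
FEET_PER_MILE = 5280

# One nanosecond of light travel: just under a foot.
ns_inches = C_M_PER_S * 1e-9 * INCHES_PER_M

# A 40 ms response-time budget, expressed as distance.
ms40_feet = C_M_PER_S * 0.040 * FEET_PER_M
ms40_miles = ms40_feet / FEET_PER_MILE

print(f"1 ns  ~ {ns_inches:.1f} inches")      # ~11.8 inches
print(f"40 ms ~ {ms40_feet/1e6:.1f}M feet, ~{ms40_miles:,.0f} miles")
```

This is the best case; signals in copper or fiber move at roughly two thirds of this speed, so the real budget is even tighter.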

If an enterprise has two datacenters in a city for high availability, a typical distance between them might be 5-10 miles. If, due to configuration oversights, highly collaborative systems have been communicating between sites to service critical transactions, it quickly becomes apparent that feet begin to matter. To add to the complexity, disaster recovery strategies often require a 100-250 mile distance between the two sites; if consolidation strategies force the DR site to become active to save costs, careful design consideration must be given to how systems will interact over such distances in a cloud delivery paradigm:

  • While some mandates are irrevocable givens, such as DR site distances, there are circumstances that a datacenter team (including the application staff) can control to reduce excess lag.
  • Assess how many hops a complete end-to-end transaction takes between machines; it may surprise many to learn that some inter-site hopping has crept in. There are now tools that can map an application's entire suite of connectivity. We have seen these used, and when the reports are delivered the application team is often the most surprised at the results.
  • Know which critical applications are especially latency sensitive and also have high throughput, because rethinking their layout in a datacenter could save significant performance re-engineering.
  • For critical applications, consider redeploying the servers on the floor so that all tiers of the application are in close proximity. While this runs counter to current layout strategies, it will become more prevalent as budgets tighten and consolidation ensues.
  • The increasing use of XML-related formats to transmit messages introduces message bloat. Even with these self-describing messages, translations between XML formats often need to occur between systems. An emerging set of network appliances can translate these messages at wire speed, saving server processing cycles and decreasing latency while enhancing the cloud-like delivery experience.
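The proximity and hop-counting advice above can be made concrete with a rough latency model. This is an illustrative sketch, not a measurement: the fiber speed, the per-hop device delay, and the three path distances below are all assumed figures chosen for the example.

```python
# Rough one-way latency model: propagation delay over fiber plus a fixed
# delay per switch/appliance hop. All constants are illustrative assumptions.
FIBER_SPEED_M_PER_S = 2.0e8   # ~2/3 the speed of light in optical fiber (assumed)
METERS_PER_MILE = 1609.34
PER_HOP_S = 10e-6             # assumed ~10 microseconds per switch/appliance hop

def one_way_latency_s(miles: float, hops: int) -> float:
    """Propagation delay over the given distance plus per-hop device delay."""
    return miles * METERS_PER_MILE / FIBER_SPEED_M_PER_S + hops * PER_HOP_S

# Three hypothetical paths for the same transaction:
same_floor  = one_way_latency_s(300 / 5280, hops=2)  # tiers ~300 ft apart
cross_metro = one_way_latency_s(10, hops=6)          # HA site 10 miles away
dr_distance = one_way_latency_s(150, hops=8)         # DR site 150 miles away

for name, t in [("same floor", same_floor),
                ("cross metro", cross_metro),
                ("DR distance", dr_distance)]:
    print(f"{name:12s}: {t * 1e6:8.1f} microseconds one way")
```

Under these assumptions the same-floor path costs tens of microseconds while the DR-distance path costs over a millisecond per one-way trip; a chatty transaction that makes dozens of round trips multiplies that gap accordingly, which is why stray inter-site hops matter.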

To save precious FEET, an incredible FEAT of rethinking the datacenter in support of enterprise cloud delivery models will be required.

Time and space are one fabric; Einstein showed the relationship in 1905, and 103 years later it is more relevant than ever.

More Stories By Tony Bishop

Blueprint4IT is authored by a longtime IT and Datacenter Technologist. Author of Next Generation Datacenters in Financial Services – Driving Extreme Efficiency and Effective Cost Savings. A former technology executive for both Morgan Stanley and Wachovia Securities.
