
With Clouds Everywhere, It Is Bound to Rain

If you are committed to a cloud architecture, then draw lines in the sand and determine what you need

I was pondering the weather in Northeast Wisconsin this morning; it's gloomy and oppressively hot. Between the heat and the humidity, I'd say it feels more like the US's Pacific Northwest than the Midwest. And it's been that way all summer. We've been plowed under with 80+ percent humidity for months, and every once in a while the temperature dips to remind us that we're in Wisconsin.

It is the last day of August; tomorrow is September, when cool and wet weather is supposed to start converging upon us. It will be a relief after months of hot and humid. But the one thing we've had plenty of this summer? Rain. Lots and lots of rain. Like I said, Pacific Northwest.

And that's one thing that is certain: where a lot of clouds converge, you get rain. If you're unlucky, you get thunderstorms and lightning too.

That's something else I've been considering of late. As you move to the cloud, let us assume that you have an internal cloud, two external providers (like with any other vendor, a second source keeps the primary honest), two external cloud storage providers, and possibly a few stray apps that are served in a manner similar to cloud but are really SaaS in a pretty dress.

Image Courtesy of FloridaLightning.com


That's a reasonable picture of five years from now. You will still have a network, still have desktops – even if they only host VDI instances, the hardware is there – and still have mission-critical applications. But your contract negotiations, uptime monitoring, and security will all have become more of a burden. Some will be more complex – like guaranteeing the security of data across several organizations that aren't yours; others will just have greater impact – like contract negotiation with the people hosting the bulk of your data. But there's going to be more.

So you should plan for it. If you are committed to a cloud architecture, then draw lines in the sand and determine what you need in terms of man-hours and tools to negotiate contracts, secure the data, and guarantee your WAN connection.


If you have remote offices, then you likely have a group that can help with the contract part, but professional acquisition staff are not going to have a handle on the subtleties of the guarantees. Bandwidth is relatively simple, but uptime combined with response time when a pool of servers is overloaded and a new one must be brought online? That's more complex than they're likely able to articulate or appreciate. Perhaps you're lucky and your purchasing department has an IT group; if so, ignore the above. But the rest of us are going to need those skills in IT. Once the bulk of your data is with a cloud storage provider, IT's standard negotiating tactic of threatening to move from Microsoft SQL Server to Oracle or some such doesn't hold nearly the same weight – because you would have to transfer the data off the cloud provider's premises, and they will know that's a lot of work. Start training staff or seeking out partners to help you with this now, rather than when it becomes a problem. Ideally, whoever is going to deal with the contracts long-term should be at the table for the initial negotiation, so they have the background necessary to deal with the vendor.
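One concrete thing IT can bring to those negotiations is translating an uptime percentage into an actual downtime budget, since "99.9%" sounds better than it is. A minimal sketch (the 30-day measurement window is an assumption; real contracts often define the window, and maintenance exclusions, differently):

```python
def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime a given uptime guarantee permits over the window.

    Assumes a simple calendar window with no maintenance exclusions,
    which real SLAs frequently carve out.
    """
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# A "three nines" monthly guarantee still allows roughly 43 minutes of outage:
print(allowed_downtime_minutes(99.9))   # ~43.2
print(allowed_downtime_minutes(99.99))  # ~4.3
```

Numbers like these give whoever sits at the negotiating table a way to compare competing guarantees in terms the business understands.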


Security is a whole different beast. Lots of people have written lots of words about cloud security, but it is up to your security staff to make certain that your vendors meet your organization's security requirements. And unfortunately this will often be tied into the negotiation process, so at least one security peep needs to be an advisor to whoever negotiates the contracts and oversees delivery.

The same is true for your network staff. The first time you discover you can't contact your cloud provider over IP is the wrong time to involve your networking team in the cloud. One of them should be there from day one, understand the uptime guarantees, and understand the process to follow when the provider fails. Because when there's a problem is too late to start training them.

Finally, determining where a system is broken will be harder. Lots has also been written about management tools for virtualization, but they're going to be even more complex in a multiple-cloud scenario. You need nearly instantaneous notification that a piece of this larger, internetworked system is down, and which piece – explicitly which piece – it is. If you lose access to a cloud application, automation should be able to classify the failure as an application, host, network, or other error, so that you know where to start. The applications to do this don't exist yet – at least not across multiple clouds. Until they do, you'll want an SOP to handle troubleshooting when things go awry.
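Until such tools mature, even a crude layered probe can tell you where to start looking. The sketch below is an assumption about how such an SOP might be automated, not any vendor's product: it checks name resolution, then TCP reachability, then the application itself, and reports the first layer that fails. A real runbook would add retries, logging, and a check of the provider's status page.

```python
import socket
import urllib.error
import urllib.request
from urllib.parse import urlparse


def classify_outage(url: str, timeout: float = 3.0) -> str:
    """Probe a cloud endpoint in layers; return the first layer that fails.

    Returns one of: "dns", "network", "application", "ok".
    Hypothetical triage helper -- ordering of checks is the point.
    """
    parsed = urlparse(url)
    host = parsed.hostname
    port = parsed.port or (443 if parsed.scheme == "https" else 80)

    # Layer 1: name resolution -- can we even find the provider?
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return "dns"

    # Layer 2: TCP reachability -- is the network path and host up?
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            pass
    except OSError:
        return "network"

    # Layer 3: HTTP response -- is the application itself healthy?
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "ok" if resp.status < 500 else "application"
    except urllib.error.HTTPError as err:
        return "application" if err.code >= 500 else "ok"
    except OSError:
        return "application"
```

Running this against each provider on an alert at least tells on-call staff whether to open a ticket with the storage vendor, the network carrier, or their own application team.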


These are normal growing pains, all these clouds are going to converge on you, and you’ll need an emergency preparedness plan to make sure you don’t get wet when the rain comes. Eventually these issues will resolve as the market matures, but until then, you’re going to be the point for your organization’s cloud – both getting it started and troubleshooting when things go wrong. So a little planning never hurt.










More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
