Minnesota and Private Cloud

There goes today's blog

I had a blog partially written for today when @GeorgeVHulme tweeted this: "WAHOO! Minnesota goes Private Cloud!" That changed my thoughts and direction completely. Here's the article George linked to: "State of Minnesota Signs Historic Cloud Computing Agreement With Microsoft." The fact that it was a private cloud deal, and with Microsoft, got me to read the article. And it's actually a pretty impressive story for both the state and for Microsoft.

In essence, this takes "private cloud" to a different place than I would have envisioned: they're outsourcing. Yes, there's a line in the sand beyond which the state has complete control, but they have essentially handed Microsoft their infrastructure (the collaboration and email piece of it, anyway) and are holding Microsoft accountable for security and software maintenance. That's a pretty solid plan, provided the admins at the state can manage the applications as they need or desire. There are gray areas that would need to be covered, like which types of threats count as user/application threats that Microsoft isn't responsible for, and what the escalation path is. But those are no doubt covered in the contract, which we don't have access to.

Microsoft is giving over dedicated space (notice that the article nowhere says dedicated hardware) and has even committed a datacenter that the cloud will run out of. The price tag must have been pretty high, but Microsoft Exchange administration, IM (a la Microsoft Communicator) administration, and Microsoft SharePoint administration, meaning the hardware and software maintenance, routing, and upgrades part, are expensive too. The state knows what that portion of its budget will cost and can focus on running the apps that the state and its citizens require to get the job done.

I admit to being a bit intrigued, not just by the concept, but by the actual architectural implementation. Assuming that access is via some form of SSL VPN, should we then expect that when another portion of the state's infrastructure is signed over to a cloud vendor, another VPN connection will be required? That would seem to be... awkward. But the article does reference a dedicated line, so it is possible that there is no "gateway" to the services; that would seem irresponsible from a security perspective on both parts, though, so I doubt it. Lock-down by source IP might be possible, but IP addresses are spoofable, so again, I doubt it.

This arrangement has half of the issues that traditional outsourcing does, but I would argue the worst of them are taken care of. In a traditional outsourcing arrangement, your contract decreases in value as time goes on (assuming your vendor is successful, anyway), which means your staffing levels are slowly watered down by other duties, and by the end of the contract you are likely frustrated. This is compounded by the fact that your IT needs grow over the two-to-five-year period of an outsourcing contract.

But in this case, the labor-intensive part of the agreement resides with state employees. Upgrading hardware and software is labor-intensive but "bursty," to put it in IT terms: you do it, and then it's done until the next time you need it. On the other hand, maintaining users, modifying software configurations to meet your needs, and managing that software is a constant job that will likely grow over time.

This may be the answer outsourcing has been looking for. To me, having Microsoft employees apply their own security patches sounds like right-sizing. Of course there will be speed bumps, but even that has a pressure release valve: if a server drops for no explainable reason, state IT staff will point to Microsoft, but be quietly glad that they have someone to point at.

Depending upon the agreed-upon price, states with much larger budget woes than Minnesota should probably be considering such an arrangement. Instead of a hazy partial budget that is padded in case Exchange use grows at a faster-than-expected pace, they can have a single number that is required to keep the lights on for critical state systems. That cleans up budgeting and allows the state to make critical choices in hard times with less guesswork. Capital expenditures drop, and staffing needs will ostensibly go down too, though that depends greatly upon the number of servers this system replaces and what their server:admin ratio is.
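To make the budgeting point concrete, here is a minimal sketch of the comparison. All figures are hypothetical, invented for illustration; the actual contract terms are not public:

```python
# Hypothetical figures only -- the real Minnesota/Microsoft numbers are not public.

def padded_inhouse_budget(base_cost: float, expected_growth: float, pad: float) -> float:
    """In-house estimate: base cost plus expected growth, plus extra padding
    in case usage grows faster than expected. The pad is pure guesswork."""
    return base_cost * (1 + expected_growth + pad)

def fixed_contract_budget(contract_price: float) -> float:
    """Outsourced: one known number to keep the lights on."""
    return contract_price

inhouse = padded_inhouse_budget(base_cost=10_000_000, expected_growth=0.10, pad=0.15)
contract = fixed_contract_budget(11_500_000)

print(f"In-house estimate: ${inhouse:,.0f}")   # includes $1.5M of guesswork padding
print(f"Fixed contract:    ${contract:,.0f}")  # a single, known line item
```

The point isn't which number is lower; it's that the padded estimate carries uncertainty the legislature has to budget around, while the contract price is a fixed line item.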

And I'm really intrigued by the implication that Microsoft, just by virtue of taking over this function, increases the security of Minnesota's data. I know Microsoft has been getting better at security over the last decade, but that is still an intriguing concept to me. Hope my boss (who lives in Minnesota) doesn't notice it...

By way of disclosure, we are a Microsoft Partner; not that being one had anything to do with this blog, just making sure you know.



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
