Minnesota and Private Cloud

There goes today's blog

I had a blog partially written for today when @GeorgeVHulme tweeted this: "WAHOO! Minnesota goes Private Cloud!" And that changed my thoughts and direction completely. Here's the article George linked to: State of Minnesota Signs Historic Cloud Computing Agreement With Microsoft. The fact that it was private cloud, and with Microsoft, got me to read the article. And it's actually a pretty impressive story, both for the state and for Microsoft.

In essence, this takes "private cloud" to a different place than I would have envisioned: they're outsourcing. Yes, there's a line in the sand beyond which the state has complete control, but they have essentially handed Microsoft their infrastructure (the collaboration and email piece of it, anyway) and are holding Microsoft accountable for security and software maintenance. That's a pretty solid plan, provided the admins at the state can manage the applications as they need/desire. There are gray areas that would need to be covered, like which types of threats are user/application threats that Microsoft isn't responsible for, what the escalation path is, etc. But those are no doubt covered in the contract, which we don't have access to.

Microsoft is giving over dedicated space (notice that the article does not say dedicated hardware anywhere in it), and has even committed a datacenter that the cloud will run out of. The price tag must have been pretty high, but Microsoft Exchange administration, IM (à la Microsoft Communicator) administration, and Microsoft SharePoint administration (the hardware and software maintenance, routing, upgrades, and so on) are expensive too. The state knows what that portion of its budget will cost and can focus on running the apps that the state and its citizens require to get the job done.

I admit to being a bit intrigued, not just by the concept, but by the actual architectural implementation. Assuming that access is via some form of SSL VPN, is it to be expected that when another portion of the state's infrastructure is signed over to a cloud vendor, another VPN connection will be required? That would seem to be… awkward. They do reference a dedicated line, so it is possible that there is no "gateway" to the services at all, but that would seem irresponsible from a security perspective on both parts, so I doubt it. Lock-down by IP might be possible, but source addresses are spoofable, so again, I doubt it.
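To make the IP lock-down objection concrete, here is a minimal sketch of what filtering by source address amounts to. Everything in it is hypothetical (the article says nothing about the state's actual gateway or address ranges); the point is only that a check like this trusts a forgeable packet header, while an SSL VPN ties the session to a credential.

```python
import ipaddress

# Hypothetical allowlist: pretend this is the state's egress range.
# (198.51.100.0/24 is a documentation-only block, used as a placeholder.)
STATE_EGRESS = ipaddress.ip_network("198.51.100.0/24")

def ip_locked_down(source_ip: str) -> bool:
    """Naive lock-down: trusts whatever source address the packet claims.

    This filters out casual traffic, but a source IP can be spoofed,
    so it is access filtering, not authentication.
    """
    return ipaddress.ip_address(source_ip) in STATE_EGRESS

print(ip_locked_down("198.51.100.17"))  # True: inside the allowed range
print(ip_locked_down("203.0.113.9"))    # False: outside, but an attacker
                                        # can forge an "inside" address
```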

This arrangement has half of the issues that traditional outsourcing does, but I would argue the worst of them are taken care of. In a traditional outsourcing arrangement, the fact that your contract decreases in value as time goes on (assuming your vendor is successful, anyway) means that your staffing levels are slowly watered down by other duties, and by the end of the contract you are likely frustrated. This is compounded by the fact that your IT needs grow over the two-to-five-year period of an outsourcing contract.

But in this case, the labor-intensive part of the agreement resides with state employees. Upgrading hardware and software is labor-intensive but "bursty," to put it in IT terms: you do it, and then it's done until the next time you need it. On the other hand, maintaining users, modifying software configurations to meet your needs, and managing that software is a constant job that will likely grow over time.

This may be the answer outsourcing has been looking for. To me, having Microsoft employees apply their own security patches sounds like right-sizing. Of course there will be speed bumps, but even that has a pressure-release valve. If a server drops for no explicable reason, state IT staff will point to Microsoft, and be quietly glad that they have someone to point at.

Depending upon the agreed-upon price, states with much larger budget woes than Minnesota should probably be considering such an arrangement. Instead of a hazy partial budget that is padded in case Exchange use grows at a faster-than-expected pace, they can have a firm number that is required to keep the lights on for critical state systems. It cleans up budgeting and allows the state to make critical choices in hard times with less guesswork. Capital expenditures drop, and staffing needs will ostensibly go down, though that depends greatly upon the number of servers this system replaces and what their server-to-admin ratio is.
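As a back-of-the-envelope illustration of that budgeting difference (every number below is invented; the article discloses no pricing), a flat per-seat contract produces one knowable figure, while the in-house estimate has to carry padding against growth:

```python
# All figures are invented for illustration; the article gives no pricing.
seats = 33_000                # hypothetical number of state mailboxes
rate_per_seat_month = 8.00    # hypothetical flat contract rate, in USD

contract_annual = seats * rate_per_seat_month * 12
print(f"Contracted annual cost:   ${contract_annual:>12,.0f}")

# The in-house alternative: a base estimate plus padding in case Exchange
# use grows faster than expected (the "hazy partial budget" above).
in_house_base = 2_500_000     # hypothetical base estimate, in USD
padding = 0.25                # hypothetical growth contingency
print(f"Padded in-house estimate: ${in_house_base * (1 + padding):>12,.0f}")
```

Under those made-up numbers, the contract is one line item the legislature can plan around, while the in-house figure is a guess plus a hedge.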

And I'm really intrigued by the implication that Microsoft, just by virtue of taking over this function, increases the security of Minnesota's data. I know that Microsoft has gotten better at security over the last decade, but that is still an intriguing concept to me. Hope my boss (who lives in Minnesota) doesn't notice it…

By way of disclosure, we are a Microsoft Partner. Not that being one had anything to do with this blog; I'm just making sure you know.



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
