Distributed Computing and Virtualization

The story around the Cloud is about the efficiencies that can be gained using distributed computing and virtualization

Mike Workman's Blog

Most Cloud providers let you run apps of any kind on their compute, storage, and connectivity resources. Salesforce, which until now had limited itself to its own SFA and CRM apps, has declared itself a Cloud Computing company. In its case the label really fits, but I am going to try to outline this whole phenomenon and discussion in terms that I can relate to. Perhaps you can too.

My friend Tom Mornini of Engine Yard pointed out that the Type 2 analogy was a bit pejorative; he thought it put a negative slant on Cloud computing. So I added the Type 1 analogy: talk to anyone who owns a boat, and four out of five will tell you it is a lot less work and more bang for the buck to ride around in someone else's than to put up with the headaches of owning your own. By the way, Tom wrote a great article on Cloud computing.

Of course, to some, owning and staffing everything is an advantage, especially if it includes proprietary "secret sauce." So the beauty of the Cloud is in the eye of the beholder. My mother-in-law uses Gmail, and if she could get rid of her computer, she would. We've been through this before. Remember WebTV? Your computer was a set-top box. Or your set-top box was your computer.

For lots of IT infrastructure companies it doesn't really matter. Whether Pillar sells storage to end users or to people who resell that storage as a service, all is well. People still need to store stuff. We have many customers who do just that.

Pillar sells an Enterprise-class product, the Axiom. This matters because data centers that offer cloud computing must be highly reliable, fault tolerant, performance-resilient under faults, serviceable, and virtualized. Pillar's QoS offers Cloud providers far more than raw storage: it lets them wring efficiencies out of their capital assets that classical storage solutions simply don't allow.
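
To make that efficiency point concrete, here is a minimal sketch of the idea in Python. It is my illustration, not Pillar's actual Axiom management interface; the QoS band names and capacities are assumptions. It shows how a provider might pack tenants with different priorities onto one shared array and track how much of that capital asset is actually in use.

```python
# Hypothetical sketch (not Pillar's actual API): packing tenants with
# different QoS bands onto one shared array instead of giving each tenant
# its own dedicated, mostly idle box.

from dataclasses import dataclass, field


@dataclass
class Volume:
    tenant: str
    size_gb: int
    qos_band: str  # "premium", "standard", or "archive" (assumed band names)


@dataclass
class SharedArray:
    capacity_gb: int
    volumes: list = field(default_factory=list)

    def provision(self, volume: Volume) -> bool:
        """Provision a volume only if the array still has free capacity."""
        used = sum(v.size_gb for v in self.volumes)
        if used + volume.size_gb > self.capacity_gb:
            return False
        self.volumes.append(volume)
        return True

    def utilization(self) -> float:
        """Fraction of raw capacity actually allocated to tenants."""
        return sum(v.size_gb for v in self.volumes) / self.capacity_gb


if __name__ == "__main__":
    array = SharedArray(capacity_gb=10_000)
    array.provision(Volume("tenant-a", 3_000, "premium"))   # latency-sensitive OLTP
    array.provision(Volume("tenant-b", 4_000, "standard"))  # general file serving
    array.provision(Volume("tenant-c", 2_500, "archive"))   # backups / cold data
    print(f"Array utilization: {array.utilization():.0%}")  # ~95% of one asset
```

The point of the toy model is simply that mixed-priority tenants can share a single asset and keep it busy, rather than each demanding dedicated hardware that sits mostly idle.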

It seems to me that the story around the Cloud is about the efficiencies that can be gained using distributed computing and virtualization. If a customer is big enough to run an efficient IT infrastructure of its own, outsourcing adds little beyond the standard "this isn't a core competency" argument. For small organizations, though, the efficiency of sharing the Cloud with lots of other small customers can be significant.
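
As a rough illustration of that sharing effect, here is a quick simulation; the workload numbers are assumed purely for the sake of argument, not measurements from any real Cloud. It compares the capacity needed when every small customer provisions for its own peak against a pooled Cloud sized for the combined peak.

```python
# Illustrative arithmetic only (all numbers are assumed): dedicated
# peak-provisioning per customer versus one pool sized for the combined peak.

import random

random.seed(42)

N_CUSTOMERS = 20
HOURS = 24 * 30  # one month of hourly samples


def hourly_load() -> float:
    # Each customer averages ~2 "units" of load but occasionally spikes.
    return random.expovariate(1 / 2.0)


loads = [[min(hourly_load(), 10.0) for _ in range(HOURS)] for _ in range(N_CUSTOMERS)]

# Dedicated: every customer buys enough capacity for its own observed peak.
dedicated = sum(max(series) for series in loads)

# Shared Cloud: the provider sizes the pool for the peak of the combined load.
combined = [sum(series[h] for series in loads) for h in range(HOURS)]
shared = max(combined)

print(f"Dedicated capacity: {dedicated:.0f} units")
print(f"Shared capacity:    {shared:.0f} units")
print(f"Savings from pooling: {1 - shared / dedicated:.0%}")
```

Because the customers' peaks rarely coincide, the pooled capacity comes out well below the sum of the individual peaks, and that gap is exactly where the small-customer efficiency lives.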

So, bring on the Cloud! Of course, a slight interruption in Google's Cloud and Gmail service, say for preventive maintenance lasting a year or so, would also be appreciated; I would just have to do without hearing from my mother-in-law for a year. Gad!! (You see, one man's downtime is another's silver lining!!)

More Stories By Mike Workman

Mike Workman is Chairman & CEO of Pillar Data Systems. He has spent his career breaking new technical ground in the storage industry. In his 25+ years in the storage business, Mike's appointments have included vice president of worldwide development for IBM's storage technology division, senior vice president and CTO of Conner Peripherals, and vice president of OEM storage subsystems for IBM. He holds a PhD and a Master's degree from Stanford, a Bachelor's degree from Berkeley, and more than fifteen technology patents.
