


The Importance of Many Clouds

The cloud market is dominated by a lot of startups

I am looking forward to the Red Hat Summit this week in Boston, with its theme of “Platform, Middleware, Virtualization, Cloud”. The cloud market is dominated by a lot of startups, with some goliath-size companies still waiting in the wings. Depending on your point of view, they are either lumbering dinosaurs unaware of the next evolutionary shift or, if you are like me, you think they are poised to strike. If you believe enterprise adoption is the next wave of cloud adoption, then these organizations have huge sales forces with deep customer relationships, plus services organizations capable of assisting enterprises with the transition to cloud. They have knowledge of the workload profiles that are the essential first step in determining what in the enterprise is suited to cloud deployment.

Many of the workloads running on the infrastructure of these vendors would require significant re-architecting to take advantage of the cloud. These workloads will most likely be the trailing adopters, with the more urgent ones being redeveloped on a new infrastructure platform. If my previous employer (Sun Microsystems) grew up as the platform of choice for the Internet, then Red Hat grew up in the era of the commodity architectures made famous by Google and the social-networking phenoms. Red Hat technology (Linux, JBoss, etc.) supports a large number of the more modern application workloads that are suitable for cloud deployment. They are not to be underestimated.

What I am looking forward to is some good discussion of the Aeolus Project, Deltacloud and Cloud Engine. It's the cloud's best-kept secret, and you can find a good overview at the Red Hat Cloud Blog. It embodies the vision of “many clouds” and its importance to the future of cloud computing. Red Hat's approach is to solve the problem by gathering a community around an interoperability API, rather than building portability into a product or developing formal standards. I think this is a valuable approach.
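Deltacloud itself exposes a REST service, but the underlying idea, coding workloads against one interoperability layer rather than each vendor's native API, can be sketched in a few lines. This is a minimal illustration only; every class and method name below is hypothetical and not part of any Deltacloud release:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Hypothetical provider-neutral interface: workloads code against
    this abstraction rather than any single vendor's native API."""
    @abstractmethod
    def launch(self, image: str) -> str:
        ...

class EC2Driver(CloudDriver):
    def launch(self, image: str) -> str:
        # A real driver would call the EC2 API here.
        return f"ec2-instance:{image}"

class RHEVDriver(CloudDriver):
    def launch(self, image: str) -> str:
        # A real driver would talk to the RHEV manager here.
        return f"rhev-vm:{image}"

def deploy(driver: CloudDriver, image: str) -> str:
    # The deployment logic never names a vendor, so switching clouds
    # means swapping the driver, not rewriting the workload.
    return driver.launch(image)
```

Because the vendor-specific code is confined to the drivers, moving a workload between clouds is a one-line change at the call site, which is exactly the portability argument made above.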

Why is “many clouds” important?

The events of April 21st had me thinking about this more over the last few days. A multi-vendor solution as assurance against failures like the one at Amazon Web Services is one use case, but it's certainly not the most interesting. The seamless movement of workloads across many clouds will shape the future of cloud services.

  • Portability/Lock-in – reduce the barriers to adoption associated with fear of lock-in to a single vendor. Portability also reduces switching costs, which creates greater price/function competitiveness and reduces overall customer costs.
  • Unification – for an enterprise, it's quite possible you have already deployed multiple flavors of virtualization technology: some KVM or RHEV, some Hyper-V or Xen, and some VMware. You will want to consolidate these into a single automation framework to preserve your investment in OS images and software stacks.
  • Federation – if you look at global infrastructure providers or even service organizations, you will often see different providers with different strengths in different regions. Regulatory, cost, political, language and cultural differences alone ensure this is inevitable. Companies might adopt different providers in different regions to leverage those strengths, so a federated solution spanning providers and regions is essential for them.
  • Hybrids – private clouds often have limitations in workload elasticity and resource pooling. The ability to leverage hybrid cloud solutions will be a useful way to minimize these limitations.
  • Disaster Recovery – related to hybrids and multi-provider availability, this provides a practically implementable solution: a bridge between private and public clouds enables hot or, even better, rapidly scaling “warm” disaster recovery solutions.
  • Multi-Provider Availability – finally we reach the “Amazon Effect”: a push to eliminate single-vendor points of failure through the use of multiple clouds in your solution. For me, the complexity of this type of architecture far outweighs the benefits that can be obtained (sort of like having a two-node cluster with one Linux node and one Solaris node).
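Several of these use cases, federation by region and falling back to another provider when one fails, boil down to a single placement decision. A minimal sketch of that decision, using an entirely hypothetical provider registry and names:

```python
# Hypothetical registry: an ordered list of providers per region.
# The first entry is the regional favorite (federation); the rest
# are fallbacks (multi-provider availability).
REGIONAL_PROVIDERS = {
    "us-east": ["aws", "rackspace"],
    "eu-west": ["eu-hoster", "aws"],
}

def place_workload(region: str, is_healthy) -> str:
    """Return the first healthy provider for a region, falling back
    to the next one when a provider is down."""
    for provider in REGIONAL_PROVIDERS.get(region, []):
        if is_healthy(provider):
            return provider
    raise RuntimeError(f"no healthy provider available in {region}")
```

With a health check that reports aws as down, `place_workload("us-east", ...)` falls through to rackspace, which is the multi-provider availability case in miniature; whether that resilience is worth the operational complexity is exactly the trade-off questioned above.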

If you are @RedhatSummit and want to talk cloud, virtualization and smart infrastructure and the "many clouds" vision, come visit Armada at Booth #211 in the Exhibit hall.

More Stories By Brad Vaughan

Brad Vaughan is a twenty-year veteran consultant who works with companies around the globe to transform technology infrastructure to deliver enhanced business services.
