By Dustin Amrhein
December 7, 2009 01:00 PM EST
A look at several different cloud computing solutions will reveal a technological enabler present in almost every one: the use of virtual images. I cannot think of many cloud computing solutions, if any, that provide software elements (i.e., more than just servers, storage, memory, etc.) without using virtual images in some form or fashion.
Of course, one of the reasons virtual images form the backbone of many cloud solutions is obvious: they provide the benefits of server virtualization. We can activate many virtual images on the same physical machine, allowing us to achieve multi-tenancy (multiple operating systems and software stacks installed on the same physical machine). Besides driving higher hardware utilization rates, this also gives us the ability to run heterogeneous software environments on the same piece of hardware. That both enables and encourages the creation of a shared pool of compute resources, which is a key characteristic of cloud computing environments.
Server virtualization may be the first thing that comes to mind when we think about virtual images, but at least in the context of cloud computing, I do not believe it is the most important benefit. If we look at cloud computing as a means to quickly and consistently provision software environments, then virtual images provide a capability more valuable than server virtualization: they give us a medium through which we can templatize the configuration of our software environments.
Consider the case of a fairly basic application serving environment. In this environment, you are likely to install an operating system, an application server, and probably some type of load balancing solution. Each of these typically requires a different piece of software, a different installation procedure, and finally integration with the other components. Installing these into a typical environment, without the use of virtual images, means that you either maintain scripts that account for each piece of software and the integration between the components, or a person manually installs and integrates the pieces each time you need an environment. Either process can be time-consuming and costly to maintain over time.
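To make that maintenance burden concrete, here is a deliberately simplified sketch of what such an install-and-integrate script might look like, written in Python. The package names, installer paths, and integration step are hypothetical stand-ins; a real script of this kind would need a variant (or a web of conditionals) for every distinct environment you support.

    #!/usr/bin/env python3
    # Simplified sketch of a scripted environment build. Every distinct
    # environment configuration needs its own variant of this script,
    # which is exactly the maintenance cost described above.
    import subprocess

    STEPS = [
        # 1. Operating system packages (assumes a Debian-style host).
        ["apt-get", "install", "-y", "openjdk-8-jdk"],
        # 2. Application server (hypothetical unattended installer).
        ["/opt/installers/install-appserver.sh", "-silent"],
        # 3. Load balancer, plus the step that wires it to the app server.
        ["apt-get", "install", "-y", "haproxy"],
        ["/opt/installers/wire-lb-to-appserver.sh"],
    ]

    for step in STEPS:
        subprocess.run(step, check=True)  # abort on the first failure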
Enter the use of virtual images. With a virtual image, you can install and integrate all three components ONE time, and then capture the resultant environment as a virtual image. From that point on, when an application environment is needed, the virtual image can simply be activated on top of a hypervisor platform. The application environment is typically available much more quickly than if it were manually installed and integrated, because the installation and configuration have already been captured in the virtual image.
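As a concrete illustration of activation, the sketch below boots a previously captured disk image through the libvirt Python bindings on KVM. The article does not name a particular hypervisor platform; libvirt/KVM, the domain definition, and the image path are assumptions chosen purely for illustration.

    # Minimal sketch: activate a captured image as a new virtual machine.
    # Assumes the libvirt Python bindings and a KVM host; the image path
    # and domain settings are illustrative.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>app-env-01</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/app-template.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(DOMAIN_XML)  # register the domain with the host
        dom.create()                      # boot the captured environment
    finally:
        conn.close()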
From what I described above, you may have caught on to what would seem like a drawback of using virtual images to templatize software environments. Specifically, it may seem that you need a distinct virtual image for every unique configuration of your application environment. If this were the case, management of your virtual image library would soon become a nightmare and the resulting cost (in both resource and time) would likely outweigh the original benefits. However, thanks to a relatively new standards-based approach to virtual images, this is not necessarily a problem.
The standard I'm talking about is the Open Virtualization Format (OVF), published by the Distributed Management Task Force (DMTF). In its own words, OVF "describes an open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines." Of particular relevance to our discussion here, part of the standard describes the use of an ovf-env.xml file within the virtual image.
This file is essentially a key-value style XML document that describes desired aspects of the environment. Keys and values can be supplied during image activation, and configuration scripts that run during activation can read information from the file and react appropriately. Thus, instead of supplying N different virtual images for N different software environments, you can supply one virtual image and use the ovf-env.xml file, in conjunction with configuration scripts within the image, to produce N different environments. This mechanism delivers the ability to templatize software environments without sacrificing flexibility or encouraging unsustainable virtual image proliferation.
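To make this concrete, consider a minimal sketch. The document layout follows the OVF environment format defined in the standard, but the property keys, the mount path, and the surrounding script are hypothetical illustrations, not the exact artifacts any particular product ships. An ovf-env.xml supplied at activation might look like this:

    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
                 xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
      <PropertySection>
        <Property oe:key="hostname" oe:value="app-node-01"/>
        <Property oe:key="lb.members" oe:value="app-node-01,app-node-02"/>
      </PropertySection>
    </Environment>

A first-boot configuration script inside the image can then read the supplied values and react, for example:

    # Sketch of an in-image activation script that reads ovf-env.xml.
    # The mount path and property keys are hypothetical; how the file
    # reaches the guest varies by virtualization platform.
    import xml.etree.ElementTree as ET

    OVF_ENV_NS = "http://schemas.dmtf.org/ovf/environment/1"

    def read_ovf_properties(path):
        """Return the key/value pairs from an OVF environment document."""
        root = ET.parse(path).getroot()
        props = {}
        for prop in root.iter("{%s}Property" % OVF_ENV_NS):
            key = prop.get("{%s}key" % OVF_ENV_NS)
            if key is not None:
                props[key] = prop.get("{%s}value" % OVF_ENV_NS)
        return props

    props = read_ovf_properties("/mnt/ovf/ovf-env.xml")
    # React to the supplied configuration, e.g. set the hostname and
    # point the load balancer at the listed member nodes.
    print(props.get("hostname"), props.get("lb.members"))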
In WebSphere, we utilize the model outlined in the OVF standard when packaging our WebSphere Application Server Hypervisor Edition virtual images. This allows our WebSphere CloudBurst product to provision these images and create many different types of WebSphere Application Server environments from a single virtual image (read this article for more information). I expect the use of this standard and the mechanisms it provides will become quite prevalent in the near future. Now if only we could get to the point where virtual disk formats were standardized as well, but that's an entirely different topic.