The IBM Workload Deployer

Introducing workload patterns

I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual IMPACT conferences means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of the IBM Workload Deployer v3.0.

Now, you may be wondering, 'I have never heard of this. Why is it version 3.0?' Well, IBM Workload Deployer is an evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users, because all of the functionality they have been using in WebSphere CloudBurst (plus new bits, of course) is present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.

Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with the WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns: when you use a workload pattern, the focus is on the application instead of the application infrastructure. Perhaps an example will help illustrate how a workload pattern works in IBM Workload Deployer.

Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple, really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP server in the deployment to configure application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application-centric approach to deploying environments to the cloud.
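To make that flow a bit more concrete, here is a minimal sketch of the "upload artifacts, attach policies, deploy" sequence written as a script. The host name, endpoint paths, field names, and credentials are hypothetical placeholders for illustration; they are not the documented IBM Workload Deployer API.

# Hypothetical sketch of the upload-and-deploy flow described above. The host,
# endpoint paths, field names, and credentials are illustrative assumptions.
import requests

DEPLOYER = "https://workload-deployer.example.com"  # placeholder appliance address
AUTH = ("admin", "password")                         # placeholder credentials

def upload(path):
    # Upload one artifact (application archive, schema file, or LDIF file)
    # and return the identifier the deployer assigns to it.
    with open(path, "rb") as artifact:
        response = requests.post(f"{DEPLOYER}/artifacts", auth=AUTH,
                                 files={"file": artifact}, verify=False)
    response.raise_for_status()
    return response.json()["id"]

# 1. Upload the application and its supporting artifacts.
app_id = upload("myapp.ear")            # the web application
schema_id = upload("myapp_schema.sql")  # optional database schema
ldif_id = upload("myapp_users.ldif")    # optional LDAP users and groups

# 2. Describe the workload: the application plus the policies that govern it.
workload = {
    "type": "webapp",
    "application": app_id,
    "database_schema": schema_id,
    "ldap_users": ldif_id,
    "policies": {
        "scaling": {"metric": "cpu", "min_instances": 1, "max_instances": 4},
        "availability": "high",
    },
}

# 3. "Hit the deploy button": the deployer provisions and then manages the
#    middleware that serves the application.
deployment = requests.post(f"{DEPLOYER}/deployments", auth=AUTH,
                           json=workload, verify=False)
deployment.raise_for_status()
print("Deployment started:", deployment.json().get("id"))

The point of the sketch is the shape of the interaction: a few artifact uploads, one policy-bearing description of the application, and a single deploy request. Everything below that line, from middleware installation to runtime management, is the deployer's job.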

Now, if you are a middleware administrator, an application developer, or just a keen observer, you have probably picked up on the fact that to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of the applications that run on that middleware. Most of this is completely hidden from you, the user. That means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom and agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.

Having shown the benefits of workload patterns, and lobbied for them a bit, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because, as I said before, you can still create custom patterns with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or a more highly customized environment, look into IBM Hypervisor Edition images and topology patterns. Both are present in IBM Workload Deployer, and the choice is yours.
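For readers weighing the two approaches, here is a rough sketch of what each pattern type asks you to describe. The dictionaries are illustrative only; the field names, part names, and script names are assumptions made for the example, not IBM Workload Deployer configuration syntax.

# A topology pattern is described in terms of infrastructure: which middleware
# parts exist, how many of each, and which scripts customize them. The part
# and script names below are hypothetical.
topology_pattern = {
    "parts": [
        {"role": "deployment_manager", "image": "WebSphere Application Server HV Edition"},
        {"role": "custom_node", "image": "WebSphere Application Server HV Edition", "count": 2},
        {"role": "web_server", "image": "WebSphere Application Server HV Edition"},
    ],
    "scripts": ["install_application.py", "tune_jvm.py"],  # you own the app and config steps
}

# A workload pattern is described in terms of the application itself; the
# middleware topology underneath is chosen, configured, and managed for you.
workload_pattern = {
    "application": "myapp.ear",
    "database_schema": "myapp_schema.sql",
    "ldap_users": "myapp_users.ldif",
    "policies": {"scaling": "cpu_based", "availability": "high"},
}

The difference in the inputs is the difference in the model: the topology pattern hands you the infrastructure knobs, while the workload pattern asks only for the application and the policies that should govern it.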

If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
