Mainframes Provide Fast Access to Cloud Computing

A growing alignment between mainframe computing and cloud computing

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA. Read a full transcript of the interview.

Enterprises are seeking the efficiency benefits of cloud computing, the lower total costs that follow, and the highly valued ability to deliver flexible services that support agile business processes.

Turns out so-called private clouds, or those cloud computing models that enterprises deploy and/or control on-premises, have a lot in common with longstanding mainframe computing models and techniques. Back to the future, you might say.

New developments in mainframe automation and other technologies increasingly support the use of mainframes for delivering cloud-computing advantages -- and help accelerate the ability to solve recession-era computing challenges around cost, energy use, and reliability.

More evidence of the alignment between mainframes, mainframe automation and management, and cloud computing comes with today's announcement that CA has purchased key assets of Cassatt Corp., maker of service level automation and service level agreement (SLA) management software.

I recently had the pleasure of learning more about how the mainframe is, in many respects, already the cloud, in a sponsored podcast interview with Chris O'Malley, executive vice president and general manager of CA's Mainframe Business Unit.

Here are some excerpts:

Gardner: What makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization. ... Physically there are many, many servers that support the ongoing operations of a business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree these servers are being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution: bringing the kind of virtualization that optimizes the overall data center, as has been done on the mainframe for years and years.
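
To make that consolidation arithmetic concrete, here is a minimal sketch in Java. The fleet size, utilization figures, and headroom target are assumptions chosen for illustration, not numbers from the interview; at 10 percent average utilization, 200 servers do the work of roughly 20 fully busy machines.

    // Illustrative only: estimates how many virtualized hosts could absorb
    // a fleet of underutilized servers. All numbers are hypothetical.
    public class ConsolidationEstimate {
        public static void main(String[] args) {
            int servers = 200;                // physical servers in the fleet (assumed)
            double avgUtilization = 0.10;     // 10 percent average utilization
            double targetUtilization = 0.60;  // run hosts at 60 percent, leaving peak headroom

            // Total useful work, expressed in "fully busy server" equivalents.
            double busyEquivalents = servers * avgUtilization;

            // Hosts needed if each virtualized host runs at the target utilization.
            int hostsNeeded = (int) Math.ceil(busyEquivalents / targetUtilization);

            System.out.printf("%d servers at %.0f%% utilization = %.1f busy-server equivalents%n",
                    servers, avgUtilization * 100, busyEquivalents);
            System.out.printf("Consolidated at %.0f%% target utilization: %d hosts (about %.0f:1)%n",
                    targetUtilization * 100, hostsNeeded, (double) servers / hostsNeeded);
        }
    }

Even after reserving 40 percent headroom for peaks, the sketch yields a consolidation ratio of roughly 6:1, which is the kind of math prompting those CFO and CEO questions.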

... It's about the business need to reduce the cost of computing and increase efficiency, at a time when technologies are becoming increasingly available that let customers manage distributed environments, or open systems, in a way similar to the mainframe.

Larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. ... They try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

The viability of things like salesforce.com and CRM, and the need to coordinate that data with the mainframe, where for most customers 80 percent of their mission-critical information resides, is making people figure out how to fix those problems. It's making the cloud slowly, but pragmatically, become a reality that better supports their businesses.

The distributed and open-systems environments, in terms of their genesis, were the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered too slow to evolve to meet the needs of business. You heard about mounting backlogs and complaints that innovation wasn't coming into play.

Out of that frustration, departments wanted their own servers, with their own applications, to serve their needs. It created a significant base of islands, if you will, within the enterprise, which led to these scenarios where people run servers at 15, 10, or 5 percent utilization. That genesis has become the basic fiber of the way people think in most of these organizations.

This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. ... You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands -- which sounds a lot like the objections people had about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: How does that relate to where the modern mainframe is?

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has now created an infrastructure that, as your needs grow, turns on additional engines that are already housed in the box. With the z10, IBM has a platform that is effectively an in-house utility ... With the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks without paying for that capacity all year long.
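
The capacity-on-demand idea can be pictured as a simple control loop. To be clear, this is not IBM's actual z10 interface; the engine counts, capacity figures, and method names below are invented for illustration.

    // Hypothetical sketch of a capacity-on-demand control loop: engines are
    // physically present but dormant, and are switched on only while demand
    // exceeds what the active engines can serve. Names and numbers are invented.
    public class CapacityOnDemand {
        static final int INSTALLED_ENGINES = 16;       // engines physically housed in the box
        static final double ENGINE_CAPACITY = 1000.0;  // work units one engine serves per interval
        static final int BASELINE_ENGINES = 4;         // capacity paid for year-round

        int activeEngines = BASELINE_ENGINES;

        // Turn engines on for a peak and release them afterward, so the
        // customer pays for peak capacity only while the peak lasts.
        void adjust(double demand) {
            int needed = (int) Math.ceil(demand / ENGINE_CAPACITY);
            needed = Math.max(BASELINE_ENGINES, Math.min(needed, INSTALLED_ENGINES));
            if (needed != activeEngines) {
                System.out.printf("demand %.0f: %s, %d -> %d engines%n", demand,
                        needed > activeEngines ? "activating" : "releasing",
                        activeEngines, needed);
                activeEngines = needed;
            }
        }

        public static void main(String[] args) {
            CapacityOnDemand box = new CapacityOnDemand();
            double[] hourlyDemand = {3500, 7200, 14800, 15200, 6100, 3200}; // a peak, then quiet
            for (double d : hourlyDemand) box.adjust(d);
        }
    }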

... The mainframe has always been very good at resilience and security. The attributes required of a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of workloads, and it will continue to be.

We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers, who are doing great work with them. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks, brought up with the Web, think of cloud applications as being Web applications.

O'Malley: Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who work on mainframes tend to be the same ones who worked on them 30 years ago, and the technology that wraps the platform hasn't been updated with the more intuitive interfaces you're talking about.

CA is taking a lead in re-engineering our toolset to look more like a Mac than a green screen. We have a brand-new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May.

... Our first technology within Mainframe 2.0 is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed it with 20-somethings: in our Prague data center, we recruited 120 students out of school, and they built it in Java on a mainframe. ... We have 25-year-olds in Prague who have written lines of code that, within the next 12 months, will be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity.
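
The interview doesn't go into how the Mainframe Software Manager works internally, so the following is a purely hypothetical sketch of the core job any "InstallShield-style" installer performs: resolving a product's dependencies into a safe install order. Every product and method name here is invented; this is not CA's API.

    import java.util.*;

    // Purely hypothetical sketch of dependency resolution in an installer;
    // the catalog and names are invented for illustration.
    public class InstallOrderSketch {
        // Hypothetical product catalog: product -> products it depends on.
        static final Map<String, List<String>> DEPENDS_ON = Map.of(
                "app-suite", List.of("db-runtime", "scheduler"),
                "scheduler", List.of("db-runtime"),
                "db-runtime", List.of());

        // Depth-first walk that emits every dependency before the product
        // that needs it (a topological order).
        static void visit(String product, Set<String> seen, List<String> order) {
            if (!seen.add(product)) return; // already planned
            for (String dep : DEPENDS_ON.getOrDefault(product, List.of())) {
                visit(dep, seen, order);
            }
            order.add(product);
        }

        public static void main(String[] args) {
            List<String> order = new ArrayList<>();
            visit("app-suite", new HashSet<>(), order);
            System.out.println("Install order: " + order); // [db-runtime, scheduler, app-suite]
        }
    }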

... Technologically, the mainframe can do a lot, if not everything, you can do on the distributed side, especially with what z/Linux offers. But we've got to take what is a trillion dollars of investment running in the legacy virtual operating-system environment and bring it up to 2009 and beyond.

... An open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of the work you're doing, without really even knowing whether it's a mainframe application -- in z/OS or z/Linux -- or Linux on the open-system side, or HP-UX. That's where things are going. At that point, the cloud makes good on the promise being touted at the moment.
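
One way to picture that abstraction is a placement layer that matches work to whichever platform satisfies its requirements, so the requester never names a platform. The platform names below come from the interview, but the selection rule, cost figures, and types are invented for illustration.

    import java.util.*;

    // Hypothetical sketch of a placement layer that picks a platform for a
    // workload by its requirements; not a real scheduler.
    public class PlacementSketch {
        record Platform(String name, boolean mainframe, double costPerUnit) {}
        record Workload(String name, boolean needsMainframeData) {}

        static final List<Platform> PLATFORMS = List.of(
                new Platform("z/OS", true, 1.4),
                new Platform("z/Linux", true, 1.1),
                new Platform("Linux on x86", false, 0.9),
                new Platform("HP-UX", false, 1.2));

        // Pick the cheapest platform that satisfies the workload's constraint;
        // the caller never names a platform.
        static Platform place(Workload w) {
            return PLATFORMS.stream()
                    .filter(p -> p.mainframe() || !w.needsMainframeData())
                    .min(Comparator.comparingDouble(Platform::costPerUnit))
                    .orElseThrow();
        }

        public static void main(String[] args) {
            Workload billing = new Workload("billing", true);   // tied to mainframe-resident data
            Workload webTier = new Workload("web-tier", false); // portable work
            System.out.println(billing.name() + " -> " + place(billing).name());  // z/Linux
            System.out.println(webTier.name() + " -> " + place(webTier).name());  // Linux on x86
        }
    }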

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it.

More Stories By Dana Gardner

At Interarbor Solutions, we create the analysis and in-depth podcasts on enterprise software and cloud trends that help fuel the social media revolution. As a veteran IT analyst, Dana Gardner moderates discussions and interviews that get to the meat of the hottest technology topics. We define and forecast the business productivity effects of enterprise infrastructure, SOA, and cloud advances. Our social media vehicles become conversational platforms, powerfully distributed via the BriefingsDirect Network of online media partners like ZDNet and IT-Director.com.

As founder and principal analyst at Interarbor Solutions, Dana Gardner created BriefingsDirect to give online readers and listeners in-depth, direct access to the brightest thought leaders on IT. Our twice-monthly BriefingsDirect Analyst Insights Edition podcasts examine the latest IT news with a panel of analysts and guests. Our sponsored discussions provide a unique, deep-dive focus on specific industry problems and the latest solutions. This podcast equivalent of an analyst briefing session -- made available as a podcast, transcript, and blog to any interested viewer and search-engine seeker -- breaks the mold on closed knowledge. These informational podcasts jump-start conversational evangelism, drive traffic to lead-generation campaigns, and produce strong SEO returns.

Interarbor Solutions provides fresh and creative thinking on IT, SOA, cloud, and social media strategies based on the power of thoughtful content, made freely and easily available to proactive seekers of insights and information. As a result, marketers and branding professionals can communicate inexpensively with self-qualifying readers and listeners in discrete market segments. BriefingsDirect podcasts hosted by Dana Gardner offer full turnkey planning, moderating, producing, hosting, and distribution via blogs and IT media partners of essential IT knowledge and understanding.
