CiRBA Saves Customers 55% on Data Center Software Licensing Costs

New Software License Control System delivers optimal VM placements for processor-based licensing models

CiRBA on Wednesday announced the general availability of a Software License Control System that enables organizations to reduce the costs associated with processor-based software licensing by an average of 55%.

CiRBA CTO and co-founder Andrew Hillier noted that "with the shift to data center class software licensing for virtual infrastructure, where licensing an entire physical host server allows an unlimited number of instances to be run, licensing optimization is now becoming a capacity management challenge."

The Software License Control module is an add-on to CiRBA's Capacity Control Console and optimizes VM placements in virtual and cloud infrastructure in order to:

1. Reduce the number of processors/hosts requiring licenses.
CiRBA determines optimized VM placements to both maximize the density of licensed components on physical hosts and isolate these licensed VMs from those not requiring the licenses.

2. Contain the licensed VMs on the licensed physical servers.
CiRBA's analytics restrict placements of licensed VMs to the designated physical servers over time in order to ensure licensing compliance and continued efficiency.
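The two placement goals above can be illustrated with a toy greedy placement. This is a hypothetical sketch, not CiRBA's actual analytics: it packs licensed VMs onto as few hosts as possible (first-fit, largest demand first), so only those hosts' processors need licenses, and it keeps unlicensed VMs off the licensed hosts. All names and the simple CPU-only capacity model are illustrative assumptions.

```python
def place(vms, hosts):
    """Toy licensed-VM consolidation (not CiRBA's algorithm).

    vms:   list of (name, cpu_demand, needs_license)
    hosts: list of (name, cpu_capacity)
    Returns (placement, licensed_hosts): licensed VMs are confined to a
    minimal set of hosts, and unlicensed VMs never land on those hosts.
    """
    free = dict(hosts)                     # remaining CPU per host
    placement = {h: [] for h, _ in hosts}
    licensed_hosts = set()

    # Licensed VMs first, biggest first: fill already-licensed hosts,
    # opening a new licensed host only when none has room.
    for name, cpu, lic in sorted(vms, key=lambda v: (not v[2], -v[1])):
        if lic:
            pool = ([h for h in free if h in licensed_hosts and free[h] >= cpu]
                    or [h for h in free if h not in licensed_hosts and free[h] >= cpu])
        else:
            # Isolation: unlicensed VMs may only use unlicensed hosts.
            pool = [h for h in free if h not in licensed_hosts and free[h] >= cpu]
        if not pool:
            raise RuntimeError(f"no capacity for {name}")
        h = pool[0]
        placement[h].append(name)
        free[h] -= cpu
        if lic:
            licensed_hosts.add(h)
    return placement, licensed_hosts
```

With two 8-CPU hosts, two licensed 4-CPU database VMs, and two unlicensed 2-CPU web VMs, the sketch licenses a single host and routes the web VMs elsewhere — the density-plus-isolation outcome described above.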

CiRBA also enables organizations to profile software licensing, configuration, policy and utilization requirements for new VMs that are coming into an environment, to route these VMs to appropriately licensed physical servers, and to reserve capacity for those VMs through its Bookings Management System. This is essential when managing dynamic virtual and cloud environments, and also provides visibility into requirements to grow or modify license pools based on upcoming demand. Through this booking and reservation process, CiRBA ensures that density remains optimized by considering both the bookings and organic growth in the environment, and using this to forecast the impact on capacity and licensing.
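The booking-and-forecast idea can be sketched as a simple headroom check. This is an illustrative assumption about the kind of calculation involved, not CiRBA's Bookings Management System: before admitting new reservations, project current usage plus confirmed bookings forward under an organic-growth rate and compare against the licensed pool's capacity, flagging when the license pool must grow.

```python
def license_pool_check(pool_capacity, used, bookings, monthly_growth, months):
    """Hypothetical capacity/licensing forecast (illustrative only).

    pool_capacity:  total CPU capacity of the licensed host pool
    used:           CPU currently consumed by placed licensed VMs
    bookings:       CPU reservations for incoming licensed VMs
    monthly_growth: organic growth rate per month (e.g. 0.05 = 5%)
    months:         forecast horizon
    Returns (fits, projected_demand).
    """
    projected = (used + sum(bookings)) * (1 + monthly_growth) ** months
    return projected <= pool_capacity, projected
```

For example, a 100-CPU licensed pool with 40 CPUs used, 20 CPUs booked, and 5% monthly growth over six months projects to roughly 80 CPUs of demand, so the pool still fits and no new licenses are needed yet.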

All of this translates directly into savings that can be realized through lower expenditures for renewals, deferral of new software license purchases, and reduced annual maintenance. These savings can easily reach millions of dollars, particularly on expensive operating system, database and middleware platforms.

By Liz McMillan

