@CloudExpo: Article

Why Is OpenNebula the Solution for Private Cloud Computing?

Rich functionality, Integration Capabilities and Unique Features for Datacenter Virtualization

OpenNebula comes with several characteristics that make it unique software for the management of private clouds. This is actually no mere coincidence, but rather a consequence of the design principles the OpenNebula team followed when creating the architecture, and which they revisit whenever a new feature is planned. So, without a shadow of a doubt, we can claim that OpenNebula has the management of Private Clouds embedded in its DNA.

Let’s look at the three strongest points of OpenNebula when managing a local set of physical and virtual resources for in-house consumption:

  • Data models adjusted to the datacenter. The entities that OpenNebula works with, as abstractions of the physical and virtual resources, mirror the resources typically found in traditional datacenters. In this way, the command-line interface (CLI) and the APIs offered by OpenNebula are able to expose all the functionality of the Private Cloud in its full richness, unlike Public Cloud interfaces, which hide complexity and don’t allow basic Private Cloud operations. By APIs we are referring here to the low-level APIs, like the XML-RPC interface, as well as the slightly higher-level OpenNebula Cloud API (OCA) and its Java bindings.
  • Integration capabilities. If anything was clear to the architecture designers at the dawn of OpenNebula, it was this: “no two datacenters are the same”. What this sentence captures is that the datacenter ecosystem is so vast and varied that it is impossible to build a closed solution that fits every datacenter. Instead, the OpenNebula architecture has been heavily tailored to be plug-in oriented. In practice, this means that the OpenNebula core works with abstractions of the resources, while the operations are delegated to drivers: separate processes that know how to deal with a particular aspect and flavor of the technology (be it a given hypervisor, a certain virtual switch, a disk cabinet of a given brand, etc.) and get the job done. This considerably smooths the task of integrating OpenNebula with external components (like LDAP for authentication, Ganglia for monitoring, etc.).
  • Unique functionality. OpenNebula features several capabilities that are unique and come in very handy when managing datacenter resources. The ability to group the physical hosts in clusters, for instance, allows for the segmentation of servers according to their characteristics. These clusters can then register different datastores, in order to balance I/O operations, and different virtual networks, allowing for homogeneous network and storage configurations per cluster instead of per datacenter. It is also worth noting that the characteristics of OpenNebula regarding scheduling (especially its simple yet powerful match-making algorithm) can be quite useful to balance the allocation of virtual resources in a Private Cloud, especially in conjunction with clusters. And last, but not least, OpenNebula offers mechanisms to manage multiple instances (OpenNebula Zones) under a centralized management server (oZones), as well as means to compartmentalize the resources in your datacenter to build smaller Private Clouds that can be offered to and managed by external or internal teams, known as Virtual Data Centers (VDCs).
These three characteristics are the pillars that make OpenNebula a strong candidate for the best solution to turn your datacenter into a Private Cloud.
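The low-level XML-RPC interface mentioned above can be exercised from any language with an XML-RPC library; the sketch below uses Python's standard `xmlrpc.client`. It only marshals a request and sends nothing, so it runs without an OpenNebula frontend. `one.vmpool.info` is a real method of the interface, but the exact argument list shown here (session string plus filter/range flags) is illustrative rather than authoritative.

```python
import xmlrpc.client

# oned, the OpenNebula daemon, listens for XML-RPC calls; by default the
# endpoint is http://<frontend>:2633/RPC2. No request is sent in this sketch.
server = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")

# Marshal (but do not send) a call to one.vmpool.info, which lists the
# virtual machine pool. Placeholder credentials and illustrative flags.
session = "oneadmin:secret"
payload = xmlrpc.client.dumps(
    (session, -2, -1, -1),   # -2: all resources; -1, -1: full ID range
    methodname="one.vmpool.info",
)
print(payload)
```

With a live frontend, the equivalent call would go through the `ServerProxy` object (or, at a higher level, through OCA and its Java bindings), and the XML payload above is what travels on the wire.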
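The plug-in oriented architecture described above can be caricatured in a few lines: the core sees only abstract operations and delegates each one to whichever driver was registered for that technology. In OpenNebula the drivers are actual separate processes spoken to over a text protocol; here they are plain Python classes, and all class and method names are hypothetical illustrations, not OpenNebula internals.

```python
from abc import ABC, abstractmethod

class VirtualizationDriver(ABC):
    """Abstract operations the core relies on, technology-agnostic."""
    @abstractmethod
    def deploy(self, vm_id: int, host: str) -> str: ...

class KVMDriver(VirtualizationDriver):
    """One concrete flavor; a Xen or VMware driver would coexist alongside."""
    def deploy(self, vm_id: int, host: str) -> str:
        return f"deployed VM {vm_id} on {host} via libvirt/KVM"

class Core:
    """The core works with the abstraction; the driver gets the job done."""
    def __init__(self, drivers):
        self.drivers = drivers  # registered at start-up, per technology

    def deploy(self, hypervisor: str, vm_id: int, host: str) -> str:
        return self.drivers[hypervisor].deploy(vm_id, host)

core = Core({"kvm": KVMDriver()})
msg = core.deploy("kvm", 42, "node01")
print(msg)  # → deployed VM 42 on node01 via libvirt/KVM
```

Swapping hypervisors, virtual switches, or storage backends then amounts to registering a different driver, which is what makes integrating external components comparatively painless.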
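The match-making scheduling mentioned above boils down to two steps: filter out the hosts that do not satisfy the virtual machine's requirements, then rank the survivors and pick the best. The toy sketch below shows the idea; the field names and the rank-by-free-CPU policy are illustrative assumptions, not OpenNebula's actual scheduler configuration.

```python
hosts = [
    {"name": "node01", "cluster": "ssd", "free_cpu": 400, "free_mem": 8192},
    {"name": "node02", "cluster": "ssd", "free_cpu": 150, "free_mem": 2048},
    {"name": "node03", "cluster": "hdd", "free_cpu": 800, "free_mem": 16384},
]

def schedule(vm, hosts):
    # Filter step: keep only hosts meeting the VM's requirements,
    # including cluster membership (homogeneous storage/network per cluster).
    candidates = [
        h for h in hosts
        if h["cluster"] == vm["requirements"]["cluster"]
        and h["free_cpu"] >= vm["cpu"]
        and h["free_mem"] >= vm["mem"]
    ]
    if not candidates:
        return None
    # Rank step: highest free CPU wins (a "prefer least-loaded host" policy).
    return max(candidates, key=lambda h: h["free_cpu"])["name"]

vm = {"cpu": 100, "mem": 1024, "requirements": {"cluster": "ssd"}}
print(schedule(vm, hosts))  # → node01
```

Note how the cluster requirement confines placement to the "ssd" segment, so node03, despite having the most free CPU, is never considered; this is the interplay between clusters and scheduling described above.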

More Stories By Ignacio M. Llorente

Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder at C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, having managed several international projects and initiatives on Cloud Computing, and authored many articles in the leading journals and proceedings books. Dr. Llorente is one of the pioneers and world's leading authorities on Cloud Computing. He has held several appointments as independent expert and consultant for the European Commission, several companies, and national governments. He has given many keynotes and invited talks at the main international events in cloud computing, has served on several Groups of Experts on Cloud Computing convened by international organizations, such as the European Commission and the World Economic Forum, and has contributed to several Cloud Computing panels and roadmaps. He founded and co-chaired the Open Grid Forum Working Group on the Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D. in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedratico) and the Head of the Distributed Systems Architecture Group at UCM.
