Red Hat Enterprise Virtualization 3.1: A Brief Overview

The latest version is more scalable than previous versions

Red Hat has been on the leading edge when it comes to open source virtualization solutions. Red Hat Enterprise Virtualization provides robust virtualization from the network to storage and all the way to the desktop. The latest update, version 3.1, offers even more features to add to its list of capabilities.

Here are some of the things you need to know about Red Hat Enterprise Virtualization 3.1:

  • The latest version is more scalable than previous versions, narrowing the gap with big competitors such as VMware and Microsoft's Hyper-V.
  • The management interface for Red Hat Enterprise Virtualization 3.1 contains a vast number of improvements, giving admins greater functionality in a more intuitive environment than ever before.
  • While competitors like Fedora are just now starting to get into the cloud and virtualization game, Red Hat has forged ahead, building a wide lead. It is currently the only open source virtualization infrastructure that can support the enterprise end to end in mission-critical environments.
  • The low cost of entry for this latest version puts it in a very competitive position. Current pricing runs 50% to 70% less than established competitors, giving Red Hat Enterprise Virtualization a definite advantage over the competition.
  • Version 3.1 contains a number of improvements to the hypervisor. A single server can now host up to 160 guest virtual machines, compared to 64 in the previous release, and each virtual machine can be assigned as much as 2TB of memory. The hypervisor also supports the latest advancements in x86 processors from both Intel and AMD, including AMD's Opteron line.
  • Red Hat Enterprise Virtualization now lets administrators create snapshots, or even full clones, of running virtual machines. You don't need to stop the machine, so uptime is preserved, and the cloning feature is ideal for development and test environments. A short scripting sketch follows this list.
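
For administrators who script their infrastructure, live snapshots are exposed through the RHEV Manager's REST API and its Python SDK. The sketch below is illustrative only, assuming the RHEV 3.x Python SDK (ovirtsdk) is installed and reachable; the manager URL, credentials, and VM name are placeholders, not details taken from this article.

    # Rough sketch: take a live snapshot of a running VM via the RHEV Manager API,
    # using the Python SDK (ovirtsdk) that accompanies RHEV 3.x.
    # The URL, credentials, and VM name below are hypothetical placeholders.
    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    # Connect to the RHEV Manager's REST API endpoint.
    api = API(url="https://rhevm.example.com/api",   # hypothetical manager address
              username="admin@internal",
              password="changeme",
              insecure=True)                         # lab setting: skip certificate checks

    # Look up a running VM by name; it keeps running throughout.
    vm = api.vms.get(name="webserver01")

    # Request a live snapshot -- no guest shutdown is required in 3.1.
    vm.snapshots.add(params.Snapshot(description="pre-upgrade checkpoint"))

    api.disconnect()

A snapshot taken this way can later serve as the source for a clone in a test environment, which is exactly the development/test workflow the feature is aimed at.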

If you’re evaluating new virtualization solutions, make sure Red Hat Enterprise Virtualization 3.1 is on your list. Not only does this open source solution save you money, it offers the kind of functionality most enterprises need.
