
Manage budgets smarter without fear of scale out

Don't Double-Down on Infrastructure - Scale Out as Needed

There has long been a philosophy in IT infrastructure that whenever you add capacity, you should add plenty of room to grow into. This idea is rooted in traditional architecture: complex, made up of many disparate systems held together by the rigorous management of administrators. Scaling out capacity has been a treacherous process that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather overspend on capacity they don't yet need than face another scale-out any sooner than necessary.

There are a number of reasons why IT departments may need to scale out. Ideally it is growth of the business, which usually coincides with increased budgets. It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. Or the current infrastructure may have been under-provisioned in the first place, creating more problems than it solves. Whatever the case, sooner or later, everyone needs to scale out.

The traditional planning process for scaling out starts with identifying where capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any layer of caching or bussing in between. More than likely it is not just one of these but several, which leads many organizations to simply hit the reset button and replace everything (if they can afford it, that is). Then they implement the new infrastructure only to go through the same process a few years down the line. Very costly. Very inefficient.

Short of replacing the whole infrastructure, administrators must look at which pieces of it might need to be refreshed or upgraded. This process can feel like navigating a minefield of unforeseen consequences. Maybe you want to swap out the disks in the SAN for faster, larger ones. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster, deeper storage? Can the CPUs keep up? Good administrators can identify at least some of these dependencies during planning, but it often takes a team of experts to fully understand the complexities, and sometimes only through testing and a bit of trial and error.
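To make the idea of bottleneck analysis concrete, here is a toy sketch of the first step: comparing utilization across resources and flagging the most constrained ones. The metric names, numbers, and the 80% threshold are invented for illustration; this is not any vendor's monitoring tool.

```python
# Toy bottleneck check: given utilization percentages per resource,
# flag anything above a threshold as a scale-out candidate.
# Metric names and the 80% threshold are illustrative assumptions.

def find_bottlenecks(utilization, threshold=80.0):
    """Return resources whose utilization exceeds the threshold,
    most constrained first."""
    hot = {res: pct for res, pct in utilization.items() if pct > threshold}
    return sorted(hot, key=hot.get, reverse=True)

metrics = {
    "storage": 92.0,   # SAN nearly full
    "cpu": 85.0,       # sustained compute pressure
    "ram": 60.0,
    "network": 45.0,
}

print(find_bottlenecks(metrics))  # ['storage', 'cpu']
```

Of course, as the paragraph above notes, flagging the hot resources is the easy part; the hard part is tracing the dependencies between them.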

Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified by hyperconverged infrastructure. With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with its RAM, CPU, and storage capacity immediately available to the infrastructure.

HC3 even lets you mix and match different appliances in the cluster, so you can add just the capacity you need. Adding a new appliance to the cluster (where it is then called a "node," of course) is as simple as racking and cabling it, assigning its network settings, and pointing it at the cluster. Its capacity is automatically absorbed by the cluster, and its storage added seamlessly to the overall storage pool.
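Conceptually, the pooling works something like the minimal sketch below. This is a mental model only, not HC3's actual implementation or API; the class and field names are invented for the example.

```python
# Minimal model of a cluster absorbing a new node's resources into
# shared pools. Names are illustrative, not HC3's API.

class Cluster:
    def __init__(self):
        self.nodes = []
        self.pool = {"ram_gb": 0, "cpu_cores": 0, "storage_tb": 0.0}

    def add_node(self, name, ram_gb, cpu_cores, storage_tb):
        # Once the node is racked, cabled, and pointed at the cluster,
        # its capacity simply joins the shared pool.
        self.nodes.append(name)
        self.pool["ram_gb"] += ram_gb
        self.pool["cpu_cores"] += cpu_cores
        self.pool["storage_tb"] += storage_tb

cluster = Cluster()
cluster.add_node("node1", ram_gb=128, cpu_cores=16, storage_tb=8.0)
# Mixed appliance sizes pool just the same:
cluster.add_node("node2", ram_gb=256, cpu_cores=24, storage_tb=16.0)
print(cluster.pool)  # {'ram_gb': 384, 'cpu_cores': 40, 'storage_tb': 24.0}
```

The point of the model is that workloads draw from the pooled totals, not from any one box, which is why adding a differently sized node is painless.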

All of this means that with hyperconverged infrastructure, you do not need to buy tomorrow's capacity today. You can buy just what you need now (with a little cushion, of course) and scale out simply and quickly when the need arises. The real bottleneck to capacity scale-out was the complexity of traditional infrastructure architecture, and hyperconverged infrastructure is the solution.
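The budget argument can be put in back-of-the-envelope terms. All prices and node counts below are made up purely for illustration:

```python
# Compare buying several years of projected capacity up front versus
# buying only what is needed now and scaling out later.
# Unit price and node counts are hypothetical.

node_price = 20_000            # cost per appliance (invented figure)
nodes_needed_now = 3
nodes_projected_3_years = 6

double_down = nodes_projected_3_years * node_price   # buy it all today
scale_out_year1 = nodes_needed_now * node_price      # buy only what's needed

print(f"Up-front double-down: ${double_down:,}")
print(f"Year-one scale-out:   ${scale_out_year1:,}")
print(f"Capital deferred:     ${double_down - scale_out_year1:,}")
```

Deferred capital is not just cash flow: by the time the extra nodes are actually needed, hardware is typically cheaper and denser than it was at the original purchase date.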

More Stories By David Paquette

Starting with a degree in writing and a family history of software development, David entered the industry on the consumer end, providing tech support for dial-up internet users before moving into software development as a software tester in 1999. With 16 years of experience spanning testing, systems engineering, product marketing, and product management, David lived the startup and IPO experience, with expertise in disaster recovery, server migration, and datacenter infrastructure. Now Product Marketing Manager at Scale Computing, David leads the messaging efforts for hyperconverged infrastructure adoption.
