
Cracking the Cloud Code

The cloud is one of those technology trends that seems to be perpetually on the cusp of becoming ubiquitous

The cloud is one of those technology trends that seems to be perpetually on the cusp of becoming ubiquitous. But if recent analyst reports are any indication, cloud’s breakthrough moment is imminent. Late last year, Gartner predicted that in 2016 the bulk of new IT spend would shift to the public cloud, and that by the end of 2017 nearly half of all enterprises would have hybrid cloud deployments.

But if the cloud has been around for so long, why has it taken this long to become the dominant destination for IT spend?

Psychology vs Technology

The determinant for most change is the underlying psychology that drives individuals and organizations. The IT industry as a whole has been underpinned by a deep-seated need for control. The reason most companies keep expertise in-house is that they want to maintain control: over their data, over the integration with their business workflows, over their schedules, and over their spend.

Of course, control is under constant attack from cost. While traffic is booming, IT spend in most organizations continues to trend flat to down. This means that organizations need to constantly provide more compute resources, more storage, and faster interconnect while working with an increasingly unfavorable ratio of admins to devices.

IT leaders looking to evolve their infrastructure for their business are left with a damning choice: do I give up control and move to the cloud, or do I operate under cash duress in trying to deliver an on-premises solution?

Hosting and Colocation as middle ground

The talk about hybrid clouds usually refers to hybrid environments where application workloads run both within a company’s owned datacenter and on public cloud infrastructure. But there are other alternatives between a private datacenter and the public cloud.

Hosting and colocation service providers allow companies to own their own equipment, place it at a hosting site, and use hosting infrastructure for connectivity between sites and to the Internet. This model grants control of data to the IT organization while leveraging infrastructure and high-speed connectivity that exists at the colocation facility.

As colocation providers build out their own compute infrastructure, companies can also burst workloads to datacenter capacity that is physically adjacent to their hosted servers. For applications that are particularly sensitive to performance, data locality matters, and this adjacency provides a more consistent means of elastically scaling resources during periods of high load.
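To make the idea concrete, here is a minimal sketch of what such a burst policy could look like, assuming a hypothetical provisioning handle (ColoPool) exposed by the colocation provider. The thresholds, class names, and method names are illustrative, not a real API.

```python
# Minimal sketch of a burst policy against a hypothetical provisioning API.
# ColoPool, provision(), and the thresholds are illustrative assumptions.

BURST_THRESHOLD = 0.80   # burst when local servers exceed 80% utilization
MAX_BURST_NODES = 4      # cap on adjacent-datacenter capacity we will rent

class ColoPool:
    """Hypothetical handle to capacity in the colocation provider's
    datacenter that sits physically adjacent to our hosted racks."""
    def provision(self, count):
        print(f"requesting {count} adjacent node(s) with local data access")

def maybe_burst(utilization, pool, active_burst_nodes):
    """Add adjacent capacity during load spikes, release it afterwards."""
    if utilization > BURST_THRESHOLD and active_burst_nodes < MAX_BURST_NODES:
        pool.provision(1)
        return active_burst_nodes + 1
    if utilization < 0.50 and active_burst_nodes > 0:
        print("releasing one burst node")
        return active_burst_nodes - 1
    return active_burst_nodes

# Example: a spike pushes utilization to 0.9, so one adjacent node is requested.
nodes = maybe_burst(0.9, ColoPool(), active_burst_nodes=0)
```

The point of the sketch is the locality: because the burst capacity sits next to the hosted racks, the scaled-out workload keeps the same data-access characteristics it had before the spike.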

Implications of cloud workloads on hosting providers

Hosting providers are already charged with providing high-capacity, low-latency connectivity. But as business continuity and data proximity become more important to users, hosting providers have to extend their presence to multiple physical locations. While the idea of geographical expansion might seem simple, it is actually non-trivial to offer connectivity between sites.

The physical fiber infrastructure can be expensive in and of itself. And providing multiple connections via different entry points to a facility is not always as easy as PowerPoint and simple network diagrams might suggest. Beyond that, extending a Layer-2 domain across physical distance requires rethinking network infrastructure. Where a WAN gateway might previously have sufficed, hosting providers also have to consider how best to support tenant applications across physical distances.
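As a rough illustration of what that rethinking involves, here is a minimal sketch of planning a VXLAN-style overlay so that each tenant’s Layer-2 segment can span two sites. The site names, tenants, and VNI range are made up for the example and do not reflect any particular vendor’s configuration model.

```python
# Minimal sketch of planning a Layer-2 extension between two sites with a
# VXLAN-style overlay. Tenants, sites, and the VNI base are hypothetical.

VNI_BASE = 10000  # illustrative starting VXLAN Network Identifier

def plan_overlay(tenants, sites):
    """Assign one VNI per tenant Layer-2 segment and record the sites that
    must terminate the overlay tunnel for that segment."""
    plan = {}
    for index, tenant in enumerate(sorted(tenants)):
        plan[tenant] = {
            "vni": VNI_BASE + index,
            "sites": sorted(sites),  # every site hosting this tenant's workloads
        }
    return plan

if __name__ == "__main__":
    overlay = plan_overlay(
        tenants=["tenant-a", "tenant-b"],
        sites=["metro-colo-1", "metro-colo-2"],
    )
    for tenant, entry in overlay.items():
        print(tenant, "-> VNI", entry["vni"], "spanning", ", ".join(entry["sites"]))
```

Even this toy version hints at the operational burden: per-tenant identifiers, tunnel endpoints at every site, and a consistent view of which segments exist where, all of which a single-site WAN gateway never had to track.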

What the MEF has to say about it all

The Metro Ethernet Forum explicitly calls out the shift from a WAN paradigm to a more cloud-centric model. As part of that shift, it identifies changes in the fundamental services that will make up next-generation cloud offerings.

[Figure: the MEF's view of the shift from static WAN services to dynamic, assured cloud-centric services]

Ultimately, the MEF sees a move to dynamic, assured services. Dynamic requires interfaces for customer input, and assured requires much tighter alignment around different treatment for different applications and tenants. The physical infrastructure that used to be platform- and tenant-agnostic will need to be more aware and more capable if the new generation of services is to be truly meaningful.
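To picture what a dynamic, assured interface might feel like from the customer side, here is a minimal sketch of a bandwidth-change request that carries an explicit per-application assurance class. The endpoint, fields, and service identifier are hypothetical; this is not a published MEF or provider API.

```python
# Minimal sketch of a customer-facing request for a dynamic, assured service.
# The URL, payload fields, and service ID are illustrative assumptions.

import json
from urllib import request

def request_bandwidth(service_id, mbps, assurance_class):
    payload = {
        "service_id": service_id,      # existing Ethernet service instance
        "bandwidth_mbps": mbps,        # requested committed rate
        "assurance": assurance_class,  # e.g. "real-time" vs "best-effort"
    }
    req = request.Request(
        "https://provider.example/api/v1/services/modify",  # hypothetical URL
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req  # in practice you would call request.urlopen(req) and check status

# Example: temporarily raise a replication link to 500 Mbps with tight SLAs.
print(request_bandwidth("evc-1234", 500, "real-time").full_url)
```

The interesting part is not the HTTP call itself but what it implies downstream: the provider’s infrastructure has to translate that per-tenant, per-application request into differentiated treatment on shared physical gear.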

Where do you start?

As a hosting provider, or even as a customer looking to take advantage of newer architectures available to you, where do you actually start? As with anything, it all starts with education. At a minimum, you need to begin instrumenting your current environment to get a feel for where your operating expenses currently lie. Additionally, you will want to assess your own application infrastructure to determine how conducive it is to cloud deployments. You also need to consider how your applications are evolving, where your user base is, what your requirements for business continuity are, and how you expect to drive cost (both capital and operational) over a 3-5 year time horizon.
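As a starting point for that instrumentation, here is a minimal sketch of rolling monthly spend up by category so that cloud-versus-colo comparisons are grounded in data. Every line item and dollar figure below is a placeholder to be replaced with measured values from your own environment.

```python
# Minimal sketch of first-pass cost instrumentation: summarize where operating
# spend sits today. All categories and dollar amounts are placeholder values.

monthly_costs = [
    {"item": "power-and-cooling", "category": "facilities", "usd": 18000},
    {"item": "transit-bandwidth", "category": "network",    "usd": 6500},
    {"item": "admin-headcount",   "category": "operations", "usd": 42000},
    {"item": "server-refresh",    "category": "capital",    "usd": 21000},
]

def summarize(costs):
    """Group spend by category and report each category's share of the total."""
    totals = {}
    for entry in costs:
        totals[entry["category"]] = totals.get(entry["category"], 0) + entry["usd"]
    grand_total = sum(totals.values())
    for category, usd in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{category:<12} ${usd:>8,}  ({usd / grand_total:.0%} of spend)")

summarize(monthly_costs)
```

Even a rough breakdown like this makes the later questions, about business continuity, user geography, and the 3-5 year cost trajectory, much easier to answer with something other than intuition.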

[Today’s fun fact: Wild camels once roamed Arizona’s deserts.]


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
