
Services in a Cloud Computing Environment

Expect to see some interesting things in the future around services for the virtualized data center

Omar Sultan's Blog


Quite a few years ago, when we were positioning the concept of the intelligent network, we had a slide that showed how features moved from servers or dedicated hardware to the network over time. The trigger was usually a service, say name resolution, becoming broadly used. At that point, it was seldom workable to have that service delivered from a single place in the network--it needed to be ubiquitous...and highly available...and scalable...and manageable...and it usually ended up as a network service.

Reading a recent post by the ever-fearless Christofer Hoff and the related Twitter exchange got me thinking about this again. Once we cut through the cloud hype and start looking at the practicalities of implementing things like workload portability, I think the lessons of the past will reassert themselves, this time with things like security and L4-7 services. There was a time when security = firewall; in essence, security was associated with a specific place in your network. Now, to be effective, security needs to be pervasively deployed and deliver services that are ubiquitous and consistent--no matter where a workload runs (my desktop, my data center, someone else's data center), the security policy must be consistently implemented.
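To make that idea concrete, here is a minimal sketch of the design principle--purely illustrative, in Python, with class and function names invented for the example rather than taken from any Cisco product: the security policy is modeled as an attribute of the workload itself, so enforcement is identical no matter which host the workload lands on.

```python
# Illustrative sketch (hypothetical names): a policy that travels with the
# workload, rather than living in a specific box in the network path.
from dataclasses import dataclass, field


@dataclass
class SecurityPolicy:
    # Ports this workload is allowed to serve; frozenset keeps it immutable.
    allowed_ports: frozenset = frozenset({80, 443})

    def permits(self, port: int) -> bool:
        return port in self.allowed_ports


@dataclass
class Workload:
    name: str
    policy: SecurityPolicy = field(default_factory=SecurityPolicy)


def migrate(workload: Workload, destination: str) -> Workload:
    # The policy is part of the workload object; nothing about the
    # destination host changes what traffic is permitted.
    print(f"{workload.name} now running on {destination}")
    return workload


web = Workload("web-vm")
web = migrate(web, "my-data-center")
web = migrate(web, "someone-elses-data-center")
assert web.policy.permits(443)      # permitted on every host
assert not web.policy.permits(23)   # blocked on every host
```

The point of the sketch is the inversion: enforcement is a property of the workload's identity, not of whichever firewall happens to sit in front of the server it is running on.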

In short, models that depend on services such as security or load balancing being associated with a specific place in the network or a specific piece of infrastructure will not survive the transition. We need to be able to implement services wherever they are needed--the ability to provide security services to a given workload cannot be constrained by whether that workload happens to be running on a server that happens to be plugged into a firewall. That would be like saying you can only call certain area codes from certain extensions in your house--"Oh, you want to call New York? You'll have to use the phone in the guest bedroom..."

For us, this is in our DNA--you plug into the network, you get access to all its goodness. As an example, our SAN solutions are built upon the concept of an intelligent fabric, where critical services are a function of the network, not a specific box. This means I don't have to worry about a server dying and taking my VSAN routing with it. It also means my capacity and performance automatically scale up and down with the number of switches in the network.

Unified fabric is an extension of this concept: plug into a unified fabric and you automatically have access to all your storage resources--no HBAs, no fiber runs, no fabric switches--access to storage is no longer a function of having specific infrastructure deployed. VN-Link and the Nexus 1000V are also a logical extension of this concept: no matter where a workload (VM) ends up running, its security policy will stay with it, so application of security policy is no longer a function of having a workload running in a specific location.
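As a hedged illustration of what "policy follows the VM" looks like in practice, the Nexus 1000V expresses this through port profiles: a named bundle of switching and security settings that attaches to a VM's virtual interface and stays with it as the VM moves between hosts. The sketch below is indicative only--the profile name, VLAN number, and ACL name are invented for the example, and exact syntax should be checked against Cisco's Nexus 1000V configuration guide.

```
! Hypothetical Nexus 1000V port profile; names and values are illustrative.
port-profile type vethernet web-servers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  ip port access-group WEB-ACL in
  no shutdown
  state enabled
```

Because the profile is bound to the VM's virtual NIC rather than to a physical switch port, the same VLAN membership and ACL apply wherever the VM ends up running.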

As you may guess, we are continuing to expand on this concept, so expect to see some interesting things in the future around services for the virtualized data center.


More Stories By Omar Sultan

Omar Sultan is a regular contributor to Cisco's Data Center Blog.

