
KWICies #001 – Life in the Fast Lane

The Evolution of Cloud Connectivity
By Frank Greco

“Intelligence is based on how efficient a species became at doing the things they need to survive.” ― Charles Darwin

“My theory of evolution is that Darwin was adopted.” ― Steven Wright

Yesterday
In case you missed it, the first phase of cloud computing has left the building. Thousands of companies are in the cloud. Practically all organizations, regardless of size, already have production applications in a public, off-premises cloud or in a private cloud. Yep. Been there, done that.

And the vast majority of these applications use the classic “SaaS-style” public cloud model. Someone develops a useful service and hosts it on Amazon Web Services (AWS), Microsoft Azure, IBM Cloud Marketplace, Google Cloud Platform (GCP) or one of several other cloud vendors. Accessing this external service is typically done through a well-defined API, usually a simple REST call (or a convenient library wrapper around one). The request originates from a web browser, a native app on a mobile device or some server-side application and traverses the web. Using only port 443 or 80, it connects through a series of firewalls to the actual service running in the external cloud environment. A process in the service provider’s computing environment handles the request and returns a result to the client application.

[Figure: Conventional SaaS-style Access]
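To make that conventional model concrete, here is a minimal sketch of SaaS-style access: a single REST call over HTTPS on port 443. The endpoint and token are hypothetical, not taken from any particular provider.

import requests

# Minimal sketch of conventional SaaS-style access: one REST call over HTTPS
# (port 443) through the firewall to a cloud-hosted service.
# The endpoint and token below are hypothetical.
response = requests.get(
    "https://api.example-cloud.com/v1/orders/1234",
    headers={"Authorization": "Bearer <access-token>"},
    timeout=5,
)
response.raise_for_status()
print(response.json())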

Only the Beginning
However, this scenario greatly oversimplifies how services are accessed in the real world. Quite honestly, this is a very basic, hello-world cloud connectivity model.

Today’s enterprise is a federation of companies with vast collections of dynamic services that are enabled and disabled frequently, under ever-changing sets of authentication and access control rules. To survive in this environment, a modern enterprise needs to develop an intimate yet secure ecosystem of partners, suppliers and customers. So unlike the rudimentary connectivity case, the typical production application is composed of many dozens, perhaps hundreds, of services, some internal to an enterprise and some residing in a collection of external cloud infrastructures or data centers. For example, the incredibly successful Amazon e-commerce website performs 100-150 internal service calls just to gather the data to build a personalized web experience.
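As a rough illustration of that kind of composition, the sketch below fans out several internal service calls concurrently to assemble a single response. The service names and URLs are hypothetical.

import requests
from concurrent.futures import ThreadPoolExecutor

# Hypothetical internal services a page-composition application might call
# to build one personalized response.
SERVICES = {
    "profile": "https://profile.internal.example.com/v1/users/42",
    "recommendations": "https://recs.internal.example.com/v1/users/42",
    "cart": "https://cart.internal.example.com/v1/users/42",
}

def fetch(item):
    name, url = item
    return name, requests.get(url, timeout=2).json()

# Fan the calls out concurrently so overall latency tracks the slowest
# dependency rather than the sum of all of them.
with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
    page_data = dict(pool.map(fetch, SERVICES.items()))

print(sorted(page_data.keys()))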

Many of these services, whether hosted by an external cloud vendor or in another company’s data center, often need to reach back into the originating infrastructure to access internal services and data to complete their tasks. Some services go further still and need access to information across cloud, network and company boundaries.

This ain’t your father’s cloud infrastructure.

Get off My Cloud
A common use case is a service running in a cloud environment, e.g., AWS, that needs to authenticate and authorize the users accessing it. One solution is to place a duplicate or subset of the internal authentication credentials (usually housed in an LDAP repository such as Active Directory) directly in the public cloud. However, this is redundant and brings potentially dangerous authentication-synchronization and general data-management issues. Unsurprisingly, this scenario of accessing authentication or entitlements information that resides in an internal directory turns out to be quite common for practically all service access.
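Assuming the cloud-hosted service can reach the internal directory through some on-demand, application-level connection (the connector hostname, port and account below are hypothetical), the authentication check itself stays simple. Here is a minimal sketch using the ldap3 library:

from ldap3 import Server, Connection, SIMPLE

# A cloud-hosted service checking credentials against the customer's internal
# directory. "ldap-connector.example.com:10389" is a hypothetical endpoint that
# an on-demand, application-level connection maps back to the on-premises LDAP
# server, so no credentials are replicated into the cloud.
server = Server("ldap://ldap-connector.example.com:10389")
conn = Connection(
    server,
    user="uid=jdoe,ou=people,dc=example,dc=com",  # hypothetical DN
    password="secret",
    authentication=SIMPLE,
)
if conn.bind():
    print("authenticated against the internal directory")
conn.unbind()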

Another example involves powerful cloud-based analytics or business intelligence services. In many cases such off-premises analytics-as-a-service providers need access to real-time data feeds that reside on the premises of a customer. That customer may not want to push that private real-time stream into the cloud environment for a variety of reasons: security, unnecessary data synchronization, additional management overhead, and so on.

The architectural solutions for both of these use cases involve either negotiating with the enterprise customer to create a REST API and deploy a family of application servers (extremely complex and highly improbable) or, more typically, setting up a virtual private network (VPN) to achieve a real-time, “fat-pipe” connection.

[Figure: Old-School Approach to Application Connectivity]

Nothing Else Matters
While the technical aspects of setting up a legacy-style VPN are relatively straightforward, there is often a lengthy period of corporate signoffs and inter-company negotiations that precedes the technical work. For some companies this period can run many weeks; for some large corporations, getting approval for yet another VPN can take several months. This painfully long lead time negatively impacts business agility and the all-important time-to-revenue.

In addition, VPN access operates at a low level of the network stack, well below the application layer. Despite various access control systems, the open nature of a VPN represents a security risk, potentially giving unauthorized (and authorized) users free rein over many internal enterprise services. VPN implementations also vary: some are proprietary and can cause interoperability issues between VPN vendors, especially for VPNs that extend access to mobile devices.

What a Wonderful World
Ideally you would want to completely eliminate any legacy VPN requirement and thereby remove unnecessary friction from the sales and deployment process. You would also want an agile, on-demand connection that links applications Application-to-Application (A2A) via a “white list” approach, as sketched below. And to help future-proof your infrastructure and accelerate operations, a container deployment approach based on the popular Docker would be both useful and attractive to your developers.
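To illustrate the white-list idea (this is a conceptual sketch, not how any particular product implements it), an application-level connector forwards connections only when the source application and destination service are explicitly allowed:

# Conceptual sketch of application-level "white list" connectivity.
# Unlike an open VPN, nothing is reachable unless the exact
# (application, destination) pair has been allowed. All names are hypothetical.
ALLOWED_ROUTES = {
    ("cloud-analytics", "feeds.internal.example.com:9000"),
    ("cloud-auth-service", "ldap.internal.example.com:389"),
}

def may_connect(app_id: str, destination: str) -> bool:
    """Return True only for explicitly whitelisted application-to-service routes."""
    return (app_id, destination) in ALLOWED_ROUTES

print(may_connect("cloud-analytics", "feeds.internal.example.com:9000"))  # True
print(may_connect("cloud-analytics", "hr.internal.example.com:8443"))     # False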

Do You Believe in Magic
In December 2011 the IETF formally standardized a mechanism for a persistent, full-duplex connection over the web (RFC 6455), with the W3C defining the corresponding browser API. It uses no additional ports, which means you keep your friendships in the InfoSec group. This standard is called “WebSocket” and is effectively a “TCP for the Web.”

Like most innovations, WebSocket was initially used as a mere replacement for inelegant browser push mechanisms (AJAX long-polling and Comet) to send data from a server to a user.

But by using the WebSocket protocol and its standardized API as a foundation for wide-area, TCP-style distributed computing, we get a phenomenally powerful innovation. By enhancing basic WebSocket functionality with the necessary enterprise-grade security and reliability envelope, applications can now easily and, most importantly, securely access services on demand through the firewall. This enhanced approach avoids the awkward conversion of an enterprise application protocol into coarse-grained HTTP semantics. Performance is rarely an issue with WebSocket.

[Figure: WebSocket for App-to-App (A2A) Communication]
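For a sense of what this looks like from the application side, here is a minimal sketch of an app consuming an internal feed over a secure WebSocket on port 443, using Python's websockets library. The gateway URL and the subscribe message are hypothetical.

import asyncio
import ssl
import websockets

# Minimal sketch of app-to-app access over a standard WebSocket (RFC 6455)
# connection on port 443, so no extra firewall ports need to be opened.
# The gateway URL and subscribe message below are hypothetical.
async def consume_feed():
    ssl_context = ssl.create_default_context()
    async with websockets.connect(
        "wss://gateway.example.com/feeds/orders", ssl=ssl_context
    ) as ws:
        await ws.send("subscribe:orders")   # application-level protocol, not HTTP
        async for message in ws:            # full-duplex: either side can send
            print("received:", message)

asyncio.run(consume_feed())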

This LAN is Your LAN
If you’re looking for a way for an external cloud application to access an internal, on-premises service in an on-demand, Application-to-Application manner, the Kaazing WebSocket Intercloud Connect (KWIC… yep, yet another caffeine-induced acronym) provides this functionality. It’s based on the open-source Kaazing Gateway and works with any TCP-based protocol. You can see an example of KWIC used for LDAP access in the AWS Marketplace (if you don’t need support, it’s totally free).
