
The Inter-Cloud: Will MAE Become a MAC?

According to the Wikipedia entry, it was first introduced in 2007 by Kevin Kelly

If public, private, hybrid, cumulus, and stratus weren't enough, the 'Inter-Cloud' concept came up again at the Cloud Connect gathering in San Jose last week.

According to the Wikipedia entry, the term was first introduced in 2007 by Kevin Kelly; both Lori MacVittie and Greg Ness wrote about the Intercloud last June, and many credit James Urquhart with bringing it to everyone's attention.

Since there is no real interoperability between clouds, what happens when one cloud instance wants to reference a service in another cloud? Enter the Inter-Cloud. As with most things related to cloud computing, there has been plenty of debate about exactly what it is, what it's supposed to do and when its time will come.

In the 'Infrastructure Interoperability in a Cloudy World' session at Cloud Connect, the Inter-Cloud was referenced as the 'transition point' when applications in a particular cloud need to move. Application mobility comes into play with Cloud Balancing, Cloud Bursting, disaster recovery, keeping sensitive data in a private cloud while the application runs in public, and any other scenario where application fluidity is desired and/or required. An Inter-Cloud is, in essence, a mesh of different cloud infrastructures governed by standards that allow them to interoperate.

As ISPs were building out their own private backbones in the 1990s, the Internet needed a way to connect all the autonomous systems to exchange traffic. The Network Access Points (NAPs) and Metropolitan Area Ethernets (now Exchanges – MAE East/MAE West/etc.) became today's Internet Exchange Points (IXPs). Granted, the agreed standard for interoperability, TCP/IP and specifically BGP, made that possible, and we're still waiting on something like that for the cloud; plus we're now dealing with huge chunks of data (images, systems, etc.) rather than simple email or light web browsing. I would imagine that the major cloud providers already have connections to the major peering points, and someday there just might be Metro Area Clouds (MAC West, MAC East, MAC Central) and other cloud peering locations for application mobility. Maybe cloud providers with similar infrastructures (running a particular hypervisor on certain hardware with specific services) will start with private peering, like the ISPs of yore.
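The cloud-bursting scenario mentioned above boils down to a simple routing decision: keep workloads on private infrastructure until it fills up, then spill over to a public provider. A minimal sketch of that logic is below; the function name, threshold, and cloud labels are hypothetical illustrations, not any real provider's API.

```python
# Hypothetical cloud-bursting decision. The 80% threshold and the
# "private"/"public" labels are illustrative assumptions, not a
# real orchestration API.

def choose_cloud(load_pct, private_capacity_pct=80):
    """Route new workloads to the public cloud once the private
    cloud passes a utilization threshold (cloud bursting)."""
    if load_pct <= private_capacity_pct:
        return "private"   # room left on private infrastructure
    return "public"        # burst: overflow to a public provider

# Example: normal load stays private; a spike bursts to public.
print(choose_cloud(50))   # private
print(choose_cloud(95))   # public
```

In a real Inter-Cloud, of course, the hard part isn't this decision but everything the analogy to BGP implies: a standard way for the two clouds to describe, move, and address the application once the decision is made.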

The reality is that it probably won't happen that way, since clouds are already part of the Internet, the needs of the cloud are different, and an agreed method is far from complete. It is still interesting to envision, though. I also must admit I had completely forgotten about the Inter-Cloud; you can hear me calling it the 'Intra-Cloud' in this interview with Lori at Cloud Connect. Incidentally, it's fun to read articles from 1999 talking about the Internet's 'early days' of ISP peering alongside those from today on how it has changed over the years.


More Stories By Peter Silva

Peter is an F5 evangelist for security, IoT, mobile and core. His background in theatre brings the slightly theatrical and fairly technical together to cover training, writing, speaking, along with overall product evangelism for F5. He's also produced over 350 videos and recorded over 50 audio whitepapers. After working in professional theatre for 10 years, Peter decided to change careers. Starting out with a small VAR selling Netopia routers and the Instant Internet box, he soon became one of the first six Internet Specialists for AT&T, managing customers on the original AT&T WorldNet network.

With his telco background, he then moved to Verio to focus on access and IP security along with web hosting. After losing a deal to Exodus Communications (now Savvis) for technical reasons, the customer still wanted Peter as their local SE contact, so Exodus made him an offer he couldn't refuse. As only the third person hired in the Midwest, he helped Exodus grow from an executive suite to two enormous datacenters in the Chicagoland area, working with such customers as Ticketmaster, Rolling Stone, uBid, Orbitz, Best Buy and others.

A writer, speaker and video host, he's also appeared in such plays as The Glass Menagerie, All's Well That Ends Well, Cinderella and others.
