How the Amazon AWS Extinction Event will Re-Make the Tech Industry

A Turing test is a well-known test used to measure the success of early artificial intelligence systems

On June 7th @CloudExpo I attended a presentation on Eucalyptus by Rich Wolski, founder and CTO of Eucalyptus (and known to throw lightning bolts from Mount Olympus when he thinks nobody is looking; he certainly threw a few during his presentation that most people didn't notice), in which he described the genesis of Eucalyptus. Creating a cloud interface indistinguishable from the interface of Amazon AWS was the critical success factor and defined the project. The challenge, Rich explained, was like that of a "Turing Test."

A Turing test is a well-known test used to measure the success of early artificial intelligence systems. In a Turing test the user interacts with another 'user' via a keyboard interface not unlike a present-day chat system. If, after exchanging questions and responses with that other user, the human believes the other user is human, then the AI system has met a certain measure of success. IBM's Jeopardy / Watson demonstration provides an interesting example of the Turing test. While IBM focused on the speed of analysis and the depth and volume of data accessed by Watson, in my opinion this focus masks Watson's brilliant ability to simultaneously research and solve for meaning. It's evident but worth noting that for Watson, positing and refining the meaning of the question was significantly more difficult than retrieving the data. And to me it's this kind of "work" that the Turing Test really tests for.

Decorator Crab in action!

If the goal of Eucalyptus at inception was to provide an interface indistinguishable from Amazon AWS's interface, what does this mean in terms of the "stickiness" or competitive positioning of web services and their providers? In a marine ecosystem, if you are a decorator crab and your unique and carefully constructed shell, wrapper, or mediation suddenly ceases to offer you a stable degree of survivability, that trouble may be brief and final.
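
Rich's "Turing Test" framing has a concrete payoff for users: the same scripts and SDK calls should run unmodified against either cloud. Below is a minimal sketch in Python using the boto3 SDK (a later AWS SDK, used here purely for illustration); the Eucalyptus endpoint URL and the credentials are hypothetical placeholders, not real values.

    import boto3

    # The same EC2 API calls work against Amazon's public endpoint or against
    # an AWS-compatible private cloud such as Eucalyptus; only the endpoint
    # differs. Credentials and the endpoint URL below are placeholders.
    def list_instances(endpoint_url=None):
        ec2 = boto3.client(
            "ec2",
            region_name="us-east-1",
            aws_access_key_id="YOUR_ACCESS_KEY",
            aws_secret_access_key="YOUR_SECRET_KEY",
            endpoint_url=endpoint_url,  # None = the public Amazon EC2 endpoint
        )
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"], instance["State"]["Name"])

    # Public cloud
    list_instances()

    # Hypothetical private, AWS-compatible endpoint
    list_instances("https://eucalyptus.example.internal:8773/services/compute")

If the private cloud's interface really is indistinguishable, nothing above changes except the endpoint, which is exactly the "stickiness" question raised above.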

In Amazon's case I wonder if the reason there's been no public conflict between Amazon and Eucalyptus is because, essentially, Amazon and Eucalyptus form a symbiotic organism: Amazon AWS provides the Public Cloud and Eucalyptus transmits the Amazon AWS DNA / interface to the Private Cloud. In other words, while Amazon may make money via the Cloud, Amazon may perceive the competitive landscape (seascape) very differently from other participants. What if Amazon.com does not directly compete through AWS technology? Instead, Amazon.com operates several hierarchies above, from the Masters of the Universe game room. What if, for Amazon, the cloud play is to completely disrupt and reorganize the datacenter industry ecosystem by unleashing an "event" that upends large and small players alike and results in a redefinition of service delivery, choice of customer, and pricing? And just for good measure, Amazon.com may in fact remain the leader and lawgiver of the business internet.

Once Amazon put the "Cloud" horse on this multidimensional chessboard, the marketplace forever changed. While one might argue that teaching consumers to purchase things on the internet took ten years, clearly in the cloud and datacenter space change ripples or waves through the system an order of magnitude more quickly, and appears to be accelerating. At the @CloudExpo CEO panel on June 8th, 2011, Treb Ryan of OpSource stated that in just six months from now the industry will have moved forward in a meaningful way and that issues in the foreground now will not be in the foreground then. Unlike other participants in the Cloud Computing space, Amazon has hundreds of millions of end users and millions of businesses using its ecommerce interface to purchase and sell goods. Is Amazon playing in a much larger game in which other participants see only the shadows of these moves, but not the body or the head of this organism?

Analyzing the marketplace from the point of view of Amazon.com as an ecommerce SaaS market maker may better reveal the nature of Amazon's behavior in the Cloud ecosystem. Amazon.com may be the world's largest SaaS provider by revenue. Think about the evolution of Amazon's business model and position in the marketplace. In every line of business from its genesis to today, Amazon.com has begun by using software to redefine a brick-and-mortar business, and over time it increasingly seeks to replace physical commerce activity with the activity of third-party participants who actually interact with physical goods and services. In other words, Amazon creates a marketplace, demonstrates success, then increasingly moves to profit by selling other market participants the Amazon.com software services that enable the business. Amazon moves to exit the business of moving physical objects as quickly as it possibly can. Amazon seeks to sell the ecommerce application to other businesses that use that system to serve Amazon's customer base.


What Business is Amazon.com Really In? How Does Cloud fit into this Puzzle?
Rich Wolski @CloudExpo pointed this out to me by simply stating that Amazon's real business is being a SaaS provider of an ecommerce application. If delivering ecommerce services is Amazon's core business, then what is Amazon's Cloud business? And how does what Amazon.com is doing in the Cloud really relate to Amazon's core ecommerce business? If we think about the much cited example of the invention of the telephone, the telephone itself wasn't worth anything until a network of communications could be built. In order to monetize the telephone, Bell had to build out the necessary infrastructure. In recent history, in order for Amazon to extinguish physical books and enable the ecosystem for ebooks, Amazon had to invent the Kindle. Think about the Kindle and how it has fostered mobile tablet form factor computing as you consider Amazon's emboldened move to remake the business internet.

Create the Climate and Ecosystem Necessary for Amazon to Achieve Hypergrowth and World Domination (Exaggerated, but maybe not so much)
Amazon's cloud strategy achieves several key objectives:

  1. Disrupt the datacenter market in terms of service delivery, quality, provisioning, size of a viable datacenter, and cost. In other words, Amazon invented the Cloud. Rich Wolski validated my opinion of this yesterday.
  2. Disintermediate the major vendors of datacenter compute, network, storage inputs so as to ignite innovation and destroy the viability of current production and pricing models.

Presently, IBM, Intel, HP, and Oracle sell to mid-market firms through an ecosystem of distributors and partners collectively called "the Channel." However, in the Cloud phase ignited by Amazon's disruptive move, the Channel will cease to participate in this ecosystem. Cloud datacenter requirements will determine order size, configuration, and delivery, and these types of orders can be fulfilled directly by the manufacturers; they do not require a Channel to mediate between the manufacturer and the consumer. Intel could, in theory, manufacture and deliver industrial-size datacenter compute / network / storage units. At the conferences I've attended, one area of consensus is that the compute architecture will be Intel. Maybe this is why Amazon AWS is nurturing Nvidia GPUs for floating-point-intensive workloads. In any case, I expect to see more compute vendors enter the datacenter space because the order size and volume might make it feasible for chip manufacturers smaller than Intel to deliver the compute component, though I don't really know how anyone can surpass Intel in terms of volume. Even so, with the scale of the Mobile build-out, Intel's position is no longer a certainty in this new ecosystem. Presently Cisco offers PODs, which combine compute / network / storage in a pluggable shipping container.

What and How Will the (Few) Cloud Datacenters Buy?
Rather than buying individual servers, network components, and cable, datacenters in the Cloud era will purchase large "pluggable" blocks of compute and storage directly from the manufacturers. And in this phase, mid-market firms no longer buy a significant volume of compute or storage, because the small and mid-market firms will move first and rapidly to the cloud. Once the mid-market fully commits to Cloud, there will be no turning back. The agility and competitive advantage these mid-market firms gain will result in serious erosion of the market power of the Global 2000. Once the Global 2000 feel the heat from the mid-market firms, departments in the Global 2000 will likely defect from the corporate datacenter / "private cloud" at an increasing rate. Unable to stem this tide, the Global 2000 will attempt to accelerate deployment of Private Cloud. However, in my opinion, the Global 2000 will largely fail in their private cloud initiatives, and the Diaspora / Exodus will continue until all but very "special" applications run in the cloud. I commonly see Fortune 500 companies outsource applications like SAP and other "mission-critical" applications to managed hosting companies.

As a result, the pricing power of the compute and storage vendors will diminish, because the consumers, the datacenters, will purchase in volume and will negotiate in a manner similar to the way airlines buy from Boeing and Airbus. Technology compute / storage channel sales will dwindle to the point that they become insignificant, and the Channel as we know it will disappear for these kinds of goods. Manufacturers such as IBM, HP, and Dell can then standardize and optimize compute and storage for the Cloud Era, but (with the exception of Dell) they won't be happy about it, because they will make much smaller margins and will sell much less compute and storage.

In other words, while most trendy analysts at the best firms speak passionately about the "consumerization of the internet and of business applications and web services," Amazon has been there, done that, and moved several steps ahead to the "industrialization of the internet."

Amazon survives and thrives through this extinction event because this new biome enables Amazon to reach and serve more customers at a much lower price point. During and after this event, Amazon sells excess datacenter capacity to the market until Amazon is ready to invest in the iteration of the datacenter Amazon needs to reach its hypergrowth stage. In other words, Amazon is a macro player that essentially tipped over the market, creating an "extinction event" if you will, which results in the regrowth of the entire technology ecosystem around the needs of massive scale, cooling, speed, and reliability. Amazon requires this kind of ecosystem to support Amazon's yet-to-be-grasped and unimaginably bold vision of the world marketplace all running on Amazon's ecommerce platform.

See video of Rich Wolski at Cloud Expo

More Stories By Brian McCallion

Brian McCallion, founder of the New York City-based consultancy Bronze Drum, focuses on the unique challenges of Public Cloud adoption in the Fortune 500. Forged along the fault line where Corporate IT and the line of business meet, Brian delivers enterprise public cloud solutions that matter to the business. In 2011, while the Cloud was just a gleam in the eye of most Fortune 500 firms, Brian designed and proved the often-referenced hybrid cloud architecture that enabled McGraw-Hill Education to scale the web and application layer of its $160M revenue, 2M user higher education platform in Amazon Web Services. Brian recently designed and delivered the JD Power and Associates strategic customer-facing Next Generation Content Platform, an Alfresco Content Management solution supported by a substantial data warehouse and data mart running in AWS and a batch job that processes over 500M records daily in RDS Oracle.
