Cloud Expo: Article

Cloud Economics – Amazon, Microsoft, Google Compared

A Platform Comparison

Any new technology adoption happens for one of three reasons:
  1. Capability: it allows us to do something that was not feasible earlier
  2. Convenience: it simplifies something we already do
  3. Cost: it significantly reduces the cost of doing something

What do we expect from cloud computing? As I stated earlier, it is all about cost saving … (1) through elastic capacity and (2) through economies of scale. So, for any CIO who is interested in moving to the cloud, it is very important to understand the cost elements of different cloud solutions. I am going to look at three platforms: Amazon EC2, Google App Engine and Microsoft Azure. They are sufficiently different from each other, and each of these companies is following a different cloud strategy – so we need to understand their pricing models.

(A word of caution: this analysis is based on the data published as of 20th January, 2010, and text in green italics is my interpretation)

[Update on Amazon offering as of June, 2011]

Quick Read: Market forces seem to have ensured that all the prices are similar – for a quick rule-of-thumb viability calculation, use the following numbers irrespective of the provider. You will not be too far off the mark.

  • Base machine = $0.10 per hour (for a 1.5 GHz Intel processor)
  • Storage = $0.15 per GB per month
  • I/O = $0.01 per 1,000 writes and $0.001 per 1,000 reads
  • Bandwidth = $0.10 per GB for incoming traffic and $0.15 per GB for outgoing traffic
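As a sketch, the rule-of-thumb numbers above can be rolled into a quick monthly estimate. The function name and the example workload below are my own illustration, not any provider's pricing API:

```python
# Rough rule-of-thumb monthly cost estimator using the provider-agnostic
# numbers above. Helper names and the sample workload are illustrative.

def monthly_cost(machines, storage_gb, writes, reads, gb_in, gb_out, hours=720):
    """Estimate a monthly bill in USD (assumes ~720 machine-hours per month)."""
    compute = machines * hours * 0.10            # $0.10 per machine-hour
    storage = storage_gb * 0.15                  # $0.15 per GB-month
    io = (writes / 1000) * 0.01 + (reads / 1000) * 0.001
    bandwidth = gb_in * 0.10 + gb_out * 0.15
    return compute + storage + io + bandwidth

# Example: 2 machines, 100 GB stored, 5M writes, 50M reads, 20 GB in, 50 GB out
print(round(monthly_cost(2, 100, 5_000_000, 50_000_000, 20, 50), 2))
```

Plugging in a candidate workload this way gives a first viability check before digging into any single provider's fine print.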

However, if you have time, you can go through the detailed analysis given below.

Amazon:
  • Overview: You can create one or more instances of a virtual machine for processing and for storage
    • You pay based on the time the instances are running, not on how much they are used – if an instance is idle, you still pay for it
    • The facility is available in three physically different locations (called availability zones) – US (N. Virginia, N. California) and EU (Ireland)
    • When you shut down the machine instance, or it crashes for whatever reason, you lose all your data
    • It is possible to have a reserved instance (for 1 year or 3 years) for an initial payment and a discounted rate of usage – however, I do not think it provides any guarantee against data loss because of a machine crash
    • Data storage can be both relational and non-relational
  • Machine Instance: Virtual machine can be of different capacity – Standard(Small, Large, Extra Large), High-Memory(Double Extra Large, Quadruple Extra Large), High-CPU(Medium, Extra Large)
    • Charge for Machine Usage: You are charged for the time you keep the instance of the machine running – the time is calculated in hours, and any fraction of an hour is counted as a full hour
      • Hourly charges vary from $0.085 (Small – Linux – N. Virginia) to $3.16 (Quadruple Extra Large – Windows – N. California)
      • Both Linux and Windows machine instances are supported – Windows machines are about 40% more expensive, and other software charges are extra
    • There are separate charges for mapping IP addresses, for monitoring & auto scaling ($0.015 per instance per hour) and load balancing
    • A message queue is available (Simple Queue Service – SQS) but again it has a separate charge – $0.10 to $0.17 per GB depending on the total monthly volume
  • Data Persistence: For persistent data storage you can use one of three alternatives – Simple DB, Simple Storage Service (S3) or Relational Database Service (RDS)
    • Simple DB and S3 are not RDBMS storage mechanisms – that is, you do not have relational tables, so you cannot retrieve records using JOINs
    • RDS is an instance of MySQL – so you can use it like a normal RDBMS
    • Charges for Simple DB: you pay separately for CPU, disk space and data transfer – though up to a limit they are free (25 CPU hours, 1 GB data transfer, 1 GB of storage)
      • CPU usage calculation is normalized to a 1.7 GHz Xeon (2007) processor and works out to $0.14 to $0.154 per hour depending on location
      • Data transfer in is free till June 2010, and the charge for transfer out is between $0.10 and $0.17 per GB depending on the total monthly volume
      • Actual storage is charged at $0.25 to $0.275 per GB per month – it includes 45 bytes of overhead for each item uploaded
    • Charges for S3: You are charged for disk space, data transfer and the number of requests made instead of CPU usage – data transfer charges are the same
      • The storage charge varies from $0.055 to $0.165 per GB per month, making it slightly cheaper than Simple DB, but only at a higher level of usage (more than 1,000 TB)
      • I/O requests are charged separately – you pay between $0.01 and $0.011 per 1,000 write requests and $0.01 to $0.011 per 10,000 read requests – deletes are free
    • Charge for RDS: You pay for storage, I/O requests, data transfer and the machine instance (Small, Large, Extra Large, Double Extra Large, Quadruple Extra Large) based on usage
      • You pay for RDS instance – charges vary from $0.11 to $3.10 per hour depending on the instance size
      • The storage charge is not pay-as-you-use – you have to decide the size in advance (5 GB to 1 TB), and the charge is $0.10 per GB per month
      • There is no charge for backup up to the amount of storage you have chosen, but you have to pay $0.15 per GB per month for extra backup
      • You pay separately for I/O at $0.10 per 1 million I/O requests
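Amazon's hour-rounding rule described above is worth spelling out, because it is what makes short-lived instances proportionally expensive. A minimal sketch (the rate and example run are illustrative, not Amazon's billing API):

```python
import math

# Sketch of the EC2 billing rule described above: instance time is rounded
# up to whole hours, and you pay whether the instance is busy or idle.
# The rate below is the Small Linux (N. Virginia) figure quoted in the text.

SMALL_LINUX_RATE = 0.085  # $ per hour

def instance_charge(seconds_running, hourly_rate=SMALL_LINUX_RATE):
    """Any fraction of an hour is billed as a full hour."""
    hours = math.ceil(seconds_running / 3600)
    return hours * hourly_rate

# An instance that ran for 90 minutes is billed for 2 full hours:
print(instance_charge(90 * 60))  # 2 hours x $0.085 = $0.17
```

The same rounding applies per instance, so many short-lived instances cost more than one long-running one for the same total compute time.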

Google:
  • Overview: Applications written in Python or Java can be deployed directly – the language implementation is a subset of the standard one
    • No need to instantiate any virtual machine
    • You are charged on the actual normalized CPU cycles used
    • Storage is only non-relational
    • Charges are calculated on these parameters – bandwidth, CPU, storage and emails sent
    • You have a free quota for each of these parameters – it is enough for development, testing and small deployments
    • There are limits imposed on peak usage for many different parameters – both daily limits and limits on usage in a burst
    • You will need to rewrite your application to work on Google App Engine – see this
  • Charge for CPU usage: It is calculated in CPU seconds equivalent to a 1.2 GHz Intel x86 processor
    • You pay $0.10 per hour of CPU usage for processing requests
    • 6.5 hours of CPU time is free
    • You do not pay for CPU idle time
  • Charge for storage: Only non-relational storage is available
    • You pay $0.15 per GB per month – the size includes overhead, metadata and the storage required for indexes
    • It includes data stored in the datastore, memcache and blobstore
    • You pay for CPU usage for data I/O at $0.10 per hour
    • 60 hours of CPU time for data I/O is free
    • Up to 1 GB of storage is free – though the FAQ page says that it is 500 MB
    • You are charged every day at $0.005 per GB per day after subtracting your free quota
  • Charge for bandwidth usage: Inward and outward bandwidth usage is charged at different rates
    • You pay $0.10 per GB for incoming traffic
    • You pay $0.12 per GB for outgoing traffic
    • 1 GB of incoming traffic and 1 GB of outgoing traffic is free
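App Engine's "free quota first, then pay per parameter" model can be sketched as a daily bill calculation. The quota values come from the figures quoted above; the function and the example usage are my own illustration, not Google's billing API:

```python
# Sketch of App Engine's free-quota billing described above: each parameter
# has its own free allowance, and only usage above it is charged.
# Quota and rate values are the ones quoted in the text; the rest is
# an illustrative assumption.

FREE = {"cpu_hours": 6.5, "gb_in": 1.0, "gb_out": 1.0}
RATE = {"cpu_hours": 0.10, "gb_in": 0.10, "gb_out": 0.12}  # $ per unit

def daily_bill(usage):
    """Charge only for usage above the free quota, parameter by parameter."""
    total = 0.0
    for key, used in usage.items():
        billable = max(0.0, used - FREE.get(key, 0.0))
        total += billable * RATE[key]
    return total

# Example day: 10 CPU-hours, 3 GB in, 5 GB out
print(round(daily_bill({"cpu_hours": 10, "gb_in": 3, "gb_out": 5}), 2))
```

A small app that stays under every quota pays nothing, which is why the free tier is enough for development and small deployments.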
Microsoft:
  • Overview: The offering has 3 main parts – Windows Azure, SQL Azure and App Fabric
    • Details available on the Microsoft site are more about the vision of the product than about what is implemented here and now
    • However, the document “Introducing Windows Azure” is good
    • It uses Hyper-V for virtualization – it works more like Amazon than like Google
    • There is an introductory offer under which the service can be used free of charge
    • The development environment is Visual Studio, through an SDK
    • The emphasis is on creating applications which run partly on premise and partly in the cloud
    • Microsoft wants to keep the programming model as unaltered as possible – see this
  • Charge for CPU usage: It is calculated in CPU seconds equivalent to a 1.2 GHz Intel x86 processor
    • You pay $0.12 per hour of CPU usage for processing requests
  • Charge for storage: Only non-relational storage is available
    • You pay $0.15 per GB per month
    • Storage transactions are charged separately at $0.01 per 10,000 transactions
  • Charge for bandwidth usage: Inward and outward bandwidth usage is charged at different rates
    • You pay $0.10 per GB for incoming traffic – the rate for Asia is different: $0.30 per GB
    • You pay $0.15 per GB for outgoing traffic – the rate for Asia is different: $0.45 per GB
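The region-dependent bandwidth rates above matter when your traffic is not evenly distributed. A minimal sketch using the rates quoted in the text (the function and region labels are my own illustration, not Microsoft's billing API):

```python
# Sketch of Azure's region-dependent bandwidth pricing described above.
# Rates are the figures quoted in the text; region labels are illustrative.

BANDWIDTH_RATES = {          # $ per GB: (incoming, outgoing)
    "default": (0.10, 0.15),
    "asia":    (0.30, 0.45),
}

def bandwidth_charge(gb_in, gb_out, region="default"):
    rate_in, rate_out = BANDWIDTH_RATES.get(region, BANDWIDTH_RATES["default"])
    return gb_in * rate_in + gb_out * rate_out

# The same 100 GB in / 100 GB out costs three times as much from Asia:
print(bandwidth_charge(100, 100))
print(bandwidth_charge(100, 100, "asia"))
```

For an application serving Asian users heavily, bandwidth can dominate the bill even when compute and storage rates look identical across providers.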

Looking at the complexity of the pricing, I see great prospects for anybody who specializes in optimizing applications for the cloud – unlike with traditional applications, any improvement in a cloud application can be directly measured in dollars saved.

More Stories By Udayan Banerjee

Udayan Banerjee is CTO at NIIT Technologies Ltd and an IT industry veteran with more than 30 years' experience. He blogs at http://setandbma.wordpress.com.
The blog focuses on emerging technologies like cloud computing, mobile computing and social media (aka Web 2.0). It also covers agile methodology and trends in architecture. It is a world view seen through the lens of a software service provider based out of Bangalore and serving clients across the world. The focus is mostly on:

  • Keep the hype out and project a realistic picture
  • Uncover trends that are not very apparent
  • Draw conclusions from real life experience
  • Point out fallacies & discrepancies when I see them
  • Talk about trends which I find interesting