
Keys to the PaaSing Game: Multi-Tenancy

There are two primary approaches to multi-tenancy in PaaS

Whether SaaS, IaaS or PaaS, one of the central concepts of all layers of cloud computing is multi-tenancy. If there is no shared resource in a deployment, it's difficult to justify calling that deployment "cloud."

Even NIST makes it more or less official within its formal definition of cloud computing, which reads, in part:

Essential Characteristics: Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

In most SaaS offerings, multi-tenancy operates at many layers: the servers, the application code, the database, and even individual tables or pages within the database might be shared among different customers and users of the system.

In IaaS, multi-tenancy is implemented via virtualization technology: a hypervisor allocates and manages a number of complete virtual machines on a particular physical computing resource.

But what about PaaS?

There are two primary approaches to multi-tenancy in PaaS: one simply relies upon the IaaS multi-tenancy, which I call Server PaaS. The other looks more like SaaS multi-tenancy. I call that one Resource PaaS.

Server PaaS is essentially an automated deployment and management system. Although there are managed services providers who can manually set up your application development and deployment environments on cloud servers, and perhaps even automate part of it, this is not Platform-as-a-Service. It is just managed services.

If, in contrast, the developer can directly manage the environments via a user interface that offers operations at a high level (even if that user interface is at the command line), then it is actually a service. Examples of Server PaaS include RightScale, Standing Cloud and Engine Yard.

Resource PaaS provides an abstract "container" for an application that allows it to share computing resources in a granular way with other such applications. It eliminates the concept of "servers" in favor of functional resources. The extent to which the application container looks similar to a traditional application deployment depends on the particular service (and that's the subject of a whole other article), but in no case is there "root access." Examples of Resource PaaS include Force.com, Google AppEngine, and Heroku.
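In practice, the "container" a Resource PaaS deploys is often nothing more than an application entry point; the platform supplies the HTTP listener, process management, and scaling. A minimal sketch, assuming a generic WSGI-style Python container of the kind such services commonly host:

```python
# Minimal WSGI application: roughly the unit of deployment on a
# Resource PaaS. There is no server for the developer to configure
# or administer -- and no root access to one.
def application(environ, start_response):
    body = b"Hello from a shared container"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The trade is visible even here: the platform decides where and how this function runs, on infrastructure shared with other tenants.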

There are a number of advantages to Resource PaaS. First, application scaling is granular and fast. As the application receives more requests, or starts performing more work, the required resources are available immediately (within limits, of course). Cost calculations are also granular, so you pay only for the computing resources you use. Another advantage mirrors that of SaaS: a developer does not have to think about or manage servers (including servers that fail), or backups, setup, configuration, etc.
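The cost-granularity point can be made concrete with a toy comparison. All rates below are made up for illustration; real pricing varies by provider:

```python
import math

def granular_cost(busy_seconds, rate_per_second=0.0001):
    # Resource PaaS style: pay only for compute actually consumed.
    return busy_seconds * rate_per_second

def server_hour_cost(busy_seconds, rate_per_hour=0.10):
    # Server style: whole server-hours are billed, even if the
    # server sits mostly idle.
    return math.ceil(busy_seconds / 3600) * rate_per_hour

# A bursty job that runs for ten minutes costs roughly $0.06 under
# granular billing, but a full $0.10 under server-hour billing.
```

For steady, high-utilization workloads the gap narrows or reverses; granularity pays off when demand is spiky.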

But this granularity and abstraction comes with a cost, and that cost is loss of control. Anything that is shared with other users cannot also be configured arbitrarily by each user. If there is a configuration detail that can be changed by individual users, then the software and systems implementing that detail for the user must be separate. Again, each service has its own place on this continuum. For example, Google AppEngine uses the BigTable database, which is shared by everyone, while Heroku allows each customer to set up a separate NoSQL or relational database.

With a Resource PaaS, there is also a loss of control over where the application runs. The provider controls the computing resources and therefore it behaves as a single point of failure (even though there may be redundancies at lower levels). Typically, there is also no way to deploy your application as a "hybrid," where some of the computing resources are owned and others are shared. (CloudFoundry deployments could in the future be an exception to this.)

Because each Resource PaaS is a unique application environment, application code generally must be developed for that particular PaaS and is then somewhat locked in. In the extreme, some PaaS offerings even have a proprietary programming language and can't ever be ported to other environments.

Finally, a Resource PaaS has greater vulnerability to security breaches. This is due simply to the fact that there are more shared resources, so there are more places where bugs in the operating software can be exploited, or can accidentally expose data to "neighbors." Also, hypervisor technology is widely used and has been subjected to a fair amount of security scrutiny, so its isolation of data is relatively well tested. A typical PaaS, on the other hand, does not have the same breadth of usage and so is not subject to the same scrutiny.

The benefits and disadvantages of a Server PaaS are nearly the mirror image of those of a Resource PaaS. Data isolation relies on proven and secure hypervisor technology. The application environment is usually consistent with more traditional deployment approaches, and the application code can be built for portability. If the PaaS supports it, production deployments can be moved, spread across multiple data centers or providers, or organized into a hybrid. Configuration details of the technology stack, and in some cases even the operating system, are generally visible to the developer.

On the flip side, a Server PaaS cannot offer the same level of scaling and cost granularity as a Resource PaaS. Generally the unit of resource is a server-hour, and adding new resources can take several minutes or longer. Automatic scaling of these resources is also less accurate, because it relies on secondary measures of resource requirements (e.g., CPU load) rather than on demand itself.
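The coarseness of scaling on a secondary signal can be sketched in a few lines. This is an illustrative threshold policy, not any particular provider's algorithm, and the thresholds are invented:

```python
# Sketch of threshold-based autoscaling on a secondary signal (CPU load),
# the style a Server PaaS typically uses. Capacity changes only in
# whole-server steps, and new servers may take minutes to come online.
def desired_servers(current_servers, avg_cpu_percent,
                    scale_up_at=70, scale_down_at=30, min_servers=1):
    if avg_cpu_percent > scale_up_at:
        return current_servers + 1   # add a whole server
    if avg_cpu_percent < scale_down_at and current_servers > min_servers:
        return current_servers - 1   # remove a whole server
    return current_servers
```

Because CPU load only approximates actual demand, the policy can lag a traffic spike or overshoot a lull, whereas a Resource PaaS allocates per request or per unit of work.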

A good Server PaaS automates the server management aspects of the application, including not just initial deployment but the production life cycle of the application as well. Done right, it can be nearly as easy to manage as a Resource PaaS, although it still requires somewhat more awareness of the underlying servers.

Which is better? Clearly the answer depends on both current and future needs. If control, flexibility, security, and portability are important to you, then Server PaaS has many advantages. If ease of deployment and management and/or rapid and efficient scaling are crucial, Resource PaaS probably wins. My only generic advice is that for any new applications you build, minimize the dependencies on a particular PaaS or type of PaaS - because you never know when things will change.
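One common way to act on that advice is to keep provider-specific calls behind a thin interface of your own. The sketch below is hypothetical (the class and function names are invented for illustration); the point is only that application logic codes against the interface, so swapping PaaS providers means rewriting one adapter, not the application:

```python
# Sketch: isolate platform-specific storage behind a thin interface so
# the application core has no direct dependency on any one PaaS.
class KeyValueStore:
    """Abstract storage the application codes against."""
    def get(self, key):
        raise NotImplementedError
    def put(self, key, value):
        raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """Local/testing implementation. A PaaS-specific adapter (e.g. one
    wrapping a provider's datastore API) would implement the same
    interface."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

def save_user(store, user_id, name):
    # Application logic sees only the interface, never the provider.
    store.put("user:%s" % user_id, name)
```

The same pattern applies to queues, caches, and authentication: each provider-specific dependency becomes one small adapter.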

More Stories By Dave Jilk

Dave Jilk has an extensive business and technical background in both the software industry and the Internet. He currently serves as CEO of Standing Cloud, Inc., a Boulder-based provider of cloud-based application management solutions that he cofounded in 2009.

Dave is a serial software entrepreneur who also founded Wideforce Systems, a service similar to and pre-dating Amazon Mechanical Turk; and eCortex, a University of Colorado licensee that builds neural network brain models for defense and intelligence research programs. He was also CEO of Xaffire, Inc., a developer of web application management software; an Associate Partner at SOFTBANK Venture Capital (now Mobius); and CEO of GO Software, Inc.

Dave earned a Bachelor of Science degree in Computer Science from the Massachusetts Institute of Technology.
