Keys to the PaaSing Game: Multi-Tenancy

There are two primary approaches to multi-tenancy in PaaS

Whether SaaS, IaaS or PaaS, one of the central concepts of all layers of cloud computing is multi-tenancy. If there is no shared resource in a deployment, it's difficult to justify calling that deployment "cloud."

Even NIST makes it more or less official within its formal definition of cloud computing, which reads, in part:

Essential Characteristics: Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

In most SaaS offerings, multi-tenancy is manifold: the servers, the application code, the database, and even individual tables or pages within the database may be shared among different customers and users of the system.

In IaaS, multi-tenancy is implemented via virtualization technology: a hypervisor allocates and manages a number of complete virtual machines on a particular physical computing resource.

But what about PaaS?

There are two primary approaches to multi-tenancy in PaaS: one simply relies upon the IaaS multi-tenancy, which I call Server PaaS. The other looks more like SaaS multi-tenancy. I call that one Resource PaaS.

Server PaaS is essentially an automated deployment and management system. Although there are managed service providers who can manually set up your application development and deployment environments on cloud servers, and perhaps even automate parts of it, that is not Platform-as-a-Service; it is just managed services.

If, in contrast, the developer can directly manage the environments via a user interface that offers operations at a high level (even if that user interface is at the command line), then it is actually a service. Examples of Server PaaS include RightScale, Standing Cloud and EngineYard.

Resource PaaS provides an abstract "container" for an application that allows it to share computing resources in a granular way with other such applications. It eliminates the concept of "servers" in favor of functional resources. The extent to which the application container looks similar to a traditional application deployment depends on the particular service (and that's the subject of a whole other article), but in no case is there "root access." Examples of Resource PaaS include Force.com, Google AppEngine, and Heroku.

There are a number of advantages to Resource PaaS. First, application scaling is granular and fast. As the application receives more requests, or starts performing more work, the required resources are available immediately (within limits, of course). Cost calculations are also granular, so you pay only for the computing resources you use. Another advantage mirrors that of SaaS: a developer does not have to think about or manage servers (including servers that fail), or backups, setup, configuration, etc.
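To make the cost-granularity point concrete, here is a minimal sketch comparing the two billing models. All rates and numbers are hypothetical, purely for illustration; real providers price very differently.

```python
import math

# Hypothetical rates: a Resource PaaS bills per CPU-second consumed,
# while a Server PaaS bills per whole server-hour, idle or not.

def resource_paas_cost(cpu_seconds, rate_per_cpu_second=0.0001):
    """Granular billing: pay only for the compute actually consumed."""
    return cpu_seconds * rate_per_cpu_second

def server_paas_cost(hours_running, rate_per_server_hour=0.10):
    """Coarse billing: pay for full server-hours, regardless of utilization."""
    return math.ceil(hours_running) * rate_per_server_hour

# An app that consumes 600 CPU-seconds spread across a 24-hour day
# pays for exactly that usage on a Resource PaaS (~$0.06 at these
# made-up rates), but for a full day of server time on a Server PaaS.
granular = resource_paas_cost(600)
coarse = server_paas_cost(24)
```

The gap narrows as utilization rises; for a server that is busy around the clock, per-server-hour billing can be competitive.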

But this granularity and abstraction comes with a cost, and that cost is loss of control. Anything that is shared with other users cannot also be configured arbitrarily by each user. If there is a configuration detail that can be changed by individual users, then the software and systems implementing that detail for the user must be separate. Again, each service has its own place on this continuum. For example, Google AppEngine uses the BigTable database, which is shared by everyone, while Heroku allows each customer to set up a separate NoSQL or relational database.

With a Resource PaaS, there is also a loss of control over where the application runs. The provider controls the computing resources and therefore acts as a single point of failure (even though there may be redundancies at lower levels). Typically, there is also no way to deploy your application as a "hybrid," where some of the computing resources are owned and others are shared. (Cloud Foundry deployments could in the future be an exception to this.)

Because each Resource PaaS is a unique application environment, application code generally must be developed for that particular PaaS and is then somewhat locked in. In the extreme, some PaaS offerings even have a proprietary programming language and can't ever be ported to other environments.

Finally, a Resource PaaS has greater vulnerability to security breaches. This is due simply to the fact that there are more shared resources, so there are more places for bugs in the operating software to be exploited, or to accidentally expose data to "neighbors." Also, hypervisor technology is widely used and has been subjected to a fair amount of security scrutiny, so its isolation of data is relatively well tested. A typical PaaS, on the other hand, does not have the same breadth of usage and so is not subject to the same scrutiny.

The benefits and disadvantages of a Server PaaS are nearly the mirror image of those of a Resource PaaS. Data isolation relies on proven and secure hypervisor technology. The application environment is usually consistent with more traditional deployment approaches, and the application code can be built for portability. If the PaaS supports it, production deployments can be moved, spread across multiple data centers or providers, or organized into a hybrid. Configuration details of the technology stack, and in some cases even the operating system, are generally visible to the developer.

On the flip side, a Server PaaS cannot offer the same scaling and cost granularity as a Resource PaaS. Generally the unit of resource is a server-hour, and adding new resources can take several minutes or longer. Automatic scaling of these resources is also less precise, because it relies on secondary measures of resource requirements (e.g., CPU load).
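A typical Server PaaS scaling policy can be sketched as a simple threshold rule driven by CPU load. This is an illustrative sketch, not any particular provider's algorithm; the function name and thresholds are assumptions chosen for clarity.

```python
def desired_server_count(current_servers, avg_cpu_load,
                         scale_up_at=0.75, scale_down_at=0.25,
                         min_servers=1):
    """Threshold rule: add a server when average CPU load is high,
    remove one when it is low. CPU load is only a *proxy* for actual
    demand, which is why this reacts more slowly and less precisely
    than the per-request scaling of a Resource PaaS."""
    if avg_cpu_load > scale_up_at:
        return current_servers + 1
    if avg_cpu_load < scale_down_at and current_servers > min_servers:
        return current_servers - 1
    return current_servers
```

Note the step size: capacity changes one whole server at a time, and each new server may take minutes to boot, which is exactly the coarseness described above.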

A good Server PaaS automates the server management aspects of the application, including not just initial deployment but the production life cycle of the application as well. Done right, it can be nearly as easy to manage as a Resource PaaS, although it still requires somewhat more awareness of the underlying servers.

Which is better? Clearly the answer depends on both current and future needs. If control, flexibility, security, and portability are important to you, then Server PaaS has many advantages. If ease of deployment and management and/or rapid and efficient scaling are crucial, Resource PaaS probably wins. My only generic advice is that for any new applications you build, minimize the dependencies on a particular PaaS or type of PaaS, because you never know when things will change.
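One common way to minimize PaaS dependencies is a thin adapter layer: application code talks to an interface you own, and platform-specific APIs live behind adapters. A minimal sketch of the idea (class and backend names are hypothetical, not any real platform's API):

```python
# Application code depends only on KeyValueStore; moving between PaaS
# providers then means writing a new adapter, not rewriting the app.

class KeyValueStore:
    """The interface the application codes against."""
    def put(self, key, value):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """Local/test backend. A real deployment would swap in an adapter
    wrapping the chosen platform's data service (e.g., a hosted
    datastore or relational database)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

store: KeyValueStore = InMemoryStore()
store.put("user:1", "alice")
```

The indirection costs a little up front, but it keeps the exit door open if the platform changes terms, changes APIs, or disappears.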

More Stories By Dave Jilk

Dave Jilk has an extensive business and technical background in both the software industry and the Internet. He currently serves as CEO of Standing Cloud, Inc., a Boulder-based provider of cloud-based application management solutions, which he cofounded in 2009.

Dave is a serial software entrepreneur who also founded Wideforce Systems, a service similar to and pre-dating Amazon Mechanical Turk; and eCortex, a University of Colorado licensee that builds neural network brain models for defense and intelligence research programs. He was also CEO of Xaffire, Inc., a developer of web application management software; an Associate Partner at SOFTBANK Venture Capital (now Mobius); and CEO of GO Software, Inc.

Dave earned a Bachelor of Science degree in Computer Science from the Massachusetts Institute of Technology.
