
S3 Encryption With Porticor

Your Porticor appliance can encrypt data stored to Amazon’s Simple Storage Service (S3). Porticor is the only system available that offers the convenience of cloud-based hosted key management without sacrificing trust by requiring someone else to manage the keys. Porticor’s split-key encryption technology protects keys, guarantees they remain under customer control, and ensures they are never exposed in storage; with homomorphic key encryption, the keys are protected even while they are in use.

A variety of applications use S3 to store bulk data, and we support two different ways of enabling Porticor encryption with them.

  1. The first and recommended alternative is to configure the application so that data is written to and read from the Porticor appliance.
  2. Where this is not an option, you can instead configure host-to-IP mapping so that the application writing to s3.amazonaws.com actually writes to the appliance, which encrypts and forwards the data to S3.

The two options differ only in setup; the end result is the same: your data is encrypted when written to S3 and decrypted when read from it, with no change to the client application’s code.

What follows are detailed instructions for both configuration options.

Alternative 1: Explicit Configuration

This alternative depends on the individual application; in fact, some S3 clients are hardwired to the Amazon server addresses and cannot be configured. For those clients you must use Alternative 2.

In this alternative you configure the S3 client to connect to a special DNS address for each of your buckets and for the main S3 endpoint. These DNS addresses are installed automatically when you add your S3 “buckets” into the table on the S3 Encryption page. All buckets that are to be encrypted must be listed. If your bucket is called mylittlebucket, it will be mapped to the DNS name mylittlebucket.d.porticor.net.
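The mapping rule is mechanical and can be sketched in a line of shell; the bucket name below is the example from the text.

```shell
# The mapping described above: a bucket named "mylittlebucket" is served
# at mylittlebucket.d.porticor.net once it is listed on the S3 Encryption page.
bucket=mylittlebucket
echo "${bucket}.d.porticor.net" | tee /tmp/mapped-name.txt
```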

The exact configuration depends on the client. Two examples follow.

Configuring s3cmd

Edit the file $HOME/.s3cfg, and replace the host_base and host_bucket lines as below, where the long host_base value is your appliance’s address.

# old:
# host_base = s3.amazonaws.com
# host_bucket = %(bucket)s.s3.amazonaws.com

# new:
host_base = itbetb19zy3-pzjy3yty2zw.d.porticor.net
host_bucket = %(bucket)s.d.porticor.net
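If you prefer to script the edit, the rewrite can be sketched as below. The demo file, appliance address, and `sed` approach are illustrative; substitute your own appliance address, and back up your real $HOME/.s3cfg before touching it.

```shell
# Illustrative only: create a sample .s3cfg fragment, then rewrite the
# endpoint lines as the instructions above describe.
cat > /tmp/s3cfg.demo <<'EOF'
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
EOF

# Point both settings at the Porticor appliance (address is an example).
sed -i \
  -e 's|^host_base = .*|host_base = itbetb19zy3-pzjy3yty2zw.d.porticor.net|' \
  -e 's|^host_bucket = .*|host_bucket = %(bucket)s.d.porticor.net|' \
  /tmp/s3cfg.demo

cat /tmp/s3cfg.demo
```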

Configuring S3QL

Edit $HOME/.s3ql/authinfo2, and add a new section:

[porticor]
storage-url: s3c://itbetb19zy3-pzjy3yty2zw.d.porticor.net/bucket-name/
backend-login: aws-key-id
backend-password: aws-secret-key

To create the file system, run:

mkfs.s3ql --plain --ssl s3c://itbetb19zy3-pzjy3yty2zw.d.porticor.net/bucket-name/

mount.s3ql --ssl s3c://itbetb19zy3-pzjy3yty2zw.d.porticor.net/bucket-name/ /mnt/cloud-drive

Note the use of the “s3c” URL method. Both the --ssl and --plain flags are mandatory: --plain skips S3QL’s own encryption layer (the appliance already encrypts the data), and --ssl secures the connection to the appliance.

Alternative 2: Host Mapping

With this alternative you configure the host on which your S3 client is running, so that s3.amazonaws.com and bucketname.s3.amazonaws.com are resolved to the IP address of the Porticor appliance. You should not fill in the Bucket table in this alternative.
On Linux, edit the file /etc/hosts and add the lines shown below.
On Windows, add these lines to %windir%\system32\drivers\etc\hosts.

Appliance-IP bucketname.s3.amazonaws.com # replicate this line for each S3 bucket
Appliance-IP s3.amazonaws.com
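The entries can be composed as below. This is a sketch only: the IP address and bucket name are placeholders, and it writes to a demo file rather than the real /etc/hosts (editing that requires root and care).

```shell
# Illustrative only: build the host-mapping lines from the steps above.
# 10.0.0.42 and "mybucket" are placeholders; use your appliance's actual
# private IP and your own bucket names.
APPLIANCE_IP=10.0.0.42
BUCKET=mybucket

cat > /tmp/hosts.demo <<EOF
$APPLIANCE_IP $BUCKET.s3.amazonaws.com
$APPLIANCE_IP s3.amazonaws.com
EOF

cat /tmp/hosts.demo
```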

For access from within the EC2 cloud, use the “private” IP address (10.x.x.x) of the Porticor appliance. If you plan to access the appliance from outside the cloud, or from another AWS region, use the “public” address instead. Both addresses are listed on the S3 Configuration page of the GUI.
All communication with the virtual appliance is SSL-protected. In Alternative 2, make sure that your project’s CA certificate is installed on the client machine; otherwise your S3 client may refuse to connect.
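How the CA certificate is installed varies by OS and client (s3cmd, for instance, has a `ca_certs_file` setting). Before trusting any certificate file, it is worth inspecting its subject with openssl; the throwaway certificate generated below exists only so the inspection step is runnable.

```shell
# Demonstration only: generate a throwaway CA certificate, then inspect
# its subject -- the same check you would run on the real Porticor
# project CA certificate before installing it into the trust store.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demo-porticor-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem

# Confirm the subject matches the CA you expect.
openssl x509 -in /tmp/demo-ca.pem -noout -subject
```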

Additional Notes

  • Wait a few minutes after creating a new bucket before starting to use it, so that it is recognized by all S3 servers.
  • Porticor does not support “mixed buckets” containing both protected and unprotected objects.
  • Bucket names must conform to the requirements for DNS names, as recommended by Amazon Web Services. In particular, they must not contain uppercase letters.

The post S3 Encryption With Porticor appeared first on Porticor Cloud Security.


More Stories By Gilad Parann-Nissany

Gilad Parann-Nissany, Founder and CEO at Porticor, is a pioneer of Cloud Computing. He built SaaS Clouds for small and medium enterprises at SAP (as CTO, Small Business), contributing to several SAP products that reached more than 8 million users. More recently he created a consumer Cloud at G.ho.st - a cloud operating system that delighted hundreds of thousands of users while providing browser-based and mobile access to data, people, and a variety of cloud-based applications. He is now CEO of Porticor, a leader in Virtual Privacy and Cloud Security.
