Log for Better Clouds - Part 8: Cloud Portability

What happens to the logs when the honeymoon is over?

Cloud Portability.

(In the context of Logs, of course!)

So the honeymoon is over.

The Cloud Provider that you so carefully selected is not performing as you expected and you are eyeing the competition.  You might even be considering re-insourcing some of your IT services.

So what happens to all the logs? As a customer, can you Trust that your Provider(s) will not let you down and mess with your logs?

Well, first off, whose logs are they?  Are they the Provider's logs because they were generated by the Provider's physical equipment, or are they your logs because they trace your applications and your virtual systems?

Actually, they're both at the same time. Let's see why both parties would need access to the logs and reports.

From a Customer perspective, logs are important because they are an indication of my business processes and I need visibility into those. I need visibility into the usage of my applications, for such purposes as trending and capacity planning.  I also need them for internal reports, to tell the Business Units how they fared, and I probably need them for chargeback.

For example, say that I'm using a Platform as a Service Provider to host applications that enable my sales team to better serve my customers. What is the usage trend for that application?  Is the application's take rate growing, or did we reach a peak and need to reinvent ourselves?  What kind of horsepower do we need to ensure good quality of service and make sure that our applications are humming? How much do I need to bill each BU for its fair share of usage? Can I use this usage information to select my next Provider? What kind of usage expectations do I need to set? All of these are valid questions, and they can be answered by logs, more specifically by the reports on logs.
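To make this concrete, here is a minimal sketch of how such customer-side reports could be derived from raw logs. The log schema and field names (business_unit, cpu_seconds) are illustrative assumptions, not any particular Provider's format.

```python
# Minimal sketch: turning raw application logs into per-BU and per-month usage
# reports for chargeback, trending and capacity planning.
from collections import defaultdict
from datetime import datetime

sample_logs = [
    {"timestamp": "2011-03-01T09:15:00", "business_unit": "Sales", "cpu_seconds": 120},
    {"timestamp": "2011-03-01T10:02:00", "business_unit": "Sales", "cpu_seconds": 95},
    {"timestamp": "2011-03-01T10:30:00", "business_unit": "Marketing", "cpu_seconds": 40},
]

def usage_by_bu(logs):
    """Aggregate consumption per Business Unit, e.g. for chargeback."""
    totals = defaultdict(int)
    for entry in logs:
        totals[entry["business_unit"]] += entry["cpu_seconds"]
    return dict(totals)

def usage_by_month(logs):
    """Aggregate consumption per month, e.g. for trending and capacity planning."""
    totals = defaultdict(int)
    for entry in logs:
        month = datetime.fromisoformat(entry["timestamp"]).strftime("%Y-%m")
        totals[month] += entry["cpu_seconds"]
    return dict(totals)

print(usage_by_bu(sample_logs))     # e.g. {'Sales': 215, 'Marketing': 40}
print(usage_by_month(sample_logs))  # e.g. {'2011-03': 255}
```

The same aggregations, run over a longer window, give the trending data needed to size the next contract or to compare Providers.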

Providers have questions of their own: which services and systems are the most popular, how is usage trending over time (with granularity by time of day or period of the year), where should resources be focused, what is the take rate on each offer, where should those offers be better marketed, and how much should customers be charged?  All of these questions can be answered with logs and log reports, so Providers also need to keep these logs.

OK, so both parties need to keep the logs or at least reports on the logs.

As a customer, I understand why my Provider needs to generate a last set of reports from "my" logs, and I will let him do it provided that I get the guarantee that he will not be able to use this data to infer information about my business. Who knows, maybe he just signed my worst competitor as a client and might be tempted to give him access to my logs?  The Provider can use my logs to generate reports, but is not allowed to access the raw logs anymore.

And because I am changing Providers, there is no reason why I should still have access to the (old) raw logs; I'm not paying usage fees anymore.  By the same token, I do need a last set of reports based on "my" logs.

So the solution could be that raw logs are stored a while longer, but quarantined: they are still there and available in case of dispute, or in case Law Enforcement needs access to them, but otherwise they cannot be accessed.

Logs end up in a quarantined bucket for both parties, and any access to the raw logs or generation of reports should trigger alerts sent to both parties, so that there are proper checks and balances on raw log access and on report generation.  This bucket does not necessarily need to be a physical bucket; it can be a logical one. All we need is a segregation mechanism that prevents unapproved accesses, or at least alerts on them.
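Here is a minimal sketch of what such a quarantine mechanism could look like, assuming a simple in-process mediation layer; the class and method names (QuarantinedLogStore, access_raw, generate_report) are illustrative, not any vendor's API. The key point is that every touch of the raw logs, by either party, alerts both parties.

```python
# Minimal sketch of a logical quarantine bucket with dual-party alerting.
import datetime

class QuarantinedLogStore:
    def __init__(self, customer_contact, provider_contact):
        self._raw_logs = []                        # frozen logs from both parties
        self._contacts = (customer_contact, provider_contact)
        self.audit_trail = []                      # who touched what, and when

    def _alert(self, actor, action):
        event = {
            "time": datetime.datetime.utcnow().isoformat(),
            "actor": actor,
            "action": action,
        }
        self.audit_trail.append(event)
        for contact in self._contacts:             # both parties are always notified
            print(f"ALERT to {contact}: {actor} performed '{action}'")

    def deposit(self, entries):
        """Store raw logs at contract termination."""
        self._raw_logs.extend(entries)

    def access_raw(self, actor, reason):
        """Raw access stays possible (dispute, Law Enforcement) but is never silent."""
        self._alert(actor, f"raw log access ({reason})")
        return list(self._raw_logs)

    def generate_report(self, actor, aggregate_fn):
        """Report generation is allowed, but it is alerted as well."""
        self._alert(actor, "report generation")
        return aggregate_fn(self._raw_logs)
```

In practice the alerts would go to each party's monitoring or ticketing system rather than to stdout, and the audit trail itself becomes log data whose integrity needs to be protected.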

Techniques for segregation have existed in the Industry for several years; for example, this is exactly how accesses are mediated in data warehouses: all data is mixed together, including several owners' data, and access mediation is done at the logical level.
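As a rough illustration of that logical-level mediation, here is a minimal sketch in which records from several owners share one physical store and segregation is enforced entirely in the query layer; the field names (owner, payload) are assumptions for the example.

```python
# Minimal sketch of logical-level access mediation: one shared store,
# segregation enforced at query time rather than by physical separation.
shared_store = [
    {"owner": "customer-A", "payload": "app login ok"},
    {"owner": "customer-B", "payload": "vm started"},
    {"owner": "provider",   "payload": "hypervisor patched"},
]

def query(requester, store=shared_store):
    """Return only the rows the requester owns; everything else stays invisible."""
    return [row for row in store if row["owner"] == requester]

print(query("customer-A"))  # sees only customer-A's rows, even though data is mixed
```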

Phew, so are we done with Trust?

In absolute terms, we will never be done with Trust.  Building Trust is a process, not an event, and all parties need to constantly make efforts so that everybody else's level of Trust is maintained and, better yet, improved.

I hope that by now both Providers and Customers trust each other a little bit more thanks to logs: how they have been collected, how their integrity can be proven, and the higher-level reports that use them; and that both parties can then engage in a mutually beneficial business relationship.

Next time, we'll talk about some specific use cases, starting with Pay Per Use.  Stay tuned!

More Stories By Gorka Sadowski

Gorka is a natural born entrepreneur with a deep understanding of Technology, IT Security and how these create value in the Marketplace. He is today offering innovative European startups the opportunity to benefit from the Silicon Valley ecosystem accelerators. Gorka spent the last 20 years initiating, building and growing businesses that provide technology solutions to the Industry. From General Manager Spain, Italy and Portugal for LogLogic, defining Next Generation Log Management and Security Forensics, to Director Unisys France, bringing Cloud Security service offerings to the market, from Director of Emerging Technologies at NetScreen, defining Next Generation Firewall, to Director of Performance Engineering at INS, removing WAN and Internet bottlenecks, Gorka has always been involved in innovative Technology and IT Security solutions, creating successful Business Units within established Groups and helping launch breakthrough startups such as KOLA Kids OnLine America, a social network for safe computing for children, SourceFire, a leading network security solution provider, or Ibixis, a boutique European business accelerator.
