Live Forensics and the Cloud

Exploring the effect of Cloud Computing on digital forensics - Part II

Cloud Computing offers a sense of "vastness" in terms of storage and remote processing. According to Simson Garfinkel, a major challenge for any digital forensics investigator examining data within the cloud is the inability to locate or identify data or code that is lost when a single data structure is split into elements.

This in effect directly impacts forensic visibility.

Within this ecosystem, a major concern is access to and preservation of data during an ongoing digital forensic investigation. As mentioned in Part 1, in a live and dynamic system such as the cloud it is virtually impossible to return to the original state of the data after obtaining a "snapshot" for investigation.
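While an investigator cannot rewind a live cloud system, he or she can at least fingerprint the snapshot that was captured, so later copies can be verified against the state at acquisition. Below is a minimal sketch in Python; the function name and file path are illustrative, not drawn from any standard forensic tool:

```python
import hashlib

def fingerprint_snapshot(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of an acquired snapshot so that later
    copies can be verified against the state captured at acquisition."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded alongside chain-of-custody notes at acquisition time, e.g.:
# acquisition_hash = fingerprint_snapshot("/evidence/vm-snapshot.img")
```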

Also important are the jurisdictional and legal ramifications pertaining to the physical location of the cloud systems holding the data under investigation.

This part of the article continues from the question, "How can an investigator identify and track such an issue?" It looks at identity within the cloud with regard to the issue of anonymous authentication and how it can impact a digital forensic investigation.

Going back a bit in time, we can reference provenance as detailed in a paper published in 2001 by Clifford A. Lynch.

Lynch proposed utilizing tools that allow an investigator to determine the identity of the person or organization standing behind a metadata assertion. That determination, in turn, allows trust to develop in an entity's identity.
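As an illustration of Lynch's idea, the sketch below binds a metadata assertion to a named issuer and verifies it. It uses a shared-secret HMAC purely for brevity; a real deployment would rely on asymmetric signatures backed by a PKI, and the function names here are hypothetical:

```python
import hmac
import hashlib
import json

def sign_assertion(metadata: dict, issuer_id: str, secret: bytes) -> dict:
    """Bind a metadata assertion to an issuer identity (illustrative HMAC;
    a real deployment would use asymmetric signatures, e.g., RSA/ECDSA)."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(secret, issuer_id.encode() + payload, hashlib.sha256).hexdigest()
    return {"issuer": issuer_id, "metadata": metadata, "tag": tag}

def verify_assertion(assertion: dict, secret: bytes) -> bool:
    """Check that the named issuer actually stands behind this assertion."""
    payload = json.dumps(assertion["metadata"], sort_keys=True).encode()
    expected = hmac.new(secret, assertion["issuer"].encode() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["tag"])

key = b"issuer-shared-secret"
a = sign_assertion({"object": "doc-42", "creator": "alice"}, "registrar-1", key)
print(verify_assertion(a, key))  # True only if the assertion is untampered
```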

Per Foster, Zhao, Raicu and Lu, provenance refers to a data product's derivation history. It includes "all the data sources, intermediate data products, and the procedures that were applied to produce the data product." In other words, it is something of an "audit trail."
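A hypothetical record of that derivation history might look like the following sketch; the field names are assumptions for illustration, not drawn from Foster et al.:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One step in a data product's derivation history."""
    output_id: str    # the data product produced
    source_ids: list  # all input data sources
    procedure: str    # the procedure applied to produce it
    actor: str        # who (or what service) ran the procedure
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A two-step derivation: raw logs -> cleaned logs -> summary report
trail = [
    ProvenanceRecord("logs-clean", ["logs-raw"], "normalize_timestamps", "etl-svc"),
    ProvenanceRecord("report-q1", ["logs-clean"], "aggregate_by_tenant", "analytics-svc"),
]
for step in trail:
    print(step.output_id, "<-", step.source_ids, "via", step.procedure)
```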

Foster et al. also noted that, with regard to the cloud, such an audit trail could face challenges stemming from "issues such as tracking data production across different service providers (with different platform visibility and access policies) and across different software and hardware abstraction layers within one provider."

Researchers Lu, Lin, Liang and Shen took the concept of provenance as suggested by Lynch a step further and proposed that cloud computing should provide provenance "to record ownership and process history of data objects in the cloud," on the assumption that "given its provenance, a data object can report who created and who modified its contents."

This, of course, can greatly aid the outcome of a digital forensic investigation by providing accountability and, in the best case, a process- and user-related footprint.
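One simple way to make such a footprint tamper-evident is to hash-chain each provenance entry to its predecessor. The sketch below is illustrative only and is not the cited researchers' scheme:

```python
import hashlib
import json

def append_entry(chain: list, actor: str, action: str, object_id: str) -> None:
    """Append a provenance entry linked to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"actor": actor, "action": action, "object": object_id,
             "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edit to history breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "alice", "create", "doc-42")
append_entry(log, "bob", "modify", "doc-42")
print(verify_chain(log))  # True; altering any past entry makes this False
```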

The researchers also stressed that, in order to ensure the integrity of the data, the provenance itself should be secured, i.e., secure provenance.

Thus the concept of secure provenance should satisfy the requirements of:

1) "unforgeability," and

2) "conditional privacy preservation, where only a trusted authority has the ability to reveal the real identity recorded in the provenance."

The researchers' model proposed a fully secure provenance (SP) scheme for cloud computing, defined as a five-part process:

"A secure provenance scheme SP is defined by the following algorithms: system setup, key generation, anonymous authentication, authorized access, and provenance tracking: Setup, KGen, AnonyAuth, AuthAccess, and ProveTrack."

According to the paper, this system provides "trusted evidence for data forensics in cloud computing." Applied to a real-world cloud ecosystem, if an issue occurs, a system manager (SM) can use the provenance tracking algorithm to reconstruct the provenance chain, resulting in an ability to track a specific user's identity.
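Building on the hypothetical skeleton above, the system manager's investigative step might reduce to a loop like this:

```python
def investigate(scheme, master_key, provenance_chain):
    """Walk a disputed object's provenance and de-anonymize each entry.

    Only the trusted authority holding master_key can do this, which is
    what preserves conditional privacy for everyone else.
    """
    return [scheme.prove_track(master_key, entry) for entry in provenance_chain]
```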

More Stories By Jon Shende

Jon RG Shende is an executive with over 18 years of industry experience. He commenced his career in the medical arena, then moved into the oil and gas environment, where he was introduced to SCADA and network technologies and became certified in industrial pump and valve repairs. Jon gained global experience over his career working within several verticals, including pharma, medical sales and marketing services, and technology services, eventually becoming the youngest VP of an international enterprise. He is a graduate of the University of Oxford, holds a Master's certificate in Business Administration, as well as an MSc in IT Security, specializing in computer crime and forensics with a thesis on security in the cloud. Jon, well versed in the technology startup and mid-sized venture ecosystems, has contributed at the C and Senior Director level for former clients. As an IT security executive, Jon has experience with virtualization, strategy, governance, risk management, continuity and compliance. He was an early adopter of web services and web-based tools, and successfully beta tested remote assistance and support software for a major telecom. Within the realm of sales, marketing and business development, Jon earned commendations for turnaround strategies within the services and pharma industries. For one pharma contract he was responsible for bringing low-performing districts up to number 1 rankings for consecutive quarters, as well as outperforming quotas from 125% up to 314%. Part of this was achieved by working closely with sales and marketing teams to ensure message and product placement were on point. Professionally he is a Fellow of the BCS Chartered Institute for IT, an HITRUST Certified CSF Practitioner, and holds the CITP and CRISC certifications. Jon Shende currently works as a Senior Director for a CSP. A recognized thought leader, Jon has been invited to speak for the SANS Institute, has spoken at Cloud Expo in New York, sat on a panel at Cloud Expo Santa Clara, and has been an Ernst and Young CPE conference speaker. His personal blog is located at http://jonshende.blogspot.com/view/magazine "We are what we repeatedly do. Excellence, therefore, is not an act, but a habit."
