Is Data Classification a Bridge Too Far? | @CloudExpo #API #Cloud #BigData

The rise of data protectionism is now so acute that it threatens to restrict the flow of data across national borders

Today data has replaced money as the global currency for trade.

“McKinsey estimates that about 75 percent of the value added by data flows on the Internet accrues to ‘traditional’ industries, especially via increases in global growth, productivity, and employment. Furthermore, the United Nations Conference on Trade and Development (UNCTAD) estimates that about 50 percent of all traded services are enabled by the technology sector, including by cross-border data flows.”

As the global economy has become fully dependent on the transformative nature of electronic data exchange, its participants have also become more protective of data’s inherent value. The rise of this data protectionism is now so acute that it threatens to restrict the flow of data across national borders. Data-residency requirements, widely used to buffer domestic technology providers from international competition, also tend to introduce delays, costs and limitations to the exchange of commerce in nearly every business sector. The impact is widespread because data protectionism is also driving:

  • Laws and policies that further limit the international exchange of data;
  • Regulatory guidelines and restrictions that limit the use and scope of data collection; and
  • Data security controls that route and allow access to data based on user role, location and access device.
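As an illustration of the last point, a data security control that routes access based on user role, location and access device can be sketched in a few lines of Python. Everything here — the role names, the "managed device" flag and the residency check — is a hypothetical example for illustration, not a reference to any particular product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str     # e.g. "analyst" or "admin" (illustrative role names)
    country: str  # country code of the requester's location
    device: str   # "managed" or "unmanaged" access device

def data_access_allowed(req: AccessRequest, data_residency: str) -> bool:
    """Allow access only for a permitted role, from the country where the
    data must reside, and from a managed device. A hypothetical policy."""
    permitted_roles = {"analyst", "admin"}
    return (req.role in permitted_roles
            and req.country == data_residency
            and req.device == "managed")

# A German analyst on a managed device reading German-resident data
print(data_access_allowed(AccessRequest("analyst", "DE", "managed"), "DE"))
```

In a real deployment these checks would be enforced by an identity-aware proxy or policy engine rather than inline application code, but the shape of the decision — role, location, device — is the same.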

A direct consequence of these changes is that enterprises across every sector now face the challenge of how to classify and label this vital component of commerce.

The data lifecycle

The challenges posed here are immense. Not only is an extremely large amount of data being created every day, but businesses still need to manage and leverage their huge stores of old data. This stored wealth is not static, because every bit of data possesses a lifecycle through which it must be monitored, modified, shared, stored and eventually destroyed. The growing adoption of cloud computing technologies adds even more complexity to this mosaic. Another widely unappreciated reality being highlighted in boardrooms everywhere is how these changes are affecting business risk and internal information technology governance. Broadly lumped under cybersecurity, this domain offers scant legal precedent yet demands headline-driven, rapid-fire business decisions almost daily.
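One way to make the lifecycle described above concrete is to model it as a small state machine, so that data can only move between approved stages and cannot skip a governance step. The stage names and allowed transitions below are an illustrative sketch, not a standard:

```python
from enum import Enum, auto

class Stage(Enum):
    CREATED = auto()
    MONITORED = auto()
    MODIFIED = auto()
    SHARED = auto()
    STORED = auto()
    DESTROYED = auto()

# Hypothetical transition table: e.g. data must return to a monitored
# state after modification or sharing, and only stored data may be destroyed.
ALLOWED = {
    Stage.CREATED:   {Stage.MONITORED},
    Stage.MONITORED: {Stage.MODIFIED, Stage.SHARED, Stage.STORED},
    Stage.MODIFIED:  {Stage.MONITORED},
    Stage.SHARED:    {Stage.MONITORED},
    Stage.STORED:    {Stage.MONITORED, Stage.DESTROYED},
    Stage.DESTROYED: set(),  # terminal state
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a data item to the next lifecycle stage, rejecting
    any transition the governance policy does not allow."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Encoding the lifecycle this way gives auditors a single place to review which movements of data are possible at all.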

To deal with this new reality, enterprises must standardize and optimize the complexity associated with managing data. Success in this task mandates a renewed focus on data classification, data labeling and data loss prevention. Although these data security precautions have historically been glossed over as too expensive or too hard, the penalties and long-term pain associated with a data breach have raised the stakes considerably. According to the Global Commission on Internet Governance, the average financial cost of a single data breach could exceed $12,000,000 [1], which includes:

  • Organizational costs: $6,233,941
  • Detection and escalation costs: $372,272
  • Response costs: $1,511,804
  • Lost business costs: $3,827,732
  • Victim notification costs: $523,965

So is adequate data classification still just simply a bridge too far?

While the competencies required to implement an effective data management program are significant, they are not impossible. Relevant skillsets are, in fact, foundational to the deployment of modern business automation which, in turn, represents the only economical path towards streamlining repeatable processes and reducing manual tasks. Minimum steps include:

  • Improving enterprise awareness around the importance of data classification
  • Abandoning outdated or unrealistic classification schemes in favor of less complex ones
  • Clarifying organizational roles and responsibilities while simultaneously removing those that have been tailored to individuals
  • Focusing on identifying and classifying data, not data sets
  • Adopting and implementing a dynamic classification model [2]
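The last two steps above can be sketched as a minimal rule-driven classifier: it labels individual data values rather than whole data sets, and its rule list can be updated at runtime, which is one simple reading of "dynamic." The labels and patterns are hypothetical examples, far cruder than a production data loss prevention engine:

```python
import re

# Illustrative rules, checked in order of decreasing sensitivity.
RULES = [
    ("restricted",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),    # SSN-like pattern
    ("confidential", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),  # email-like pattern
]

def classify(value: str, rules=RULES) -> str:
    """Return the label of the first matching rule, else 'public'."""
    for label, pattern in rules:
        if pattern.search(value):
            return label
    return "public"

print(classify("123-45-6789"))   # prints restricted
print(classify("hello world"))   # prints public
```

Because the rules are plain data, a governance team can add, reorder or retire them without touching the surrounding code — which is what keeps a classification scheme from calcifying into the "outdated" category.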

The modern enterprise must either build these competencies in-house or work with a trusted third party to move through these steps. Since the importance of data will only increase, the task of implementing a modern data classification and modeling program is destined to become even more business critical.

(This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.)

[1] Global Cyberspace Is Safer Than You Think: Real Trends In Cybercrime, Centre for International Governance Innovation 2015.

[2] Recommended steps adapted from "Rethinking Data Discovery And Data Classification" by Heidi Shey and John Kindervag, October 1, 2014, available from IBM.

Cloud Musings

(Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2015)

More Stories By Kevin Jackson

Kevin Jackson, founder of the GovCloud Network, is an independent technology and business consultant specializing in mission critical solutions. He has served in various senior management positions including VP & GM Cloud Services NJVC, Worldwide Sales Executive for IBM and VP Program Management Office at JP Morgan Chase. His formal education includes MSEE (Computer Engineering), MA National Security & Strategic Studies and a BS Aerospace Engineering. Jackson graduated from the United States Naval Academy in 1979 and retired from the US Navy earning specialties in Space Systems Engineering, Airborne Logistics and Airborne Command and Control. He also served with the National Reconnaissance Office, Operational Support Office, providing tactical support to Navy and Marine Corps forces worldwide. Kevin is the founder and author of “Cloud Musings”, a widely followed blog that focuses on the use of cloud computing by the Federal government. He is also the editor and founder of “Government Cloud Computing” electronic magazine, published at Ulitzer.com.
