

@CloudExpo: Article

Sailing the Seven Cs of Security Monitoring

Establishing alliterative best practices for watching over your IT environment: from continuous to cloud!

What is it your mom used to say? “A watched pot never boils.” That may be true, but a watched pot also never spills; it never lets your younger sister stick her hand in the hot water; it keeps Uncle Jack from tasting before dinner is ready; and if something unforeseen happens, there is time to mitigate the problem.

One of the established best practices in InfoSec is monitoring. People, products and companies get paid a great deal of money and expend a great deal of resources to watch pots. Monitoring is simply the central component of any security initiative. If you don’t watch something, it still happens (trees in the forest fall and still make sounds); you’re simply not aware of it, and so cannot prevent the issue, control the damage, or keep the assets from spiraling beyond your control. Monitoring is the baseline of accountability and responsibility. It provides the information necessary to make risk-based decisions about the assets supporting core missions and business functions.

But as with all best practices, there are variables. How much should you monitor? Which priorities matter? Where are your greatest vulnerabilities? To that end, I have boiled monitoring down to seven best practices… the seven Cs of security monitoring:

  1. Consistency
  2. Continuous
  3. Correlation
  4. Context
  5. Compliance
  6. Centralization
  7. Cloud

Consistency: Every company is different. Each has its own threshold of organizational risk. A credit union or health clinic is much more likely to need a higher bar than an air-conditioning and heating contractor. That doesn’t mean the smaller company can ignore risk; it simply means the levels and layers that require monitoring are (typically) less complex. The key to consistency is process. And to define a process, you must first define a strategy, agree on measures and metrics, and follow through with a monitoring program. Start by understanding how your users interact with the network and the various risks that poses. Once you know what needs to be monitored, and the baselines (risk tolerance) that determine what constitutes an alert or other suspicious activity, you can build a program, standardize the configuration, and analyze the results to make adjustments. From there it is wash, rinse and repeat.

Recently the Department of Homeland Security’s director of federal network resilience noted that as you standardize configurations, networks not only become more secure, they also cost less to operate. “There is almost a trifecta of controlling cost, increasing service and improving security,” he said.

Continuous: Hackers don’t sleep, so why should your security? It is well understood that continuous monitoring is the best way to prevent breaches, discover anomalies and control assets. However, opinions differ on what “continuous” means. Should you hire a dedicated analyst to watch every ping, blip and log? Post guards armed with wiener dog lasers in front of your server room? Of course not. Here, our working definition of “continuous” is unique to every organization and must be commensurate with its risk and resources. NIST (the National Institute of Standards and Technology) recommends an ongoing “frequency sufficient to support risk-based security decisions as needed to adequately protect organization information.” Despite the vagueness of that statement, the goal must nonetheless be 24/7/365 coverage. Achieving that degree of continuity requires a series of automated processes and controls combined with the expertise to analyze vulnerabilities and initiate action. The linchpin of an effective round-the-clock strategy is that it operates in real time (see the “C” for cloud, which shows how this approach can be affordable, efficient and manageable). If there are issues, as you define them, you get the alerts immediately, not a week later while combing through log transcripts. Continuous monitoring is as much about proactivity as it is about response: that immediacy of action is what mitigates a potential threat.

Continuous monitoring has been identified by NIST and the SANS 20 Critical Security Controls as key to reducing risk in IT environments. I am not saying continuous monitoring is a silver bullet, but it certainly lessens the possibility of attack, carelessness and operational failure.
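As a rough illustration of what real-time, round-the-clock alerting means in practice, here is a minimal Python sketch. The threshold, window and event format are all hypothetical; the point is that an alert fires the moment a limit is crossed, not a week later during a log review:

```python
from collections import deque

# Hypothetical rule: more than 5 failed logins from one IP
# within a 60-second window triggers an alert.
FAILED_LOGIN_LIMIT = 5
WINDOW_SECONDS = 60

def monitor(event_stream):
    """Continuously scan an iterable of (timestamp, ip, outcome)
    events and yield an alert as soon as the threshold is crossed."""
    recent_failures = {}  # ip -> deque of failure timestamps
    for ts, ip, outcome in event_stream:
        if outcome != "failure":
            continue
        window = recent_failures.setdefault(ip, deque())
        window.append(ts)
        # Drop failures that have aged out of the time window.
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > FAILED_LOGIN_LIMIT:
            yield {"ip": ip, "failures": len(window), "at": ts}

# Usage: six failures in quick succession from one address.
events = [(i, "203.0.113.9", "failure") for i in range(6)]
alerts = list(monitor(events))
print(alerts[0]["ip"])  # 203.0.113.9
```

In a real deployment the event stream would come from log collectors rather than a list, but the shape of the loop is the same: evaluate each event against the baseline as it arrives.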

Correlation: In the modern enterprise there are simply too many silos of information, too many endpoints for access, too many variables of risk, and not enough visibility or resources to properly protect all of an enterprise’s assets. Monitoring in its simplest form looks at one silo, one application: it examines possible events, or log-ins, or credentials. To be more effective, all of these resources need to collaborate tightly. That expands visibility and creates a more accurate view of all online and network assets. Correlation ties together the cooperative capabilities of tools such as SIEM, log management, identity and access management, malware scanning and so on. If security is about maintaining visibility, correlation is its magnifying glass; or, to mix metaphors, it is the camera lens that brings a blurry image into sharp focus. Good correlation, for example, removes the specter of false positives. Consider: entitlement data from an access management feature set feeds the correlation engine of a SIEM to help distinguish authorized access from suspicious activity. The resulting alerts happen in real time and direct the response necessary to remediate any issue. All of this detail is also recorded historically, through log management, for reporting and compliance purposes.

Correlation is rooted in consistency. You first need to know the landscape in order to create the rules, and the rules of correlation create the baseline for managing a consistent initiative. This also goes a long way toward underscoring the next two Cs: context and compliance.
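To make the correlation idea concrete, here is a minimal Python sketch that checks log events against entitlement data from an identity and access management system. The users, resources and entitlement sets are hypothetical; the technique is what matters — cross-referencing two silos turns raw log volume into a short, accurate alert list:

```python
# Hypothetical data: entitlements come from an IAM system,
# events come from log management / SIEM collection.
entitlements = {
    "mike": {"crm", "email"},
    "dana": {"crm", "email", "billing"},
}

events = [
    {"user": "mike", "resource": "email"},    # entitled: no alert
    {"user": "mike", "resource": "billing"},  # not entitled
    {"user": "eve",  "resource": "crm"},      # unknown account
]

def correlate(events, entitlements):
    """Flag log events that fall outside a user's known entitlements,
    distinguishing authorized access from suspicious activity."""
    alerts = []
    for e in events:
        allowed = entitlements.get(e["user"], set())
        if e["resource"] not in allowed:
            alerts.append(e)
    return alerts

for a in correlate(events, entitlements):
    print(a["user"], a["resource"])
# mike billing
# eve crm
```

A production correlation engine would apply many such rules across many feeds, but each rule is essentially this join: one silo supplies the context that makes another silo’s events meaningful.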

Context: Automation can make continuous monitoring more cost-effective, consistent and efficient. But continuous monitoring without intelligence simply produces more data. For example, the network processes an application login request with an approved user name and password. That in itself is unremarkable. However, the IP address doesn’t match the user’s usual location or the device’s usual behavior: this one is coming from Zagreb. Is Mike from sales in Zagreb? The system says no, because only four short hours ago he logged off from an office in Denver. This situational awareness raises a red flag and escalates an alert. And because this happens in real time, IT catches the activity and is able to block access.
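The Zagreb scenario is a classic “impossible travel” check, and it can be sketched in a few lines of Python. The coordinates, timestamps and speed limit below are illustrative assumptions, not a production rule:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * asin(sqrt(a))  # Earth radius ~6371 km

def impossible_travel(prev, curr, max_speed_kmh=900):
    """Flag a login if reaching its location from the previous login
    would require travelling faster than a commercial airliner."""
    km = distance_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    hours = (curr["ts"] - prev["ts"]) / 3600
    return hours > 0 and km / hours > max_speed_kmh

# Mike logs off in Denver, then "logs in" from Zagreb 4 hours later.
denver = {"lat": 39.74, "lon": -104.99, "ts": 0}
zagreb = {"lat": 45.81, "lon": 15.98, "ts": 4 * 3600}
print(impossible_travel(denver, zagreb))  # True
```

Denver to Zagreb is roughly 8,800 km; covering it in four hours would require over 2,000 km/h, so the login is flagged. Context rules like this are what turn a valid credential into a suspicious event.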

Compliance: The common thread in the alphabet soup of compliance (HIPAA, PCI, FISMA, FFIEC, CIP, SOX, etc.) is the need to know who is logging in and what assets they are accessing, and to ensure that only appropriately credentialed users can do those things. When you deal with sensitive information such as credit card numbers, Social Security numbers and patient histories or records, a strong and continuous monitoring initiative is not just a way to avoid fines; it is the basis of a good and trustworthy operation.

So much has been written about compliance and network security that all I will add is this: understand the responsibility you have toward customers, partners, employees and users; accurately calculate the risk of maintaining their information; and vigilantly maintain the monitoring process that makes you a good steward of their trust. A solid monitoring strategy will, of course, also provide industry regulators with the reporting and evidence of your compliance.

Centralization: With all the moving parts (the silos, device types and elements to monitor), a security infrastructure without a means of centralization becomes disjointed, uncoordinated and considerably harder to manage. The continual increase in daily network threats and attacks makes it challenging not only to maintain a complex heterogeneous environment but also to ensure compliance by deploying network-wide security policies. The ability to forensically analyze the infrastructure through a single pane of glass is not just a convenience; it seals up the cracks where vulnerabilities hide.
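A minimal sketch of the centralization idea, assuming each silo emits events already sorted by timestamp: Python’s standard heapq.merge can lazily interleave the feeds into one time-ordered stream, which is the starting point for any single-pane-of-glass view. The feeds and messages here are hypothetical:

```python
import heapq

# Hypothetical per-silo event feeds, each sorted by timestamp.
firewall = [(1, "firewall", "deny 203.0.113.9"),
            (7, "firewall", "deny 203.0.113.9")]
auth     = [(2, "auth", "failed login mike"),
            (5, "auth", "failed login mike")]
endpoint = [(3, "endpoint", "usb device attached")]

# heapq.merge interleaves the sorted feeds by comparing tuples,
# so the first element (the timestamp) drives the ordering.
timeline = list(heapq.merge(firewall, auth, endpoint))
for ts, source, message in timeline:
    print(ts, source, message)
```

Once the silos share one timeline, the correlation and context rules described earlier have a single place to look, instead of three consoles to flip between.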

Cloud: Best-practice monitoring requires more than a pair of eyes. The strategy includes investment in a variety of solutions, tools, servers, analysts and more. For many companies this is not tenable in terms of human resources, budgets or core competencies. That is why continuous monitoring from the cloud (aka security-as-a-service) is the great equalizer. Through cloud-based security, a small health clinic in Bozeman, Montana can wield the same enterprise capabilities as New York Presbyterian. The only difference is the scale necessary for a strong deployment and a sustainable initiative.

Addressing the issue from the cloud solves several pressing problems while providing the heft needed to govern credentialing policies, remediate threats and satisfy compliance requirements across an enterprise of any size. What’s more, all the solutions noted above, from SIEM to access management, are available from the cloud, and a few providers can harness them collectively and centralize them under that single pane of glass.

As you embark to set sail on the 7 Cs, leave a note for your mother to watch the pot.

Kevin Nikkhoo
Captain of Continuous Monitoring

More Stories By Kevin Nikkhoo

With more than 32 years of experience in information technology and an extensive and successful entrepreneurial background, Kevin Nikkhoo is the CEO of the dynamic security-as-a-service startup CloudAccess. CloudAccess is at the forefront of the latest evolution of IT asset protection: the cloud.

Kevin holds a Bachelor of Science in Computer Engineering from McGill University, a master's degree in Computer Engineering from California State University, Los Angeles, and an MBA from the University of Southern California with an emphasis in entrepreneurial studies.
