Beyond Walls: Modern Security Detection | @CloudExpo #Cloud #Security

Our walls of security prevention are actually being surmounted every day – we just don’t always know it

Assaults from within the network, as well as zero-day threats, are driving a new class of solutions referred to as "advanced threat detection" (ATD). ATD adds real-time packet capture and analysis to the monitoring of logs and NetFlow information, and records packet capture data for near-real-time and post-incident analysis. By analyzing data traffic, it is possible to build a profile of normal network behavior and compare it against live or recorded data to detect anomalies. The resulting alerts can be checked against data from security prevention solutions to assess whether an attack is underway or, conversely, to identify false positives.
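
To make the baseline idea concrete, here is a minimal sketch of profile-based anomaly detection, written in Python with only the standard library. The traffic values, interval size and threshold are illustrative assumptions, not part of any particular ATD product.

```python
# A minimal sketch of baseline-based anomaly detection on traffic volumes.
# The sample values below are illustrative stand-ins for per-interval byte
# counts that would normally come from packet capture or NetFlow records.
import statistics

def build_baseline(samples):
    """Profile 'normal' behavior as a mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag an interval whose volume deviates too far from the baseline."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical per-minute byte counts observed during normal operation.
normal_traffic = [52_000, 48_500, 51_200, 49_800, 50_300, 47_900, 53_100]
baseline = build_baseline(normal_traffic)

# Compare live (or recorded) intervals against the profile.
for observed in [50_700, 49_100, 240_000]:   # last value simulates a spike
    status = "ANOMALY" if is_anomalous(observed, baseline) else "normal"
    print(f"{observed:>8} bytes/min -> {status}")
```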

The foundation for solutions like this is continuous monitoring and analysis, not just of logs and NetFlow data but of the packets themselves. Packet capture and network traffic analysis underpin every security detection solution, so an efficient, reliable detection infrastructure is paramount.
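
As a rough illustration of continuous packet monitoring, the sketch below uses the open-source scapy library to count bytes per source address as packets arrive. Scapy is just one convenient capture tool, not tied to any vendor or product discussed here, and the example assumes it is installed and that the process has permission to sniff traffic.

```python
# A minimal sketch of continuous packet monitoring (assumes: pip install scapy
# and sufficient privileges to capture on the default interface).
from collections import Counter
from scapy.all import sniff, IP

byte_counts = Counter()

def account(pkt):
    """Accumulate per-source byte counts as packets arrive."""
    if IP in pkt:
        byte_counts[pkt[IP].src] += len(pkt)

# store=False keeps memory flat so the capture can run continuously;
# count=100 simply bounds this example.
sniff(prn=account, store=False, count=100)

for src, total in byte_counts.most_common(5):
    print(f"{src:<15} {total} bytes")
```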

Here are a few suggestions for what to demand of your detection infrastructure:

  1. The ability to capture all traffic, all the time, without losing any data. This requires solutions with the capacity and speed to handle full theoretical throughput: not just to keep pace with normal load, but also to withstand the data deluges that can be instigated as part of an orchestrated attack.
  2. The ability to analyze the data in real time, but also in near-real time and after the fact. This requires capturing data to disk reliably, at full line rate, without losing any packets (see the record-and-replay sketch after this list).
  3. The ability to go back and understand when and where a breach occurred is fundamental, and it requires replaying what happened on the network exactly as it happened. With the average cost of a breach for a typical organization exceeding $3 million, on top of the damage to reputations and executive careers, this capability can be justified as an investment in self-preservation.
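
The record-and-replay sketch referenced in point 2 is shown below. It again uses scapy purely as an illustration: a production system would capture to disk at full line rate with dedicated hardware, while this only demonstrates the record-then-analyze workflow. The file name and packet counts are arbitrary.

```python
# A minimal sketch of capture-to-disk and after-the-fact analysis,
# using scapy as an illustrative (not vendor-specific) capture tool.
from scapy.all import sniff, wrpcap, rdpcap, IP

# 1. Record: capture a bounded number of packets and persist them to disk.
recorded = sniff(count=50)
wrpcap("capture.pcap", recorded)

# 2. Replay/analyze: read the stored capture back for post-incident analysis.
packets = rdpcap("capture.pcap")
talkers = {}
for pkt in packets:
    if IP in pkt:
        pair = (pkt[IP].src, pkt[IP].dst)
        talkers[pair] = talkers.get(pair, 0) + 1

# Rank conversations by packet count to see who was talking to whom.
for (src, dst), count in sorted(talkers.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {count} packets")
```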

Attacks from within and internal vulnerabilities that no one could have dreamed of until recently now dictate a new strategy. A combined approach that captures all network data, continuously monitors it and uses automated tools to correlate alerts will provide the security detection and prevention that walls alone no longer can.
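
As a simple illustration of automated alert correlation, the sketch below matches detection alerts against prevention-system events by source address within a time window. The record formats, field names and window size are hypothetical, chosen only to show the idea.

```python
# A minimal sketch of automated alert correlation: detection alerts are
# matched against prevention-system events by source IP within a time
# window. All records and field names here are hypothetical examples.
from datetime import datetime, timedelta

detection_alerts = [
    {"src": "10.0.0.7",  "time": datetime(2017, 6, 1, 10, 15), "sig": "port-scan"},
    {"src": "10.0.0.42", "time": datetime(2017, 6, 1, 10, 20), "sig": "odd-dns"},
]
prevention_events = [
    {"src": "10.0.0.7", "time": datetime(2017, 6, 1, 10, 14), "action": "blocked"},
]

WINDOW = timedelta(minutes=10)

for alert in detection_alerts:
    matches = [
        ev for ev in prevention_events
        if ev["src"] == alert["src"]
        and abs(ev["time"] - alert["time"]) <= WINDOW
    ]
    verdict = "corroborated by prevention data" if matches else "possible false positive"
    print(f"{alert['src']} {alert['sig']}: {verdict}")
```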

More Stories By Daniel Joseph Barry

Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has over 20 years' experience in the IT and Telecom industry. Prior to joining Napatech in 2009, he was Marketing Director at TPACK, a leading supplier of transport chip solutions to the Telecom sector.

From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx) following various positions in product development, business development and product management at Ericsson. He joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
