An Overview of DDoS Attacks | @CloudExpo #Cloud #Security #DataCenter

Cloud computing provides access to multiple virtual machines, many of which have unique public-facing IP addresses

Powerful denial-of-service attacks are becoming increasingly common. In a Distributed Denial of Service (DDoS) attack, the attacker uses multiple machines to flood the target’s resources, overwhelming it and denying legitimate users access to the service. The DDoS attack on Dyn in October 2016 was one of the most powerful attacks in history. Many DDoS attacks can be absorbed to a large extent by increasing the system’s capacity during the attack, but that is not a real solution because it still causes monetary losses.

DDoS attacks typically employ three methods:

  1. Finding a vulnerability in the victim’s service
  2. Finding an endpoint in the victim’s service that is computationally heavy (for example, a login system that uses a slow hashing algorithm) but has no rate limit, and abusing it (see the sketch after this list)
  3. Depleting the available bandwidth of the victim’s service
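To illustrate the second point, a per-client rate limit in front of the expensive endpoint removes most of the attacker’s leverage. Here is a minimal sketch of a sliding-window limiter in Python; the class name, limits, and example client ID are illustrative assumptions, not part of the original post.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client_id -> timestamps of recent requests

    def allow(self, client_id):
        now = time.monotonic()
        q = self.hits[client_id]
        # Drop timestamps that have fallen outside the current window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # reject before running the expensive password hash
        q.append(now)
        return True

limiter = RateLimiter(limit=5, window=60.0)
if limiter.allow("203.0.113.7"):
    pass  # only now run the slow hashing / login check
```

Rejecting over-limit clients before the password hash runs keeps the expensive work, not just the bandwidth, behind the limiter.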

Cloud computing provides access to multiple virtual machines, many of which have unique public-facing IP addresses. These virtual machines, however, are a goldmine of “zombies” for an attacker. Attackers constantly run automated scans across the internet for easily exploitable machines and put them to use as “zombie machines” in an attack. These machines form part of botnets, which are available for lease on the dark web for anyone willing to go down that path! Botnets as a Service (BaaS?) are a reality now.

Cloud providers are very active in taming these bots and will lock down any virtual machine that exhibits abnormal behavior. Such botnets are hard to detect, though, because they can remain dormant for long periods and take turns in collaborative attacks, never raising suspicion about their behavior. The attacker’s goal is simply to find machines with known vulnerabilities; it is absurd how many machines are still left open in the wild to be exploited.
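As a rough illustration of what flagging “abnormal behavior” might look like, the sketch below marks a virtual machine whose outbound traffic rate jumps an order of magnitude above its own recent baseline. The function names and thresholds are assumptions for illustration, not any provider’s actual detection logic.

```python
from collections import deque

def make_egress_monitor(history_len=60, spike_factor=10.0, min_baseline=1e6):
    """Return a checker that flags a VM whose egress rate (bytes/sec)
    greatly exceeds its own recent average."""
    samples = deque(maxlen=history_len)

    def check(bytes_per_sec):
        # Baseline is the recent average, floored so idle VMs are not over-flagged.
        baseline = max(sum(samples) / len(samples), min_baseline) if samples else min_baseline
        samples.append(bytes_per_sec)
        return bytes_per_sec > spike_factor * baseline

    return check

monitor = make_egress_monitor()
for rate in [5e5, 6e5, 5.5e5, 2e8]:  # last sample simulates a sudden flood
    if monitor(rate):
        print("VM flagged for review: egress rate", rate)
```

Real detection is of course far more sophisticated (it looks at connection patterns, not just volume), which is exactly why dormant, slow-moving bots are hard to catch.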

Some scan strategies employed by attackers are:

  1. Random scan: In this strategy, many hosts scan the whole IPv4 address space at random. The IPv6 space is protected from this type of scan because it is far too large to sweep effectively (see the back-of-envelope sketch after this list).
  2. Hitlist scan: This scan works on a list of machines that the attacker wants to pwn. When a vulnerable machine is detected and pwned, the attacker sends a part of the list to that machine to operate on.
  3. Route-based scan: This type of scan uses BGP routing prefixes to reduce the address space.
  4. Divide and Conquer scan: In this strategy, different hosts act on different parts of the address space.
  5. Topological scan: This scan uses information found on the compromised host to select new targets. Email worms typically work this way, exploiting the victim’s address book.
  6. Local subnet scan: This type of scan uses any of the techniques above to find vulnerable machines on the local network of a compromised host. It is effective against machines in the subnet that do not have a publicly accessible IP address and therefore cannot be attacked directly.
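To back up the first point, here is a quick back-of-envelope calculation (my own illustration; the botnet size and probe rate are assumptions) showing why a random sweep is practical for IPv4 but hopeless for IPv6:

```python
# Rough feasibility check: how long would a full random sweep take?
ipv4_space = 2 ** 32            # ~4.3 billion addresses
ipv6_space = 2 ** 128           # ~3.4e38 addresses

bots = 100_000                  # assumed botnet size
probes_per_bot_per_sec = 1_000  # assumed probe rate per bot
total_rate = bots * probes_per_bot_per_sec

seconds_per_year = 3600 * 24 * 365.25
print(f"IPv4 sweep: ~{ipv4_space / total_rate:.0f} seconds")
print(f"IPv6 sweep: ~{ipv6_space / total_rate / seconds_per_year:.1e} years")
```

Even with these generous assumptions, IPv4 falls in under a minute while IPv6 would take on the order of 10^23 years, which is why scanning for IPv6 targets relies on hitlists and other targeted strategies instead.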

It is also important to make a distinction between the hosts and the victims in a DDoS attack. Hosts are usually used as “zombies” to attack someone else, but sometimes the hosts themselves are the targets. With control over a host, the attacker can:

  1. Use fork() bombs, in which a process is made to replicate itself continually until the machine freezes up (a simple defensive counter-measure is sketched after this list)
  2. Intentionally generate errors to fill up the logs, depleting disk space until the application crashes
  3. Execute a simple shutdown, taking that node down.
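As a small aside on the first item, most operating systems let you cap how many processes a user context may create, which blunts fork bombs. Below is a minimal sketch using Python’s standard resource module on Linux; the ceiling of 512 is an arbitrary illustrative value, not a recommendation.

```python
import resource

# Read the current process-count limits for this user context (Linux only).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

cap = 512  # arbitrary illustrative ceiling
if hard != resource.RLIM_INFINITY:
    cap = min(cap, hard)  # the soft limit may never exceed the hard limit

# With the cap in place, a runaway fork() loop fails with EAGAIN
# instead of exhausting the machine's process table.
resource.setrlimit(resource.RLIMIT_NPROC, (cap, hard))
```

Similar caps exist for open files and disk quotas, and routine log rotation addresses the second item.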

DDoS attacks are becoming a menace. As difficult as it is, safeguarding web services against such attacks should be a top priority for any sysadmin. Several steps can be taken to mitigate them, such as subscribing to more bandwidth, upstream blackholing, or using a third-party frontline defense like Cloudflare.

More Stories By Harry Trott

Harry Trott is an IT consultant from Perth, WA. He is currently working on a long term project in Bangalore, India. Harry has over 7 years of work experience on cloud and networking based projects. He is also working on a SaaS based startup which is currently in stealth mode.
