DDoS protection market

Started by Лара, Aug 29, 2022, 07:33 AM


Лара (topic starter)

The DDoS protection market and the attack-mitigation technologies operators use are still fairly closed.
I'll share what I've learned about it while maintaining sites and internet web services that have been under continuous attack for the past few years.
The first attacks appeared almost simultaneously with the internet itself, but DDoS as a phenomenon became massive in the late 2000s.



Since about 2016–2017, almost all web hosting providers have been behind DDoS protection, as have most notable sites in competitive niches.

If 10–20 years ago most attacks could be repelled on the server itself, things are harder now.
First, briefly about the types of attacks.

Types of DDoS attacks in terms of choosing a protection operator

Attacks at the L3 / L4 level (according to the OSI model)

    UDP flood from a botnet (many requests are sent directly from infected devices to the attacked service, flooding the channel to the servers);
    DNS/NTP/etc. amplification (many requests are sent from infected devices to vulnerable DNS/NTP/etc. servers with a spoofed source address, and a cloud of response packets floods the victim's channel; this is how the most massive attacks on the modern internet are carried out);
    SYN/ACK flood (many connection-establishment requests are sent to the attacked servers, overflowing the connection queue);
    packet fragmentation attacks, ping of death, ping flood (google plz);
    etc.


These attacks aim to "flood" the channel to the server or "kill" its ability to accept new traffic.
Although SYN/ACK flooding and amplification are very different, many companies handle both equally well. Problems arise with the next group of attacks.
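To see why amplification is so dangerous, the arithmetic is worth spelling out. A rough sketch in Python (the 60-byte query and 3000-byte response are illustrative figures; real sizes vary by protocol and query type):

```python
# Back-of-the-envelope amplification math; the figures below are
# illustrative assumptions, not measurements of any real resolver.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """How many bytes the victim receives per byte the attacker sends."""
    return response_bytes / request_bytes

def flood_gbps(attacker_gbps: float, factor: float) -> float:
    """Traffic hitting the victim's channel, in Gbit/s."""
    return attacker_gbps * factor

# A small DNS query (~60 bytes) can trigger a ~3000-byte response:
factor = amplification_factor(60, 3000)   # 50x amplification
victim_load = flood_gbps(1.0, factor)     # 1 Gbit/s of spoofed queries -> 50 Gbit/s
```

This is why the attacker's own bandwidth matters far less than the pool of vulnerable reflectors available to them.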

Attacks on L7 (Application Layer)

    HTTP flood (when a website or some HTTP API is attacked);
    attacks on vulnerable parts of a website (pages with no cache, ones that load the site very heavily, etc.).


The goal is to make the server "work hard", processing a flood of "seemingly real requests" until no resources remain for genuine ones.
There are other attacks, but these are the most common.
Serious L7 attacks are crafted uniquely for each targeted web project.
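As a minimal illustration of the most basic HTTP-flood countermeasure, here is a per-client token-bucket rate limiter in Python. The rate and burst values are arbitrary assumptions, and real protection stacks layer many such checks:

```python
import time

class TokenBucket:
    """Per-client token bucket: a crude first line of defense against
    HTTP flood. Parameters are illustrative: 10 req/s sustained,
    bursts of up to 20."""

    def __init__(self, rate: float = 10.0, burst: float = 20.0):
        self.rate = rate
        self.burst = burst
        self.state = {}  # client_ip -> (tokens, last_seen_timestamp)

    def allow(self, client_ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(client_ip, (self.burst, now))
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[client_ip] = (tokens - 1.0, now)
            return True
        self.state[client_ip] = (tokens, now)
        return False
```

A limiter like this only stops the dumbest floods; it says nothing about whether the requests themselves are "real", which is exactly the hard part of L7 protection.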

Why two groups?

Because many operators are good at repelling L3/L4 attacks but either do not take on application-level (L7) protection at all, or are still weaker at it than their competitors.

Who's Who in the DDoS Protection Market

(my personal view)

L3/L4 protection

To repel amplification attacks (which clog the server's channel), wide channels are enough: many protection services peer with most large backbone providers and have channels with a theoretical capacity of more than 1 Tbit/s. Keep in mind that amplification attacks very rarely last longer than an hour. If you are Spamhaus and everyone hates you, then yes, they may try to take down your channels for several days, even at the risk of burning the global botnet being used.

To repel SYN/ACK floods and packet-fragmentation attacks, you need equipment or software systems that detect and cut off such traffic.
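One naive way such systems can detect a SYN flood is to count half-open handshakes per source: a flooder sends far more SYNs than completing ACKs. A toy sketch (the event format and threshold are my own assumptions, not how any vendor's appliance actually works):

```python
from collections import Counter

def flag_syn_flood(events, threshold=100):
    """events: iterable of (src_ip, kind), kind in {'SYN', 'ACK'}.
    Tracks half-open handshakes per source and flags sources whose
    unanswered SYN count exceeds an assumed tuning threshold."""
    half_open = Counter()
    for src, kind in events:
        if kind == "SYN":
            half_open[src] += 1
        elif kind == "ACK" and half_open[src] > 0:
            half_open[src] -= 1  # handshake completed
    return {src for src, n in half_open.items() if n >= threshold}
```

Real mitigations (SYN cookies, hardware scrubbing) avoid keeping per-source state at all, precisely because a flood is designed to exhaust it.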

Many vendors produce such equipment (Arbor; solutions from Cisco and Huawei; software implementations such as Wanguard; etc.), and many backbone operators have already installed it and sell DDoS protection services. Some companies develop their own software solutions (technologies like DPDK allow processing tens of gigabits of traffic per second on a single physical x86 machine).

Among the well-known players, everyone can repel L3/L4 DDoS more or less effectively. I won't say who has the greatest maximum channel capacity (that is insider information), but usually it is not that important; the main difference is how quickly the protection kicks in (instantly, or after a few minutes of project downtime, as with Hetzner).

The question is how well it is done: an amplification attack can be repelled by bluntly blocking traffic from the countries generating the most harmful traffic, or only the genuinely unwanted traffic can be discarded.
Some operators sell a separate "L3/L4 protection" or "channel protection" service; it costs much less than protection at all levels.

L7 security (application layer)

Only a handful of operators can repel L7 (application-level) attacks consistently and well.
I have solid real-world experience with:

    qrator.net;
    DDoS Guard;
    G-Core Labs;
    Kaspersky.

Protection at this level is very expensive. In future topics I can explain how to design applications so as to save substantially on protection channel capacity.

The real "king of the hill" is Qrator.net; the rest lag somewhat behind. So far, Qrator is the only one in my practice with a false-positive rate close to zero, but they are also several times more expensive than the other market players.

The other operators also provide high-quality, stable protection. Many services we support (including some very well known in the country!) are protected by DDoS-Guard and G-Core Labs and are quite satisfied with the result; I can recommend them.

There is also some experience with small protection operators like ddosa.net, etc. I can't confidently recommend them because that experience is limited, but I'll describe how they operate. Their protection often costs one to two orders of magnitude less than the major players'. As a rule, they buy a partial (L3/L4) protection service from one of the larger players and build their own protection against web attacks at the higher levels. This can be quite effective, and you can get decent service for less money, but keep in mind these are still small companies with small staffs.

CloudFlare

CloudFlare is a separate phenomenon. They are already a huge multi-billion-dollar company, their customers generate a large share of the world's web traffic, and DDoS protection is simply the best known of their services. We also use them constantly for DNS hosting, CDN, and traffic proxying.


Security operators

What is the difficulty of repelling attacks at the L7 level?
All applications are unique, and you need to allow traffic that is useful for them and block harmful traffic. It is not always possible to unequivocally weed out bots, so you have to use many, really MANY levels of traffic cleaning.
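The "many levels of cleaning" idea can be sketched as a pipeline of filters ordered from cheap to expensive: each stage either drops a request or passes it on. Every stage name and rule below is illustrative, not any operator's real stack:

```python
# Toy multi-stage traffic-cleaning pipeline. A request is modeled as a
# dict; real systems inspect packets, TLS fingerprints, behavior, etc.

def geo_filter(req):
    # Assumed policy: drop traffic from blocklisted regions ("XX" is a
    # placeholder country code).
    return req.get("country") not in {"XX"}

def rate_filter(req):
    # Assumed per-IP rate cap of 50 requests/second.
    return req.get("rps", 0) <= 50

def cookie_filter(req):
    # Did the client pass a JS cookie challenge (testcookie-style)?
    return req.get("js_cookie_ok", False)

PIPELINE = [geo_filter, rate_filter, cookie_filter]

def clean(req: dict) -> bool:
    """True if the request survives every filtering level."""
    return all(stage(req) for stage in PIPELINE)
```

A real deployment has tens or hundreds of such stages, and the hard part is tuning them per client so legitimate traffic survives.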

At one time the nginx-testcookie module was enough, and even now it repels a large share of web attacks. When I worked in the hosting industry, my L7 protection was built around nginx-testcookie.

Alas, attacks have become harder. testcookie relies on JS-based bot checks, and many modern bots can pass them successfully.

Attacking botnets are also unique, and each large botnet has to be handled individually.
Amplification, direct flood from a botnet, per-country traffic filtering (with different rules for different countries), SYN/ACK flood, packet fragmentation, ICMP, HTTP flood — and at the application/HTTP level an unlimited number of distinct web attacks can be invented.

In total, counting channel protection, specialized traffic-scrubbing equipment, special software, and additional per-client filtering settings, there can be tens or hundreds of filtering levels.

To manage all this properly and tune filtering correctly for different users takes a lot of experience and qualified staff. Even a large operator that decides to offer protection services cannot simply "throw money at the problem": experience has to be earned on downed sites and on false positives against legitimate traffic.

There is no "repel DDoS" button for a protection operator; there is a large set of tools, and you have to know how to use them.

L3/L4 attacks and protection against them are more straightforward; they mainly depend on channel capacity and on the attack detection and filtering algorithms.

L7 attacks are more complex and original; they depend on the attacked application and on the attackers' capabilities and imagination. Protecting against them requires deep knowledge and experience, and the result may be neither instant nor one hundred percent — at least until Google comes up with yet another neural network for protection.

BrettUK

Blocking at finer granularity than an IP address is a myth. It is feasible in simple cases (roughly speaking, when the bot does not use a browser), but in the general case it does not work and, on the contrary, slows the response to an attack.

In practice, with IPv4 (never mind IPv6), blocking at IP-address precision causes problems for real users only in rare, degenerate cases, which are better and more correctly solved individually as they arise.

ufobm

For a significant share of clients, the User-Agent matches whatever Chrome version is current on a given date. If a bot uses the same browser (or pretends to), you will miss it.
Moreover, you effectively let the bot create tracking records on your side by iterating over User-Agents, which will either exhaust your memory for those records or force you to block all new records from an IP — that is, in effect, the entire IP address.
Behavior-based analysis cannot be done from a single request; you need a history of behavior. At the same time, if you try to ban at finer granularity than the IP address, the bot can start imitating a crowd of users, each sending its very first request to the site, which again leads either to exhausting your memory or to blocking the IP address entirely.
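A standard way to keep per-client state from being exhausted by fabricated identities is a hard cap with least-recently-used eviction: the tracking table can never grow past a fixed size, at the price of forgetting idle clients. A sketch (max_entries is an arbitrary assumption):

```python
from collections import OrderedDict

class BoundedTracker:
    """Per-client hit counts with a hard memory cap. When the table is
    full, the least-recently-seen entry is evicted, so a bot inventing
    endless identities cannot exhaust memory; the trade-off is that
    idle legitimate clients are forgotten."""

    def __init__(self, max_entries: int = 100_000):
        self.max_entries = max_entries
        self.table = OrderedDict()  # client_id -> hit count

    def hit(self, client_id: str) -> int:
        count = self.table.pop(client_id, 0) + 1
        self.table[client_id] = count          # re-insert as most recent
        if len(self.table) > self.max_entries:
            self.table.popitem(last=False)     # evict least-recently-seen
        return count
```

Note this only bounds memory; it does not by itself solve the underlying problem the reply describes, since a crowd of fresh identities still looks like a crowd of first-time users.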

A bot's "first request" can be cut off only if the bot is not browser-like. That is not "advanced"; it is just the simplest filtering option, which almost everyone has. But it generally does not work against browser bots.