How the Internet Comes to a Data Center

Started by JohnS, Aug 18, 2022, 02:55 AM


JohnS (topic starter)

A VDS hosting user asked us where the Internet actually comes from. We then realized that this is one of those childlike questions that is not so easy to answer properly.

I think 99% of people would answer: it just arrives — and only then start thinking about it. It is so natural and obvious that it is usually simply not discussed. Well, you know: there is fiber, it comes to the building, there is a box in the building, and copper comes out of the box. That is pretty much how it works.

In theory this is true, but in practice it looks a little different, and there are nuances. Let's show where the Internet comes from and what it arrives in, and at the same time talk about where traffic is filtered and how it all works at a hosting provider.

This is direct dark fiber; we do not use any wavelength-division multiplexing (DWDM). If we need more capacity, the providers will simply allocate more fibers to us, and the ring has plenty of headroom for that: there are still many free strands.
All these optical links then come to the switching core, which consists of Juniper switches. The outputs from the core are connected to the rack switches, which in turn connect the servers that host your virtual machines.

That is, an optical cable with 32 strands of dark fiber comes to the switching core, most of which is reserve for expansion (the cable is pulled once anyway, and its cost is negligible compared to the cost of laying it and of the communication contract). Magic happens in the switching core, which then fans the Internet out over local optics to the rack switches. Each server has two uplinks: one from its own rack switch and one from a neighboring switch, in case its own fails. The rack switches add some switching magic of their own and also convert the optical cable to copper, which plugs into the servers' LAN ports.
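On the server side, two uplinks like these are typically combined into one logical interface so that losing one switch is transparent to the virtual machines. The post does not describe the servers' actual configuration, so here is only a hedged sketch of such a setup on Linux; the interface names eth0/eth1 are assumptions:

```shell
# Sketch: active-backup bonding of two uplinks (interface names are hypothetical).
# Requires root; miimon 100 polls link state every 100 ms.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# If the switch behind eth0 dies, traffic fails over to eth1
# within roughly the link-monitoring interval.
```

Active-backup mode needs no cooperation from the two switches, which matters here because the uplinks deliberately land on different physical switches.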

One more port connects the server to our service switch for the engineering interface. In total, then, three connections enter each server: two "fat" ones for the big Internet and one smaller one for engineering access, so that we do not have to wheel over a cart with a console. Among other things — so that we do not have to wheel it from Moscow to Novosibirsk, but that is a story about the other data centers.
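This kind of engineering access is usually out-of-band management via the server's BMC: instead of a console cart, the engineer opens a remote serial session. The post does not say which tooling is used, so the hostname and credentials below are placeholders — just an illustration of the idea with the standard ipmitool client:

```shell
# Hypothetical out-of-band console access over the engineering network.
# Host, user, and password are placeholders, not real values.
ipmitool -I lanplus -H bmc.example.net -U admin -P secret sol activate
# Opens a serial-over-LAN console to the server, usable even when
# the OS on the main uplinks is unreachable.
```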

All of that works simply so that packets from the Internet reach your server, get to the right place, and are sent back out to the Internet the right way. Traffic filtering and other clever protections happen in the switching core. That is, the core receives raw traffic. A software-defined loop is created that steers traffic not to the rack switch but to another part of the core, which filters it according to rules (firewall) or computes DDoS attack signatures (anti-DDoS).
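The post does not disclose the actual core filters, but the firewall half of that loop can be pictured as an ordinary rule set. A hedged sketch in nftables syntax — the addresses and thresholds are invented for illustration:

```shell
# Sketch only: the real core rules are not disclosed in the post.
# The prefix and rate limit below are illustrative assumptions.
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'
# Drop traffic from a blocklisted range (hypothetical prefix).
nft add rule inet filter forward ip saddr 203.0.113.0/24 drop
# Crude SYN rate limit as a stand-in for real DDoS signature detection.
nft add rule inet filter forward tcp flags syn limit rate over 10000/second drop
```

Real anti-DDoS gear does far more (flow sampling, signature matching), but structurally it sits in the path the same way: raw traffic in, rules applied, cleaned traffic out.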

This produces a cleaned stream, which is then sent on to the rack switches and to consumers. For more sophisticated protections that require statistical analysis of traffic, an additional loop can be created through special devices connected to the core. For some government customers with strict requirements, certified domestic firewalls are still needed; those devices are installed in the rack on the server's uplink. Once, law enforcement caught the owner of a drug marketplace hosted with us: they brought a pile of papers, court orders, and a firewall-like box of their own, which they installed in the gap between the server and the rack switch.
You can also connect external traffic-scrubbing centers and tunnel the flow through them, so that cleaning happens not at our site and only already-cleaned traffic arrives at us: this makes sense for targeted DDoS attacks starting from several hundred gigabits per second. We have not needed that yet.
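The tunneling mentioned here is commonly done over GRE: the scrubbing center attracts the protected prefix's traffic, cleans it, and returns it through the tunnel. A minimal sketch with placeholder endpoint addresses — nothing here is the provider's actual setup:

```shell
# Sketch: GRE tunnel back from an external scrubbing center
# (all addresses are invented placeholders).
ip tunnel add scrub0 mode gre local 192.0.2.10 remote 198.51.100.20 ttl 255
ip link set scrub0 up
ip addr add 10.0.0.1/30 dev scrub0
# Cleaned traffic for the protected prefix now arrives via scrub0
# instead of the regular upstream.
```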

That is how the Internet comes to the data center. If you have questions of any complexity, ask away. As it turns out, childlike questions can be genuinely puzzling for a while.


What delays do the filters and traffic-inspection tools introduce?

Personally, I liked the routers: some hanging by a twisted-pair cable, others stuck on with double-sided tape.


Before renting a dedicated server, for example, I need to work out its configuration. What rough steps do I need to take to do this? That is, approaching it purely experimentally, I probably need to somehow simulate the (maximum) load on the server. It is probably also necessary to plot graphs of CPU, memory, and disk load. But the dependence of a component's load on specific processes is not always linear... In general, can this approach be considered correct?


From the first three or four phrases alone (rocket, "first entry"...) you could shoot a sequel to a series in the spirit of "Chernobyl".
Closed doors upholstered in rusty iron, their skewed leaves held upright only by a miniature padlock, letting the great Internet into the darkness of buildings from a vanished era! Rusty switchboards on the walls, stained by constant humidity, in unlit corridors where you will not meet a single living soul... The name, of course, is simply "data center".