I decided to try Cloudflare and ran into a problem: a huge wait time for files that are not cached on Cloudflare. Perhaps I configured Cloudflare incorrectly and the problem can be solved with the right settings?
It was instructive to study how the route is built, and I was surprised: it turned out that everything uncached invariably goes through the USA. There is a chance the ping-admin service is lying, so please suggest an alternative way to check the route.
That is, js, css, jpg, etc. are served from the nearest servers, but if you need content from the origin server located in NYC (the front page of a site is often needed in uncached form), the route first goes through the USA, and from there the request goes on to the origin server. The response from the origin travels back to the user the same way.
The problem is this: it is important to serve the first content as quickly as possible, and when the route crosses the ocean, you can forget about speed. Of course it is great that js, css, and jpg are served quickly from the nearest server, but when the HTML itself has to cross the ocean and back, all that fast loading of js, css, and jpg is useless.
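The arithmetic behind this can be sketched with a back-of-envelope calculation. All RTT values below are assumed, illustrative numbers, not measurements; a first fetch costs roughly one RTT for the TCP handshake, two for TLS, and one for the request/response, plus the edge-to-origin round trip on a cache miss:

```python
# Back-of-envelope time-to-first-byte, in milliseconds.
# All constants are assumptions for illustration only.

RTT_NEARBY_EDGE = 20    # user <-> nearby Cloudflare PoP (assumed)
RTT_CROSS_OCEAN = 200   # any hop that crosses the ocean (assumed)
RTT_US_TO_NYC = 10      # US PoP <-> NYC origin server (assumed)

# Cached asset from the nearest PoP: ~4 local RTTs (TCP + TLS + request).
cached_nearby = 4 * RTT_NEARBY_EDGE                           # 80 ms

# Uncached HTML if the nearest PoP fetched it from NYC directly:
uncached_nearby_edge = 4 * RTT_NEARBY_EDGE + RTT_CROSS_OCEAN  # 280 ms

# Uncached HTML when the route goes through a US PoP first,
# so every handshake RTT crosses the ocean:
uncached_us_edge = 4 * RTT_CROSS_OCEAN + RTT_US_TO_NYC        # 810 ms

print(cached_nearby, uncached_nearby_edge, uncached_us_edge)
```

Whatever the exact constants, the handshakes that cross the ocean dominate the total, which is why fast static assets do not rescue the page's first byte.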
The question is: why is it done this way, and how can uncached content be served faster?
Ping-Admin determines the country by IP. Cloudflare's IPs are registered in the US, which is why the traffic appears to go through the US.
But this display is very approximate: although the IP belongs to an American company, the server itself may be located somewhere else entirely (you cannot tell exactly where the server is from the IP alone).
Judging by the times shown in the table, it is definitely not the USA, because the latency between India and the US would certainly be more than 100 ms.
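A more direct check than IP geolocation is Cloudflare's own diagnostic endpoint: any site proxied through Cloudflare answers `/cdn-cgi/trace` with key=value lines, and the `colo` field names the datacenter (by IATA airport code) that actually handled your request. A minimal sketch; the sample payload and hostname below are illustrative:

```python
import urllib.request  # needed only for the live request at the bottom

def parse_trace(body: str) -> dict:
    """Parse the key=value lines returned by Cloudflare's /cdn-cgi/trace."""
    return dict(line.split("=", 1) for line in body.strip().splitlines() if "=" in line)

# Sample payload in the format the endpoint returns (values are illustrative):
sample = "h=example.com\nip=203.0.113.7\ncolo=BOM\nloc=IN\n"
print(parse_trace(sample)["colo"])  # BOM = Mumbai: the PoP that answered

# Against a real site behind Cloudflare (needs network access):
# body = urllib.request.urlopen("https://example.com/cdn-cgi/trace").read().decode()
# print(parse_trace(body)["colo"])
```

If `colo` reports a nearby datacenter, the edge really is local and any "USA" in the trace is just IP geolocation noise.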
Cloudflare has announced the migration of its content delivery network to Pingora, a proxy written in Rust.
The new proxy replaced an NGINX-based setup with Lua scripts and handles more than a trillion requests per day. The move to a purpose-built proxy not only enabled new features and improved security through memory-safe code, but also brought significant performance gains and resource savings: the Pingora-based solution does not need Lua and uses an architecture optimized for Cloudflare's load, so it consumes 70% less CPU and 67% less memory while handling the same volume of traffic.
For a long time, the system for proxying traffic between users and origin servers, built on NGINX and Lua scripts, met Cloudflare's needs, but as the network grew and became more complex, a general-purpose solution was no longer enough, both in terms of performance and because of limits on extensibility and on implementing new features for customers.
In particular, it was difficult to add functionality beyond a simple gateway and load balancer. For example, when a server failed to process a request, the request needed to be resent to another server with a different set of HTTP headers.
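That failover requirement can be sketched in a few lines. This is not Pingora's actual API, just a minimal illustration with hypothetical names, where each upstream carries its own header set and a failed attempt falls through to the next one:

```python
from typing import Callable, Sequence, Tuple

def fetch_with_failover(
    upstreams: Sequence[Tuple[str, dict]],
    fetch: Callable[[str, dict], str],
) -> str:
    """Try each (server, headers) pair in order; on failure, resend the
    request to the next upstream with its own set of HTTP headers."""
    last_err = None
    for server, headers in upstreams:
        try:
            return fetch(server, headers)
        except ConnectionError as err:
            last_err = err
    raise last_err

# Stub transport: the primary upstream is down, the fallback answers.
def stub_fetch(server: str, headers: dict) -> str:
    if server == "origin-1":
        raise ConnectionError("origin-1 unreachable")
    return f"200 from {server} with X-Retry={headers['X-Retry']}"

result = fetch_with_failover(
    [("origin-1", {"X-Retry": "no"}), ("origin-2", {"X-Retry": "yes"})],
    stub_fetch,
)
print(result)  # 200 from origin-2 with X-Retry=yes
```

Wiring this kind of per-attempt policy into NGINX's request lifecycle is what the Lua-script setup made awkward.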
Instead of an architecture that splits requests across separate worker processes, Pingora uses a multithreaded model, which in Cloudflare's usage scenarios (a high concentration of traffic from different sites with a heavy statistical skew) distributes resources across CPU cores more efficiently. In nginx, pinning requests to worker processes led to unbalanced load across cores: resource-intensive requests and blocking I/O slowed the processing of other requests. In addition, tying the connection pool to each worker process meant that connections already established by other workers could not be reused, which reduces efficiency as the number of workers grows.
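The connection-pool point can be shown with a toy simulation (hypothetical, not Cloudflare code): when requests arrive one after another, a single shared pool keeps reusing one established upstream connection, while per-worker pools each have to open their own.

```python
class Pool:
    """Counts how many new upstream connections have to be opened."""
    def __init__(self):
        self.idle = 0     # connections sitting idle in this pool
        self.opened = 0   # total connections ever opened

    def acquire(self):
        if self.idle:
            self.idle -= 1    # reuse an established connection
        else:
            self.opened += 1  # no idle connection here: open a new one

    def release(self):
        self.idle += 1

def connections_opened(num_requests: int, num_workers: int, shared_pool: bool) -> int:
    """Requests arrive strictly one at a time, assigned round-robin to
    workers; each takes a connection from its pool and returns it."""
    pools = [Pool()] if shared_pool else [Pool() for _ in range(num_workers)]
    for i in range(num_requests):
        pool = pools[0] if shared_pool else pools[i % num_workers]
        pool.acquire()
        pool.release()
    return sum(p.opened for p in pools)

print(connections_opened(8, 4, shared_pool=True))   # 1
print(connections_opened(8, 4, shared_pool=False))  # 4
```

One connection versus one per worker is the best case for sharing, but it illustrates why process-private pools waste established connections, and why a thread-based model with a shared pool reuses them better.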