How do you optimize a server for running a news site? What is the ideal setup?

Started by jina, Jun 17, 2022, 04:13 AM


jina (Topic starter)

Wondering about this since it's the first news site I've had to handle.
At the beginning, Core Web Vitals were good, but suddenly in April they started to suffer.
What's weirder is that the server is nowhere near its limits in terms of resources: CPU load averages around 0.50, and most of the RAM and disk is never used, not even during traffic peaks.

So I don't think it's a matter of server resources. The site is built on WordPress and has more than 20k articles. It uses W3 Total Cache, and server-side it runs Nginx with FastCGI.

But yeah, as I said, with this config the Core Web Vitals were fine until April; after that they started to degrade and I can't understand why. My guess is that news sites with a large number of articles need special configuration/software, but I don't know any. Can you give me any ideas on what I could try to restore decent Core Web Vitals metrics? Thank you all.


There could be multiple things you can optimize...

Tune the number of concurrent connections in Nginx
Tune the output buffer size in PHP-FPM
Enable Gzip compression
Monitor and adjust the FastCGI settings correctly
Use a CDN for static content like images and JS libraries
Choose a host closer to the majority of your audience for lower latency
Monitor TTFB and try to optimize it
And many more that aren't at the top of my mind at the moment
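As a rough illustration of the Nginx-side items above, a server config fragment could look something like this. These directive values are examples only, not tuned recommendations for your site, and the socket and cache paths are assumptions:

```nginx
# /etc/nginx/nginx.conf (fragment) -- example values, adjust for your workload

worker_processes auto;

events {
    worker_connections 4096;   # concurrent connections per worker
}

http {
    # Gzip for text assets (images are already compressed)
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1024;

    # FastCGI cache for pages rendered by WordPress/PHP-FPM
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m
                       inactive=60m max_size=1g;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php-fpm.sock;  # path depends on your PHP-FPM setup
            fastcgi_cache WPCACHE;
            fastcgi_cache_valid 200 10m;              # cache successful pages for 10 minutes
            add_header X-Cache $upstream_cache_status; # shows HIT/MISS, useful for debugging
        }
    }
}
```

The `X-Cache` header makes it easy to verify from the browser or `curl -I` whether a page was actually served from the FastCGI cache.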


This issue is probably related to the website design. Check your site on GTmetrix and examine the Waterfall tab.

If you want to learn server administration by troubleshooting, you'd be in for a disaster on unmanaged servers.

On managed servers, the provider takes care of the server. Get a second server or VPS, clone the site there and do your tests.

I'd argue being a web developer and being a system admin aren't the same. Having knowledge in both helps, but being a master in one helps more.


Database server

As a rule, the database is the part of the infrastructure where it is scariest to change anything. Data loss, even partial, is a small local catastrophe for a business. So even if the database uses only two-thirds of its server's resources, companies are not eager to switch to a more economical configuration: the risk feels more expensive than the savings.

But databases are prone to growth and increased load, so here are a few tips on how to extend the life of a live server.

Check the slowest and most frequent queries

Using your monitoring tool of choice, pull up the top slow queries, the so-called slow log. If you have SQL queries that take longer than a second, take the time to fix them. It also makes sense to review the most frequent queries: you will gain performance even if you only shave 50 ms off them. After these simple steps, you may not need to move the database to a more powerful server at all.
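Assuming a MySQL/MariaDB database (what WordPress uses), the slow log can be enabled roughly like this; the one-second threshold and the log path are examples:

```sql
-- Enable the slow query log at runtime (values are examples)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;          -- log queries slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- Inspect a suspect query's plan before trying to optimize it
EXPLAIN SELECT ID, post_title
FROM wp_posts
WHERE post_status = 'publish'
ORDER BY post_date DESC
LIMIT 10;
```

Tools like `mysqldumpslow` or `pt-query-digest` can then aggregate the slow log so you see which query patterns cost the most in total, not just the single worst offender.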

Share if you can

"There is a case where a company uses a fairly large PostgreSQL database, about 1.3 TB. It is noteworthy that the load is mainly for recording. A separate service reads data from the replica and transfers it to ClickHouse, where analytics is already being built. It turns out that the company removes part of the load from its database and continues to use a cloud server that is rather modest in terms of power. By the way, usually dedicated servers are used for loaded databases, but in that case a virtual server is cheaper," .

This "separation of duties" can be attributed to best practices. It is important for any company to evaluate the nature of the DB. Is it worth loading the server under the database with additional operations?
It may be cheaper to move them to a small inexpensive virtual server (you can consider a percentage instance) and perform all data operations at night, when the overall load on the systems is significantly reduced.
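As a sketch, scheduling such offloaded work at night can be as simple as a cron entry; the script name, user, and hour below are hypothetical:

```text
# /etc/cron.d/analytics-export -- illustrative; export_to_clickhouse.sh is a hypothetical script
# Run the heavy analytics export at 03:00, when site traffic is at its lowest
0 3 * * * app /usr/local/bin/export_to_clickhouse.sh >> /var/log/analytics-export.log 2>&1
```

Redirecting output to a log file keeps a record of each nightly run, which helps when a batch job silently starts taking longer than expected.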

Hire a DBA

If you have a large, loaded database, consider hiring a specialist (in-house or outsourced) called a database administrator, or DBA. In effect, this is a DevOps specialist who focuses on working with databases.

Where are the savings if we have to pay an employee's salary? In practice, a good DBA will save you more money than their monthly salary costs. Beyond the optimizations described above, they can revise the database architecture: the database will run like clockwork, data processing will not slow down, and your service or application will become faster for the end user.

Infrastructure optimization is a job there is often not enough time for. There are always more important tasks related to product development. Be that as it may, constantly postponing optimization leads to an accumulation of technical debt.
In the critical case, you will either have to spend much more time paying it off or bring in third-party specialists to fix the architecture or application code. And, as practice shows, technical debt reminds you of itself at the hottest time for the company.

A good time to optimize is when the infrastructure is being moved. If you are migrating your web services from one provider to another, add a few optimization items to your preparation plan. For example, if monitoring shows that the web server does not need such a large network drive, order a more modest one at the new location.

Infrastructure as code
In the long run, this will save the company's resources more than once. Take the time to understand Ansible or learn to work with Terraform. Infrastructure described as code, with configurations stored in Git, greatly simplifies deploying systems and changing configuration, both on new servers and in a new location.
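A minimal Ansible playbook sketch for the kind of web-server setup discussed in this thread might look like this; the inventory group "web" and the template path are assumptions:

```yaml
# site.yml -- minimal sketch; inventory group "web" and template path are hypothetical
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx config from a template kept in Git
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Because the playbook and the template live in Git, rebuilding the same server at a new provider is a matter of pointing the inventory at the new hosts and running `ansible-playbook site.yml` again.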

Time is money. With an Infrastructure-as-Code approach, a move can be as simple as deploying the cloud infrastructure through Terraform and configuring the systems with a set of Ansible playbooks. That is a couple of hours against several days of moving infrastructure by hand. Yes, this requires a good DevOps specialist, but here, as with the DBA, the investment in people pays off.
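For illustration, the Terraform side of such a move can be very small. This sketch uses the common hashicorp/aws provider; the region, AMI id, and instance size are placeholder examples, not recommendations:

```hcl
# main.tf -- minimal sketch; AMI id and sizes are placeholders
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.medium"             # modest size; let monitoring drive right-sizing

  tags = {
    Role = "news-site-web"
  }
}
```

With the infrastructure declared like this, `terraform plan` shows exactly what a migration will create or change before anything is touched, which is part of what turns a multi-day move into a couple of hours.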

Managed solutions
If a migration is needed quickly, you can consider the PaaS solutions offered by most major providers, for instance cloud databases or Managed Kubernetes. Here the stage of hands-on interaction with the infrastructure is removed, which greatly saves time at the start. Deploying a database on your own server can take more than a day, but with a managed solution you proceed immediately to choosing a cluster configuration.
Additionally, PaaS solutions usually have built-in autoscaling, which also saves resources for system administrators and the company.