My Plesk experience

Started by spinneren, Sep 04, 2022, 08:10 AM


spinneren (Topic starter)

I want to share some impressions about the necessity, or uselessness, of a control panel for a commercial single-server web project when the admin is very part-time. The story began a couple of years ago, when friends of friends asked me to oversee the purchase of a business (a news website) from the technical side. I had to get a sense of what ran on what, make sure all the necessary assets were handed over in proper form and volume, and figure out strategically what could be improved.



The website ran on a dual-core VM with 4 GB of RAM on Linode, on a mossy Debian 6 with 400 days of uptime and a correspondingly long list of un-updated packages. The web part was a self-written CMS behind nginx, PHP 7.3 FPM, and a tuned Percona MySQL. In principle, it worked.

In parallel with the conversations with me, the new owner was looking for a programmer to bring the project up to expectations. Found one. The programmer sized up the traffic and volumes and decided he knew how to optimize costs. He migrated the entire site to a $10 shared hosting plan under his usual IS ****er.
A few days later, another call from the owner: "everything is slowing down and it seems we've been hacked." I tried to fix the situation through the panel, but after a while of fruitless attempts to change the PHP version or switch the handler from FastCGI to FPM, I gave up and climbed into the shell. There I found debug mode enabled, broadcasting the MySQL password to the entire Internet, and 777 permissions on folders that by then were bursting with uploaded malware and similar junk. The owner came to his senses and decided it was wrong to skimp on hosting, on a programmer, and on an administrator who would keep one eye on how things were going.
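
For anyone walking into a similar mess, the shell checks I mean boil down to something like this, a minimal sketch with illustrative paths rather than the actual hosting layout:

    # find world-writable (777-style) directories under the web root
    find /var/www -type d -perm -0002 -ls
    # recently modified PHP files often point straight at fresh malware
    find /var/www -type f -name '*.php' -mtime -7 -ls
    # the usual obfuscation markers
    grep -rIl 'base64_decode\|eval(' /var/www --include='*.php'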

We moved to RuVDS. A little closer than the overseas Linode, and if you ever need to store personal data and all that, you won't have to move anywhere else. Since the project was expected to expand, we took a VM "for growth": 4 cores, 8 GB of memory, 80 GB of disk. It's not that I can't write nginx configs by hand; I just didn't have the enthusiasm to get that intimate with this project (see above about part-time). So I installed Plesk (I'll omit the installation details, because by and large there are none: I launched the installer, set the admin password, entered the key, done); at the time it was 17.0. The basic settings work well out of the box, there is fail2ban and the latest available versions of PHP and nginx.
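
For reference, the whole installation really is about one command; a sketch assuming Plesk's stock one-click installer (check their docs for the current URL):

    # download and run the Plesk one-click installer as root
    sh <(curl -sSL https://autoinstall.plesk.com/one-click-installer)
    # then open https://<server-ip>:8443, set the admin password, enter the license key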

It's probably worth pausing to explain why. Since I rarely do this kind of thing, and I don't have special tooling or a set of ready-made templates for every case, it was clear that some automation of the basics was needed: firstly, to be fast; secondly, to be safe; and thirdly, so that someone had already implemented all the best practices.

So, I installed it. It saved a lot of time; re-launching the website on the new server was almost instant. All that remained was to tweak the MySQL config, giving it half of the memory and increasing the number of buffer pool instances, to give nginx half of the cores (Plesk does not touch global configs), and over the next couple of days to drop into the shell and look at mysqltuner stats. I also bought the paid ImunifyAV from the extensions catalog to get rid of the uploaded malware. There were about 11,000 infected files. The nasty part was that obfuscated pieces of code had been injected into static files, and cleaning that by hand would have been thoroughly depressing. At first I tried ClamAV, but it turned out it doesn't catch such things; ImunifyAV could.
Moreover, the cleaned files remain in working condition: the malware fragment is simply removed.
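
Roughly what "half the memory, half the cores" looked like in config terms, a sketch with illustrative values for an 8 GB / 4-core box, not our exact settings:

    # /etc/mysql/conf.d/tuning.cnf
    [mysqld]
    innodb_buffer_pool_size      = 4G   # about half of the 8 GB of RAM
    innodb_buffer_pool_instances = 4    # roughly one per gigabyte of pool is a common rule

    # /etc/nginx/nginx.conf
    worker_processes 2;                 # half of the 4 cores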

The arithmetic is simple: $50 per month for the VM, $10 for Plesk (actually less, because we bought a year up front with a two-month discount) and $3 for the antivirus. Or a lot of suitcases of money for my time, which I would otherwise have spent raking out those stables on the server by hand. The owner was quite satisfied with this arrangement.

Meanwhile, we found a new programmer. We agreed with him on the division of responsibility, made a subdomain for the test version, and work went on. He was building a new version of the website on Laravel, and I was watching fail2ban %).

Interestingly, the flow of curious visitors never stops, and there are always about a hundred addresses on the banned list. The effect is striking: before, logging into the shell would greet me with about 20,000-30,000 failed SSH login attempts; with fail2ban enabled, about 70. Effort invested: zero. Unfortunately, there was a fly in the ointment. By default the WAF (ModSecurity) was "semi-enabled", in detection mode: it logged suspicious activity but took no action.
Meanwhile fail2ban indiscriminately read all the logs according to the enabled jails and banned everything that moved. Thus we banned half of the editorial office :D. We had to disable that jail and, for good measure, whitelist the necessary IP addresses. Effort invested: two mouse clicks, plus teaching the editors to report their own IP address.
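
In config terms (if you'd rather not poke the mouse), the fix amounts to something like this in a jail.local override. A sketch: the jail name follows Plesk's stock fail2ban setup as I recall it, and the IPs are placeholders:

    # /etc/fail2ban/jail.local
    [DEFAULT]
    # never ban the editorial office
    ignoreip = 127.0.0.1/8 203.0.113.10 203.0.113.11

    [plesk-modsecurity]
    # the jail that was reading the detection-only WAF log
    enabled = false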

What I liked: logs and backups. Logs are written and rotated by themselves out of the box; backups are configured very simply. At the quietest time a full backup is made, around 10 GB, and then daily incrementals of about 200 MB, kept for a week. Recovery is granular, down to a specific file or database. If you need to restore from an incremental, you don't have to first fiddle with the full backup and restore the entire chain; Plesk does it all itself. You can ship backups anywhere: FTP, Dropbox, an S3 bucket, Google Drive, and others.

The programmer finally finished the new engine, we pushed it to production, imported the old data, and sat down to choose the color of our future Maserati. We are still sitting and choosing.

The first problems began. The new website was expected to be heavier than the old one, but the real rake was that, among other channels, Yandex.Zen was used to attract traffic, and it delivered visitors in batches. The site buckled at 160 simultaneous connections (I won't speak of RPS, because we didn't measure it).

Oop: now it holds 500 connections. As the credit card was applied to the promotion channels, the traffic waves grew larger. The next milestone: 1,000 simultaneous connections. Here we had to rework the code and look into MySQL's soul. Plesk didn't help with this, but nobody really expected it to. We turned on the slow query log, hung indexes on the database, removed unnecessary queries from the code, and once again combed the MySQL config following mysqltuner's advice.
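
The MySQL side of that round looked roughly like this, a sketch where the table and index names are made up for illustration:

    -- enable the slow query log at runtime
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
    -- a typical fix after reading the log: index the hot column
    ALTER TABLE articles ADD INDEX idx_published (published_at);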

The next challenge: 2,000 connections. Plesk 17.8 had just come out, which, among other things, added nginx caching. We updated (surprisingly easily). We tried it. It works! And then we stepped on a rake in a soft spot: the Yandex.Zen feed stopped working. The website works, the feed doesn't. The feed doesn't work, there's no traffic. The atmosphere heats up. Under the pressure of circumstances, and for lack of imagination, I went straight to strace on nginx and found what I was looking for. It turned out that at some point nginx had stupidly cached a stray 500 error as the response to Yandex's GET for feed.xml
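
The moral, in config form: keep error responses out of the cache. A hedged sketch with stock nginx directives; I'm not claiming this is Plesk's template, just the general shape:

    # cache only successful responses, and only briefly
    proxy_cache_valid 200 301 302 10m;
    # do NOT write "proxy_cache_valid any ..." - that is how a stray 500 sticks
    # if the upstream falls over, serve a stale copy instead of caching the error
    proxy_cache_use_stale error timeout http_500 http_502 http_503;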

Clearly the owner needs MORE, and the waves keep slowly growing. We're still coping, but we started experimenting with memcached in advance, since Laravel supports it almost out of the box. I didn't want to install memcached by hand just to "play around", so we deployed a Docker image. Directly from the panel.

Well, I'm lying: I did have to go into the shell and install the PHP module via pecl, right according to the instructions. There's nothing to say yet about the throughput gain; there have been no inflows large enough. The website engine latched onto localhost:11211, stats are showing, memory is being eaten. If we like it, we'll see what to do next: either leave it as is, install the "real" one directly in the OS, or try Redis the same way.
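
The whole memcached detour, as a sketch: I'm assuming the stock Docker image and Laravel's built-in driver, and the values are illustrative:

    # run memcached from the official image, capped at 256 MB
    docker run -d --name memcached -p 11211:11211 memcached:latest -m 256
    # the shell visit: install the PHP extension
    pecl install memcached

    # .env for Laravel
    CACHE_DRIVER=memcached
    MEMCACHED_HOST=127.0.0.1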

Then we needed to attach a mailing list. No relays, only SMTP authentication. I got a mailing address, and we send the mailing through its credentials via PHP.
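
Since the engine is Laravel, "through its credentials via PHP" mostly means mail settings in .env; a sketch with placeholder values (older Laravel versions call the first key MAIL_DRIVER):

    MAIL_MAILER=smtp
    MAIL_HOST=smtp.example.com
    MAIL_PORT=587
    MAIL_USERNAME=news@example.com
    MAIL_PASSWORD=secret
    MAIL_ENCRYPTION=tls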
Not long ago Plesk Obsidian (18.0) came out; based on past experience, I updated without fear. Everything went very smoothly; there's not even anything to tell. On the pleasant side, the interface has noticeably improved in quality, been modernized, and become more convenient in places. Advanced Monitoring on Grafana is a cool thing.

I haven't dug into it in detail yet, but you can, for instance, set up e-mail alerts on any parameter. For the owner, lol.

Since I'm on the subject of the interface: it's responsive and genuinely works well on a phone. In the early stages, while we were hunting for the optimal PHP and other settings, it helped a lot. Especially when the programmer, in a fit of work enthusiasm, does something at 11 p.m., and I, in a fit of the same enthusiasm, am drinking vodka in the bathhouse, and something URGENTLY needs to be switched.

Oh, by the way: PHP Composer support has appeared. We haven't played with it yet, but for the same Laravel, say, it can save a couple of shell logins and some dependency-installation time. The same exists for Node.js and Ruby.
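
For reference, the shell logins it saves boil down to roughly this (standard Composer usage, with a Plesk-style document root as an illustrative path):

    cd /var/www/vhosts/example.com/httpdocs
    composer install --no-dev --optimize-autoloader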

With SSL everything is simple. If the domain resolves where it should, Let's Encrypt is set up with one click and renews itself from then on, for the domain itself, its subdomains, and even the mail services.

Plesk itself, as software, is these days quite pleasant and stable. It updates itself and the OS quietly, consumes few resources, and runs smoothly. I can't even recall stepping on anything that would count as a clear defect of the product.
Of course there were problems, but they came either from imperfect configuration or from somewhere at the junctions, so there's not much to complain about. Overall, the experience of working with Plesk is pleasant. What it doesn't have (and you need to understand this) is any, any, clustering. Neither load balancing nor high availability. You can try, but it will take so much effort that it's better to architect things differently from the start.

I think we can summarize. For the case when there is no admin, or not enough of one; when the price of the hosting and the website(s) running on it exceeds, say, $100; when we're not talking about a beastly shared setup with 1,600 websites on one server; when the decision-maker has a choice between hiring a part-time admin, buying software and getting an admin at half capacity, or not hiring one at all, then it definitely makes sense.
From the remote admin's point of view, the same thing. $10 a month, and it saves time and gives flexibility worth a great deal more. If, for instance, someone presses me to take a similar project under my wing, I will insist on moving to Plesk.

kaufenpreis

What are 1,000 simultaneous connections? Parallel requests?
You have a plain news website, simple request-response. The processing time of such a connection should be around 100 ms. To create 1,000 simultaneous connections under those conditions you need a lot of users. A lot.

Accordingly, it's simply php-fpm that can't keep up. Perhaps there aren't enough workers, but most likely the problem is in the code itself.
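
For context, "not enough workers" is governed by the pm settings in the FPM pool config; a sketch with illustrative numbers:

    ; /etc/php/7.3/fpm/pool.d/www.conf
    pm = dynamic
    pm.max_children = 50      ; hard cap on concurrent PHP workers
    pm.start_servers = 10
    pm.min_spare_servers = 5
    pm.max_spare_servers = 15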
I'll get pelted with tomatoes now, but why do you need Laravel on a news site? Even since the days of Symfony, such frameworks have been large and clumsy, and it's unlikely anything has changed. You'll put it on a news website... and in practice use almost none of it.

Some micro-framework (essentially a router + DI) will cover 90% of the tasks; the rest can be finished by hand. It will work much faster simply because of the smaller code base.

memcached with one and a half gigs? :) What are you caching there? That's a huge amount for an ordinary site. Better to hand that memory to php-fpm or mysql.

As was said above, this really is a review for today's developers.
Just wondering: what's the size of the database and the InnoDB buffer?
P.S. Move SSH to some non-standard port. Almost all the bots will go away.
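
The port change is two lines of standard sshd configuration, for anyone who wants to try (pick your own port):

    # /etc/ssh/sshd_config
    Port 2222
    # then restart the daemon: systemctl restart sshd  (the unit is "ssh" on Debian)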


arpitapatel9689

You can get high LA not only from the CPU. For instance, if you hammer the disk, LA will also rise, and sometimes that's very critical.
Of course, it's hard to point a finger at the sky like this, but given the number of visitors there is probably budget for a web server better than $60. Ideally, of course, separate machines for the web and the database; but most likely it's easier to simply throw capacity at the website: more cores, and for mysql a buffer sized to hold the entire database, to reduce the load on the disk.
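
A standard way to check whether the buffer could hold the whole database (a plain information_schema query, nothing specific to this setup):

    -- total data + index size of all InnoDB tables, in GB
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    WHERE engine = 'InnoDB';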

It also helps a lot for PHP scripts to put /tmp on tmpfs (in RAM). Provided, of course, that you don't need much space there and memory allows.
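
The /tmp-in-RAM trick is one fstab line; a sketch with an illustrative size cap:

    # /etc/fstab - mount /tmp in RAM, capped at 1 GB
    tmpfs  /tmp  tmpfs  defaults,noatime,size=1G  0  0
    # apply without reboot: mount /tmp (existing files there are shadowed until reboot)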
SSD/NVMe seems to have been mentioned above, but I assume you already have it. A VDS on HDD is hard to find these days.

Just in case: memcached will grab up to everything allocated to it, but in practice it may use less. You need to look at the internal stats.
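
Checking that is a one-liner against memcached's text protocol:

    # bytes = actual usage, limit_maxbytes = the configured cap
    printf 'stats\nquit\n' | nc localhost 11211 | grep -E 'STAT (bytes|limit_maxbytes) '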

centigon

I agree in principle that the architecture could have been improved, but no one foresaw such a traffic increase (especially relative to what there was at the very start).
A fleet of servers with separated roles may be the correct approach, but it's also proportionally more hassle, and there still won't be more time for that housekeeping; hence this setup. If we hit a wall again, we'll add either cores or RAM.
On the other hand, isn't it satisfying to squeeze the very soul out of the configuration you have?

Previously, Plesk could not restore from backups onto a new server. It needed some kind of maps-schmaps, and most importantly, the site might not be restored, or not restored fully.
So he left for Vesta long ago, and for commercial projects, to DirectAdmin.