Creating My Own Hosting Services Panel.

Started by span4bob, Jun 27, 2022, 03:05 AM


span4bob (Topic starter)

For the past few months I have been busy building my own web-based hosting services panel.

Do you have any experience with existing web-based server management tools?

RZA2008

You would separate the sites out, running nginx on one server, Apache on a second, Lighttpd on a third and OLS on a fourth.

It makes no sense to run more services than needed on any one physical or virtual machine. There is nothing to be gained by running all of them at once; it just adds complexity, and every running service comes with a resource overhead. Why use more system resources than you need to?

arpitapatel9689

If the server has no hardware RAID controller, I use mdadm software RAID. I have described the installation and configuration of Proxmox on software RAID in detail. Selectel conveniently offers basic templates for installation: when I order a server, I immediately choose Debian on RAID1 as the system, and the installer automatically lays the system down onto mdadm. All that remains is to install the hypervisor; you don't have to partition the disks and assemble mdadm yourself.
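If you want to verify what the installer produced, the standard mdadm checks look like this (the array name /dev/md0 is just an example):

    # overall state of all software RAID arrays
    cat /proc/mdstat

    # detailed state of a specific array
    mdadm --detail /dev/md0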

In the simplest case, the hypervisor host system itself is used as the gateway, with iptables and NAT configured on it. If you use several dedicated servers that have to be joined over the Internet via VPN, I configure the gateway as a separate virtual machine. It could be done on the hypervisor itself, but I don't like mixing functionality. Besides, when the gateway lives in a separate virtual machine, it is easier to back up and redeploy elsewhere, so moving the whole project becomes easier and faster.
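As a rough sketch of what such a gateway does (the internal subnet 10.10.10.0/24, the external interface eth0 and the frontend address 10.10.10.2 below are made-up examples):

    # allow packet forwarding
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # NAT the internal VM network out through the external interface
    iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE

    # forward web traffic to the nginx frontend VM
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.10.10.2:80
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.10.10.2:443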

If you have only one external IP address, two network interfaces are used on the hypervisor:

A physical network card with the external IP address configured.
A virtual bridge, usually vmbr0, for the virtual machines' network and for connecting them with the hypervisor itself.
If several IP addresses are used and need to be assigned to different virtual machines, there will be three network interfaces (a sample config sketch follows the list):

A physical network card with no IP address configured.
A vmbr1 bridge that includes the network interface carrying the link with the external IP addresses. The hypervisor's own external IP address is configured on this bridge if there is no separate gateway. The same bridge is also connected to the virtual machines that need an external IP address.
A vmbr0 bridge for the internal network of virtual machines.
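For reference, a minimal /etc/network/interfaces along the lines of that second layout on Proxmox might look roughly like this (interface names and addresses are examples):

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    # bridge with the external link; carries the hypervisor's real IP if there is no separate gateway
    auto vmbr1
    iface vmbr1 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    # internal bridge for the virtual machines' private network
    auto vmbr0
    iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0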

Frontend nginx server
Nginx acts as the frontend server, running in proxy_pass mode. In the context of this section, I recommend two articles on nginx configuration: proxying settings and basic nginx settings.

It doesn't have to be nginx specifically; in some situations HAProxy is a better fit, but in general nginx will do. All external HTTP requests go to the frontend server, with ports 80 and 443 forwarded to it. Let me explain why I use a separate frontend server at all, since you could get by without one.

Ease of operation and updating. For instance, if I want to try a new setup (a different kernel, an nginx module, etc.), I make a copy of the frontend, configure everything I need on it and switch traffic over. If everything is in order, I either leave it in operation or transfer the settings to the main server. If there are problems, I simply switch the port forwarding back to the old server and everything immediately works as before. The frontend server is also very small (20-30 GB for the system and logs).
It is convenient to handle external requests on the frontend: to inspect, process and block them before they ever reach the backend with the data. For instance, you can configure ipset and block individual countries, or build the simplest DDoS protection (a rate-limiting sketch is shown below).
You can configure the ModSecurity module for nginx. With a separate frontend, it is easier to configure and update components without fear of disrupting the websites themselves.
The frontend server accumulates the logs of all external requests and ships them to ELK. Later I'll show how convenient it is to send everything to storage automatically and then analyze it there.
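As an example of the simple protection mentioned above, nginx's own rate limiting can be enabled on the frontend (the zone names and limits are arbitrary examples):

    # http {} context: track clients by IP
    limit_req_zone  $binary_remote_addr  zone=req_per_ip:10m  rate=10r/s;
    limit_conn_zone $binary_remote_addr  zone=conn_per_ip:10m;

    # server {} or location {} context: apply the limits
    limit_req   zone=req_per_ip  burst=20  nodelay;
    limit_conn  conn_per_ip  20;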

Overall, a separate frontend server is safer: all external requests land on it. A lot of malware (as well as legitimate components and plugins) does not know how to deal with a backend hidden behind a frontend. In some cases this is both a plus and a minus, since operation becomes more complicated. It is especially troublesome with Bitrix, which has many services and checks that knock directly on the domain's external IP and don't know what to do when they hit the frontend. But all of that is solvable.
Free Let's Encrypt certificates are configured on the frontend. Since all the traffic between the frontend and the backend stays inside a single hypervisor, there is no point in encrypting it, so it goes to the backend over plain HTTP.
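A stripped-down frontend server block along these lines might look roughly as follows (the domain, the certificate paths and the backend address 10.10.10.3 are placeholders):

    server {
        listen 443 ssl;
        server_name site.com;

        ssl_certificate     /etc/letsencrypt/live/site.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/site.com/privkey.pem;

        location / {
            # plain http to the backend VM inside the hypervisor
            proxy_pass http://10.10.10.3;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }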

From the frontend I send request logs to ELK only for PHP scripts. Those are the ones that give me useful information, since they let me evaluate the site's speed. I personally don't need data about static content, so by default I don't collect it and only enable it separately when needed. This keeps the log files from ballooning; the smaller their volume, the more convenient and faster they are to work with.
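One possible way to log only the PHP requests is nginx's conditional access_log; a sketch, assuming the scripts are addressed by a .php URI and that a log format named main is already defined:

    # http {} context: flag requests that hit php scripts
    map $uri $log_php {
        default   0;
        ~\.php$   1;
    }

    # server {} context: write the access log only for flagged requests
    access_log  /var/log/nginx/site.com-php-access.log  main  if=$log_php;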

nginx, apache, php backend server
There's not much to say about the backend server. It is set up according to the project's needs. For PHP sites it is usually either nginx + php-fpm or Apache + PHP. As I said, there can be several backends. If you are a web studio or an agency that hosts client sites itself, you can have both a classic PHP web server and bitrixenv for hosting Bitrix websites.
Those are very popular now: almost all the online stores I have worked with were on Bitrix, and boxed Bitrix24 installations are sometimes bought as well. If you work with small or medium-sized businesses, you probably can't avoid Bitrix. I don't like it, but I have to work with it.
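For the nginx + php-fpm variant, a bare-bones backend virtual host might look something like this (the paths and the socket name are placeholders, matching the per-site layout described further below):

    server {
        listen 80;
        server_name site.com;

        root  /mnt/web/sites/site.com/www;
        index index.php index.html;

        access_log /mnt/web/sites/site.com/logs/access.log;
        error_log  /mnt/web/sites/site.com/logs/error.log;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass  unix:/run/php/php-fpm-site.com.sock;
            include       fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }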

As a rule, I do not configure SSL on the backend, but there are exceptions, and various errors come up. Here are examples of such errors in typical PHP sites:

WordPress error
phpMyAdmin error
Bitrix has similar errors as well, but I didn't write them up in the articles.

I place each site in a separate directory, for instance /mnt/web/sites/site.com. That directory has its own subdirectories: www, logs, php_sessions, and so on. Each site is owned by a separate system user, and a php-fpm pool running as that user serves only that site. SFTP access to the specific site is configured per user, and each site has access only to its own MySQL or PostgreSQL database.
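A per-site php-fpm pool in this spirit could be sketched like this (the user, paths and process limits are illustrative):

    ; e.g. a pool file under the PHP version's pool.d directory
    [site.com]
    user  = site_com
    group = site_com

    listen = /run/php/php-fpm-site.com.sock
    listen.owner = www-data
    listen.group = www-data

    pm = dynamic
    pm.max_children = 10
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3

    ; keep the pool inside its own site directory
    php_admin_value[open_basedir] = /mnt/web/sites/site.com/www:/tmp
    php_admin_value[session.save_path] = /mnt/web/sites/site.com/php_sessions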

This scheme gives almost complete isolation of the sites; each one runs in its own sandbox. It is also easy to grant access to a single site when needed. This could be replaced with containers for full isolation, but for private web hosting like this I consider that an unnecessary extra layer, although I do understand how it could be organized with Docker. That's for other occasions.

Monitoring of sites and servers
I always use Zabbix to monitor the virtual machines and our web hosting services. I have accumulated a huge number of articles on it covering almost every case I encounter. In general, I configure:

mdadm or hardware RAID controller monitoring. Unfortunately I have no articles on the latter, but setting it up is not a problem; I have always been able to google the right solution. If the server has iDRAC, iLO or IPMI, you can pull the necessary data from them.
If SMART data is accessible, I configure SMART monitoring for the disks. I highly recommend doing this so that when a disk fails you have complete information about it to hand over to the support service for replacement.
Monitoring of SSH connections. I immediately get a notification when someone connects to the server via SSH. If I'm not the only one with access, I definitely set this up; it greatly simplifies life and prepares you for problems :) If only I have access, it serves as a small protection and a way to react quickly to unauthorized access, although in reality that has never happened to me.
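One simple way to get such a notification from Zabbix is an active-agent log item watching the auth log, with a trigger that fires on each new match (the path and regexp below assume a Debian-style /var/log/auth.log):

    # Zabbix item key, type "Zabbix agent (active)"
    log[/var/log/auth.log,"sshd.*Accepted (publickey|password)",,,skip]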

Web server monitoring, in this case frontend and backend. Sometimes I skip the backend monitoring; in practice it is not needed that often, even though it seems useful to collect all the metrics. In my experience I rarely need most of them.
Site monitoring. This is one of the most important metrics, since it directly answers the question of whether everything is in order. If a site is down or unavailable, that is the highest-priority problem. Since this monitoring is local, it does not give a complete picture of what is happening, so a second, external monitor is needed as well; I will say more about it later. Local monitoring immediately detects, for instance, that our backend has gone down and that instead of the website we are serving nginx's 500 error.

Or something else. In general, it's an important thing, and I recommend monitoring the website carefully. I recommend querying it directly through the hypervisor's internal network via the frontend's local IP. To do this, you either add all the sites to the hosts file of the Zabbix virtual machine with their local IPs, or run your own local DNS server. I usually do the latter if a separate virtual machine is used for the gateway.
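The hosts-file variant is only a few lines on the Zabbix VM (10.10.10.2 here stands in for the frontend's local IP; the domains are placeholders):

    # /etc/hosts on the monitoring VM
    10.10.10.2   site.com www.site.com
    10.10.10.2   shop.example.org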
Monitoring of domain delegation and SSL certificates. This part is optional and can be configured as desired. Delegation is not that critical, since registrars flood your mailbox with reminders, but I do recommend monitoring SSL certificates: people often forget to renew them, or technical errors creep into the Let's Encrypt auto-renewal.
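The underlying certificate check itself is trivial and easy to wrap into a Zabbix item or an external script; for example (the domain is a placeholder):

    # print the certificate's expiry date as seen from outside
    echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null \
        | openssl x509 -noout -enddate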
I always set up backup monitoring in one form or another. It depends heavily on the specific situation: the data, where the backups are stored, and so on. There are no ready-made solutions; you have to improvise on the spot. But if I don't set up backup monitoring, I can't sleep well. I also periodically restore backups manually and check them. This strongly limits the number of clients I can work with, since the work is manual, but it has saved me many times, so I don't neglect it.

If there is a mail server, I configure Postfix monitoring. I recommend watching the mail server carefully, especially the queue and the number of messages sent. Sometimes mailbox credentials leak and the server starts spamming massively. If you don't notice and stop it in time, you can land on spam blacklists and sit there for a long time. That can paralyze an online store, which stops functioning normally without mail.
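The queue length is easy to feed into Zabbix with a custom agent key; a minimal sketch, assuming default short Postfix queue IDs (the key name postfix.queue is made up):

    # /etc/zabbix/zabbix_agentd.d/postfix.conf
    UserParameter=postfix.queue,/usr/sbin/postqueue -p | grep -c '^[0-9A-F]'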

Those are the main monitoring tasks. I often set up something else depending on a particular customer's needs. If the solution is niche and not typical, I don't write an article about it, although I keep the templates for myself. If there are any critical Linux services, you can monitor them as well.

It is especially convenient to monitor the website's response time from a local server: there are none of the network delays that external monitoring introduces, so you see the web server's pure performance. Combined with external monitoring, this gives a complete and easily interpreted picture of the web server's performance and of how fast the site can be reached. Only with both kinds of monitoring, external and internal, can you adequately assess the site and hunt for bottlenecks in its operation.

Storing logs in the ELK Stack
I put all the logs into Elasticsearch. I have an article about installing and configuring the ELK Stack. I recently updated the installation instructions but left the old screenshots, since replacing them is very tedious; the installation process itself is described correctly, as I regularly use my own article. I also have several examples of how to analyze the logs of various services.

In the context of this topic on setting up private web hosting, we are interested in collecting and analyzing web logs:

Dashboard for Nginx logs in Kibana+Elasticsearch
Backend performance monitoring using ELK Stack
The articles are a little outdated in the sense that my dashboards have changed over time, but the principle is the same. The main thing is to understand it; then you will have no trouble building whatever is convenient for you personally. For instance, I don't set up GEO maps: in practice I don't need them, they are only for looks. Below is an example of my current dashboard for this website.

On the dashboard I immediately get up-to-date information about the site's status: the average response time to PHP requests and a chart of how the responses are distributed across time intervals. Almost all requests fit within 300 ms, which I consider a good result. Keep in mind that this information covers PHP requests only. Caching is configured on the site, so most pages reach the visitor much faster, served directly by nginx and bypassing the PHP handler.

Here you can also filter for slow requests, for requests from particular IP addresses, view requests with different error codes, and so on. Without such a dashboard I feel blind. I don't see how else you can tell that everything is fine with the website, or, conversely, find out what problems it has, if you don't have this information at hand.
A site may start throwing 500 errors, and then you have to dig through the access log manually and try to work out whether the problem is isolated or large-scale. Here, everything is at hand.

I'm so used to ELK that I hardly ever log in to the servers anymore. I collect all the logs in it (system logs included) and view them there. Add monitoring, plus management via Ansible (more on that later), and there is practically no need to go to the servers over SSH at all.

This approach scales very well, which is why I use it. It isn't that relevant at my current scale, but I still want to do things properly, with a foundation for the future. I once had a project that started with one server and a few Docker containers and ended up at around a hundred virtual machines. I really regretted not automating processes from the very beginning; I simply wasn't ready for it, having neither the experience nor the understanding. Everything grew gradually, and each time it was faster to do things manually, until at some point I simply started drowning. Luckily the project eventually shrank, though not through any fault of mine :)

On the frontend, the logs of all the sites go into one directory, /var/log/nginx; Filebeat picks them up from there with a single template and passes them to Logstash and then to Elasticsearch, into one common index split by day. I used to send each site to a separate index, but over time I realized it wasn't convenient: you have to create separate visualizations and dashboards for every index, which is tedious when there are many websites. It could be automated, but at my scale it doesn't make much sense.
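The Filebeat side of this is tiny. A sketch of the relevant config, assuming a Logstash listener on the standard Beats port (the address is a placeholder):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/nginx/*access.log

    output.logstash:
      hosts: ["10.10.10.5:5044"]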

Now I collect everything into one index, build a single dashboard, and use filters in it to view the data for the different sites. I output the virtual host name into the nginx log, so this is quick and convenient to set up: for a new website you don't have to do anything at all, Filebeat picks up its logs automatically, and the log data is then viewed through a filter in Kibana.
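For reference, a log_format with the virtual host name prepended might look like this (the exact set of fields here is just an example):

    log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent" '
                     '$request_time $upstream_response_time';

    access_log /var/log/nginx/site.com-access.log vhost;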
