How hosting customers lose money

Started by Franklin, Oct 14, 2022, 09:15 AM


Franklin (topic starter)

Last year, a client came to us for support: a large pharmaceutical company that needed more than 15 product sites maintained. Besides the ordinary work on the sites themselves, the contract also included administration of the servers hosting them.
However, the client insisted on keeping their existing hosting provider and refused to migrate to our virtual servers.



At first we were neutral about this decision: the customer is always right. But the moment we started setting up basic site monitoring, the "fun" began. In short: every channel configured for notifications started drowning in alerts. The rest of this long read covers the causes, our attempts to solve the problem, and our conclusions.
Classic hosting is a loss of money

For the convenience of further reading, a definition: "classic hosting" in our understanding is shared web hosting, which is losing its relevance; I will call it simply "hosting". Virtual servers will be called exactly that. More detailed information is available on our site.

So what was the overarching problem with these web sites? As you may have guessed: the fact that they lived on shared hosting. Let's look at the main problems that come with using it.

It is worth noting right away that every situation is individual; with a different client + hosting provider combination, things could have turned out differently.

Here is what we started with:

+ access to the web site admin panel;

+ FTP access;

– SSH access;

– access to the hosting control panel.

The situation, though individual, is not unique.
Problem with monitoring

Full-fledged monitoring cannot be set up on shared hosting; this was the first thing we ran into.

The reason is the lack of SSH access (more precisely, it was not given to us). As a result, a number of CMS monitoring functions cannot be configured, which means things can be missed. That is a limitation of our proprietary system; but third-party systems we also use, such as Zabbix or Nixstats, have the same problem, since the agents they require cannot be installed on shared hosting.

Nevertheless, monitoring a number of important metrics is still possible, and you can live with that.
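What remains possible is external black-box monitoring: polling each site over HTTP and alerting on failures or slow responses. A minimal sketch using only the Python standard library (the URL and timeout below are placeholders, not the client's real values):

```python
import time
import urllib.request
from urllib.error import URLError


def check_site(url: str, timeout: float = 10.0) -> dict:
    """Black-box availability check: status code and response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed = time.monotonic() - start
            return {"up": resp.status == 200, "status": resp.status,
                    "seconds": round(elapsed, 2)}
    except URLError as exc:  # DNS failure, refused connection, HTTP error, timeout
        return {"up": False, "status": None, "error": str(exc.reason)}


# Example: alert when the site is down or slower than 30 seconds
result = check_site("https://example.com/", timeout=30)
if not result["up"] or result.get("seconds", 0) > 30:
    print("ALERT:", result)
```

A scheduler (cron, a systemd timer, or a monitoring service) would run such a check every minute or so and route the alerts to the notification channels.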
It is difficult to use Git

We use Git to track changes to web site files, and since we only had FTP access, using it becomes almost impossible. This is also a minus for security, because we sometimes use Git to verify file integrity (a non-standard but working solution). Let's move on.
No logs

It is not possible to log system events, track attacks, or catch errors in the operation of a web site, along with a number of other important parameters. For example, you cannot install the fail2ban utility (unless it, or the hosting provider's equivalent, happens to be installed already), which means the site is not protected even from simple brute force.

Logging POST requests will not work either, which means that if the CMS on the site has no built-in function for recording changes, those changes cannot be tracked, and it will be impossible to identify an attack vector.
In some cases, sites are subject to increased security requirements, which implies tools such as a WAF (for example, Nemesida WAF) or a host-based IDS (OSSEC). This is also impossible, again, unless the hosting provider has taken care of it for some reason.

Why? Simple: the hosting provider will not always grant you the rights needed for installation. And since there is no access to the logs, they cannot be analyzed either.

Here and here we have already written about Graylog, which we use, but on shared hosting it is, of course, not applicable without workarounds (and only if you know where the web server stores its logs).

A real-life example: a web site goes down, and we can tell the problem is not the hosting itself. If the cause is not found, the situation will repeat. With a virtual server, you can read the logs, understand what went wrong, and fix it. Without them, you have to act blindly, trying different options, which takes longer and is more labor-intensive.
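For illustration, here is roughly what fail2ban-style brute-force detection looks like when you do have access logs. The login path and threshold below are made-up examples, and the sketch assumes the common combined log format:

```python
import re
from collections import Counter

# Matches the start of a combined-log-format line: client IP, then the request.
LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')


def bruteforce_suspects(lines, login_path="/wp-login.php", threshold=10):
    """Count POSTs to the login page per IP; flag IPs at or over the threshold."""
    hits = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m["method"] == "POST" and m["path"].startswith(login_path):
            hits[m["ip"]] += 1
    return [ip for ip, n in hits.items() if n >= threshold]
```

fail2ban does the same thing and then actually bans the offending IPs at the firewall, which is exactly the step that requires server access you don't have on shared hosting.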
Problems with updates

Want to update 1C-Bitrix? It may require changing web server parameters, which is impossible on shared hosting, since you will never get root access.

Another real-life example: a CMS vulnerability appeared that had to be closed urgently. On the one hand, you can wait for an update, if one ever comes (hardly "urgent"); on the other, we are back to needing to change something on the web server side, which, of course, will not work.

Problems with SSL certificates

On shared hosting, installing an SSL certificate is possible only from the control panel, to which, again, we had no access. For the same reason, if a certificate does not install correctly or stops working, the issue cannot be resolved promptly.
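One thing that can be checked even without the control panel is the certificate itself, straight from the TLS handshake. A sketch with the Python standard library (the hostname and 14-day alert window are illustrative):

```python
import socket
import ssl
from datetime import datetime, timezone


def days_left(cert: dict) -> float:
    """Days until a certificate (as returned by getpeercert) expires."""
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.total_seconds() / 86400


def fetch_cert(host: str, port: int = 443) -> dict:
    """Fetch and validate the server certificate via a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# e.g.: if days_left(fetch_cert("example.com")) < 14: raise the alarm
```

This at least tells you promptly that a certificate is expiring or mis-installed, even if fixing it still requires a support ticket.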
Backup Problem

In the case of shared hosting, we can only hope for a hosting provider who:

a) makes backups;

b) can promptly provide access to them (sometimes for a fee, by the way, and not always promptly).

Moreover, you cannot grab the whole container either, which is a shame.

The reality is that even if the hosting provider makes backups, there is no guarantee it keeps enough of them for long enough, whereas on a virtual server you can configure any schedule and retention depth.
Scaling problem

A pressing topic for large, high-load systems, which has grown into concepts like Kubernetes, but nearly irrelevant for a regular product site. Still, on shared hosting there is, in principle, no way to promptly add memory or CPU power, unlike on a virtual server.

And it will be harder to even find out that such a need has arisen.

There are a couple more problems, but it is impossible to describe them without naming a specific hosting provider (in our case, a well-known one).
The problem with the availability of sites

This is the bottom line: it not only follows from the points above, but is also a problem in its own right.

We began purposefully collecting statistics in mid-September 2020. At the moment, the picture looks like this:

> 30 hours of downtime;

> 270 "alerts".

On a yearly basis these figures may not look scary, but it is a stretch to call this Tier II availability. Note that we only counted time when a site was completely unavailable. If a site opens but you have to wait (sometimes more than 30 seconds), which is effectively also unavailability, the figures above can be multiplied by 1.5-2. We did write to and call support, by the way, but their responses can hardly be called prompt; anyone who has dealt with them will understand.
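For scale: 30 hours of downtime over a year works out to roughly 99.66% availability, while the commonly cited Tier II target of 99.741% allows only about 22.7 hours of downtime a year:

```python
HOURS_PER_YEAR = 365 * 24          # 8760
downtime_hours = 30                # observed over roughly a year
availability = 1 - downtime_hours / HOURS_PER_YEAR
print(f"{availability:.3%}")       # ≈ 99.658%

TIER_II = 0.99741                  # Tier II availability target
allowed = (1 - TIER_II) * HOURS_PER_YEAR
print(f"Tier II allows ≈ {allowed:.1f} h of downtime a year")  # ≈ 22.7 h
# 30 h of downtime therefore falls short of the Tier II level.
```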

It would be unfair not to note positive developments. At the end of the summer, after another prolonged outage, the hosting's reliability improved markedly: two to three times, for sure.

As for handling what monitoring turned up, the situation is of course individual, but it amounted to more than a hundred hours of work by engineers, a system administrator, and a manager. Don't think I'm complaining; that is partly what we get paid for. Still, that time could have been spent more usefully. If the support you pay for gives you nothing, take pity on your employees and give up shared hosting!
The problem with web site ranking (SEO)

How search results are formed is a mystery behind seven seals. Nevertheless, it is no secret that a site's downtime affects its ranking: the more frequent and longer the downtime, the lower the chances of the resource reaching favorable positions.
If hosted sites regularly "freeze" or become unavailable for some (even short) time, then in a bad scenario we get two obvious effects:

    users who try to visit the site at that moment will not wait and will leave, which is a negative signal for the search engine;

    a search engine crawler, like the users, may not wait either, or may simply not see a site that has decided to take a rest.

There is also an interesting article about how "neighbors" on shared hosting affect site ranking (original in English, plus a translation). But that is a completely different story.

The last problem is again related to the hosting provider and its security policy.

Let's call it a bonus!
If you do load testing, they will ban you!

We had an agreement with the client to run load testing twice a year. Nothing criminal, no fanaticism. When it came time for the first run, we started preparing, and to be safe decided to contact support and clarify one point: how would they react to this at all?

We asked the client to contact them and explain the situation, but the result was not surprising:

"Hello!

In case of detection of abnormal activity on a particular hosting service, which poses a threat to the stability of RU-CENTER networks,

the operation of the service may be fully or partially suspended in accordance with the regulations for the provision of the relevant service." (c)

So we can safely say the testing was successful and took minimal effort: we pulled copies of some sites onto our own hosting and ran the load testing there. But who cares?

What did we do about it?

Our first and logical reaction was to offer the client to move to us.

But as I wrote at the very beginning, the client refused to change hosting providers, and that was taboo. As an alternative, we asked them to purchase virtual servers for the sites.

We talked a lot on the topic and explained the situation; just as importantly, we were heard and understood. But then the decision drowned in a series of approvals inside a company that is, after all, large and not without its own quirks.

We tried adjusting the "sensitivity" of monitoring to at least get rid of the borderline false positives when a site was only briefly "stuck", but it didn't help much.

Our conscience does not allow us to check less often, let alone disable monitoring: genuinely important situations could be missed.

A series of negotiations with the client led to positive changes: some sites migrated to virtual servers, though they still live at <url>. With virtual servers the situation has improved dramatically: 5-10 minutes of downtime per quarter! Not perfect, but far more pleasant and adequate.

One site, albeit temporarily, we managed to move to our own hosting (for the duration of major improvements); its uptime is 99.999%, and we hope this will influence further discussions.

What I would like to say in conclusion

If you have a commercial site, please host it on a virtual server. This is the right thing to do.

You cannot live without monitoring, because it is the only way to see a problem before your customers do and respond to it. If your resource has no monitoring, set it up urgently: configure it yourself, use third-party services, or order the setup.

An outside opinion is never superfluous. Audit your hosting and your staff; what's familiar always feels closer and warmer than it is.

Hosting problems are money you can lose without even noticing. It is not polite to count other people's money, so count your own. There is a good article on this topic here.

Don't abandon your web sites – let your customers use your services!

robicse

The problems of shared hosting here are pulled out of thin air to convince the client to buy a VPS from you.
By your logic, a dedicated server would be better than a virtual one, since its resources are allocated to you alone.
And a server cluster would be even better and would provide fault tolerance. And an Internet connection from two providers is more reliable than from one. And dedicated fiber to the server is better still. And your own DC is better than someone else's, since you can't be 100% sure of its reliability. So you can safely persuade the client to build his own DC under your leadership; that will solve all his problems and provide you with a good profit.

And seriously, there are two conflicting interests here: a client who wants things cheaper, and you, the new contractor, who wants to milk the client for all they're worth (did the client ask for POST request monitoring, or was that your suggestion? did the client want resource usage statistics for the entire server?). Shared web hosting just ended up taking the blame.

np.carzspa

The thing is that each task has its own tool.
It would be interesting to hear about the case of moving a client to a VPS with a description of the economic feasibility of the project: how much the client lost due to hosting downtime, how many new orders and customers came in because the site started working better, how much money was saved on incidents your monitoring system caught in time.
Then you would have shown, in all its glory, that the previous tool (shared hosting) was chosen incorrectly and could not solve the business problems properly. Instead, you just dragged up assorted technical peculiarities, both of shared hosting in general and of one specific client in particular, and built on them both the case for moving to a VPS and the claim that shared hosting is last century and a waste of money.

arthyk

And for what weighty reason did the pharma company refuse to move its sites to your hosting? After all, it was clear there would be problems with managing and monitoring the resource. And in general, this whole report just "exposes" you as inexperienced beginners who ran into non-unique problems, "heroically survived" this whole "nightmare", and are even sharing it with others. ::)