Servers in different DCs

Started by anilkumartgsb, Mar 29, 2023, 07:12 AM


anilkumartgsb (Topic starter)

Hello, everyone!

I have a task at hand: to split the current server into three identical replicas in different data centers, so that we are no longer dependent on a single hosting provider.
Can you suggest any solutions for this problem?

I understand the nginx load-balancing option, but it requires keeping copies of all files on the remote servers. The difficulty is the large video files, around 35-60 MB each.

Personally, I believe that using a cluster FS might be a suitable solution. However, I am uncertain if I am heading in the right direction.
If anyone has relevant experience, I would greatly appreciate your advice.

etdigital

We have implemented a system where one server is designated as the master-stat, and all updates are uploaded to it.

The remaining stat servers, located in different data centers, fall back to the master-stat whenever they cannot find a requested file locally and download the missing file from it:

set $root /opt/www/img.domain.com;

location / {
    root $root;
    # Serve the file from local disk if it exists, otherwise fall through to master-stat
    try_files $uri @master-stat;
}

location @master-stat {
    internal;
    # proxy_pass needs an explicit scheme
    proxy_pass http://img.master-stat.domain.com;
    proxy_set_header Host img.domain.com;
    # Save the fetched file to disk so the next request is served locally
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    # Temp path should sit on the same filesystem as $root so files are moved, not copied
    proxy_temp_path /opt/tmp;
    root $root;
}

Additionally, once a day the master-stat runs rsync against all the other servers, so that stale files are removed and changed ones are brought up to date (this is rarely needed, but still necessary).
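
That nightly sync does not need anything fancy; a minimal Python sketch of the cron job on the master-stat (hostnames and paths here are made up, and it assumes key-based SSH so rsync can run unattended):

#!/usr/bin/env python3
# Daily push from the master-stat to the mirror stat servers (run from cron).
import subprocess

SOURCE_DIR = "/opt/www/img.domain.com/"             # trailing slash: sync the directory contents
MIRRORS = ["stat1.domain.com", "stat2.domain.com"]  # hypothetical mirror hosts

for host in MIRRORS:
    # -a preserves permissions and timestamps, -z compresses over the wire,
    # --delete removes files on the mirror that no longer exist on the master.
    subprocess.run(
        ["rsync", "-az", "--delete", SOURCE_DIR, f"{host}:{SOURCE_DIR}"],
        check=True,
    )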

johnmart1

In my current work with Windows Azure, I prefer not to tie myself to specific implementations like nginx or Apache. However, I can share details of the Azure implementation if needed.

Let's address the tasks at hand:
1. Ensuring that one of the servers responds to requests, even in the event of a failure - this can be achieved through DNS tricks.
2. Making sure that copies of user-generated content (UGC), such as large uploaded files, are accessible from the other servers.

To address the second task, here's an approach I would consider: when a server receives a new file, it notifies the other servers of its existence, and they record it in their databases. When a user then requests that file from another server, the file is pulled over to that server, marked there as available, and served to the user.
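
A very rough sketch of that pull-through step, using nothing but the Python standard library (the paths, the origin host name and the omitted database update are all placeholders):

import os
import shutil
import urllib.request

LOCAL_ROOT = "/opt/www/files"               # hypothetical local storage root
ORIGIN = "http://files.origin.domain.com"   # hypothetical server that already has the file

def get_file(relative_path: str) -> str:
    """Return a local path for the file, fetching it from the origin on a miss."""
    local_path = os.path.join(LOCAL_ROOT, relative_path)
    if not os.path.exists(local_path):
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        # Pull the file from the server that received the upload; after this it can
        # be marked as available in the local database (omitted in this sketch).
        with urllib.request.urlopen(f"{ORIGIN}/{relative_path}") as response, \
                open(local_path, "wb") as out:
            shutil.copyfileobj(response, out)
    return local_path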

If our focus is on replicating blog entries, for instance, a simpler approach would be to immediately replicate them to other servers upon receiving a new entry. This ensures consistent availability across all servers.
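
The push variant for small records like blog entries can be equally plain; a sketch with a hypothetical peer list and /replicate endpoint (retries and error handling left out):

import json
import urllib.request

PEERS = ["http://node2.domain.com", "http://node3.domain.com"]   # hypothetical peer servers

def replicate_entry(entry: dict) -> None:
    # Push a newly created entry to every other server as soon as it is saved locally.
    body = json.dumps(entry).encode("utf-8")
    for peer in PEERS:
        request = urllib.request.Request(
            f"{peer}/replicate",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(request, timeout=5)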

raveinfosys

For instance, in the Windows ecosystem, there is a feature called DFSR (Distributed File System Replication) that effectively handles file replication challenges. Additionally, DNS round-robin (RR) can be used to ensure fault tolerance between data centers. As for database replication, it depends on the specific solution, but often a master-slave replication setup with a third data center for resilience is sufficient.

Similarly, I believe that Linux-based systems have their own comparable mechanisms for file replication. These mechanisms may vary depending on the specific distribution or technology used.
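
On the DNS round-robin point above: the fault tolerance comes from clients receiving several A records and trying the next address when one data center is unreachable. A small Python sketch of that client-side behaviour (the hostname is hypothetical; browsers and many resolvers do roughly this on their own):

import socket

HOSTNAME = "www.domain.com"   # hypothetical name with one A record per data center
PORT = 80

def connect_to_any() -> socket.socket:
    # Try each address behind the round-robin name until one accepts the connection.
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            HOSTNAME, PORT, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(sockaddr[:2], timeout=3)
        except OSError as error:
            last_error = error        # that data center is down, try the next one
    raise last_error or OSError("no address could be reached")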