Servers in different DCs

Started by anilkumartgsb, Mar 29, 2023, 07:12 AM


anilkumartgsb (topic starter)

Hello, everyone!

I have a task at hand: to split the current server into three exact replicas in different data centers, so as to become completely independent of any single hosting provider.
Can you suggest any solutions for this problem?

I'm aware of the nginx load-balancing option, but it requires keeping copies of all files on the remote servers. The difficulty is the large video files, roughly 35-60 MB each.
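Roughly, the balancing variant I have in mind would look something like this (the backend names are placeholders), with each backend expected to hold its own full copy of the content:

# hypothetical upstream of three replicas in different DCs, each with a full copy of the files
upstream replicas {
    server dc1.example.com;
    server dc2.example.com;
    server dc3.example.com;
}

server {
    listen 80;
    server_name www.example.com;   # placeholder

    location / {
        # distribute requests across the replicas (round-robin by default)
        proxy_pass http://replicas;
        proxy_set_header Host $host;
    }
}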

Personally, I believe that using a cluster FS might be a suitable solution. However, I am uncertain if I am heading in the right direction.
If anyone has relevant experience, I would greatly appreciate your advice.

etdigital

We have implemented a system where one server is designated as the master-stat, and all updates are uploaded to it.

The remaining stat servers, located in different data centers, refer to the master-stat whenever they cannot find a requested file locally and download the missing file from it:

# shared document root for locally stored static files
set $root /opt/www/img.domain.com;

location / {
    root $root;
    # serve the local copy if it exists, otherwise fall back to the master-stat
    try_files $uri @master-stat;
}

location @master-stat {
    internal;
    # fetch the missing file from the master-stat server (the scheme is required here)
    proxy_pass http://img.master-stat.domain.com;
    proxy_set_header Host img.domain.com;
    # keep a local copy under $root so subsequent requests are served locally
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    # temporary location for in-flight downloads (ideally on the same filesystem as $root)
    proxy_temp_path /opt/tmp;
    root $root;
    break;
}
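For completeness, the master-stat side needs nothing special; a minimal sketch, assuming it simply serves the canonical copies as a plain static vhost:

# hypothetical vhost on the master-stat server: serves the canonical copies from disk
server {
    listen 80;
    server_name img.master-stat.domain.com;

    location / {
        root /opt/www/img.domain.com;
    }
}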

Additionally, once a day the master-stat server runs an rsync against all the other servers, removing stale files and synchronizing any that were updated in place (this is rare, but still necessary).

johnmart1

In my current work with Windows Azure, I prefer not to be limited to specific implementations like nginx or Apache. However, I can share specific insights on the Azure implementation if needed.

Let's address the tasks at hand:
1. Ensuring that one of the servers responds to requests, even in the event of a failure - this can be achieved through DNS tricks.
2. Making sure that copies of user-generated content (UGC), such as large downloaded files, are accessible from other servers.

To address the second task, here's the approach I would consider: when a server receives a new file, it notifies the other servers of its existence, and they record it in their databases. When a user later requests that file from another server, the file is pulled over to that server, marked as available locally, and then served to the user.

If our focus is on replicating blog entries, for instance, a simpler approach would be to immediately replicate them to other servers upon receiving a new entry. This ensures consistent availability across all servers.

raveinfosys

For instance, in the Windows ecosystem, there is a feature called DFSR (Distributed File System Replication) that effectively handles file replication challenges. Additionally, DNS round-robin (RR) can be used to ensure fault tolerance between data centers. As for database replication, it depends on the specific solution, but often a master-slave replication setup with a third data center for resilience is sufficient.

Similarly, I believe that Linux-based systems have their own comparable mechanisms for file replication. These mechanisms may vary depending on the specific distribution or technology used.

IdeaPad

The option of using a distributed file system (DFS) such as GlusterFS or Ceph is indeed a suitable solution for this scenario.

A DFS pools the storage of multiple servers into a single unified system, so data is distributed and replicated across the nodes transparently, giving you fault tolerance, load balancing, and scalability. In your case, each of the three replicas would act as a storage node in the cluster, and the 35-60 MB video files would be distributed across them, so the storage load stays balanced and no single replica becomes a bottleneck.

Setting up a DFS does require careful planning and configuration, but once it is in place, replication and distribution of files happen automatically. That addresses both goals at once: autonomy from a single hosting provider and efficient handling of the large video files across the replicas.
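As a rough illustration (the mount point and names below are placeholders), once the GlusterFS or Ceph volume is mounted on every node, each replica's web server simply serves from that mount and sees the same files as the others:

# hypothetical vhost on each replica; /mnt/dfs is a placeholder path where the
# GlusterFS/Ceph volume is mounted on every node
server {
    listen 80;
    server_name video.example.com;   # placeholder

    location /videos/ {
        root /mnt/dfs;    # files replicated by the distributed file system underneath
        sendfile on;      # efficient delivery of the large video files
    }
}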