If you like DNray Forum, you can support it by - BTC: bc1qppjcl3c2cyjazy6lepmrv3fh6ke9mxs7zpfky0 , TRC20 and more...

 

File Modification Management in Hosting

Started by BluellFrono, Oct 13, 2023, 12:14 AM


BluellFrono (Topic starter)

Managing file alterations on the web host - how is it done?
We have:

A VPS hosting two websites (a and b)
Shared hosting account 1 with one website (c)
Shared hosting account 2 with one website (d)
An Internet-connected VM

Requirements include:

Observing file modifications
Reverting changes when necessary
Validating planned alterations

I'm under the impression that a version-control system such as Git could be the solution, but I'm uncertain about how it works and what the update procedure would look like.


enfomaemotte

Indeed, using a version control system like Git can help you manage, observe, and revert changes in your files. To get the best use out of Git, you will need to set up a workflow appropriate to your needs. Here's a general guide to how you might accomplish this:

Initialize Git repositories: For each of your websites (a, b, c, d), which are stored in different locations, initialize a separate Git repository. This is done by navigating to the top-level directory of each site (where its files are stored) and running the command git init.

Commit Initial Version: Start by adding all the files to the repository and creating an initial commit. Use commands git add . to add files and git commit -m "Initial commit" to commit the changes.
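A minimal sketch of these first two steps. The directory, file, and identity below are placeholders; in practice you would run this inside each site's document root:

```shell
# Placeholder setup: a temporary directory stands in for a site's document root.
cd "$(mktemp -d)"
git init -b main                            # -b needs git >= 2.28; otherwise plain: git init
git config user.email "admin@example.com"   # placeholder identity for commits
git config user.name  "Site Admin"
echo "<h1>Site A</h1>" > index.html         # example site content
git add .                                   # stage every file
git commit -m "Initial commit"              # record the first snapshot
```

From here, `git log --oneline` shows the single initial commit for that site.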

Remote Repositories: You should then create remote repositories (on a service like GitHub, GitLab, or Bitbucket) and link each of your local repositories to a remote one. This lets you track changes across different environments, and you get an extra backup of your code. Use git remote add origin <remote-repo-url> to link a repository, and git push -u origin main (or git push -u origin master if your default branch is named master) to push your code to the remote.
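The linking step can be tried out entirely locally; in this sketch a bare repository on disk stands in for the hosted <remote-repo-url>:

```shell
remote=$(mktemp -d)/site-a.git    # stand-in for the GitHub/GitLab/Bitbucket URL
git init --bare "$remote"         # a bare repo has no working files, only history
cd "$(mktemp -d)"                 # stand-in for the site's document root
git init -q -b main
git config user.email "admin@example.com" && git config user.name "Site Admin"
echo "hello" > index.html
git add . && git commit -q -m "Initial commit"
git remote add origin "$remote"   # link the local repo to the remote
git push -u origin main           # upload and set the upstream branch
```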

Regular Observations and Updates: Version control doesn't automatically track file modifications. Each time a modification occurs, the changes need to be staged with the git add command and then committed with git commit. If modifications happen regularly, these routine steps can be automated with scripts or tools.
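One way to automate that routine add/commit cycle is a small script run from cron. The sketch below is self-contained for illustration; the cron schedule and document-root path in the comments are assumptions:

```shell
# Simulated site repo (in practice: a document root already under Git).
cd "$(mktemp -d)"
git init -q -b main
git config user.email "cron@example.com" && git config user.name "Auto Commit"
echo "v1" > page.html
git add -A && git commit -q -m "Initial commit"

echo "v2" > page.html                        # simulate a file modification on the server
# The part you would schedule, e.g.: */30 * * * * cd /var/www/site-a && ...
if [ -n "$(git status --porcelain)" ]; then  # commit only when something changed
    git add -A
    git commit -q -m "Auto-commit: $(date -u +%Y-%m-%dT%H:%MZ)"
fi
```

Committing only when `git status --porcelain` reports changes keeps the history free of empty commits.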

Reverting Changes: If a bad change is committed, you can undo it with commands such as git revert <commit-id>, which creates a new commit undoing the changes of a specific commit, or git reset --hard <commit-id>, which discards all commits after a specific commit along with any uncommitted changes. Remember that the --hard option may discard your changes permanently.
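The difference between the two commands can be seen in a throwaway repo (file names and messages here are illustrative):

```shell
cd "$(mktemp -d)"
git init -q -b main
git config user.email "admin@example.com" && git config user.name "Site Admin"
echo "good" > config.php && git add -A && git commit -q -m "good change"
echo "bad"  > config.php && git add -A && git commit -q -m "bad change"
git revert --no-edit HEAD         # adds a third commit that undoes the bad one
cat config.php                    # prints: good
# Destructive alternative (rewrites local history, use with care):
# git reset --hard HEAD~1
```

`git revert` is the safer choice on a shared history, because it records the undo instead of erasing commits.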

Planned Alterations: If you're planning changes, it's best practice to create a new branch with git checkout -b <branch-name>, make your modifications there, and merge them back into the main (or master) branch when you're ready. Once the changes are validated and ready for the production environment, you can pull them manually on the respective hosts with git pull origin main.
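The branch-then-merge flow, sketched with placeholder branch and file names:

```shell
cd "$(mktemp -d)"
git init -q -b main
git config user.email "admin@example.com" && git config user.name "Site Admin"
echo "base" > style.css && git add -A && git commit -q -m "base styles"
git checkout -q -b redesign       # planned work happens on its own branch
echo "new" > style.css && git add -A && git commit -q -m "redesign styles"
git checkout -q main              # main is untouched until you decide to merge
git merge --no-edit redesign      # validated changes land on main
```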

Set up Automatic Deployment: For deploying changes onto servers, you can use a CI/CD pipeline or even a simple git pull approach where changes are pulled from the repository and applied to the live server automatically whenever an update is pushed to the main branch of your remote repository.
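The simple pull-based approach is often implemented with a post-receive hook on a bare repository on the server. Below is a self-contained sketch; the directory names are placeholders for your actual server paths:

```shell
set -e
base=$(mktemp -d) && cd "$base"
git init -q --bare hub.git        # the repo you push to (e.g. on the VPS)
mkdir www                         # stand-in for the live web root
# Hook: after every push, check the main branch out into the web root.
cat > hub.git/hooks/post-receive <<EOF
#!/bin/sh
git --work-tree="$base/www" --git-dir="$base/hub.git" checkout -f main
EOF
chmod +x hub.git/hooks/post-receive

git init -q -b main work && cd work    # stand-in for your local working copy
git config user.email "admin@example.com" && git config user.name "Site Admin"
echo "live" > index.html
git add -A && git commit -q -m "deploy me"
git remote add origin "$base/hub.git"
git push -q origin main           # the hook deploys index.html into www/
```

In a real setup the bare repo and web root live on the server and you push over SSH; the hook mechanism is the same.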

Notifications: You can also set up notifications for alterations: whenever a push to your remote repository triggers a deployment, an email or chat notification is sent. How this works depends on the platform; GitHub Actions, Bitbucket Pipelines, Jenkins, and CircleCI can all automate these processes and notifications.


As a more detailed continuation, adding a few layers of complexity to the previously mentioned setup can offer you more control and security:

Pull Requests and Code Reviews: Instead of directly merging changes into the main branch, you create a "pull request" - a proposal to merge your changes into the main codebase. This gives others on your team the opportunity to review your work, provide feedback, and potentially catch issues before they go live.

Test Environments: Before deploying changes to your live website, you may want to first deploy them on a test environment. This is typically a separate server or set of servers that mimics your live environment as exactly as possible. You can then use these environments to thoroughly test your changes.

Automated Testing: To further guard against regressions or bugs in your website, you can also integrate automated testing into your deployment process. With a tool like Jenkins, Travis CI, or GitHub Actions, you can set up your system to automatically run your suite of automated tests every time changes are pushed to your repository.

Access Control and Security: Git allows you to control who can access and make changes to your repository. This can help you prevent unauthorized changes and provide an audit trail in the case changes need to be reviewed or reverted.

Backup and Disaster Recovery: Having a good backup strategy is essential. Git provides a certain level of data redundancy since the entire repository is cloned on every developer's machine and potentially external servers if you're using a hosted service. However, it's also a good idea to have a separate backup system in place.

Configuration Management: In many cases, you'll have configuration settings that differ between your development, testing, and production environments. Tools like Ansible, Chef, or Puppet can help you manage this, ensuring that you have the correct settings in each environment.

Monitoring and Alerting: Once the site is live, it's crucial to have a system in place for monitoring the site and alerting you about any issues that arise. There are many tools available for this, from simple uptime monitors like Pingdom, to more complex application monitoring solutions like New Relic or Logz.io.


If you're dealing with multiple servers for multiple websites and a version control system, you might want to look into additional tooling to support management and monitoring at this scale. Here are some more steps to optimize your code and configuration management:

Infrastructure As Code (IaC): Tools like Terraform and CloudFormation allow you to manage and provision your data centers through code. This means that the process of setting up a new server or duplicating an existing one can become as simple as running a script. This makes it easier to keep different environments synchronized and can greatly reduce the time and effort required to troubleshoot environment-specific issues.

Containerization: Tools like Docker allow for simplified packaging, distribution, and deployment of applications. Containers bundle up an application with all of its dependencies and run it in a virtualized environment, meaning it should operate the same way regardless of where it's run. This can reduce the "it works on my machine" issues that sometimes occur when trying to go from development to production.

Orchestration: If you're using containers, you may also benefit from an orchestration tool like Kubernetes or Docker Swarm. These tools help you manage, scale, and maintain your containers across multiple servers.

Continuous Integration/Continuous Deployment (CI/CD): CI/CD pipelines use automation to make deployment simpler and more reliable. When code is committed to the repository, it is automatically tested and built (CI, Continuous Integration). If everything passes, it is automatically deployed to the production environment (CD, Continuous Deployment).

Manage Certificates and Secrets: Tools like HashiCorp's Vault provide secure storage of and access to secrets. A "secret" is anything you want to tightly control access to, such as credentials, API keys, and certificates. Secrets should never be stored in the Git repository, for security reasons.

Web Application Firewalls (WAF): WAFs help to protect your sites from various exploits and attacks that can be carried out against them. Deploying one as part of your infrastructure provides an additional layer of security.

Logging and Distributed Tracing: Tools like the ELK Stack (Elasticsearch, Logstash, and Kibana), Loki, or Datadog can enhance your ability to monitor, manage, and diagnose issues across your many codebases and servers when things go wrong.

Performance Monitoring: Tools like New Relic, AppDynamics, Stackify Retrace, and many others provide deep insights into how your code is running in production, allowing you to find slow queries, slow pages, and other performance bottlenecks.

Managing multiple websites across different types of hosting can be a logistical challenge, but by leveraging these tools and methodologies you can make it manageable, secure, and efficient.

Cathe2525

Git is installed on the shared hosting; I verified this today.

Deciding how to get started remains a concern...

I'm thinking of initiating a repository on the virtual machine and pushing the shared-hosting and VPS content into it.

If there's a detailed guide that you're aware of, kindly share it with me...

Don't forget to commit your changes frequently while working on a project; this gives you a safe rollback point if anything goes wrong or the project takes an unexpected turn.

johnmart1

Manage modifications through Git-based deployment, as done at booking.com.

Preventing uncontrolled changes is often more desirable.

When dealing with site infections, one can rely on CXS (ConfigServer eXploit Scanner).

For collaborative work amongst multiple individuals, using git is generally the superior approach.

Git ensures that changes are properly versioned, which can avoid potential conflicts and make it easier to rollback in case of errors. This is why it's especially valuable when multiple people are working in parallel.

