How to automate DB backup

Started by Novel Web Solution, Jul 18, 2022, 01:05 AM


Novel Web Solution (Topic starter)

Making manual backups of the site database has become exhausting, and I'm looking for a way to automate it. Specifically, I want a backup to be created on the hosting automatically three times daily, at 18:00, 21:00, and 24:00, and saved to a folder at the root of my E: drive.

I don't want the 21:00 backup to overwrite the 18:00 backup, or vice versa; each backup should go into a new folder named with the date and time. Is there a good way to accomplish this? Ideally, backups older than three days would also be deleted, so that only the nine most recent copies of the database are kept.
This task must be completed using free hosting services from Beget.


First you need to create a backup dump on the server using cron. There are then several ways to pull it down to a local machine: wget driven by a scheduler, or an FTP client. Since I use Cygwin under Windows, I plan to configure cron and transfer the dump over ssh.


The cron program lets the user run commands at specific times or at regular intervals. Note that cron exists only under Unix, so Windows OS users should consult their hosting provider for advice on how to run processes at the specified time; the rest of this answer is mainly useful on Unix hosts.

To create a backup of the database, open the crontab editor by running crontab -e in the Unix shell and add the following rule:

0 0 * * * /usr/local/bin/mysqldump -uLOGIN -PPORT -hHOST -pPASS DBNAME | gzip -c > `date "+\%Y-\%m-\%d"`.gz

This rule creates a dump of the DBNAME database every day at midnight (00:00) and compresses it with gzip into an archive named after the current date. The date command allows arbitrary formatting of its output, and the backticks tell the Unix shell to substitute the result of that command into the current one. Note that inside a crontab the % character has a special meaning, which is why it is escaped as \%.
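To match the schedule from the question (18:00, 21:00 and midnight, each dump in its own timestamped folder), the same idea can be sketched as a small helper script plus one crontab rule. This is only a sketch: LOGIN, PASS and DBNAME are placeholders, and $HOME/backups is an assumed location on the hosting account -- adjust both for your setup.

```shell
#!/bin/sh
# backup-db.sh -- hypothetical helper; LOGIN, PASS and DBNAME are placeholders.
# Each run dumps the database into a fresh directory named after the current
# date and time, so the 18:00 and 21:00 copies can never overwrite each other.
STAMP=$(date "+%Y-%m-%d_%H-%M")
DIR="$HOME/backups/$STAMP"
mkdir -p "$DIR"
/usr/local/bin/mysqldump -uLOGIN -pPASS DBNAME | gzip -c > "$DIR/DBNAME.sql.gz"
```

Then schedule it via crontab -e with a single rule such as 0 0,18,21 * * * sh $HOME/backup-db.sh. Because the date call lives inside the script rather than on the crontab line, no \% escaping is needed.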

Save the rule for cron and wait for the results. The command creates an archived copy of the database on disk daily. You can easily locate archives by name, which will correspond to the date when they were created, allowing you to restore data if needed.

If you want to automate the removal of old archives, combine cron with the find command, which is available in Unix. Periodically running find ~/directory-with-archives -name "*.gz" -mtime +7 -delete removes archives older than seven days (without -delete, find merely lists the matches). Find's documentation is available through the man find command in the Unix shell.
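To get the three-day window asked for in the question rather than seven days, the cleanup can be sketched like this (assuming the backups live under $HOME/backups; the path is a placeholder):

```shell
# Delete archives whose modification time is more than three full days old.
# -mtime +3 matches such files; -delete removes each match in place.
find "$HOME/backups" -name "*.gz" -mtime +3 -delete
```

A matching crontab rule, e.g. 30 0 * * * find $HOME/backups -name "*.gz" -mtime +3 -delete, runs the cleanup nightly, which with three dumps per day leaves roughly the nine most recent copies.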

If your own machine is constantly connected to the Internet, you can also have cron copy each new backup to it, for example with the ftp or scp commands; add that transfer to cron as well. However reliable the hosting provider's machine may be, it is better to be safe than sorry.
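From the Cygwin side mentioned earlier, the transfer can be sketched as a crontab entry on the local machine. USER, HOST and the remote backups/ path are placeholders, and key-based ssh authentication is assumed so that no password prompt blocks the cron job:

```
# Local (Cygwin) crontab fragment: pull fresh archives down to E:\backups
# shortly after each dump is made on the server. /cygdrive/e is how Cygwin
# exposes the Windows E: drive.
15 0,18,21 * * * scp -r USER@HOST:backups/* /cygdrive/e/backups/
```

Note that scp -r re-copies everything on each run; if rsync is installed on both sides, it would transfer only the new timestamped folders.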