
 

How to quickly upload large dump to MySQL?

Started by Franklin, Mar 02, 2023, 12:27 AM


Franklin (Topic starter)

Is there anything in MySQL comparable to PostgreSQL's binary dump format, which can be restored in several parallel streams? I ask because I have a sizable MySQL dump that has to be loaded once a week with mysql dbname < dbname.sql, and the import takes an unreasonably long time.

On a related note, does anyone know of other convenient backup options for MySQL?


Tatwa Technologies

To speed up loading MyISAM tables, increase key_buffer_size. For InnoDB, a helpful approach is to wrap the dump with SET FOREIGN_KEY_CHECKS=0 at the start and SET FOREIGN_KEY_CHECKS=1 at the end.
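For example, a quick way to bump the key buffer for the duration of the load (the 256 MB value and the root account are placeholders to adjust for your server):

mysql -u root -p -e "SET GLOBAL key_buffer_size = 268435456;"  # 256 MB MyISAM index cache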

For additional information on enhancing bulk data loading in InnoDB, consider referring to the following resource: https://dev.mysql.com/doc/refman/8.0/en/optimizing-innodb-bulk-data-loading.html.

VivianStevenson


SET autocommit=0;          -- commit once at the end instead of after every statement
SET unique_checks=0;       -- skip secondary unique index checks during the load
SET foreign_key_checks=0;  -- skip foreign key validation during the load

To optimize InnoDB bulk data loading, temporarily disable autocommit, unique checks, and foreign key checks by setting the variables above to 0. Once the load is finished, re-enable the checks by setting them back to 1 and issue a COMMIT. More detail is in the MySQL documentation: https://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-bulk-data-loading.html
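A minimal sketch of how that might look on the command line, assuming the dump file is dbname.sql and the target database already exists (credentials are placeholders):

( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat dbname.sql
  echo "SET unique_checks=1; SET foreign_key_checks=1; COMMIT;"
) | mysql -u username -p dbname

This feeds the whole dump through a single mysql session, so the final COMMIT applies everything that was loaded while autocommit was off.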

bewennick

In my experience, I have only dealt with recovery from a cold backup, which essentially means copying the database files. To do that, you first need to back up the data files themselves; this is usually done on a replica that is temporarily stopped, so there is no downtime on the master.
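A rough sketch of that replica-based cold backup, run on the replica (paths, the backup destination, and the exact replication commands are assumptions; older versions use STOP SLAVE / START SLAVE):

# Pause replication and shut the replica down cleanly
mysql -u root -p -e "STOP REPLICA;"
sudo systemctl stop mysql

# Copy the data directory while mysqld is not running
sudo rsync -a /var/lib/mysql/ /backups/mysql-$(date +%F)/

# Bring the replica back up and let it catch up with the source
sudo systemctl start mysql
mysql -u root -p -e "START REPLICA;"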

Traditional backup and recovery methods tend to be less effective when dealing with databases that are several hundred gigabytes in size.

For databases that large, alternatives such as incremental backups, replication-based backups, or cloud-based solutions tend to be more efficient and reliable: they shorten backup and recovery times and scale better. Purpose-built MySQL backup tools (physical rather than logical backups) improve the process further.

kerry28vann

One option is to use mysqldump with the --extended-insert and --quick options. --extended-insert makes mysqldump emit multiple-row INSERT statements, which significantly speeds up the subsequent import. --quick makes mysqldump retrieve rows from the server one at a time instead of buffering entire tables in memory, which matters when dumping large tables. Both options are enabled by default in current mysqldump versions (as part of --opt), so the main thing is not to turn them off.
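For example (the account and database names are placeholders; the options are spelled out here even though recent mysqldump versions enable them by default):

# Multi-row INSERTs plus row-at-a-time retrieval from the server
mysqldump -u username -p --extended-insert --quick dbname > dbname.sql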

Another option is to use the mydumper tool, which is a multi-threaded mysqldump alternative that can dump large databases in parallel. Mydumper can be configured to use multiple threads to dump different tables or even different parts of the same table, which can significantly speed up the dump process.
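A typical invocation might look like this (thread count, output directory, and credentials are assumptions to adapt; check mydumper --help for the option names in your build):

# Dump with 4 threads into /backups/dbname
mydumper -u username -p password -B dbname -o /backups/dbname -t 4

# Restore in parallel with the companion myloader tool
myloader -u username -p password -B dbname -d /backups/dbname -t 4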

As for convenient backup options, MySQL itself ships several alternatives. One is mysqlpump, a utility that dumps databases more flexibly than mysqldump and can process multiple databases and tables in parallel with a configurable number of threads. (Note that it does not do incremental backups, and it has been deprecated in recent MySQL 8.0 releases.)
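For instance, a parallel dump with mysqlpump might look like this (the parallelism value and names are placeholders; see the MySQL 8.0 manual for the full option list):

# Dump dbname using up to 4 parallel threads
mysqlpump -u username -p --default-parallelism=4 dbname > dbname_pump.sql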

Another option is a third-party tool like Percona XtraBackup, a popular open-source backup tool for MySQL. XtraBackup takes online (hot) backups, so the database remains available to your applications during the backup. It also supports incremental backups and is well suited to very large databases.
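A minimal sketch of a full online backup with XtraBackup (target directory and credentials are placeholders; incremental runs add --incremental-basedir on top of a previous full backup):

# Take a full online (hot) backup into /data/backups/full
xtrabackup --backup --user=username --password=secret --target-dir=/data/backups/full

# Prepare the backup so it is consistent and ready to restore
xtrabackup --prepare --target-dir=/data/backups/full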

As for splitting the dump into multiple streams, you can combine mysqldump with the split command to break the dump into smaller files and then feed the pieces to the mysql client. This needs careful planning and scripting, because a monolithic dump has ordering dependencies (CREATE TABLE before INSERT, foreign key references between tables).

Here is an example of using mysqldump and split to break the dump into smaller files (with -p and no password argument, both tools prompt for the password):

mysqldump -u username -p dbname > dump.sql
split -C 100m dump.sql dump_

This produces pieces of at most 100 MB each; -C (--line-bytes) keeps whole lines together, so the long single-line INSERT statements that mysqldump writes are not cut in half the way split -b would cut them. You can then import the pieces with the mysql client:

mysql -u username -p dbname < dump_aa
mysql -u username -p dbname < dump_ab
...

Note that the pieces of a single monolithic dump generally have to be applied in this order, since later chunks depend on the schema created by earlier ones. To get real parallelism you need independent files per table, for example from mysqldump --tab or from mydumper, so that unrelated tables can be imported concurrently.
While there isn't a direct equivalent of PostgreSQL's parallel binary dump and restore in MySQL, several things can speed the process up: mysqldump with --extended-insert and --quick, the multi-threaded mydumper/myloader pair, parallel dumps with mysqlpump, and physical backups with Percona XtraBackup. You can also split a dump into smaller files and import them in parallel, with the caveats above.

