LP #1589334: After upgrading to 5.7 mysqldump much slower

Description

Reported in Launchpad by Nikita Belecky, last update 05-08-2016 11:11:42

1. Get the latest 5.7 version (I use the official Docker container, but the bug is also reproduced if I build a package from source)
root@4d9cbcb420f4:/# mysql --version
mysql Ver 14.14 Distrib 5.7.11-4, for debian-linux-gnu (x86_64) using 6.3
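
For example, a throwaway container can be started like this (the container name ps57 and the root password are placeholders, following the verification script later in this ticket):

docker run -d -e MYSQL_ROOT_PASSWORD=secret --name ps57 percona:5.7
docker exec -it ps57 bash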

2. Create 300k InnoDB tables with a simple Python/PHP script; the greater the number of tables, the slower the dump.
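
For example, a minimal shell sketch (assuming a scratch database named test and root credentials; the verification script later in this ticket uses the same approach):

mysql -e 'CREATE DATABASE IF NOT EXISTS test'
for i in $(seq 300000); do
    echo "CREATE TABLE t$i (id int) ENGINE=InnoDB;"
done | mysql test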

3. Download and load the sakila database:
wget http://downloads.mysql.com/docs/sakila-db.tar.gz
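
This only downloads the dump files; loading them can be done along the lines of the verification script later in this ticket, for example:

tar xzf sakila-db.tar.gz
mysql -e 'CREATE DATABASE IF NOT EXISTS sakila'
cat sakila-db/sakila-schema.sql sakila-db/sakila-data.sql | mysql sakila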

4. Dump the sakila database per table:
root@4d9cbcb420f4:/# time mysql sakila -BNe 'show tables' | while read line; do mysqldump --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila $line > /tmp/$line.sql; done

real 0m33.789s

Or just the whole database:

root@4d9cbcb420f4:/# time mysqldump --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila > /tmp/sakila.sql

real 0m2.023s
user 0m0.104s
sys 0m0.028s

5. Percona 5.6 (Per table):

root@daeaf38383ef:/# time mysql sakila -BNe 'show tables' | while read line; do mysqldump --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila $line > /tmp/$line.sql; done

real 0m0.606s

Just the whole database:

root@daeaf38383ef:/# time mysqldump --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila > /tmp/sakila.sql

real 0m0.281s

6. Most of the time is spent querying INFORMATION_SCHEMA.FILES:
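
A quick way to compare the two servers is to time that query directly from the shell, for example:

time mysql -e 'SELECT COUNT(*) FROM INFORMATION_SCHEMA.FILES'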

Percona 5.7:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.FILES;
+----------+
| COUNT(*) |
+----------+
|   327317 |
+----------+
1 row in set (1.98 sec)

Percona 5.6:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.FILES;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

Environment

None

Activity

Nickolay Ihalainen January 10, 2019 at 2:11 PM

Verified as described.

Create the required Docker images with libeatmydata:

 

FROM percona:5.7-stretch

# Install eat my data
RUN apt-get update && apt-get install -y eatmydata && apt-get clean

# Wrap the entrypoint with eat my data
ENTRYPOINT ["eatmydata", "docker-entrypoint.sh"]
CMD ["mysqld"]

 

Build image:

 

docker pull library/percona:5.7-stretch;docker build -t eatmydata-ps-5.7 .

 

For the 5.6 image, replace the FROM tag with 5.6-stretch and build it as eatmydata-ps-5.6.
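
For example, assuming the FROM line has been changed accordingly:

docker pull library/percona:5.6-stretch; docker build -t eatmydata-ps-5.6 .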

Create test.sh script:

#!/bin/bash
P="--user=root --password=secret --protocol=tcp"
IMG=$1
docker run -d -e MYSQL_ROOT_PASSWORD=secret --name ps3460 $IMG
wget -c https://downloads.mysql.com/docs/sakila-db.tar.gz 2>/dev/null
tar xzf sakila-db.tar.gz
docker exec -i ps3460 bash -c "while ! (mysqladmin $P --silent -c 2 -i 1 --wait=300 ping &>/dev/null) ;do echo -n . ; sleep 1 ; done"
docker exec -i ps3460 bash -c "mysql --silent $P -e 'create database test;create database sakila;select version()' 2>/dev/null"
cat sakila-db/sakila-schema.sql sakila-db/sakila-data.sql | docker exec -i ps3460 bash -c "mysql sakila --silent $P 2>/dev/null"
echo -n "$IMG only sakila, per-table"
docker exec -i ps3460 bash -c "time mysql $P sakila -BNe 'show tables' 2>/dev/null | while read line; do mysqldump $P --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila $line > /tmp/$line.sql 2>/dev/null ; done"
echo -n "$IMG only sakila, whole database"
docker exec -i ps3460 bash -c "time mysqldump $P --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila > /tmp/sakila.sql 2>/dev/null"
docker exec -i ps3460 bash -c "for i in \$(seq 300000) ; do echo CREATE TABLE t\$i \\(id int\\) engine=innodb\\; ; done|mysql $P test 2>/dev/null"
#docker exec -i ps3460 bash -c "time mysql $P test -BNe 'show tables' 2>/dev/null"
sync;sync;sync
echo -n "$IMG many tables in test, sakila, per-table"
docker exec -i ps3460 bash -c "time mysql $P sakila -BNe 'show tables' 2>/dev/null | while read line; do mysqldump $P --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila $line > /tmp/$line.sql 2>/dev/null ; done"
echo -n "$IMG many tables in test, sakila, whole database"
docker exec -i ps3460 bash -c "time mysqldump $P --max_allowed_packet=1024M --force --compress --hex-blob --skip-extended-insert --opt --quote-names --routines --single-transaction --default-character-set=utf8 sakila > /tmp/sakila.sql 2>/dev/null"
docker exec -it ps3460 rm -rf /var/lib/mysql/*
docker rm -f ps3460 &>/dev/null

 

 

Run the test script with the 5.6 and 5.7 images:

for i in eatmydata-ps-5.6 eatmydata-ps-5.7 ; do ./test.sh $i 2>&1 ; done |tee -a bug.log

 

5.7.23-24

eatmydata-ps-5.7 only sakila, per-table                        real 0m6.863s
eatmydata-ps-5.7 only sakila, whole database                   real 0m0.264s
eatmydata-ps-5.7 many tables in test, sakila, per-table        real 1m5.419s
eatmydata-ps-5.7 many tables in test, sakila, whole database   real 0m2.559s

 

 

lpjirasync January 24, 2018 at 9:38 AM

Comment from Launchpad by Georgi Georgiev on 05-08-2016 11:11:41

Hello,

We have the same problem: after upgrading from 5.6 to 5.7, mysqldump for a single empty table on a server with many InnoDB databases/tables became much slower. We found that the reason for this behaviour is the following two queries, which are executed by mysqldump:

SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA='test_db' AND TABLE_NAME IN ('test_table'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE, EXTRA ORDER BY LOGFILE_GROUP_NAME;

SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA='test_db' AND TABLE_NAME IN ('test_table')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME;

In 5.6 the table INFORMATION_SCHEMA.FILES is empty and the above queries are fast. In 5.7, if you use the innodb_file_per_table option, INFORMATION_SCHEMA.FILES has an entry for each InnoDB table on the server, so on a server with thousands of InnoDB tables the above queries are slow and mysqldump takes longer. Our solution was to use mysqldump's "--no-tablespaces" option, as we are not using general tablespaces for our tables, but where general tablespaces are used I guess this is not an option.
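
As a sketch, that workaround applied to the per-table loop from the original report would look like this (only appropriate when no general tablespaces are in use):

mysql sakila -BNe 'show tables' | while read line; do
    mysqldump --no-tablespaces --max_allowed_packet=1024M --force --compress \
        --hex-blob --skip-extended-insert --opt --quote-names --routines \
        --single-transaction --default-character-set=utf8 \
        sakila "$line" > /tmp/"$line".sql
done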

Done

Details

Created January 24, 2018 at 9:37 AM
Updated December 25, 2024 at 9:51 AM
Resolved December 25, 2024 at 9:51 AM