Read Free Replication fails for an index containing a blob field
Attachments
- 16 Nov 2018, 01:55 PM
Activity
Lalit Choudhary April 22, 2019 at 10:06 AM
Thank you for the report.
Due to no feedback for a long time on the requested details, we are closing this bug. Please file a new bug with a repeatable test case if you are still able to reproduce the issue.
Lalit Choudhary November 16, 2018 at 1:54 PM
Hi @Fungo Wang
Tested the described behavior with a Percona Server 5.6.41-84.1 binary tarball.
I cannot reproduce it with the given test case. Are you using a custom build with a specific option?
MTR test case attached.
$ ./mtr --suite=tokudb bug4932.test
Logging: ./mtr --suite=tokudb bug4932.test
2018-11-16 19:14:41 0 [Warning] Insecure configuration for --secure-file-priv: Current value does not restrict location of generated files. Consider setting it to a valid, non-empty path.
2018-11-16 19:14:41 0 [Note] /home/lalit/mysql_tar/percona/5.6.41/bin/mysqld (mysqld 5.6.41-84.1) starting as process 26744 ...
2018-11-16 19:14:41 26744 [Note] Plugin 'FEDERATED' is disabled.
2018-11-16 19:14:41 26744 [Note] Binlog end
2018-11-16 19:14:41 26744 [Note] Shutting down plugin 'MyISAM'
2018-11-16 19:14:41 26744 [Note] Shutting down plugin 'CSV'
MySQL Version 5.6.41
Checking supported features...
- SSL connections supported
Collecting tests...
Checking leftover processes...
Removing old var directory...
Creating var directory '/home/lalit/mysql_tar/percona/5.6.41/mysql-test/var'...
Installing system database...
Using parallel: 1
==============================================================================
TEST                                      RESULT   TIME (ms) or COMMENT
------------------------------------------------------------------------------
worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 13000..13009
include/master-slave.inc
Warnings:
Note #### Sending passwords in plain text without SSL/TLS is extremely insecure.
Note #### Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
[connection master]
set global tokudb_rpl_lookup_rows = off;
stop slave;
start slave;
create table t1(id int auto_increment, tid int, msg text, primary key(id), index idx_msg(tid, msg(16))) engine=TokuDB;
insert into t1 values (1, 1, 'hi'), (2, 2, 'hello'), (3, 3, 'nihao'), (4, 4, 'MySQL');
select count(tid) from t1 force index(primary);
count(tid)
4
select count(tid) from t1 force index(idx_msg);
count(tid)
4
select count(tid) from t1 force index(primary);
count(tid)
4
select count(tid) from t1 force index(idx_msg);
count(tid)
4
update t1 set msg = 'InnoDB' where id = 4;
select count(tid) from t1 force index(primary);
count(tid)
4
select count(tid) from t1 force index(idx_msg);
count(tid)
4
select count(tid) from t1 force index(primary);
count(tid)
4
select count(tid) from t1 force index(idx_msg);
count(tid)
4
select tid from t1 force index(primary);
tid
1
2
3
4
select tid from t1 force index(idx_msg);
tid
1
2
3
4
tokudb.bug4932 [ pass ] 562
Fungo Wang October 15, 2018 at 10:35 AM
Just tried RocksDB; it is also affected if rocksdb_read_free_rpl_tables is enabled for table test.t1.
Fungo Wang October 15, 2018 at 10:16 AM
After some debugging and code investigation, I found that the root cause is that blob field data is organized into table->record differently from other kinds of fields.
For INT, VARCHAR, etc. fields, the data is stored directly in table->record.
For a blob field, only the length and a memory pointer (of type uchar*) are stored in table->record; the actual data lives in the memory pointed to by that pointer.
In the code below, the before image m_table->record[1] and the after image m_table->record[0] hold the same blob data pointer, which is table->blob_field->value.ptr().
With the first unpack_current_row(), the BI is unpacked successfully into table->record[0] and then copied to table->record[1].
With the second unpack_current_row(), the AI is unpacked into table->record[0], updating the memory behind value.ptr().
Because the BI and AI point to the same blob data, the blob data referenced by the BI is overwritten. The data finally passed to ha_update_row() is therefore wrong, and the old row's index entry is not deleted.
int
Update_rows_log_event::do_exec_row(const Relay_log_info *const rli)
{
  DBUG_ASSERT(m_table != NULL);
  int error= 0;

  if (m_rows_lookup_algorithm == ROW_LOOKUP_NOT_NEEDED) {
    error= unpack_current_row(rli, &m_cols);
    if (error)
      return error;
  }

  /*
    This is the situation after locating BI:

    ===|=== before image ====|=== after image ===|===
       ^                     ^
       m_curr_row            m_curr_row_end

    BI found in the table is stored in record[0]. We copy it to record[1]
    and unpack AI to record[0].
  */
  store_record(m_table, record[1]);

  m_curr_row= m_curr_row_end;
  /* this also updates m_curr_row_end */
  if ((error= unpack_current_row(rli, &m_cols_ai)))
    return error;
Why did the original update code path work correctly?
When tokudb_rpl_lookup_rows = on, updating a row first fetches the data from the TokuDB engine and stores it into table->record[0]; the blob pointer in table->record[0] then points to memory allocated by TokuDB, not to table->blob_field->value.ptr().
RFR is a nice feature that can speed up slave application of update/delete row events; it is especially useful for write-optimized engines such as TokuDB and MyRocks.
But recently we found a serious bug in RFR: an index gets corrupted with dirty records that should have been deleted but were not.
As long as an index contains a blob field and that blob field is updated, the index is corrupted on the slave under RFR.
Test case is as below:
--source include/master-slave.inc

## make sure slave uses read free replication for update_rows
--connection slave
set global tokudb_rpl_lookup_rows = off;
stop slave;
start slave;

## prepare data
--connection master
create table t1(id int auto_increment, tid int, msg text, primary key(id), index idx_msg(tid, msg(16))) engine=TokuDB;
insert into t1 values (1, 1, 'hi'), (2, 2, 'hello'), (3, 3, 'nihao'), (4, 4, 'MySQL');
--sync_slave_with_master

## master and slave both contain 4 records
--connection master
select count(tid) from t1 force index(primary);
select count(tid) from t1 force index(idx_msg);
--connection slave
select count(tid) from t1 force index(primary);
select count(tid) from t1 force index(idx_msg);

## update
--connection master
update t1 set msg = 'InnoDB' where id = 4;
--connection master
select count(tid) from t1 force index(primary);
select count(tid) from t1 force index(idx_msg);

## wait for slave sync
--sync_slave_with_master
--connection slave
## slave idx_msg and primary mismatch
select count(tid) from t1 force index(primary);
select count(tid) from t1 force index(idx_msg);
select tid from t1 force index(primary);
select tid from t1 force index(idx_msg);
The master and slave are both configured with:
--gtid-mode=on --enforce-gtid-consistency --log-bin --log-slave-updates --binlog_format=row
From the result, we can see that the PK mismatches the SK idx_msg:
idx_msg contains one extra record that should have been deleted.
select tid from t1 force index(primary);
tid
1
2
3
4
select tid from t1 force index(idx_msg);
tid
1
2
3
4
4