If there are a lot of encrypted tables whose page 0 is not flushed, xtrabackup will not be able to decrypt them from the data files alone; the encryption information for page 0 will, however, be available in the redo log.
Xtrabackup doesn't abort on such 'in-flight'/unflushed encrypted tablespaces. Instead, it maintains a list of them, and when xtrabackup reaches/parses the redo log records belonging to these tablespaces, it decrypts them.
A race can occur when inserting such tablespace ids into this 'invalid_encrypted_tablespace_ids' list, because the *.ibd scan runs in parallel during backup.
To be specific, insertions into the invalid_encrypted_tablespace_ids vector are not protected by a mutex.
It may be better to reuse the m_errored_spaces map, which is sharded and protected by a per-shard mutex. If it cannot be reused, something similar should be created at the shard level.