PITR test issues

Description

On GKE 1.22 the PITR test seems to time out while deleting the backups:

+ kubectl delete pxc-backup --all --all-namespaces
Sending interrupt signal to process
/mnt/jenkins/workspace/pxc-operator-gke-latest/source/e2e-tests/pitr/../functions: line 840: 11011 Terminated kubectl "$@" > "$LAST_OUT" 2> "$LAST_ERR"
+ exit_status=143
+ [[ ehB != ehxB ]]
+ echo '--- 0 stdout'
+ cat - /tmp/tmp.wh3U4ykL86
--- 0 stdout
perconaxtradbclusterbackup.pxc.percona.com "on-pitr-minio" deleted
perconaxtradbclusterbackup.pxc.percona.com "on-pitr-minio1" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111922160-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111922170-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111922180-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111922190-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111922200-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111922160-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111922170-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111922180-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111922190-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111922200-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111922160-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111922170-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111922180-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111922190-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111922200-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921300-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921310-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921320-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921330-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921340-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-aws-s3-202111921350-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921300-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921310-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921320-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921330-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921340-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-gcp-cs-202111921350-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921300-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921310-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921320-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921330-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921340-q6fav" deleted
perconaxtradbclusterbackup.pxc.percona.com "cron-scheduled-backup-minio-202111921350-q6fav" deleted
+ [[ ehB != ehxB ]]
+ echo '--- 0 stderr'
+ cat - /tmp/tmp.3QMg8wovms
--- 0 stderr
+ [[ 143 != 0 ]]
+ sleep 0
+ for i in '$(seq 0 2)'
+ kubectl delete pxc-backup --all --all-namespaces
xargs: kubectl: terminated by signal 15
/mnt/jenkins/workspace/pxc-operator-gke-latest@tmp/durable-94ed73c4/script.sh: line 39: 25412 Terminated ./e2e-tests/pitr/run
script returned exit code 143
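The `for i in '$(seq 0 2)'` lines in the trace come from the harness's retry wrapper around kubectl (`kubectl_bin` in e2e-tests/functions). Below is a minimal, hypothetical sketch of that pattern, not the real implementation: `retry_cmd` and `flaky` are illustrative names, and a stub replaces kubectl so the sketch runs standalone.

```shell
# Hypothetical simplified retry wrapper: run a command up to 3 times,
# capturing stdout/stderr to temp files, as the xtrace output suggests.
retry_cmd() {
    local LAST_OUT LAST_ERR exit_status=1
    LAST_OUT=$(mktemp)
    LAST_ERR=$(mktemp)
    for i in 0 1 2; do
        "$@" > "$LAST_OUT" 2> "$LAST_ERR"
        exit_status=$?
        [ "$exit_status" -eq 0 ] && break
        sleep "$i"   # brief backoff between attempts ("sleep 0" in the trace)
    done
    cat "$LAST_OUT"
    rm -f "$LAST_OUT" "$LAST_ERR"
    return "$exit_status"
}

# Demo stub: fails twice, then succeeds on the third attempt.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky() {
    local n
    n=$(cat "$attempts_file")
    echo $((n + 1)) > "$attempts_file"
    [ "$n" -ge 2 ] && echo "deleted"
}
out=$(retry_cmd flaky)
echo "$out"   # prints: deleted
```

Note that such a wrapper only retries on non-zero exit; a kubectl killed by the Jenkins timeout (signal 15, exit 143) simply exhausts the retries, which matches the `exit_status=143` path in the log.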

Unfortunately, when running locally I was not able to reach that step, because the test fails even earlier with a data diff:

----------------------------------------------------------------------------------- check data after backup restore-on-pitr-minio-gtid -----------------------------------------------------------------------------------
+ compare_mysql_cmd select-2 'SELECT * from test.test;' '-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
+ local command_id=select-2
+ local 'command=SELECT * from test.test;'
+ local 'uri=-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
+ local postfix=
+ local expected_result=/Users/plavi/Development/percona/kubernetes-operators/production/percona-xtradb-cluster-operator/e2e-tests/pitr/compare/select-2.sql
+ [[ percona/percona-xtradb-cluster:8.0.23-14.1 =~ 8\.0 ]]
+ '[' -f /Users/plavi/Development/percona/kubernetes-operators/production/percona-xtradb-cluster-operator/e2e-tests/pitr/compare/select-2-80.sql ']'
+ run_mysql 'SELECT * from test.test;' '-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
+ local 'command=SELECT * from test.test;'
+ local 'uri=-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
++ get_client_pod
++ kubectl_bin get pods --selector=name=pxc-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.vuD0fgaR
+++ mktemp
++ local LAST_ERR=/var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.9JTHtr0a
++ local exit_status=0
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ kubectl get pods --selector=name=pxc-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ [[ hB != hxB ]]
++ echo '--- 0 stdout'
++ cat - /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.vuD0fgaR
--- 0 stdout
pxc-client-75d7f67bd4-lcrtg
++ [[ hB != hxB ]]
++ echo '--- 0 stderr'
++ cat - /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.9JTHtr0a
--- 0 stderr
++ [[ 0 != 0 ]]
++ cat /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.vuD0fgaR
++ cat /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.9JTHtr0a
++ rm /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.vuD0fgaR /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.9JTHtr0a
++ return 0
+ client_pod=pxc-client-75d7f67bd4-lcrtg
+ wait_pod pxc-client-75d7f67bd4-lcrtg
+ local pod=pxc-client-75d7f67bd4-lcrtg
+ local max_retry=480
+ local ns=
++ echo pxc-client-75d7f67bd4-lcrtg
++ /usr/local/bin/gsed -E 's/.*-(pxc|proxysql)-[0-9]/\1/'
++ egrep '^(pxc|proxysql)$'
+ local container=
+ set +o xtrace
pxc-client-75d7f67bd4-lcrtg.Ok
+ [[ ehB != ehxB ]]
+ echo '+ kubectl exec -it pxc-client-75d7f67bd4-lcrtg -- bash -c "printf '\''SELECT * from test.test;\n'\'' | mysql -sN -h pitr-pxc-0.pitr-pxc -uroot -proot_password"'
+ kubectl exec -it pxc-client-75d7f67bd4-lcrtg -- bash -c "printf 'SELECT * from test.test;\n' | mysql -sN -h pitr-pxc-0.pitr-pxc -uroot -proot_password"
+ set +o xtrace
+ '[' '!' -s /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.5e7qPWVn/select-2.sql ']'
+ sleep 20
+ run_mysql 'SELECT * from test.test;' '-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
+ local 'command=SELECT * from test.test;'
+ local 'uri=-h pitr-pxc-0.pitr-pxc -uroot -proot_password'
++ get_client_pod
++ kubectl_bin get pods --selector=name=pxc-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.v6k3fMs1
+++ mktemp
++ local LAST_ERR=/var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.2Ek9yW5G
++ local exit_status=0
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ kubectl get pods --selector=name=pxc-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ [[ hB != hxB ]]
++ echo '--- 0 stdout'
++ cat - /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.v6k3fMs1
--- 0 stdout
pxc-client-75d7f67bd4-lcrtg
++ [[ hB != hxB ]]
++ echo '--- 0 stderr'
++ cat - /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.2Ek9yW5G
--- 0 stderr
++ [[ 0 != 0 ]]
++ cat /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.v6k3fMs1
++ cat /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.2Ek9yW5G
++ rm /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.v6k3fMs1 /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.2Ek9yW5G
++ return 0
+ client_pod=pxc-client-75d7f67bd4-lcrtg
+ wait_pod pxc-client-75d7f67bd4-lcrtg
+ local pod=pxc-client-75d7f67bd4-lcrtg
+ local max_retry=480
+ local ns=
++ echo pxc-client-75d7f67bd4-lcrtg
++ /usr/local/bin/gsed -E 's/.*-(pxc|proxysql)-[0-9]/\1/'
++ egrep '^(pxc|proxysql)$'
+ local container=
+ set +o xtrace
pxc-client-75d7f67bd4-lcrtg.Ok
+ [[ ehB != ehxB ]]
+ echo '+ kubectl exec -it pxc-client-75d7f67bd4-lcrtg -- bash -c "printf '\''SELECT * from test.test;\n'\'' | mysql -sN -h pitr-pxc-0.pitr-pxc -uroot -proot_password"'
+ kubectl exec -it pxc-client-75d7f67bd4-lcrtg -- bash -c "printf 'SELECT * from test.test;\n' | mysql -sN -h pitr-pxc-0.pitr-pxc -uroot -proot_password"
+ set +o xtrace
+ diff -u /Users/plavi/Development/percona/kubernetes-operators/production/percona-xtradb-cluster-operator/e2e-tests/pitr/compare/select-2.sql /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.5e7qPWVn/select-2.sql
--- /Users/plavi/Development/percona/kubernetes-operators/production/percona-xtradb-cluster-operator/e2e-tests/pitr/compare/select-2.sql 2021-11-05 08:35:51.000000000 +0100
+++ /var/folders/fw/lxhtthdn7d9610w5p3wb33340000gn/T/tmp.5e7qPWVn/select-2.sql 2021-11-11 18:44:49.000000000 +0100
@@ -1,2 +0,0 @@
-100500
-100501
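The diff at the end shows the failure mode: the expected file has the two rows 100500 and 100501, but the restored cluster returned an empty result. The trace is the harness's compare flow: run the query, retry once after a sleep if the output is empty, then diff against e2e-tests/pitr/compare/select-2.sql. A simplified, hypothetical sketch of that flow (not the real compare_mysql_cmd; the kubectl-exec'd mysql call is replaced by a stub so the sketch runs standalone):

```shell
# run_query stands in for:
#   kubectl exec $client_pod -- bash -c "printf 'SELECT * from test.test;\n' | mysql -sN ..."
run_query() {
    printf '100500\n100501\n'
}

# Compare actual query output against an expected-result file.
compare_result() {
    local expected=$1 actual
    actual=$(mktemp)
    run_query > "$actual"
    if [ ! -s "$actual" ]; then   # empty result: wait and query once more
        sleep 1                   # the real harness sleeps 20 seconds here
        run_query > "$actual"
    fi
    diff -u "$expected" "$actual" # non-zero exit / diff output fails the test
}

expected_file=$(mktemp)
printf '100500\n100501\n' > "$expected_file"
compare_result "$expected_file" && echo "data matches"
```

In the failing local run the second query still returned nothing, so the diff removed both expected rows, which is exactly the `@@ -1,2 +0,0 @@` hunk above.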

To reproduce, use the following images and GKE 1.22:
IMAGE=percona/percona-xtradb-cluster-operator:1.10.0
IMAGE_HAPROXY=percona/percona-xtradb-cluster-operator:1.10.0-haproxy
IMAGE_PROXY=percona/percona-xtradb-cluster-operator:1.10.0-proxysql
IMAGE_BACKUP=percona/percona-xtradb-cluster-operator:1.10.0-pxc8.0-backup
IMAGE_LOGCOLLECTOR=percona/percona-xtradb-cluster-operator:1.10.0-logcollector
IMAGE_PXC=percona/percona-xtradb-cluster:8.0.23-14.1
IMAGE_PMM=percona/pmm-client:2.23.0
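Presumably these are exported as environment variables before invoking the test script; the ./e2e-tests/pitr/run path is taken from the Jenkins log above, but the exact set of variables the harness reads is an assumption here.

```shell
# Assumed reproduction steps; variable names mirror the list above.
export IMAGE=percona/percona-xtradb-cluster-operator:1.10.0
export IMAGE_HAPROXY=percona/percona-xtradb-cluster-operator:1.10.0-haproxy
export IMAGE_PROXY=percona/percona-xtradb-cluster-operator:1.10.0-proxysql
export IMAGE_BACKUP=percona/percona-xtradb-cluster-operator:1.10.0-pxc8.0-backup
export IMAGE_LOGCOLLECTOR=percona/percona-xtradb-cluster-operator:1.10.0-logcollector
export IMAGE_PXC=percona/percona-xtradb-cluster:8.0.23-14.1
export IMAGE_PMM=percona/pmm-client:2.23.0
# then, from the operator repo root, with kubectl pointed at a GKE 1.22 cluster:
# ./e2e-tests/pitr/run
```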

Environment

None

Smart Checklist

Activity

Tomislav Plavcic November 12, 2021 at 5:23 PM

I have moved the scheduled-backup test to the last position and the results now look better, but I will close the ticket only after more runs.

Slava Sarzhan November 12, 2021 at 3:25 PM

Hi,

I can't reproduce the issue with the deletion of the backups; everything is OK for me. I think it is connected with the test order (we need to move the scheduled-backup test after the PITR test).

Done

Details

Created November 11, 2021 at 5:55 PM
Updated March 5, 2024 at 5:43 PM
Resolved November 16, 2021 at 1:42 PM