Restarted pods are broken because PostgreSQL can't load pg_stat_monitor while postgresql.conf still contains options for this extension. The load fails because the upgrade was skipped when PG_VERSION was missing.
Environment
None
AFFECTED CS IDs
CS0044560
Activity
inel.pandzic June 5, 2024 at 1:35 PM
We tried to reproduce this: deployed a 2.2.0 cluster (from the custom operator image percona-postgresql-operator:2.2.0-custom-158), performed a backup, restored it, and then upgraded to 2.3.0 without any issues.
Based on the info from @Nickolay Ihalainen, the PG_VERSION file was already missing before the backup, so in that case the operator can't do anything about it.
Nickolay Ihalainen June 5, 2024 at 8:17 AM
PG_VERSION is created during database init and always exists in a valid database installation and its backups. If it's missing, the instance was not fully restored or never initialized.
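As a quick check, a minimal sketch (the pod placeholder and the "database" container name are assumptions based on common Crunchy/Percona PGO v2 conventions; the data path is taken from this ticket):
# Verify PG_VERSION exists in the instance's data directory; <instance-pod> is a placeholder.
kubectl -n pgo exec <instance-pod> -c database -- cat /pgdata/pg14/PG_VERSION
# A healthy PostgreSQL 14 data directory prints: 14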
install 2.2.0 and upgrade the CRD to 2.3.1 with:
./anydbver deploy k3d cert-manager:1.11.0 k8s-minio minio-certs:self-signed k8s-pg:2.2.0,db-version=14,namespace=pgo k8s-pg:2.3.1,namespace=pgo1,standby,db-version=14
delete PG_VERSION from /pgdata/pg14 on each instance (a hedged exec sketch follows the upgrade commands below), then upgrade:
kubectl -n pgo apply --force-conflicts --server-side -f data/k8s/percona-postgresql-operator/deploy/bundle.yaml
kubectl -n pgo apply --force-conflicts --server-side -f data/k8s/percona-postgresql-operator/deploy/cr.yaml
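For the deletion step above, a minimal sketch (the label selector, the <cluster-name> and <instance-pod> placeholders, and the container name are assumptions based on common Crunchy/Percona PGO v2 conventions; verify them against your deployment):
# Find the instance pods of the cluster.
kubectl -n pgo get pods -l postgres-operator.crunchydata.com/cluster=<cluster-name>
# Remove PG_VERSION from the data directory of each instance pod.
kubectl -n pgo exec <instance-pod> -c database -- rm /pgdata/pg14/PG_VERSION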
Result: the restarted pods are broken because PostgreSQL can't load pg_stat_monitor while postgresql.conf still contains options for this extension; the load fails because the upgrade was skipped without PG_VERSION.
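To confirm the symptom, one can inspect the generated config and the pod logs (a sketch; the data path comes from this ticket, the pod placeholder and container name are assumptions):
# Check whether postgresql.conf still references the extension.
kubectl -n pgo exec <instance-pod> -c database -- grep -i pg_stat_monitor /pgdata/pg14/postgresql.conf
# Look for shared library load errors in the database container logs.
kubectl -n pgo logs <instance-pod> -c database | grep -i pg_stat_monitor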