sync users fails during new cluster creation
Environment
None
Activity
Sergey Pronin March 10, 2021 at 1:55 PM
@Matt Hamann I will close this bug as we cannot reproduce this. Please let us know if you need help here.
Lalit Choudhary February 10, 2021 at 2:14 PM (edited)
Hi @Matt Hamann
I also don't see the issue for the proxysql user in my test, using operator version 1.6.0.
[root@lalit-choudhary-default ~]# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
minio-service-6db6667fb9-2kft9                     1/1     Running   0          6h19m
percona-xtradb-cluster-operator-7886ccf6b5-nqtdt   1/1     Running   0          6h19m
cluster1-proxysql-0                                3/3     Running   0          9m46s
cluster1-proxysql-1                                3/3     Running   0          9m44s
cluster1-proxysql-2                                3/3     Running   0          9m43s
cluster1-pxc-0                                     1/1     Running   0          9m46s
cluster1-pxc-1                                     1/1     Running   0          9m26s
cluster1-pxc-2                                     1/1     Running   0          8m47s
Ref: https://www.percona.com/doc/kubernetes-operator-for-pxc/users.html
[root@lalit-choudhary-default ~]# cat percona-xtradb-cluster-operator/deploy/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-secrets
type: Opaque
data:
  root: cm9vdF9wYXNzd29yZA==
  xtrabackup: YmFja3VwX3Bhc3N3b3Jk
  monitor: bW9uaXRvcg==
  clustercheck: Y2x1c3RlcmNoZWNrcGFzc3dvcmQ=
  proxyadmin: YWRtaW5fcGFzc3dvcmQ=
  pmmserver: c3VwYXxefHBheno=
  operator: b3BlcmF0b3JhZG1pbg==
[root@lalit-choudhary-default ~]# echo -n "admin_password" | base64
YWRtaW5fcGFzc3dvcmQ=
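As a quick sanity check (a sketch, assuming the cluster was deployed with the my-cluster-secrets Secret shown above and lives in the current namespace), the value actually stored in the applied Secret can be decoded and compared with the expected password:
# Decode the proxyadmin entry from the applied Secret (add -n <namespace> if needed)
kubectl get secret my-cluster-secrets -o jsonpath='{.data.proxyadmin}' | base64 -d; echo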
Maksim Dudin February 5, 2021 at 1:02 PM
Hello @Matt Hamann. We tried to reproduce this bug using your CR, but everything looks fine. We think the cause may be in your secret `vault-db-cluster-secrets`: it looks like there is no "proxyadmin" field in the "data" object, or it is misspelled. If that does not help, please share your steps to reproduce.
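One way to verify that suggestion (a sketch, assuming the cluster runs in the pxc namespace seen in the logs) is to list the keys present in the Secret referenced by spec.secretsName and confirm that proxyadmin is among them:
# Print one key per line from the Secret's data section; "proxyadmin" should be listed
kubectl -n pxc get secret vault-db-cluster-secrets -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'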
Matt Hamann October 20, 2020 at 4:41 AM
The `proxyadmin` user doesn't seem to exist within the database's list of users.
mysql> select User from user;
+----------------------------+
| User                       |
+----------------------------+
| monitor                    |
| operator                   |
| root                       |
| clustercheck               |
| mysql.infoschema           |
| mysql.pxc.internal.session |
| mysql.pxc.sst.role         |
| mysql.session              |
| mysql.sys                  |
| root                       |
| xtrabackup                 |
+----------------------------+
11 rows in set (0.00 sec)
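Note that proxyadmin is ProxySQL's admin credential rather than a MySQL account (see the users documentation linked above), so it would not normally appear in mysql.user. A more telling check (a sketch, assuming the mysql client is available in the proxysql container and the pxc namespace from the logs) is to query the ProxySQL admin interface on port 6032 directly:
# Compare the admin credential configured in ProxySQL with the "proxyadmin"
# value stored in the cluster Secret
kubectl -n pxc exec -it vault-data-proxysql-0 -c proxysql -- \
  mysql -h127.0.0.1 -P6032 -uproxyadmin -p \
  -e "SELECT variable_name, variable_value FROM global_variables WHERE variable_name LIKE 'admin-%credentials';"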
Details
Resolution
Cannot Reproduce
Assignee
Lalit Choudhary
Reporter
Matt Hamann
Priority
High
Created October 20, 2020 at 3:56 AM
Updated March 5, 2024 at 6:04 PM
Resolved March 10, 2021 at 1:55 PM
Description
I've reproduced this issue on 1.5 and 1.6 so far.
When creating a new cluster, things start to come up in the normal way (ProxySQL, the cluster itself, etc.), but at some point I start getting logs like this from ProxySQL:
2020-10-20 03:23:20 MySQL_Session.cpp:4685:handler___status_CONNECTING_CLIENT___STATE_SERVER_HANDSHAKE(): [ERROR] ProxySQL Error: Access denied for user 'proxyadmin'@'127.0.0.1' (using password: YES)
2020-10-20 03:23:20 ProxySQL_Cluster.cpp:217:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer vault-data-proxysql-2.vault-data-proxysql-unready.pxc.svc.cluster.local:6032 . Error: ProxySQL Error: Access denied for user 'proxyadmin'@'10.244.3.38' (using password: YES)
And the operator shows this:
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163390.794787,"logger":"controller_perconaxtradbcluster","msg":"set PXC version to 8.0.20-11.2"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163393.4688568,"logger":"controller_perconaxtradbcluster","msg":"add new job: 0 4 * * *"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163563.5375235,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"error","ts":1603163563.9657736,"logger":"controller_perconaxtradbcluster","msg":"sync users","error":"exec syncusers: command terminated with exit code 1 / / ERROR 1045 (28000): Access denied for user 'operator'@'vault-data-proxysql-0.vault-data-proxysql-unready.pxc.svc.cluste' (using password: YES)\nERROR 1045 (28000): Access denied for user 'operator'@'vault-data-proxysql-0.vault-data-proxysql-unready.pxc.svc.cluste' (using password: YES)\nERROR (line:2027) : Could not find a primary cluster node\n","errorVerbose":"exec syncusers: command terminated with exit code 1 / / ERROR 1045 (28000): Access denied for user 'operator'@'vault-data-proxysql-0.vault-data-proxysql-unready.pxc.svc.cluste' (using password: YES)\nERROR 1045 (28000): Access denied for user 'operator'@'vault-data-proxysql-0.vault-data-proxysql-unready.pxc.svc.cluste' (using password: YES)\nERROR (line:2027) : Could not find a primary cluster node\n\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).syncPXCUsersWithProxySQL\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/users.go:320\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).resyncPXCUsersWithProxySQL.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:993\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).resyncPXCUsersWithProxySQL.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:995"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163628.6458352,"logger":"controller_perconaxtradbcluster","msg":"primary pod is vault-data-pxc-0.vault-data-pxc.pxc.svc.cluster.local"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163628.645961,"logger":"controller_perconaxtradbcluster","msg":"apply changes to secondary pod vault-data-pxc-2"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163674.7451856,"logger":"controller_perconaxtradbcluster","msg":"pod vault-data-pxc-2 is running"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163676.7785485,"logger":"controller_perconaxtradbcluster","msg":"pod vault-data-pxc-2 is online"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163676.7790804,"logger":"controller_perconaxtradbcluster","msg":"apply changes to secondary pod vault-data-pxc-1"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"error","ts":1603163976.858519,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"perconaxtradbcluster-controller","request":"pxc/vault-data","error":"pxc upgrade error: failed to apply changes: failed to wait pod: reach pod wait limit","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163977.9001682,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163977.9026382,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163978.0396187,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163978.0400581,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603163983.035328,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
...
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164115.9998043,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164121.0971215,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164121.0972383,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164126.1966465,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164126.1969595,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164131.3002262,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164131.3002856,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164136.4016547,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"info","ts":1603164136.4016984,"logger":"controller_perconaxtradbcluster","msg":"can't start/continue 'SmartUpdate': waiting for all replicas are ready"}
percona-xtradb-cluster-operator-846b97cb67-8p6vn percona-xtradb-cluster-operator {"level":"error","ts":1603164213.8016162,"logger":"controller_perconaxtradbcluster","msg":"sync users","error":"exec syncusers: command terminated with exit code 1 / / ERROR 1045 (28000): ProxySQL Error: Access denied for user 'proxyadmin'@'127.0.0.1' (using password: YES)\nERROR (line:713) : ProxySQL connection check failed. \n-- Could not connect to ProxySQL at localhost:6032 \n-- Please check the ProxySQL connection parameters and status.\n","errorVerbose":"exec syncusers: command terminated with exit code 1 / / ERROR 1045 (28000): ProxySQL Error: Access denied for user 'proxyadmin'@'127.0.0.1' (using password: YES)\nERROR (line:713) : ProxySQL connection check failed. \n-- Could not connect to ProxySQL at localhost:6032 \n-- Please check the ProxySQL connection parameters and status.\n\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).syncPXCUsersWithProxySQL\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/users.go:320\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).resyncPXCUsersWithProxySQL.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:993\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).resyncPXCUsersWithProxySQL.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:995"}
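For reference, a way to reproduce the failing step by hand (a sketch, not necessarily the operator's exact invocation; it assumes proxysql-admin inside the proxysql container picks up its baked-in configuration) would be:
# Run the user sync manually from inside a ProxySQL pod to see the same
# access-denied error interactively
kubectl -n pxc exec -it vault-data-proxysql-0 -c proxysql -- proxysql-admin --syncusers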
I've tried to bring this cluster up a couple of different times, and this always seems to be the result.
Not sure what I'm doing wrong, but this does seem like a bug.
For reference, the CR for this cluster looks like this:
apiVersion: pxc.percona.com/v1-6-0
kind: PerconaXtraDBCluster
metadata:
  name: vault-data
  finalizers:
    - delete-pxc-pods-in-order
spec:
  secretsName: vault-db-cluster-secrets
  sslSecretName: vault-db-cluster-ssl
  sslInternalSecretName: vault-db-cluster-ssl-internal
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com/versions
    apply: recommended
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.20-11.1
    serviceAccountName: vault-db
    resources:
      requests:
        memory: 1G
        cpu: 600m
    nodeSelector:
      dedicated: percona
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "percona-db"
        effect: "NoSchedule"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6Gi
    gracePeriod: 600
  haproxy:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.6.0-haproxy
    resources:
      requests:
        memory: 256M
        cpu: 200m
      limits:
        memory: 1G
        cpu: 700m
    nodeSelector:
      dedicated: percona
    serviceAccountName: vault-db
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "percona-db"
        effect: "NoSchedule"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.6.0-proxysql
    resources:
      requests:
        memory: 512M
        cpu: 200m
      limits:
        memory: 1G
        cpu: 700m
    nodeSelector:
      dedicated: percona
    serviceAccountName: vault-db
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "percona-db"
        effect: "NoSchedule"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2Gi
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  pmm:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.6.0-pmm
    serverHost: monitoring-service
    serverUser: pmm
  backup:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.6.0-pxc8.0-backup
    storages:
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6Gi
    schedule:
      - name: "vault-daily-backup"
        schedule: "0 0 * * *"
        keep: 5
        storageName: fs-pvc