Cannot run PXC operator locally outside the cluster
Description
Environment
go 1.13.14
operator-sdk v0.17.2
kind cluster
Activity
BitsBeats August 13, 2020 at 10:10 AM
PR: https://github.com/percona/percona-xtradb-cluster-operator/pull/538/
The following error appears:
{"level":"error","ts":1597312961.7669334,"logger":"controller_perconaxtradbcluster","msg":"sync users","error":"exec syncusers: command terminated with exit code 1 / / ERROR (line:765) : The cluster (with writer hostgroup:11) has not been configured in ProxySQL\n","errorVerbose":"exec syncusers: command terminated with exit code 1 / / ERROR (line:765) : The cluster (with writer hostgroup:11) has not been configured in ProxySQL
I do not think it is coming from the run-local changes.
Mykola Marzhan August 10, 2020 at 4:40 PM
I think it makes sense even when the Operator runs inside Kubernetes itself, because communication to services can be limited via Network Policies (example).
@BitsBeats, as usual feel free to contribute.
BitsBeats August 10, 2020 at 3:34 PM
So I tried to implement out-of-cluster and in-cluster support:
The operator starts okay and the deploy also works.
But then the following errors are thrown:
{"level":"error","ts":1597072466.6638331,"logger":"controller_perconaxtradbcluster","msg":"failed to create db instance","error":"dial tcp 172.18.0.5:3306: connect: no route to host","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).fetchVersionFromPXC\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/pkg/controller/pxc/version.go:232\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:409\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1597072466.6695597,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"perconaxtradbcluster-controller","request":"pxc/cluster1","error":"update CR version: failed to reach any pod","errorVerbose":"failed to reach any pod\nupdate CR version\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:410\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-op
erator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/tti/IdeaProjects/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
So it looks like the controller is trying to connect to the database cluster directly.
The connection could be made via port-forward by default, so both out-of-cluster and in-cluster operation would work:
https://github.com/kubernetes/client-go/issues/51
https://gianarb.it/blog/programmatically-kube-port-forward-in-go
This could speed up development.
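The linked issue and blog post describe how client-go can open such a tunnel programmatically. A rough sketch along those lines, untested here since it needs a live cluster; the helper name `portForwardPod` and all of its arguments are illustrative, not part of the operator:

```go
package main

import (
	"fmt"
	"net/http"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
)

// portForwardPod opens a local tunnel to a pod port (e.g. MySQL on 3306)
// so an out-of-cluster operator can reach it via localhost.
// It returns a stop channel; closing it tears the tunnel down.
func portForwardPod(kubeconfig, namespace, pod string, localPort, podPort int) (chan struct{}, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}

	// Build the pod's "portforward" subresource URL.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(namespace).Name(pod).
		SubResource("portforward")

	transport, upgrader, err := spdy.RoundTripperFor(cfg)
	if err != nil {
		return nil, err
	}
	dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, http.MethodPost, req.URL())

	stopCh := make(chan struct{})
	readyCh := make(chan struct{})
	ports := []string{fmt.Sprintf("%d:%d", localPort, podPort)}
	fw, err := portforward.New(dialer, ports, stopCh, readyCh, os.Stdout, os.Stderr)
	if err != nil {
		return nil, err
	}
	go func() {
		// ForwardPorts blocks until stopCh is closed.
		if err := fw.ForwardPorts(); err != nil {
			fmt.Fprintln(os.Stderr, "port-forward:", err)
		}
	}()
	<-readyCh // tunnel is up; localhost:localPort now reaches pod:podPort
	return stopCh, nil
}
```

In-cluster the operator would dial the pod IP directly and skip the tunnel, so both modes could share one code path behind a small interface.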
BitsBeats August 10, 2020 at 11:15 AM
Hey @mykola.marzhan,
Okay, thanks for your reply.
Mykola Marzhan August 10, 2020 at 10:03 AM (edited)
Hi @BitsBeats,
We use the following commands during development:
export IMAGE=my-docker-hub/repository:my-branch
./e2e-tests/build-and-run
Feel free to prepare a PR for this doc if needed: https://github.com/percona/percona-xtradb-cluster-operator/blob/master/e2e-tests/README.md
—
Please escalate the issue via regular channels if you have a contract with Percona.
Details
Assignee: Unassigned
Reporter: BitsBeats
Priority: Medium

It seems this commit: https://github.com/percona/percona-xtradb-cluster-operator/commit/47a5705f0e5aa858f27b0506b6317a4a9e392c03 broke the "operator-sdk run --local" feature.
It fails with:
{"level":"error","ts":1596454906.902439,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"perconaxtradbcluster-controller","request":"pxc/cluster1","error":"get operator deployment: open /var/run/secrets/kubernetes.io/serviceaccount/namespace: no such file or directory","errorVerbose":"open /var/run/secrets/kubernetes.io/serviceaccount/namespace: no such file or directory
How do you develop the operator locally? Do you always build the Docker image and deploy it to a local kind cluster, for example?
Are there any docs for local development?