All work
- K8SPXC-797: After the cluster gets new certificates, the operator cannot pause the cluster by itself (resolved; assignee: ege.gunes)
- K8SPXC-796: S3 backup deletion doesn't delete Pods (resolved; assignee: Dmytro Zghoba)
- K8SPXC-795: Provide a way to easily change some settings
- K8SPXC-794: Flood of rotate information in logs (resolved; assignee: Slava Sarzhan)
- K8SPXC-793: Improve output of log collector (resolved; assignee: Slava Sarzhan)
- K8SPXC-792: SmartUpdate for operator (resolved)
- K8SPXC-791: Allow "sleep infinity" on non-debug images (resolved; assignee: Slava Sarzhan)
- K8SPXC-789: DR replication: tune master retries for replication between two clusters (resolved; assignee: Slava Sarzhan)
- K8SPXC-788: mbind: Operation not permitted
- K8SPXC-787: The cluster doesn't become ready after the password for the xtrabackup user is changed (resolved; assignee: Bulat Zamalutdinov)
- K8SPXC-786: Fix duplication in service handling (resolved; assignee: George Kechagias)
- K8SPXC-785: Backup to S3 produces error messages even during a successful backup (resolved; assignee: Slava Sarzhan)
- K8SPXC-784: Parameterize operator deployment name (resolved; assignee: ege.gunes)
- K8SPXC-783: Do not allow the 'root@%' user to modify the monitor/clustercheck users (resolved; assignee: Dmytro Zghoba)
- K8SPXC-782: Backup stalls in the running state after a garbd SSL connection problem (assignee: Slava Sarzhan)
- K8SPXC-781: Remove unused file (resolved; assignee: Bulat Zamalutdinov)
- K8SPXC-780: spec.tls.issuerConf is not documented (resolved; assignee: dmitriy.kostiuk)
- K8SPXC-778: Image pulled several times
- K8SPXC-777: Logs are still spammed with DNS messages (assignee: Slava Sarzhan)
- K8SPXC-775: The custom mysqld config isn't checked in case of cluster update (resolved; assignee: Slava Sarzhan)
- K8SPXC-772: Add common labels to service (resolved; assignee: Bulat Zamalutdinov)
- K8SPXC-771: Expose all fields supported in the CRD to the Helm chart for PXC-DB (resolved; assignee: Tomislav Plavcic)
- K8SPXC-770: CRD API version deprecated (resolved)
- K8SPXC-769: Operator reports "Unknown MySQL server host" (resolved; assignee: Lalit Choudhary)
- K8SPXC-768: Operator cannot create a 2nd instance on OpenShift 4.6.31 (resolved)
- K8SPXC-767: On-demand backup hangs if it was created while the cluster was in 'initializing' state (resolved; assignee: ege.gunes)
- K8SPXC-766: S3 delete not working/stuck (resolved)
- K8SPXC-765: Add ConfigMaps deletion for custom configurations (resolved; assignee: Sergey Pronin)
- K8SPXC-764: Allow backups even if just a single node is available (resolved; assignee: ege.gunes)
- K8SPXC-763: [BUG] ProxySQL StatefulSet, PVC, and services get mistakenly deleted when reading stale ProxySQL information (resolved; assignee: ege.gunes)
- K8SPXC-762: Validating webhook not accepting scale operation (resolved; assignee: Dmytro Zghoba)
- K8SPXC-761: HAProxy container does not set an explicit USER id, breaking the runAsNonRoot security policy by default (resolved; assignee: Alex Miroshnychenko)
- K8SPXC-760: Document new feature: skip TLS verification for backups (resolved; assignee: dmitriy.kostiuk)
- K8SPXC-758: Allow skipping TLS verification for backup storage (resolved; assignee: Andrii Dema)
- K8SPXC-757: Manual crash recovery interferes with auto recovery even with auto_recovery: false (resolved; assignee: Slava Sarzhan)
- K8SPXC-756: While the cluster is paused, the operator schedules backups (resolved; assignee: ege.gunes)
- K8SPXC-755: Nothing happens
- K8SPXC-754: kubectl delete takes a very long time (resolved; assignee: Slava Sarzhan)
- K8SPXC-752: Allow disabling TLS when taking backups (resolved)
- K8SPXC-751: Document new feature: replication to another site (resolved)
- K8SPXC-750: ProxySQL can't connect to PXC if allowUnsafeConfiguration = true (resolved; assignee: Andrii Dema)
- K8SPXC-749: Add tunable parameters for any timeout existing in the checks (resolved; assignee: Slava Sarzhan)
- K8SPXC-746: Install 'vim-minimal' in the HAProxy Docker image (resolved; assignee: Slava Sarzhan)
- K8SPXC-745: pxc operator robustness improvement (resolved; assignee: Lalit Choudhary)
- K8SPXC-743: Remove confusing error messages from the backup log (resolved; assignee: Slava Sarzhan)
- K8SPXC-742: socat in percona/percona-xtradb-cluster-operator:1.7.0-pxc5.7-backup generates "E SSL_read(): Connection reset by peer" (resolved; assignee: Slava Sarzhan)
- K8SPXC-740: Document cluster name limitation (resolved; assignee: dmitriy.kostiuk)
- K8SPXC-739: Operator doesn't scale for more than one pod (resolved)
- K8SPXC-738: Labels are not applied to Service (resolved; assignee: Andrii Dema)
- K8SPXC-737: proxysql-admin --syncusers rolls back ProxySQL settings updates (resolved)
pxc operator robustness improvement (K8SPXC-745)
Activity
Jira Bot August 29, 2021 at 11:57 AM
Hello,
It's been 52 days since this issue went into Incomplete and we haven't heard
from you on this.
At this point, our policy is to Close this issue, to keep things from getting
too cluttered. If you have more information about this issue and wish to
reopen it, please reply with a comment containing "jira-bot=reopen".
Jira Bot August 21, 2021 at 11:56 AM
Hello,
It's jira-bot again. Your bug report is important to us, but we haven't heard
from you since the previous notification. If we don't hear from you on
this in 7 days, the issue will be automatically closed.
Jira Bot August 6, 2021 at 10:57 AM
Hello,
I'm jira-bot, Percona's automated helper script. Your bug report is important
to us but we've been unable to reproduce it, and asked you for more
information. If we haven't heard from you on this in 3 more weeks, the issue
will be automatically closed.
Lalit Choudhary July 8, 2021 at 10:16 AM
Hi
Thank you for the report and your inputs.
To quote the report: "if the operator crashes or a required condition fails, the cluster will not auto-recover (for example, when a restore fails, the PXC size information is lost). In Kubernetes, an operator should always push the current state toward the desired final state; it is not procedure oriented. The operator will become more stable if we use this Kubernetes way of thinking."
In PXC Operator versions 1.7.0 and 1.8.0 there are a few improvements for auto-recovery.
New features in 1.7.0 and 1.8.0:
- Add support for point-in-time recovery
- The PXC cluster will now recover automatically from a full crash when Pods are stuck in CrashLoopBackOff status
- The Operator can now automatically recover Percona XtraDB Cluster after network partitioning
https://www.percona.com/doc/kubernetes-operator-for-pxc/ReleaseNotes/index.html
Apart from this, if you have further improvement suggestions, it would help if you could add an example use case and describe the improvement you expect.
Feel free to add a comment here.
Description
Currently the operator always sleeps until a required condition is met, which leads to blocking. And if the operator crashes or the required condition fails, the cluster will not auto-recover (for example, when a restore fails, the PXC size information is lost).
In Kubernetes, an operator should always push the current state toward the desired final state; it is not procedure oriented. The operator will become more stable if we use this Kubernetes way of thinking.
I am sorry that my English is poor; I don't know whether anyone will understand my idea. But I am glad to communicate with the operator team and to put in the effort to improve the program's robustness. I would like to know of any group or channel where I can chat with the team.
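To make the description above concrete, here is a minimal, self-contained Go sketch of the level-triggered pattern it describes: instead of sleeping inside the handler until a condition is met, the reconciler re-reads the current state on every pass, takes one step toward the desired state, and asks to be invoked again. The names used here (Result, ClusterState, observeCluster, reconcile) are hypothetical placeholders written in the spirit of controller-runtime, not the operator's actual API.

package main

import (
	"context"
	"fmt"
	"time"
)

// Result mirrors the idea behind controller-runtime's ctrl.Result: rather than
// sleeping inside the handler, the reconciler asks to be invoked again later.
type Result struct {
	RequeueAfter time.Duration
}

// ClusterState is a hypothetical snapshot of what currently exists in the cluster.
type ClusterState struct {
	ReadyPods   int
	DesiredPods int
}

// observeCluster is a hypothetical stand-in for reading live state from the
// API server (the CR spec, StatefulSets, Pods) on every reconcile pass.
func observeCluster(ctx context.Context) (ClusterState, error) {
	return ClusterState{ReadyPods: 2, DesiredPods: 3}, nil
}

// reconcile pushes the current state one step toward the desired state and
// returns. It never blocks waiting for a condition; if work remains, it
// schedules another pass. A crash between passes loses nothing, because the
// next pass re-reads everything from the API server instead of relying on
// in-memory progress.
func reconcile(ctx context.Context) (Result, error) {
	state, err := observeCluster(ctx)
	if err != nil {
		return Result{}, err // transient error: retried with backoff by the framework
	}
	if state.ReadyPods < state.DesiredPods {
		// Take one corrective action (e.g. adjust the StatefulSet), then requeue.
		fmt.Printf("%d/%d Pods ready, requeueing\n", state.ReadyPods, state.DesiredPods)
		return Result{RequeueAfter: 5 * time.Second}, nil
	}
	return Result{}, nil // current state matches desired state; wait for the next event
}

func main() {
	res, err := reconcile(context.Background())
	fmt.Println(res, err)
}

This is only an illustration of the reconcile pattern under the stated assumptions; the real operator's reconcile logic, type names, and error handling differ.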