Details
Assignee: Unassigned
Reporter: hoyhbx
Found by Automation: Yes
Needs Review: Yes
Needs QA: Yes
Affects versions:
Priority: Low
Created July 23, 2022 at 7:30 AM
Updated March 5, 2024 at 5:35 PM
In an XtraDB cluster deployment managed by this operator, there are three main kinds of pods: PXC pods, HAProxy pods, and ProxySQL pods. The specs for these three kinds of pods differ because the components serve different functions and follow different reconciliation processes.
However, all three kinds of pods share the same PodSpec struct definition. As a consequence, certain fields in the CRD are accepted but silently ineffective for certain kinds of pods.
For example, the following listed fields are ineffective:
spec.haproxy.sslInternalSecretName
spec.logcollector.runtimeClassName
spec.haproxy.vaultSecretName
spec.pxc.replicasExternalTrafficPolicy
spec.pxc.replicasServiceType
spec.pxc.serviceAnnotations
spec.pxc.serviceLabels
spec.pxc.serviceType
spec.pxc.sidecarResources
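The root cause can be illustrated with a simplified Go sketch. The type and function names below are hypothetical, not the operator's actual source: the point is that when one shared PodSpec is reused for every component, the generated CRD schema exposes every field on every component, even when a component's reconcile path never reads it.

```go
package main

import "fmt"

// PodSpec is shared by all three components, so service-related fields
// appear under spec.pxc, spec.haproxy, and spec.proxysql alike.
type PodSpec struct {
	Size          int32
	Image         string
	ServiceLabels map[string]string // only some reconcile paths consume this
}

// ClusterSpec mirrors the CR layout: one PodSpec per component.
type ClusterSpec struct {
	PXC      *PodSpec
	HAProxy  *PodSpec
	ProxySQL *PodSpec
}

// haproxyServiceLabels mimics a reconcile path that honors ServiceLabels.
func haproxyServiceLabels(s *ClusterSpec) map[string]string {
	return s.HAProxy.ServiceLabels
}

// pxcServiceLabels mimics the PXC reconcile path, which never consults the
// field even though the CRD schema accepts it under spec.pxc.
func pxcServiceLabels(s *ClusterSpec) map[string]string {
	return nil // the user-supplied value is silently dropped
}

func main() {
	spec := &ClusterSpec{
		PXC:     &PodSpec{ServiceLabels: map[string]string{"key": "value"}},
		HAProxy: &PodSpec{},
	}
	// The user set the labels, but the PXC path never applies them.
	fmt.Println(pxcServiceLabels(spec)) // prints map[]
}
```

The struct is shared for convenience in the Go code, but the CRD validation schema generated from it cannot express "this field is only valid for HAProxy," so users get no error when they set an ignored field.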
Steps To Reproduce
As an example, we reproduce this issue using the field spec.pxc.serviceLabels.
Apply the following CR in which spec.pxc.serviceLabels is set to key: value.
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  finalizers:
    - delete-pxc-pods-in-order
  name: test-cluster
spec:
  pxc:
    serviceLabels:
      key: value
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    autoRecovery: true
    gracePeriod: 600
    image: percona/percona-xtradb-cluster:8.0.27-18.1
    imagePullPolicy: IfNotPresent
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 100m
        memory: 512Mi
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 512Mi
  haproxy:
    labels:
      key: value
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    enabled: true
    gracePeriod: 30
    image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
    imagePullPolicy: IfNotPresent
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 100m
        memory: 512Mi
    size: 3
  allowUnsafeConfigurations: false
  backup:
    image: percona/percona-xtradb-cluster-operator:1.11.0-pxc8.0-backup
    imagePullPolicy: IfNotPresent
    pitr:
      enabled: false
    schedule:
      - keep: 3
        name: sat-night-backup
        schedule: 0 0 * * 6
        storageName: s3-us-west
      - keep: 5
        name: daily-backup
        schedule: 0 0 * * *
        storageName: fs-pvc
    storages:
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 512Mi
      s3-us-west:
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
        type: s3
        verifyTLS: true
  crVersion: 1.11.0
  enableCRValidationWebhook: true
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.11.0-logcollector
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 100M
  pmm:
    enabled: false
  proxysql:
    enabled: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: 8.0-recommended
    schedule: 0 4 * * *
    versionServiceEndpoint: https://check.percona.com
Observe that the label key: value does not appear on any Kubernetes resource created by the operator.
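One way to check is to inspect the labels on the Services the operator creates. The service name test-cluster-pxc below is an assumption based on the operator's usual `<cluster-name>-pxc` naming convention:

```shell
# List all Services in the namespace together with their labels;
# the user-supplied label key=value is absent.
kubectl get services --show-labels

# Inspect the PXC service's labels directly (service name assumed
# from the cluster name "test-cluster").
kubectl get service test-cluster-pxc -o jsonpath='{.metadata.labels}'
```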
Suggested Fix
To fix the custom resource definition, a naive approach would be to define separate PodSpec structs for the PXC, HAProxy, and ProxySQL components, so that each component's section of the CRD only exposes the fields its reconciler actually uses.
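A minimal sketch of that approach, with hypothetical type names (not a proposal for the operator's actual API): per-component spec structs embed a common base holding the genuinely shared fields, and component-specific fields live only on the structs whose reconcilers honor them.

```go
package main

import "fmt"

// CommonPodSpec holds the fields that genuinely apply to every component.
type CommonPodSpec struct {
	Size  int32
	Image string
}

// PXCSpec exposes only fields the PXC reconciler actually uses.
type PXCSpec struct {
	CommonPodSpec
	AutoRecovery bool // PXC-only field
}

// HAProxySpec exposes ServiceLabels because its reconciler applies them.
type HAProxySpec struct {
	CommonPodSpec
	ServiceLabels map[string]string
}

// ProxySQLSpec likewise only carries fields ProxySQL supports.
type ProxySQLSpec struct {
	CommonPodSpec
	ServiceLabels map[string]string
}

func main() {
	h := HAProxySpec{
		CommonPodSpec: CommonPodSpec{Size: 3},
		ServiceLabels: map[string]string{"key": "value"},
	}
	fmt.Println(h.Size, h.ServiceLabels["key"]) // prints 3 value
}
```

With separate structs, a field like spec.pxc.serviceLabels simply does not exist in the generated CRD schema, so setting it is rejected by validation instead of being silently ignored.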