Description

As per Percona Operator for MongoDB - Custom Resource options, we should be able to specify the replica set shared key via the `secrets.key` attribute. However, the following error is reported:

Error from server (BadRequest): error when creating "deploy/cr.yaml": PerconaServerMongoDB in version "v1" cannot be handled as a PerconaServerMongoDB: strict decoding error: unknown field "spec.secrets.key"

 

Full cr.yaml:

 

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
  finalizers:
    - delete-psmdb-pods-in-order
spec:
  crVersion: 1.16.2
  image: percona/percona-server-mongodb:7.0.8-5
  imagePullPolicy: Always
  unsafeFlags:
    tls: false
    replsetSize: false
    mongosSize: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: my-cluster-name-secrets
    encryptionKey: my-cluster-name-mongodb-encryption-key
    key: my-cluster-name-keyfile
  pmm:
    enabled: false
    image: percona/pmm-client:2.41.2
    serverHost: monitoring-service
  replsets:
    - name: rs0
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: false
        exposeType: ClusterIP
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
      nonvoting:
        enabled: false
        size: 3
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: "300m"
            memory: "0.5G"
          requests:
            cpu: "300m"
            memory: "0.5G"
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 3Gi
      arbiter:
        enabled: false
        size: 1
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
        resources:
          limits:
            cpu: "300m"
            memory: "0.5G"
          requests:
            cpu: "300m"
            memory: "0.5G"
  sharding:
    enabled: true
    configsvrReplSet:
      size: 3
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: false
        exposeType: ClusterIP
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    mongos:
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      expose:
        exposeType: ClusterIP
  backup:
    enabled: true
    image: percona/percona-backup-mongodb:2.4.1
    pitr:
      enabled: false
      oplogOnly: false
      compressionType: gzip
      compressionLevel: 6

The workaround is to create the Secret with the hard-coded name `my-cluster-name-mongodb-keyfile` before deploying cr.yaml, but that is not ideal.
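A minimal sketch of that workaround, assuming the operator expects the keyfile under a `mongodb-key` data key (the key name and namespace here are assumptions, not confirmed by this report):

```yaml
# Workaround sketch (data key name and namespace are assumptions):
# pre-create the Secret under the hard-coded name before applying cr.yaml.
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-mongodb-keyfile
  namespace: psmdb
type: Opaque
stringData:
  # A MongoDB keyfile is a base64-character string up to 1024 characters,
  # e.g. generated with: openssl rand -base64 756
  mongodb-key: <paste generated keyfile contents here>
```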

Environment

None

Activity


Pavel Tankov November 1, 2024 at 7:19 AM

So, I tried this: I uncommented the `keyFile: my-cluster-name-mongodb-keyfile` option and deployed a new psmdb cluster from scratch:

message: "handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t67238607add35fcc546566ea\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n / MongoServerSelectionError: connection <monitor> to 127.0.0.1:27017 closed\n\nhandleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t67238616975edd265b6566ea\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n

and operator log:

2024-10-31T13:35:10.863Z ERROR Reconciler error {"controller": "psmdb-controller", "object": {"name":"my-cluster-name","namespace":"psmdb"}, "namespace": "psmdb", "name": "my-cluster-name", "reconcileID": "f583ed00-173b-4229-b67d-b97728c58085", "error": "reconcile statefulsets: handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6723877cae4b32e3b26566ea\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n / MongoServerSelectionError: connection <monitor> to 127.0.0.1:27017 closed\n\nhandleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6723878b3e22ece3ac6566ea\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n / MongoServerSelectionError: connection <monitor> to 127.0.0.1:27017 closed\n", "errorVerbose": "handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6723877cae4b32e3b26566ea\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n / MongoServerSelectionError: connection <monitor> to 127.0.0.1:27017 closed\n\nhandleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6723878b3e22ece3ac6566ea\nConnecting 
to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.3.0\n / MongoServerSelectionError: connection <monitor> to 127.0.0.1:27017 closed\n\nreconcile statefulsets\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:426\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/internal/controller/controller.go:224\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"}

Pavel Tankov November 1, 2024 at 7:19 AM

So, we have the option `keyFile: my-cluster-name-mongodb-keyfile`, which is commented out by default. The value `my-cluster-name-mongodb-keyfile` is the default one, which means that if I uncomment that line, leave the default value, and re-apply cr.yaml, nothing should change, correct? If so, then why does my psmdb deployment enter an error state?

host: my-cluster-name-mongos.psmdb.svc.cluster.local message: 'Error: reconcile StatefulSet for rs0: failed to run smartUpdate: failed to check active jobs: getting PBM object: create PBM connection to my-cluster-name-rs0-0.my-cluster-name-rs0.psmdb.svc.cluster.local:27017,my-cluster-name-rs0-1.my-cluster-name-rs0.psmdb.svc.cluster.local:27017,my-cluster-name-rs0-2.my-cluster-name-rs0.psmdb.svc.cluster.local:27017

Slava Sarzhan October 3, 2024 at 1:57 PM
Edited

A new option was added:

spec:
  secrets:
    keyFile: my-cluster-name-mongodb-keyfile

QA: We need to try setting a custom keyfile Secret and connecting to the components using the keyfile (both sharded and non-sharded deployments).
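A sketch of what that QA check could look like in cr.yaml, assuming a custom Secret named `my-custom-keyfile` has been created beforehand (the Secret name is a placeholder):

```yaml
# Point the cluster at a custom keyfile Secret instead of the default name.
spec:
  secrets:
    keyFile: my-custom-keyfile  # must reference an existing Secret
```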

Slava Sarzhan August 2, 2024 at 3:35 PM

The operator does not support `spec.secrets.key: my-cluster-name-keyfile` at all. We need to add this option.
For developers: it is better to use the name `spec.secrets.keyFile`.

Ivan Groenewold August 2, 2024 at 3:12 PM

Hi Slava, yes, I tested with those options and auto-generation works. However, as per https://docs.percona.com/percona-operator-for-mongodb/users.html?h=key#mongodb-internal-authentication-key-optional we should be able to provide the secret name as part of cr.yaml. This is not working.

Done

Details

Assignee

dmitriy.kostiuk

Reporter

Ivan Groenewold

Labels

Needs QA

Yes

Needs Doc

Yes

Story Points

1

Sprint

None

Fix versions

Priority


Created July 29, 2024 at 3:16 PM
Updated November 14, 2024 at 5:21 PM
Resolved November 11, 2024 at 1:38 PM
