Backup CronJob has no resources

Description

The backup CronJob created by the operator does not include resources (limits and requests), which prevents it from running when the namespace has resource quota limits.

 

Error creating: pods "mycluster-name-backup-daily-minio-1623657000-98vhv" is forbidden: failed quota: staging-cpu-memory-quota: must specify limits.cpu,requests.cpu
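For reference, a namespace quota like the following would produce this rejection. This is a minimal sketch: the quota name and namespace are taken from the error message, while the quantities are illustrative assumptions. With such a quota in place, the API server refuses any pod whose containers omit CPU requests and limits.

```yaml
# Illustrative ResourceQuota; values are assumptions, the name and
# namespace come from the error message above. Any pod created in this
# namespace must set requests.cpu and limits.cpu or it is rejected.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-cpu-memory-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    limits.cpu: "8"
    requests.memory: 8Gi
    limits.memory: 16Gi
```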

Environment

None

Activity

Slava Sarzhan July 12, 2021 at 8:52 AM

The issue was fixed. The fix will be available in the next release.

Slava Sarzhan July 8, 2021 at 8:53 PM

Hi,

First of all, thank you for your report. You created the report in the wrong project, which is why we could not reproduce it: for the PXC operator it works:

> kubectl get jobs xb-cron-cluster1-fs-pvc-20217818550-8fa30 -o jsonpath='{.spec.template.spec.containers[0].resources}'
{"requests":{"cpu":"600m","memory":"1G"}}

But if we are talking about MongoDB, then yes, we have this issue. I am moving this task to the other project.

George Asenov June 30, 2021 at 10:46 AM

I already did that, and it doesn't work.

Here is my cr.yaml:

apiVersion: psmdb.percona.com/v1-6-0
kind: PerconaServerMongoDB
metadata:
  name: mycluster-name
spec:
  crVersion: 1.6.0
  image: percona/percona-server-mongodb:4.4.2-4
  imagePullPolicy: IfNotPresent
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: recommended
    schedule: "0 2 * * *"
  secrets:
    users: my-cluster-name-secrets
  pmm:
    enabled: false
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
  replsets:
    - name: rs0
      size: 3
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: false
        exposeType: LoadBalancer
      arbiter:
        enabled: False
        size: 1
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "10m"
          memory: "0.2G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
  sharding:
    enabled: false
    configsvrReplSet:
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "100m"
          memory: "0.2G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    mongos:
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "128Mi"
        requests:
          cpu: "100m"
          memory: "64Mi"
      expose:
        exposeType: ClusterIP
  mongod:
    net:
      port: 27017
      hostPort: 0
    security:
      redactClientLogData: false
      enableEncryption: true
      encryptionKeySecret: my-cluster-name-mongodb-encryption-key
      encryptionCipherMode: AES256-CBC
    setParameter:
      ttlMonitorSleepSecs: 60
      wiredTigerConcurrentReadTransactions: 128
      wiredTigerConcurrentWriteTransactions: 128
    storage:
      engine: wiredTiger
      inMemory:
        engineConfig:
          inMemorySizeRatio: 0.9
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true
    operationProfiling:
      mode: slowOp
      slowOpThresholdMs: 100
      rateLimit: 100
  backup:
    enabled: true
    restartOnFailure: true
    image: percona/percona-server-mongodb-operator:1.6.0-backup
    serviceAccountName: percona-server-mongodb-operator
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "100m"
        memory: "0.22"
    storages:
      minio:
        type: s3
        s3:
          bucket: mycluster-name-backup
          region: us-east-1
          credentialsSecret: mongodb-backup-minio
          endpointUrl: http://minio-server:9000
    tasks:
      - name: daily-minio
        enabled: true
        schedule: "*/10 * * * *"
        storageName: minio
        compressionType: gzip

Here is the produced Job manifest:

kind: Job
apiVersion: batch/v1
metadata:
  name: mycluster-name-backup-daily-minio-1623657000
  namespace: staging
  uid: 1029c715-1cfa-4014-9a60-07b635c08aaa
  resourceVersion: '50925245'
  creationTimestamp: '2021-06-14T07:50:01Z'
  labels:
    app.kubernetes.io/component: backup-schedule
    app.kubernetes.io/instance: mycluster-name
    app.kubernetes.io/managed-by: percona-server-mongodb-operator
    app.kubernetes.io/name: percona-server-mongodb
    app.kubernetes.io/part-of: percona-server-mongodb
    app.kubernetes.io/replset: general
  ownerReferences:
    - apiVersion: batch/v1beta1
      kind: CronJob
      name: mycluster-name-backup-daily-minio
      uid: d8ed81d7-8929-4b77-ae20-e370f6f8ed7b
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: batch/v1
      time: '2021-06-14T07:50:01Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:app.kubernetes.io/component': {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.kubernetes.io/part-of': {}
            'f:app.kubernetes.io/replset': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"d8ed81d7-8929-4b77-ae20-e370f6f8ed7b"}':
              .: {}
              'f:apiVersion': {}
              'f:blockOwnerDeletion': {}
              'f:controller': {}
              'f:kind': {}
              'f:name': {}
              'f:uid': {}
        'f:spec':
          'f:backoffLimit': {}
          'f:completions': {}
          'f:parallelism': {}
          'f:template':
            'f:spec':
              'f:containers':
                'k:{"name":"backup"}':
                  .: {}
                  'f:args': {}
                  'f:command': {}
                  'f:env':
                    .: {}
                    'k:{"name":"NAMESPACE"}':
                      .: {}
                      'f:name': {}
                      'f:valueFrom':
                        .: {}
                        'f:fieldRef':
                          .: {}
                          'f:apiVersion': {}
                          'f:fieldPath': {}
                    'k:{"name":"psmdbCluster"}':
                      .: {}
                      'f:name': {}
                      'f:value': {}
                  'f:image': {}
                  'f:imagePullPolicy': {}
                  'f:name': {}
                  'f:resources': {}
                  'f:securityContext':
                    .: {}
                    'f:runAsNonRoot': {}
                    'f:runAsUser': {}
                  'f:terminationMessagePath': {}
                  'f:terminationMessagePolicy': {}
              'f:dnsPolicy': {}
              'f:restartPolicy': {}
              'f:schedulerName': {}
              'f:securityContext':
                .: {}
                'f:fsGroup': {}
              'f:serviceAccount': {}
              'f:serviceAccountName': {}
              'f:terminationGracePeriodSeconds': {}
  selfLink: >-
    /apis/batch/v1/namespaces/staging/jobs/mycluster-name-backup-daily-minio-1623657000
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 6
  selector:
    matchLabels:
      controller-uid: 1029c715-1cfa-4014-9a60-07b635c08aaa
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 1029c715-1cfa-4014-9a60-07b635c08aaa
        job-name: mycluster-name-backup-daily-minio-1623657000
    spec:
      containers:
        - name: backup
          image: 'percona/percona-server-mongodb-operator:1.8.0-backup'
          command:
            - sh
          args:
            - '-c'
            - "curl \\\n\t\t\t-vvv \\\n\t\t\t-X POST \\\n\t\t\t--cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt \\\n\t\t\t-H \"Content-Type: application/json\" \\\n\t\t\t-H \"Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)\" \\\n\t\t\t--data \"{ \n\t\t\t\t\\\"kind\\\":\\\"PerconaServerMongoDBBackup\\\",\n\t\t\t\t\\\"apiVersion\\\":\\\"psmdb.percona.com/v1\\\",\n\t\t\t\t\\\"metadata\\\":{\n\t\t\t\t\t\\\"finalizers\\\": [\\\"delete-backup\\\"],\n\t\t\t\t\t\\\"generateName\\\":\\\"cron-${psmdbCluster:0:16}-$(date -u \"+%Y%m%d%H%M%S\")-\\\",\n\t\t\t\t\t\\\"labels\\\":{\n\t\t\t\t\t\t\\\"ancestor\\\":\\\"mycluster-name-backup-daily-minio\\\",\n\t\t\t\t\t\t\\\"cluster\\\":\\\"${psmdbCluster}\\\",\n\t\t\t\t\t\t\\\"type\\\":\\\"cron\\\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\\\"spec\\\":{\n\t\t\t\t\t\\\"psmdbCluster\\\":\\\"${psmdbCluster}\\\",\n\t\t\t\t\t\\\"storageName\\\":\\\"minio\\\"\n\t\t\t\t}\n\t\t\t}\" \\\n\t\t\thttps://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/apis/psmdb.percona.com/v1/namespaces/${NAMESPACE}/perconaservermongodbbackups"
          env:
            - name: psmdbCluster
              value: mycluster-name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 1001
            runAsNonRoot: true
      restartPolicy: Never
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: percona-server-mongodb-operator
      serviceAccount: percona-server-mongodb-operator
      securityContext:
        fsGroup: 1001
      schedulerName: default-scheduler
status: {}

And the error I get from Kubernetes:

Error creating: pods "mycluster-name-backup-daily-minio-1623657000-nzmvw" is forbidden: failed quota: staging-cpu-memory-quota: must specify limits.cpu,requests.cpu

If there are no namespace ResourceQuotas, this error won't happen, because resources are not mandatory.
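Until a fixed release is available, one possible workaround is a LimitRange in the namespace: Kubernetes then injects default requests and limits into containers that omit them, so the quota check passes. This is a sketch only; the object name and the values are illustrative assumptions, not part of this report.

```yaml
# Illustrative LimitRange (hypothetical name and values): containers
# created without resources receive these defaults, which satisfies a
# quota that requires limits.cpu and requests.cpu to be set.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
  namespace: staging
spec:
  limits:
    - type: Container
      default:            # applied as limits when none are specified
        cpu: 300m
        memory: 512Mi
      defaultRequest:     # applied as requests when none are specified
        cpu: 100m
        memory: 256Mi
```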

Mykola Marzhan June 14, 2021 at 8:10 AM

Done

Details

Time tracking

5h logged

Created June 14, 2021 at 8:02 AM
Updated March 5, 2024 at 4:50 PM
Resolved September 30, 2021 at 2:09 PM
