PMM enabling issue on percona-xtradb-cluster-operator 1.4.0

Description

Hello team,

I've hit an issue when enabling PMM on 1.4.0. The required env variables such as DB_HOST are missing from the pods, which leads to the cluster crashing.

 

Steps to reproduce:

1) Install the cluster without PMM (it can also be installed with PMM; the result will be the same). I just installed the cluster before monitoring. Here is the cluster CR I used:

 

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  creationTimestamp: "2020-06-19T07:40:51Z"
  finalizers:
  - delete-pxc-pods-in-order
  generation: 3
  name: mysql
  namespace: prod
  resourceVersion: "22153354"
  selfLink: /apis/pxc.percona.com/v1/namespaces/prod/perconaxtradbclusters/mysql
  uid: 330dae03-b200-11ea-bfc4-12f03fa577ed
spec:
  allowUnsafeConfigurations: false
  backup:
    image: percona/percona-xtradb-cluster-operator:1.4.0-pxc8.0-backup
    schedule:
    - keep: 10
      name: s3-daily-backup
      schedule: "0 0 * * *"
      storageName: s3-us-east
    - keep: 1
      name: daily-backup
      schedule: "0 0 * * *"
      storageName: fs-pvc
    serviceAccountName: percona-xtradb-cluster-operator
    storages:
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
      s3-us-east:
        s3:
          bucket: mysql-prod-backup-artpix3d
          credentialsSecret: my-cluster-name-backup-s3
          region: us-east-1
        type: s3
  pmm:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.4.0-pmm
    serverHost: monitoring-service
    serverUser: pmm
  proxysql:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    enabled: true
    gracePeriod: 30
    image: percona/percona-xtradb-cluster-operator:1.4.0-proxysql
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 10Gi
  pxc:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    configuration: |
      [mysqld]
      max_allowed_packet=64M
      slow_query_log=ON
    gracePeriod: 600
    image: percona/percona-xtradb-cluster-operator:1.4.0-pxc8.0
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 100Gi
  secretsName: dev-cluster-secrets
  sslInternalSecretName: my-cluster-ssl-internal
  sslSecretName: my-cluster-ssl
  vaultSecretName: keyring-secret-vault
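
I applied it the usual way (a minimal sketch, assuming the manifest above is saved as cr.yaml):

# Apply the PerconaXtraDBCluster CR in the prod namespace
# (assumes the manifest above is saved locally as cr.yaml)
oc apply -f cr.yaml -n prod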

 

2) Install PMM from your official helm chart, create a user in Grafana, and add it to the cluster secrets (off-topic: please document this, because I spent too much time discovering that the password value in the helm chart is outdated).
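
For reference, this is roughly how I put the password into the cluster secret (a sketch, not from the docs; it assumes the operator reads the PMM server password from the "pmmserver" key of the secret named in spec.secretsName):

# Hypothetical example: base64-encode the Grafana/PMM password and patch it
# into the cluster secret under the assumed "pmmserver" key
PMM_PASS=$(echo -n '1q2w3e4r' | base64)
oc patch secret dev-cluster-secrets -n prod --type=merge \
  -p "{\"data\":{\"pmmserver\":\"$PMM_PASS\"}}"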

3) Set pmm.enabled to true in the CR:

 

pmm:
  enabled: true
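
The same toggle can also be applied without editing the file (a sketch, assuming pxc is the CRD short name):

# Flip pmm.enabled on the live CR with a merge patch
oc patch pxc mysql -n prod --type=merge -p '{"spec":{"pmm":{"enabled":true}}}'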

 

The result:

 

 

[yaroslav@yaroslav percona-openshift]$ oc get pod
NAME                                              READY   STATUS             RESTARTS   AGE
monitoring-0                                      1/1     Running            0          27m
mysql-proxysql-0                                  3/4     CrashLoopBackOff   9          8m
mysql-proxysql-1                                  3/4     CrashLoopBackOff   8          9m
mysql-proxysql-2                                  3/4     CrashLoopBackOff   9          11m
mysql-pxc-0                                       1/2     CrashLoopBackOff   6          7m
mysql-pxc-1                                       1/2     CrashLoopBackOff   6          10m
mysql-pxc-2                                       1/2     CrashLoopBackOff   6          11m
percona-xtradb-cluster-operator-548f5c54d-wrkr7   1/1     Running            0          46m

Error logs from the pmm-client container:

 

[yaroslav@yaroslav ~]$ oc logs -f mysql-pxc-1 -c pmm-client
+ main
+ '[' -z monitoring-service ']'
+ '[' -n 1q2w3e4r ']'
+ ARGS+=--server-password=1q2w3e4r
++ ping -c 1 monitoring-service
++ grep PING
++ sed -e 's/).*//; s/.*(//'
+ PMM_SERVER_IP=172.30.0.151
++ grep 'src '
++ ip route get 172.30.0.151
++ sed -e 's/.* src //; s/ .*//'
+ SRC_ADDR=10.129.0.61
+ CLIENT_NAME=mysql-pxc-1
++ curl -k -s -o /dev/null -w '%{http_code}' https://pmm:1q2w3e4r@monitoring-service/v1/readyz
+ SERVER_RESPONSE_CODE=200
+ [[ 200 == \2\0\0 ]]
+ export PATH=/usr/local/percona/pmm2/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ PATH=/usr/local/percona/pmm2/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ pmm2_start
+ AGENT_CONFIG_FILE=/usr/local/percona/pmm2/config/pmm-agent.yaml
+ rm -f /usr/local/percona/pmm2/config/pmm-agent.yaml
+ '[' -n pmm ']'
+ ARGS+=' --server-username=pmm'
+ pmm-agent setup --force --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml --server-address=monitoring-service --server-insecure-tls --container-id=mysql-pxc-1 --container-name=mysql-pxc-1 --ports-min=30100 --ports-max=30200 --listen-port=7777 --server-password=1q2w3e4r --server-username=pmm 10.129.0.61 container mysql-pxc-1
INFO[2020-06-19T08:23:22.614+00:00] Loading configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml.  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Using /usr/local/percona/pmm2/exporters/node_exporter  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter  component=setup
INFO[2020-06-19T08:23:22.616+00:00] Updating PMM Server address from "monitoring-service" to "monitoring-service:443".  component=setup
Checking local pmm-agent status...
pmm-agent is not running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Please start pmm-agent: `pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml`.
+ '[' -n monitor ']'
+ DB_ARGS+=' --username=monitor'
+ '[' -n nfEfm331d2dDk ']'
+ DB_ARGS+=' --password=nfEfm331d2dDk'
+ '[' -n '' ']'
+ '[' -n '' -a '' ']'
+ cat /usr/local/percona/pmm2/config/pmm-agent.yaml
# Updated by `pmm-agent setup`.
---
id: /agent_id/b703c8aa-8cbf-4da4-8cd8-fc44f32a4409
listen-port: 7777
server:
  address: monitoring-service:443
  username: pmm
  password: 1q2w3e4r
  insecure-tls: true
paths:
  exporters_base: /usr/local/percona/pmm2/exporters
  node_exporter: /usr/local/percona/pmm2/exporters/node_exporter
  mysqld_exporter: /usr/local/percona/pmm2/exporters/mysqld_exporter
  mongodb_exporter: /usr/local/percona/pmm2/exporters/mongodb_exporter
  postgres_exporter: /usr/local/percona/pmm2/exporters/postgres_exporter
  proxysql_exporter: /usr/local/percona/pmm2/exporters/proxysql_exporter
  tempdir: /tmp
ports:
  min: 30100
  max: 30200
debug: false
trace: false
+ wait_for_url http://127.0.0.1:7777
+ local URL=http://127.0.0.1:7777
+ local RESPONSE=
+ pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml --ports-min=30100 --ports-max=30200
++ seq 1 60
+ for i in '`seq 1 60`'
+ curl -k http://127.0.0.1:7777
+ grep ''
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed connect to 127.0.0.1:7777; Connection refused
+ result=1
+ '[' 1 -eq 0 ']'
+ sleep 1
+ for i in '`seq 1 60`'
+ curl -k http://127.0.0.1:7777
+ grep ''
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    10  100    10    0     0   1516      0 --:--:-- --:--:-- --:--:--  1666
+ result=0
+ '[' 0 -eq 0 ']'
+ return
+ cat /usr/local/percona/pmm2/pmm-agent-tmp.log
INFO[2020-06-19T08:23:22.839+00:00] Loading configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml.  component=main
INFO[2020-06-19T08:23:22.840+00:00] Using /usr/local/percona/pmm2/exporters/node_exporter  component=main
INFO[2020-06-19T08:23:22.840+00:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter  component=main
INFO[2020-06-19T08:23:22.840+00:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter  component=main
INFO[2020-06-19T08:23:22.840+00:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter  component=main
INFO[2020-06-19T08:23:22.840+00:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter  component=main
INFO[2020-06-19T08:23:22.840+00:00] Starting...  component=client
INFO[2020-06-19T08:23:22.841+00:00] Connecting to https://pmm:1q2w3e4r@monitoring-service:443/ ...  component=client
INFO[2020-06-19T08:23:22.840+00:00] Starting local API server on http://127.0.0.1:7777/ ...  component=local-server/JSON
INFO[2020-06-19T08:23:22.847+00:00] Started.  component=local-server/JSON
INFO[2020-06-19T08:23:22.872+00:00] Connected to monitoring-service:443.  component=client
INFO[2020-06-19T08:23:22.873+00:00] Establishing two-way communication channel ...  component=client
+ '[' -n mysql ']'
INFO[2020-06-19T08:23:22.880+00:00] Two-way communication channel established in 7.640337ms. Estimated clock drift: 998.279µs.  component=client
+ case "${DB_TYPE}" in
INFO[2020-06-19T08:23:22.881+00:00] Starting 1, restarting 0, and stopping 0 agent processes.  component=supervisor
INFO[2020-06-19T08:23:22.882+00:00] Process: starting.  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg="Enabled collectors:" source="node_exporter.go:98"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - bonding" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - buddyinfo" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - cpu" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - diskstats" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - entropy" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - filefd" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - filesystem" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - hwmon" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - loadavg" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - meminfo" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - meminfo_numa" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - netdev" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - netstat" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - processes" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - standard.go" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - standard.process" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - stat" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - textfile" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - textfile.hr" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - textfile.lr" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - textfile.mr" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - time" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - uname" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg=" - vmstat" source="node_exporter.go:105"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg="HTTP Basic authentication is enabled." source="basic_auth.go:91"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
INFO[2020-06-19T08:23:23.005+00:00] time="2020-06-19T08:23:23Z" level=info msg="Starting HTTP server for http://:30100/metrics ..." source="server.go:140"  agentID=/agent_id/9e001b50-796b-4af2-96d6-6407393024ca component=agent-process type=node_exporter
+ pmm-admin add mysql --skip-connection-check --server-url=https://pmm:1q2w3e4r@monitoring-service/ --server-insecure-tls --query-source=perfschema --username=monitor --password=nfEfm331d2dDk mysql-pxc-1 :
strconv.Atoi: parsing "": invalid syntax
[yaroslav@yaroslav ~]$

Please pay attention to this:

+ pmm-admin add mysql --skip-connection-check --server-url=https://pmm:1q2w3e4r@monitoring-service/ --server-insecure-tls --query-source=perfschema --username=monitor --password=nfEfm331d2dDk mysql-pxc-1 :
strconv.Atoi: parsing "": invalid syntax

I checked the entrypoint shell script for this container and found that the trailing colon is supposed to separate DB_HOST and DB_PORT, but these env variables are not present in the StatefulSet.
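
To illustrate the failure mode, here is a minimal sketch (not the actual entrypoint; the variable names are inferred from the trace above) of how unset DB_HOST/DB_PORT produce the bare trailing colon that pmm-admin chokes on:

#!/bin/sh
# Hypothetical reduction of the pmm-client entrypoint logic.
# DB_HOST and DB_PORT are expected to come from the StatefulSet env;
# in my sts they are absent, so both expand to empty strings.
CLIENT_NAME=mysql-pxc-1

# The service address is assembled as HOST:PORT. With both parts empty the
# final argument degenerates to a bare ":", and pmm-admin fails to parse the
# empty port number: strconv.Atoi: parsing "": invalid syntax
echo pmm-admin add mysql "$CLIENT_NAME" "${DB_HOST}:${DB_PORT}"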

I also see these errors in the perconaxtradbclusters.pxc.percona.com CR:

[yaroslav@yaroslav ~]$ oc describe perconaxtradbclusters.pxc.percona.com mysql
Name:         mysql
Namespace:    prod
Labels:       <none>
Annotations:  <none>
API Version:  pxc.percona.com/v1
Kind:         PerconaXtraDBCluster
Metadata:
  Creation Timestamp:  2020-06-19T07:40:51Z
  Finalizers:
    delete-pxc-pods-in-order
  Generation:        3
  Resource Version:  22155372
  Self Link:         /apis/pxc.percona.com/v1/namespaces/prod/perconaxtradbclusters/mysql
  UID:               330dae03-b200-11ea-bfc4-12f03fa577ed
Spec:
  Allow Unsafe Configurations:  false
  Backup:
    Image:  percona/percona-xtradb-cluster-operator:1.4.0-pxc8.0-backup
    Schedule:
      Keep:          10
      Name:          s3-daily-backup
      Schedule:      0 0 * * *
      Storage Name:  s3-us-east
      Keep:          1
      Name:          daily-backup
      Schedule:      0 0 * * *
      Storage Name:  fs-pvc
    Service Account Name:  percona-xtradb-cluster-operator
    Storages:
      Fs - Pvc:
        Type:  filesystem
        Volume:
          Persistent Volume Claim:
            Access Modes:
              ReadWriteOnce
            Resources:
              Requests:
                Storage:  100Gi
      s3-us-east:
        s3:
          Bucket:              mysql-prod-backup-artpix3d
          Credentials Secret:  my-cluster-name-backup-s3
          Region:              us-east-1
        Type:                  s3
  Pmm:
    Enabled:      true
    Image:        percona/percona-xtradb-cluster-operator:1.4.0-pmm
    Server Host:  monitoring-service
    Server User:  pmm
  Proxysql:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Enabled:       true
    Grace Period:  30
    Image:         percona/percona-xtradb-cluster-operator:1.4.0-proxysql
    Pod Disruption Budget:
      Max Unavailable:  1
    Resources:
      Requests:
        Cpu:     600m
        Memory:  1G
    Size:        3
    Volume Spec:
      Persistent Volume Claim:
        Resources:
          Requests:
            Storage:  10Gi
  Pxc:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Configuration:  [mysqld]
max_allowed_packet=64M
slow_query_log=ON
    Grace Period:  600
    Image:         percona/percona-xtradb-cluster-operator:1.4.0-pxc8.0
    Pod Disruption Budget:
      Max Unavailable:  1
    Resources:
      Requests:
        Cpu:     600m
        Memory:  1G
    Size:        3
    Volume Spec:
      Persistent Volume Claim:
        Resources:
          Requests:
            Storage:         100Gi
  Secrets Name:              dev-cluster-secrets
  Ssl Internal Secret Name:  my-cluster-ssl-internal
  Ssl Secret Name:           my-cluster-ssl
  Vault Secret Name:         keyring-secret-vault
Status:
  Conditions:
    Last Transition Time:  2020-06-19T07:40:52Z
    Status:                True
    Type:                  Initializing
    Last Transition Time:  2020-06-19T07:49:17Z
    Status:                True
    Type:                  Ready
    Last Transition Time:  2020-06-19T08:09:46Z
    Status:                True
    Type:                  Initializing
    Last Transition Time:  2020-06-19T08:12:27Z
    Message:               ProxySQL upgrade error: Operation cannot be fulfilled on statefulsets.apps "mysql-proxysql": the object has been modified; please apply your changes to the latest version and try again
    Reason:                ErrorReconcile
    Status:                True
    Type:                  Error
    Last Transition Time:  2020-06-19T08:12:28Z
    Status:                True
    Type:                  Initializing
  Host:                    mysql-proxysql.prod
  Message:                 PXC: pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-pxc-0_prod(d6e32fc9-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-pxc-1_prod(69f167cc-b204-11ea-bfc4-12f03fa577ed); ProxySQL: pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-0_prod(a690d629-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-1_prod(847aa822-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-2_prod(4abda8f7-b204-11ea-bfc4-12f03fa577ed);
  Observed Generation:     3
  Proxysql:
    Message:  pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-0_prod(a690d629-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-1_prod(847aa822-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-proxysql-2_prod(4abda8f7-b204-11ea-bfc4-12f03fa577ed);
    Size:     3
    Status:   initializing
  Pxc:
    Message:  pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-pxc-0_prod(d6e32fc9-b204-11ea-bfc4-12f03fa577ed); pmm-client: Back-off 5m0s restarting failed container=pmm-client pod=mysql-pxc-1_prod(69f167cc-b204-11ea-bfc4-12f03fa577ed);
    Size:     3
    Status:   initializing
  State:      initializing

 

Environment

None


Activity

Sergey Pronin March 10, 2021 at 1:41 PM

I'm closing this one as aged. We added official PMM v2 support to our Operator in v1.7.0. There is no official support for PMM v1 in our Operators, but we will be glad to help with migration to v2.

Any ideas for PMM v1 support are also welcome.

Sami Ahlroos October 14, 2020 at 1:57 PM
Edited

Just a note, this is probably caused by the apiVersion setting in cr.yaml:

 

apiVersion: pxc.percona.com/v1

The operator code checks "CompareVersionWith("1.2.0") >= 0" before setting DB_HOST and DB_PORT: https://github.com/percona/percona-xtradb-cluster-operator/blob/master/pkg/pxc/app/statefulset/node.go#L240
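
For context, a paraphrased sketch of that gate (simplified from the linked node.go; the exact injected values are assumptions):

// Paraphrased from pkg/pxc/app/statefulset/node.go: DB_HOST/DB_PORT are only
// injected into the pmm-client container when the CR version is >= 1.2.0.
if cr.CompareVersionWith("1.2.0") >= 0 {
	pmmEnvs = append(pmmEnvs,
		corev1.EnvVar{Name: "DB_HOST", Value: "localhost"}, // assumed value
		corev1.EnvVar{Name: "DB_PORT", Value: "3306"},      // assumed value
	)
}

With a plain apiVersion of pxc.percona.com/v1 the comparison falls below 1.2.0, so the variables are never set.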

Fixing the apiVersion in the CR file should fix this issue.
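
i.e., point the CR at the versioned API name (assuming v1-4-0 is the variant matching operator 1.4.0):

apiVersion: pxc.percona.com/v1-4-0
kind: PerconaXtraDBCluster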

Mykola Marzhan June 25, 2020 at 11:20 AM

Hi,

Feel free to prepare a fix for our scripts.

Please escalate the issue via the regular channels if you have a contract with Percona.

Yaroslav Kasatikov June 25, 2020 at 7:33 AM

Hi team,

Do you have any updates?

Yaroslav Kasatikov June 19, 2020 at 8:29 AM

Sorry, I forgot the env details:

 

oc version:

oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v3.11.0+d699176-406
kubernetes v1.11.0+d4cacc0

 

Operator version: 1.4.0 (images can be found in the listing above).

 

Done

Details


Created June 19, 2020 at 8:27 AM
Updated March 5, 2024 at 6:01 PM
Resolved March 10, 2021 at 1:41 PM
