If you have PMM running in Kubernetes and the pod restarts, all custom email templates inside /usr/share/grafana/public/emails/ are lost.
I deployed PMM and edited one email template, replacing its first line with a simple test string:
zelmar@LAPTOP-MD0FVN06:~/percona-server-mongodb-operator$ kubectl get pods -o wide
NAME                                              READY   STATUS    RESTARTS   AGE     IP          NODE                  NOMINATED NODE   READINESS GATES
percona-server-mongodb-operator-5488d45f6-82k6k   1/1     Running   0          14m     10.42.2.5   k3d-zelmar-server-0   <none>           <none>
my-cluster-name-rs0-0                             2/2     Running   0          14m     10.42.0.6   k3d-zelmar-agent-0    <none>           <none>
my-cluster-name-rs0-1                             2/2     Running   0          13m     10.42.1.6   k3d-zelmar-agent-1    <none>           <none>
my-cluster-name-rs0-2                             2/2     Running   0          13m     10.42.2.7   k3d-zelmar-server-0   <none>           <none>
pmm-0                                             1/1     Running   0          8m29s   10.42.1.8   k3d-zelmar-agent-1    <none>           <none>
First, I check the edited file inside the pod:
zelmar@LAPTOP-MD0FVN06:~/percona-server-mongodb-operator$ kubectl exec -it pmm-0 -- /bin/bash
[root@pmm-0 opt]# head -n1 /usr/share/grafana/public/emails/alert_notification.html
test
Then, I delete the pod to force a restart:
zelmar@LAPTOP-MD0FVN06:~/percona-server-mongodb-operator$ kubectl delete pods pmm-0
pod "pmm-0" deleted
zelmar@LAPTOP-MD0FVN06:~/percona-server-mongodb-operator$ kubectl get pods -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP           NODE                  NOMINATED NODE   READINESS GATES
percona-server-mongodb-operator-5488d45f6-82k6k   1/1     Running   0          24m   10.42.2.5    k3d-zelmar-server-0   <none>           <none>
my-cluster-name-rs0-0                             2/2     Running   0          24m   10.42.0.6    k3d-zelmar-agent-0    <none>           <none>
my-cluster-name-rs0-1                             2/2     Running   0          23m   10.42.1.6    k3d-zelmar-agent-1    <none>           <none>
my-cluster-name-rs0-2                             2/2     Running   0          22m   10.42.2.7    k3d-zelmar-server-0   <none>           <none>
pmm-0                                             0/1     Running   0          3s    10.42.1.10   k3d-zelmar-agent-1    <none>           <none>
And check the file again:
zelmar@LAPTOP-MD0FVN06:~/percona-server-mongodb-operator$ kubectl exec -it pmm-0 -- /bin/bash
[root@pmm-0 opt]# head -n1 /usr/share/grafana/public/emails/alert_notification.html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
The edited file is replaced with the original.
I tried to find any mention of /usr/share/grafana/public/emails/ in the Grafana documentation and couldn't, so I don't consider this a bug. Please use extraVolumeMounts and extraVolumes to mount non-standard volumes: https://github.com/percona/percona-helm-charts/blob/04f2c1f1de494dcb2a6f7b2f5eda371c3408d64a/charts/pmm/values.yaml#L262-L267 However, while we provide this option, unexpected behavior or bugs caused by such mounts are not something we support.
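For example, a customized template could live in a ConfigMap and be mounted over the default file, so it survives pod restarts. This is a hypothetical sketch against the chart's extraVolumes/extraVolumeMounts values; the ConfigMap name (pmm-custom-emails) is a placeholder you would create yourself:

```yaml
# values.yaml for the PMM Helm chart -- hypothetical example
extraVolumes:
  - name: custom-emails
    configMap:
      name: pmm-custom-emails        # kubectl create configmap pmm-custom-emails --from-file=alert_notification.html
extraVolumeMounts:
  - name: custom-emails
    mountPath: /usr/share/grafana/public/emails/alert_notification.html
    subPath: alert_notification.html  # mount only this file, leaving the rest of the directory at its defaults
```

Because the file content now comes from the ConfigMap rather than the container image, deleting the pod no longer resets it.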