Smart update in cluster-wide mode adds version service check job repeatedly
Environment: None
Activity

Percona Bot updated the AFFECTED USER LEVEL (March 5, 2024 at 6:03 PM): None → Internal

Secure Custom Fields for Jira (Security & Permission) updated the AFFECTED USER LEVEL (December 18, 2023 at 6:20 PM): None → (value not rendered)

Tomislav Plavcic changed the Status (February 2, 2021 at 3:49 PM): Pending Release → Done

Tomislav Plavcic updated the Resolution (February 2, 2021 at 3:49 PM): Fixed → Done

Pavel Kasko changed the Status (January 11, 2021 at 12:20 PM): In Progress → Pending Release

Pavel Kasko updated the Resolution (January 11, 2021 at 12:20 PM): None → Fixed

Pavel Kasko
logged 5hJanuary 11, 2021 at 12:19 PM

Pavel Kasko
updated the Remaining EstimateJanuary 11, 2021 at 12:19 PM
0m
0m

Pavel Kasko
updated the Time SpentJanuary 11, 2021 at 12:19 PM
20m
5h 20m

Pavel Kasko changed the Status (December 21, 2020 at 9:52 AM): Open → In Progress

Mykola Marzhan updated the Fix versions (December 9, 2020 at 10:48 AM): None → 1.7.0

Mykola Marzhan changed the Assignee (December 9, 2020 at 10:48 AM): Unassigned → Pavel Kasko

Tomislav Plavcic updated the Labels (December 3, 2020 at 9:18 AM): bug-new-feature → bug-new-feature discover-qa

Sergey Pronin updated the Linked Issues (December 1, 2020 at 1:25 PM): None → This issue blocks RM-820

Sergey Pronin updated the Labels (November 4, 2020 at 12:32 PM): None → bug-new-feature

Tomislav Plavcic logged 20m (October 28, 2020 at 1:07 PM)

Tomislav Plavcic updated the Time Spent (October 28, 2020 at 1:07 PM): 0m → 20m

Tomislav Plavcic changed the Status (October 28, 2020 at 1:07 PM): Confirmation → Open

Tomislav Plavcic changed the Status (October 28, 2020 at 1:07 PM): New → Confirmation

Tomislav Plavcic created the Bug (October 28, 2020 at 1:07 PM):
If we deploy the operator in cluster-wide mode with two clusters in different namespaces, and then enable smart update in at least one cluster, the logs show something like this:
{"level":"info","ts":1603889380.1463509,"logger":"controller_perconaxtradbcluster","msg":"update PXC version to 8.0.20-11.1 (fetched from db)"}
{"level":"info","ts":1603889451.1035082,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889460.3854651,"logger":"controller_perconaxtradbcluster","msg":"update PXC version from 8.0.20-11.1 to 8.0.20-11.2"}
{"level":"info","ts":1603889462.63149,"logger":"controller_perconaxtradbcluster","msg":"statefulSet was changed, run smart update"}
{"level":"info","ts":1603889462.6391134,"logger":"controller_perconaxtradbcluster","msg":"primary pod is cluster1-pxc-0.cluster1-pxc.pxc1"}
{"level":"info","ts":1603889462.6391613,"logger":"controller_perconaxtradbcluster","msg":"apply changes to secondary pod cluster1-pxc-2"}
{"level":"info","ts":1603889541.6645136,"logger":"controller_perconaxtradbcluster","msg":"pod cluster1-pxc-2 is running"}
{"level":"info","ts":1603889546.6787336,"logger":"controller_perconaxtradbcluster","msg":"apply changes to secondary pod cluster1-pxc-1"}
{"level":"info","ts":1603889612.7021763,"logger":"controller_perconaxtradbcluster","msg":"pod cluster1-pxc-1 is running"}
{"level":"info","ts":1603889617.7145846,"logger":"controller_perconaxtradbcluster","msg":"apply changes to primary pod cluster1-pxc-0"}
{"level":"info","ts":1603889697.7419097,"logger":"controller_perconaxtradbcluster","msg":"pod cluster1-pxc-0 is running"}
{"level":"info","ts":1603889702.7532291,"logger":"controller_perconaxtradbcluster","msg":"smart update finished"}
{"level":"info","ts":1603889863.1805832,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889868.3008893,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889873.4332998,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889878.5491076,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889882.4969003,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889888.7722158,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889894.041432,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889899.1608202,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889904.2993724,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889909.4454117,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889914.5720103,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889919.683838,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889924.830826,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889929.9449697,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889935.0692508,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889940.1881974,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889945.3079748,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889950.43332,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889955.555427,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889960.674868,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889965.812302,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889970.9342175,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889976.0745318,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889981.1991239,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889986.3639224,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889991.485472,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603889996.6022882,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890002.5909438,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890007.8100843,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890012.9314384,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890018.0461905,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890023.1613872,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890028.2779198,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890033.3999033,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890038.5166242,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
{"level":"info","ts":1603890043.6368077,"logger":"controller_perconaxtradbcluster","msg":"add new job: * * * * *"}
As you can see, "add new job" is printed repeatedly, when it should be printed only once. Smart update itself did work, so apart from flooding the logs it is unclear what other consequences this has in the background.
I also found that with a cluster-wide operator and only one PXC cluster this does not happen; it starts as soon as a second PXC cluster is added.
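Since the message only repeats once a second cluster is watched, one plausible explanation is that the per-cluster version-check cron job is re-registered on every reconcile pass. The sketch below is hypothetical Go (not the operator's actual code; the `scheduler`, `ensureJob`, and `jobName` names are invented) showing how registrations could be deduplicated by a namespace-qualified key so "add new job" fires only once per cluster:

```go
package main

import "fmt"

// jobName builds a key that is unique per namespace, cluster, and
// cron schedule. Keying by namespace matters in cluster-wide mode,
// where two clusters in different namespaces may share a name.
func jobName(namespace, cluster, schedule string) string {
	return fmt.Sprintf("%s/%s:%s", namespace, cluster, schedule)
}

// scheduler remembers which jobs are already registered so a
// reconcile loop running every few seconds does not re-add them.
type scheduler struct {
	jobs map[string]struct{}
}

func newScheduler() *scheduler {
	return &scheduler{jobs: make(map[string]struct{})}
}

// ensureJob registers a job at most once; it returns true only when
// the job is newly added, i.e. when "add new job" should be logged.
func (s *scheduler) ensureJob(namespace, cluster, schedule string) bool {
	key := jobName(namespace, cluster, schedule)
	if _, seen := s.jobs[key]; seen {
		return false // already scheduled: skip, do not log again
	}
	s.jobs[key] = struct{}{}
	return true
}

func main() {
	s := newScheduler()
	// Two clusters in different namespaces, reconciled three times each:
	// only the first pass for each cluster reports a new job.
	for i := 0; i < 3; i++ {
		for _, ns := range []string{"pxc1", "pxc2"} {
			if s.ensureJob(ns, "cluster1", "* * * * *") {
				fmt.Printf("add new job for %s/cluster1\n", ns)
			}
		}
	}
}
```

Whether the operator's actual bug is a missing dedup check or a key collision between the two clusters would need to be confirmed in the controller source; this only illustrates the expected once-per-cluster behavior.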
Steps to reproduce:
kubectl create namespace pxc-operator
kubectl create namespace pxc1
kubectl apply -f cw-bundle.yaml -n pxc-operator
kubectl apply -f cr.yaml -n pxc1
kubectl create namespace pxc2
kubectl apply -f cr.yaml -n pxc2
kubectl patch pxc cluster1 --type=merge --patch '{ "spec":{"updateStrategy":"SmartUpdate", "upgradeOptions":{"versionServiceEndpoint":"https://check.percona.com","apply":"recommended","schedule":"* * * * *"}}}' -n pxc1