Issues
- Time monotonicity violation after restoring a logical backup in a new cluster (K8SPSMDB-1336)
- Cluster failure after physical restore failure due to bad storage permissions (K8SPSMDB-1309)
- Physical restore gets error state when updateStrategy is SmartUpdate (K8SPSMDB-1302, assignee: Eleonora Zinchenko)
- Sharded cluster restoration failure in 1.19.1 (K8SPSMDB-1289)
Time monotonicity violation after restoring a logical backup in a new cluster
General
Escalation
Description
Environment: None
AFFECTED CS IDs: CS0053759
Details
Assignee: Unassigned
Reporter: zelmar.michelini
Needs QA: Yes
Affects versions:
Priority: High
Created 3 days ago
Updated 2 days ago
If you restore a backup into a new Kubernetes cluster, you will get "Time monotonicity violation" errors on the cfg servers and mongos, and the pods will start to restart.

Time monotonicity violation error:
{"t":{"$date":"2025-04-02T18:12:27.649+00:00"},"s":"E", "c":"ASSERT", "id":4457000, "ctx":"CatalogCache-3","msg":"Tripwire assertion","attr":{"error":{"code":6493100, "codeName":"Location6493100","errmsg":"Time monotonicity violation: lookup time { chunkVersion: { e: ObjectId('67ed72e8739da6f4f6c648d6'), t: Timestamp(1743614696, 13), v: Timestamp(1, 0) }, forcedRefreshSequenceNum: 9, epochDisambiguatingSequenceNum: 340 } which is less than the earliest expected timeInStore { chunkVersion: { e: ObjectId('67ed778e6982c61201c9628f'), t: Timestamp(1743615886, 10), v: Timestamp(1, 0) }, forcedRefreshSequenceNum: 9, epochDisambiguatingSequenceNum: 8 }."},"location":"{fileName:\"src/mongo/util/read_through_cache.h\", line:549, functionName:\"operator()\"}"}}
zelmar@LAPTOP-MD0FVN06:~/CS0053759/CS0053759_oldcluster-dump/cluster-dump/dbaas-mongodb-rs-mongodb$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS        AGE
percona-server-mongodb-operator-7f7764cd57-rzmjq   1/1     Running   0               84m
my-cluster-name-rs0-1                              2/2     Running   0               81m
my-cluster-name-rs0-0                              2/2     Running   1 (79m ago)     84m
my-cluster-name-rs0-2                              2/2     Running   0               79m
my-cluster-name-cfg-0                              2/2     Running   7 (9m36s ago)   84m
my-cluster-name-cfg-1                              2/2     Running   7 (7m23s ago)   81m
my-cluster-name-mongos-0                           1/1     Running   9 (2m44s ago)   59m
my-cluster-name-cfg-2                              2/2     Running   5 (2m17s ago)   79m
my-cluster-name-mongos-1                           1/1     Running   9 (2m19s ago)   58m
my-cluster-name-mongos-2                           1/1     Running   9 (2m9s ago)    58m
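To confirm it is this failure mode and not an unrelated crash loop, the assertion can be located in the cfg/mongos pod logs; a minimal sketch, using the pod names from the output above (the sample log line below is a trimmed copy of the one in this report, so the extraction part runs offline without a cluster):

```shell
#!/bin/sh
# On the affected cluster, the error shows up in the mongod container logs
# of the cfg servers and in the mongos logs, e.g.:
#   kubectl logs my-cluster-name-cfg-0 -c mongod | grep "Time monotonicity violation"

# Offline, the server error code can be pulled out of a captured log line
# (sample line trimmed from the report above):
line='{"t":{"$date":"2025-04-02T18:12:27.649+00:00"},"s":"E","c":"ASSERT","id":4457000,"attr":{"error":{"code":6493100,"codeName":"Location6493100"}}}'
code=$(printf '%s' "$line" | grep -o '"code":[0-9]*' | head -n1 | cut -d: -f2)
echo "$code"   # prints: 6493100
```

Error code 6493100 matches the Location6493100 tripwire assertion shown in the log excerpt above.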
To reproduce:
1. Deploy a cluster (Operator 1.19.1 and percona-server-mongodb:7.0.16-10).
2. Insert some data.
3. Take a backup.
4. Deploy a new cluster.
5. Restore the backup.
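The backup and restore steps above map onto the operator's custom resources; a minimal sketch, where the backup name, restore name, and storage name (s3-us-west) are illustrative assumptions and must match what is defined in the cluster's cr.yaml:

```yaml
# Step 3: take a backup on the source cluster
# (storageName is hypothetical; it must reference a storage
# configured in the cluster's cr.yaml backup section)
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  clusterName: my-cluster-name
  storageName: s3-us-west
---
# Step 5: restore that backup into the freshly deployed cluster
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name
  backupName: backup1
```

Each manifest is applied with kubectl apply -f; the restore CR is applied against the new cluster, which is where the Time monotonicity violation errors then appear on the cfg and mongos pods.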