mongos fails when allowUnsafeConfigurations=true and TLS is not enabled

Description

I have 3 shards with "allowUnsafeConfigurations: true" set. cert-manager is not installed and I did not apply any SSL certificates, so basically I tried to use sharding without TLS/SSL.
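For context, the relevant custom resource fragment presumably looks roughly like this. This is a sketch based on the layout of the operator's deploy/cr.yaml, not the reporter's actual manifest; the replica set names and sizes are inferred from the pod list further down:

```yaml
# Hypothetical cr.yaml excerpt reproducing the reported setup:
# unsafe configurations allowed, sharding enabled, and no TLS /
# cert-manager configuration supplied anywhere in the spec.
spec:
  allowUnsafeConfigurations: true
  sharding:
    enabled: true
    mongos:
      size: 3
  replsets:
    - name: rs0
      size: 3
    - name: rs1
      size: 3
    - name: rs2
      size: 3
```

With allowUnsafeConfigurations enabled, one would expect the operator to skip TLS entirely, including in the mongos readiness probe.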

What is happening is that the readiness probe for mongos keeps failing:

Events:
  Type     Reason     Age                     From                                                           Message
  ----     ------     ----                    ----                                                           -------
  Normal   Scheduled  7m27s                   default-scheduler                                              Successfully assigned psmdb-test/my-cluster-name-mongos-64df598464-h7jtp to gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5
  Normal   Pulling    7m26s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Pulling image "percona/percona-server-mongodb-operator:1.7.0"
  Normal   Pulled     7m26s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Successfully pulled image "percona/percona-server-mongodb-operator:1.7.0"
  Normal   Created    7m26s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Created container mongo-init
  Normal   Started    7m26s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Started container mongo-init
  Normal   Pulling    7m25s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Pulling image "percona/percona-server-mongodb:4.4.3-5"
  Normal   Pulled     7m25s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Successfully pulled image "percona/percona-server-mongodb:4.4.3-5"
  Normal   Created    7m25s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Created container mongos
  Normal   Started    7m25s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Started container mongos
  Warning  Unhealthy  7m15s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:53Z"}
  Warning  Unhealthy  7m14s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:54Z"}
  Warning  Unhealthy  7m13s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:55Z"}
  Warning  Unhealthy  7m12s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:56Z"}
  Warning  Unhealthy  7m11s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:57Z"}
  Warning  Unhealthy  7m10s                   kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:58Z"}
  Warning  Unhealthy  7m9s                    kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:15:59Z"}
  Warning  Unhealthy  7m8s                    kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:16:00Z"}
  Warning  Unhealthy  7m7s                    kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:16:01Z"}
  Warning  Unhealthy  2m26s (x279 over 7m6s)  kubelet, gke-tomislav-cluster-117-default-pool-2abca4b1-rgn5  (combined from similar events): Readiness probe failed: {"level":"fatal","msg":"Cannot parse command line: path '/etc/mongodb-ssl/ca.crt' does not exist","time":"2021-02-18T14:20:42Z"}
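Note that the probe dies before any connection to mongos is even attempted: the healthcheck binary rejects its command line because the CA path it is given was never mounted. The fail-fast behavior amounts to something like the following sketch (this is an illustration, not the actual mongodb-healthcheck source; the path comes from the events above):

```shell
# check_ca mimics the healthcheck's up-front CA path validation (a sketch,
# not the real mongodb-healthcheck code): a missing CA file aborts the
# probe before any TCP connection to mongos is attempted.
check_ca() {
    ca_file="$1"
    if [ ! -e "$ca_file" ]; then
        echo "Cannot parse command line: path '$ca_file' does not exist" >&2
        return 1
    fi
}

# With no TLS secret mounted, the probe fails immediately:
check_ca /etc/mongodb-ssl/ca.crt || echo "readiness probe reports Unhealthy"
```

So mongos itself may be perfectly healthy; the pod is marked unready purely because the probe is still built with TLS arguments.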

and the mongos logs contain errors like:

{"t":{"$date":"2021-02-18T14:21:39.882+00:00"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017","error":"HostUnreachable: Error connecting to my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017 (10.48.10.26:27017) :: caused by :: Connection refused","replicaSet":"cfg","isMasterReply":"{}"}}
{"t":{"$date":"2021-02-18T14:21:39.882+00:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"cfg","host":"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017 (10.48.10.26:27017) :: caused by :: Connection refused"},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017","success":false,"errorMessage":"HostUnreachable: Error connecting to my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017 (10.48.10.26:27017) :: caused by :: Connection refused"}}}}
{"t":{"$date":"2021-02-18T14:21:40.873+00:00"},"s":"I", "c":"NETWORK", "id":4333213, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM Topology Change","attr":{"replicaSet":"cfg","newTopologyDescription":"{ id: \"7b01cfa7-d2b1-4ab7-a09e-455941e0ed7b\", topologyType: \"ReplicaSetWithPrimary\", servers: { my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", topologyVersion: { processId: ObjectId('602e774f2f426b4455123b5d'), counter: 3 }, roundTripTime: 203356, lastWriteDate: new Date(1613658099000), opTime: { ts: Timestamp(1613658099, 9), t: 4 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 9, me: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", setName: \"cfg\", setVersion: 48841, primary: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", lastUpdateTime: new Date(1613658100873), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 1: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 2: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\" }, arbiters: {}, passives: {} }, my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", topologyVersion: { processId: ObjectId('602e778ec9ee3d41d1b66322'), counter: 5 }, roundTripTime: 441044, lastWriteDate: new Date(1613658094000), opTime: { ts: Timestamp(1613658094, 2), t: 4 }, type: \"RSPrimary\", minWireVersion: 9, maxWireVersion: 9, me: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", setName: \"cfg\", setVersion: 48841, electionId: ObjectId('7fffffff0000000000000004'), primary: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", lastUpdateTime: new Date(1613658094381), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 1: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 2: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\" }, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \"cfg\", compatible: true, maxSetVersion: 48841, maxElectionId: ObjectId('7fffffff0000000000000004') }","previousTopologyDescription":"{ id: \"7b01cfa7-d2b1-4ab7-a09e-455941e0ed7b\", topologyType: \"ReplicaSetWithPrimary\", servers: { my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", topologyVersion: { processId: ObjectId('602e774f2f426b4455123b5d'), counter: 3 }, roundTripTime: 203356, lastWriteDate: new Date(1613658090000), opTime: { ts: Timestamp(1613658090, 1), t: 3 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 9, me: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", setName: \"cfg\", setVersion: 48841, primary: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", lastUpdateTime: new Date(1613658090797), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 1: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 2: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\" }, arbiters: {}, passives: {} }, my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017: { address: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", topologyVersion: { processId: ObjectId('602e778ec9ee3d41d1b66322'), counter: 5 }, roundTripTime: 441044, lastWriteDate: new Date(1613658094000), opTime: { ts: Timestamp(1613658094, 2), t: 4 }, type: \"RSPrimary\", minWireVersion: 9, maxWireVersion: 9, me: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", setName: \"cfg\", setVersion: 48841, electionId: ObjectId('7fffffff0000000000000004'), primary: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", lastUpdateTime: new Date(1613658094381), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 1: \"my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\", 2: \"my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017\" }, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \"cfg\", compatible: true, maxSetVersion: 48841, maxElectionId: ObjectId('7fffffff0000000000000004') }"}}
{"t":{"$date":"2021-02-18T14:21:40.873+00:00"},"s":"I", "c":"SHARDING", "id":471693, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Updating the shard registry with confirmed replica set","attr":{"connectionString":"cfg/my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017,my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017,my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017"}}
{"t":{"$date":"2021-02-18T14:21:40.874+00:00"},"s":"I", "c":"SHARDING", "id":22846, "ctx":"UpdateReplicaSetOnConfigServer","msg":"Updating sharding state with confirmed replica set","attr":{"connectionString":"cfg/my-cluster-name-cfg-0.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017,my-cluster-name-cfg-1.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017,my-cluster-name-cfg-2.my-cluster-name-cfg.psmdb-test.svc.cluster.local:27017"}}
{"t":{"$date":"2021-02-18T14:21:41.619+00:00"},"s":"I", "c":"SHARDING", "id":22792, "ctx":"ShardRegistry","msg":"Term advanced for config server","attr":{"opTime":{"ts":{"$timestamp":{"t":1613658101,"i":2}},"t":4},"prevOpTime":{"ts":{"$timestamp":{"t":1613658091,"i":1}},"t":3},"reason":"reply from config server node","clientAddress":"(unknown)"}}
{"t":{"$date":"2021-02-18T14:21:41.621+00:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"Uptime-reporter","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}}

This is the current status of the setup:

$ k get pods
NAME                                               READY   STATUS    RESTARTS   AGE
my-cluster-name-cfg-0                              2/2     Running   2          7m7s
my-cluster-name-cfg-1                              2/2     Running   4          6m42s
my-cluster-name-cfg-2                              2/2     Running   3          6m13s
my-cluster-name-mongos-64df598464-h7jtp            0/1     Running   2          6m59s
my-cluster-name-mongos-64df598464-nrhzv            0/1     Running   2          6m59s
my-cluster-name-mongos-64df598464-rqcpp            0/1     Running   2          6m59s
my-cluster-name-rs0-0                              2/2     Running   2          7m6s
my-cluster-name-rs0-1                              2/2     Running   2          6m32s
my-cluster-name-rs0-2                              2/2     Running   1          6m7s
my-cluster-name-rs1-0                              2/2     Running   2          7m4s
my-cluster-name-rs1-1                              2/2     Running   2          6m36s
my-cluster-name-rs1-2                              2/2     Running   2          6m9s
my-cluster-name-rs2-0                              2/2     Running   2          7m2s
my-cluster-name-rs2-1                              2/2     Running   2          6m36s
my-cluster-name-rs2-2                              2/2     Running   1          6m9s
percona-server-mongodb-operator-6f866d7857-5bkxc   1/1     Running   0          7m43s

$ k get psmdb my-cluster-name -oyaml
status:
  host: my-cluster-name-mongos.psmdb-test.svc.cluster.local
  mongoImage: percona/percona-server-mongodb:4.4.3-5
  mongoVersion: 4.4.3-5
  mongos:
    ready: 0
    size: 3
    status: initializing
  observedGeneration: 2
  pmmVersion: 2.12.0
  replsets:
    cfg:
      initialized: true
      ready: 3
      size: 3
      status: ready
    rs0:
      initialized: true
      ready: 3
      size: 3
      status: ready
    rs1:
      initialized: true
      ready: 3
      size: 3
      status: ready
    rs2:
      initialized: true
      ready: 3
      size: 3
      status: ready
  state: initializing
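One way to confirm the mismatch is to compare the mongos probe definition against what is actually mounted. These commands are illustrative (they assume the namespace and pod names shown above and were not part of the original report):

    # Show the readiness probe command of a mongos pod; with no certificates
    # present, its arguments presumably still reference /etc/mongodb-ssl.
    kubectl -n psmdb-test get pod my-cluster-name-mongos-64df598464-h7jtp \
      -o jsonpath='{.spec.containers[0].readinessProbe.exec.command}'

    # Check whether the TLS volume actually contains ca.crt.
    kubectl -n psmdb-test exec my-cluster-name-mongos-64df598464-h7jtp -- \
      ls -la /etc/mongodb-ssl/

If the probe command lists TLS flags while the directory is empty, the probe can never succeed regardless of mongos health.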

Environment

None

Details

Needs Review: Yes
Needs QA: Yes
Time tracking: 1d 2h logged

Created: February 18, 2021 at 2:50 PM
Updated: March 5, 2024 at 4:57 PM
Resolved: March 8, 2021 at 4:49 PM
