PXC node evicted when a user without the SUPER privilege creates a function while binary logging is enabled

Description

A PXC node gets evicted when a user without the SUPER privilege tries to create a function and receives error code 1419. I am managing a 3-node cluster deployed by the Percona Operator on Kubernetes.

To reproduce the issue:
1. Create a 3-node cluster with the Percona Operator. My YAML file is attached.
2. Connect to the MySQL instance and create a test database (tb1).
3. Create a regular user (test_user) and grant it all privileges on tb1.*.
4. Reconnect to MySQL as test_user and create a function:
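
Steps 2–3 correspond roughly to the following sketch (the host part and password are illustrative placeholders, not taken from the report):

```sql
-- Run as root; '%' host and the password are placeholders.
CREATE DATABASE tb1;
CREATE USER 'test_user'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON tb1.* TO 'test_user'@'%';
-- Note: no SUPER privilege is granted, so with binary logging enabled and
-- log_bin_trust_function_creators=0, CREATE FUNCTION fails with ERROR 1419.
```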

use tb1;

delimiter $$
create function dna_isleapyear(ts datetime)
returns bit
deterministic
begin
    declare y int;
    set y = year(ts);
    return (case when y%400=0 then 1 when y%100=0 then 0 when y%4=0 then 1 else 0 end);
end$$
delimiter ;

Then the first node leaves the cluster. From its error log:

{"log":"2024-01-11T12:12:12.056366Z 0 [Note] [MY-000000] [Galera] Member 1(test-pxc-1) responds to vote on 4a52ea14-b079-11ee-8f5a-0ffd89748842:44,0000000000000000: Success\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.056406Z 0 [Note] [MY-000000] [Galera] Votes over 4a52ea14-b079-11ee-8f5a-0ffd89748842:44:\n 0000000000000000: 2/3\n cd85036667697b73: 1/3\nWinner: 0000000000000000\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.056612Z 1399 [ERROR] [MY-000000] [Galera] Inconsistency detected: Inconsistent by consensus on 4a52ea14-b079-11ee-8f5a-0ffd89748842:44\n\t at /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.34/percona-xtradb-cluster-galera/galera/src/replicator_smm.cpp:process_apply_error():1454\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.058824Z 1399 [Note] [MY-000000] [Galera] Closing send monitor...\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.058906Z 1399 [Note] [MY-000000] [Galera] Closed send monitor.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.058930Z 1399 [Note] [MY-000000] [Galera] gcomm: terminating thread\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.058965Z 1399 [Note] [MY-000000] [Galera] gcomm: joining thread\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.059225Z 1399 [Note] [MY-000000] [Galera] gcomm: closing backend\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.060315Z 1399 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node\nview (view_id(NON_PRIM,5f4698c9-bee1,3)\nmemb {\n\t5f4698c9-bee1,0\n\t}\njoined {\n\t}\nleft {\n\t}\npartitioned {\n\t8d6ad044-a757,0\n\tbbd9b077-9abe,0\n\t}\n)\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.060431Z 1399 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.060463Z 1399 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node\nview ((empty))\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.061732Z 1399 [Note] [MY-000000] [Galera] gcomm: closed\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.061856Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.061956Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [100, 100]\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.061974Z 0 [Note] [MY-000000] [Galera] Received NON-PRIMARY.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.061987Z 0 [Note] [MY-000000] [Galera] Shifting SYNCED -> OPEN (TO: 44)\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062021Z 0 [Note] [MY-000000] [Galera] New SELF-LEAVE.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062080Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [0, 0]\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062102Z 0 [Note] [MY-000000] [Galera] Received SELF-LEAVE. Closing connection.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062114Z 0 [Note] [MY-000000] [Galera] Shifting OPEN -> CLOSED (TO: 44)\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062148Z 0 [Note] [MY-000000] [Galera] RECV thread exiting 0: Success\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062182Z 10 [Note] [MY-000000] [Galera] ================================================\nView:\n id: 4a52ea14-b079-11ee-8f5a-0ffd89748842:44\n status: non-primary\n protocol_version: 4\n capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO\n final: no\n own_index: 0\n members(1):\n\t0: 5f4698c9-b079-11ee-bee1-9236cf144bcd, test-pxc-0\n=================================================\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062244Z 10 [Note] [MY-000000] [Galera] Non-primary view\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062268Z 10 [Note] [MY-000000] [WSREP] Server status change synced -> connected\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062287Z 10 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062310Z 10 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062424Z 1399 [Note] [MY-000000] [Galera] recv_thread() joined.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062454Z 1399 [Note] [MY-000000] [Galera] Closing replication queue.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062471Z 1399 [Note] [MY-000000] [Galera] Closing slave action queue.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062533Z 10 [Note] [MY-000000] [Galera] ================================================\nView:\n id: 4a52ea14-b079-11ee-8f5a-0ffd89748842:44\n status: non-primary\n protocol_version: 4\n capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO\n final: yes\n own_index: -1\n members(0):\n=================================================\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062555Z 10 [Note] [MY-000000] [Galera] Non-primary view\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062571Z 10 [Note] [MY-000000] [WSREP] Server status change connected -> disconnected\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062585Z 10 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062604Z 10 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.062637Z 10 [Note] [MY-000000] [Galera] Waiting 600 seconds for 2 receivers to finish\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.072258Z 2 [Note] [MY-000000] [Galera] Slave thread exit. Return code: 6\n","file":"/var/lib/mysql/mysqld-error.log"}

{"log":"2024-01-11T12:12:12.072347Z 2 [Note] [MY-000000] [WSREP] Applier thread exiting ret: 6 thd: 2\n","file":"/var/lib/mysql/mysqld-error.log"}

Environment

None

Activity


Kamil Holubicki January 19, 2024 at 3:04 PM

The issue will be fixed in 8.0.36.
Setting log_bin_trust_function_creators is the proper workaround. Note that log_bin_trust_function_creators is deprecated and will be removed in a future release.


Note that the correct behavior with log_bin_trust_function_creators=0 is that the query fails and the function is not created on any node.
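
As a sketch, the workaround can be applied at runtime or in the server configuration (which of the two fits best depends on the deployment; under the operator it would go into the cluster's MySQL configuration section):

```sql
-- Allow non-SUPER users to create stored functions while binary logging is on.
-- Deprecated in MySQL 8.0 and scheduled for removal in a future release.
SET GLOBAL log_bin_trust_function_creators = 1;

-- Equivalent my.cnf setting under [mysqld]:
-- log_bin_trust_function_creators = 1
```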

Song Yang January 19, 2024 at 1:19 PM

Sorry for the messy issue description. I have no permission to edit my own issue, so I am continuing here in this comment.


After the evicted node rejoined the cluster, and only after it was recovered via the xtrabackup procedure, the function that had previously failed to be created showed up in the function list, even though an error had been returned at creation time.

This is the cluster yaml:


My temporary workaround is setting log_bin_trust_function_creators = 1.

PS: I just upgraded my cluster to percona/percona-xtradb-cluster:8.0.35-27.1 and the issue is still there.

Done

Details


Needs Review: Yes

Needs QA: Yes


Created January 11, 2024 at 12:44 PM
Updated June 5, 2024 at 10:23 AM
Resolved April 3, 2024 at 8:03 PM