It looks like the Percona Telemetry Agent is not closing its connections, causing MySQL error 1040 (SQLSTATE 08004): Too many connections after some time.
Operating system: Debian 12
Database: Percona Server 8.0.37 for Linux
Configuration: Access for the root user on localhost is disabled. The only way for the root user to log in to the database is by specifying 127.0.0.1 as the host, like this: mysql -h 127.0.0.1 -u root -p. We don't use the root user anywhere in our application.
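As a sanity check that only the 127.0.0.1 account exists, something like this works (a minimal sketch, assuming the same host and credentials as above):

# list all root accounts; we expect only root@127.0.0.1 here
mysql -h 127.0.0.1 -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'root';"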
Searching the syslog, we found the following entries, logged by the percona_telemetry component once a day:
/var/log/syslog.3.gz:Jan 20 20:15:00 mysqld[1694]: 2025-01-20T20:15:00.098778Z 0 [Warning] [MY-011071] [Server] Component percona_telemetry reported: 'Collecting db_instance_id failed. It may be caused by server still initializing.'
/var/log/syslog.4.gz:Jan 19 20:15:00 mysqld[1694]: 2025-01-19T20:15:00.098412Z 0 [Warning] [MY-011071] [Server] Component percona_telemetry reported: 'Collecting db_instance_id failed. It may be caused by server still initializing.'
/var/log/syslog.5.gz:Jan 18 20:15:00 mysqld[1694]: 2025-01-18T20:15:00.098010Z 0 [Warning] [MY-011071] [Server] Component percona_telemetry reported: 'Collecting db_instance_id failed. It may be caused by server still initializing.'
/var/log/syslog.6.gz:Jan 17 20:15:00 mysqld[1694]: 2025-01-17T20:15:00.097674Z 0 [Warning] [MY-011071] [Server] Component percona_telemetry reported: 'Collecting db_instance_id failed. It may be caused by server still initializing.'
/var/log/syslog.7.gz:Jan 16 20:15:00 mysqld[1694]: 2025-01-16T20:15:00.097363Z 0 [Warning] [MY-011071] [Server] Component percona_telemetry reported: 'Collecting db_instance_id failed. It may be caused by server still initializing.'
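For reference, the entries above can be pulled out of the rotated logs with something like this (assuming Debian's default syslog rotation):

# grep the compressed syslog rotations for telemetry component warnings
zgrep percona_telemetry /var/log/syslog.*.gz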
When using systemctl status mysql, the Percona service is active (running) with status "Server is operational". After some time (a couple of weeks/months), the MySQL processlist fills up with processes in state "login" and info "PLUGIN"; see the output below and the attached screenshot.
It looks like the percona_telemetry_agent opens a new connection once a day (the difference between the values in the Time column of the processlist is exactly 86400 seconds) but does not close it properly.
With the default setting for max_connections (151), after 151 days the processlist is completely full and new connections are refused. The only solution seems to be restarting the MySQL server, but after some time the same behaviour occurs again.
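To see how close the server is to the limit, something like this works (a sketch; any account with the PROCESS privilege will do):

# list the stuck threads and compare the connection count against the limit
mysql -h 127.0.0.1 -u root -p -e "
  SELECT id, user, time, state, info
    FROM information_schema.processlist
   WHERE state = 'login';
  SHOW GLOBAL STATUS LIKE 'Threads_connected';
  SHOW GLOBAL VARIABLES LIKE 'max_connections';"

In principle, KILL <id> on each stuck thread might free the slots without a full restart, but we haven't verified that this works for these PLUGIN connections.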
Output from /var/log/percona/telemetry-agent/telemetry-agent.log:
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:279","msg":"start metrics processing iteration"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:281","msg":"cleaning up history metric files","directory":"/usr/local/percona/telemetry/history"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:287","msg":"processing Pillars metrics files"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:92","msg":"processing PS metrics","directory":"/usr/local/percona/telemetry/ps"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"metrics/metrics.go:59","msg":"pillar metric directory is empty, skipping","directory":"/usr/local/percona/telemetry/ps"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:99","msg":"processing PXC metrics","directory":"/usr/local/percona/telemetry/pxc"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/pxc"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:106","msg":"processing PSMDB (mongod) metrics","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:113","msg":"processing PSMDB (mongos) metrics","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:120","msg":"processing PG metrics","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:135","msg":"no Pillar metrics files found, skip scraping host metrics and sending telemetry"}
{"level":"info","ts":"2025-01-19T20:12:59.621Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:289","msg":"sleep for 86400 seconds"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:279","msg":"start metrics processing iteration"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:281","msg":"cleaning up history metric files","directory":"/usr/local/percona/telemetry/history"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:287","msg":"processing Pillars metrics files"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:92","msg":"processing PS metrics","directory":"/usr/local/percona/telemetry/ps"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"metrics/metrics.go:59","msg":"pillar metric directory is empty, skipping","directory":"/usr/local/percona/telemetry/ps"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:99","msg":"processing PXC metrics","directory":"/usr/local/percona/telemetry/pxc"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/pxc"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:106","msg":"processing PSMDB (mongod) metrics","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:113","msg":"processing PSMDB (mongos) metrics","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:120","msg":"processing PG metrics","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:135","msg":"no Pillar metrics files found, skip scraping host metrics and sending telemetry"}
{"level":"info","ts":"2025-01-20T20:12:59.620Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:289","msg":"sleep for 86400 seconds"}gger":"telemetry-agent","caller":"telemetry-agent/main.go:106","msg":"processing PSMDB (mongod) metrics","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdb"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:113","msg":"processing PSMDB (mongos) metrics","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/psmdbs"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:120","msg":"processing PG metrics","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"metrics/metrics.go:49","msg":"pillar metric directory is absent, skipping","directory":"/usr/local/percona/telemetry/pg"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:135","msg":"no Pillar metrics files found, skip scraping host metrics and sending telemetry"}
{"level":"info","ts":"2025-01-22T13:47:09.830Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:289","msg":"sleep for 86400 seconds"}
{"level":"info","ts":"2025-01-23T10:51:03.954Z","logger":"telemetry-agent","caller":"utils/signal_runner.go:36","msg":"Received signal: terminated, shutdown"}
{"level":"info","ts":"2025-01-23T10:51:03.954Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:273","msg":"terminating main loop"}
{"level":"info","ts":"2025-01-23T10:51:03.954Z","logger":"telemetry-agent","caller":"telemetry-agent/main.go:298","msg":"finished"}
For now, we've disabled the telemetry agent using systemctl stop percona-telemetry-agent and systemctl disable percona-telemetry-agent to prevent it from filling up the processlist.
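To keep a future package upgrade from re-enabling the unit, it can additionally be masked (plain systemd, nothing Percona-specific):

# make the unit impossible to start until explicitly unmasked
systemctl mask percona-telemetry-agent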