
Ubuntu Install ProxySQL

We are going to install ProxySQL, which is a SQL proxy and load balancer.

In this guide we are focusing on using ProxySQL as a load balancer in front of a Galera cluster. We are also going to enable the ProxySQL web interface. Be warned though: the web interface is for statistics only and cannot be used to configure ProxySQL.

Prerequisites

This guide assumes that a Galera Cluster is already present and configured. If you need a guide on how to install and configure a Galera Cluster, there is one [Here]

Prepare The Repository

First we need to make the repositories available.

wget -nv -O /etc/apt/trusted.gpg.d/proxysql-2.6.x-keyring.gpg 'https://repo.proxysql.com/ProxySQL/proxysql-2.6.x/repo_pub_key.gpg'
echo deb https://repo.proxysql.com/ProxySQL/proxysql-2.6.x/$(lsb_release -sc)/ ./ | tee /etc/apt/sources.list.d/proxysql.list
apt update

Install ProxySQL

Run the following using sudo or as root.

apt install proxysql

Make sure ProxySQL is running and enabled at boot.

systemctl start proxysql
systemctl enable proxysql
systemctl status proxysql

In order to communicate with ProxySQL we need to install a SQL client. As mentioned before, ProxySQL has no configuration interface and is configured solely through SQL statements.

apt install mariadb-client

Now we can connect to ProxySQL's admin interface using the default credentials (admin:admin on port 6032; client traffic uses port 6033 by default).

mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin> '

Changing Login Credentials

First let's look at the current values.

SELECT * FROM global_variables WHERE variable_name = 'admin-admin_credentials';
+-------------------------+----------------+
| variable_name           | variable_value |
+-------------------------+----------------+
| admin-admin_credentials | admin:admin    |
+-------------------------+----------------+

Let's change the password for the admin user.

UPDATE global_variables SET variable_value='admin:NewAdminPassWord' WHERE variable_name = 'admin-admin_credentials';

Verify the new settings.

SELECT * FROM global_variables WHERE variable_name = 'admin-admin_credentials';
+-------------------------+------------------------+
| variable_name           | variable_value         |
+-------------------------+------------------------+
| admin-admin_credentials | admin:NewAdminPassWord |
+-------------------------+------------------------+

Load the above settings from memory into the runtime configuration, and then persist them to disk.

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;

About Saving Configurations

This may seem confusing at first, but when it comes to configuration ProxySQL behaves much like a network switch or router: the changes you make are stored in memory first. This means that if the server reboots, or ProxySQL is stopped while changes exist only in memory, those changes are gone once ProxySQL or the server is running again.

Moreover, the term “SAVE TO DISK” is misleading, to say the least. “SAVE TO DISK” actually saves the current variables to a database used by ProxySQL. You can see ProxySQL's database layout here.

show databases;
+-----+---------------+-------------------------------------+
| seq | name          | file                                |
+-----+---------------+-------------------------------------+
| 0   | main          |                                     |
| 2   | disk          | /var/lib/proxysql/proxysql.db       |
| 3   | stats         |                                     |
| 4   | monitor       |                                     |
| 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
+-----+---------------+-------------------------------------+

So in order to make configuration changes both active and persistent, you need to load the changes from memory into the runtime configuration, and also save them to the on-disk database.
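The same memory → runtime → disk flow applies to every configuration class (ADMIN VARIABLES, MYSQL VARIABLES, MYSQL SERVERS, MYSQL USERS). As a generic sketch, using the real mysql-max_connections variable as an arbitrary example:

```sql
-- 1. Edit the in-memory configuration
UPDATE global_variables SET variable_value='2048' WHERE variable_name='mysql-max_connections';
-- 2. Activate it (memory -> runtime)
LOAD MYSQL VARIABLES TO RUNTIME;
-- 3. Persist it (memory -> disk, i.e. /var/lib/proxysql/proxysql.db)
SAVE MYSQL VARIABLES TO DISK;
```

For servers or users, substitute the matching LOAD/SAVE statement (e.g. LOAD MYSQL SERVERS TO RUNTIME).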

Test Login Credentials

Log out of ProxySQL and log in again with the new password.

mysql -u admin -pNewAdminPassWord -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin> '

Creating Hostgroups

Now let's create some hostgroups. Note: this guide assumes we are creating hostgroups for a Galera Cluster. If you are using another multi-master cluster (e.g. MySQL Group Replication), see this Link

How hostgroup definitions are created and used can be seen by executing the following statement in ProxySQL.

SHOW CREATE TABLE mysql_galera_hostgroups\G
table: mysql_galera_hostgroups
Create Table: CREATE TABLE mysql_galera_hostgroups (
writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1,2)) NOT NULL DEFAULT 0,
max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
comment VARCHAR,
UNIQUE (reader_hostgroup),
UNIQUE (offline_hostgroup),
UNIQUE (backup_writer_hostgroup))

Explanation:

writer_hostgroup (default id=2): the hostgroup that will contain all members that are writers.
backup_writer_hostgroup (default id=4): if the cluster is running in multi-primary mode there can be multiple writers (read_only=0); if their number exceeds max_writers, the surplus nodes are placed in this backup writer group.
reader_hostgroup (default id=3): the hostgroup that will contain all members that are readers; nodes with read_only=1 are assigned to this hostgroup.
offline_hostgroup (default id=1): when ProxySQL's monitoring determines a host is offline, it is moved into this hostgroup.
active (boolean, 0 or 1): when enabled (1), ProxySQL monitors the hostgroups and moves servers into the appropriate hostgroup accordingly.
max_writers (numeric): limits the number of nodes in the writer hostgroup; nodes in excess of this value are put into the backup_writer_hostgroup.
writer_is_also_reader (0, 1 or 2): when set to 1, a node in the writer hostgroup also belongs to the reader hostgroup.
max_transactions_behind (numeric): the maximum number of write-sets a node may fall behind the cluster before ProxySQL shuns it to prevent stale reads (determined by querying the wsrep_local_recv_queue Galera variable). In this guide we use 100.

In this example we will use ProxySQL with a 5-node Galera Cluster, with the nodes configured as shown below.

172.26.3.80 is writer / reader
172.26.3.84 is writer / reader
172.26.3.85 is writer / reader
172.26.3.98 is writer / reader
172.26.3.99 is writer / reader

writer_hostgroup id 2
backup_writer_hostgroup id 4
reader_hostgroup id 3
offline_hostgroup id 1
active 1 (Automatically move nodes between hostgroups 1=enabled 0=disabled)
max_writers 5 (We will use all nodes as writers as well)
writer_is_also_reader 1 (1=enabled 0=disabled)
max_transactions_behind 100

Let's put this into effect so that ProxySQL knows how to distribute nodes across hostgroups. I've added the column names and values to the statement below for easier understanding; it is not a real SQL statement and should not be used, it only clarifies how this works. The real SQL statement follows beneath it.

mysql_galera_hostgroups writer_hostgroup 2 , backup_writer_hostgroup 4 , reader_hostgroup 3 , offline_hostgroup 1 , active 1 , max_writers 5 , writer_is_also_reader 1 , max_transactions_behind 100 VALUES 2,4,3,1,1,5,1,100

And now let's make the real changes.

INSERT INTO mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) VALUES (2,4,3,1,1,5,1,100);

Verify the settings.

SELECT * FROM mysql_galera_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 2                | 4                       | 3                | 1                 | 1      | 5           | 1                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+

Adding Servers To The Pool

Note: The weight at the end of each statement is a mandatory field. The node with the lowest weight will get reconfigured into the backup_writer_hostgroup after loading the configuration into runtime. By assigning the same value to all nodes we make sure that none of them will be put in the backup_writer_hostgroup.

INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.26.3.80',3306,100);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.26.3.84',3306,100);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.26.3.85',3306,100);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.26.3.98',3306,100);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.26.3.99',3306,100);

Verify the settings.

select hostgroup_id,hostname,port,status,weight,max_connections from mysql_servers;
+--------------+-------------+------+--------+--------+-----------------+
| hostgroup_id | hostname    | port | status | weight | max_connections |
+--------------+-------------+------+--------+--------+-----------------+
| 2            | 172.26.3.80 | 3306 | ONLINE | 100    | 1000            |
| 2            | 172.26.3.84 | 3306 | ONLINE | 100    | 1000            |
| 2            | 172.26.3.85 | 3306 | ONLINE | 100    | 1000            |
| 2            | 172.26.3.98 | 3306 | ONLINE | 100    | 1000            |
| 2            | 172.26.3.99 | 3306 | ONLINE | 100    | 1000            |
+--------------+-------------+------+--------+--------+-----------------+

Once done we need to save the changes. Note: the statements are different this time, as it is “servers” we are loading and saving.

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
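Once loaded into runtime, ProxySQL's automatic hostgroup management (driven by the mysql_galera_hostgroups row created earlier) starts redistributing the nodes across the writer/reader hostgroups. You can watch the result in the runtime table:

```sql
-- Runtime view: shows where ProxySQL has actually placed each node
SELECT hostgroup_id, hostname, port, status
FROM runtime_mysql_servers
ORDER BY hostgroup_id, hostname;
```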

Configure Monitoring

As of now we are basically done, but if you were to stop the MariaDB service on one of the nodes, not much would happen: at the moment ProxySQL can only monitor the state of the physical server, not the state of MariaDB or the Galera Cluster. Therefore we are going to configure monitoring so that ProxySQL knows about the status of the databases.

Log into one of the Galera nodes and create a monitoring user. Since the cluster is already running, this only has to be done on one node.

CREATE USER 'monitor'@'%' IDENTIFIED BY 'MonitorPassword';
GRANT SELECT ON sys.* TO 'monitor'@'%';
GRANT SELECT ON performance_schema.* TO 'monitor'@'%';
GRANT USAGE, REPLICATION CLIENT ON *.* TO 'monitor'@'%';
FLUSH PRIVILEGES;
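Still on the Galera node, you can sanity-check the new account before moving on:

```sql
-- Should list the SELECT, USAGE and REPLICATION CLIENT grants created above
SHOW GRANTS FOR 'monitor'@'%';
```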

On ProxySQL

UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='MonitorPassword' WHERE variable_name='mysql-monitor_password';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

Try stopping the MariaDB service on one of the nodes, then check on ProxySQL whether it gets “SHUNNED”, like this.

SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;
+--------------+-------------+--------------+
| hostgroup_id | hostname    | status       |
+--------------+-------------+--------------+
| 1            | 172.26.3.80 | SHUNNED      |
| 1            | 172.26.3.85 | ONLINE       |
| 1            | 172.26.3.99 | ONLINE       |
| 2            | 172.26.3.84 | ONLINE       |
| 2            | 172.26.3.85 | ONLINE       |
| 2            | 172.26.3.98 | ONLINE       |
| 2            | 172.26.3.99 | ONLINE       |
| 3            | 172.26.3.84 | ONLINE       |
| 3            | 172.26.3.85 | ONLINE       |
| 3            | 172.26.3.98 | ONLINE       |
| 3            | 172.26.3.99 | ONLINE       |
+--------------+-------------+--------------+

Change Monitor Update Intervals

The default monitor values can be shown using the following statement.

SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';
+----------------------------------------------------------------------+-----------------+
| variable_name                                                        | variable_value  |
+----------------------------------------------------------------------+-----------------+
| mysql-monitor_enabled                                                | true            |
| mysql-monitor_connect_timeout                                        | 600             |
| mysql-monitor_ping_max_failures                                      | 3               |
| mysql-monitor_ping_timeout                                           | 1000            |
| mysql-monitor_read_only_max_timeout_count                            | 3               |
| mysql-monitor_replication_lag_group_by_host                          | false           |
| mysql-monitor_replication_lag_interval                               | 10000           |
| mysql-monitor_replication_lag_timeout                                | 1000            |
| mysql-monitor_replication_lag_count                                  | 1               |
| mysql-monitor_groupreplication_healthcheck_interval                  | 5000            |
| mysql-monitor_groupreplication_healthcheck_timeout                   | 800             |
| mysql-monitor_groupreplication_healthcheck_max_timeout_count         | 3               |
| mysql-monitor_groupreplication_max_transactions_behind_count         | 3               |
| mysql-monitor_groupreplication_max_transactions_behind_for_read_only | 1               |
| mysql-monitor_galera_healthcheck_interval                            | 5000            |
| mysql-monitor_galera_healthcheck_timeout                             | 800             |
| mysql-monitor_galera_healthcheck_max_timeout_count                   | 3               |
| mysql-monitor_replication_lag_use_percona_heartbeat                  |                 |
| mysql-monitor_query_interval                                         | 60000           |
| mysql-monitor_query_timeout                                          | 100             |
| mysql-monitor_slave_lag_when_null                                    | 60              |
| mysql-monitor_threads_min                                            | 8               |
| mysql-monitor_threads_max                                            | 128             |
| mysql-monitor_threads_queue_maxsize                                  | 128             |
| mysql-monitor_local_dns_cache_ttl                                    | 300000          |
| mysql-monitor_local_dns_cache_refresh_interval                       | 60000           |
| mysql-monitor_local_dns_resolver_queue_maxsize                       | 128             |
| mysql-monitor_wait_timeout                                           | true            |
| mysql-monitor_writer_is_also_reader                                  | true            |
| mysql-monitor_username                                               | monitor         |
| mysql-monitor_password                                               | monitorpassword |
| mysql-monitor_history                                                | 600000          |
| mysql-monitor_connect_interval                                       | 60000           |
| mysql-monitor_ping_interval                                          | 10000           |
| mysql-monitor_read_only_interval                                     | 1500            |
| mysql-monitor_read_only_timeout                                      | 500             |
+----------------------------------------------------------------------+-----------------+

Default connect interval is 1 minute (60000 ms)
Default ping interval is 10 seconds (10000 ms)
Default read_only check interval is 1.5 seconds (1500 ms)

You can set them individually like this.

UPDATE global_variables SET variable_value='60000' WHERE variable_name='mysql-monitor_connect_interval';
UPDATE global_variables SET variable_value='10000' WHERE variable_name='mysql-monitor_ping_interval';
UPDATE global_variables SET variable_value='1500' WHERE variable_name='mysql-monitor_read_only_interval';

Or you can set all values at once to two seconds like this.

UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');

Be sure to save the changes.

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
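To confirm which values are actually active (as opposed to merely edited in memory), query the runtime table; all of these intervals are in milliseconds:

```sql
SELECT variable_name, variable_value
FROM runtime_global_variables
WHERE variable_name IN ('mysql-monitor_connect_interval',
                        'mysql-monitor_ping_interval',
                        'mysql-monitor_read_only_interval');
```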

Load Balance Test

ProxySQL needs the same usernames and passwords as the backend servers in order to forward requests to them (the privileges themselves stay on the backends). So in order to test this we have two options.

  1. Use an existing application that is actually running and use this for testing.
  2. Create a new account for testing and delete it after the test is done.

In this guide we will create a new privileged user in order to test whether the setup is working. This is not good practice; best practice would be to test with a real account that needs to use ProxySQL.

On the cluster create a privileged user like this.

CREATE USER 'testuser'@'%' IDENTIFIED BY 'somepassword';
GRANT ALL PRIVILEGES ON *.* TO 'testuser'@'%';

Verify the account has been created.

SELECT User FROM mysql.user;
+-------------+
| User        |
+-------------+
| monitor     |
| testuser    |
+-------------+

On ProxySQL we need to create the same account with the same credentials. Note that we don't need to apply any “GRANT” privileges here.

INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('testuser', 'somepassword', 2);

Verify the account has been created.

SELECT * FROM mysql_users;
+----------+--------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| username | password     | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment |
+----------+--------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| testuser | somepassword | 1      | 0       | 2                 | NULL           | 0             | 1                      | 0            | 1       | 1        | 10000           |            |         |
+----------+--------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+

Save the settings.

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
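If you want to double-check that the account reached the runtime configuration (ProxySQL typically materialises it as separate backend and frontend entries):

```sql
SELECT username, active, default_hostgroup, backend, frontend
FROM runtime_mysql_users
WHERE username='testuser';
```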

Let's test if everything is working. On the ProxySQL host, use the following command from the command line to see if we can connect to the backend servers and verify that the connections are being load balanced.

mysql -u testuser -psomepassword -h 127.0.0.1 -P6033 -e "SELECT @@hostname;"

If you repeat this query several times you will see that ProxySQL is actually distributing the load.
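Instead of eyeballing repeated queries, you can also ask ProxySQL's stats interface (on the admin port, 6032) how many queries each backend has served so far:

```sql
SELECT hostgroup, srv_host, srv_port, Queries
FROM stats.stats_mysql_connection_pool
ORDER BY Queries DESC;
```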

+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node3.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node4.local |
+-------------+

Note: the output above may vary, as the node names in this guide probably do not match the names you have given yours.

Change Listening Port

Configure the interface to listen on port 3306 instead of 6033. Note: this requires a restart of the ProxySQL service. Also note that LOAD MYSQL VARIABLES TO RUNTIME should NOT be executed in this case.

Let's check the current values.

select variable_name,variable_value from global_variables where variable_name='mysql-interfaces';
+------------------+----------------+
| variable_name    | variable_value |
+------------------+----------------+
| mysql-interfaces | 0.0.0.0:6033   |
+------------------+----------------+

Let's change the listening port to 3306.

UPDATE global_variables SET variable_value='0.0.0.0:3306' WHERE variable_name='mysql-interfaces';

Verify the settings.

select variable_name,variable_value from global_variables where variable_name='mysql-interfaces';
+------------------+----------------+
| variable_name    | variable_value |
+------------------+----------------+
| mysql-interfaces | 0.0.0.0:3306   |
+------------------+----------------+

Now save the settings and restart ProxySQL.

SAVE MYSQL VARIABLES TO DISK;
PROXYSQL RESTART;
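After the restart you can verify from the shell that ProxySQL is now listening on port 3306. This assumes the ss tool from iproute2, which is present by default on Ubuntu; root is needed for the process names to show up:

```shell
sudo ss -ltnp | grep proxysql
```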

Let's test if everything is working. On the ProxySQL host, use the following command from the command line to see if we can connect to the backend servers and verify that the connections are being load balanced.

mysql -u testuser -psomepassword -h 127.0.0.1 -e "SELECT @@hostname;"

Note: as you can see, this time we can omit the port number (i.e. “-P6033”) as we are now using the default port 3306.

Again, as before, if you repeat this query several times you will see that ProxySQL is actually distributing the load.

+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node3.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node1.local |
+-------------+
+-------------+
| @@hostname  |
+-------------+
| node4.local |
+-------------+

Note: the output above may vary, as the node names in this guide probably do not match the names you have given yours.

Enabling The Web Interface

ProxySQL ships with the web interface disabled and configured for port 6080, as we can see here.

Note: the web interface appears to be very buggy, making ProxySQL core dump, at least on RedHat. Use it at your own risk.

SELECT * FROM global_variables WHERE variable_name like "admin-web_%";
+---------------------+----------------+
| variable_name       | variable_value |
+---------------------+----------------+
| admin-web_enabled   | false          |
| admin-web_port      | 6080           |
| admin-web_verbosity | 0              |
+---------------------+----------------+

Let's enable it and change the port to 8080.

SET admin-web_enabled='true';
UPDATE global_variables SET variable_value='8080' WHERE variable_name='admin-web_port';

And verify the settings.

SELECT * FROM global_variables WHERE variable_name like "admin-web_%";
+---------------------+----------------+
| variable_name       | variable_value |
+---------------------+----------------+
| admin-web_enabled   | true           |
| admin-web_port      | 8080           |
| admin-web_verbosity | 0              |
+---------------------+----------------+

Load the settings into runtime and save them (as with the other variable classes, loading into runtime is what actually activates them).

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
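Once the new admin variables are active at runtime (or after a restart of ProxySQL), the interface should answer on port 8080. ProxySQL serves the web UI over HTTPS with a self-signed certificate, hence the -k flag to make curl accept it:

```shell
curl -k https://127.0.0.1:8080/
```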

Useful ProxySQL Queries

Check whether ProxySQL can get data from a database node, i.e. whether MariaDB is running.

SELECT hostname, hostgroup_id, status FROM runtime_mysql_servers;

Show the most recent entries from the ping log of the backend servers.

SELECT hostname, port, ping_success_time_us, ping_error FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 10;

Show the current monitor values.

SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';
linux/ubuntu_proxysql_install.txt · Last modified: 03/10/2024 13:00 by Allan