To speed up setting up multiple DB nodes with one master and several slaves, without going through a lot of manual MySQL installation steps, we're going to use Puppet to install and configure MySQL master/slave replication in this blog.
This is just one example in my Puppet series of how we can use Puppet to handle multiple use cases. I also want to show how roles and profiles apply in practice, and how we use Hiera to change the configuration parameters of each DB server without modifying Puppet code.
If you have followed Mastering Puppet: Implementing Roles and Profiles Effectively in Reality, we created a standard Puppet project on GitLab at https://gitlab.com/binhdt2611/puppet-demo designed around the roles and profiles pattern. The production branch (the main branch) is rolled out to the /etc/puppetlabs/code/environments/ directory on the puppet-master server by r10k.
From the project root path (the path of the puppet-demo project), we could develop new code directly on the production branch, but to be safe, we create a branch named automate_deploy_mysql and start developing on it.
Looking at the puppet-demo project, we have already created the roles::database {} class at site/roles/manifests/database.pp, which is used by all DB-related servers. We have also defined Hiera data to tell Puppet which nodes should use this role class. Hiera searches for keys/values in YAML files under the data/ directory. For example, we already have a data/nodes/db-01.srv.local.yaml file that contains:
---
# Let Hiera know that db-01 should use 'roles::database'
server::role: 'roles::database'
The db-01 server (and the other DB servers we declare later) uses Hiera to resolve the server::role key to the roles::database class. The roles::database class in turn contains two other profile classes, defined in site/roles/manifests/database.pp:
- profiles::base – inherited from the roles::base class, to set up the common things that a server needs in the system.
- profiles::mysql – contains the MySQL module and other MySQL-related configurations for a DB server.
We only care about the profiles::mysql {} class in this post.
NOTE: If you don’t know how to set up Puppet Master/Agent and configure basic roles and profiles, please read Setup Puppet 8 on Ubuntu 24.04 – Configuration Management for a Scaling Enterprise and Mastering Puppet: Implementing Roles and Profiles Effectively in Reality to learn more. If you’re familiar with this setup and just want to have a quick look at how we use Hiera with roles and profiles, then you can skip those blogs.
Prerequisites
- 4 x Ubuntu 24.04 servers: 1 Puppet Master and 3 Puppet Agents (1 MySQL master and 2 slaves).
- We use the puppetlabs-mysql module to configure MySQL in profiles::mysql {}.
- We use the default MariaDB packages on Ubuntu 24.04 to set up master-slave replication.
| Hostname | IP address | Role |
| --- | --- | --- |
| puppet-master.srv.local | 192.168.68.117 | Puppet Server |
| db-01.srv.local | 192.168.68.113 | Puppet Agent (DB Master) |
| db-02.srv.local | 192.168.68.119 | Puppet Agent (DB Slave 1) |
| db-03.srv.local | 192.168.68.120 | Puppet Agent (DB Slave 2) |
- All the code written in this blog is part of my Puppet series. You can check out https://gitlab.com/binhdt2611/puppet-demo as a reference.
Adding puppetlabs-mysql module
Next, to use Puppet to configure MySQL, we need to add the puppetlabs-mysql module and its dependencies to the Puppetfile (under the project root path).
mod 'puppetlabs-stdlib', '9.7.0'
mod 'puppetlabs-mysql', '16.2.0'
Make sure these modules are added if they are not already.
Create a central profile class
We will create a central profile class that all DB-related servers apply. This profile class is in charge of installing and setting up the basics for a MySQL server. We will have a separate class for configuring the slaves.
In the site/profiles/manifests/mysql.pp file (under the project root path), we declare the profiles::mysql {} class with the following content:
# Class: profiles::mysql
#
# Install MySQL for a database server
#
class profiles::mysql (
  Boolean $is_master = true,
  Optional[Hash] $override_options = undef,
) {
  # Install MySQL server with the custom $override_options variable
  class { 'mysql::server':
    root_password           => 'strongpassword',
    remove_default_accounts => true,
    restart                 => true,
    override_options        => $override_options,
  }

  # $is_master defaults to "true" to install optional settings for a master
  # server. We disable it through Hiera for slave servers.
  if $is_master {
    # Create a MySQL admin user to export the DB remotely
    mysql_user { 'db_admin@192.168.68.%':
      ensure        => 'present',
      password_hash => mysql::password('PleaseChangeMe'),
    }
    -> mysql_grant { 'db_admin@192.168.68.%/*.*':
      ensure     => 'present',
      options    => ['GRANT'],
      privileges => ['ALL'],
      table      => '*.*',
      user       => 'db_admin@192.168.68.%',
    }
    # Create a MySQL user for replication
    -> mysql_user { 'repl_user@192.168.68.%':
      ensure        => 'present',
      password_hash => mysql::password('PleaseChangeMe'),
    }
    -> mysql_grant { 'repl_user@192.168.68.%/*.*':
      ensure     => 'present',
      privileges => ['REPLICATION SLAVE'],
      table      => '*.*',
      user       => 'repl_user@192.168.68.%',
    }
  }

  # Create a 'my-demo-db' database
  mysql::db { 'my-demo-db':
    user     => 'myuser',
    password => 'mypass',
    host     => 'localhost',
    grant    => ['SELECT', 'UPDATE'],
  }
}
The $override_options parameter defined in the class is where we change configurations through Hiera, which lets us reuse the same code for each DB server with its own settings. This keeps the data organized without modifying Puppet code directly, making the Puppet code less messy. See the Create hiera data for each DB node section.
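As a side note, the mysql::password() function used for password_hash above produces the classic mysql_native_password hash: a '*' followed by the uppercase hex SHA1 of the binary SHA1 of the password. A standalone shell equivalent (just an illustration, not something the module needs):

```shell
# Compute the same hash mysql::password() returns (mysql_native_password):
# '*' + uppercase hex of SHA1(SHA1(password)), with the inner SHA1 kept binary.
mysql_native_password() {
  local pass="$1" hash
  hash=$(printf '%s' "$pass" | openssl dgst -sha1 -binary | openssl dgst -sha1 | awk '{print $NF}')
  printf '*%s\n' "$hash" | tr '[:lower:]' '[:upper:]'
}

mysql_native_password 'password'   # prints *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```

This is handy when you want to pre-compute the hash yourself instead of putting a cleartext password in the manifest.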
Create a configure_replication class
Still under the project root path, create the site/profiles/manifests/mysql directory, go into it, and create a configure_replication.pp file with the following content:
# Class: profiles::mysql::configure_replication
#
# Configure MySQL replication
#
class profiles::mysql::configure_replication {
  $configure_repl_script = '/usr/local/bin/configure_replication.sh'

  file { $configure_repl_script:
    ensure => 'file',
    source => 'puppet:///modules/profiles/mysql/configure_replication.sh',
    owner  => 'root',
    group  => 'root',
    mode   => '0700',
    notify => Exec['configure_replication'],
  }

  # $facts['path'] resolves to "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/puppetlabs/bin"
  exec { 'configure_replication':
    command     => "${configure_repl_script} >> /var/log/configure-mysql-repl.log 2>&1",
    environment => ['HOME=/root', 'USER=root'],
    path        => $facts['path'],
    user        => 'root',
    unless      => ['test -f /tmp/.replication.done'],
  }
}
Create configure_replication bash script
Unfortunately, the puppetlabs-mysql module cannot configure MySQL replication completely for us; we still have to perform some custom steps to achieve the goal. Therefore, we will use a bash script to do that and have Puppet run the script for us.
If you look at the code in the class profiles::mysql::configure_replication {}, the setting below tells Puppet where to copy the script content from when it creates /usr/local/bin/configure_replication.sh on the target server:
source => 'puppet:///modules/profiles/mysql/configure_replication.sh',
We need to create the site/profiles/files/mysql path (Puppet serves puppet:/// file sources from a module's files/ directory), go to this directory, and create the configure_replication.sh file with this content:
#!/bin/bash
#
# Author: Binh
# Website: https://turndevopseasier.com/
# Puppet Series - Automate configure MySQL Master-Slave Replication
#

# Pre-defined variables
master_host="db-01.srv.local"
repl_user="repl_user"
repl_pass="PleaseChangeMe"
db_admin="db_admin"
db_pass="PleaseChangeMe"
REPL_STATE_FILE="/tmp/.replication.done"

echo "$(date) - Configuring Master-Slave replication..."
if ! [ -f "$REPL_STATE_FILE" ]; then
    mysql -e "STOP SLAVE;"
    master_status=$(mysql -h"$master_host" -u"$db_admin" -p"$db_pass" -ANe "SHOW MASTER STATUS;" | awk '{print $1 " " $2}') && \
    master_log_file=$(echo "$master_status" | cut -d" " -f1) && \
    master_log_pos=$(echo "$master_status" | cut -d" " -f2)
    mysql -e "RESET SLAVE;"
    # MASTER_LOG_POS must be a number, so it is not quoted
    mysql -e "CHANGE MASTER TO MASTER_HOST='$master_host', MASTER_PORT=3306, MASTER_USER='$repl_user', MASTER_PASSWORD='$repl_pass', MASTER_LOG_FILE='${master_log_file}', MASTER_LOG_POS=${master_log_pos};"
    mysql -e "START SLAVE;"
    sleep 5
    mysql -e "SHOW SLAVE STATUS\G" | grep "Waiting for master"
    slave_status="$?"
    if [ "$slave_status" -eq "0" ]; then
        touch "$REPL_STATE_FILE"
        echo "$(date) >>> DONE - $(hostname) was configured successfully as a slave for $master_host"
    else
        echo "$(date) >>> ERROR - Failed to configure slave"
        exit 1
    fi
else
    echo "$(date) >>> INFO - $(hostname) is configured as a slave already"
fi
The bash script runs on slave servers only, and it basically does the following:
- Query the master server to collect master_log_file and master_log_pos.
- Configure the current slave to form replication with the master based on the collected master information.
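The core of the script is parsing the output of SHOW MASTER STATUS and turning it into a CHANGE MASTER TO statement. That step can be sketched in isolation (a hypothetical helper, not part of the repo, reusing the script's variable names):

```shell
# Hypothetical standalone sketch of the script's parsing step: take the
# "File Position" pair printed by SHOW MASTER STATUS and build the
# CHANGE MASTER TO statement that the slave will execute.
build_change_master() {
  local master_status="$1" master_host="$2" repl_user="$3" repl_pass="$4"
  local log_file log_pos
  log_file=$(echo "$master_status" | cut -d" " -f1)
  log_pos=$(echo "$master_status" | cut -d" " -f2)
  # MASTER_LOG_POS is numeric and must not be quoted
  echo "CHANGE MASTER TO MASTER_HOST='${master_host}', MASTER_PORT=3306," \
       "MASTER_USER='${repl_user}', MASTER_PASSWORD='${repl_pass}'," \
       "MASTER_LOG_FILE='${log_file}', MASTER_LOG_POS=${log_pos};"
}

build_change_master "mysql-bin.000003 157" "db-01.srv.local" "repl_user" "secret"
```

Isolating the logic like this makes it easy to eyeball the generated statement before letting the script run it against a real slave.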
This is a basic setup for creating a cluster of one master and multiple slaves with a blank database. It might not be enough in a real case where your master already has data. In that situation you have to export the data from the master and import it into the slave server before configuring replication. There are many ways to export/import data; you can search for them and modify the script to fit your needs.
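When the master already holds data, a common approach is a dump that records the binlog coordinates. The sketch below is a hedged illustration (the host and user names are assumptions carried over from the script above); it only prints the command unless you set DRY_RUN=0:

```shell
#!/bin/bash
# Sketch: seed a fresh slave from a master that already has data.
# Assumption: db-01.srv.local / db_admin match the replication script above.
set -u
master_host="db-01.srv.local"
db_admin="db_admin"

run() {
  # Prints the command by default; set DRY_RUN=0 to actually execute it.
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# --master-data=2 writes the master's binlog file/position into the dump as a
# comment; --single-transaction takes a consistent InnoDB snapshot without
# locking the tables.
run mysqldump -h"$master_host" -u"$db_admin" -p --all-databases \
  --single-transaction --master-data=2
```

On the slave, you would import the dump with mysql < dump.sql, read the coordinates from the CHANGE MASTER TO comment near the top of the dump, and only then configure and start replication.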
Create hiera data for each DB node
In this section, we create Hiera data YAML files for each DB server so that each server uses different settings based on its role: the master server gets the master's settings, and the slave servers get the slaves' settings.
We design the data to be reusable: when DB servers share the same settings, we don't duplicate those configurations; we only override the custom settings that apply to a specific server.
We are going to create the data/mysql/ path, go to this directory, and create a default.yaml file. This file contains the default settings that all DB nodes need; add the content below:
---
# Default optional configs for all nodes if not specified elsewhere
lookup_options:
  profiles::mysql::override_options:
    merge: deep

profiles::mysql::override_options:
  mysqld:
    bind_address: '0.0.0.0'
    datadir: '/var/lib/mysql'
    log_error: '/var/log/mysql/error.log'
    pid_file: '/var/run/mysqld/mysqld.pid'
    server_id: '1'
    innodb_flush_log_at_trx_commit: '1'
    sync_binlog: '1'
    log_bin: 'mysql-bin'
    relay_log: 'mysql-relay-log'
  mysqld_safe:
    log_error: '/var/log/mysql/error.log'
The content above also covers the master server's configuration, so we don't need to add the same settings to the data/nodes/db-01.srv.local.yaml file; just leave it like this:
---
# Let Hiera know that db-01 should use 'roles::database'
server::role: 'roles::database'
The default content is also used by the other DB servers (that's what makes the code reusable). But our slave servers need different values for the server_id and innodb_flush_log_at_trx_commit parameters. Therefore, create a data/nodes/db-02.srv.local.yaml file with this content:
---
lookup_options:
  profiles::mysql::override_options:
    merge: deep

# Let Hiera know that db-02 should use 'roles::database'
server::role: 'roles::database'

# Optional configs for a slave node
profiles::mysql::is_master: false

# This inherits all configurations from data/mysql/default.yaml, but we
# override two parameters by defining them below in this db-02.srv.local.yaml file.
profiles::mysql::override_options:
  mysqld:
    server_id: '2'
    innodb_flush_log_at_trx_commit: '2'
Similar to db-02.srv.local, create a data/nodes/db-03.srv.local.yaml file with this content:
---
lookup_options:
  profiles::mysql::override_options:
    merge: deep

# Let Hiera know that db-03 should use 'roles::database'
server::role: 'roles::database'

# Optional configs for a slave node
profiles::mysql::is_master: false

# This inherits all configurations from data/mysql/default.yaml, but we
# override two parameters by defining them below in this db-03.srv.local.yaml file.
profiles::mysql::override_options:
  mysqld:
    server_id: '3'
    innodb_flush_log_at_trx_commit: '2'
After that, still under the project root path, we need to update hiera.yaml to instruct Puppet where to find the data/mysql/default.yaml file. In the hiera.yaml file, update the hierarchy, for example:
from:
---
version: 5
defaults:
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "nodes/%{::trusted.certname}.yaml"
  - name: "Other YAML hierarchy levels"
    paths:
      - "common.yaml"
to:
---
version: 5
defaults:
hierarchy:
  - name: "Per-node data (yaml version) overrides all config defined in the files below"
    path: "nodes/%{::trusted.certname}.yaml"
  # Other configs...
  - name: "Default MySQL-config data for all nodes"
    path: "mysql/default.yaml"
  # Other configs...
  - name: "Other YAML hierarchy levels"
    paths:
      - "common.yaml"
In the profiles::mysql {} class we defined above, we have two parameters: $is_master and $override_options. The $override_options parameter relies on a mechanism called automatic lookup of class parameters: if we define profiles::mysql::override_options in any Hiera data YAML file, its value becomes the value of the $override_options variable.
Here, I'm using the lookup_options feature with merge: deep. This takes all the values found for the profiles::mysql::override_options key across the hierarchy and merges them into a single hash, with the values defined in data/nodes/db-03.srv.local.yaml overriding the values defined in data/mysql/default.yaml.
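To make the merge concrete, this is the effective mysqld hash that profiles::mysql receives on db-02 after the deep merge (a hand-merged illustration, not a file you create):

```yaml
# Effective profiles::mysql::override_options on db-02 (defaults + overrides)
mysqld:
  bind_address: '0.0.0.0'
  datadir: '/var/lib/mysql'
  log_error: '/var/log/mysql/error.log'
  pid_file: '/var/run/mysqld/mysqld.pid'
  server_id: '2'                       # overridden by db-02.srv.local.yaml
  innodb_flush_log_at_trx_commit: '2'  # overridden by db-02.srv.local.yaml
  sync_binlog: '1'
  log_bin: 'mysql-bin'
  relay_log: 'mysql-relay-log'
mysqld_safe:
  log_error: '/var/log/mysql/error.log'
```

You can confirm the merged result on the Puppet server with puppet lookup profiles::mysql::override_options --node db-02.srv.local --environment automate_deploy_mysql.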
Configure role class to run replication configuration
Now we have to modify the roles::database class to instruct Puppet to apply the profiles::mysql::configure_replication class only when the parameter profiles::mysql::is_master is set to false in a DB node's Hiera data. The parameter defaults to true, which treats every DB server as a master; db-02 and db-03 have it set to false because they're configured as slaves.
# Class: roles::database
#
# Inherit configurations from roles::base and install configuration for a database server
#
class roles::database inherits roles::base {
  include profiles::mysql

  $is_master = lookup('profiles::mysql::is_master', { 'default_value' => true })

  # If this node is not a master, apply the replication class below.
  if !$is_master {
    include profiles::mysql::configure_replication
    # Ensure Puppet applies 'profiles::mysql' before 'profiles::mysql::configure_replication'
    Class['profiles::mysql'] -> Class['profiles::mysql::configure_replication']
  }
}
OK, all done. Now commit and push all your changes to the remote branch on GitLab. Then log in to your puppet-master server and run r10k to deploy your changes to the corresponding environments (for example, r10k deploy environment automate_deploy_mysql --modules on recent r10k versions). The automate_deploy_mysql branch then lands under /etc/puppetlabs/code/environments/.
Next, on the db-01.srv.local server, we run the puppet command to apply the configuration on this server first, turning it into the MySQL master server:
sudo /opt/puppetlabs/bin/puppet agent -t --environment=automate_deploy_mysql
Once db-01 is up and running, run the same puppet command on db-02.srv.local and db-03.srv.local to install and configure them as slaves.
Note: I'm specifying --environment=automate_deploy_mysql to test the code we wrote. Once we make sure it works, we should merge it into the production branch.
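After the agent runs on the slaves, you can verify that replication is healthy. The helper below is a hypothetical sketch: it reads SHOW SLAVE STATUS output on stdin and succeeds only when both replication threads report Yes:

```shell
# Hypothetical health check: pipe `sudo mysql -e "SHOW SLAVE STATUS\G"` into
# this function; it exits 0 only if both replication threads are running.
slave_healthy() {
  local status io sql
  status=$(cat)
  io=$(echo "$status" | awk -F': ' '/Slave_IO_Running:/ {print $2}')
  sql=$(echo "$status" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
  [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]
}

# Example with captured output (on a real slave, replace printf with mysql):
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | slave_healthy \
  && echo "replication healthy"
```

A check like this is also easy to drop into monitoring, since it communicates the result through its exit code.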
Testing
Log in to the db-01 server, access MySQL by running sudo mysql, then create a blog_content table and insert data into it with the commands below:
USE `my-demo-db`;
CREATE TABLE IF NOT EXISTS blog_content (post VARCHAR(255) NULL);
INSERT INTO blog_content (post) VALUES ('Here is my first post');
Output:

Go to db-02 and db-03 to check whether the table and data were replicated.
db-02.srv.local shows:

We found the table and data were replicated to db-02.
db-03.srv.local shows:

We got the same result on db-03.
That's all for this post. Honestly, in a real case, I would develop my own MySQL module rather than relying on a module from the community. Not because the community module is not good, but because we have more control over what we build, and we can tailor the module to what our business or IT team needs. For example, we can always install the latest MySQL version from other vendors like Percona without waiting for the community module to support it. Still, we can learn a lot from the community module.