Setup a redundant PostgreSQL database with repmgr and pgpool

Using a database to store information is very common, and the more information a database holds, the more critical it becomes. At a certain point, the need for a redundant, high(er) availability database setup arises. When you’re using PostgreSQL, you can set up streaming replication quite easily in order to keep your data redundant on two or more nodes. But wouldn’t it be a shame not to use those copies of your data to balance the load on your primary database server? In this post, I’ll explain how to set up a redundant PostgreSQL database with load balancing.

To achieve redundancy, I’ve chosen to use the built-in streaming replication of PostgreSQL. Another tool, repmgr, will be used for easier control of the replication between those redundant PostgreSQL instances. Repmgr doesn’t replicate data itself, but it lets you easily control the replication and the standby server(s), and monitor the status of the whole replication process.

More information about repmgr:

In order to use both (or more, if you like) copies of the database data, I will use pgpool. Pgpool pools the connections to the nodes, monitors their status and triggers failover (performed by repmgr) if needed. Pgpool also has the capability to load balance traffic based on the type of SQL query. For example, a SELECT query can perfectly well be executed on a standby (read-only) node, saving resources on the master (read-write) node.

More information about pgpool:


For this post, I’ll set up two nodes as a redundant PostgreSQL DB (a primary and a standby) and one machine which will do the pooling and distribution. The pool is a single point of failure and ideally should also be redundant. In my case, the most important thing is that all data stays available in case of a failure.


As you can see in the above scheme, SQL queries will be distributed to the primary and standby, based on the type of query. Data written to the primary should get replicated to the standby.

In case of a failure of the primary, the standby should replace the primary and become read-write. This makes sure that the database stays available for all applications. In the meantime, you can investigate what happened to the old primary and, after a checkup, start it as the new standby.

The version of PostgreSQL used in this post will be 9.4 but you can basically replace that with any other supported (recent) PostgreSQL version.



Before we get started with the setup, it’s important that all nodes in the setup can connect to each other via their hostname. If that’s not an option, you could use IPs instead of hostnames, but that would make this explanation less clear :)

Public key authentication

Another prerequisite is that all nodes can connect to each other over SSH without a password prompt, as both root and postgres. SSH will be used to rsync the data from the primary to the standby and to initiate a failover from pgpool. Password-less SSH can be achieved with public key authentication.

For root, we’ll need to generate a new RSA keypair on all nodes:
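A sketch of that step, run as root on each of the three nodes (the hostnames pgdb1, pgdb2 and pgpool used throughout this post are assumptions):

```shell
# Generate an RSA keypair without a passphrase and show the public part
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub
```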

As you can see, I did a cat of the public part of the keypair on each node. To allow all machines to connect to each other and accept each other’s key, we’ll need to add the generated public keys of all hosts to /root/.ssh/authorized_keys:
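A minimal sketch of that step on pgdb1 (the three public keys are the ones printed by the cat on each node):

```shell
# Paste the public key of every host (including pgdb1 itself), one per line
vi /root/.ssh/authorized_keys
# Restrict the permissions, otherwise SSH refuses to use the file
chmod 600 /root/.ssh/authorized_keys
```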

In the above, I added each of the generated public RSA-keys to the authorized_keys-file on pgdb1. Don’t forget the last line to change the permissions or SSH will not accept this file.

Since we really want unattended access (so no password or any other question), I’ll also add all hosts to the known_hosts file. This prevents the prompt to accept the host’s fingerprint on the first connection:
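One way to do this is with ssh-keyscan (a sketch, assuming the three hostnames resolve):

```shell
# Collect the host keys of all nodes into known_hosts in one go
ssh-keyscan pgdb1 pgdb2 pgpool >> /root/.ssh/known_hosts
```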

Since the authorized_keys and known_hosts on pgdb1 are fine for all other hosts, I’ll scp these files to the other nodes:

It’s a good idea to test the authentication now by simply SSH’ing between all nodes. You shouldn’t get any prompt at all and should be presented with the other host’s command prompt.

For example:
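Something like the following, from each node to every other node; no password or fingerprint question should appear:

```shell
# From pgdb1: should land directly in a shell on pgdb2
ssh pgdb2
exit
```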

To set up SSH authentication for the postgres user, we first need to create the user:

One option at this point would be to repeat the steps we did for root to set up public key authentication. But I’ll just reuse root’s generated keys, known_hosts and authorized_keys for the postgres user:
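A sketch of that reuse, run as root on each node (the path assumes the default postgres home directory /var/lib/pgsql):

```shell
# Create the .ssh directory in the postgres home directory
mkdir -p /var/lib/pgsql/.ssh
# Reuse root's keypair, known_hosts and authorized_keys
cp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub \
   /root/.ssh/known_hosts /root/.ssh/authorized_keys /var/lib/pgsql/.ssh/
```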

We created the home directory for postgres as root and also copied the SSH-related files with root permissions, so for everything to work, we need to change the ownership and the SELinux context:

It’s also a good idea to test the SSH-connection between all hosts using the postgres user from everywhere:


The standard CentOS repositories do not contain pgpool or repmgr, and for CentOS 7 the supported PostgreSQL version is 9.2. Because we want to use PostgreSQL 9.4, repmgr and pgpool, we need to add the PGDG repository to Yum.

PGDG is a project that maintains RPM builds of PostgreSQL and related components. You can find more information and repo URLs for PGDG here:

Let’s add the repo on all nodes:
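Installing the PGDG repository RPM could look like this (the exact release number in the URL may differ; check the PGDG site for the current one):

```shell
yum install -y http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm
```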

Setup the primary PostgreSQL node

Now that we’ve finished all prerequisites, it’s time to really get started. The first step is to install the PostgreSQL database on the master and configure it correctly for replication.

Let’s start with installing the necessary packages:
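On both DB nodes, something like this (package names from the PGDG repository; as noted below, you may want to pin an older repmgr94 build):

```shell
yum install -y postgresql94-server postgresql94-contrib repmgr94
```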

I experienced some problems with the latest version of repmgr (repmgr94-3.0.1-1 at the time of writing), especially when recovering from a failed node, so I decided to use the previous version of repmgr. When I did, all problems seemed to disappear.

After the installation, initialize PostgreSQL:
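With the PGDG packages on CentOS 7, this is typically:

```shell
/usr/pgsql-9.4/bin/postgresql94-setup initdb
```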

Configure database access by editing /var/lib/pgsql/9.4/data/pg_hba.conf:
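A sketch of the relevant entries, matching the description below (the hostnames, the repmgr and pgpool user names and the md5 method are assumptions that must match the users created later):

```
# TYPE  DATABASE      USER      ADDRESS    METHOD
local   all           all                  trust
host    repmgr        repmgr    pgdb1      trust
host    repmgr        repmgr    pgdb2      trust
host    replication   repmgr    pgdb1      trust
host    replication   repmgr    pgdb2      trust
host    all           pgpool    pgpool     trust
host    all           all       pgpool     md5
```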

The above gives both DB nodes access to the databases used for replication without a password, allows the pgpool user from the pgpool node, and allows all other users, using a password, via pgpool.

Configure PostgreSQL itself and streaming replication by editing /var/lib/pgsql/9.4/data/postgresql.conf:
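The effective settings could look roughly like this (the values are illustrative; shared_preload_libraries with repmgr_funcs is only needed when using repmgrd monitoring):

```
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 10
wal_keep_segments = 500
hot_standby = on
archive_mode = on
archive_command = 'cd .'        # not archiving; placeholder that always succeeds
shared_preload_libraries = 'repmgr_funcs'
```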

It’s a good idea to merge the changed parameters with the existing, default contents of postgresql.conf, since it contains a lot of useful comments. I’ve only posted the effective settings here to keep things clear.

To get good values for your buffers and memory reservations, you can use this as a source:

Create the directory for the repmgr configuration files:

Configure repmgr by editing /var/lib/pgsql/repmgr/repmgr.conf:
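A minimal repmgr.conf for the primary might look like this (the cluster and node names are free to choose, but conninfo must match the access rules in pg_hba.conf):

```
cluster=pg_cluster
node=1
node_name=pgdb1
conninfo='host=pgdb1 user=repmgr dbname=repmgr'
pg_bindir=/usr/pgsql-9.4/bin
```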

Don’t forget to set the owner of the configuration file to postgres:

Open TCP port 5432 in the firewall for PostgreSQL-communication:
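On CentOS 7 with firewalld:

```shell
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --reload
```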

Enable and Start PostgreSQL on the master:
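With systemd on CentOS 7:

```shell
systemctl enable postgresql-9.4
systemctl start postgresql-9.4
```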

Create the required users for replication and repmgr and the repmgr DB:
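A sketch of those commands, run on the primary (the user names repmgr and pgpool and the password are assumptions and must match pg_hba.conf):

```shell
# As the postgres user on pgdb1
su - postgres
createuser --superuser repmgr
createdb repmgr --owner=repmgr
psql -c "CREATE USER pgpool WITH PASSWORD 'secret';"
exit
```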

The last step is to register pgdb1 as master node for repmgr:
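With repmgr 3.x this is something like:

```shell
su - postgres -c "/usr/pgsql-9.4/bin/repmgr -f /var/lib/pgsql/repmgr/repmgr.conf master register"
```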

Setup the standby PostgreSQL node

Setting up the standby node doesn’t require a lot of configuration. On the first sync with the primary, most of the configuration is taken over from there.

As with the primary, the first thing is to start with installing the necessary packages:

When all packages are installed, we can sync the configuration and contents of the primary with the standby:
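With repmgr, the initial sync can be done with a standby clone (a sketch; run on pgdb2 as root, with options following the repmgr 3.x syntax):

```shell
su - postgres -c "/usr/pgsql-9.4/bin/repmgr -D /var/lib/pgsql/9.4/data \
    -d repmgr -U repmgr -R postgres --verbose standby clone pgdb1"
```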

In case you experience problems with the synchronisation and need to repeat this step, first delete the contents of the PostgreSQL data directory (rm -rf /var/lib/pgsql/9.4/data/*) and try again.

After the synchronization with the primary, configure repmgr similarly to the primary by creating the directory for the repmgr configuration files:

and by editing /var/lib/pgsql/repmgr/repmgr.conf:

Don’t forget to set the owner of the configuration file to postgres:

Open TCP port 5432 in the firewall for PostgreSQL-communication:

Enable and Start PostgreSQL on the standby:

As the last step in the standby node setup, register the standby in repmgr:
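Analogous to the master registration, as a sketch:

```shell
su - postgres -c "/usr/pgsql-9.4/bin/repmgr -f /var/lib/pgsql/repmgr/repmgr.conf standby register"
```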

Test the replication

Before we do a test, let’s first look at the status of repmgr:

The above looks good. At this point, updates on the primary should be replicated to the standby, so it’s a good time to see if that really works as expected by creating a new database on the primary node.

Before we create the database on the primary, let’s list the databases from the standby-node:

Create the database on the primary:

Run the same command as before the DB creation to list the current databases on the standby:

As you can see, the new database, test, got replicated to the standby.

Another test which you could do, is to check if the standby is read-only:
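For example, trying to create something on the standby should be rejected (a sketch; the table name is arbitrary):

```shell
# On pgdb2: any write on a hot standby fails with an error like
# "cannot execute ... in a read-only transaction"
su - postgres -c "psql -c 'CREATE TABLE readonly_test (id int);'"
```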

The above result is good: since we don’t want anybody to update our standby, everything should pass via the primary and get replicated to the standby.

Setup pgpool

The next step is to set up pgpool in order to distribute queries to the primary and standby, and to make sure that we fail over to the standby in case the primary is no longer available.

As with the DB-nodes, we’ll start with installing the necessary packages:

Configure pgpool by copying the sample configuration file:

Edit the following settings in the copied sample file /etc/pgpool-II-94/pgpool.conf. Do not replace the contents of the file; only edit the settings listed here:
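A sketch of the most relevant settings (the hostnames and the failover script name /etc/pgpool-II-94/failover.sh are assumptions; pgpool is set to listen on the PostgreSQL port 5432 here):

```
listen_addresses = '*'
port = 5432
# The two PostgreSQL backends
backend_hostname0 = 'pgdb1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/pgsql/9.4/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'pgdb2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/9.4/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
# Authentication and streaming replication mode
enable_pool_hba = on
pool_passwd = 'pool_passwd'
master_slave_mode = on
master_slave_sub_mode = 'stream'
load_balance_mode = on
sr_check_user = 'pgpool'
sr_check_password = 'secret'
# Health checks and failover
health_check_period = 10
health_check_user = 'pgpool'
health_check_password = 'secret'
failover_command = '/etc/pgpool-II-94/failover.sh %d %H'
```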

In the above settings, we specified that a script should be executed on failover, so we’ll need to create this file in /etc/pgpool-II-94/
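The script below is a sketch of what such a failover script could look like; the file name failover.sh is an assumption (the original name was cut off), and it logs to /tmp/pgpool_failover.log as described further down. It promotes the surviving standby via repmgr over SSH:

```shell
#!/bin/bash
# /etc/pgpool-II-94/failover.sh (name assumed), executed by pgpool via failover_command
# Arguments passed from failover_command: %d = id of the failed node,
# %H = hostname of the new master candidate
FAILED_NODE=$1
NEW_MASTER=$2
LOGFILE=/tmp/pgpool_failover.log

echo "$(date) failover: failed node ${FAILED_NODE}, promoting ${NEW_MASTER}" >> "$LOGFILE"

# Promote the surviving standby to primary with repmgr
/usr/bin/ssh -T postgres@"$NEW_MASTER" \
  "/usr/pgsql-9.4/bin/repmgr -f /var/lib/pgsql/repmgr/repmgr.conf standby promote" \
  >> "$LOGFILE" 2>&1

exit 0
```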

Make sure that this script is executable:

We also specified in the configuration that we want to use pool_hba.conf so we need to create this file in /etc/pgpool-II-94/pool_hba.conf:

The above allows all users to connect using a password.

Every user that needs to connect via pgpool needs to be added in pool_passwd. First we need to create this file and let it be owned by postgres:

Next, we can add users to the file. Let’s do that for the pgpool-user which we created earlier:
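pg_md5 can append a user with an MD5-hashed password to pool_passwd (the password is an assumption):

```shell
pg_md5 --md5auth --username=pgpool secret
```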

You’ll have to repeat this step for every user that needs access to the DB.

The last step is to allow connections via PCP to manage pgpool. This requires a similar approach as with pool_passwd:

Now that the configuration of pgpool is complete, it’s time to open up the firewall, enable and start the pgpool service:
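A sketch of those commands (the ports depend on your pgpool.conf, 9898 being the default PCP port; the service name comes from the PGDG package):

```shell
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --permanent --add-port=9898/tcp
firewall-cmd --reload
systemctl enable pgpool-II-94
systemctl start pgpool-II-94
```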

Test pgpool

If all went well, at this point, you should have a working installation of pgpool that will execute queries on the replicated PostgreSQL nodes.

Let’s test if everything is working by executing a query on the databases via pgpool:
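For example, listing the databases through pgpool (the pgpool user and hostname are the ones assumed earlier in this post):

```shell
psql -U pgpool --dbname=postgres --host pgpool -c "SELECT datname FROM pg_database;"
```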

This should also work from another host that has network access to the pgpool-machine.

Pgpool has a range of show commands available that let you get the status of pgpool itself. They’re executed as normal SQL but will be intercepted by pgpool.

To get the status of the nodes connected to pgpool:
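As a sketch, again connecting through pgpool:

```shell
psql -U pgpool --dbname=postgres --host pgpool -c "show pool_nodes;"
```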

As you can see, pgdb1 is the primary and pgdb2 is the standby node.

Test failover

We can easily test the failover scenario by bringing down the primary database node and seeing what happens:
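For example:

```shell
# On pgdb1: stop the primary database
systemctl stop postgresql-9.4
```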

Our setup becomes like this:


Result on pgpool:

You can see that the former standby (pgdb2) has now become the primary. The former primary (pgdb1) has become a standby but has an offline status (3).

Result in repmgr:

In the script we defined in /etc/pgpool-II-94/, which was called when the primary became unavailable, we specified to log to /tmp/pgpool_failover.log. These are the contents of the file after the failover:

As you can see, the database stays available via pgpool and the former standby became read-write.

Recover after failover to a master-slave situation

After the failover, you end up in a working but non-redundant situation. Everything keeps working for users of the database, but it would be good to go back to a primary-standby configuration.

The goal is to get to this setup:


The first step in this process is to investigate why the primary DB became unavailable and resolve that problem. Once you’re sure that the host (pgdb1 in our example) is healthy again, we can start using that host as the new standby-server.

First, make sure that the database on pgdb1 isn’t started:

Then we’ll sync the data of pgdb2 (the current primary) to the previously failed node (pgdb1). In the meantime, the database has received some updates and changes, so we need to make sure that those get replicated to pgdb1 before we start it as a standby:
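A sketch of that resync, run on pgdb1, cloning from the current primary pgdb2 (--force allows cloning into the existing data directory with the rsync method):

```shell
su - postgres -c "/usr/pgsql-9.4/bin/repmgr -D /var/lib/pgsql/9.4/data \
    -d repmgr -U repmgr -R postgres --force standby clone pgdb2"
```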

At this point, the data is (more or less) in sync with the current primary on pgdb2, so let’s start pgdb1 to act as the standby:

Repmgr automatically notices that the new standby became available:

For pgpool, we need to re-attach the failed node in order for it to be visible and usable as a standby-node:
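With the PCP interface this could look like the following (arguments: timeout, PCP host, PCP port, PCP user, password, node id; node id 0 being pgdb1 and the credentials being the assumed ones):

```shell
pcp_attach_node 10 localhost 9898 pgpool secret 0
```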

At this point, we’re back in a redundant status where the old standby (pgdb2) functions as the primary and the old primary (pgdb1) functions as a standby. If both machines are equal, you can leave the situation as is and continue to use it in this way.

Recover to the original situation

In case you want to go back to the initial setup (pgdb1 as primary, pgdb2 as standby), you can initiate a manual failover by stopping the current new primary (pgdb2) and recover it to use as a standby:
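For example:

```shell
# On pgdb2: stop the current primary to trigger the failover back to pgdb1
systemctl stop postgresql-9.4
```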

When the master (pgdb2) is stopped, a failover is triggered and pgdb1 gets assigned as primary again:

Now, sync pgdb2 with the new primary (pgdb1) and start it:
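As a sketch, analogous to the earlier recovery of pgdb1:

```shell
# On pgdb2: clone from the new primary pgdb1, then start PostgreSQL again
su - postgres -c "/usr/pgsql-9.4/bin/repmgr -D /var/lib/pgsql/9.4/data \
    -d repmgr -U repmgr -R postgres --force standby clone pgdb1"
systemctl start postgresql-9.4
```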

Re-attach pgdb2 to pgpool in order to use it as a slave:

After going through all of the above steps, we’re back to the initial situation where pgdb1 acts as the primary and pgdb2 acts as the standby.

35 thoughts on “Setup a redundant PostgreSQL database with repmgr and pgpool”

  1. Thank you very much for this very detailed and useful tutorial for having a HA PostgreSQL server.

    When configuring pgpool.conf, you mention for recovery_1st_stage_command. Probably the version available on the pgpool site could work, but I’m wondering if thanks to repmgr, it could be simplified. Do you have an example of it ?

    • Hi,

      No idea if it could be simplified, I used the default supplied script to keep things simple.

  2. Hi,

    Thanks for very detailed tutorial. This is very helpful.

    But do you have any idea how to setup PG-Pool redundancy?

    Thanks in advance.

  3. Great tutorial !.
    Is there any tool that automates the last part of “sync pgdb2 with the new primary” and also pgpool re-attach the failed node ? or this is mainly done manually by the DBMS?
    In case the pgpool is not required in the installation how can the repmgr be activated/triggered (detecting the failure and doing the fail-over change) ?

  4. Excellent tutorial, helped me a lot!
    Do you know where can I find the logs? In the pgpool.conf I turned on “log_connections”, “log_hostname”, “log_statement” and “log_per_node_statement”, just for testing purposes, so I can monitor the nodes and the balancing, but don’t know where I can find it.

  5. Hey Jens, thank you for the awesome article!

    Your instructions were very helpful for the Chef cookbook I developed:

    Even if you’re not a Chef user, you may find the enhancements useful that I made to the following scripts:,, pgpool online recovery and pgpool_remote_start (all located in the templates directory)

  6. Hello

    Excellent tutorial, helped me a lot!
    But pgpool2 fails to restart after reboot because the /var/run/pgpool-II-94 directory is empty

    Any Idea ??

  7. hello…
    I want to configure both nodes with read-write access. can it possible?? Please help….

    • Hi,

      Having both nodes as read/write is not possible. This would cause issues with consistency. I don’t think there are many redundant DB solutions out there which could do that :)

  8. Hi

    Thank you for the wonderful article.
    I’m new to postgresql and pgpool.
    I have query regarding the “” file, where it is present and if not from where can I get this file and what will be the location to place this file.
    Also do I need to put this file on the pgpool node or on the database nodes s well.


      • Thank you for the reply Jens.

        I have found the “” script file. I want to automate the recovery of the failed node. At present, I have to run the “pcp_recovery_node” command manually on the Pgpool node to recover the failed node. I have placed the script ‘’ and ‘pgpool_remote_start’ on the Postgresql database nodes.
        How can I automate this process, without any manual intervention? The failed node should get recovered and also added to the cluster. My ‘’ does not contain the creation of a trigger file. Is this the proper way to do the failover, or am I missing something?
        Please find the ‘’ script used by me:

        # first stage recovery
        # $1 datadir
        # $2 desthost
        # $3 destdir

        #as I’m using repmgr it’s not necessary for me to know datadir(master) $1
        #repmgr needs to know the master’s ip
        MASTERNODE=`/sbin/ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://'`

        cmd1=`ssh postgres@$RECOVERY_NODE "repmgr -D $CLUSTER_PATH --force standby clone $MASTERNODE"`
        echo $cmd1

  9. The very best implementation of postgresql-pgpool i’ve come across on the web after intensive search. You made the whole procedure look simple. Thanks

  10. In addition, do you have any GUI manager of pgpool over PostgreSQL? I would like my clients to make use of a GUI for operations instead of the command line.

    • Hi, I’m not aware of any GUI managers for pgpool. For PostgreSQL itself, there is pgAdmin3.

  11. Jens,

    Thanks for wonderful tutorial on PostgreSQL hot stand by setup using repmgr and pgpool. It was awsome experience for me to configure hot stand by in postgresql for the first time. Just by following your instructions, one can setup the same successfully. Only change I made was used newer version of Postgresql 9.5 instead of 9.4. However, if you can show the way how to configure the same without using pgpool that will be additional help for those who do not want to use pgpool.

  12. Pingback: Comando de administración Pgpool modo Balanceo de Carga y Postgresql replication Slots – Linux Trucupei Blog

  13. Pingback: Automatic recovery of the failed postgresql master node is not working with pgpool II - Tech

  14. Hello All,

    I configured according to the guidelines, but there is a problem. I have 2 servers:

    Masterdb and standbyDB.

    MasterDB down -> standbyDB up to master.
    I move MasterDB -> standby “Ok” (follow the instructions)

    But when StandbyDB (master) down the MasterDB (standby) can not become a master

  15. Using PostgreSQL 9.5, pgPool-II-9.5, and repmgr 3.1.3 there are a few tweaks that are helpful to be aware of.

    First, doesn’t appear to be needed with the repmgr. I don’t think. I could be wrong, I’m still digging, but so far it’s working without it.

    Second, repmgr defaults to use pg_basebackup instead of rsync. pg_basebackup requires the destination directory to be empty. If using the command:

    /usr/pgsql-9.5/bin/repmgr -D /var/lib/pgsql/9.5/data -d repmgr -p 5432 -U repmgr -R postgres --force standby clone pgdb2

    You’ll get an error about “pg_basebackup: directory “/var/lib/pgsql/9.5/data” exists but is not empty. To use the rsync option instead of pg_basebackup, and so you don’t have to start from an empty directory, use this instead (add -r):

    /usr/pgsql-9.5/bin/repmgr -D /var/lib/pgsql/9.5/data -d repmgr -p 5432 -U repmgr -R postgres -r --force standby clone pgdb2

    That will use the rsync method and continue as expected. Hope this saves someone some time.

  16. Also, very well written instructions. There are a few config changes with the different versions I was using, but by an far one of the best write-ups on setting this up. Thank you.

  17. Hi,

    After failing the master, it does not switch to the standby. Here is the failover error:
    Failed node: 1
    + /usr/bin/ssh -T -l postgres PGDB1 '/usr/pgsql-9.5/bin/repmgr -f /var/lib/pgsql/repmgr/repmgr.conf standby promote 2>/dev/null 1>/dev/null <&-'
    Host key verification failed.

  18. This is really a beautiful techi doc. I am not database guys, but after reading it. It make my mind open in database replication.

    But i have a question, suppose i have an application what need to integrate with this replication. Can someone please explain me how this replication work with application like Apache or others. Means should all data will be send to pgpool and then pgpool will decide where it go. OR data will go to directly with pgnode1 or pgnode2…

    Any suggestion will be appropriated. And thanks a lot for this doc.

  19. really brilliant
    but i have an issue
    when i make pgdb1 down i should restart pgpool and pgdb1 to get new settings
    and when back to the original situation also need restart the three nodes

    is it my own issue or what

  20. what is the log file to manage connection
    list all active connection
    list what IPs doing
    is that available or not ???

  21. Pingback: Install postgresql-9.5 replicate with repmgr from source – "DOKUMENTASI"

  22. hi,

    When i stop postgresql in master, I checked again on pgpool error:
    root@localhost ~]# psql -U pgpool --dbname=postgres --host pgpool -c "show pool_nodes"
    psql: server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
    Please help me!
