# Replication in MySQL vs. PostgreSQL

CJ Estel

To keep this simple, for both MySQL and Postgres I will assume that you have already set up a user on your master with everything needed for replication, and that nothing in the network would prevent your hosts from talking to each other.

Ultimately, I'm writing this post because I always forget which options I typically use when doing this. I don't do it that often, and I spend way more time on the man pages and the internet than I'd like to admit, so I figured I'd throw this up as a reminder for myself and talk through some of the differences between MySQL and Postgres replication.

MySQL

There are many ways to stand up a slave in MySQL. The most common way for small databases is to use mysqldump. This method does cause locking on the server while it is running, so for large databases you will likely want a different solution like an LVM snapshot or Percona's XtraBackup tool.

Below are the most common options I typically use when dumping the data to set up my slave:

mysqldump --all-databases --single-transaction --routines --triggers --add-drop-database --master-data=2 > mysql_source.sql

The --master-data option can be set to 1, which writes the binlog file and position into an active CHANGE MASTER TO statement. I prefer setting it to 2, which puts the same statement in the file but commented out. I like to edit the file to add the master host and the other parameters necessary so replication will start up correctly when I run it, but if I forget to do that I don't get errors at the end of the output stating that it couldn't connect to the master because it wasn't specified.

Ultimately the line ends up looking similar to this:

CHANGE MASTER TO
MASTER_HOST='[ip-address]',
MASTER_USER='[repl-user]',
MASTER_PASSWORD='[repl-pass]',
MASTER_LOG_FILE='[log-file]',
MASTER_LOG_POS=[position];

Pass that file into mysql with a valid user or the proper authentication information:

mysql < mysql_source.sql

After that you can start the slave on the slave host and validate that things are working:
start slave;
show slave status\G
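
The exact output of show slave status varies by MySQL version, but the fields I look at first are roughly these (values abbreviated for illustration, using the same bracketed placeholders as above):

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Relay_Master_Log_File: [log-file]
Exec_Master_Log_Pos: [position]
Seconds_Behind_Master: 0
Last_IO_Error:
Last_SQL_Error:

Both thread fields should report Yes, and the two error fields should be empty.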

Keep in mind when reviewing Seconds_Behind_Master that it is actually displaying the current time minus the timestamp of the last replayed transaction. This means that if there are no new transactions on the master, the slave could report that it is behind when in actuality it isn't. I've always been a fan of having my replication checks actually write data on the master to avoid this issue. Percona has some good tools around monitoring replication that I'd encourage you to check out.
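
A minimal sketch of that idea, assuming a hypothetical heartbeat table that something on the master bumps every few seconds (this is roughly what Percona's pt-heartbeat does for you):

-- On the master: create the table once and keep bumping the timestamp
CREATE TABLE heartbeat (id INT PRIMARY KEY, ts TIMESTAMP NOT NULL);
INSERT INTO heartbeat (id, ts) VALUES (1, NOW())
  ON DUPLICATE KEY UPDATE ts = NOW();

-- On the slave: lag is the gap between now and the last replicated heartbeat
SELECT TIMESTAMPDIFF(SECOND, ts, NOW()) AS lag_seconds
  FROM heartbeat WHERE id = 1;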

PostgreSQL

When you first look into Postgres replication you will see that there are a LOT of different types of replication that can be built with it. For the sake of simplicity, I'm going to focus on Streaming Replication, which is what the community at large is focusing development efforts on and is at this point the "standard" way of doing Postgres replication.

When you first look up how to set up Streaming Replication after discovering that it is the Postgres standard, you will likely do what I did and step through the instructions one by one. While you'll have to follow the setup for the replication user near the top and perform those steps on the master, once the master is set up you'll want to skip down to the bottom of the page and read the shortcut, if you will.
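
For reference, the master-side pieces of that setup usually amount to something like the following. This is a sketch for a 9.4-era server; the role name, password placeholder, network range, and authentication method are all assumptions you would adapt:

-- On the master: a role that is allowed to stream WAL
CREATE ROLE replication WITH REPLICATION LOGIN PASSWORD '[repl-pass]';

# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 64

# pg_hba.conf on the master: let the slave connect to the replication pseudo-database
host    replication    replication    192.168.0.0/24    md5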

pg_basebackup can do all of the dirty work for you. I've found it SUPER simple to stand up a slave in Postgres using this utility. It does all of the snapshotting of the master for you and pulls everything down over the existing Postgres service port to sync directly from the master to the slave. As long as you keep enough WAL logs (the equivalent of MySQL binary logs) on the master, you run this one command and it sets everything up for you on the slave. See the man page or run pg_basebackup --help for all of the options.

pg_basebackup -h 192.168.0.10 -D /var/lib/pgsql/9.4/data -P -U replication --xlog-method=stream
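
The data directory that pg_basebackup copies down still needs to be told to run as a standby and stream from the master (passing -R to pg_basebackup will write this file for you). A minimal 9.4-style sketch, with the connection details as assumptions:

# recovery.conf in the slave's data directory
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 user=replication password=[repl-pass]'

# postgresql.conf on the slave, so it can serve read-only queries while replaying WAL
hot_standby = on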

Key Differences

How data flows through replication

MySQL data is written to the master and flushed down through the logs. The slave pulls the logs down, plays them through itself, and writes out its own log. If you intend to use cascaded replication (A->B->C), you need to be aware that C cannot be pointed directly at A without first ensuring that your logs match up. To do this, you could stop the slave service on B, make sure that all logs have been played on B, see what its position is, and then repoint C to that position on host A to change the upstream master.
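
A rough sketch of that repointing, using the same bracketed placeholders as above (this assumes C has already applied everything it previously pulled from B):

-- On B: stop pulling from A and let the relay log drain
STOP SLAVE IO_THREAD;
SHOW SLAVE STATUS\G
-- once the SQL thread has caught up, note Relay_Master_Log_File and
-- Exec_Master_Log_Pos; that is B's position in A's binlogs

-- On C: repoint directly at A using the coordinates noted on B
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='[host-a]',
  MASTER_USER='[repl-user]',
  MASTER_PASSWORD='[repl-pass]',
  MASTER_LOG_FILE='[relay-master-log-file-from-B]',
  MASTER_LOG_POS=[exec-master-log-pos-from-B];
START SLAVE;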

With Streaming Replication, Postgres has to match exactly on all servers. This gives you the nice benefit of being able to shuffle upstream and downstream replication hosts pretty easily, since the logs match everywhere. The biggest drawback I've seen is that to achieve this state, Postgres prevents you from performing ANY writes on the slaves. Regular read queries can be fielded by the slaves, but if you want to write even a temp table, you will get an error if that goes against a slave.

How log data is retained

MySQL has a variable called expire_logs_days. You can set this to however many days you want to keep your logs, point the log path at a different volume than the database as a best practice, and for the most part that is what will happen.
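
In my.cnf that ends up being just a couple of lines; the binlog path here is a made-up example of putting the logs on their own volume:

# my.cnf on the master
log_bin          = /binlogs/mysql-bin
expire_logs_days = 7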

Postgres doesn't have anything built in to keep logs for a particular number of days. It has a variable called wal_keep_segments that specifies the number of WAL segments to keep. Each segment is 16MB in size, so with some calculation you can set it to a reasonable number to get the amount of retention that you desire. This, however, does not appear to be the recommended way to use this variable.
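
For example, if you wanted to keep roughly 4GB of WAL around for slaves to catch up from (the figure is just an illustration):

# postgresql.conf: 256 segments x 16MB per segment is roughly 4GB of retained WAL
wal_keep_segments = 256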

You can set up the archive for Postgres and it will push log files there, allowing you to manage them with regular filesystem maintenance or other tools. Some designs have this archive as a share across Postgres servers so that you can play forward from the archive until you get close to current and then switch over to playing forward from the server directly.
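
A sketch of that archive setup, with the archive path as an assumption (often an NFS share in the designs described above):

# postgresql.conf on the master
archive_mode    = on
archive_command = 'test ! -f /pg_archive/%f && cp %p /pg_archive/%f'

# recovery.conf on a slave that replays from the archive before switching to streaming
restore_command = 'cp /pg_archive/%f %p'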

Postgres 9.4 introduced a new feature called replication slots. With replication slots, the slave essentially registers with the master, and the master knows that the slave exists and how far it has played through the logs. The master will automatically retain as many logs as it takes for any and all slaves connected to it to catch back up.

This is really nice for bringing back up a slave that went offline on a Friday before a holiday weekend, but it has its own drawbacks. Since the master will hold onto logs until that server comes back online and starts playing through them, it can cause your master to run out of disk space. If you are unable to get a slave back up and you are using replication slots, it is important that you log on to the master and drop the replication slot that the slave was registered under.
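
The slot bookkeeping itself is just a few SQL calls on the master, plus one line on the slave; the slot name here is an arbitrary example:

-- On the master: create a physical slot for the slave to use
SELECT pg_create_physical_replication_slot('standby1');

-- See which slots exist and how far back they are holding WAL
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- If the slave is never coming back, drop the slot so WAL can be recycled
SELECT pg_drop_replication_slot('standby1');

# recovery.conf on the slave, so it attaches to the slot
primary_slot_name = 'standby1'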

Locking

As mentioned above, when running mysqldump with MySQL, a lock occurs while the database is being dumped. This can be disruptive to users or cause problems on systems that have high uptime requirements. For this reason, other backup tools are very popular when the database gets larger in size.

Postgres uses Multi-Version Concurrency Control (MVCC) and avoids locking in most cases. Running pg_basebackup takes a very brief lock to get a consistent state and then releases it; the slave can then play forward after it has downloaded the initial state, provided enough logs are retained.

The equivalent of mysqldump for Postgres is pg_dump. This utility also avoids taking long locks in most scenarios, though I'm unaware of a way to take the SQL dump that pg_dump generates and set up replication from it.

Replication Options

MySQL allows you to specify statement-based, row-based, or mixed replication modes. Row-based is good for ensuring everything is exact, but it can send a lot more data than necessary. Statement-based can be good since you're just replaying the statements through the servers, so if a query updates a lot of data, you don't have to send all of that data around your network. Statement-based replication also lets you see the statements in the binlog, so if something is broken you know what query caused it.

Statement-based can run into issues, though, if you use things that may evaluate differently on different servers. For instance, if I were to do an insert with a system-generated UUID from the database, it would yield a different result on the second box. Mixed mode seems to be the best of both worlds: it will use statement-based replication unless it detects a query that is unsafe for replication, automatically switch to row-based replication for that transaction only, and then switch back.
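
The mode is controlled by binlog_format, and the UUID case above is exactly the kind of statement the server treats as unsafe; the table in this sketch is hypothetical:

# my.cnf
binlog_format = MIXED

-- UUID() would evaluate differently when replayed on a slave, so in MIXED mode
-- the server logs this particular statement row-based and then switches back
INSERT INTO widgets (id, name) VALUES (UUID(), 'example');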

Postgres Streaming Replication is more similar to row-based replication, as it keeps everything in sync across all of the servers. Starting in version 9.4, 2ndQuadrant has been putting a lot of effort into their Bi-Directional Replication (BDR) project, and some of that effort has gone into how this data gets sent between servers. It has introduced a new layer that allows you to achieve results closer to statement-based replication.

All or Nothing

MySQL is very flexible. You can replicate a single database, individual tables, or get as granular as you want. It also allows you to intentionally introduce differences between masters and slaves. For instance, I could have very few indexes on my write master but many more on my read slaves.
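
That filtering is done with replication filter options on the slave; the database and table names here are made up for illustration:

# my.cnf on the slave
replicate-do-db         = app_db
replicate-ignore-table  = app_db.audit_log
replicate-wild-do-table = reports%.%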

Postgres, however, is all or nothing with Streaming Replication: you get all of the database server or none of it. Since everything has to match across servers, you cannot distribute indexes differently between masters and slaves. As BDR is somewhat an extension of Streaming Replication, there are features in it that will allow you to replicate at the individual database level instead of the server level. BDR is still pretty new territory and we have elected not to run it in our environment yet, but it looks like it will help address some of the "all or nothing" issues that currently exist in Streaming Replication.

High Availability

MySQL gives you the power to do whatever you want; with that you can do some pretty cool things, or you can shoot yourself in the foot. With my past MySQL HA deployments, I set up auto_increment_increment and auto_increment_offset on each server that could potentially take writes. This allowed me to manage the id fields and avoid key conflicts on primary keys that weren't natural keys or on columns specified as unique.
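
For a hypothetical pair of write masters, that looks like this, so one server hands out odd ids and the other hands out even ones:

# my.cnf on master 1: generates 1, 3, 5, ...
auto_increment_increment = 2
auto_increment_offset    = 1

# my.cnf on master 2: generates 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset    = 2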

Postgres has the concept of a Global Sequence. Basically, the servers in a BDR cluster talk amongst themselves and automatically make sure not to stomp on each other's sequences. At this time BDR is still relatively new and requires a good bit of locking to achieve DDL changes consistently across the cluster. In our environment the uptime requirements wouldn't allow those locks to be taken during normal business hours, so we haven't used it outside of some minimal testing.

We hope as the product matures with future releases and plugins that it will solve a lot of our high availability and multi-datacenter needs.
