varlena
PostgreSQL Training,
Consulting & Support
General Bits
By A. Elein Mustain

13-Sept-2004 Issue: 86


General Bits is a column loosely based on the PostgreSQL mailing list pgsql-general.
To find out more about the pgsql-general list and PostgreSQL, see www.PostgreSQL.org.

Slony-I: Example of Replicating A Small Database: Swapping Masters
Slony-I Setup 10-Sept-2004

In this article we will walk through the steps required to add a node to the scheme, switch data providers of a table set, and then promote a replica to master, all using the replication scheme originally set up in Issue #85.

Adding a Node

For the sake of an example, we will add a replica of the gb table set on another database in the 8.0 installation. This will give us the master node on the 7.4 installation, database gb, and two replicas, gb and gb_replica, on the 8.0 installation. Generally, one would not replicate to the same installation; however, if there is a logical reason for it, it can be done. It is assumed that the gb_replica database in the 8.0 installation has been created and seeded with the same schema as the other two databases in this cluster.

First we define the new Node 3, then we ensure there are paths from Node 3 to and from Nodes 1 and 2. Next we want to enable listening along each path.

(This is a really good time to remember that the connection information in the store path command pertains to the server node. This is also a pretty good time to look up drop path and drop listen, two more of the slonik commands.)

#!/bin/sh
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb_replica host=localhost port=5430 user=postgres';
echo 'Cluster defined, nodes identified';

#
# Create the third node and add it to the cluster
#
store node (id=3, comment='gb_replica 8.0 5430');

#
# create paths
#
store path (server=1, client=3, conninfo='dbname=gb host=localhost port=5434 user=postgres');
store path (server=2, client=3, conninfo='dbname=gb host=localhost port=5430 user=postgres');
store path (server=3, client=1, conninfo='dbname=gb_replica host=localhost port=5430 user=postgres');
store path (server=3, client=2, conninfo='dbname=gb_replica host=localhost port=5430 user=postgres');

#
# Enable listening along each path
#
store listen (origin=1, client=3, provider=1);
store listen (origin=2, client=3, provider=2);
store listen (origin=3, client=1, provider=3);
store listen (origin=3, client=2, provider=3);

_END_
Now we are ready to start a new slon process for Node 3, the 8.0 installation plus the gb_replica database. Once the new slon process is up and running, we can subscribe Table Set 1, originating at Node 1, to Node 3. At this point the log files are invaluable--tail -f the log files and watch for errors. If you see any, go back to the part where drop path was mentioned.
#!/bin/sh

#
# Start Slony for each node
# 
slon gb "dbname=gb_replica user=postgres port=5430 host=localhost" > slon_gbREP_80.out 2>&1 &
sleep 20;

slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb_replica host=localhost port=5430 user=postgres';

subscribe set (id=1, provider=1, receiver=3, forward=yes);
echo 'set 1 of gb tables subscribed by node 3';

_END_

Swapping Providers

If your replication scheme has more than our measly two nodes, you may want to switch providers. In our case Node 1, the origin of Set 1, provided the information on Set 1 to Node 2. When we added Node 3, we subscribed it to Table Set 1. That table set was both originated and provided by Node 1.

Node 3 could have either Node 1 or Node 2 as a provider of that set information. Node 2 is available as a provider because when we subscribed Node 2 to Table Set 1, we also enabled it as a forwarder of Table Set 1. We also provided Listen paths from all Nodes to all Nodes.

Suppose Node 3 were subscribed to Table Set 1 via Node 1 and we wanted to change the provider of Node 3's information to Node 2. Since the Paths and Listens are already set up, we simply resubscribe the set, setting the provider to Node 2 instead of Node 1.

slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb_replica host=localhost port=5430 user=postgres';

subscribe set (id=1, provider=2, receiver=3, forward=yes);
echo 'set 1 of gb tables subscribed by node 3';
_END_

Swapping Masters

In our example the database called gb, mastered on a PostgreSQL 7.4 installation, is replicating to an 8.0 installation. The decision has been made to move the database forward so that the 8.0 installation is the master. (This is not recommended (yet!) since 8.0 is still in beta, but you will want to be prepared to take advantage of a fast upgrade when it goes production!)

Before you even consider swapping masters, you must have in hand a complete set of steps to follow to switch your applications from the 7.4 installation to the 8.0 installation. These steps are application-dependent, so you are on your own there. However, they probably consist of stopping each application, changing the connection information (ideally in one place) and bringing the application back up. It is imperative for a smooth and fast switchover that you have the application switch information at hand. Write the steps down and save the instructions in your source code control mechanism. You will never be sorry you did.
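As a very rough sketch only (the service command, configuration file and paths here are hypothetical and will differ for your applications), the switchover steps might be collected in a script along these lines:

#!/bin/sh
#
# Hypothetical switchover helper -- adapt to your own applications.
# Assumes the application reads its connection string from one file.
#

# 1. Stop the applications that write to the gb database.
/etc/init.d/myapp stop                       # hypothetical service name

# 2. Point them at the new master, the 8.0 installation on port 5430.
echo "dbname=gb host=localhost port=5430 user=postgres" \
    > /etc/myapp/db.conninfo                 # hypothetical config file

# 3. Run the lock set / move set slonik script shown later in this
#    article, then bring the applications back up.
/etc/init.d/myapp start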

One more cautionary note: it is highly recommended that you test your application against a copy of the 8.0 database. This copy should not be the replica if your application writes to the database; it should be another copy. Remember that replicas are read-only databases. Oh, yes, one more thing: back up your databases before performing any major operations on them. Switching replication masters is considered a major operation. No excuses. Backup!
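For example, using pg_dump and the ports used throughout this article (the dump file names are just illustrative), backups before the move might look like this:

#!/bin/sh
#
# Back up all three databases in the cluster before moving the set.
# Ports match the examples in this article.
#
pg_dump -h localhost -p 5434 -U postgres gb         > gb_74_before_move.sql
pg_dump -h localhost -p 5430 -U postgres gb         > gb_80_before_move.sql
pg_dump -h localhost -p 5430 -U postgres gb_replica > gb_replica_80_before_move.sql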

Everything that happens in a Slony-I replication happens because of an Event. One of the important Events is a SYNC Event. Every subscribing node replicates data up to a SYNC event and then commits the changes to the database. All nodes capable of forwarding subscriptions also log those changes until the SYNC event has been confirmed by every node subscribed to that set. This ensures that replication data remains available in the system until Slony is sure it is not needed for any node.
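If you want to watch this happening, the events and their confirmations are kept in the cluster's schema (named after the cluster with a leading underscore, _gb in our example). A peek like the following is read-only and safe; the exact columns may vary with your Slony-I version:

#!/bin/sh
#
# Look at recent events and their confirmations on the origin node.
# The _gb schema assumes the cluster name "gb" used in this article.
#
psql -h localhost -p 5434 -U postgres gb << _END_
select ev_origin, ev_seqno, ev_type, ev_timestamp
  from _gb.sl_event order by ev_seqno desc limit 10;
select con_origin, con_received, con_seqno
  from _gb.sl_confirm order by con_seqno desc limit 10;
_END_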

To change the master of a set, you must first ensure that there are listeners for any new paths. In our example we already provided listening paths to and from all of the nodes, so no new listener paths are required. Before swapping the master of a subscribed set, the set must be locked to ensure that no updates occur during the swap. Then the set may be moved. Finally, the set must be subscribed to by the newly designated replica node, the one that was formerly the master.

Before you run the script, ensure that write access to your master database is OFF.

#!/bin/sh

slonik << _END_

cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb_replica host=localhost port=5430 user=postgres';

# add listener paths if required

#
# lock and move set
#
lock set (id=1, origin=1);
move set (id=1, old origin=1, new origin=2);

# subscribe set if required
subscribe set (id=1, provider=2, receiver=1, forward=yes);

_END_
After this script runs, Slony-I is ready for the change, so you should restart your application and point it at the new master for writes. This process should take only seconds.

Slony-I Schema

The underlying tables for Slony-I are fairly straightforward. The cluster name, prefixed with an underscore, is the name of the schema in the database in which the Slony tables reside. (Use set search_path in psql.) You can verify your commands to add nodes, listens, paths, etc. by examining these tables. It may also look very tempting to "fix" things by changing the underlying tables directly. Resist doing so. Use slonik in order to ensure that all the appropriate Events are triggered to do the updates in an orderly fashion across all nodes.
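For example, to see the nodes, paths, listens and subscriptions that the scripts above created (again assuming the _gb schema for our cluster name gb; table layouts differ slightly between Slony-I versions):

#!/bin/sh
#
# Inspect the Slony-I configuration tables for the "gb" cluster.
#
psql -h localhost -p 5434 -U postgres gb << _END_
set search_path to _gb;
select * from sl_node;
select * from sl_path;
select * from sl_listen;
select * from sl_subscribe;
_END_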


References

This article is based on the following resources:

The scripts for this article are also available individually.

Contributors: Jan Wieck JanWieck at Yahoo.com, elein at varlena.com


Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!
Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009
