General Bits is now included in the PostgreSQL Site Search. To search General Bits you can also use the Google search box at the bottom of this page. Subscribers to General Bits also have access to Indexable General Bits, which provides access to hand-indexed articles.
In recognition of his role as lead developer on the internationalization front, as well as his invaluable work in both the build and release processes, Peter Eisentraut has been invited, and has accepted, to join the Core Committee. Peter has also contributed key work on organizing documentation and translations, and he is the author of the Information Schema. Peter has actively participated in HACKERS, GENERAL and other PostgreSQL lists for some time.
The group of the last several General Bits issues (84-87) and Issue #63 (an overview) are now grouped together in Tidbits for easier reference.

The replication example we have been working with began as a 7.4 database gb (Node1), replicating to an 8.0 installation (Node2). We added a third node, gb_replica, on the 8.0 installation. Initially the third node was replicated directly from Node1. We then switched Node3's provider to Node2, and then we promoted Node2 to be the master. So now we have an 8.0 master at Node2 and two replicas: the 7.4 database at Node1, and gb_replica in the same installation as Node2. (Having two replicas in one installation is not a usual use case, but it works for our example.) We will discuss the second replica, gb_replica, as if it were on another machine or installation.

Failover

Suppose our master database failed or lost power and had to be taken offline. (Remember that Node3, for the sake of discussion, is not on this machine, so it will be treated as if it were still live when the machine died.) What to do next is a human decision, and the humans must be prepared for this failover scenario. In our case, we had decided to fail over to Node1 in case of a failure on Node2. The applications communicating with the database will also be failing; they must be taken offline and restarted pointing at the new master quickly after the master switch takes effect.

The failover procedure for Slony is a combination of a provider change and a master change, both of which were covered in the previous article. Previously we added Node3 and had it provided for by Node1, then changed Node3's provider to Node2, and then promoted Node2 to master. In our failover case, Slony must do the reverse:
#!/bin/sh
slonik <<_EOF_
cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost user=postgres port=5434';
node 2 admin conninfo = 'dbname=gb host=localhost user=postgres port=5430';
node 3 admin conninfo = 'dbname=gb_replica host=localhost user=postgres port=5430';

failover (id=2, backup node = 1);
_EOF_

At this point the slon process for Node2 should be dead. When you are ready to put Node2 back into place, it will need to be added as a fresh, empty database into the replication scheme. When it catches up, you can then switch masters so that Node2 can again be the master of the cluster. These exercises were already shown earlier, but they are collected here in one script for your reference.

While resolving some of the problems I ran into, I found that it was easiest and clearest for me to drop Node2. drop node "erases" the node from the entire replication cluster. This is different from uninstall node, which removes the Slony-I schema from a specific database instance. Both tools are useful, but don't confuse them.

The other issue I ran into was that, while quickly cutting and pasting, I had mismatched paths, listens and connection information. It is very worthwhile to check each of these commands by hand to verify that they are exactly what you meant. And don't forget that the connection information for store path pertains to the server node. Undoing bad paths and listen connections is a delicate operation, and it is very tempting to throw the whole thing away and start from scratch.
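Putting Node2 back in is essentially the add-a-node procedure from the previous article. Below is a minimal slonik sketch of that recovery, under the assumptions that the replication set id is 1, that the gb database on port 5430 has been recreated empty, and that slon daemons are running for Node1 and Node3. Treat it as an outline rather than a drop-in script.

#!/bin/sh
# Sketch only: rebuild the failed Node2 as a fresh subscriber of set 1.
slonik <<_EOF_
cluster name = gb;
node 1 admin conninfo = 'dbname=gb host=localhost user=postgres port=5434';
node 2 admin conninfo = 'dbname=gb host=localhost user=postgres port=5430';
node 3 admin conninfo = 'dbname=gb_replica host=localhost user=postgres port=5430';

# Erase the old Node2 from the cluster, then recreate it.
drop node (id = 2, event node = 1);
store node (id = 2, comment = 'Node 2 rebuilt after failure', event node = 1);

# Paths in both directions; each conninfo points at the server node.
store path (server = 1, client = 2, conninfo = 'dbname=gb host=localhost user=postgres port=5434');
store path (server = 2, client = 1, conninfo = 'dbname=gb host=localhost user=postgres port=5430');

# Subscribe the rebuilt node to set 1 from the new master, Node1.
# (Older Slony-I releases may also need explicit store listen entries.)
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
_EOF_

Once Node2 has caught up, the lock set / move set steps from the previous article would make it the master again.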
A 7.3 database was being updated to 7.4. During testing we found at least one instance of incorrect syntax: something in the functions that worked in 7.3 but, due to 7.4's stricter typing rules, did not work in 7.4. We could find and fix the problems by editing the dump file; however, that meant we would have to edit it again when we dumped the production database again to move it to 7.4 for the next round of tests or for the real update. Yes, we could also dump the schema and the data separately. That is what is most sensible. But instead the problem of extracting 'create function' statements from the system catalogs took on a life of its own, as interesting problems do. What came of this is the following view definition:

CREATE OR REPLACE VIEW funcsource as
SELECT '--\012create or replace function ' || n.nspname::text || '.'::text
       || p.proname::text || '('::text || oidvectortypes(p.proargtypes) || ')\012'::text
       || 'returns ' || t.typname || ' as \'\012'
       || p.prosrc || ' \'\012'
       || 'language \'' || l.lanname || ' \';\012' as func_source,
       proname as function,
       nspname as schema,
       t.typname as rettype,
       oidvectortypes(p.proargtypes) as args,
       l.lanname as language
  FROM pg_proc p, pg_type t, pg_namespace n, pg_language l
 WHERE p.prorettype = t.oid
   AND p.pronamespace = n.oid
   AND p.prolang = l.oid
   AND l.lanname <> 'c'
   AND l.lanname <> 'internal';

You could use this as follows to output all of the function definitions to a file:

   psql -Atc "select func_source from funcsource;" > functions.out

Interesting additional qualifications can select out functions by return type, function name, schema, and argument types:

   AND rettype = 'bool'
   AND schema = 'public'
   AND function = 'myfunc'
   AND language = 'plpgsql'
   AND args LIKE '%bool%'
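As a usage sketch, those qualifications simply go into the WHERE clause of the query handed to psql. The \012 escapes in the view are octal newlines, so each function comes out of the file as a formatted create or replace function statement. The output file name below is just an example:

   # Sketch: dump only the plpgsql functions in the public schema.
   psql -Atc "select func_source from funcsource where schema = 'public' and language = 'plpgsql';" > public_functions.out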
Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!
Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009