PostgreSQL Training,
Consulting & Support
General Bits
By A. Elein Mustain

27-Sept-2004 Issue: 88


General Bits is a column loosely based on the PostgreSQL mailing list pgsql-general. To find out more about the pgsql-general list and PostgreSQL, see the PostgreSQL project web site.

Schema Changes with Slony
DDL application 22-Sep-2004

Schema Changes

Changing the schema of a database being replicated is not simple. The schema changes must be applied at the same transactional point in time on all nodes. Direct schema changes could lead to serious corruption in Slony-I because of the way triggers, rules and constraints are handled and disabled on the replicas.

Slony-I provides a way to execute SQL statements via the Event mechanism. This provides the transactional integrity necessary for schema changes, as well as the trigger, rule and constraint changes required for the replicas. The Event must be initiated on the Master Node.

I've written a couple of trivial Slony setup views to show my current Slony setup:

   $ gb
     node  |       comment       |   active   
   --------+---------------------+------------
    Node 1 | gb 7.4 5434         |  Is Active 
    Node 2 | gb 8.0 5430         |  Is Active 
    Node 3 | gb_replica 8.0 5430 |  Is Active 
   (3 rows)

    Client Node | Connects to Server Node |                      Via connection                      
   -------------+-------------------------+----------------------------------------------------------
    Node 1      | Node 2                  | dbname=gb host=localhost port=5430 user=postgres
    Node 1      | Node 3                  | dbname=gb_replica host=localhost port=5430 user=postgres
    Node 2      | Node 1                  | dbname=gb host=localhost port=5434 user=postgres
    Node 2      | Node 3                  | dbname=gb_replica host=localhost port=5430 user=postgres
    Node 3      | Node 1                  | dbname=gb host=localhost port=5434 user=postgres
    Node 3      | Node 2                  | dbname=gb host=localhost port=5430 user=postgres
   (6 rows)

     Node  | Listens to |  Via   
   --------+------------+--------
    Node 1 | Node 2     | Node 2
    Node 1 | Node 3     | Node 3
    Node 2 | Node 1     | Node 1
    Node 2 | Node 3     | Node 3
    Node 3 | Node 1     | Node 1
    Node 3 | Node 2     | Node 2
   (6 rows)

     Set  | Is Provided By | Is Received By | Is Forwardable | Is Active 
   -------+----------------+----------------+----------------+-----------
    Set 1 | Node 1         | Node 2         | t              | Is Active 
    Set 1 | Node 1         | Node 3         | t              | Is Active 
   (2 rows)

     Set  |      Table      | Table Id | Unique Index Name |     Comment     
   -------+-----------------+----------+-------------------+-----------------
    Set 1 | public.issues   |        1 | issues_pkey       | Issues table
    Set 1 | public.articles |        2 | articles_pkey     | public.articles
    Set 1 | public.arttext  |        3 | arttext_pkey      | public.arttext
    Set 1 | public.contrib  |        4 | contrib_pkey      | public.contrib
    Set 1 | public.keywords |        5 | keywords_pkey     | public.keywords
    Set 1 | public.sqlbase  |        6 | sqlbase_pkey      | public.sqlbase
    Set 1 | public.sqlkw    |        7 | sqlkw_pkey        | public.sqlkw
    Set 1 | public.whoall   |        8 | whoall_pkey       | public.whoall
    Set 1 | public.artfile  |        9 | artfile_pkey      | public.artfile
   (9 rows)

To add a "dummy" column to the artfile table, we issue an Event to the master node (1) pointing to the ALTER TABLE script so that the change can be synchronized between databases. The EXECUTE SCRIPT command in slonik does this. Remember that slonik must be able to find your change script.

   # Create a new column in a table in the replicated set 
   echo "Creating new column in the artfile table"
   slonik <<_EOF_
   cluster name = gb;
   node 1 admin conninfo = 'dbname=gb host=localhost user=postgres port=5434';
   node 2 admin conninfo = 'dbname=gb host=localhost user=postgres port=5430';
   node 3 admin conninfo = 'dbname=gb_replica host=localhost user=postgres port=5430';
   execute script ( SET ID = 1, FILENAME = 'changes20040919.sql', EVENT NODE = 1);
   _EOF_
Once this change is propagated, you can do an update to populate the new column and verify it is being updated on all replicas.
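For illustration, the change script itself can be as small as a single statement. This sketch assumes the new column is named dummy and is of type text; adjust to taste:

   -- changes20040919.sql: run on every node by EXECUTE SCRIPT
   ALTER TABLE public.artfile ADD COLUMN dummy text;

Once the event has propagated, an UPDATE on the master followed by a SELECT of the dummy column on each replica is a quick way to confirm the new column is live everywhere.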


Contributors: elein at
Schema Maintenance
Organizing Schema Updates 24-Sep-2004

Managing schema updates in a sane way is difficult. It is imperative that the schema be reproducible after the initial design; usually this is done with incremental SQL scripts. The issues with schema upgrades multiply when development and production (and replication!) servers are involved.

One of the tools I use to determine whether a particular upgrade script is necessary is the following function. It determines whether a column is in a table or view. This is helpful for ALTER TABLE scripts which add new columns, or for reconstruction of views.

   CREATE OR REPLACE FUNCTION column_in_table( text, text, text)
   RETURNS boolean AS '
   SELECT CASE WHEN
      (SELECT attname::text 
        FROM pg_catalog.pg_attribute a 
           JOIN pg_catalog.pg_class c ON (a.attrelid = c.oid)
           JOIN pg_catalog.pg_namespace n ON (n.oid = c.relnamespace)
        WHERE a.attname = $3 AND pg_catalog.pg_table_is_visible(c.oid)
           AND c.relname = $2 AND n.nspname = coalesce($1, ''public''::text) 
      ) IS NOT NULL
   THEN ''t''::boolean
   ELSE ''f''::boolean
   END;
   ' LANGUAGE 'sql';
To select the value of this into a variable in a shell script you can use:
   x=`psql $db -Atc "select column_in_table(NULL, 'file_mod', 'file_control');"`
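A typical guard in an upgrade shell script then runs the ALTER TABLE script only when the column is missing. The script name here is hypothetical:

   # apply the upgrade only if file_mod.file_control is not there yet
   x=`psql $db -Atc "select column_in_table(NULL, 'file_mod', 'file_control');"`
   if [ "$x" = "f" ]; then
      psql $db -f add_file_control.sql
   fi

Because the function is idempotent to query, the same upgrade script can be re-run safely on development, production and replica databases.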

Contributors: elein at
Alternative quotes for plpgsql
Easier Constant Quoting 26-Sep-2004

For those of us who have struggled to escape many levels of single quoting over the years, the advent of dollar-quoting is a wonderful thing. Dollar-quoting enables you to delimit string constants with an alternative to single quotes: a $ followed by zero or more characters and another $.

	$$ ... some string $$
	$str$ ... some string $str$
The dollar-quotes may be nested as long as the tags within the $ signs are different and not interleaved.
	$$ This is a $q$'quoted'$q$ string $$

The body of a function, for example, is a string constant. You can now define your function enclosed in dollar-quotes and not bother escaping the single quotes within the function.

   create function hello_world ()
   returns text as $$
      select 'hello, world!';
   $$ language 'sql';
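For contrast, the same function written with old-style single-quote delimiters requires doubling every quote embedded in the body:

   create function hello_world ()
   returns text as '
      select ''hello, world!'';
   ' language 'sql';

With only one quoted string the difference is small, but every additional level of nesting doubles the quotes again, which is exactly the pain dollar-quoting removes.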
That example is trivial; however, in more complex functions where escaping characters is a significant burden, using dollar-quoting makes a big difference. This is especially true for dynamic queries, where you must build an SQL command, including quotes, within a string constant itself.

This handy little function shows comments on tables and functions. The function body is declared within $$ quotes, and the various string constants within it are enclosed with $qn$ tags. I numbered the matching quotes to make them easier to match up correctly. Within the function, a parameter passed as a table or function name must be single quoted inside the query string, as must the name of the catalog table the lookup depends on. (Can you simplify these strings further?)

   create type comments_t AS (thing text, oname text, comment text);

   create or replace function get_comment(text, text)
   returns setof comments_t as $$
   DECLARE
      ret comments_t%ROWTYPE;
      rec RECORD;
      objtype alias for $1;
      iname alias for $2;
      tbl text;
      qry text;
   BEGIN
      ret.thing := objtype;
      IF objtype = 'table' THEN
         tbl := 'pg_class';
         qry := $q1$select relname as oname , obj_description( p.oid, '$q1$ || tbl ||
            $q2$') as comm from pg_class p where relname = '$q2$ || iname || $q3$';$q3$;
      ELSIF objtype = 'function' THEN
         tbl := 'pg_proc';
         qry := $q4$select proname || oidvectortypes(proargtypes) as oname,
            obj_description( oid, '$q4$ || tbl || $q5$' ) as comm from $q5$ ||
            tbl || $q6$ where proname = '$q6$ || iname || $q7$';$q7$;
      ELSE
         RAISE EXCEPTION $q$USAGE: get_comment( 'table | function', object_name )$q$;
      END IF;
      FOR rec IN EXECUTE qry LOOP
         ret.oname := rec.oname;
         ret.comment := rec.comm;
         RETURN NEXT ret;  
      END LOOP;
      RETURN;
   END;
   $$ LANGUAGE plpgsql;

   COMMENT ON FUNCTION get_comment(text,text) IS
   $q$USAGE: get_comment( 'table | function', object_name )$q$;

The output of the get_comment function is helpful when looking for overloaded functions and for understanding what the functions do.

   db=# select * from get_comment('function','date');
     thing   |               oname               |                 comment                  
   ----------+-----------------------------------+------------------------------------------
    function | date(text)                        | convert text to date
    function | date(abstime)                     | convert abstime to date
    function | date(timestamp without time zone) | convert timestamp to date
    function | date(timestamp with time zone)    | convert timestamp with time zone to date
   (4 rows)

   db=# select * from get_comment('function','circle');
     thing   |              oname              |              comment               
   ----------+---------------------------------+------------------------------------
    function | circle(box)                     | convert box to circle
    function | circle(polygon)                 | convert polygon to circle
    function | circle(point, double precision) | convert point and radius to circle
   (3 rows)

Not all tables and functions have comments on them. However, when you create tables and functions, adding comments to them with the COMMENT ON statement is very helpful in the long run.

Contributors: elein at
Vacuum Options
Vacuum Full, Analyze, Lazy, Auto 06-Oct-2004

Vacuum reclaims the space occupied by deleted rows in PostgreSQL. It is required because PostgreSQL uses a lazy deletion system: tuples are marked for deletion but the space is not actually reclaimed until VACUUM is run.

VACUUM without FULL does not take exclusive locks, so it can be used on active databases. VACUUM FULL does more rearranging of data to compact the table; it requires an exclusive lock on each table and is slower because it does more work.
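The common invocations look like this (the table name is an assumption for illustration):

   VACUUM artfile;           -- reclaim space; no exclusive lock, safe on a live database
   VACUUM ANALYZE artfile;   -- also refresh the planner statistics for the table
   VACUUM FULL artfile;      -- compact the table; takes an exclusive lock
   VACUUM VERBOSE;           -- whole database, with per-table activity reporting

Plain VACUUM and VACUUM ANALYZE are the forms to schedule routinely; VACUUM FULL is an occasional maintenance operation.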

In the postgresql.conf file the variable DEFAULT_STATISTICS_TARGET sets the maximum number of entries in the most-common-value list and the maximum number of bins for the histograms used in calculating the statistics for all tables. The default value is 10, but may be changed to a value from 0 to 1000. A value of 0 effectively (and unwisely) disables statistics. A higher value will take more space and time during an ANALYZE or VACUUM ANALYZE command. The higher value will also create more accurate statistics and therefore more accurately optimized queries.

It is possible to tune the statistics for particular columns in tables by using the ALTER TABLE command.

This setting overrides the postgresql.conf DEFAULT_STATISTICS_TARGET setting for the column specified. For columns never used in a WHERE, GROUP BY or ORDER BY clause, this value can be set lower. For columns often used, it could be higher with the caveat that ANALYZE will be a tiny bit more costly. To unset the statistics setting on a particular column and have it use the setting in DEFAULT_STATISTICS_TARGET, use ALTER TABLE to set the value to -1.
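For example, to raise the statistics target on a heavily filtered column and later return it to the server default (the table and column names here are assumptions):

   -- gather finer-grained statistics for a column that appears in many WHERE clauses
   ALTER TABLE artfile ALTER COLUMN pubdate SET STATISTICS 100;

   -- revert the column to DEFAULT_STATISTICS_TARGET
   ALTER TABLE artfile ALTER COLUMN pubdate SET STATISTICS -1;

The new target takes effect at the next ANALYZE or VACUUM ANALYZE of the table.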

VACUUM VERBOSE can help you set your free space mapping variables in the postgresql.conf. This technique is described in the first article in Issue #79. It is important to notice when plain VACUUM does not reclaim space and yet VACUUM FULL does reclaim space; this may indicate that your free space map settings are too low. Setting Free Space Mappings is discussed in the sixth article in Issue #13.

Contributors: elein at

Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!
Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009
