Annotated postgresql.conf and Global User Configuration (GUC) Guide

Each entry below lists the following fields, in this order:

Conf Setting

Range

Resources

Default

SET (can the value be changed at runtime with SET?)

-o (equivalent command-line switch)

Documentation

Comments

Connections and Authentication

Connection Settings

tcpip_socket

true, false

none

false

No

-i

If this is true, then the server will accept TCP/IP connections. Otherwise only local Unix domain socket connections are accepted.

Unless this is a test-only server, you probably want to set this to true. Do so after configuring your pg_hba.conf file for secure access.

max_connections

2 to Int Max

14k RAM

32

No

-N #

Determines the maximum number of concurrent connections to the database server. The default is 32 (unless altered while building the server).

An important setting. Keep it as low as possible for your application configuration; if you're running a web app with persistent connections, you may be able to lower it to 10-16.

superuser_reserved_connections

0 To max_connections - 1

Reduces regular connections available

2

No


Determines the number of "connection slots" that are reserved for connections by PostgreSQL superusers. At most max_connections connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least max_connections minus superuser_reserved_connections, new connections will be accepted only from superuser accounts.

This is a new setting to protect superuser access in case of a maxed-out database. Do not set it to 0 unless you are very sure that connections to your database cannot be swamped. I generally set it to 1, as I only connect to the database as the superuser in the event of a problem.

port

129 to 32768

none

5432

No

-p #

The TCP port the server listens on.

Changing the PostgreSQL port can provide weak protection against future database worms and script kiddies. However, you will need to remember to give port options on all connecting software and libraries, which can be a pain.

unix_socket_directory



''

No

-k $

Specifies the directory of the Unix-domain socket on which the server is to listen for connections from client applications. The default is normally /tmp, but can be changed at build time.

These options apply to Unix domain socket connections to the server, usually used only for a command-line connection from the console of the server machine. Since I generally leave local socket connections at their defaults, I have no recommendations here.

unix_socket_group



''

No


Sets the group owner of the Unix domain socket. (The owning user of the socket is always the user that starts the server.) In combination with the option UNIX_SOCKET_PERMISSIONS this can be used as an additional access control mechanism for this socket type. By default this is the empty string, which uses the default group for the current user.


unix_socket_permissions



0777

No


Sets the access permissions of the Unix domain socket. Unix domain sockets use the usual Unix file system permission set. The option value is expected to be a numeric mode specification in the form accepted by the chmod and umask system calls. (To use the customary octal format the number must start with a 0 (zero).)


Security and Authentication

authentication_timeout

1-600 sec


60

No


Maximum time to complete client authentication, in seconds. If a would-be client has not completed the authentication protocol in this much time, the server breaks the connection. This prevents hung clients from occupying a connection indefinitely. This option can only be set at server start or in the postgresql.conf file.


ssl

true, false

See notes

false

No

-l

Enables SSL connections.

SSL encrypts traffic over the TCP/IP port so that the data transferred cannot be read in transit. The encryption process adds CPU load on both client and server, and it can also increase the amount of data transferred. It can be tricky to configure and troubleshoot, and not all client software supports SSL access.

krb_server_keyfile



''

No


Sets the location of the Kerberos server key file.

Only used for Kerberos authentication of users.

virtual_host



''

No

-h x

Specifies the TCP/IP host name or address on which the postmaster is to listen for connections from client applications. Defaults to listening on all configured addresses (including localhost).

Essential for databases in a secure network with a DMZ, or where the database server has a public and a private TCP/IP address.

db_user_namespace

true, false


false

No


This allows per-database user names. It is off by default.

By default, users in PostgreSQL can connect to any database. This may not be an acceptable scenario in some cases. This option can be used in such cases. This feature is intended as a temporary measure until a complete solution is found. At that time, this option will be removed.

Resource Usage

Memory

Please note: Raising many of these options will require you to set kernel options for your host operating system to increase the amount of memory allowed per process or per user. See the online documentation for suggested commands for various operating systems. Unless otherwise noted, all memory options are additive to determine the total memory used by PostgreSQL.

shared_buffers

16 to Int Max

8K RAM

64

No

-B x

Sets the number of shared memory buffers used by the database server. Minimum is 2 X max_connections

Sets the size of PostgreSQL's memory buffer where queries are held before being fed into the kernel buffer of the host system. It's very important to remember that this is only a holding area, and not the total memory available for the server. As such, resist the urge to set this number to a large portion of your RAM, as this will actually degrade performance on many operating systems. Members of the pgsql-performance mailing list have found useful values in the range of 1000-6000, depending on available RAM, database size, and number of concurrent queries. For servers with very large amounts of available RAM (more than 1 GB), increasing this setting to 6-15% of available RAM has worked well for some users. The precise best setting is not fully understood and is more readily determined through testing than by calculation.

As a rule of thumb, observe shared memory usage of PostgreSQL with tools like ipcs and adjust the setting accordingly. Remember that this is only half the story: you also need to set effective_cache_size so that PostgreSQL will use available memory optimally.
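
To make the units concrete, here is a minimal psql check of the current setting, with the arithmetic in comments; the figure of 4096 is purely illustrative, not a recommendation:

    SHOW shared_buffers;
    -- shared_buffers is counted in 8 kB buffers, so a value of 4096 means
    -- 4096 x 8 kB = 32 MB of shared memory. This parameter cannot be changed
    -- with SET; it must be edited in postgresql.conf and the server restarted.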

sort_mem

64 to Int Max

1 KB

1024

Yes

-S #

Specifies the amount of memory to be used by internal sorts and hashes before switching to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that for a complex query, several sorts might be running in parallel, and each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, each running backend could be doing one or more sorts simultaneously, so the total memory used could be many times the value of SORT_MEM. Sorts are used by ORDER BY, merge joins, and CREATE INDEX.

Command-line options require use of -o “option”.

sort_mem is much more difficult to set. Adjust it upwards for: large databases, complex queries, lots of available RAM. Adjust it downwards for: low available RAM, or many concurrent users. Finding the right balance can be hard.

Another way to set this value is to monitor the PostgreSQL temp files (in PGDATA/base/DB_OID/pgsql_tmp) and adjust sort_mem upward if you see a lot of queries spilling to these temp files.

Also keep in mind that this parameter can be adjusted per connection. So if you only have a few really large processes, you can increase the sort_mem for them before query execution, and leave it low for the rest of the connections.
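
A sketch of that per-connection approach; the table and columns ("orders", "customer_id", "amount") are hypothetical, and the 32 MB figure is illustrative:

    SET sort_mem = 32768;              -- 32 MB, in kilobytes, for this session only
    SELECT customer_id, sum(amount)    -- a large sort/aggregate that would
      FROM orders                      -- otherwise spill to pgsql_tmp files
     GROUP BY customer_id
     ORDER BY sum(amount) DESC;
    RESET sort_mem;                    -- return to the postgresql.conf default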

vacuum_mem

1024 to Int Max

1 KB

8192

Yes


Specifies the maximum amount of memory to be used by VACUUM to keep track of to-be-reclaimed tuples. The value is specified in kilobytes, and defaults to 8192 kilobytes. Larger settings may improve the speed of vacuuming large tables that have many deleted tuples.

As this setting only uses RAM when VACUUM is running, you may wish to increase it on high-RAM machines to make VACUUM run faster (but never more than 20% of available RAM!)

Further, this setting can be SET at runtime, so a good approach is often to set it low for the frequent regular VACUUMs, and set it high for the nightly/weekly/periodic VACUUM FULL.
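
For example, the low-daily / high-periodic pattern described above might look like this in a nightly maintenance script (the 64 MB figure is illustrative and should respect the 20%-of-RAM caution):

    SET vacuum_mem = 65536;   -- 64 MB, in kilobytes, for this session only
    VACUUM FULL ANALYZE;      -- the heavyweight periodic vacuum
    RESET vacuum_mem;         -- frequent regular VACUUMs keep the low default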

Free Space Map

max_fsm_pages

1000 to Int Max

6 bytes RAM

10000

No


Sets the maximum number of disk pages for which free space will be tracked in the shared free-space map. The default is 10000. This option can only be set at server start.

A proper FSM setting can eliminate or at least postpone your need to run VACUUM FULL. The best way to set it is as follows: 1) figure out the VACUUM (regular) frequency of your database based on write activity; 2) run the database under normal production load, and run VACUUM VERBOSE ANALYZE instead of VACUUM, saving the output to a file; 3) calculate the maximum total number of pages reclaimed between VACUUMs based on the output, and use that.

If the above approach is impractical, try to estimate the total number of rows which are likely to be updated or deleted between VACUUMs, and use half of that number, increasing it based on the amount of work your periodic VACUUM FULL is still doing.

Please note that for databases with high “peak” activity (bursts of 1 million updates but nothing else for minutes or hours), this number can be impossible to tune perfectly. Inserted rows are not significant for FSM. Finally, if your database server is short on RAM, increasing FSM to needed values may be counter-productive.
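
A minimal sketch of step 2 of the procedure above; run it from psql under normal production load and capture the output (for example with psql's \o command) so the page counts can be totalled later:

    -- Replace the regular VACUUM with the verbose form while gathering data:
    VACUUM VERBOSE ANALYZE;
    -- The per-table output reports how many pages held removable tuples and
    -- free space; sum those figures between VACUUM runs to size max_fsm_pages.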

max_fsm_relations

10 to Int Max

40 bytes RAM

100

No


Sets the maximum number of relations (tables) for which free space will be tracked in the shared free-space map.

You definitely want to increase this to cover the expected number of tables in all databases ... say, 300 to 500. PostgreSQL develops odd performance quirks if it does not have enough FSM relations.

Disk Resource Usage

max_files_per_process

25 to Int Max


1000

No


Sets the maximum number of simultaneously open files in each server subprocess. The default is 1000. The limit actually used by the code is the smaller of this setting and the result of sysconf(_SC_OPEN_MAX). Therefore, on systems where sysconf returns a reasonable limit, you don't need to worry about this setting. But on some platforms (notably, most BSD systems), sysconf returns a value that is much larger than the system can really support when a large number of processes all try to open that many files. If you find yourself seeing "Too many open files" failures, try reducing this setting. This option can only be set at server start or in the postgresql.conf configuration file; if changed in the configuration file, it only affects subsequently-started server subprocesses.

Per the docs, mainly used for BSD. Don't bother with it unless you get a “too many files” message.

preload_libraries

File path

See notes

Empty

No


This variable specifies one or more shared libraries that are to be preloaded at server start. An initialization function can also be optionally specified by adding a colon followed by the name of the initialization function after the library name. For example '$libdir/mylib:init_mylib' would cause mylib to be preloaded and init_mylib to be executed. If more than one library is to be loaded, they must be delimited with a comma. If mylib is not found, the server will fail to start. However, if init_mylib is not found, mylib will still be preloaded without executing the initialization function.

By preloading a shared library (and initializing it if applicable), the library startup time is avoided when the library is first used. This effectively trades a longer server startup time for shorter delays when a library not already in memory is first called.

WAL Options

Settings

fsync

true, false

See notes

true

No

-F (off)

If this option is on, the PostgreSQL backend will use the fsync() system call in several places to make sure that updates are physically written to disk. This ensures that a database installation will recover to a consistent state after an operating system or hardware crash. (Crashes of the database server itself are not related to this.)

NOTE: Since 7.2, turning fsync off does NOT stop WAL; it does stop checkpointing, however. This is a change reflected in the notes that follow. Turn fsync off (fsync=false) only for a read-only database or one where the database can be regenerated from external software. While RAID plus UPSes can do a lot to protect your data, turning off fsync means that you will be restoring from backup in the event of hardware or power failure.

On the other hand, WAL imposes significant penalties on database writes, especially in single-disk systems. Essentially you are doubling the amount of read/write activity required for each update, and it requires you to disable performance-enhancing disk-caching features of your OS and hardware.

If WAL is off, the rest of the options in this section are irrelevant.

wal_sync_method

fsync, fdatasync, open_sync, or open_datasync


Varies by platform

No


Method used for forcing WAL updates out to disk. Possible values are FSYNC (call fsync() at each commit), FDATASYNC (call fdatasync() at each commit), OPEN_SYNC (write WAL files with open() option O_SYNC), or OPEN_DATASYNC (write WAL files with open() option O_DSYNC). Not all of these choices are available on all platforms.

The system call used to sync the WAL to disk. Defaults have been set for each OS based on OS documentation, but no in-depth comparative tests have been posted. It's possible that changing the method could improve write speed on your platform, but don't monkey with it unless you have the time and resources to run comparative and failure tests. If you change the defaults, WAL may not protect you adequately.

wal_buffers

4 to Int Max

8K RAM

8

No


Number of disk-page buffers in shared memory for WAL logging.

Raising this setting can improve the speed of WAL writes for large transactions. To date, no one has posted an analysis of the impact of this setting.

Checkpoints

checkpoint_segments

1 to Int Max

16 MB on disk.

3

No


Maximum distance between automatic WAL checkpoints, in log file segments (each segment is normally 16 megabytes).

Increase these settings if your database has lots of large batch writes, to decrease the frequency of checkpoints (and thus lower disk activity). Decrease them if you are short on disk space or your environment has a significant risk of unexpected power-outs, as any un-checkpointed transactions will be dropped on restart.

checkpoint_timeout

30 to 3600

See notes

300

No


Maximum time between automatic WAL checkpoints, in seconds.


checkpoint_warning

0 to Int Max

See notes

0

No


Send a message to the server logs if checkpoints caused by the filling of checkpoint segment files happen more frequently than this number of seconds. Zero turns off the warning.

A new option which should be quite useful for tuning checkpoint_segments on databases with heavy write activity. If you see this message often and regularly, you should consider increasing checkpoint_segments.

commit_delay

0-100000

See notes

0

Yes


Time delay between writing a commit record to the WAL buffer and flushing the buffer out to disk, in microseconds. A nonzero delay allows multiple transactions to be committed with only one fsync system call, provided that system load is high enough that additional transactions become ready to commit within the given interval. But the delay is just wasted if no other transactions become ready to commit. Therefore, the delay is only performed if at least COMMIT_SIBLINGS other transactions are active at the instant that a backend process has written its commit record.

These two settings are configured together for an environment with a high volume of small transactions. When set, they allow a group of otherwise unrelated transactions to be flushed to disk at the same time, with a possibly significant performance gain; a short illustration follows the commit_siblings entry below. Don't consider it, though, if you are short on disk space or in an unstable-power environment.

commit_siblings

1 -1000

See notes

5

Yes


Minimum number of concurrent open transactions to require before performing the COMMIT_DELAY delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval.
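
As a rough illustration of how the two settings work together, both can be trialled per session before being written into postgresql.conf; the values below are starting points for testing, not recommendations:

    SET commit_delay = 5000;     -- wait up to 5 ms (the value is in microseconds)
    SET commit_siblings = 10;    -- ...but only if 10+ other transactions are open
    -- With these in place, a burst of small INSERT/UPDATE transactions can be
    -- flushed with a single fsync() instead of one per commit.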


Query Tuning

Planner Methods

enable_hashagg

enable_hashjoin

enable_indexscan

enable_mergejoin

enable_nestloop

enable_seqscan

enable_sort

enable_tidscan

true, false


true

Yes


-fi
-fm
-fn
-fs

-ft*

Enables or disables the query planner's use of the respective plan types. The default is on. This is used for debugging the query planner.

Command-line options require use of -o “option”.

These options are pretty much only for use in query testing; frequently one sets “enable_seqscan = false” in order to determine if the planner is unnecessarily discarding an index, for example. However, it would require very unusual circumstances to change any of them to false in the .conf file.
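
A typical debugging session of the kind described above; the table, column, and index named here are hypothetical:

    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    -- If the plan shows a sequential scan where you expected idx_orders_customer:
    SET enable_seqscan = false;        -- make seq scans prohibitively expensive
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    SET enable_seqscan = true;         -- always restore the default afterwards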




Planner Cost Constants

effective_cache_size

0 to Int Max

8 KB

1000

Yes


Sets the optimizer's assumption about the effective size of the disk cache (that is, the portion of the kernel's disk cache that will be used for PostgreSQL data files). This is measured in disk pages, which are normally 8 kB each.

An oft-overlooked setting that can result in better use of RAM on high-memory computers if raised to a correct level. I use 25% of my “available RAM” (after Linux and applications) as a general level, but have not tested extensively. Please note that if this setting is tuned correctly, there is no need to increase shared_buffers unnecessarily.
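
For example, under that rule of thumb, and assuming very roughly 1 GB of RAM is expected to sit in the kernel's disk cache, the arithmetic works out as follows (8 kB pages):

    -- 1 GB / 8 kB per page = 131072 pages
    SET effective_cache_size = 131072;   -- may also be set in postgresql.conf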

random_page_cost

cpu_tuple_cost

cpu_index_tuple_cost

cpu_operator_cost

0 to Double


4
0.01
0.001
0.0025

Yes


Sets the query optimizer's estimate of the cost of processing each page lookup, tuple, index lookup, and where clause item (respectively) during a query. This is measured as a fraction of the cost of a sequential page fetch.

The default costs are based entirely on anecdotal experience, and are probably not ideal for your system. For example, machines with very fast seeking disk arrays should probably lower the random_page_cost. Remember, though, that when testing values for these settings you need to test a variety of queries, as the right setting for one kind of query can hurt another. Fortunately, since these settings are all relative to the cost of a sequential fetch, they are unlikely to be too far off in a balanced system.
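
When experimenting, change one constant at a time and re-run a representative mix of queries; a minimal sketch, with an illustrative value and hypothetical tables:

    SET random_page_cost = 2;     -- hypothesis: fast disk array, cheap seeks
    EXPLAIN ANALYZE
    SELECT o.customer_id, c.name
      FROM orders o JOIN customers c ON c.id = o.customer_id
     WHERE o.order_date > '2003-01-01';
    -- Compare plan choices and run times against the default of 4 before
    -- committing the change to postgresql.conf.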

Genetic Estimate Query Optimizer

geqo

true, false


true



Enables or disables genetic query optimization, which is an algorithm that attempts to do query planning without exhaustive searching. This is on by default. See also the various other GEQO_ settings.

GEQO was introduced in PostgreSQL 6.5 as a way of dealing with join optimization for queries with too many tables for an exhaustive analysis by the planner. It was quite revolutionary at the time, but was never fully optimized, and has been relatively indifferently maintained since then. Since newer, faster CPUs and more (and faster) RAM have made exhaustive query planning more affordable, I raise the GEQO threshold, usually to 20-25 tables. This prevents GEQO from being used except in the situations where it is really the only option. Of course, if you are running PostgreSQL on a machine with limited (available) CPU power, you may want to use a lower GEQO threshold than that.

If you are certain that you will never have a query that complex, you can just turn it off.
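
The adjustments described above look like this; both can be tested per session, though on a production server they would normally go in postgresql.conf:

    SET geqo_threshold = 25;   -- exhaustive planning for up to ~24 FROM items
    -- or, if you are certain no query will ever reach that complexity:
    SET geqo = false;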

geqo_threshold

2 to Int Max


11

Yes


Use genetic query optimization to plan queries with at least this many FROM items involved. (Note that a JOIN construct counts as only one FROM item.) The default is 11. For simpler queries it is usually best to use the deterministic, exhaustive planner. This parameter also controls how hard the optimizer will try to merge subquery FROM clauses into the upper query.

geqo_selection_bias

geqo_pool_size

geqo_effort

geqo_generations

geqo_random_seed

1.5-2.0


2.0
0
1
0
-1

Yes


Various tuning parameters for the genetic query optimization algorithm: The pool size is the number of individuals in one population. Valid values are between 128 and 1024. If it is set to 0 (the default) a pool size of 2^(QS+1), where QS is the number of FROM items in the query, is taken. The effort is used to calculate a default for generations. Valid values are between 1 and 80, 40 being the default. Generations specifies the number of iterations in the algorithm. The number must be a positive integer. If 0 is specified then Effort * Log2(PoolSize) is used. The run time of the algorithm is roughly proportional to the sum of pool size and generations. The selection bias is the selective pressure within the population. Values can be from 1.50 to 2.00; the latter is the default. The random seed can be set to get reproducible results from the algorithm. If it is set to -1 then the algorithm behaves non-deterministically.

Query and Index Statistics

stats_start_collector

stats_reset_on_server_start

stats_command_string

stats_row_level

stats_block_level

true, false


true
true
false
false
false

No


These flags determine what information backends send to the statistics collector process: current commands, block-level activity statistics, or row-level activity statistics. The command-string, block-level, and row-level flags all default to off. Enabling statistics collection costs a small amount of time per query, but is invaluable for debugging and performance tuning.

Eventually I will have an article on Techdocs explaining how to use the query statistics to tune your use of indexes. These are particularly useful for a “deductive” approach to indexing, where you index everything and then drop the indexes which are not used.

default_statistics_target

1 -1000


10

Yes


Sets the default statistics target for table columns that have not had a column-specific target set via ALTER TABLE SET STATISTICS. Larger values increase the time needed to do ANALYZE, but may improve the quality of the planner's estimates.

It can be useful to raise this target if your main tables all have very uneven distribution in often-queried columns. Raising it does make ANALYZE take longer.
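
Because the target can also be set per column, it is often better to raise it only for the unevenly-distributed columns rather than globally; a sketch with hypothetical table and column names:

    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 100;
    ANALYZE orders;                        -- rebuild statistics at the new target
    -- or raise the default for the current session / in postgresql.conf:
    SET default_statistics_target = 50;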

Other Query Modifiers

explain_pretty_print

true, false


false

Yes


Determines whether EXPLAIN VERBOSE uses the indented or non-indented format for displaying detailed query-tree dumps.

Try it and see. The extra formatting is useful for some people, but not for everyone.

from_collapse_limit

0 to Int Max


8

Yes


The planner will merge sub-queries into upper queries if the resulting FROM list would have no more than this many items. Smaller values reduce planning time but may yield inferior query plans. The default is 8. It is usually wise to keep this less than GEQO_THRESHOLD.


join_collapse_limit

1 to Int Max


8

Yes


The planner will flatten explicit inner JOIN constructs into lists of FROM items whenever a list of no more than this many items would result. Usually this is set the same as FROM_COLLAPSE_LIMIT. Setting it to 1 prevents any flattening of inner JOINs, allowing explicit JOIN syntax to be used to control the join order. Intermediate values might be useful to trade off planning time against quality of plan.

This option is designed for those of us who like writing our queries using explicit JOIN syntax (e.g. “a join b using (1) join c using (2)”), but would still like the planner to select the join order for best execution. Particularly, people switching from MS SQL Server will want to set this option to a moderately high value, as that database does JOIN collapsing automatically.
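
An illustration of the two behaviours, using hypothetical tables; with the limit at 1 the planner joins the tables in exactly the written order, while a higher limit lets it reorder them:

    SET join_collapse_limit = 1;           -- obey the written JOIN order
    SELECT *
      FROM a JOIN b USING (id)
             JOIN c USING (other_id);
    SET join_collapse_limit = 8;           -- back to the default: planner may reorder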

max_expr_depth

10 to Int Max


10000

Yes


Sets the maximum expression nesting depth of the parser. The default value is high enough for any normal query, but you can raise it if needed. (But if you raise it too high, you run the risk of backend crashes due to stack overflow.)

I cannot imagine anyone needing to raise this.

Logging and Messaging Options

Syslog

syslog

syslog_facility

syslog_ident

0-2


0
LOCAL0
postgres

No


PostgreSQL allows the use of syslog for logging. If this option is set to 1, messages go both to syslog and the standard output. A setting of 2 sends output only to syslog. (Some messages will still go to the standard output/error.) The default is 0, which means syslog is off. This option must be set at server start.

Very useful if you have syslog management tools of some sort. Otherwise it can make it difficult to separate PostgreSQL output from that of numerous other processes.

When to Log/Message

server_min_messages

client_min_messages

log_min_error_statement

debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, log, fatal, panic


notice

Yes



-d x

This controls how much message detail is written to the server logs. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Later values send less detail to the logs. The default is NOTICE. Note that LOG has a different precedence here than in CLIENT_MIN_MESSAGES.

Raising debug levels is always good for testing applications. The cost is greater use of disk space and a minor performance penalty for output (usually < 5%). However, the performance cost increases significantly if your logs are on the same disk/array as WAL or your database, as heavy debug output will take I/O away from database activity. The impact of debug5 on a high-transaction single-disk system can be quite high. This caution applies to all of the logging options below.
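
client_min_messages can be raised for a single psql session while debugging, without touching the server-wide settings; for example:

    SET client_min_messages = 'debug1';   -- chattier output in this session only
    -- server_min_messages affects the server logs for all sessions, so it is
    -- usually set in postgresql.conf instead.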

What to Log

debug_print_parse

debug_print_rewritten

debug_print_plan

debug_pretty_print


true, false


false

Yes


These flags enable various debugging output to be sent to the server log. For each executed query, print either the query text, the resulting parse tree, the query rewriter output, or the execution plan. DEBUG_PRETTY_PRINT indents these displays to produce a more readable but much longer output format.

Can be useful for detecting common slow queries if you are able to wade through the voluminous log output. Particularly useful in interactive log watching when procedures hang; you can sometimes see exactly what step hangs (sometimes you can't, though, because the log waits on the database).

log_connections

log_pid

log_statement

log_duration

log_timestamp

true, false


false

No
No
Yes
Yes
Yes


Logs respective items.

All can be useful depending on what problems you are expecting. I generally leave log_timestamp on and the others off.

NOTE: A patch was proposed which would allow users to turn logging options on, but only the superuser would be able to turn them off. I don't know whether this patch will make it into 7.4 or not.

hostname_lookup

7.4: log_hostname

true, false

See notes

false

No


By default, connection logs only show the IP address of the connecting host. If you want it to show the host name you can turn this on, but depending on your host name resolution setup it might impose a non-negligible performance penalty.

This can be useful for debugging/security management, but if DNS is not local can delay new connections significantly.

show_source_port

7.4: log_source_port

true, false

See notes

false

No


Shows the outgoing port number of the connecting host in the connection log messages. You could trace back the port number to find out what user initiated the connection. Other than that, it's pretty useless and therefore off by default.

Imposes a significant but unmeasured performance penalty due to lookup as well as extra logging activity.

Statistics Logging

show_parser_stats

show_planner_stats

show_executor_stats

show_statement_stats

7.4: log_parser_stats

log_planner_stats

log_executor_stats

log_statement_stats

true, false


false

No*

-tpa
-tpl
-te
-s








For each query, write performance statistics of the respective module to the server log. This is a crude profiling instrument.

Command-line options require use of -o “option”.

May be SET by superuser.

Client Connection Defaults

Statement Behavior

autocommit

removed in 7.4

true, false

None

true

Yes


If set to true, PostgreSQL will automatically do a COMMIT after each successful command that is not inside an explicit transaction block (that is, unless a BEGIN with no matching COMMIT has been given). If set to false, PostgreSQL will commit only upon receiving an explicit COMMIT command. This mode can also be thought of as implicitly issuing BEGIN whenever a command is received that is not already inside a transaction block. The default is true, for compatibility with historical PostgreSQL behavior. However, for maximum compatibility with the SQL specification, set it to false.

DO NOT USE. Inconsistent across clients and REMOVED in 7.4 for this reason.

search_path

path

None

'$user,public'

Yes


This variable specifies the order in which schemas are searched when an object (table, data type, function, etc.) is referenced by a simple name with no schema component. When there are objects of identical names in different schemas, the one found first in the search path is used. An object that is not in any of the schemas in the search path can only be referenced by specifying its containing schema with a qualified (dotted) name.

If your application makes heavy use of schemas, you can reverse this search path to make sure that public objects will override user-schema objects with the same name. Otherwise, leave it alone.
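
The reversal mentioned above, done per session (depending on your version it may also be settable per user or per database, or in postgresql.conf):

    SET search_path = public, "$user";   -- public objects now win name collisions
    SHOW search_path;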

default_transaction_isolation

read committed, serializable


See Note

'read committed'

Yes


Each SQL transaction has an isolation level, which can be either "read committed" or "serializable". This parameter controls the default isolation level of each new transaction. Consult the PostgreSQL User's Guide and the command SET TRANSACTION for more information.

The default, here, is the value that supports standard MVCC behavior. “Serializable” is mainly useful when you need to run long-running procedures which must execute in sequence, or when your updates pose a significant and regular risk of deadlock. Under a heavy multi-user load, setting “serializable” can impose a significant penalty, as numerous transactions are forced to wait for the serialized transaction to complete. In a single-concurrent-user database, there should be little effect.
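
Rather than changing the default for every connection, serializable mode can be requested for just the transactions that need it, using the SET TRANSACTION command the documentation refers to:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- long-running work that must see one consistent snapshot goes here
    COMMIT;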

default_transaction_read_only

true, false


false

Yes


A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default read-only status of each new transaction. The default is false (read/write).


statement_timeout

0 to Int Max

See Note

0

Yes


Aborts any statement that takes over the specified number of milliseconds. A value of zero turns off the timer.

Designed to help applications where it is possible for users to execute queries that swamp the CPU for minutes, such as apps that allow dynamic queries. Setting this value to a finite amount can prevent those users from monopolizing resources, but you'll need to be prepared to handle the resulting error.
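
A sketch of that per-connection guard; the application must catch the error PostgreSQL raises when the timer fires:

    SET statement_timeout = 60000;   -- 60 seconds, in milliseconds
    -- Any statement in this session running longer than a minute is aborted;
    -- a value of 0 (the default) disables the timer again:
    SET statement_timeout = 0;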

Locale and Formatting

datestyle

timezone

australian_timezones


None

'iso, us'
unknown
false

Yes


Sets the display format for dates, as well as the rules for interpreting ambiguous input dates.

Sets the time zone for displaying and interpreting timestamps. The default is to use whatever the system environment specifies as the time zone.

If set to true, CST, EST, and SAT are interpreted as Australian time zones rather than as North American Central/Eastern time zones and Saturday.

For changing the default display of dates and interpretation of timezones to suit your locality and/or organization standards.
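
For example, a session (or postgresql.conf) can be switched to ISO date output and a fixed time zone; the values shown are illustrative:

    SET datestyle = 'ISO';    -- YYYY-MM-DD output format
    SET timezone = 'UTC';     -- equivalent to SET TIME ZONE 'UTC'
    SELECT now();             -- verify the result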

extra_float_digits

-14 to 2

None

0

Yes


This parameter adjusts the number of digits displayed for floating-point values, including float4, float8, and geometric data types. The parameter value is added to the standard number of digits (FLT_DIG or DBL_DIG as appropriate). The value can be set as high as 2, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set negative to suppress unwanted digits.


lc_messages

lc_monetary

lc_time

lc_numeric

System-dependent

None

Special

Yes


Sets the locale to use for formatting error messages, monetary amounts, time and numeric values. Acceptable values are system-dependent; see Section 7.1 for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.

These settings are set by the initdb script when it creates your PGDATA directory. Should be set to your language, currency, etc.

client_encoding

OS-dependent

None

sql_ascii

Yes


Sets the client-side encoding for multi-byte character sets. The default is to use the database encoding.

Usually ignored in favor of database encoding. Would be set per client only for multi-lingual applications, which would then require considerable care to manage the different encodings.

Other Defaults

password_encryption

true, false

None

true

Yes


When a password is specified in CREATE USER or ALTER USER without writing either ENCRYPTED or UNENCRYPTED, this flag determines whether the password is to be encrypted.

Should remain set to true, for all practical purposes.
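
With the flag left at true, a plain CREATE USER stores an MD5 hash rather than the cleartext password; the user name and password below are placeholders:

    CREATE USER report_reader WITH PASSWORD 'change.me';
    -- equivalent, with the behaviour made explicit:
    -- CREATE USER report_reader WITH ENCRYPTED PASSWORD 'change.me';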

dynamic_library_path

path

None

'$libdir'

No*


If a dynamically loadable module needs to be opened and the specified name does not have a directory component (i.e. the name does not contain a slash), the system will search this path for the specified file. (The name that is used is the name specified in the CREATE FUNCTION or LOAD command.)

Can be SET by superuser.

Lock Management

deadlock_timeout

1 to Int Max

See Note

1000

No


This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition. The check for deadlock is relatively slow, so the server doesn't run it every time it waits for a lock. We (optimistically?) assume that deadlocks are not common in production applications and just wait on the lock for a while before starting to check for a deadlock. Increasing this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting of real deadlock errors. The default is 1000 (i.e., one second), which is probably about the smallest value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the setting should exceed your typical transaction time, so as to improve the odds that the lock will be released before the waiter decides to check for deadlock.


max_locks_per_transaction

10 to Int Max


64

No


The shared lock table is sized on the assumption that at most max_locks_per_transaction * max_connections distinct objects will need to be locked at any one time. The default of 64 has historically proven sufficient, but you might need to raise this value if you have clients that touch many different tables in a single transaction. This option can only be set at server start.


Version and Platform Compatibility

Previous PostgreSQL Versions

add_missing_from

true, false

None

true

Yes


Enables the planner to add a missing FROM clause item when you omit a table from your query. Will be false by default in future versions.


regex_flavor

advanced, extended, basic

None

advanced

Yes


The regular expression "flavor" can be set to advanced, extended, or basic. The usual default is advanced. The extended setting may be useful for exact backwards compatibility with pre-7.4 releases of PostgreSQL.


sql_inheritance

true, false

None

true

Yes


This controls the inheritance semantics, in particular whether subtables are included by various commands by default. They were not included in versions prior to 7.1. If you need the old behavior you can set this variable to off, but in the long run you are encouraged to change your applications to use the ONLY keyword to exclude subtables.


Platform and Client Compatibility

transform_null_equals

true, false

None

false

Yes


When turned on, expressions of the form expr = NULL (or NULL = expr) are treated as expr IS NULL, that is, they return true if expr evaluates to the null value, and false otherwise. The correct behavior of expr = NULL is to always return null (unknown).
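
A quick illustration of the difference, runnable as-is in psql:

    SELECT 1 WHERE NULL = NULL;        -- returns no row: the comparison is unknown
    SET transform_null_equals = true;
    SELECT 1 WHERE NULL = NULL;        -- now rewritten as NULL IS NULL: returns a row
    SET transform_null_equals = false;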


Source Developer Options

These options have been dropped from postgresql.conf in 7.4 because they are not useful user options, but are instead aimed at source developers. Most DBAs should have no reason to modify them.

wal_debug

0-16


0

No*


If nonzero, turn on WAL-related debugging output on standard error.

May be SET by superuser.

trace_notify

true, false


false

Yes


Generates a great amount of debugging output for the LISTEN and NOTIFY commands.


trace_locks

trace_userlocks

trace_lwlocks

debug_deadlocks

trace_lock_oidmin

trace_lock_table

log_btree_build_stats

true, false


false
false
false
false
16384
0
false

No


No documentation is available for these options at this time.


debug_assertions

true, false


true

Yes


Turns on various assertion checks. This is a debugging aid. If you are experiencing strange problems or crashes you might want to turn this on, as it might expose programming mistakes. To use this option, the macro USE_ASSERT_CHECKING must be defined when PostgreSQL is built (accomplished by the configure option --enable-cassert). Note that DEBUG_ASSERTIONS defaults to on if PostgreSQL has been built with assertions enabled.

Only useful if your PostgreSQL system is crashing, and then only if you are a source hacker.

zero_damaged_pages

true, false


false

No*


Detection of a damaged page header normally causes PostgreSQL to report an error, aborting the current transaction. Setting zero_damaged_pages to true causes the system to instead report a warning, zero out the damaged page, and continue processing. This behavior will destroy data, namely all the rows on the damaged page. But it allows you to get past the error and retrieve rows from any undamaged pages that may be present in the table. So it is useful for recovering data if corruption has occurred due to hardware or software error. You should generally not set this true until you have given up hope of recovering data from the damaged page(s) of a table. The default setting is off, and it can only be changed by a superuser.

May be SET by superuser.
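
A last-resort salvage session, per the warning above, might look like the following; the table name is hypothetical, and the data on damaged pages is irretrievably lost:

    SET zero_damaged_pages = true;     -- superuser only
    SELECT * FROM damaged_table;       -- damaged pages are zeroed as they are read
    -- dump what remains immediately, then turn the option back off:
    SET zero_damaged_pages = false;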


Settings in Italics are new for version 7.4 -- Drafted July 3, 2003 – Covers Versions 7.3 and 7.4 – © Josh Berkus 2003 – May be re-distributed only under the BSD license per www.postgresql.org/licence.html Page 20 of 21