July 3, 2003 Copyright 2003 Shridhar Daithankar and Josh Berkus.
Authorized for re-distribution only under the
PostgreSQL license (see www.postgresql.org/license).
1 Introduction
2 Some basic parameters
2.1 Shared buffers
2.2 Sort memory
2.3 Effective Cache Size
2.4 Fsync and the WAL files
3 Some less known parameters
3.1 random_page_cost
3.2 vacuum_mem
3.3 max_fsm_pages
3.4 max_fsm_relations
3.5 wal_buffers
4 Other tips
4.1 Check your file system
4.2 Try the Auto Vacuum daemon
4.3 Try FreeBSD
5 The CONF Setting Guide
It also assumes that you have gone through the PostgreSQL administration manual before starting, and that you have set up your PostgreSQL server with at least the default configuration.
There are two important things for any performance optimization:
If you don't know your expected level of performance, you will end up chasing a carrot that is always a couple of meters ahead of you. Performance tuning measures give diminishing returns after a certain threshold; if you don't set that threshold beforehand, you will end up spending a lot of time for minuscule gains.
This document focuses entirely on tuning postgresql.conf for your existing setup. This is not the end of performance tuning: after using this document to extract the maximum reasonable performance from your hardware, you should start optimizing your application for efficient data access, which is beyond the scope of this article.
Please also note that the tuning advice described here consists of hints. You should not implement all of it blindly. Tune one parameter at a time, test its impact, and decide whether or not you need more tuning. Testing and benchmarking are an integral part of database tuning.
Tuning the software settings explored in this article is only about one-third of database performance tuning, but it's a good start since you can experiment with some basic setting changes in an afternoon, whereas some other aspects of tuning can be very time-consuming. The other two-thirds of database application tuning are:
Databases are heavily bound by your system's I/O (disk) access and memory usage. As such, the selection and configuration of disks, RAID arrays, RAM, and operating system, and the competition for these resources, will have a profound effect on how fast your database is. We hope to have a later article covering this topic.
Your application also needs to be designed to access data efficiently, through careful query writing, planned and tested indexing, good connection management, and avoidance of the performance pitfalls particular to your version of PostgreSQL. Expect another guide someday to help with this, but really it takes several large books and years of experience to get it right ... or just a lot of time on the mailing lists.
This is the area of memory PostgreSQL actually uses to perform work. It should be large enough to handle the load on the database server; otherwise PostgreSQL will start pushing data out to file, which hurts overall performance. Hence this is the most important setting one needs to tune.
This value should be set based on the dataset size that the database server is supposed to handle at peak load, and on your available RAM (keep in mind that RAM used by other applications on the server is not available). The rule of thumb is to keep this parameter modest relative to total RAM and let the operating system do most of the caching, as the note below explains.
PLEASE NOTE: PostgreSQL counts on the OS to cache data files and hence does not bother duplicating its file caching effort. The shared buffers parameter assumes that the OS is going to cache a lot of files, and is therefore generally set very low compared with system RAM. Even for a dataset in excess of 20GB, a setting of 128MB may be too much if you have only 1GB of RAM and an aggressive-at-caching OS like Linux.
There is one way to decide what is best for you: set a high value for this parameter and run the database under typical usage, watching shared memory usage with ipcs or similar tools. A recommended figure is between 1.2 and 2 times peak shared memory usage.
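As an illustration, here is a minimal sketch of this procedure on a 7.3/7.4 server; the starting figure is an assumption for a machine with roughly 1GB of RAM, not a recommendation for your workload:

    # postgresql.conf -- on 7.3/7.4, shared_buffers is expressed in
    # 8KB disk pages, so 4096 pages = 32MB of shared memory.
    shared_buffers = 4096

    # While the server runs under typical load, watch actual shared
    # memory usage from the shell:
    #   $ ipcs -m
    # Then size shared_buffers at 1.2 to 2 times the observed peak.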
Say there is 1.5GB of RAM in your machine, shared buffers are set to 32MB, and the effective cache size is set to 800MB. If a query needs a 700MB data set, PostgreSQL will estimate that all the data required should be available in memory and will opt for a more aggressive plan in terms of optimization, involving heavier index usage and merge joins. But if the effective cache is set to only 200MB, the query planner is liable to opt for the more I/O-efficient sequential scan.
While setting this parameter, leave room for other applications running on the server machine. The objective is to set this value at the highest amount of RAM which will be available to PostgreSQL all the time.
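For example, the 800MB figure from the scenario above translates as follows on 7.3/7.4, where effective_cache_size is measured in 8KB disk pages; this is just the worked arithmetic from the example, not a recommendation:

    # effective_cache_size is in 8KB pages: 800MB / 8KB = 102400
    effective_cache_size = 102400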
If that's not an option for you, you can still have the protection of WAL and better performance. Simply move your WAL files to a separate disk or array from your main database files, using either a mount or a symlink for the pg_xlog directory. In high-write-activity databases, WAL should have its own disk or array to ensure continuous high-speed access. Very large RAID arrays and SAN/NAS devices frequently handle this for you through their internal management systems.
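A minimal sketch of the symlink approach; the paths here are assumptions, so adjust them for your own data directory and mount point, and stop the postmaster first:

    # With the server stopped, relocate the WAL to its own disk
    $ pg_ctl -D /var/lib/pgsql/data stop
    $ mv /var/lib/pgsql/data/pg_xlog /mnt/waldisk/pg_xlog
    $ ln -s /mnt/waldisk/pg_xlog /var/lib/pgsql/data/pg_xlog
    $ pg_ctl -D /var/lib/pgsql/data start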
If you have a database that does lots of updates and deletes, it is going to generate dead tuples, due to PostgreSQL's MVCC system. The space occupied by dead tuples can be freed with vacuum, unless there is more wasted space than is covered by the Free Space Map, in which case the much less convenient "vacuum full" is required. By expanding the FSM to cover all of those dead tuples, you might never again need to run vacuum full except on holidays.
The best way to set max_fsm_pages is interactive. First, figure out the regular vacuum frequency of your database based on write activity. Next, run the database under normal production load, and run "vacuum verbose analyze" instead of vacuum, saving the output to a file. Finally, calculate the maximum total number of pages reclaimed between vacuums based on that output, and use it.
Remember, this is a cluster-wide setting, so bump it up enough to cover all databases in your database cluster. Also, each FSM page uses 6 bytes of RAM for administrative overhead, so increasing FSM substantially on systems that are low on RAM may be counter-productive.
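As a sketch of the resulting settings, with hypothetical figures standing in for what your own "vacuum verbose" output would show:

    # Suppose the vacuum verbose output shows at most ~50000 pages
    # reclaimed between vacuums, across all databases in the cluster:
    max_fsm_pages = 50000       # ~50000 * 6 bytes = ~300KB RAM overhead
    max_fsm_relations = 1000    # relations whose free space is tracked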
Contrary to popular belief, today's journaling file systems are not necessarily slower than non-journaling ones. Ext2 can be faster on some setups, but its recovery issues generally make its use prohibitive. Different people have reported widely varying experiences with the speed of Ext3, ReiserFS, and XFS; quite possibly this kind of benchmark depends on a combination of file system, disk/array configuration, OS version, and database table size and distribution. As such, you may be better off sticking with the file system best supported by your distribution, such as ReiserFS for SuSE Linux or Ext3 for Red Hat Linux, not forgetting XFS, which is known for its large-file support. Of course, if you have time to run comprehensive benchmarks, we would be interested in seeing the results!
As an easy performance boost with no downside, make sure the file system on which your database is kept is mounted "noatime", which turns off access-time bookkeeping.
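A sketch of what this looks like in practice; the device name and mount point are assumptions for your system:

    # /etc/fstab -- mount the database file system with noatime
    /dev/hdb1   /var/lib/pgsql   ext3   defaults,noatime   0 2

    # Or remount a live file system without rebooting:
    $ mount -o remount,noatime /var/lib/pgsql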
Essentially, the auto vacuum daemon (pg_autovacuum, found in the contrib directory of 7.4) will vacuum the database when it needs it. This does away with juggling cron settings for vacuum frequency, and it should result in better database performance by eliminating overdue-vacuum issues.
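For contrast, this is the kind of cron entry the daemon makes unnecessary; the schedule is an arbitrary example, and vacuumdb ships with PostgreSQL:

    # crontab: vacuum and analyze all databases nightly at 3am
    0 3 * * *  vacuumdb --all --analyze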
OSes from the BSD family, such as FreeBSD, dynamically alter the I/O priority of a process. So if you lower the priority of a vacuum process, it should not chew up as much bandwidth and will better allow the database to perform normally. Of course, this means that vacuum could take longer, which would be problematic for a "vacuum full".
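A minimal sketch of lowering a vacuum's priority from the shell on FreeBSD; the nice level is an arbitrary choice:

    # A nice'd process on FreeBSD also receives lower I/O priority
    $ nice -n 20 vacuumdb --all --analyze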
If you have not yet settled on an OS for your server platform, consider BSD for this reason.
As noted in the worksheet, it covers PostgreSQL versions 7.3 and 7.4. If you are using an earlier version, you will not have access to all of these settings, and defaults and effects of some settings will be different.