An explanation was sought for the fact that the following queries are parsed differently depending on whether the constant value is quoted or not. The query qualifies on submit, a timestamp column.

    # select submit from invoices where submit <= 2003-03-09;
     submit
    --------
    (0 rows)

    # select submit from invoices where submit <= '2003-03-09';
          submit
    ---------------------
     2003-03-08 00:24:25
    (1 row)

This is obvious to those of us who sometimes use psql as a calculator. The first query selects for a date less than or equal to the integer 1991, that is (2003 - 03 - 09), while the second selects for a date on or before 9-Mar-2003. The parser does its best to evaluate the data types of the arguments, and in the first case it found a timestamp and an integer. Since there is no conversion from a timestamp to an integer, it settled on a comparison of submit::text and 1991::text, which did not find anything. In the second query, the argument '2003-03-09' was left untyped until the context required a type. That context was a comparison to a timestamp, so that is the first conversion tried on the constant value. If for some reason it did not succeed, others may be tried, as in the first query. This issue was addressed in a slightly different context in PostgreSQL General Bits Issue #12, Casting for bigint Queries.
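The ambiguity disappears if the constant is given an explicit type rather than being left for the parser to guess. A quick sketch, reusing the invoices example above (only the arithmetic result is shown; the explicitly typed comparisons behave just like the quoted version):

    # select 2003-03-09;
     ?column?
    ----------
         1991
    (1 row)

    # select submit from invoices where submit <= date '2003-03-09';
    # select submit from invoices where submit <= '2003-03-09'::timestamp;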
There is a table with two columns of identifying information and one or more values associated with them. For example, a table of student exercises in a particular area: the area and student are the identifying columns, and the exercise columns are the values associated with them.

    elein=# \d student_stats
        Table "public.student_stats"
          Column      |  Type   | Modifiers
    ------------------+---------+-----------
     area             | text    |
     student          | text    |
     day              | date    |
     exercises_tried  | integer |
     exercises_solved | integer |

What we want to know is the query that shows the top student in each area for a specified date interval. The simple summation only gets us so far. It will give us each person's total score and order the result so that the top student is the top row in each category, but the other rows are not really wanted.

    select area, student,
           sum(exercises_tried) as total,
           sum(exercises_solved) as succ
    from student_stats
    where day >= (now() - '14 day'::interval)::date
    group by area, student
    order by area asc, succ desc, total desc;

        area    | student | total | succ
    ------------+---------+-------+------
     arithmetic | Sarah   |    40 |   40
     arithmetic | Lucy    |    40 |   38
     ...more rows...
     arithmetic | Paul    |    50 |   34
     arithmetic | Ramon   |    20 |   14
     geography  | Paul    |    50 |   40
     geography  | Sarah   |    40 |   40
     ...more rows...
     geography  | Xavier  |    39 |   37
     geography  | William |    40 |   34
     spelling   | William |    40 |   34
     spelling   | Lucy    |    38 |   34
     ...more rows...
     spelling   | George  |    34 |   34
     spelling   | Sarah   |    50 |   33
    (27 rows)

My first guess at resolving this problem was to use an outer query and select the max of total grouped by area. However, Manfred Koizar provided a much cleaner and nicer solution using DISTINCT in the outer query. This DISTINCT relies on the sort and grouping order of the inner query so that the first record for an area is the one found first.

    select distinct on (area) area, student, total, succ
    from (
        select area, student,
               sum(exercises_tried) as total,
               sum(exercises_solved) as succ
        from student_stats
        where day >= (now() - '14 day'::interval)::date
        group by area, student
        order by area asc, succ desc, total desc) foo
    order by area, succ desc, total desc;
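For what it is worth, on PostgreSQL 8.4 and later the same question can also be asked with a window function instead of DISTINCT ON. This is only a sketch of the alternative, not part of Manfred's solution, and unlike DISTINCT ON it returns all students tied for first place in an area:

    select area, student, total, succ
    from (select area, student,
                 sum(exercises_tried) as total,
                 sum(exercises_solved) as succ,
                 rank() over (partition by area
                              order by sum(exercises_solved) desc,
                                       sum(exercises_tried) desc) as rnk
          from student_stats
          where day >= (now() - '14 day'::interval)::date
          group by area, student) ranked
    where rnk = 1;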
In Issue #15 the topic of creating intervals was discussed and I gave the syntax for creating an interval as this:

    interval [(] textstring [)]    or    textstring::interval

It has come to my attention that the parentheses are not accepted and the proper syntax is:

    interval textstring    or    textstring::interval
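A quick check in psql shows both accepted forms producing the same value (the two week interval here is just an arbitrary example):

    # select interval '2 weeks';
    # select '2 weeks'::interval;
    -- both return the interval value '14 days'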
If you are not the one who installed PostgreSQL on your machine, you may need some way to find out how the installation was configured. You can always hack down to the PGDATA directory and take a direct look at postgresql.conf, but system administrators frown on that kind of thing. The utility pg_config will show what options were used to configure your installation:

    pg_config --configure

This is a nice little utility which can also show you most of the crucial information for locating the parts of your installation. The options to pg_config are:

    --bindir              show location of user executables
    --includedir          show location of C header files of the client interfaces
    --includedir-server   show location of C header files for the server
    --libdir              show location of object code libraries
    --pkglibdir           show location of dynamically loadable modules
    --configure           show options given to 'configure' script when PostgreSQL was built
    --version             show the PostgreSQL version, then exit
    --help                show this help, then exit

The bindir is necessary so that it can be included in PATH. The includedir and includedir-server are necessary for development of C applications for the client or of C server functions. libdir and pkglibdir are often the same value; together they hold the PostgreSQL runtime shared libraries and user-defined shared libraries for C server functions.
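For example, on a source build installed under the default prefix it might look like the following; the paths here are purely illustrative and will differ depending on how your installation was configured:

    $ pg_config --bindir
    /usr/local/pgsql/bin
    $ pg_config --libdir
    /usr/local/pgsql/lib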
PostgreSQL supports many stored procedure languages. This allows you to use the language most appropriate for the task at hand. The ones supported from postgresql.org are: SQL, C, PL/pgSQL, PL/Python, PL/Perl and PL/Tcl. PL/Ruby, PL/sh, and PL/Java are supported by other projects. There is an unverified rumor of the existence of a PL/bash as well. PL/Perl, PL/Tcl and PL/Python require configuration at build time; the other languages supported by PostgreSQL are available by default. C server functions require that you install the server headers as well as the client headers. PL/Perl is good for string manipulation. PL/Python is good for running aggregates and triggers. Some people prefer Tcl. SQL functions are very straightforward, and PL/pgSQL is the workhorse, enabling basic flow control around SQL statements. This variety is good both for people devoted to a single language and for those who like to match the implementation to the task. These links will lead you to more information about each language.
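As a small illustration of the flow control PL/pgSQL adds around SQL, here is a minimal sketch; the function and its thresholds are invented for the example, and it assumes the plpgsql language has already been installed in the database (for instance with createlang plpgsql):

    create or replace function grade(score integer) returns text as '
    begin
        -- pick a letter based on the score: simple IF/ELSIF flow control
        if score >= 90 then
            return ''A'';
        elsif score >= 75 then
            return ''B'';
        else
            return ''needs work'';
        end if;
    end;
    ' language plpgsql;

    # select grade(82);
     grade
    -------
     B
    (1 row)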
When thinking about using a foreign key to implement a relationship between two tables, there are a few questions to ask. Suppose we have two tables, tableA and tableB. tableA has a primary key of akey, and tableB has a primary key of bkey and also contains akey.

1. Can the column akey in tableB have a value, other than NULL, which is not present in tableA?

A one to one or one to many relationship is enforceable via a foreign key; REFERENCES will do the trick. If the akey column in tableB is left nullable the relationship is optional (zero or one); declaring it NOT NULL requires every row of tableB to refer to a row of tableA.

    create table tableB (
        ...
        akey integer REFERENCES tableA (akey),
        ...

A many to many relationship, and a one to a fixed set, are usually implemented by a third table, called an aggregate table, in which keys of tableB are matched with keys of tableA.

    create table ABagg (
        akey integer NOT NULL REFERENCES tableA (akey),
        bkey integer NOT NULL REFERENCES tableB (bkey),
        PRIMARY KEY (akey, bkey)
    );

This aggregate table should be used in queries where data from both tables is wanted. This is a case where a good view would be very helpful.

    select a.akey, b.bkey, a.info as ainfo, b.info as binfo
    from tableA a, tableB b, ABagg ab
    where ab.akey = a.akey
      and ab.bkey = b.bkey;
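Such a view might be defined like this; the view name ABview is my own choice and the info columns are the same placeholders used above:

    create view ABview as
        select a.akey, b.bkey, a.info as ainfo, b.info as binfo
        from tableA a, tableB b, ABagg ab
        where ab.akey = a.akey
          and ab.bkey = b.bkey;

    -- then queries can simply refer to the view:
    select * from ABview where akey = 42;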
Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!

Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009