In 7.2, a SERIAL column had a unique index on it by default. In 7.3, this is no longer the case. Foreign key definitions that depended on the implicit uniqueness of the referenced column now raise an error.
    CREATE TABLE person (
        id SERIAL,
        name VARCHAR NOT NULL
    );
    psql:test.sql:6: NOTICE:  CREATE TABLE will create implicit sequence 'person_id_seq' for SERIAL column 'person.id'
    CREATE TABLE

    CREATE TABLE session (
        id SERIAL,
        person_id INTEGER NOT NULL,
        FOREIGN KEY (person_id) REFERENCES person (id)
    );
    psql:test.sql:12: NOTICE:  CREATE TABLE will create implicit sequence 'session_id_seq' for SERIAL column 'session.id'
    psql:test.sql:12: NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
    psql:test.sql:12: ERROR:  UNIQUE constraint matching given keys for referenced table "person" not found

The solution for this in 7.3 is to explicitly define the id column in the person table to be UNIQUE or to be a PRIMARY KEY (which implies uniqueness).
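A minimal sketch of the fix, using the same tables as above (UNIQUE on person.id would work equally well):

```sql
-- 7.3: the referenced column must carry an explicit unique constraint.
CREATE TABLE person (
    id SERIAL PRIMARY KEY,        -- PRIMARY KEY implies UNIQUE
    name VARCHAR NOT NULL
);

CREATE TABLE session (
    id SERIAL PRIMARY KEY,
    person_id INTEGER NOT NULL,
    FOREIGN KEY (person_id) REFERENCES person (id)  -- now finds the unique index
);
```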
If PostgreSQL cached a query plan using an index that was subsequently dropped, the cache is not always cleared. The query then tries to use the dropped index, and the error 'Relation [OID of the index] does not exist' is displayed. Query plans are cached only per connection, and only for prepared statements and for queries in plpgsql functions; the problem occurs only if indexes are dropped while those plans are cached. If it is necessary to drop and recreate indexes on a live system, you can avoid the caching problem by closing the old connections. Sometimes closing the old connections is not reasonable, for example if the client caches the connections; in that case you may need to plan your index drop and recreate more carefully. The ultimate fix for this will be to notice when there is a DDL update and then clear the query caches. In the meantime it is categorized as a bug. (Tom may have this fixed by now... :-)
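A sketch of how the stale plan can arise; the table and index names here are hypothetical, invented for illustration:

```sql
-- Session 1: the plan for this prepared statement may use idx_acct_name.
PREPARE find_acct (varchar) AS SELECT * FROM acct WHERE name = $1;
EXECUTE find_acct('elein');

-- Session 2 (or a later step in a maintenance script):
DROP INDEX idx_acct_name;
CREATE INDEX idx_acct_name ON acct (name);

-- Session 1 again: the cached plan still references the old index's OID,
-- so this can fail with 'Relation NNN does not exist'.
EXECUTE find_acct('elein');
-- Reconnecting (and re-preparing in the new session) clears the cached plan.
```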
When you use EXPLAIN to show the cost of running a query, you get output that looks like this:

    Nested Loop  (cost=0.00..5662254656.82 rows=138 width=199)

The question is: what is the unit for "cost"? There is no real-time equivalent for "cost"; however, it behaves like a time unit: a high value is a lot and a low value is not much. Costs are consistent within an installation on one machine and can reasonably be extrapolated to similar machines and configurations. Using EXPLAIN ANALYZE will give you an idea of the value of cost on your machine and installation. The two examples below are from my machine. The first is a fast query. The second is an unfortunate view that uses a NOT IN clause.

    explain analyze select * from pg_database;
                                 QUERY PLAN
    ----------------------------------------------------------------------------
     Seq Scan on pg_database  (cost=0.00..1.08 rows=8 width=182)
                              (actual time=0.08..0.11 rows=9 loops=1)
     Total runtime: 0.20 msec
    (2 rows)

    explain analyze select * from badacct_route;
                                 QUERY PLAN
    -----------------------------------------------------------------------------
     Seq Scan on account a  (cost=0.00..163887.65 rows=194 width=383)
                            (actual time=7479.80..10642.65 rows=1 loops=1)
       Filter: (subplan)
       SubPlan
         ->  Seq Scan on routing_numbers  (cost=0.00..844.70 rows=25170 width=32)
                                          (actual time=0.02..19.46 rows=8225 loops=388)
     Total runtime: 10642.75 msec
    (5 rows)

With this absurdly small sample, it looks like cost is equivalent to 10 times the number of seconds it takes to run the query. But clearly, more samples and more control over the machine would be needed to be exact. Luckily, exactness is not the point: the cost values from EXPLAIN are meant to be read relatively.
NOTE: As of 8.0 there is an ALTER command to change the type of a column:

    ALTER TABLE name ALTER COLUMN column TYPE type [USING expression]

Prior to 8.0, there is no ALTER command that will change the type of a column in a table. Changing a type involves changing the storage of the table, so it cannot be done with a simple update of a system catalog table. There is also the implicit requirement that the before and after types be compatible; that is, the before type must be able to be cast or manipulated into the proper format for the after type. To change the type of a column "by hand" there are two prevalent methods. The first is to recreate the table and the second is to add and drop columns. The first is better because its result is cleaner; however, you have to drop and recreate any dependencies on the table. If there are many dependencies on the table, then adding and dropping columns might be better. In these examples, the changes we want to make are to convert acct's balance column to float and its num column to numeric(10,2):
Recreating the table

    --> Drop dependencies as necessary
    CREATE TABLE accttemp AS
        SELECT name, id, balance::float, num::numeric(10,2) FROM acct;
    --> Check the data in the accttemp table to be sure your casts worked.
    --> Don't skip this step. Really.
    DROP TABLE acct;
    ALTER TABLE accttemp RENAME TO acct;
    --> Recreate dependencies as necessary

Adding and dropping columns

    ALTER TABLE acct RENAME balance TO balance_old;
    ALTER TABLE acct RENAME num TO num_old;
    ALTER TABLE acct ADD COLUMN balance float;
    ALTER TABLE acct ADD COLUMN num numeric(10,2);
    UPDATE acct SET balance = balance_old::float, num = num_old::numeric(10,2);
    --> Check the data in the acct table to be sure your casts worked.
    --> Don't skip this step. Really.
    ALTER TABLE acct DROP COLUMN balance_old;
    ALTER TABLE acct DROP COLUMN num_old;
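As of 8.0, the same two changes can be made in place with the ALTER TABLE ... TYPE command; a sketch, assuming the same acct table:

```sql
-- One statement per column; USING supplies the conversion expression.
ALTER TABLE acct ALTER COLUMN balance TYPE float USING balance::float;
ALTER TABLE acct ALTER COLUMN num TYPE numeric(10,2) USING num::numeric(10,2);
```

This rewrites the table's storage for you and keeps dependencies such as views and foreign keys intact, which is why it replaces both hand methods above.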
Did you ever wonder why some comments in a CREATE FUNCTION statement could be commented out and why some could not? Now we know why. You would think that the following lines would be parsed as you meant them. However, the parser grabs the string without paying attention to anything inside it except the ordinary escape characters. As it should--it is a string literal. The entire function definition is itself a string, and if you have any stray unescaped single quotes within that string, the string will "end" there, causing a parsing problem with the rest of the text.

    drop function test();
    create or replace function test() returns integer as '
    begin
        -- raise NOTICE ''can't comment out!'';
        raise notice ''didn''''t comment out'';
        return 1;
    end;' language 'plpgsql';

In this example, the stray quote in can't will cause parsing errors. Changing it to four or zero quotes should allow the commenting and uncommenting to work correctly.
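As of 8.0, dollar quoting sidesteps the problem entirely, because the function body is no longer an ordinary quoted string; a sketch of the same function:

```sql
create or replace function test() returns integer as $$
begin
    -- raise notice 'can''t comment out!';  -- safely commented out now
    raise notice 'didn''t comment out';
    return 1;
end;
$$ language plpgsql;
```

Inside $$ ... $$ the plpgsql lexer sees the comments itself, so a -- comment really does run to the end of the line.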
This question involves tables AA and BB.
We want to get the id and sequence number from table AA, along with the sum of AA's price and the total price of the corresponding rows in BB.
There were two suggestions for this query. The first uses a subselect to sum the price from BB for each entry in AA. Both queries use the COALESCE() function to handle the case where there are no entries in BB corresponding to a row in AA. In that case a NULL would be returned--we coalesce the NULL into 0 so that adding it to AA's price works.

    SELECT aa.id, aa.seqno,
           aa.price + COALESCE(
               (SELECT SUM(bb.price) FROM bb
                 WHERE bb.id = aa.id AND bb.seqno = aa.seqno), 0) AS price
    FROM aa;

In the second suggestion a LEFT OUTER JOIN is used to create the base table, and BB's price is summed. We are grouping by AA's id and sequence number, so the sum restarts for each sequence number in AA. The GROUP BY also assumes that (id, seqno) is the primary key.

    SELECT aa.id, aa.seqno,
           aa.price + COALESCE(SUM(bb.price), 0) AS price
    FROM aa LEFT OUTER JOIN bb USING (id, seqno)
    GROUP BY aa.id, aa.seqno, aa.price;
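A small worked example, with table definitions and data invented for illustration, assuming (id, seqno) identifies rows in AA:

```sql
CREATE TABLE aa (id integer, seqno integer, price numeric,
                 PRIMARY KEY (id, seqno));
CREATE TABLE bb (id integer, seqno integer, price numeric);

INSERT INTO aa VALUES (1, 1, 10);
INSERT INTO aa VALUES (1, 2, 20);
INSERT INTO aa VALUES (2, 1, 5);
INSERT INTO bb VALUES (1, 1, 3);   -- two BB rows match AA (1,1)
INSERT INTO bb VALUES (1, 1, 4);   -- no BB rows match the other AA rows

-- Either query then yields:
--  id | seqno | price
-- ----+-------+-------
--   1 |     1 |    17    -- 10 + (3 + 4)
--   1 |     2 |    20    -- 20 + 0, COALESCE supplies the 0
--   2 |     1 |     5
```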
Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!
Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009