Discussion:
Which UPDATE is quicker?
Emi Lu
2014-09-23 19:35:09 UTC
Hello list,

For a big table with more than 1,000,000 records, which of the following
two updates is quicker?

(1) update t1
    set c1 = a.c1
    from a
    where t1.pk = a.pk
      and t1.c1 <> a.c1;
    ......
    update t1
    set c_N = a.c_N
    from a
    where t1.pk = a.pk
      and t1.c_N <> a.c_N;


(2) update t1
    set c1 = a.c1,
        c2 = a.c2,
        ...
        c_N = a.c_N
    from a
    where t1.pk = a.pk
      and (t1.c1 <> a.c1 OR t1.c2 <> a.c2 ..... OR t1.c_N <> a.c_N)


Or is there another, quicker way to do this update?

Thank you
Emi
Daniele Varrazzo
2014-09-23 20:39:13 UTC
Post by Emi Lu
Hello list,
For a big table with more than 1,000,000 records, which of the following
two updates is quicker?
(1) update t1
    set c1 = a.c1
    from a
    where t1.pk = a.pk
      and t1.c1 <> a.c1;
    ......
    update t1
    set c_N = a.c_N
    from a
    where t1.pk = a.pk
      and t1.c_N <> a.c_N;
(2) update t1
    set c1 = a.c1,
        c2 = a.c2,
        ...
        c_N = a.c_N
    from a
    where t1.pk = a.pk
      and (t1.c1 <> a.c1 OR t1.c2 <> a.c2 ..... OR t1.c_N <> a.c_N)
Definitely the second, and it produces less bloat too.
Post by Emi Lu
Or is there another, quicker way to do this update?
You may express the comparison as (t1.c1, t1.c2, ... t1.cN) <> (a.c1,
a.c2, ... a.cN).
It's not going to be faster, but it may be more readable.
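
For example, with purely illustrative column names (c1..c3) and pk assumed
to be the join key on both tables, that form would look like:

update t1
   set c1 = a.c1,
       c2 = a.c2,
       c3 = a.c3
  from a
 where t1.pk = a.pk
   and (t1.c1, t1.c2, t1.c3) <> (a.c1, a.c2, a.c3);

If NULLs are possible in the compared columns, writing the test as
(t1.c1, t1.c2, t1.c3) IS DISTINCT FROM (a.c1, a.c2, a.c3) avoids skipping
rows where the <> row comparison evaluates to NULL.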


-- Daniele
Steve Crawford
2014-09-23 21:04:24 UTC
Post by Emi Lu
Hello list,
For a big table with more than 1,000,000 records, which of the following
two updates is quicker?
(1) update t1
    set c1 = a.c1
    from a
    where t1.pk = a.pk
      and t1.c1 <> a.c1;
    ......
    update t1
    set c_N = a.c_N
    from a
    where t1.pk = a.pk
      and t1.c_N <> a.c_N;
(2) update t1
    set c1 = a.c1,
        c2 = a.c2,
        ...
        c_N = a.c_N
    from a
    where t1.pk = a.pk
      and (t1.c1 <> a.c1 OR t1.c2 <> a.c2 ..... OR t1.c_N <> a.c_N)
....
We don't have any info about table structures, index availability and
usage for query optimization, whether or not the updated columns are
part of an index, amount of memory available, disk speed, the portion of t1
that will be updated, PostgreSQL settings, etc., so it's really anyone's
guess. A million rows is pretty modest, so I was able to try a couple of
variants of "update...from..." on million-row tables on my aging desktop
without coming close to the 60-second mark.
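
For anyone who wants to run that kind of test themselves, here is a minimal
sketch; the table and column names (t1, a, pk, c1, c2) are only illustrative:

-- Two million-row tables sharing a primary key; "a" holds different
-- values in c1 for every 10th row, so there is something to update.
create table t1 (pk integer primary key, c1 text, c2 text);
create table a  (pk integer primary key, c1 text, c2 text);

insert into t1
select g, 'old-' || g, 'old-' || g
  from generate_series(1, 1000000) as g;

insert into a
select g,
       case when g % 10 = 0 then 'new-' || g else 'old-' || g end,
       'old-' || g
  from generate_series(1, 1000000) as g;

analyze t1;
analyze a;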

*Usually* putting the statements into a single transaction is better (as
would happen automatically in case 2). Also, to the extent that a given
tuple would have multiple columns updated, you will have less bloat and
I/O using the query that updates the tuple once rather than multiple
times. But a lot will depend on the efficiency of looking up the
appropriate data in "a".
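
Concretely, a sketch of both points, again with illustrative column names
and pk assumed to be the join column:

-- EXPLAIN (ANALYZE) shows how rows in "a" are looked up (an index scan
-- on its primary key, ideally). Note it actually executes the UPDATE,
-- so roll back if you only want the plan and timing.
begin;
explain (analyze, buffers)
update t1
   set c1 = a.c1,
       c2 = a.c2
  from a
 where t1.pk = a.pk
   and (t1.c1 <> a.c1 or t1.c2 <> a.c2);
rollback;

-- If the per-column statements of option (1) are used anyway, running
-- them inside one explicit transaction avoids a commit per statement.
begin;
update t1 set c1 = a.c1 from a where t1.pk = a.pk and t1.c1 <> a.c1;
update t1 set c2 = a.c2 from a where t1.pk = a.pk and t1.c2 <> a.c2;
commit;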

Cheers,
Steve