Discussion:
large object does not exist after pg_migrator
Jamie Fox
2009-07-13 21:13:37 UTC
Hi -
After an apparently normal, successful pg_migrator migration from 8.3.7
to 8.4.0 (in either link or copy mode), vacuumlo fails on both our production
and qa databases:

Jul 1 11:17:03 db2 postgres[9321]: [14-1] LOG: duration: 175.563 ms
statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM
"public"."xml_user")
Jul 1 11:17:03 db2 postgres[9321]: [15-1] ERROR: large object 17919608
does not exist
Jul 1 11:17:03 db2 postgres[9321]: [16-1] ERROR: current transaction is
aborted, commands ignored until end of transaction block

I also migrated our qa database using pg_dump/pg_restore, and vacuumlo has no
problem with that copy. When I query the two databases for large objects
manually, I see the same error in the one that was migrated with pg_migrator:

select loread(lo_open(xml_data,262144),1073741819) from xml_user where id =
'10837246';
ERROR: large object 24696063 does not exist
SQL state: 42704

I can also see that the pg_largeobject table differs: in the pg_restore
version the estimated row count is 316286 and the counted rows match, while in
the pg_migrator version the counted rows are only 180507.
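
For comparing the two copies, exact counts may be more reliable than the
estimated figures; a query along these lines (just a sketch, run in each
database) would give both the page count and the number of distinct large
objects:

```sql
-- Exact counts to compare between the pg_restore and pg_migrator copies
SELECT count(*) AS pages,
       count(DISTINCT loid) AS large_objects
FROM pg_largeobject;
```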

Any advice on what I might look for to track down this problem?
pg_restore on our production database takes too long, so it would be really
nice to use pg_migrator instead.

Thanks,

Jamie
Jamie Fox
2009-07-13 22:21:39 UTC
Hi -
This is probably more helpful: the pg_largeobject table only changed after
running vacuumlo, not before. Comparing the pre- and post-pg_migrator
databases (with no vacuum or vacuumlo run on either):

select * from pg_largeobject where loid = '24696063';

In the pre-migration database there are three rows, with pageno 0 through 3;
in the post-migration database there are no results.
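
To see the full extent of the damage, a query like the following (a sketch
using the table and column names from the earlier posts; xml_data is assumed
to be the oid column) should list every xml_user row whose large object has
no backing pages left:

```sql
-- xml_user rows whose large-object OID has no pages in pg_largeobject
SELECT u.id, u.xml_data
FROM xml_user u
LEFT JOIN pg_largeobject lo ON lo.loid = u.xml_data
WHERE lo.loid IS NULL;
```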

Thanks for any advice,

Jamie