Discussion:
Decreasing performance in table partitioning
Herouth Maoz
2014-09-07 13:59:33 UTC
Hello all.

I have created a function that partitions a large table into monthly partitions. Since the name of the table, target schema for partitions, name of the date field etc. are all passed as strings, the function is heavily based on EXECUTE statements.

My problem is the main loop, in which data for one month is moved from the old table to the partition table.

(1)
EXECUTE FORMAT (
'WITH del AS (
DELETE FROM %1$I.%2$I
WHERE %3$I >= %4$L AND %3$I < %5$L
RETURNING *
)
INSERT INTO %6$I.%7$I
SELECT * FROM del',
p_main_schema,
p_table_name,
p_date_field_name,
v_curr_month_str,
v_curr_month_to_str,
p_partition_schema,
v_partition_name
);

In the first few iterations, this runs in very good time. But as iterations progress, performance drops, despite the amount of data for each month being more or less the same. Eventually I end up with iterations that run for hours, where I started with only a few minutes. The odd thing is that the last iteration, which was for a month not yet inserted into the table (0 records to move), took 6 hours to run!

I tried to improve this by first testing whether there are any records for the current month in the table, adding this check:

(2)

EXECUTE FORMAT (
'SELECT true
FROM %1$I.%2$I
WHERE %3$I >= %4$L AND %3$I < %5$L
LIMIT 1',
p_main_schema,
p_table_name,
p_date_field_name,
v_curr_month_str,
v_curr_month_to_str
) INTO v_exists;

This check runs before statement 1, and the move itself is wrapped in an IF statement on v_exists. Also, after each move, I added:

EXECUTE FORMAT ( 'ANALYZE %I.%I', p_main_schema, p_table_name );

But to no avail. In fact, in each iteration, the execution of statement 2 above takes more and more time.
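
To make the flow concrete, here is a simplified skeleton of the loop (a sketch only: partition creation and the month-string bookkeeping are omitted, and v_end_month stands in for my actual termination condition):

WHILE v_curr_month < v_end_month LOOP
    -- statement 2: does this month have any rows at all?
    EXECUTE format(
        'SELECT true FROM %1$I.%2$I
         WHERE %3$I >= %4$L AND %3$I < %5$L LIMIT 1',
        p_main_schema, p_table_name, p_date_field_name,
        v_curr_month_str, v_curr_month_to_str)
    INTO v_exists;

    IF v_exists THEN
        -- statement 1: move this month's rows into its partition
        EXECUTE format(
            'WITH del AS (
                 DELETE FROM %1$I.%2$I
                 WHERE %3$I >= %4$L AND %3$I < %5$L
                 RETURNING *)
             INSERT INTO %6$I.%7$I SELECT * FROM del',
            p_main_schema, p_table_name, p_date_field_name,
            v_curr_month_str, v_curr_month_to_str,
            p_partition_schema, v_partition_name);

        -- keep planner statistics in step with the shrinking main table
        EXECUTE format('ANALYZE %I.%I', p_main_schema, p_table_name);
    END IF;

    v_curr_month := v_curr_month + interval '1 month';
    -- (v_curr_month_str / v_curr_month_to_str are recomputed here as well)
END LOOP;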

Here is the number of rows in each month for the table I was trying to partition:

count | the_month
----------+---------------------
10907117 | 2013-08-01 00:00:00
12715234 | 2013-09-01 00:00:00
14902928 | 2013-10-01 00:00:00
10933566 | 2013-11-01 00:00:00
11394906 | 2013-12-01 00:00:00
9181051 | 2014-01-01 00:00:00
8487028 | 2014-02-01 00:00:00
9892981 | 2014-03-01 00:00:00
8830191 | 2014-04-01 00:00:00
8368638 | 2014-05-01 00:00:00
8014685 | 2014-06-01 00:00:00
6780589 | 2014-07-01 00:00:00


And the times for each iteration:

Month      Statement 2   Statement 1
2013-08    3 sec          3 min
2013-09    2 min         17 min
2013-10    4 min         21 min
2013-11    8 min         20 min
2013-12    9 min         32 min
2014-01   16 min         21 min
2014-02   19 min         20 min
2014-03   14 min         23 min

For April I had to cancel it in the middle. My problem is that I can't let this run into the evening, when we have a backup followed by large data collection. These times are just for the given statements; additional time is spent creating indexes on the partitions and so on. So this thing ran from 11:24 until I had to cancel it at around 6 PM.

Can anybody explain the performance deterioration and/or offer a suggestion for a different design?

TIA,
Herouth
Tom Lane
2014-09-07 16:50:35 UTC
Post by Herouth Maoz
My problem is the main loop, in which data for one month is moved from the old table to the partition table.
[statement 1 quoted above; trimmed]
In the first few iterations, this runs in very good time. But as
iterations progress, performance drops, despite the amount of data for
each month being more or less the same.
How many of these are you doing in a single transaction? Are you doing
them in separate exception blocks? What PG version is this exactly?

My guess is that the cycles are going into finding out that tuples deleted
by a prior command are in fact dead to the current command (though still
live to outside observers, so they can't be hinted as dead). That ought
to be relatively cheap if it's all one subtransaction, but if there were a
large number of separate subtransactions involved, maybe not so much.
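
For reference, the construct I have in mind is a plpgsql exception block around each move; entering such a block starts a new subtransaction (hypothetical sketch, with v_move_sql standing in for the dynamic statement):

BEGIN
    -- entering a block with an EXCEPTION clause starts a subtransaction
    EXECUTE v_move_sql;   -- the DELETE ... RETURNING / INSERT from above
EXCEPTION WHEN others THEN
    RAISE NOTICE 'move failed: %', SQLERRM;
END;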

regards, tom lane
Herouth Maoz
2014-09-10 08:34:18 UTC
Thank you. I was away for a few days.

This is PG version 9.1. The whole thing runs inside a single function and, as far as I understand, a function always runs within a single transaction. I have not created exception blocks, because I don't have any special handling for exceptions; I'm fine with the default behavior.

The data in each table covers about 10 months, so the loop runs about 10 times.

What has occurred to me, though, is that maybe I should write the DELETE statement as DELETE FROM ONLY: the way I wrote it, the previously created partitions are scanned as well, despite having no applicable data. Does that make sense?
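
Concretely, I mean changing statement 1 like this (the ONLY keyword is the only change):

EXECUTE FORMAT (
'WITH del AS (
DELETE FROM ONLY %1$I.%2$I
WHERE %3$I >= %4$L AND %3$I < %5$L
RETURNING *
)
INSERT INTO %6$I.%7$I
SELECT * FROM del',
p_main_schema,
p_table_name,
p_date_field_name,
v_curr_month_str,
v_curr_month_to_str,
p_partition_schema,
v_partition_name
);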
Post by Tom Lane
How many of these are you doing in a single transaction? Are you doing
them in separate exception blocks? What PG version is this exactly?
[rest of quoted message trimmed]
--
Herouth Maoz
Unicell Advanced Cellular Solutions
☎ 03-5181717 ext. 742
Huang, Suya
2014-09-08 01:57:33 UTC
Instead of deleting from the original non-partitioned table, which is not efficient, you can try the approach below.

Put the logic below in a function if you like:

Create a new partitioned table.
Insert the data from the original non-partitioned table into the correct partitions of the new table.
Build indexes and analyze as needed.
Rename the old non-partitioned table to something else.
Rename the new partitioned table to the name you want.

Drop the old non-partitioned table once you're satisfied with the new structure. (A rough sketch follows.)
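
For example, with made-up names (a big_table with a created_at timestamp column; repeat the partition block for each month):

BEGIN;
CREATE TABLE big_table_new (LIKE big_table INCLUDING DEFAULTS);

-- one child per month, with constraint-exclusion-friendly CHECKs
CREATE TABLE big_table_new_201407 (
    CHECK (created_at >= '2014-07-01' AND created_at < '2014-08-01')
) INHERITS (big_table_new);

INSERT INTO big_table_new_201407
SELECT * FROM big_table
WHERE created_at >= '2014-07-01' AND created_at < '2014-08-01';

CREATE INDEX ON big_table_new_201407 (created_at);
ANALYZE big_table_new_201407;

-- swap the tables; the renames themselves are quick
ALTER TABLE big_table RENAME TO big_table_old;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;

-- later, once satisfied with the new structure:
-- DROP TABLE big_table_old;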

Thanks,
Suya
From: pgsql-general-***@postgresql.org [mailto:pgsql-general-***@postgresql.org] On Behalf Of Herouth Maoz
Sent: Monday, September 08, 2014 12:00 AM
To: pgsql-***@postgresql.org
Subject: [GENERAL] Decreasing performance in table partitioning


[original message quoted in full; trimmed]
Herouth Maoz
2014-09-10 08:26:08 UTC
Thank you. Sorry I have been away for a few days and couldn't thank you before.

Wouldn't this have an impact if there are things like views or functions based on the old table?
Post by Huang, Suya
Instead of deleting from the original non-partitioned table, which is not efficient, you can try the approach below.
Create a new partitioned table.
Insert the data from the original non-partitioned table into the correct partitions of the new table.
Build indexes and analyze as needed.
Rename the old non-partitioned table to something else.
Rename the new partitioned table to the name you want.
Drop the old non-partitioned table once you're satisfied with the new structure.
[rest of quoted message trimmed]
--
Herouth Maoz
Unicell Advanced Cellular Solutions
☎ 03-5181717 ext. 742
Huang, Suya
2014-09-11 04:26:27 UTC
The views will go with the table: if you rename the table, the view definitions are automatically updated accordingly. So in your situation, you may need to recreate views or other objects that have a dependency on the old table.

But functions will remain the same, so as long as your new table has been renamed to the same name the old table had, they should be OK.

Note: I tested this in my 9.3 environment; I'm not sure how it behaves in older versions.
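
You can see the rename behavior with a toy example (hypothetical objects):

CREATE TABLE t (id int);
CREATE VIEW v AS SELECT id FROM t;
ALTER TABLE t RENAME TO t_old;
-- the stored view definition now references the new name:
SELECT pg_get_viewdef('v'::regclass);
--  SELECT t_old.id FROM t_old;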


From: Herouth Maoz [mailto:***@unicell.co.il]
Sent: Wednesday, September 10, 2014 6:26 PM
To: Huang, Suya
Cc: pgsql-***@postgresql.org
Subject: Re: [GENERAL] Decreasing performance in table partitioning

Thank you. Sorry I have been away for a few days and couldn't thank you before.

Wouldn't this have an impact if there are things like views or functions based on the old table?

On 08/09/2014, at 04:57, Huang, Suya wrote:


[earlier messages quoted in full; trimmed]
Herouth Maoz
2014-09-16 12:21:42 UTC
Thank you. My solution is based on your suggestion, but instead of creating a new partitioned table and inserting into it, I create the partitions under the existing table, as I originally planned, and insert into them, but without deleting.

I use INSERT INTO ... SELECT ... FROM ONLY, without deleting, so the data is temporarily duplicated (each row exists both in the main table and in its partition). At the end of the loop, when all the partitions are populated and indexed, I TRUNCATE ONLY the main table and drop its indexes. This way, the views are not affected.
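
In sketch form, with the same parameters as before, the per-month step is now:

EXECUTE format(
    'INSERT INTO %6$I.%7$I
     SELECT * FROM ONLY %1$I.%2$I
     WHERE %3$I >= %4$L AND %3$I < %5$L',
    p_main_schema, p_table_name, p_date_field_name,
    v_curr_month_str, v_curr_month_to_str,
    p_partition_schema, v_partition_name);

-- and once, after the loop, empty only the parent table:
EXECUTE format('TRUNCATE ONLY %I.%I', p_main_schema, p_table_name);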

So thank you for your suggestion and your help. For a comparable-size table, the run time is now less than 2 hours, which is quite acceptable.
Post by Huang, Suya
The views will go with the table: if you rename the table, the view definitions are automatically updated accordingly.
[rest of quoted message trimmed]
--
Herouth Maoz
Unicell Advanced Cellular Solutions
☎ 03-5181717 ext. 742
Huang, Suya
2014-09-17 00:24:15 UTC
That's a good idea. Thanks for sharing, Herouth!

From: Herouth Maoz [mailto:***@unicell.co.il]
Sent: Tuesday, September 16, 2014 10:22 PM
To: Huang, Suya
Cc: pgsql-***@postgresql.org
Subject: Re: [GENERAL] (Solved) Decreasing performance in table partitioning

[earlier messages quoted in full; trimmed]
Marc Van Olmen
2014-09-17 01:16:42 UTC
I'm trying to debug an out-of-memory error with PostgreSQL.

Simple:

* Open connection
* Begin transaction
* Import about 20 GB of data (40K rows; most of the volume is large image blobs)
* End transaction
* Close connection

What I notice is that the Python app stays at around 200 MB of memory usage, but the postgres process on my Mac OS X 10.9.5 machine grows and grows until it runs out of memory (it's a 32-bit build).

SQLAlchemy 0.9.5, psycopg2 2.6, Python 2.7.5, PostgreSQL 9.3.2 (default config settings).

Database:

* The database has several triggers, written in PL/Python, that fire during the import
* Some of them are a simple "NOTIFY ...;", others are 500 lines of Python code (a minimal sketch of the simple kind follows)
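
For illustration, the simple ones look roughly like this (names made up; the real triggers are more involved):

CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
    # fire a notification; listeners receive it when the transaction commits
    plpy.execute("NOTIFY table_changed")
    return None
$$ LANGUAGE plpythonu;

CREATE TRIGGER metavalue_notify
    AFTER INSERT OR UPDATE ON metavalue
    FOR EACH ROW EXECUTE PROCEDURE notify_change();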

Things that I already figured out:

* If I remove the PL/Python triggers, there is no memory problem
* If I import in chunks of 1000 rows and commit each time, the postgres process goes back to low memory usage (so some memory does get deallocated), and I'm able to import everything
* I added the Python pympler module and printed memory diffs to check whether memory is being leaked inside the triggers, but nothing shows up

Question:

* Any ideas on how to trace this without going into full debug mode and running gdb, etc.?

Error I see:

(45845,0xa0db51a8) malloc: *** mach_vm_map(size=8388608) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
TopMemoryContext: 64727172 total in 7901 blocks; 132784 free (7871 chunks); 64594388 used
CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
PL/Python procedures: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used
SPI exceptions: 8192 total in 1 blocks; 2328 free (0 chunks); 5864 used
Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used
TopTransactionContext: 3859664 total in 6 blocks; 98504 free (31 chunks); 3761160 used
CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used

.... (thousands of these)

pg_authid_rolname_index: 1024 total in 1 blocks; 552 free (0 chunks); 472 used
MdSmgr: 8192 total in 1 blocks; 5320 free (0 chunks); 2872 used
ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
hba parser context: 7168 total in 3 blocks; 2680 free (3 chunks); 4488 used
LOCALLOCK hash: 24576 total in 2 blocks; 13080 free (3 chunks); 11496 used
Timezones: 78520 total in 2 blocks; 5968 free (0 chunks); 72552 used
ErrorContext: 8192 total in 1 blocks; 8176 free (3 chunks); 16 used
ERROR: spiexceptions.OutOfMemory: out of memory
DETAIL: Failed on request of size 2016.
CONTEXT: Traceback (most recent call last):
PL/Python function "metavalue_cleanup", line 25, in <module>
TD["new"]["id"]))
PL/Python function "metavalue_cleanup"
STATEMENT: INSERT INTO metavalue (id_item, id_metatype, id_employee, date, value) VALUES (5079, 1, 1, now(), 'J107') RETURNING metavalue.id
Tom Lane
2014-09-17 03:53:15 UTC
[ out of memory while ]
* trying to import about 20 GB of data (40K rows; most of the volume is large image blobs)
What I notice is that the Python app stays at around 200 MB of memory usage, but the postgres process on my Mac OS X 10.9.5 machine grows and grows until it runs out of memory (it's a 32-bit build).
Hm ... why are you running a 32-bit build on recent OS X? I'm pretty sure
that OS X release doesn't support any non-64-bit hardware. Not that the
leak isn't a problem, but you'd have a bunch more headroom with a 64-bit
executable.
* The database has several triggers, written in PL/Python, that fire during the import
* Some of them are a simple "NOTIFY ...;", others are 500 lines of Python code
...
TopMemoryContext: 64727172 total in 7901 blocks; 132784 free (7871 chunks); 64594388 used
CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
PL/Python procedures: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used
SPI exceptions: 8192 total in 1 blocks; 2328 free (0 chunks); 5864 used
Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used
TopTransactionContext: 3859664 total in 6 blocks; 98504 free (31 chunks); 3761160 used
CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
.... (thousands of these)
Thousands of CurTransactionContext lines? That would suggest that you're
creating a subtransaction (savepoint) in each trigger firing. You should
try to avoid that if you can. I'm too tired to look up which plpython
constructs might cause that exactly.
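
One candidate is the explicit plpy.subtransaction() context manager: each pass through a with-block like the following opens and closes a subtransaction (hypothetical sketch, not taken from your triggers; audit_log is made up):

    # inside a PL/Python trigger body
    try:
        with plpy.subtransaction():
            plpy.execute("INSERT INTO audit_log VALUES (1)")
    except plpy.SPIError:
        pass  # the failed work was rolled back; the outer transaction continues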

It may be that there is also some internal leak in plpython; the fact that
we only seem to have accounted for ~75MB of memory in the memory context
dump suggests that something outside the core pgsql code is eating memory,
and python might be the something. It's unlikely we can debug that unless
you can provide a self-contained test case, though.

regards, tom lane