Discussion:
Memory usage and configuration settings
Mike C
2012-03-05 12:37:55 UTC
Hi,

I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is approximately 5GB, and is a mixture of read/write.
Postgres is occasionally being killed by the linux oom-killer. I am
trying to understand how much memory postgres could use, and how to
change the configuration to bring it down to a level that won't get it
killed.

Key configuration values are:

max_connections = 350
shared_buffers = 4GB
temp_buffers = 24MB
max_prepared_transactions = 211
work_mem = 16MB
maintenance_work_mem = 131MB
wal_buffers = -1
wal_keep_segments = 128
checkpoint_segments = 64
effective_cache_size = 4GB
autovacuum_max_workers = 4

which I have interpreted to be:

max_locks_per_transaction = 64
max_connections = 350
autovacuum_max_workers = 4
max_prepared_transactions = 211 (I've since realised this can be 0; I
use prepared statements, not 2PC)
shared_buffers = 4294967296
wal_block_size = 8192
wal_buffers = 16777216 (actually -1, but per the documentation that
means 1/32 of shared_buffers capped at 16MB, so 16MB here).
and wal segment size = 16777216, block_size = 8192

And using the equations on the kernel resources page, I get:

Connections              = (1800 + 270 * max_locks_per_transaction) * max_connections
                         = (1800 + 270 * 64) * 350
                         = 6,678,000
Autovacuum Workers       = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
                         = (1800 + 270 * 64) * 4
                         = 76,320
Prepared Transactions    = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
                         = (770 + 270 * 64) * 211
                         = 3,808,550
Shared Disk Buffers      = (block_size + 208) * shared_buffers
                         = (8192 + 208) * 4294967296
                         = 36,077,725,286,400 (~33TB)
WAL Buffers              = (wal_block_size + 8) * wal_buffers
                         = (8192 + 8) * 16777216
                         = 137,573,171,200 (~128GB)
Fixed Space Requirements = 788,480
Overall                  = 36,215,309,808,950 bytes (~33.2TB!)

33.2TB doesn't seem right, and while I know the equations are just
approximations, this seems far too much. What have I done wrong? I read
a prior thread about this on the pgsql lists which seemed to indicate
that the equations for shared disk and WAL buffers should be divided by
the block_size (8192), and looking at it more closely, I wonder if the
equation for both should just be overhead + buffer?

Also, what is the relationship between memory and work_mem (set to
16MB in my case)? I understand work_mem is per sort, and in most cases
our queries only have a single sort. Does this mean an additional 16MB
per sorting client (350 * 16MB = 5.6GB), or does a backend only use
work memory as it needs it (i.e. does it preallocate the full 16MB for
each sort, or allocate on an as-needed basis depending on the size of
the sorted data)?

Are there any other ways to calculate the worst case memory usage of a
given postgres configuration?

My gut feeling is to reduce shared_buffers to 1GB or less and reduce
connections to ~150-200 (to reduce the worst-case work_mem impact).

Kind Regards,

Mike
Tom Lane
2012-03-05 16:04:58 UTC
Post by Mike C
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is approximately 5GB, and is a mixture of read/write.
Postgres is occasionally being killed by the linux oom-killer. I am
trying to understand how much memory postgres could use, and how to
change the configuration to bring it down to a level that won't get it
killed.
Basically, you can't fix it that way, at least not if you want to have a
sane configuration. The problem is misdesign of the OOM killer: it will
count the shared memory block against the postmaster *once for each
child process*. The only realistic solution is to turn off OOM kill for
the postmaster (and maybe its children too, or maybe you'd rather have
them immune as well). The former is pretty easy to do if you're
launching the postmaster from a root-privileged initscript. I think
most prepackaged versions of PG are set up to be able to do this
already. If you want the children OOM-killable it requires a
source-code tweak as well, since that property is normally inherited.
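
A minimal initscript sketch of that idea (the exact /proc knob varies
by kernel: RHEL 6-era kernels use oom_adj, newer ones oom_score_adj;
the paths below are illustrative, not from your system):

    #!/bin/sh
    # Run as root: lowering the OOM score requires privilege, and the
    # setting is inherited by the postmaster and all of its children.
    echo -17 > /proc/self/oom_adj            # old interface (2.6.32 era)
    # echo -1000 > /proc/self/oom_score_adj  # newer interface
    # Then drop privileges and start the postmaster as usual:
    su - postgres -c "/usr/pgsql-9.1/bin/pg_ctl -D /var/lib/pgsql/9.1/data start"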
Post by Mike C
Shared Disk Buffers = (block_size + 208) * shared_buffers
                    = (8192 + 208) * 4294967296
                    = 36,077,725,286,400 (~33TB)
I think you've multiplied by the block size one time too many. Ditto
for WAL buffers.

regards, tom lane
Mike C
2012-03-05 17:03:03 UTC
Post by Tom Lane
Post by Mike C
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is approximately 5GB, and is a mixture of read/write.
Postgres is occasionally being killed by the linux oom-killer. I am
trying to understand how much memory postgres could use, and how to
change the configuration to bring it down to a level that won't get it
killed.
Basically, you can't fix it that way, at least not if you want to have a
sane configuration.  The problem is misdesign of the OOM killer: it will
count the shared memory block against the postmaster *once for each
child process*.  The only realistic solution is to turn off OOM kill for
the postmaster (and maybe its children too, or maybe you'd rather have
them immune as well).  The former is pretty easy to do if you're
launching the postmaster from a root-privileged initscript.  I think
most prepackaged versions of PG are set up to be able to do this
already.  If you want the children OOM-killable it requires a
source-code tweak as well, since that property is normally inherited.
Ok, that makes sense. With regards to work_mem, am I right in thinking
the child processes only allocate enough memory to meet the task at
hand, rather than the full 16M specified in the config file?
Post by Tom Lane
Post by Mike C
Shared Disk Buffers = (block_size + 208) * shared_buffers
                    = (8192 + 208) * 4294967296
                    = 36,077,725,286,400 (~33TB)
I think you've multiplied by the block size one time too many.  Ditto
for WAL buffers.
Yes, spot on. Using the number of buffers (4GB / 8192 = 524288) rather
than the byte count gives (8192 + 208) * 524288, which is the much
more sane ~4.1GB.

Thanks for your help,

Mike
Tom Lane
2012-03-05 17:13:24 UTC
Post by Mike C
Ok, that makes sense. With regards to work_mem, am I right in thinking
the child processes only allocate enough memory to meet the task at
hand, rather than the full 16M specified in the config file?
They only allocate what's needed ... but you have to keep in mind that
work_mem is *per operation*, e.g. per sort or hash. A complex query
could require several such steps and thus eat several times work_mem.
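
For instance, a single query shaped like the one below can carry a
Hash node for the join, a HashAggregate for the GROUP BY, and a Sort
for the ORDER BY, each of which may claim up to work_mem (database and
table names here are made up for illustration):

    # Inspect the plan to count the memory-hungry nodes:
    psql -d mydb -c "
      EXPLAIN ANALYZE
      SELECT c.region, count(*)
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
      GROUP BY c.region
      ORDER BY count(*) DESC;"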

regards, tom lane
Merlin Moncure
2012-03-06 00:01:31 UTC
Post by Mike C
Hi,
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is approximately 5GB, and is a mixture of read/write.
Postgres is occasionally being killed by the linux oom-killer. I am
trying to understand how much memory postgres could use, and how to
change the configuration to bring it down to a level that won't get it
killed.
max_connections = 350
shared_buffers = 4GB
temp_buffers = 24MB
max_prepared_transactions = 211
work_mem = 16MB
maintenance_work_mem = 131MB
wal_buffers = -1
wal_keep_segments = 128
checkpoint_segments = 64
effective_cache_size = 4GB
autovacuum_max_workers = 4
max_locks_per_transaction = 64
max_connections = 350
autovacuum_max_workers = 4
max_prepared_transactions = 211 (I've since realised this can be 0; I
use prepared statements, not 2PC)
shared_buffers = 4294967296
wal_block_size = 8192
wal_buffers = 16777216 (actually -1, but per the documentation that
means 1/32 of shared_buffers capped at 16MB, so 16MB here).
and wal segment size = 16777216, block_size = 8192
Connections              = (1800 + 270 * max_locks_per_transaction) * max_connections
                         = (1800 + 270 * 64) * 350
                         = 6,678,000
Autovacuum Workers       = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
                         = (1800 + 270 * 64) * 4
                         = 76,320
Prepared Transactions    = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
                         = (770 + 270 * 64) * 211
                         = 3,808,550
Shared Disk Buffers      = (block_size + 208) * shared_buffers
                         = (8192 + 208) * 4294967296
                         = 36,077,725,286,400 (~33TB)
WAL Buffers              = (wal_block_size + 8) * wal_buffers
                         = (8192 + 8) * 16777216
                         = 137,573,171,200 (~128GB)
Fixed Space Requirements = 788,480
Overall                  = 36,215,309,808,950 bytes (~33.2TB!)
33.2TB doesn't seem right, and while I know the equations are just
approximations, this seems far too much. What have I done wrong? I read
a prior thread about this on the pgsql lists which seemed to indicate
that the equations for shared disk and WAL buffers should be divided by
the block_size (8192), and looking at it more closely, I wonder if the
equation for both should just be overhead + buffer?
Also, what is the relationship between memory and work_mem (set to
16MB in my case)? I understand work_mem is per sort, and in most cases
our queries only have a single sort. Does this mean an additional 16MB
per sorting client (350 * 16MB = 5.6GB), or does a backend only use
work memory as it needs it (i.e. does it preallocate the full 16MB for
each sort, or allocate on an as-needed basis depending on the size of
the sorted data)?
Are there any other ways to calculate the worst case memory usage of a
given postgres configuration?
My gut feeling is to reduce shared_buffers to 1GB or less and reduce
connections to ~150-200 (to reduce the worst-case work_mem impact).
One easy thing to neglect when doing memory counting is backend
private memory. Each postgres process typically eats around 1MB, and
this will grow if your processes are long-lived as the backend starts
to cache various structures. As a rule of thumb I tend to use 4MB per
backend (you can confirm this yourself by subtracting SHR from RES;
see the sketch below). In absolutely pathological cases (like heavy
plpgsql backends touching a lot of tables and views) it can be worse.
4MB * 350 = 1.4GB... so you're cutting things fairly close.
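
A quick, approximate way to eyeball that per backend on Linux,
assuming 4kB pages and that the backends run as user postgres:

    # Private memory per backend ~= resident minus shared pages,
    # read from /proc/<pid>/statm (values are counts of 4kB pages).
    for pid in $(pgrep -u postgres postgres); do
        read size resident shared rest < /proc/$pid/statm
        echo "pid $pid: $(( (resident - shared) * 4 )) kB private"
    done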

Looking at your postgresql.conf, your shared_buffers setting is more
aggressive than the often-suggested 25% rule (4GB is half of your 8GB
RAM), but I bet it's the backend memory that's pushing you over the
edge.

Rather than reducing max_connections, I'd consider (carefully) putting
pgbouncer in front to cut the number of real backends while the
applications keep their connection count; a minimal sketch follows.
Or you can reduce shared_buffers; if it were me, in your case I'd
probably cut it to 1GB.
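
A minimal pgbouncer.ini sketch of that idea (the database name, paths,
and pool size are illustrative guesses, not recommendations):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; applications can still open up to 350 client connections...
    max_client_conn = 350
    ; ...but postgres only ever sees at most 40 real backends
    default_pool_size = 40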

merlin