Discussion:
PostgreSQL and Windows 2003 DFS Replication
Arnaud Lesauvage
2006-07-27 11:16:55 UTC
Permalink
Hi list !

I am currently deploying two servers (Windows 2003 R2) that will
be used as file servers as well as PostgreSQL servers.

One of the servers will be the main server, the other a backup
server (no load-balancing, just an easy-recovery solution).
The goal is to be able to get back to work quickly after a server
fails (after the main server fails, really, since the backup
server is not otherwise used).

I already configured a high-availability solution for the file
server part by using the built-in DFS Replication service.

I first thought I would use Slony-I to replicate changes to the
main database on the backup server, but I then realized that I
might use DFS Replication for that.
The point is that I am not sure that it will work.

The documentation about DFS Replication is not very forthcoming
(IMHO), and I have too little knowledge of PostgreSQL's file
handling to know whether it will work or not.

I have compiled some information about DFS Replication from
Microsoft's web site. Could you PostgreSQL gurus tell me whether
using this replication mechanism is a good idea or not?
The main advantage for me is that I would not need to configure 2
replication systems (one for the files, one for the DBs). I would
only need to maintain one of them!

Here is the documentation I found (thanks a lot for reading this
far!):

"DFS Replication, the successor to the File Replication service
(FRS) introduced in Windows 2000 Server operating systems, is a
new, state-based, multimaster replication engine that supports
replication scheduling and bandwidth throttling. DFS Replication
uses a new compression algorithm known as remote differential
compression (RDC). RDC is a "diff-over-the-wire" client-server
protocol that can be used to efficiently update files over a
limited-bandwidth network. RDC detects insertions, removals, and
re-arrangements of data in files, enabling DFS Replication to
replicate only the changed file blocks when files are updated.
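For readers unfamiliar with the idea, the "diff-over-the-wire" scheme described above can be sketched as a toy fixed-block diff. This is an illustration of the general technique only, not the actual RDC protocol (which uses content-defined chunking rather than fixed blocks), and all names are made up for the sketch:

```python
# Toy block-level diff in the spirit of RDC: the receiver ships hashes of
# its blocks, and the sender replies with only the blocks that differ.
import hashlib

BLOCK = 4  # toy block size; real RDC chunking is content-defined

def signatures(data: bytes):
    """On the receiver: hash each fixed-size block of the local copy."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta(new: bytes, old_sigs):
    """On the sender: emit (index, block) pairs only for changed blocks."""
    out = []
    for n, i in enumerate(range(0, len(new), BLOCK)):
        block = new[i:i + BLOCK]
        if n >= len(old_sigs) or hashlib.sha256(block).digest() != old_sigs[n]:
            out.append((n, block))
    return out

def apply_delta(old: bytes, changes, new_len: int):
    """On the receiver: patch the changed blocks into the local copy."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for n, block in changes:
        buf[n * BLOCK:n * BLOCK + len(block)] = block
    return bytes(buf[:new_len])

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
d = delta(new, signatures(old))
assert [n for n, _ in d] == [1]           # only the middle block travels
assert apply_delta(old, d, len(new)) == new
```

Only the changed block (plus a small signature exchange) crosses the network, which is why RDC pays off for large, mostly-stable files.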

DFS Replication uses many sophisticated processes to keep data
synchronized on multiple servers. Before you begin using DFS
Replication, it is helpful to understand the following concepts.

* DFS Replication is a multimaster replication engine. Any change
that occurs on one member is replicated to all other members of
the replication group.

* DFS Replication detects changes on the volume by monitoring the
update sequence number (USN) journal, and DFS Replication
replicates changes only after the file is closed.

* DFS Replication uses a staging folder to stage a file before
sending or receiving it. For more information about staging
folders, see Staging folders and Conflict and Deleted folders.

* DFS Replication uses a version vector exchange protocol to
determine which files need to be synchronized. The protocol sends
less than 1 kilobyte (KB) per file across the network to
synchronize the metadata associated with changed files on the
sending and receiving members.

* When a file is changed, only the changed blocks are replicated,
not the entire file. The RDC protocol determines the changed file
blocks. Using default settings, RDC works for any type of file
larger than 64 KB, transferring only a fraction of the file over
the network.

* DFS Replication uses a conflict resolution heuristic of last
writer wins for files that are in conflict (that is, a file that
is updated at multiple servers simultaneously) and earliest
creator wins for name conflicts. Files and folders that lose the
conflict resolution are moved to a folder known as the Conflict
and Deleted folder. You can also configure the service to move
deleted files to the Conflict and Deleted folder for retrieval
should the file or folder be deleted. For more information, see
Staging folders and Conflict and Deleted folders.

* DFS Replication is self-healing and can automatically recover
from USN journal wraps, USN journal loss, or loss of the DFS
Replication database.

* DFS Replication uses a Windows Management Instrumentation (WMI)
provider that provides interfaces to obtain configuration and
monitoring information from the DFS Replication service."
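The version-vector exchange mentioned in the bullets above can be illustrated with a toy model (a sketch of the general idea, not DFS Replication's actual wire format): each member tracks the highest update number it has seen from every other member, and comparing two vectors reveals exactly which updates a peer is missing, without shipping any file data.

```python
# Toy version-vector comparison: a vector maps member name -> highest
# update number seen from that member.
def missing_updates(local: dict, remote: dict):
    """Return {member: (first, last)} update ranges the remote still needs."""
    gaps = {}
    for member, seen in local.items():
        remote_seen = remote.get(member, 0)
        if seen > remote_seen:
            gaps[member] = (remote_seen + 1, seen)
    return gaps

server_a = {"A": 7, "B": 3}   # A holds its own updates 1..7 and B's 1..3
server_b = {"A": 5, "B": 3}   # B has only seen A's updates through 5

assert missing_updates(server_a, server_b) == {"A": (6, 7)}
assert missing_updates(server_b, server_a) == {}
```

Since a vector is just a small map of counters, exchanging it costs far less than 1 KB per file, which is the point of the bullet above.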


Many thanks for your advices on this !

Regards
--
Arnaud


---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
Merlin Moncure
2006-07-27 15:44:43 UTC
Permalink
Post by Arnaud Lesauvage
Hi list !
I am currently deploying two servers (Windows 2003 R2) that will
be used as file servers as well as PostgreSQL servers.
One of the servers will be the main server, the other a backup
server (no load-balancing, just an easy-recovery solution).
The goal is to be able to get back to work quickly after a server
fails (after the main server fails, really, since the backup
server is not otherwise used).
I already configured a high-availability solution for the file
server part by using the built-in DFS Replication service.
I am very suspicious of DFS for this. File-based replication
usually doesn't work for SQL servers because of the complex
interdependencies among the files. It sounds like a fancy rsync and
is very unlikely to guarantee a consistent backup unless all
writes are synchronous.

For a cold/warm standby PostgreSQL backup, I'd suggest using PITR.
It's easy to set up and administer. For a hot read-only backup, bite
the bullet and use Slony.
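The warm-standby suggestion amounts to WAL archiving: the primary continuously ships its finished write-ahead log segments to the standby, which replays them on failover. A minimal sketch of the primary-side setting for the 8.x era on Windows (the server name and share are hypothetical):

```
# postgresql.conf on the primary -- archive each completed WAL segment
# to a share on the backup server (server and share names hypothetical)
archive_command = 'copy "%p" "\\\\backupserver\\wal_archive\\%f"'
```

PostgreSQL substitutes %p with the path of the segment to archive and %f with its file name; the command must return success only if the copy really succeeded.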

merlin

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
Csaba Nagy
2006-07-27 15:57:39 UTC
Permalink
Post by Merlin Moncure
For a cold/warm standby PostgreSQL backup, I'd suggest using PITR.
I found that PITR using WAL shipping does not protect against all
failure scenarios... it will certainly help if the primary machine's
hardware fails, but in one case it was useless for us: the primary
had a Linux kernel with buggy XFS code (that's what I think it was,
because we never found out for sure), we used XFS for the data
partition, and at one point it started to get corruption at the data
page level. The corruption was promptly transferred to the standby,
which was therefore also unusable... we had to recover from a backup,
with the related downtime. Not good for business...
Post by Merlin Moncure
It's easy to set up and administer. For a hot read-only backup, bite
the bullet and use Slony.
I think Slony would have helped us recover from the above-mentioned
situation sooner and more easily, as it transfers logical data rather
than pages directly. It does, however, have a bigger overhead than
WAL shipping in terms of administration and performance.

Cheers,
Csaba.



---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
Arnaud Lesauvage
2006-07-28 09:39:45 UTC
Permalink
Post by Csaba Nagy
Post by Merlin Moncure
For a cold/warm standby PostgreSQL backup, I'd suggest using PITR.
I found that PITR using WAL shipping does not protect against all
failure scenarios... it will certainly help if the primary machine's
hardware fails, but in one case it was useless for us: the primary
had a Linux kernel with buggy XFS code (that's what I think it was,
because we never found out for sure), we used XFS for the data
partition, and at one point it started to get corruption at the data
page level. The corruption was promptly transferred to the standby,
which was therefore also unusable... we had to recover from a backup,
with the related downtime. Not good for business...
Post by Merlin Moncure
It's easy to set up and administer. For a hot read-only backup, bite
the bullet and use Slony.
I think Slony would have helped us recover from the above-mentioned
situation sooner and more easily, as it transfers logical data rather
than pages directly. It does, however, have a bigger overhead than
WAL shipping in terms of administration and performance.
OK, but corruption at the data page level is a very unlikely
event, isn't it?

Regards
--
Arnaud


Merlin Moncure
2006-07-28 20:30:56 UTC
Permalink
Post by Arnaud Lesauvage
Post by Csaba Nagy
I found that PITR using WAL shipping does not protect against all
failure scenarios... it will certainly help if the primary machine's
hardware fails, but in one case it was useless for us: the primary
had a Linux kernel with buggy XFS code (that's what I think it was,
because we never found out for sure), we used XFS for the data
partition, and at one point it started to get corruption at the data
page level. The corruption was promptly transferred to the standby,
which was therefore also unusable... we had to recover from a backup,
with the related downtime. Not good for business...
OK, but corruption at the data page level is a very unlikely
event, isn't it?
Yes, and that is not a PITR problem, that is a data corruption
problem. I am very suspicious that Slony-style replication would
provide any defense against replicating from a machine which is
silently changing bytes from a to b, etc. I think the best defense
against *that* sort of problem would be synchronous replication via
pgpool.

merlin

---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster
Csaba Nagy
2006-07-31 08:35:26 UTC
Permalink
Post by Merlin Moncure
Post by Arnaud Lesauvage
Post by Csaba Nagy
I found that PITR using WAL shipping does not protect against all
failure scenarios... it will certainly help if the primary machine's
hardware fails, but in one case it was useless for us: the primary
had a Linux kernel with buggy XFS code (that's what I think it was,
because we never found out for sure), we used XFS for the data
partition, and at one point it started to get corruption at the data
page level. The corruption was promptly transferred to the standby,
which was therefore also unusable... we had to recover from a backup,
with the related downtime. Not good for business...
OK, but corruption at the data page level is a very unlikely
event, isn't it?
It's not... it just happened to me again, strangely enough on a
Slony replica this time. It might be that the hardware/OS/FS
combination we use is the problem, or that PostgreSQL has some
problem with it (I would rule out Slony being able to produce such
things). But it did happen, and I can't rule out that it will happen
again. This time I hope to be able to investigate more closely.
Post by Merlin Moncure
Yes, and that is not a PITR problem, that is a data corruption
problem. I am very suspicious that Slony-style replication would
provide any defense against replicating from a machine which is
silently changing bytes from a to b, etc. I think the best defense
against *that* sort of problem would be synchronous replication via
pgpool.
When it happened to us, it was a few blocks in some tables, and I
suspect an OS/FS bug. In that case Slony would not propagate the
error; it might propagate bad data, but not the page error itself. So
it might not protect against bad data, but I would be able to switch
over and have a working system immediately, rather than recovering
from yesterday's backup after 8 hours of downtime. Instead of losing
a day's worth of data and being down for 8 hours, I'd be down for 1
minute and have a few bad entries in the DB... for the kind of
application we have here, that is definitely the better scenario.

Cheers,
Csaba.



---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
Arnaud Lesauvage
2006-07-28 09:37:00 UTC
Permalink
Post by Merlin Moncure
Post by Arnaud Lesauvage
I am currently deploying two servers (Windows 2003 R2) that will
be used as file servers as well as PostgreSQL servers.
One of the servers will be the main server, the other a backup
server (no load-balancing, just an easy-recovery solution).
The goal is to be able to get back to work quickly after a server
fails (after the main server fails, really, since the backup
server is not otherwise used).
I already configured a high-availability solution for the file
server part by using the built-in DFS Replication service.
I am very suspicious of DFS for this. File-based replication
usually doesn't work for SQL servers because of the complex
interdependencies among the files. It sounds like a fancy rsync and
is very unlikely to guarantee a consistent backup unless all
writes are synchronous.
OK, I get your point.
Post by Merlin Moncure
For a cold/warm standby PostgreSQL backup, I'd suggest using PITR.
It's easy to set up and administer. For a hot read-only backup, bite
the bullet and use Slony.
A warm backup is just fine for us. I'll look into the PITR option!
Thanks for your advice!

Regards
--
Arnaud


Magnus Hagander
2006-08-18 11:29:34 UTC
Permalink
Post by Arnaud Lesauvage
Hi list !
I am currently deploying two servers (Windows 2003 R2) that will be
used as file servers as well as PostgreSQL servers.
One of the servers will be the main server, the other a backup
server (no load-balancing, just an easy-recovery solution).
The goal is to be able to get back to work quickly after a server
fails (after the main server fails, really, since the backup
server is not otherwise used).
I already configured a high-availability solution for the file
server part by using the built-in DFS Replication service.
I first thought I would use Slony-I to replicate changes to the
main database on the backup server, but I then realized that I
might use DFS Replication for that.
The point is that I am not sure that it will work.
The documentation about DFS Replication is not very forthcoming
(IMHO), and I have too little knowledge of PostgreSQL's file
handling to know whether it will work or not.
I have compiled some information about DFS Replication from
Microsoft's web site. Could you PostgreSQL gurus tell me whether
using this replication mechanism is a good idea or not?
The main advantage for me is that I would not need to configure 2
replication systems (one for the files, one for the DBs). I would
only need to maintain one of them!
To add to this thread, even if it's a bit late:

It is *not* safe to use DFS/FRS replication for your PostgreSQL data
directory. DFS is not synchronous (which means you could lose
committed data, or parts thereof), and it does *not* guarantee write
ordering (which means your database will quite likely end up
completely corrupt if you fail over while anything is happening).
Post by Arnaud Lesauvage
* DFS Replication detects changes on the volume by monitoring the
update sequence number (USN) journal, and DFS Replication
replicates changes only after the file is closed.
This is also a problem - PostgreSQL generally doesn't close its files
until it's really necessary.


//Magnus


