Continuing our series of short posts and cool tips, here’s another one for your arsenal.
You’re probably familiar with zfs and its ability to send and receive dataset snapshots. Did you also know that it can send incremental snapshots, so you can keep your remote backup up to date with minimal transfer? Or how about an easy way to sync data from an old, about-to-be-retired server to a new 128-core one?
What you will need:
- source server running zfs
- destination server running zfs
- root access on both of these servers
- a network between the two servers, preferably a high-bandwidth one (add more links for redundancy or aggregation)
Start by making a snapshot of the source dataset.
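Something along these lines (the dataset `tank/srv` is taken from the error message later in this post; the snapshot name `backup1` is just a placeholder):

```shell
# On the source server: create a point-in-time snapshot of the dataset.
zfs snapshot tank/srv@backup1
```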
Prepare a zfs receive process on the remote server. Note that we’re using nc(1) here to send and receive the stream across an encrypted VPN tunnel. If your servers are connected over the public Internet, you should use ssh here instead. The -I option to netcat specifies the size of the TCP receive buffer.
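A sketch of the receive side, using the BSD netcat syntax (the port 8023 and the 1 MB buffer size are placeholders — pick your own):

```shell
# On the destination server: listen on an arbitrary TCP port and pipe
# the incoming stream straight into zfs receive. -l puts nc in listen
# mode; -I sets the TCP receive buffer size in bytes.
nc -l -I 1048576 8023 | zfs receive tank/srv
```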
Back on the source server, start the transfer and pipe it into nc(1). Remember to specify a matching TCP send buffer size and also add an IPv4 TOS value to maximise throughput.
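For example (the destination hostname, port, and buffer size are placeholders; `throughput` is one of the TOS keywords BSD netcat accepts for -T):

```shell
# On the source server: stream the snapshot to the waiting nc process.
# -O sets the TCP send buffer, matching the receiver's -I; -T sets the
# IPv4 type-of-service field to favour bulk throughput.
zfs send tank/srv@backup1 | \
    nc -O 1048576 -T throughput destination-server 8023
```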
So that took care of the initial backup. From now on, you can send incremental snapshots only. You first take a new snapshot of your dataset and then pass a -i option to zfs send to indicate what the previous snapshot for this dataset was.
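Continuing with the placeholder names from above, the incremental run looks like this:

```shell
# Take a fresh snapshot, then send only the changes made since backup1.
zfs snapshot tank/srv@backup2
zfs send -i tank/srv@backup1 tank/srv@backup2 | \
    nc -O 1048576 -T throughput destination-server 8023
```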
The zfs receive command on the remote server needs no modification. zfs will realise it’s receiving an incremental backup and, provided the destination filesystem exists, will do the right thing.
Note that if the destination filesystem was modified (which can happen if you have atime updates enabled on it), you will get the following error: cannot receive new filesystem stream: destination 'tank/srv' exists. must specify -F to overwrite it. Do as it suggests, and don’t worry: it won’t cause the entire filesystem to be transferred from scratch.
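That is, on the destination (same placeholder port and buffer size as before), -F just rolls back the stray local changes before applying the incremental stream:

```shell
# -F discards modifications made to the destination since the last
# received snapshot, then applies the incremental stream as usual.
nc -l -I 1048576 8023 | zfs receive -F tank/srv
```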