Some time ago, I created a build machine to test out pkg building using poudriere for i386, amd64, and armv6.
The machine was old, with a single disk but 6G of RAM. I ended up using ZFS on it. Now that things are working properly, I wanted to move it to a more appropriate box (12 core, 32G, SSD). Putting an old disk in it to boot from sounded silly. Time to copy the data to the new host. Fortunately, ZFS makes this very easy. The migration is from a single disk to a mirrored pair, but you could just as easily use any layout, including raidz, raidz2, etc.
Things to note: You can’t have 2 zpools of the same name, and you can’t export the pool you are booted from.
Things we’ll be doing:
- Attach disk via USB to new host
- Boot system via USB disk
- Stop (and disable) system services
- Create new zpool
- Install bootblocks
- Snapshot (recursive)
- Zfs send | zfs receive
- Export new zpool
- Reboot using mfsbsd
- Disconnect USB disk
- Import pool (change pool names here if new and old were different, but you want them the same)
- Zpool set bootfs
- Edit rc.conf for things like interface changes
- Reboot on new disk(s)/pool
Attach the old disk
Start by taking the disk from the old system and attaching it via USB to the new system. I have an adapter that offers an additional power plug, so it’s not relying on USB to power the device. This can be important if you’re not using a laptop drive or a CD/DVD drive designed to run on low power. Now boot off this disk (we want to use the existing system to create the new one, so there are no issues of compatibility with ZFS).
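Once booted, quiet the system so the snapshot you take later is consistent. A minimal sketch of the stop-and-disable step, assuming cron and nginx are the services running on your builder (these names are examples; substitute your own):

```shell
# Stop anything that writes to the pool while we copy.
# List currently enabled services with: service -e
service cron stop
service nginx stop
# Disable them so they stay off for the first boot on the new hardware
sysrc cron_enable=NO
sysrc nginx_enable=NO
```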
Creating a new zpool
Now we can create our new zpool. I make mine a mirror. Substitute the disk you want to work on for ${disk}, and the amount of RAM you have for ${ram_size}. If the value is large, you can just use 8 (8G). This is probably sufficient unless your system tends to swap a lot.
zpool labelclear -f /dev/${disk}
gpart destroy -F ${disk}
gpart create -s gpt ${disk}
gpart add -a 4k -t efi -s 100M -l efi0 ${disk}
gpart add -a 4k -t freebsd-boot -s 1M -l boot0 ${disk}
gpart add -a 4k -t freebsd-swap -s ${ram_size}G -l swap0 ${disk}
gpart add -a 4k -t freebsd-zfs -l disk0 ${disk}
gnop create -S 4096 /dev/gpt/disk0
# repeat the commands above for the second disk, using labels efi1, boot1, swap1, and disk1
gmirror label -b prefer swap /dev/gpt/swap0 /dev/gpt/swap1
We’ve now cleared existing partition and label information from the new disks, created new partitioning, and ensured that ZFS will use 4k sectors (all the new disks use 4k natively, so this makes sure we’re being efficient). Lastly, we set up our mirrored swap.
Depending on whether you’re booting UEFI or not, you may not want to create the efi partition above. If that’s the case, don’t try to install the EFI boot blocks further down either, and reduce the bootcode partition index by 1 (instead of -i 2, use -i 1). If in doubt, you can do
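For a BIOS-only layout, the equivalent steps would look roughly like this (a sketch only; with no efi partition, freebsd-boot becomes index 1):

```shell
# Non-UEFI variant: skip the efi partition, so freebsd-boot lands at index 1
gpart add -a 4k -t freebsd-boot -s 1M -l boot0 ${disk}
gpart add -a 4k -t freebsd-swap -s ${ram_size}G -l swap0 ${disk}
gpart add -a 4k -t freebsd-zfs -l disk0 ${disk}
# Later, install only the MBR + gptzfsboot bootcode, pointed at index 1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${disk}
```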
[root@builder ~ ]$ gpart show /dev/mfid0
=>       40  499056560  mfid0  GPT  (238G)
         40     204800      1  efi  (100M)
     204840       2048      2  freebsd-boot  (1.0M)
     206888   67108864      3  freebsd-swap  (32G)
   67315752  431740840      4  freebsd-zfs  (206G)
  499056592          8         - free -  (4.0K)

[root@builder ~ ]$ gpart show /dev/mfid1
=>       40  499056560  mfid1  GPT  (238G)
         40     204800      1  efi  (100M)
     204840       2048      2  freebsd-boot  (1.0M)
     206888   67108864      3  freebsd-swap  (32G)
   67315752  431740840      4  freebsd-zfs  (206G)
  499056592          8         - free -  (4.0K)

[root@builder ~ ]$
to see what your partition table looks like (do this for each disk you’re working with; if it’s a mirror, make sure they’re both treated the same).
zpool create -f -o altroot=/mnt -O compress=lz4 -O atime=off -O checksum=fletcher4 -O canmount=off -m none tank /dev/gpt/disk0.nop
zpool attach tank /dev/gpt/disk0.nop /dev/gpt/disk1.nop
This is where we actually create our zpool. We specify the nop devices to ensure that ZFS sets things up for 4k sectors. After reboot, the nop devices won’t exist, and ZFS will just ‘do the right thing’. I’ve created the new pool as tank, because the old pool is called zroot. We’ll change the name towards the end.
Snapshot and send|receive
If you have existing snapshots, you may want to clear them. If so, you can do that with the following:
List the snapshots (good to check before making them all disappear):
zfs list -H -o name -t snapshot
Make them all disappear:
zfs list -H -o name -t snapshot | xargs -n1 zfs destroy
Now it’s time to snapshot the entire zroot pool. We can do this with:
zfs snapshot -r zroot@migration
The last argument there is of the form ‘pool-name’@‘snapshot-name’. These are freeform; you can call them whatever you want. I chose these because it made sense for what I was trying to do.
Time to use ZFS send to shoot the snapshot data into the new pool:
zfs send -R zroot@migration | zfs recv -F tank
The ‘-R’ sends all of it: the snapshot, and anything that the snapshot depends on. The ‘-F’ forces a rollback of the receiving filesystem to its most recent snapshot before performing the receive operation.
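If the old system keeps running while the first copy completes, an incremental send afterwards can pick up whatever changed, keeping the final downtime short. A sketch, using a hypothetical second snapshot name:

```shell
# Take a second recursive snapshot after the full send finishes,
# then send only the changes between @migration and @migration2
zfs snapshot -r zroot@migration2
zfs send -R -i zroot@migration zroot@migration2 | zfs recv -F tank
```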
Export, then Reboot to mfsbsd
Export the pool:
zpool export -f tank
Time to reboot, boot up with mfsbsd, and log in with root/mfsroot.
Import the new pool with the old-pool name. For me, the new pool is tank, and the old-pool name is zroot.
zpool import -o altroot=/mnt tank zroot
You should now have the zroot pool mounted on /mnt. You can now do
[root@mfsbsd ~ ]$ zfs mount -a
[root@mfsbsd ~ ]$ zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
zroot                                    38.4G   159G    96K  none
zroot/ROOT                               17.5G   159G    96K  none
zroot/ROOT/default                       17.5G   159G  14.1G  /mnt
zroot/freebsd-svn                        5.06G   159G  5.06G  /mnt/svn
zroot/poudriere                          4.39G   159G    96K  none
zroot/poudriere/data                      461M   159G   104K  /mnt/usr/local/poudriere/data
zroot/poudriere/data/.m                  3.18M   159G  3.18M  /mnt/usr/local/poudriere/data/.m
zroot/poudriere/data/cache               3.18M   159G  3.18M  /mnt/usr/local/poudriere/data/cache
zroot/poudriere/data/logs                14.9M   159G  14.9M  /mnt/usr/local/poudriere/data/logs
zroot/poudriere/data/packages             440M   159G   440M  /mnt/usr/local/poudriere/data/packages
zroot/poudriere/data/wrkdirs               96K   159G    96K  /mnt/usr/local/poudriere/data/wrkdirs
zroot/poudriere/jails                    2.77G   159G    96K  none
zroot/poudriere/jails/10_3_0_amd64       1.28G   159G  1.28G  /mnt/usr/local/poudriere/jails/10_3_0_amd64
zroot/poudriere/jails/10_3_0_dns_amd64     96K   159G    96K  /mnt/usr/local/poudriere/jails/10_3_0_dns_amd64
zroot/poudriere/jails/11_0_0_armv6       1.49G   159G  1.49G  /mnt/usr/local/poudriere/jails/11_0_0_armv6
zroot/poudriere/ports                    1.17G   159G    96K  none
zroot/poudriere/ports/default            1.17G   159G  1.17G  /mnt/usr/local/poudriere/ports/default
zroot/tmp                                41.7M   159G  41.5M  /mnt/tmp
zroot/usr                                11.4G   159G    96K  /mnt/usr
zroot/usr/home                           5.56G   159G  5.56G  /mnt/usr/home
zroot/usr/ports                          3.54G   159G  3.05G  /mnt/usr/ports
zroot/usr/src                            2.34G   159G  1.17G  /mnt/usr/src
zroot/var                                16.0M   159G    96K  /mnt/var
zroot/var/audit                            96K   159G    96K  /mnt/var/audit
zroot/var/crash                            96K   159G    96K  /mnt/var/crash
zroot/var/log                            15.5M   159G  15.3M  /mnt/var/log
zroot/var/mail                            160K   159G    96K  /mnt/var/mail
zroot/var/tmp                             160K   159G    96K  /mnt/var/tmp
[root@mfsbsd ~ ]$
Now our pool has been renamed. We need to set the bootfs parameter.
zpool set bootfs=zroot/ROOT/default zroot
Install the bootcode. Again, if you’re not doing EFI, don’t try to install the EFI bootcode.
cd /mnt
gpart bootcode -p boot/boot1.efifat -i 1 ${disk}
gpart bootcode -b boot/pmbr -p boot/gptzfsboot -i 2 ${disk}
Last changes
Now it’s time to make any last changes we want before booting our “new” system. Don’t forget to prefix your paths with ‘/mnt’. Things you might want to change are ifconfig_ variables. Services should already be disabled, but double-check and disable any that have been left on.
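With the pool mounted under /mnt, sysrc can edit the migrated rc.conf directly. A sketch, assuming the new box’s NIC is igb0 (substitute your own interface name and address settings):

```shell
# Point sysrc at the migrated rc.conf rather than the live one
sysrc -f /mnt/etc/rc.conf ifconfig_igb0="DHCP"
# Look for stale entries that still reference the old interface
grep ifconfig_ /mnt/etc/rc.conf
```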
Export the zpool and reboot to the new pool:
zpool export zroot
reboot
Make sure you don’t boot mfsbsd again: remove the media, or choose to boot off the internal disk. (You may need to configure the BIOS to boot UEFI if you haven’t already and you performed the UEFI partitioning steps.)
If all goes well, it will boot to the new system successfully. Don’t forget to enable the services we disabled in the beginning.
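Re-enabling mirrors the stop-and-disable step from the beginning; assuming cron and nginx were the services you turned off (again, example names only):

```shell
# Turn the services back on and start them on the new system
sysrc cron_enable=YES
sysrc nginx_enable=YES
service cron start
service nginx start
```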