March 01, 2017
Restoring Data with ZFS Snapshots
Previously, I wrote about ZFS storage pools and filesystems, storage arrays, and storage locations within the arrays, and how you can do some cool stuff with them like quick drive replacements. However, the best part of ZFS, in my opinion, is its ability to perform snapshots. Snapshots are point-in-time, read-only versions of the filesystem. Think of them as a picture of whatever is inside of the filesystem’s mountpoint at the time the snapshot was requested. Those snapshots can then be “restored” as a synthetic full image and used in a variety of ways.
Create a filesystem:
CODE: sudo zfs create iCanHazPool/yesYouCan
Create a snapshot with no data inside the filesystem:
CODE: sudo zfs snapshot iCanHazPool/yesYouCan@snap1
List out the filesystems and snapshots (otherwise known as children) for the yesYouCan filesystem:
CODE: sudo zfs list -t all -r iCanHazPool/yesYouCan
Start adding data into the filesystem’s mountpoint. In this example the mountpoint is /iCanHazPool/yesYouCan.
CODE: sudo dd if=/dev/random of=/iCanHazPool/yesYouCan/myRandomData bs=1M count=4096
Run the list command again.
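Running the same list command again should now show the snapshot holding a small amount of space. A sketch of what the output might look like (the AVAIL column is illustrative; your sizes will differ):

```shell
# List the filesystem and its snapshots again, now that data exists.
sudo zfs list -t all -r iCanHazPool/yesYouCan

# Illustrative output (sizes will vary on your system):
# NAME                           USED  AVAIL  REFER  MOUNTPOINT
# iCanHazPool/yesYouCan         1.81G   196G  1.81G  /iCanHazPool/yesYouCan
# iCanHazPool/yesYouCan@snap1   16.1K      -  30.4K  -
```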
Whaaaaa, why is the snapshot storing data even though it was taken while the mountpoint had nothing inside? Well, snapshots store no data when they are first created. As data changes inside the filesystem’s mountpoint, the snapshot has to remember how to revert the mountpoint to the state it was in when the snapshot was taken. So, in the example above, ‘snap1’ has to store 16.1 KB of information (in this case really just metadata) to understand how to get back from 1.81 GB of used space to 30.4 KB, i.e. the REFER values.
Take another snapshot and list it out again.
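That step would look along these lines (‘snap2’ is the name this post uses for the second snapshot):

```shell
# Take a second snapshot, then list everything out again.
sudo zfs snapshot iCanHazPool/yesYouCan@snap2
sudo zfs list -t all -r iCanHazPool/yesYouCan
```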
Notice that the most recent snapshot again has no used space.
Let’s delete some data within /iCanHazPool/yesYouCan/ and see how snapshots can be used.
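Deleting the file created earlier might look like:

```shell
# Remove the random-data file from the filesystem's mountpoint.
sudo rm /iCanHazPool/yesYouCan/myRandomData
```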
List out the filesystem again.
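After the delete, the listing should show the space shifting into ‘snap2’. A sketch of the output (the AVAIL column is illustrative; your sizes will differ):

```shell
# List the filesystem and snapshots after the delete.
sudo zfs list -t all -r iCanHazPool/yesYouCan

# Illustrative output (sizes will vary on your system):
# NAME                           USED  AVAIL  REFER  MOUNTPOINT
# iCanHazPool/yesYouCan         1.81G   196G  30.4K  /iCanHazPool/yesYouCan
# iCanHazPool/yesYouCan@snap1   16.1K      -  30.4K  -
# iCanHazPool/yesYouCan@snap2   1.81G      -  1.81G  -
```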
Come with me down the rabbit hole. After deleting the myRandomData file in /iCanHazPool/yesYouCan/, ‘snap2’ increased in size from 0 KB to exactly the amount of space that was removed from the mountpoint, 1.81 GB. You will notice that there was no actual decrease in used space for the overall filesystem, because it is the snapshot’s job to store whatever it needs to get back to what the mountpoint looked like at the time ‘snap2’ was taken.
That is all well and good, but why do you care? Because you can restore the data from the ‘snap2’ point in time, of course.
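The restore itself is a one-liner:

```shell
# Roll the filesystem back to the 'snap2' point in time.
# Caution: a rollback discards any changes made after that snapshot,
# and a plain rollback only works to the most recent snapshot
# (use -r to also destroy any snapshots newer than the target).
sudo zfs rollback iCanHazPool/yesYouCan@snap2
```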
The above is the rollback function in ZFS. Previously, I deleted the data that existed in the mountpoint, but ‘snap2’ had captured what that mountpoint looked like before the deletion. When the data was removed from the mountpoint, ‘snap2’ grew to make sure it could still refer back to that point in time. Rolling back to ‘snap2’ moved that information back into the correct location and shrank the snapshot. The storage used in the overall pool did not increase or decrease with the rollback, because every change was simply re-referenced (or stored) within the snapshot as it occurred.
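Listing the filesystem once more should show ‘snap2’ back down to storing (nearly) nothing. A sketch of the output (the AVAIL column is illustrative; your sizes will differ):

```shell
# List the filesystem and snapshots after the rollback.
sudo zfs list -t all -r iCanHazPool/yesYouCan

# Illustrative output (sizes will vary on your system):
# NAME                           USED  AVAIL  REFER  MOUNTPOINT
# iCanHazPool/yesYouCan         1.81G   196G  1.81G  /iCanHazPool/yesYouCan
# iCanHazPool/yesYouCan@snap1   16.1K      -  30.4K  -
# iCanHazPool/yesYouCan@snap2      0B      -  1.81G  -
```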
Wait, what just happened here? Remember that when snapshots are first taken they store no data. After the rollback, the mountpoint once again matches what ‘snap2’ captured, so ‘snap2’ drops back to storing almost nothing. Keep writing to the filesystem and let it run for a bit, and the snapshots will start accumulating space again as the data diverges.
This just scratches the surface of what can be done with ZFS, and there are a number of other ways snapshots can be used. For now, though, try this filesystem out yourself; you will reinforce what you have learned by doing. Also, if you are a partner, at least take a look at a device’s terminal, but you should probably not affect anything on production data without testing first (just a little CYA for me).