
ZFS Troubles

I have long been convinced that if you care about your 'digital assets' (such as 1TB+ of FLAC music files or multiple terabytes of family photographs), you need to store them on a file system with built-in 'bit-rot detection' capabilities. Basically, this means a file system which checksums your data as it is written and verifies those checksums whenever the data is read back: any difference means the data has silently changed. The file system should also have redundancy capabilities that allow it, at that point, to retrieve a known-good copy of the data from its redundancy store and repair the damage.
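To make the idea concrete, here's a minimal Python sketch of the checksum-on-write, verify-on-read principle. It works at the file level with a separate manifest, whereas a real file system like ZFS checksums every block internally and repairs bad blocks from its redundant copies automatically; the /srv/music path is just a hypothetical example.

<code python>
import hashlib
import json
from pathlib import Path

# Toy illustration only: a real file system like ZFS checksums every
# block internally and repairs bad blocks from redundant copies.
# Here we merely record one SHA-256 per file and re-verify later.

MANIFEST = Path("checksums.json")

def record_checksums(directory):
    """The 'write' side: store a checksum for every file found."""
    sums = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(directory).rglob("*") if p.is_file()
    }
    MANIFEST.write_text(json.dumps(sums, indent=2))

def verify_checksums():
    """The 'read' side: recompute and compare; a mismatch means bit-rot."""
    sums = json.loads(MANIFEST.read_text())
    return [
        path for path, digest in sums.items()
        if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest
    ]

if __name__ == "__main__":
    record_checksums("/srv/music")   # hypothetical data directory
    damaged = verify_checksums()
    print("Changed or rotted files:", damaged or "none")
</code>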

Windows does not have such a file system, unless you count **ReFS**, which was introduced in about 2012… and which was removed from Home and Pro versions of Windows 10 in 2017. (That is to say, you can't create new ReFS volumes in today's Windows 10, though it can still read old ReFS volumes created in earlier versions). It is primarily for this reason that I believe Windows is inappropriate to use as an operating system for the bulk storage of important digital assets.

Linux, however, has at least two bit-rot-resistant file systems: Btrfs and ZFS. They are not without their issues, however! For starters, Btrfs was originally developed by Oracle (which rings alarm bells for many!) and had a long gestation (development started in 2007) before being generally regarded as stable. An unstable file system is definitely not something you want to trust your digital assets to! And even if Btrfs is now regarded as mostly stable, it has to live with the fact that, whilst SUSE made it the default file system for SUSE Linux Enterprise Server in 2015, Red Hat declared in 2017 that it would not be supported in future releases of RHEL. I think the 'general mood' in the Linux world, if you can ever say such a thing exists, is that betting on Btrfs in the long term is not a great idea.

Which leaves ZFS. And indeed, I've used ZFS exclusively for my bulk storage for many years, with a variety of Linux distros, ranging from Fedora to CentOS, Manjaro, Arch and OpenSuse. It's generally not a great idea to use it with a bleeding-edge distro like Fedora, since kernel changes between versions can render the ZFS kernel modules unloadable. You're then dependent on the good folk at the ZFS on Linux project to catch up and release a newer version of their modules before everything works once more… and until they do, your data is inaccessible, which isn't much use to anyone! But stick to a slower-release Linux, such as CentOS or an Ubuntu LTS version, and ZFS works reliably and well. Except… except… Politics!

ZFS was originally developed by Sun (which was, in due course, bought by Oracle… are your alarm bells ringing yet?!). Sun did open-source it, making it available as part of OpenSolaris in 2005. However, Sun released it under the “CDDL” open source license, which is incompatible with the GPL license used by the Linux kernel developers. They apparently did this deliberately, to stop ZFS being made part of Linux, with which they were competing at the time. ZFS cannot, therefore, be part of the mainstream Linux kernel development process… and that fact has now come back to bite ZFS hard.

The short version of the unfolding ZFS on Linux bad news story is that ZFS on Linux relies on some functionality exposed by the kernel via two long-deprecated functions: the __kernel_fpu_begin() and __kernel_fpu_end() pair, which ZFS on Linux uses for SIMD-accelerated checksum and parity calculations. Preparing for the imminent release of the new version 5.0 Linux kernel, the kernel developers finally (after more than 10 years) removed those two functions. That breaks ZFS on Linux at a stroke, and means its developers now need to code workarounds, which quite possibly won't perform as well as the original code that used the now-removed functions.

Performance isn't exactly an issue when you're using ZFS as a bulk storage file system only: it's not as if you are writing to it hundreds of thousands of times a day, after all. But the fact that the issue arises at all illustrates the danger of living outside the kernel main line: the kernel developers can do things which break your code without having any responsibility for the breakage (and thus no responsibility to fix it). If your code is out of the main line, the breakage is yours to fix, after it has occurred.

For the moment, none of this really concerns me: I'm running ZFS under CentOS 7, which still uses a version 3 kernel (and will do so until its end of life in 2024). And in five years' time, I think we'll know the state of play of ZFS on Linux 5.x much better than we do now.

In the meantime, knowing that my CentOS 7 boxes would one day be end-of-life, I had anyway been considering a switch to FreeBSD, which has had integrated ZFS support out-of-the-box for years. Funnily enough, the FreeBSD developers recently decided to rebase their ZFS code on the ZFS on Linux code base. That won't break ZFS on FreeBSD, though, because the FreeBSD kernel is not the Linux kernel: whilst the Linux 5.x kernel changes break ZFS on Linux, the same code will carry on working fine on the FreeBSD kernel.

The other thing I already do, incidentally, is run a server on OpenIndiana. That's basically a version of open source Solaris, so it's genuinely Unix-y and, as a clone of Solaris, supports the Solaris version of ZFS natively. Again, as a bulk data server, I don't actually have to interact with that server's operating system very much (basically, to check the zpools are healthy once in a while!), so the fact that it behaves rather differently from Linux isn't a major problem for me.
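That occasional health check is easy to script, incidentally. Here's a small Python sketch of the sort of thing I mean, assuming the standard zpool utility is on the PATH; with the -x flag, zpool status reports only on pools that have problems.

<code python>
import subprocess

def pools_healthy():
    """True if 'zpool status -x' reports no problems.

    With -x, zpool prints status only for pools that have issues,
    and the single line 'all pools are healthy' otherwise (wording
    as printed by current ZFS releases; check what yours says).
    """
    result = subprocess.run(["zpool", "status", "-x"],
                            capture_output=True, text=True)
    return "all pools are healthy" in result.stdout

if __name__ == "__main__":
    if pools_healthy():
        print("OK: all pools are healthy")
    else:
        # Print the full status so a cron mail shows what's wrong.
        full = subprocess.run(["zpool", "status"],
                              capture_output=True, text=True)
        print(full.stdout)
</code>

Run something like that from cron and you only hear about your pools when one of them actually needs attention.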

Anyway: the moral of this story is perhaps to be a little wary of ZFS on Linux. Maybe it's also to be prepared to learn more than one operating system. And, possibly, to shop around for the very best protection for those precious digital assets you need to keep safe: don't complacently settle on one O/S, file system or storage solution!
