It's Friday afternoon. You're three hours into a distribution upgrade when the package manager bails on a dependency conflict. Half your packages are at the new version, half are stuck at the old one, and your production monitoring stack is down. With ext4, you'd be staring at a long night of manual recovery. With Btrfs, you type one command, reboot, and you're back at the state from ten minutes ago -- upgrade undone, services running, weekend intact.

That's the promise of Btrfs snapshots, and when configured properly, it's a promise the filesystem actually keeps. This guide walks through the mechanics of how Btrfs snapshots work at the filesystem level, how to set up automated snapshot management for real systems, and how to build a full rollback and offsite backup pipeline that can save you when things go sideways.

How Copy-on-Write Makes Snapshots Free

To understand why Btrfs snapshots are so fast and space-efficient, you need to understand the copy-on-write (CoW) mechanism at the heart of the filesystem. Every piece of data and metadata in Btrfs lives in B-tree structures. When you modify a file, Btrfs doesn't overwrite the existing data blocks. Instead, it writes the new data to a fresh location on disk and updates the B-tree pointers to reference the new blocks. The old blocks remain untouched until nothing references them anymore.

This is fundamentally different from traditional filesystems like ext4, which overwrite data in place. The CoW approach means that at any moment, the filesystem contains a complete, consistent tree of pointers to all your data. A snapshot is simply a second pointer to that same tree root. According to the official Btrfs documentation, creating a snapshot is instantaneous because it only requires creating a new tree root copy in the metadata -- no data is copied at all.

Note

A Btrfs snapshot is technically a subvolume. The only difference between a regular subvolume and a snapshot is that a snapshot starts with an initial copy of another subvolume's content. After creation, both are independent -- modifications to one don't affect the other. This is why btrfs subvolume delete is the same command you use to remove both snapshots and subvolumes.

Here's what makes this so efficient: if you have 50 GB of data and take a snapshot, your total disk usage is still roughly 50 GB. The snapshot and the original share all of their data blocks. Only when you modify a file does the filesystem allocate new space -- and only for the changed blocks, not the entire file. A Fedora Magazine article on the topic described this particularly well: rather than scanning the source and destination to find differences the way tools like rsync do, Btrfs tracks changes as a function of its CoW behavior across snapshot generations, making difference computation essentially instant even on terabyte-scale volumes.
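
You can watch this sharing happen with btrfs filesystem du, which breaks usage into exclusive and shared space. A minimal sketch, assuming a scratch Btrfs filesystem mounted at /mnt/pool (all paths here are illustrative):

```shell
# Create a subvolume and write ~1 GiB into it
btrfs subvolume create /mnt/pool/data
dd if=/dev/urandom of=/mnt/pool/data/blob bs=1M count=1024

# Snapshot it -- returns instantly, no data is copied
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-snap

# Nearly all blocks are shared between the two; exclusive usage is ~0
btrfs filesystem du -s /mnt/pool/data /mnt/pool/data-snap
```

Modify a few blocks in /mnt/pool/data afterwards and re-run the last command: only the changed blocks move into the Exclusive column.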

When Chris Mason, the principal author of Btrfs, designed the filesystem at Oracle in 2007, this CoW B-tree architecture was central to the entire concept. As he explained in a 2008 interview with InternetNews.com, the goal was to let Linux scale for the storage demands of the future while being straightforward to administer and manage. Snapshots weren't an afterthought bolted on -- they are the foundation of how Btrfs handles transactions internally.

Subvolume Layout: Getting the Foundation Right

Before you can use snapshots effectively, your filesystem needs the right subvolume layout. The layout determines what gets included in a snapshot, what gets excluded, and how cleanly you can perform a rollback. Getting this wrong is the single most common source of snapshot-related headaches.

The key principle: a snapshot of a subvolume does not include nested subvolumes. When you snapshot @ (your root subvolume), any subvolume mounted inside it -- like @home or @var_log -- appears as an empty directory in the snapshot. This is a feature, not a bug. It means you can roll back your system without also rolling back user data, logs, or database files.

Here's a recommended flat layout used by distributions like openSUSE, Fedora, and CachyOS:

subvolume layout (subvolid=5)
# Top-level (subvolid=5) -- mount this to inspect all subvolumes
toplevel (subvolid=5)
├── @               # mounted at /           (root filesystem)
├── @home           # mounted at /home       (user data, excluded from root snapshots)
├── @snapshots      # mounted at /.snapshots (snapshot storage)
├── @var_log        # mounted at /var/log    (logs persist across rollbacks)
├── @var_cache      # mounted at /var/cache  (package cache, expendable)
└── @var_tmp        # mounted at /var/tmp    (temp files, expendable)

The corresponding /etc/fstab entries look like this. compress=zstd is listed for the subvolumes that benefit from it (root, home, logs), while transient subvolumes like @var_cache and @var_tmp skip it, since their contents are short-lived and the CPU overhead isn't worthwhile. @snapshots skips it too: read-only snapshots receive no new writes, so a compression mount option has nothing to act on. One caveat: on current kernels, most Btrfs mount options -- compress included -- apply to the whole filesystem, and the options of the first mounted subvolume win. Treat the per-line options below as documentation of intent, and use chattr +c on specific directories if you need genuine per-path control:

/etc/fstab
# <device>  <mount>       <type>  <options>                                      <dump> <pass>
UUID=...   /             btrfs   subvol=@,defaults,noatime,compress=zstd          0      0
UUID=...   /home         btrfs   subvol=@home,defaults,noatime,compress=zstd      0      0
UUID=...   /.snapshots   btrfs   subvol=@snapshots,defaults,noatime               0      0
UUID=...   /var/log      btrfs   subvol=@var_log,defaults,noatime,compress=zstd   0      0
UUID=...   /var/cache    btrfs   subvol=@var_cache,defaults,noatime               0      0
UUID=...   /var/tmp      btrfs   subvol=@var_tmp,defaults,noatime                 0      0
Pro Tip

Store your snapshots in a subvolume at the top level (like @snapshots), not nested inside the root subvolume. This way, when you replace @ during a rollback, your snapshots survive the operation. The Arch Wiki's Snapper article documents this layout in detail and explains why Snapper's default nested .snapshots directory can cause problems during rollbacks.

Why separate @var_log? Because logs are the first thing you need after a failed upgrade. If your logs are part of the root snapshot, rolling back to a pre-upgrade state also rolls back the logs that would tell you what went wrong. The SUSE Linux Enterprise documentation explicitly requires /var/log on its own subvolume for this reason, and distributions like CachyOS and openSUSE follow the same convention.

Creating and Managing Snapshots Manually

Before diving into automation tools, it's worth understanding the raw btrfs commands. Everything that Snapper and btrbk do under the hood reduces to these primitives.

Create a read-only snapshot of your root filesystem:

# btrfs subvolume snapshot -r / /.snapshots/@_2026-02-08_pre-upgrade

The -r flag makes the snapshot read-only, which is important for two reasons: it prevents accidental modification, and it's a prerequisite for using btrfs send to transfer the snapshot to another disk or host. Without -r, the snapshot is writable by default -- useful for testing changes in an isolated environment, but not what you want for backup purposes.
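
The read-only flag is enforced by the kernel, not by convention, and you can verify it through the property interface. Using the snapshot created above:

```shell
# Any write attempt into a read-only snapshot fails
touch /.snapshots/@_2026-02-08_pre-upgrade/marker
# touch: cannot touch '...': Read-only file system

# Confirm the flag explicitly
btrfs property get /.snapshots/@_2026-02-08_pre-upgrade ro
# ro=true
```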

List all subvolumes to see what you've created:

terminal
# btrfs subvolume list -t /
ID      gen     top level  path
--      ---     ---------  ----
256     485     5          @
257     483     5          @home
258     480     5          @snapshots
259     479     5          @var_log
264     485     258        @snapshots/@_2026-02-08_pre-upgrade

Inspect the details of a specific snapshot:

terminal
# btrfs subvolume show /.snapshots/@_2026-02-08_pre-upgrade
@snapshots/@_2026-02-08_pre-upgrade
        Name:                   @_2026-02-08_pre-upgrade
        UUID:                   7a3e9d14-28f1-4c8a-b192-3f7e1d8c5a02
        Parent UUID:            a1b2c3d4-5678-9012-ef34-567890abcdef
        Creation time:          2026-02-08 09:15:22 +0000
        Subvolume ID:           264
        Generation:             485
        Flags:                  readonly

The Parent UUID field links this snapshot to the subvolume it was taken from. This parent-child relationship is what enables incremental btrfs send later. Delete a snapshot when you no longer need it:

# btrfs subvolume delete /.snapshots/@_2026-02-08_pre-upgrade

Deletion doesn't happen instantly. Btrfs adds the subvolume to a cleanup queue, and the cleaner thread processes it in the background, freeing shared blocks only when no other snapshot references them.
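
In scripts that delete snapshots and then check free space, this asynchrony matters: the space isn't back yet when the delete command returns. btrfs subvolume sync blocks until the cleaner has finished processing queued deletions:

```shell
# Queue the deletion, then wait for the cleaner thread to reclaim the space
btrfs subvolume delete /.snapshots/@_2026-02-08_pre-upgrade
btrfs subvolume sync /

# Only now does free-space accounting reflect the deletion
btrfs filesystem usage /
```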

Automated Snapshots with Snapper

Manual snapshots are fine for one-off operations, but production systems need automation. Snapper, developed by Arvin Schnell at openSUSE, is the de facto standard for automated Btrfs snapshot management on Linux. It handles creation, cleanup, and retention policies, and integrates with package managers to take pre/post snapshots around every system change.

Install and create a configuration for the root filesystem:

terminal
# Install snapper (Debian/Ubuntu)
$ sudo apt install snapper

# Create a configuration for the root filesystem
$ sudo snapper -c root create-config /

# Verify the configuration
$ sudo snapper list-configs
Config | Subvolume
-------+----------
root   | /

The configuration lives at /etc/snapper/configs/root. The critical settings to tune are the timeline variables that control how many snapshots are retained across different time periods:

/etc/snapper/configs/root
# Enable timeline snapshots (hourly automatic snapshots)
TIMELINE_CREATE="yes"

# Enable cleanup so old snapshots are automatically pruned
TIMELINE_CLEANUP="yes"

# Retention policy: how many snapshots to keep per time period
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"

# Limit total snapshot disk usage to 50% of the filesystem
SPACE_LIMIT="0.5"

# Keep at least 20% of the filesystem free
FREE_LIMIT="0.2"

# Use the 'number' cleanup algorithm for pre/post snapshots
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="10"

Enable the systemd timers that drive the automation:

$ sudo systemctl enable --now snapper-timeline.timer snapper-cleanup.timer

Once the timers are running, get in the habit of checking space consumption regularly. The standard df command doesn't account for shared CoW blocks accurately. Use btrfs filesystem usage / for a complete view of data, metadata, and unallocated space, and btrfs filesystem du -s /.snapshots to see how much space your snapshots are consuming exclusively versus sharing with the live filesystem.

Warning

Keep your snapshot count conservative. The CachyOS wiki recommends a maximum of 10 snapshots for the root filesystem. Accumulating dozens of snapshots on a busy system leads to significant metadata overhead, and can cause noticeable slowdowns during balance operations and mounts. If you're running a filesystem that sees heavy writes, aggressive retention policies will cost you in performance.

Pre/Post Snapshots with Package Managers

On Arch-based distributions, the snap-pac package automatically triggers Snapper pre/post snapshots around every pacman transaction. On openSUSE, this integration is built into zypper by default. On Debian and Ubuntu, Snapper ships with apt hooks (/etc/apt/apt.conf.d/80snapper) that automatically create pre/post snapshot pairs around apt transactions once Snapper is installed and configured. On Fedora, dnf-plugin-snapper provides similar integration for dnf.

The pre/post snapshot pair lets you inspect exactly what changed during an operation:

terminal
# List snapshots -- notice the pre/post pairs
$ sudo snapper list
 # | Type   | Pre # | Date                     | Cleanup  | Description
---+--------+-------+--------------------------+----------+-------------------
 0 | single |       |                          |          | current
 1 | single |       | Sat 08 Feb 2026 09:00:01 | timeline | timeline
 5 | pre    |       | Sat 08 Feb 2026 10:12:33 | number   | apt install nginx
 6 | post   |     5 | Sat 08 Feb 2026 10:12:58 | number   | apt install nginx

# Show which files changed between pre (5) and post (6)
$ sudo snapper status 5..6
+..... /etc/nginx/nginx.conf
+..... /usr/sbin/nginx
c..... /var/lib/dpkg/status

# Undo just that operation (reverts changes from 5..6)
$ sudo snapper undochange 5..6

The undochange command is surgical -- it only reverts the specific file-level changes between two snapshots, without requiring a full system rollback. This is extremely powerful for backing out a single package installation while keeping everything else in place.

Performing a Full System Rollback

When you need to completely revert your root filesystem to a previous state -- say, after a catastrophic upgrade failure -- a full rollback is what you want. The general strategy involves replacing the current root subvolume with a snapshot. There are two primary approaches.

Method 1: Boot into a Snapshot via GRUB

Tools like grub-btrfs automatically detect your Btrfs snapshots and add them to the GRUB boot menu. When you boot into a snapshot, the system mounts it read-only (with an overlayfs layer for temporary writes on some configurations). From there, you can assess the state and decide whether to make the rollback permanent.

terminal
# Install grub-btrfs (Arch example)
$ sudo pacman -S grub-btrfs

# Enable the daemon that watches /.snapshots and regenerates
# GRUB entries when snapshots are created or deleted
$ sudo systemctl enable --now grub-btrfsd

# Manually regenerate if needed
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
Note

grub-btrfs works best on distributions with standard GRUB configurations like Arch and openSUSE. On Ubuntu and Debian, GRUB's configuration structure differs, and you may need to manually adjust the grub-btrfs configuration or build it from source. Additionally, if /boot is on a separate non-Btrfs partition (such as an EFI System Partition), the kernel and initramfs images won't be included in your Btrfs snapshots -- you'll need to handle /boot backups separately.
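
A common workaround is to copy /boot into the root subvolume after every kernel change, so that subsequent snapshots capture matching kernels. Sketched here as a pacman hook (the hook filename and backup path are assumptions; the same idea works with an apt or dnf hook):

```ini
# /etc/pacman.d/hooks/95-bootbackup.hook (hypothetical name)
[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Path
Target = usr/lib/modules/*/vmlinuz

[Action]
Description = Backing up /boot into the root subvolume...
When = PostTransaction
Exec = /usr/bin/rsync -a --delete /boot/ /.bootbackup/
```

After a rollback, the kernels matching the restored root live in /.bootbackup and can be copied back onto the real /boot partition.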

Method 2: Manual Subvolume Replacement

This is the most reliable method and works on any distribution. The idea is simple: mount the top-level subvolume, move the broken root out of the way, and replace it with a read-write snapshot of the good state.

rollback procedure (from live USB or rescue shell)
# 1. Mount the top-level subvolume (subvolid=5)
# mount -t btrfs -o subvolid=5 /dev/sda2 /mnt

# 2. Verify what's there
# ls /mnt
@  @home  @snapshots  @var_log  @var_cache  @var_tmp

# 3. Move the broken root out of the way
# mv /mnt/@ /mnt/@_broken_2026-02-08

# 4. Create a read-write snapshot from the good snapshot
# btrfs subvolume snapshot /mnt/@snapshots/@_2026-02-08_pre-upgrade /mnt/@

# 5. Unmount and reboot
# umount /mnt
# reboot

# 6. After confirming everything works, clean up
# sudo mount -t btrfs -o subvolid=5 /dev/sda2 /mnt
# sudo btrfs subvolume delete /mnt/@_broken_2026-02-08
# sudo umount /mnt
Caution

Never use btrfs property set to flip a read-only snapshot to read-write for rollback purposes. On a snapshot created by btrfs receive, this leaves the received_uuid field in place while the data diverges from it, which silently breaks incremental btrfs send for any backup pipeline relying on that snapshot as a parent. Always create a new read-write snapshot using btrfs subvolume snapshot (without -r) as shown above. The btrbk documentation explicitly warns about this pitfall.

Step 4 is instantaneous regardless of your filesystem size because of the CoW mechanism described earlier. You're not copying 50 GB of data -- you're creating a new B-tree root that points to the same blocks. The only overhead is metadata.

Offsite Backups with btrbk and btrfs send/receive

Snapshots protect you from software failures, but they don't protect you from disk failures. Every snapshot lives on the same physical device as the original data. If the drive dies, all your snapshots die with it. The official Btrfs documentation states this clearly: a snapshot shares its data blocks with the original, so if those blocks are damaged, the snapshot is damaged too.

btrbk (Btrfs Backup Tool) is a purpose-built utility for creating snapshots and transferring them incrementally to a separate Btrfs volume, either locally (external drive) or remotely (over SSH). It uses btrfs send/receive under the hood, which streams only the block-level differences between two snapshots -- far more efficient than file-level tools like rsync.

Local Backup to an External Drive

Assume your system volume is mounted at / with the top-level accessible at /mnt/btr_pool, and an external backup drive is mounted at /mnt/backup:

/etc/btrbk/btrbk.conf
# Global settings
transaction_log         /var/log/btrbk.log
lockfile                /run/lock/btrbk.lock
stream_buffer           256m
timestamp_format        long

# Only create snapshots when the subvolume has actually changed
snapshot_create         onchange

# Local snapshot retention: keep 48 hours + 14 daily
snapshot_preserve_min   latest
snapshot_preserve       48h 14d

# Backup retention: 14 daily, 5 weekly, 6 monthly
target_preserve_min     latest
target_preserve         14d 5w 6m

# Volume definition
volume /mnt/btr_pool
  snapshot_dir          .btrbk_snapshots
  target                /mnt/backup/myhost

  # Back up root and home subvolumes
  subvolume             @
  subvolume             @home

Run it with a dry-run first, then execute:

terminal
# Dry run -- shows what would happen without touching disk
$ sudo btrbk -n -v run

# Execute the backup
$ sudo btrbk run

# Check the summary
$ sudo btrbk list snapshots
$ sudo btrbk list backups

Remote Backup over SSH

For offsite backups, btrbk can transfer snapshots to a remote host over SSH. The critical security consideration is that btrfs receive needs root access on the target. btrbk ships with an SSH filter script (ssh_filter_btrbk.sh) that restricts the remote user to only the btrfs subcommands needed for backup operations.

/etc/btrbk/btrbk.conf (SSH target)
ssh_identity            /etc/btrbk/ssh/id_ed25519
ssh_user                root

volume /mnt/btr_pool
  snapshot_dir          .btrbk_snapshots
  target                ssh://backup.example.com/mnt/backup/myhost

  subvolume             @
  subvolume             @home

On the remote host, restrict the SSH key in authorized_keys:

~root/.ssh/authorized_keys (on backup server)
command="/usr/share/btrbk/scripts/ssh_filter_btrbk.sh --target --delete --info",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... btrbk@myhost

Automate the whole thing with a systemd timer:

/etc/systemd/system/btrbk.timer
[Unit]
Description=btrbk periodic backup

[Timer]
OnCalendar=hourly
RandomizedDelaySec=300
Persistent=true

[Install]
WantedBy=timers.target
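
A systemd timer activates a service unit of the same name, so btrbk.timer needs a matching btrbk.service. A minimal sketch (some distributions ship equivalent units with the btrbk package):

```ini
/etc/systemd/system/btrbk.service
[Unit]
Description=btrbk periodic backup run
Documentation=man:btrbk(1)

[Service]
Type=oneshot
ExecStart=/usr/bin/btrbk run
```

Enable the pair with systemctl enable --now btrbk.timer.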

Understanding btrfs send/receive

The btrfs send and btrfs receive commands form the backbone of every Btrfs backup strategy. Understanding how they work gives you the flexibility to build custom workflows beyond what any tool provides.

A full send serializes an entire subvolume into a binary stream. An incremental send takes two snapshots and emits only the differences between them. The Btrfs kernel wiki describes the difference computation as a function of changed blocks tracked by the CoW mechanism across generations -- no directory traversal required. This means an incremental send of a 1 TB volume with 5 MB of changes takes seconds, not hours.

manual incremental backup with send/receive
# Initial full send (first-time bootstrap)
# btrfs subvolume snapshot -r / /.snapshots/root-day1
# btrfs send /.snapshots/root-day1 | btrfs receive /mnt/backup/

# Next day: take a new snapshot
# btrfs subvolume snapshot -r / /.snapshots/root-day2

# Incremental send -- only the delta since day1
# btrfs send -p /.snapshots/root-day1 /.snapshots/root-day2 | btrfs receive /mnt/backup/

# Over SSH to a remote host
# btrfs send -p /.snapshots/root-day1 /.snapshots/root-day2 | ssh root@backup btrfs receive /mnt/backup/

# Clean up the old snapshot on the source (keep the latest for the next delta)
# btrfs subvolume delete /.snapshots/root-day1
Note

Both the parent snapshot (-p) and the new snapshot must be read-only. The parent must exist on both the source and the destination for incremental send to work. If you delete the parent from either side before the next incremental run, you'll need to do a full send again. This is the one rule that catches everyone at least once.
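
You can check whether a usable parent exists before attempting an incremental send. btrfs receive records the source snapshot's UUID in the destination's Received UUID field, and the two must match:

```shell
# On the source: note the snapshot's UUID
btrfs subvolume show /.snapshots/root-day1 | grep -E '^[[:space:]]+UUID:'

# On the destination: the received copy must carry that value as Received UUID
btrfs subvolume show /mnt/backup/root-day1 | grep 'Received UUID'
```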

On the protocol front, kernel 6.0 introduced send stream version 2, which can transfer compressed extents as-is, without decompressing and recompressing on the receiving side. Use btrfs send --proto 2 --compressed-data to enable it. If both your source and target run kernel 6.0+ with btrfs-progs 6.0+, this can significantly speed up transfers of compressed volumes. A further protocol version 3, adding fsverity support, is in development behind the CONFIG_BTRFS_EXPERIMENTAL kernel option and is not yet considered stable.

Common Pitfalls and How to Avoid Them

Btrfs snapshots are powerful, but they come with sharp edges that can surprise you. Here are the scenarios that catch administrators off guard.

Running Out of Space with Snapshots

Because snapshots share data blocks via CoW, deleting files from the live filesystem doesn't free space if those blocks are still referenced by a snapshot. This is the number-one surprise for Btrfs newcomers. You rm -rf a 20 GB directory, df shows no change, and you're suddenly staring at ENOSPC. The fix is to delete old snapshots to release the shared references:

# btrfs filesystem usage /

This command (not df) gives you an accurate view of how space is allocated across data, metadata, and system chunks. Use btrfs filesystem du for per-directory exclusive vs. shared space breakdowns.

Metadata Bloat from Excessive Snapshots

Every snapshot carries its own metadata overhead. On a busy filesystem with frequent writes, each snapshot accumulates metadata for every CoW operation. A Fedora community discussion revealed that users were seeing hundreds of megabytes of metadata consumed by snapshots with very little actual data difference between them. The fix is a conservative retention policy -- 5 to 10 snapshots for root, cleaned up regularly.

Monitoring Snapshot Space with Quota Groups

If you need precise per-subvolume space tracking, Btrfs provides quota groups (qgroups). Enabling qgroups lets you see exactly how much space each snapshot exclusively consumes versus how much is shared with other snapshots. This is especially useful for tuning retention policies.

terminal
# Enable quota groups on the filesystem
# btrfs quota enable /

# Show per-subvolume space usage (referenced vs. exclusive)
# btrfs qgroup show /
Warning

Enabling qgroups adds metadata overhead on every write and can significantly slow down snapshot creation and deletion on busy systems. Many administrators enable qgroups temporarily to audit space usage, then disable them again. If you run a snapshot-heavy workflow, test the performance impact before leaving qgroups enabled permanently. Kernel 6.7 also introduced simple quotas (squotas) as a lighter-weight alternative, though that feature is still maturing.

Database and VM Files

CoW is actively harmful for workloads that frequently overwrite data in place, like database files and virtual machine disk images. Every overwrite creates a new copy of the affected blocks, fragmenting the data and consuming extra space. Disable CoW for these specific files or directories:

terminal
# Disable CoW on a directory (must be set before files are created)
$ sudo chattr +C /var/lib/postgresql/
$ sudo chattr +C /var/lib/libvirt/images/

# Verify the attribute
$ lsattr -d /var/lib/postgresql/
---------------C---- /var/lib/postgresql/

Alternatively, create a separate subvolume for these workloads, mark its root directory with chattr +C immediately after creation (note that the nodatacow mount option applies to the entire filesystem, not to individual subvolumes), and exclude it from your snapshot configuration entirely.
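
A sketch of the dedicated-subvolume approach (device and paths are illustrative). Setting chattr +C on the empty subvolume root means every file created inside inherits no-CoW, without relying on mount options:

```shell
# Create the subvolume at the top level
mount -t btrfs -o subvolid=5 /dev/sda2 /mnt
btrfs subvolume create /mnt/@vm_images
chattr +C /mnt/@vm_images      # new files inside inherit the no-CoW attribute
umount /mnt

# /etc/fstab entry -- mounted like any other subvolume
# UUID=...  /var/lib/libvirt/images  btrfs  subvol=@vm_images,noatime  0 0
```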

Swap Files and Snapshots Don't Mix

Btrfs added swapfile support in kernel 5.0, but swap files cannot live on a snapshotted subvolume. The CoW mechanism conflicts with the kernel's swap implementation. If you try to snapshot a subvolume containing an active swap file, you'll get an error like ERROR: cannot snapshot '/path': Text file busy (newer versions of btrfs-progs will clarify this as source subvolume contains an active swapfile). The solution is to place your swap file on its own dedicated subvolume that is excluded from all snapshot operations.
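
Recent btrfs-progs (6.1+) provide a helper that handles the no-CoW attribute, permissions, and preallocation in one step. A sketch with illustrative paths:

```shell
# Dedicated subvolume so no snapshot ever includes the swap file
btrfs subvolume create /swap
btrfs filesystem mkswapfile --size 8g /swap/swapfile   # btrfs-progs 6.1+
swapon /swap/swapfile

# On older btrfs-progs, perform the equivalent steps manually:
# truncate -s 0 /swap/swapfile
# chattr +C /swap/swapfile     # swap files must be no-CoW
# fallocate -l 8G /swap/swapfile
# chmod 600 /swap/swapfile
# mkswap /swap/swapfile
```

Add a swap entry for /swap/swapfile to /etc/fstab to make it persistent across reboots.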

Putting It All Together: A Real Recovery Scenario

Let's walk through a complete scenario. You're running an Ubuntu server with the subvolume layout described earlier. Snapper is taking hourly snapshots. You're about to upgrade from one LTS release to the next.

pre-upgrade procedure
# 1. Take an explicit pre-upgrade snapshot with a descriptive name
#    -c root = config name, -c number = cleanup algorithm
$ sudo snapper -c root create -d "pre-upgrade-to-noble" -c number

# 2. Also push a backup to your external disk
$ sudo btrbk run

# 3. Proceed with the upgrade
$ sudo do-release-upgrade

# ... upgrade fails halfway through ...

# 4. Don't panic. Find your pre-upgrade snapshot
$ sudo snapper list
 # | Type   | Date                     | Description
---+--------+--------------------------+----------------------------
12 | single | Sat 08 Feb 2026 14:30:00 | pre-upgrade-to-noble

# 5. Reboot into a live USB and perform the rollback
# (see the manual subvolume replacement procedure above)

# 6. After reboot, verify everything is back to normal
$ cat /etc/os-release
$ systemctl --failed
$ journalctl -b -p err

Total downtime: the time it takes to boot a live USB, type five commands, and reboot. The rollback operation itself -- replacing the root subvolume with a snapshot -- is instantaneous regardless of filesystem size. Your logs in /var/log survived because they're on a separate subvolume, so you can investigate what went wrong at your leisure.

Snapshots are not a replacement for backups. They protect against software errors, not hardware failures. A proper strategy uses snapshots for fast local rollback and btrfs send/receive for offsite replication to a physically separate disk or host.

One more piece of the reliability puzzle: run btrfs scrub periodically. Scrub reads all data and metadata blocks and verifies their checksums, detecting silent bit-rot that would otherwise go unnoticed until you try to read the affected file. Schedule it weekly or monthly via a systemd timer:

# btrfs scrub start /
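
A minimal timer/service pair for a monthly scrub (unit names are illustrative; many distributions ship ready-made btrfs-scrub@.service units with btrfs-progs):

```ini
/etc/systemd/system/btrfs-scrub.service
[Unit]
Description=Btrfs scrub of the root filesystem

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -B /

/etc/systemd/system/btrfs-scrub.timer
[Unit]
Description=Monthly btrfs scrub

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

The -B flag keeps scrub in the foreground so the oneshot service reflects its exit status; check results afterwards with btrfs scrub status /.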

Wrapping Up

Btrfs snapshots turn filesystem management from a high-stakes, irreversible operation into something you can experiment with confidently. The CoW architecture that Chris Mason designed back in 2007 makes snapshots functionally free to create, the subvolume model gives you granular control over what gets captured and what doesn't, and tools like Snapper and btrbk automate the tedious parts so you don't have to remember to snapshot before every upgrade.

The essential setup is straightforward: a flat subvolume layout with @, @home, @var_log, and @snapshots at the top level; Snapper or btrbk managing automated snapshot creation and cleanup; and a btrfs send/receive pipeline pushing incremental backups to a second disk or remote host. With that in place, even a catastrophic upgrade failure becomes a five-minute recovery.

The filesystem continues to improve rapidly. Recent kernel versions have added send protocol v2 for compressed transfer, and ongoing work on FSCRYPT encryption patches (currently in v6/v7 of the kernel patchset, not yet merged to mainline) promises native file-level encryption as an alternative to full-disk LUKS. Performance optimizations for metadata-heavy workloads continue with each release. The on-disk format has been declared stable since November 2013, and distributions from SUSE Linux Enterprise to Fedora Workstation now default to Btrfs. If you haven't yet explored what snapshots can do for your systems, there hasn't been a better time to start.