
The One Trick That Made Immutable Linux Easier For Me

March 30, 2026 at 06:54

If you’ve recently dipped your toes into the world of immutable Linux distributions like Fedora Silverblue, openSUSE MicroOS, or even the Steam Deck, you'll encounter this issue eventually.

You try to perform a basic task, like adding a custom script to /usr/bin or creating a global configuration directory, and the terminal throws an error: Read-only file system.

It’s a frustrating moment. You chose an immutable OS for the stability, the atomic updates, and the "unbreakable" nature of the system. But now you feel like a guest in your own house.

The traditional fixes, such as manually mounting an overlay filesystem or using rpm-ostree to layer packages, either require a reboot or involve complex manual management.

systemd-sysext was built specifically to solve this problem. This often-overlooked utility uses OverlayFS under the hood but adds compatibility checking, systemd integration, and a standardized format, allowing you to dynamically merge binaries and libraries into /usr at runtime, without touching the underlying read-only image and without a reboot.

Quick Look at Immutability

To understand why we need sysext, you first have to understand why the Linux world is moving toward immutability. In a traditional "mutable" distribution like Ubuntu or Arch, the root filesystem is a giant, writable scratchpad. Any process with root privileges can modify any file in /usr or /bin.

While this gives us total freedom, it’s also a major source of system drift. Over time, manual changes, conflicting libraries, and failed package installations make the system unpredictable.

Immutable distributions solve this by treating the operating system as a read-only image. When you update the system, you aren't just changing individual files; you are switching to a completely new, pre-verified version of the OS. This makes the system "atomic": it either works perfectly, or it rolls back to the previous version.

The Problem: Hitting the "Read-Only" Barrier

While immutability is great for stability, it’s a nightmare for "on-the-fly" troubleshooting. On a standard system, if I need to see why a network port is blocked, I might quickly install nmap or tcpdump. On an immutable system, I’m stuck.

You can see this in action by trying to manually add a file to your system binaries:

sudo touch /usr/bin/test_file
The "Read-only file system" error message.

Instead of creating a file, you’ll get a rejection message: touch: cannot touch '/usr/bin/test_file': Read-only file system. This proves that even with sudo, the core of your OS is locked.

To add a tool "the official way" (layering), you’d have to run a command like rpm-ostree install and then restart your computer. For a quick task, that's a massive interruption. And rpm-ostree is specific to Fedora's atomic spins; it won't work on non-Fedora atomic distros.

How System Extensions Actually Work

I like to think of systemd-sysext as a digital "overlay." Instead of fighting the read-only filesystem, we are going to build a small directory structure that contains our tools and tell the system to virtually "merge" it on top of the existing /usr.

This uses a kernel feature called OverlayFS. It takes your base (read-only) system as the "Lower" layer and your extension as the "Upper" layer. The result is a "Merged" view that the user interacts with. To your applications, it looks like the files were there all along.
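You can reproduce this merge by hand with a plain OverlayFS mount. The sketch below shows what sysext automates (the directory names are arbitrary, chosen for illustration); the directory setup works anywhere, but the mount itself needs root:

```shell
# A minimal sketch of the OverlayFS merge that systemd-sysext automates.
base=$(mktemp -d)
mkdir -p "$base/lower/bin" "$base/upper/bin" "$base/work" "$base/merged"
echo "shipped with the OS"    > "$base/lower/bin/base-tool"
echo "added by an extension"  > "$base/upper/bin/extra-tool"

if [ "$(id -u)" -eq 0 ]; then
    # lowerdir = the read-only base image, upperdir = the extension,
    # merged   = the combined view that applications actually see
    mount -t overlay overlay \
        -o "lowerdir=$base/lower,upperdir=$base/upper,workdir=$base/work" \
        "$base/merged" 2>/dev/null \
      && ls "$base/merged/bin" \
      && umount "$base/merged" \
      || echo "overlay mount not permitted in this environment"
fi
```

When the mount succeeds, both base-tool and extra-tool appear side by side in the merged directory, even though neither layer was modified. That is exactly the trick sysext plays on /usr.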

Step 1: Building Your First System Extension

You don't need complex build systems to create a system extension. At its simplest, a sysext is just a directory structure that mirrors the Linux root. Let's build a workspace for a custom tool.

First, mirror the Linux filesystem hierarchy:

mkdir -p my-tool-ext/usr/bin
mkdir -p my-tool-ext/usr/lib/extension-release.d/
Creating the base hierarchy.

Next, let's create a simple test tool. In a real-world scenario, you could drop a compiled binary like ncdu or htop here, but for this guide, a script works perfectly:

echo -e '#!/bin/sh\necho "Sysext is active on Fedora!"' > my-tool-ext/usr/bin/foss-tool
chmod +x my-tool-ext/usr/bin/foss-tool
Creating a custom shell script and setting execution permissions using chmod.

Step 2: The Metadata "Passport"

This is the most critical step and where you can get stuck. systemd-sysext acts as a gatekeeper. It will not merge an extension unless it knows exactly which OS version the extension is built for. To find out what your system expects, run:

grep -E '^ID=|^VERSION_ID=' /etc/os-release

Checking Fedora OS release version.

On my setup, this returns ID=fedora and VERSION_ID=43. If you are following along, make sure to replace these values with whatever your specific system reports. If you are on Silverblue 39, use those numbers.

Now, create the mandatory release file:

echo "ID=fedora" > my-tool-ext/usr/lib/extension-release.d/extension-release.my-tool-ext
echo "VERSION_ID=43" >> my-tool-ext/usr/lib/extension-release.d/extension-release.my-tool-ext
Creating extension release metadata file
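A small refinement, assuming the my-tool-ext layout from Step 1: instead of typing the values by hand, you can generate the release file straight from /etc/os-release, so it can never drift out of sync with the host:

```shell
# Generate the metadata from the running system itself, so ID and
# VERSION_ID always match what systemd-sysext expects.
mkdir -p my-tool-ext/usr/lib/extension-release.d
. /etc/os-release
printf 'ID=%s\nVERSION_ID=%s\n' "$ID" "$VERSION_ID" \
    > my-tool-ext/usr/lib/extension-release.d/extension-release.my-tool-ext
cat my-tool-ext/usr/lib/extension-release.d/extension-release.my-tool-ext
```

If you'd rather build an extension that merges on any distribution, the special value ID=_any in the release file skips the compatibility check entirely.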

Before we merge anything into the live system, it’s worth double-checking our work. A systemd-sysext image is essentially a mirror of your root directory, so the file hierarchy must be exact. You can verify your layout by running:

ls -R my-tool-ext
Verifying sysext directory structure.

You should see your binary sitting in usr/bin and your metadata 'passport' tucked away in usr/lib/extension-release.d/. If these aren't in the right place, the system simply won't know how to 'map' them during the merge.

Step 3: The "Merge" Moment

Now that our extension has its "passport" ready, we move it to the system's extension path and trigger the merge. This is the moment where the read-only barrier is bypassed:

sudo cp -r my-tool-ext /var/lib/extensions/
sudo systemd-sysext merge
Executing the systemd-sysext merge command on Fedora

Next, confirm the binary location. The system should see it as a standard system tool:

ls -l /usr/bin/foss-tool
Verifying the custom binary in /usr/bin.

You can verify the extension status, and run your new tool by typing foss-tool in the terminal:

systemd-sysext status
Checking the active systemd-sysext extensions.

Your system is still technically read-only, but you’ve successfully injected new functionality into it without a single reboot.

Troubleshooting: When the Merge Fails

One of the most common frustrations is seeing the error: No suitable extensions found (1 ignored due to incompatible image). This isn't a bug; it's a safety feature.

If your extension-release file says you are on Fedora 42 but you actually just upgraded to Fedora 43, systemd will block the merge.

It does this because libraries often change between versions, and merging an incompatible binary could cause system instability. If you hit this error, simply update your metadata to match your current os-release and re-run the merge.
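You can reproduce the gatekeeper's check by hand. The sketch below writes a deliberately stale metadata file into a hypothetical demo-ext directory (Fedora 42, matching the mismatch scenario above) and compares it with the running host:

```shell
# Write stale metadata, then compare it with the host's /etc/os-release.
mkdir -p demo-ext/usr/lib/extension-release.d
rel=demo-ext/usr/lib/extension-release.d/extension-release.demo-ext
printf 'ID=fedora\nVERSION_ID=42\n' > "$rel"

. /etc/os-release                      # sets ID and VERSION_ID for the host
ext_id=$(sed -n 's/^ID=//p' "$rel")
ext_ver=$(sed -n 's/^VERSION_ID=//p' "$rel")
if [ "$ext_id" = "$ID" ] && [ "$ext_ver" = "${VERSION_ID:-}" ]; then
    verdict="compatible"
else
    verdict="mismatch: extension says $ext_id $ext_ver, host is $ID ${VERSION_ID:-<none>}"
fi
echo "$verdict"
```

On any host that isn't Fedora 42, this prints a mismatch, which is precisely the condition that makes systemd-sysext refuse the merge.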

Reverting Without a Trace

The most powerful feature of systemd-sysext isn't just how easily it adds tools; it's how cleanly it removes them. Traditional package management often leaves behind config files or libraries that clutter your system over time.

With sysext, unmerging is a clean break:

sudo systemd-sysext unmerge
Running systemd-sysext-unmerge for system cleanup.

If you try to run your tool now, the shell will return a No such file or directory error. The overlay has been lifted, and your /usr directory is exactly as it was when you first installed the OS.

Why This Beats the Container Approach

A common question is: "Why not just use Distrobox?" Containers are amazing for general applications, but they run in an isolated namespace. If you are trying to debug a kernel issue or analyze hardware peripherals, that isolation can get in the way.

systemd-sysext puts the tool directly on the host. It has the same permissions and visibility as a tool shipped with the OS itself. If you need a tool to "be" the system rather than just "run on" the system, sysext is the surgical choice.

Conclusion

The move toward immutable Linux shouldn't feel like a move toward a "locked-down" experience. Tools like systemd-sysext prove that we can have our cake and eat it too. We get the security of a read-only core and the flexibility to inject any tool we need instantly.

What Are Btrfs Subvolumes? And Why They’re Better Than Traditional Linux Partitions

March 22, 2026 at 15:12

For many Linux users, partitioning is the most nerve-wracking part of installation. It’s that moment where you double-check everything, hoping you don’t wipe the wrong drive or end up with a layout you’ll regret later.

I like to think of a disk as a cabinet. The fixed “drawers” are partitions, and if one turns out to be too small, fixing it later means resizing, moving things around, and hoping nothing breaks in the process.

But what if partitions were not like the fixed drawers? What if they were like those adjustable shelves instead that could adapt as your needs change?

That’s exactly what Btrfs subvolumes bring to the table. Subvolumes are one of the most powerful features of the Btrfs filesystem: independently mountable directory trees that all share the same underlying disk pool.

I am going to discuss this subvolume feature specifically, and why it changes how you think about disk management. And once you get used to them, it's hard to go back.

The Problem with Traditional Partitioning

Typical disk layout with separate partitions

While the analogy is okay, let's see the real thing.

With a typical Ext4 setup, you decide everything upfront. Maybe you give 50GB to / and the rest to /home. It seems reasonable, until it isn’t.

A few months later, your root partition fills up thanks to Flatpaks, containers, or system updates. Meanwhile, your home partition might still have hundreds of gigabytes sitting unused. The system can’t borrow that space, even if it desperately needs it.

That’s the limitation of fixed partitions. Btrfs subvolumes were designed to solve exactly this.

A Smarter Approach with Subvolumes

Btrfs takes a different approach. Instead of splitting your disk into rigid chunks, it creates a shared storage pool.

Subvolumes act like partitions from the outside: you can mount one as root and another as home. But under the hood, they all draw from the same free space. There’s no need to resize anything. If one part of your system needs more storage, it simply uses what’s available.

This works because subvolumes are not separate block devices. They are namespaces within a single Btrfs filesystem. You get the organisational benefits of partitions without their rigidity.

This flexibility makes a huge difference in everyday use.

Check If You’re Already Using Btrfs Subvolumes

If you’re on Fedora or openSUSE, chances are you’re already using Btrfs. Still, you can confirm your filesystem type with:

findmnt -no FSTYPE /
Fedora uses Btrfs by default

And if you are using Btrfs, subvolumes are almost certainly already set up for you. To check your subvolume layout directly:

sudo btrfs subvolume list /
Default Btrfs subvolume layout

You’ll likely see entries like root and home. This is a common “flat layout,” where root holds your main system ( / ) and home holds your personal files.

Even though they appear separate, they’re just different subvolumes sharing the same disk space. You can confirm this by checking how they are mounted using the mount command:

mount | grep btrfs
Subvolumes mounted as root and home

This output shows how Btrfs subvolumes are mounted and used by the system. Both root (/) and home (/home) come from the same physical partition, confirming that no separate partitions are in use.
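This shared-partition arrangement is declared in /etc/fstab with the subvol= mount option. A sketch of what such entries typically look like (the UUID here is a placeholder; check yours with lsblk -f):

```
# Both entries reference the SAME Btrfs partition; only subvol= differs.
UUID=1234abcd-0000-0000-0000-000000000000  /      btrfs  subvol=root  0 0
UUID=1234abcd-0000-0000-0000-000000000000  /home  btrfs  subvol=home  0 0
```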

Snapshots: The Killer Feature of Subvolumes

Snapshots are where Btrfs really shines, and it's important to understand why.

In Btrfs, a snapshot is itself a subvolume. When you take a snapshot, you're asking Btrfs to create a new subvolume that initially shares all the same data as the original, without actually copying anything, so it happens almost instantly. Even on large systems, it doesn’t duplicate your data; it just records the current state.

First, create a directory for snapshots:

sudo mkdir /snapshots

Then take a snapshot of your root subvolume:

sudo btrfs subvolume snapshot / /snapshots/before-update
Creating a system snapshot takes less than a second

That’s it. You now have a complete snapshot of your system before making changes. If an update goes wrong, you have a fallback ready.

Now verify it:

sudo btrfs subvolume list /
New snapshot appearing in the list

You’ll see snapshots/before-update listed alongside the root and home subvolumes because it, too, is a subvolume, just one born from a snapshot operation.

💡
You can automate snapshot scheduling entirely through subvolumes using a tool like snapper. More on that in a future tutorial.

How Subvolumes Make Snapshots Efficient: Copy-on-Write

In the previous section, I mentioned that snapshots are nearly instant and take up almost no extra space when first created. This is possible because of how subvolumes use Copy-on-Write (CoW) internally.

You see, when two subvolumes, root and its snapshot in our example, share a block of data, neither actually holds a copy. They both point to the same underlying data. Only when one of them changes that data does Btrfs write the new version to a different location, updating the pointer for just that subvolume. The other subvolume keeps pointing to the original.

This is why:

  • Snapshots are created almost instantly, even on large systems
  • A fresh snapshot uses almost no additional disk space
  • The original data is never lost mid-write if something goes wrong
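The same CoW sharing is available per file, not just per subvolume. On Btrfs, cp --reflink creates a copy that shares the original's extents until one side is modified; --reflink=auto falls back to a regular copy on filesystems without reflink support, so the command below is safe to run anywhere:

```shell
# Create an 8 MB file, then clone it. On Btrfs the clone shares the
# original's data blocks (CoW); elsewhere --reflink=auto copies normally.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/original.img" bs=1M count=8 status=none
cp --reflink=auto "$dir/original.img" "$dir/clone.img"
cmp -s "$dir/original.img" "$dir/clone.img" && echo "identical contents"
```

On Btrfs, the clone completes instantly and consumes almost no additional space, for exactly the same reason a snapshot does.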

Understanding Disk Usage with Subvolumes

Disk space reporting can feel a bit confusing with Btrfs. Traditional tools like df -h don’t always show the full picture because snapshots share data.

For a clearer view, use:

sudo btrfs filesystem usage /
Accurate storage usage including metadata

If your disk seems unexpectedly full, old snapshots are often the reason. Because each snapshot is a subvolume holding references to data, deleting a file in your live system doesn't free space if an older snapshot still holds a reference to it.

Cleaning up old snapshot subvolumes is the solution and the process is straightforward:

sudo btrfs subvolume delete /snapshots/old-snapshot-name

The Downsides of Subvolumes

Subvolumes aren't without trade-offs.

Because every write in a subvolume goes through Copy-on-Write, write-heavy workloads like databases or virtual machine disk images can see a performance penalty. Over time, CoW writes can also lead to fragmentation within subvolumes.

For directories inside a subvolume where you want to opt out of CoW behaviour (and therefore lose snapshot coverage and data checksumming for those paths), you can disable it with:

sudo chattr +C /var/lib/libvirt/images

Keep in mind that the +C attribute only applies to files created after it is set; existing files keep their CoW behaviour.

There’s also some background maintenance, like balancing storage across the pool, but most modern distributions handle this automatically.

Conclusion

As you can see, Btrfs subvolumes change the fundamental model of how storage is organized. Instead of fixed partitions at install time, you define logical boundaries with subvolumes that all share the same pool and can be snapshotted individually, almost instantly.

The flexibility, the snapshots, and the efficient disk usage that come with Btrfs are all built on subvolumes: a shared pool, CoW data sharing, and per-subvolume state tracking. Once you understand how subvolumes work, you'll appreciate Btrfs even more.

And once you get used to working that way, going back to a traditional filesystem will feel nearly impossible. This is, after all, why Btrfs is the choice of many modern Linux systems.

New to Linux? These 4 systemd Tools Help You Fix Common Issues

March 1, 2026 at 15:38

If you’ve spent any time in the Linux community, you know that systemd is a hot topic. Some people love it because it handles everything; others wish it didn't! But here’s the reality: almost every major Linux distribution (like Ubuntu, Fedora, and Debian) uses it today.

Think of systemd as the "manager" of your computer. When something goes wrong, like your Wi-Fi won't connect or a creative app keeps crashing, systemd is the one with all the answers.

But where to find those answers? systemd has built-in tools that help you troubleshoot issues with your system. If you’re just starting your Linux journey, I recommend exploring the following four tools.

📋
If you are unsure, please check if your Linux system uses systemd.

1. Systemctl

In Linux, background apps are called services. If your system is not accepting SSH connections, you use systemctl to see what’s happening under the hood.

I mean, before you try to fix something, you need to know if it's actually broken.

sudo systemctl status ssh

This is the most important command in a Linux user’s toolkit. When you run it, pay attention to the Active line:

  • Active (running): Everything is great!
  • Inactive (dead): The service is off. Maybe it crashed, or maybe you never turned it on.
  • Failed: This is the red flag. systemd will usually give you a "Main PID" (Process ID) and a reason for the failure right there in the terminal.
Systemctl status output, showing process IDs and recent log events for a running service.

The "turn it off and on again" trick

We’ve all heard it: "Have you tried restarting it?" In Linux, you restart a service with systemctl.

  • Kickstart a failed one: sudo systemctl start ssh
  • Stop a lagging service: sudo systemctl stop ssh
  • Reset: sudo systemctl restart ssh
  • Disable a service (to speed up boot): sudo systemctl disable ssh
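These commands combine naturally into small scripts. Here's a sketch of a defensive health check (the ssh unit name is a placeholder; on Fedora, the unit is called sshd) that only suggests a restart when the service is genuinely down, and degrades gracefully on systems without systemd:

```shell
# Report a unit's state and suggest a restart only if it is actually down.
svc=ssh   # placeholder unit name; substitute the service you care about
if ! command -v systemctl >/dev/null 2>&1; then
    status_msg="systemd is not available on this system"
elif systemctl is-active --quiet "$svc"; then
    status_msg="$svc is running"
else
    status_msg="$svc is not running; try: sudo systemctl restart $svc"
fi
echo "$status_msg"
```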

2. Journalctl

When an app crashes, it doesn't just vanish. It usually screams an error message into the void. Journalctl is the tool that catches those screams and saves them in a "journal" for you to read later.

Unlike old-school Linux logs (which were scattered across dozens of text files), systemd keeps everything in one central, indexed binary journal.

Filtering the noise

If you just type journalctl, you’ll see thousands of lines of logs, most of it boring stuff like "System time updated." To be a good detective, you need to filter:

journalctl -xe
  • -x: Adds "catalog" info (it explains the errors in plain English).
  • -e: Jumps straight to the end of the log so you see the newest stuff first.
Using the journalctl -xe to jump to the most recent system logs

Targeting a Specific App: If you want to check the issue with a specific app, don't read everything, just read the entries for that specific app:

journalctl -u ssh
Filtering system logs for a specific unit using the -u flag

Time Travel: Did your computer freeze two hours ago? You can ask the journal to show you only that time frame:

journalctl --since "2 hours ago"
Using time filters in journalctl allows you to pinpoint exactly what happened during a system freeze
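These filters stack, which is where journalctl gets really powerful. The sketch below pulls errors (and worse) from a single unit over the last two hours, newest first; the ssh unit name is a placeholder, and the guard makes it safe to run on systems without systemd:

```shell
# -u: one unit, -p err: priority 'err' and above, -r: newest entries first.
if command -v journalctl >/dev/null 2>&1; then
    result=$(journalctl -u ssh -p err --since "2 hours ago" \
                 --no-pager -r 2>/dev/null | head -n 20)
    result=${result:-"no matching log entries"}
else
    result="journalctl not available here"
fi
echo "$result"
```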

3. Systemd-analyze

Is your computer taking forever to start up? Instead of guessing which app is slowing you down, you can ask systemd-analyze to blame the culprit. This tool measures every millisecond of your boot process and tells you exactly which service is holding things up.

Run this command to see a ranked list of the slowest-starting apps:

systemd-analyze blame

You might find that a "Modem Manager" you don't even use is taking 2 minutes to start. This gives you the power to disable it and save time every time you turn on your PC.

The blame command identifies exactly which services are slowing down your Linux boot time
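Before assigning blame, it's worth grabbing the headline number. systemd-analyze time splits total boot time into stages such as kernel and userspace. A guarded sketch, with a fallback for containers where boot timing isn't recorded:

```shell
# Print overall boot timing; degrade gracefully where it isn't available.
if command -v systemd-analyze >/dev/null 2>&1; then
    timing=$(systemd-analyze time 2>/dev/null) \
        || timing="boot timing not recorded (e.g. inside a container)"
else
    timing="systemd-analyze not available here"
fi
echo "$timing"
```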

Additionally, some apps can't start until other apps are finished. If App A waits for App B, it creates a chain.

systemd-analyze critical-chain

This shows you the path systemd took to reach your desktop. If one link in the chain is slow, the whole system feels sluggish. You can learn more about optimizing Linux boot speed in our dedicated guide.

The critical-chain command reveals the 'relay race' of your system startup.
🚧
systemctl status, journalctl, and systemd-analyze are 100% safe. They are "read-only." However, be careful with sudo systemctl stop. If you stop a service like dbus or systemd-logind, your screen might freeze or you might get logged out!

4. Coredumpctl

Sometimes, an app doesn't just have an error, it crashes completely. In programmer terms, it "dumped its core." This means the app threw its entire memory onto the floor and quit.

Coredumpctl is like a forensic investigator. It lets you look at that memory "snapshot" to see what the app was doing right before it died.

Listing the crashes

To see a table of every app that has crashed on your system recently, use:

coredumpctl list
The coredumpctl list command displays a table of recorded application crashes, including time, PID, and executable name.

The "Detective Report"

If you see that your favorite app crashed, you can get the full report by using its Process ID (PID) from the list:

coredumpctl info [PID]

This will show you things like the "Signal" (usually SIGSEGV, which means the app tried to touch memory it wasn't allowed to) and a "Stack Trace" (the last few functions the app ran).

💡
Some distributions (like minimal Debian installs) might not have coredumpctl installed by default. You can usually get it by running sudo apt install systemd-coredump.
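Once a crash shows up in the list, you can also extract the raw core file for offline analysis in a debugger. A guarded sketch; when no PID is given, coredumpctl operates on the most recent matching crash:

```shell
# Save the most recent recorded core dump to a file for later gdb analysis.
if command -v coredumpctl >/dev/null 2>&1; then
    if coredumpctl dump -o /tmp/last-core >/dev/null 2>&1; then
        msg="core written to /tmp/last-core"
    else
        msg="no core dumps recorded yet"
    fi
else
    msg="coredumpctl not installed (try: sudo apt install systemd-coredump)"
fi
echo "$msg"
```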

Conclusion

By using these systemd tools, you’ve moved past the "I'll just reboot and hope for the best" stage. You can now see the status of your apps, read their logs, speed up your boot time, and investigate crashes like a seasoned Linux user.

Next time something feels "off" on your Linux machine, don't panic. Just remember: systemctl for the status, journalctl for the logs, systemd-analyze for the speed, and coredumpctl for the crash.
