Privacy Messenger Session Is Staring Down a 90-Day Countdown to Obscurity

If you care about privacy and don't take too well to governments and Big Tech companies snooping on your messages, then Session has probably come up at some point. It's a free, open source, end-to-end encrypted messaging app that doesn't ask for your phone number or email to sign up.

Messages are routed through an onion network rather than a central server, and the combination of no-metadata messaging, anonymous sign-up, and decentralized architecture has earned it a loyal following among privacy-conscious users.

Unfortunately, the project has sent out a mayday call as it risks closure.

A call for help

Your donations have helped, and the Session Technology Foundation (STF) has received enough funding to support critical operations for 90 days.

This means that Session will remain available on the app stores and essential services (such as the file server and push notification…

— Session (@session_app) April 9, 2026

The Session Technology Foundation (STF) sent out what can only be described as a distress signal, announcing that the app's survival is now in serious peril. The day it was posted was also the last working day for all paid staff and developers at the STF.

From that point on, Session is being kept running entirely by volunteers.

The donations received earlier are enough to keep critical infrastructure online until July 8, but nowhere near enough to retain a development team. With nobody left on payroll, development has been paused.

As a result, introducing new features is off the table, existing bugs will most likely go unaddressed, and the STF says new releases are unlikely during this period.

Session co-founder Chris McCabe had already flagged the coming trouble. In a personal appeal published back in March, he wrote that the organizations safeguarding Session had faced many challenges over the years and that the project's very survival was now at risk.

He concluded with an appeal:

The project is on a path to self-sustainability, but the future is fragile. If every Session user contributed just one dollar, it would go a long way towards Session reaching sustainability. If you've ever considered donating, now is the time to act.

That appeal wasn't enough to change the outcome, so the Session folks had to sound the alarm. The foundation says it needs $1 million to complete the work still in progress.

That includes Protocol v2, which adds forward secrecy (PFS), post-quantum cryptography, and improved device management, as well as Session Pro, a subscription tier intended to put the project on a self-sustaining footing.

If that goal is hit, the STF says it hopes Session could stand on its own without needing to go back to the community for more.

As of writing, $65,000 of that $1 million has been raised. Anyone who wants to see this privacy-focused messaging app survive, especially at a time when surveillance is only getting worse, can donate at getsession.org/donate.


Suggested Read 📖: Session's Other Co-Founder Thinks You Don't Need to Ditch WhatsApp

  •  

Good News! France Starts Plan to Replace Windows With Linux on Government Desktops

France's national digital directorate, DINUM, has announced (in French) it is moving its workstations from Windows to Linux. The announcement came out of an interministerial seminar held on April 8, organised jointly by the Directorate General for Enterprise (DGE), the National Agency for Information Systems Security (ANSSI), and the State Procurement Directorate (DAE).

The Linux switch is not the only move on the table. France's national health insurance body, CNAM, is migrating 80,000 of its agents to a set of homegrown tools: Tchap for messaging, Visio for video calls (more on this later), and France transfert for file transfers.

The country's national health data platform is also set to move to a sovereign solution by the end of 2026.

Beyond the immediate moves, the seminar laid out a broader plan. DINUM will coordinate an interministerial effort built around forming coalitions between ministries, public operators, and private sector players, with interoperability standards at the core (the Open Interop and Open Buro initiatives are specifically named).

Every French ministry, including public operators, will be required to submit its own non-European software reduction plan by Autumn 2026.

The plan is expected to cover things like workstations, collaboration tools, antivirus, AI, databases, virtualization, and network equipment. A first set of "Industrial Digital Meetings" is planned for June 2026, where public-private coalitions are expected to be formalized.

Speaking on this initiative, Anne Le Hénanff, Minister Delegate for Artificial Intelligence and Digital Affairs, added that (translated from French):

Digital sovereignty is not optional — it is a strategic necessity. Europe must equip itself with the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable, and sustainable solutions.
By reducing our dependence on non-European solutions, the State sends a clear message: that of a public authority taking back control of its technological choices in service of its digital sovereignty.

You might remember, a few months earlier, France set out on a similar path for video conferencing. The country mandated that every government department switch to Visio, its homegrown, MIT-licensed alternative to Teams and Zoom, by 2027.

Part of the broader La Suite Numérique initiative, it had already been tested with 40,000 users across departments before the mandate was announced. So this move looks like an even more promising one, and we shall keep an eye on how this pans out.


Suggested Read 📖: ONLYOFFICE Gets Forked

  •  

Is a Clanker Being Used to Carry Out AI Fuzzing in the Linux Kernel?

With the rise of AI and humanoid robots, the word "clanker" has come into use as a label for such systems, and not without reason. In their current state, they are quite primitive, and while they can produce something resembling human intelligence, they still can't match what nature cooked up.

Now that terminology has made its way into the Linux kernel thanks to Greg Kroah-Hartman (GKH), the Linux stable kernel maintainer and the closest thing the project has to a second-in-command.

He has been quietly running what looks like an AI-assisted fuzzing tool on the kernel that lives in a branch called "clanker" on his working kernel tree. Before you ask, fuzzing is a method of automated software testing that bombards code with unexpected, malformed, or random inputs to trigger crashes, memory errors, and other misbehavior.

It is a critical line of defense for a massive codebase like Linux.
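The core idea fits in a few lines of shell. This toy loop (my own illustration, not GKH's tooling) throws random bytes at a target program and counts how many inputs it chokes on; here `base64 -d` stands in for the real code under test, since random bytes are almost never valid base64:

```shell
# Toy fuzz loop: feed random inputs to a target and watch for failures.
# `base64 -d` is a stand-in for the real program under test.
rejected=0
for i in $(seq 1 50); do
    # Generate a 16-byte random input
    head -c 16 /dev/urandom > /tmp/fuzz_input
    # Run the target; count any non-zero exit as a rejected/failed input
    if ! base64 -d /tmp/fuzz_input > /dev/null 2>&1; then
        rejected=$((rejected + 1))
    fi
done
echo "rejected inputs: $rejected/50"
```

A real kernel fuzzer such as syzkaller generates structured sequences of syscalls rather than raw bytes, and watches for crashes and sanitizer reports instead of exit codes, but the feed-mutate-observe principle is the same.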

How it started

A post by Greg Kroah-Hartman that lays out how he is exercising some new fuzzing tools.

It began with the ksmbd and SMB code. GKH filed a three-patch series after running his new tooling against it, describing the motivation quite simply. He picked that code because it was easy to set up and test locally with virtual machines.

What the fuzzer flagged were potential problems specific to scenarios involving an "untrusted" client. The three fixes that came out of it addressed an EaNameLength validation gap in smb2_get_ea(), a missing bounds check that required three sub-authorities before reading sub_auth[2], and a mechToken memory leak that occurred when SPNEGO decode fails after token allocation.

GKH was very direct about the nature of the patches, telling reviewers: "please don't trust them at all and verify that I'm not just making this all up before accepting them."

These pictures show the Clanker T1000 in operation.

It does not stop there. The clanker branch has since accumulated patches across a wide range of subsystems, including USB, HID, WiFi, LoongArch, networking, and more.

Who is GKH?

If you are not well versed in the kernel world, GKH is one of the most influential people in Linux development.

He has been maintaining the stable kernel branch for quite a while now, which means every long-term support kernel that powers servers, smartphones, embedded devices, and pretty much everything else running Linux passes through his hands.

He also wrote Linux Kernel in a Nutshell back in 2006, which is freely available under a Creative Commons license. It remains one of the more approachable references for anyone trying to understand kernel configuration and building, and it is long overdue for a new edition (hint hint).

Linus has been thinking about this too

Speaking at Open Source Summit Japan last year, Linus Torvalds said the upcoming Linux Kernel Maintainer Summit will address "expanding our tooling and our policies when it comes to using AI for tooling."

He also mentioned running an internal AI experiment where the tool reviewed a merge he had objected to. The AI not only agreed with his objections but found additional issues to fix.

Linus called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.

AI should assist, not replace

There is an important distinction worth making here. What GKH appears to be doing is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted.

If that's the case, then this is the sensible approach, and it mirrors what other open source projects have been formalizing. LLVM, for instance, adopted a "human in the loop" AI policy earlier this year, requiring contributors to review and understand everything they submit, regardless of how it was created.


Suggested Read 📖: Greg Kroah-Hartman Bestowed With The European Open Source Award

  •  

Microsoft Locked Out VeraCrypt, WireGuard, and Windscribe from Pushing Windows Updates

Microsoft has had a complicated relationship with the open source world. VSCode, TypeScript, and .NET are all projects it created, and its acquisition of GitHub put it in charge of the world's largest code hosting platform.

But it is also the same company that bakes telemetry into Windows by default and has been aggressively pushing Copilot AI into every corner of its software. That last part especially has been nudging a growing number of people toward open alternatives.

And now, a wave of developer account suspensions has given some open source developers a new headache.

What's happening?

A forum post by Mounir Idrassi discussing the unfair suspension of the Microsoft account he used to sign Windows drivers and the bootloader.

Microsoft rolled out mandatory account verification for all partners enrolled in the Windows Hardware Program who had not completed verification since April 2024. The requirement kicked in on October 16, 2025, giving partners 30 days from notification to verify their identity with a government-issued ID.

Plus, that ID has to match the name of the Partner Center primary contact. Miss the deadline or fail verification, and your account gets suspended with no further submissions allowed.

This matters because signing Windows kernel drivers requires one of these accounts. Without it, developers cannot ship signed driver updates for Windows, and Windows will flag unsigned drivers, blocking them from loading at the kernel level.

Three major open source projects found this out the hard way. VeraCrypt, WireGuard, and Windscribe all had their developer accounts suspended, cutting off their ability to ship updates on Windows.

It appears @Microsoft is actively suspending developer accounts with no warning or reason of various security tools like VeraCrypt, WireGuard and also Windscribe. We've had this VERIFIED account for 8+ years to sign our drivers.

We've been trying to resolve this for over a… https://t.co/iwkryuwKuO pic.twitter.com/7VcnAQIbnP

— Windscribe (@windscribecom) April 8, 2026

VeraCrypt developer Mounir Idrassi was the first to go public. In a SourceForge forum post, he wrote that Microsoft had terminated his account with no prior warning, no explanation, and no option to appeal.

Repeated attempts to reach Microsoft through official channels got him nothing but automated replies. The suspension hit his day job too, not just VeraCrypt.

WireGuard creator Jason Donenfeld hit the same wall a couple of weeks later, when he went to certify a new WireGuard kernel driver for Windows and found his account showing as access restricted. He eventually tracked down a Microsoft appeals process, but it carried a 60-day response window.

Windscribe's situation was arguably the messiest. The company says it had held a verified Partner Center account for over eight years and spent more than a month trying to sort things out before going public.

Moreover, once an account is suspended, Partner Center blocks users from opening a support ticket directly.

What now?

This eventually got Microsoft's attention, as Scott Hanselman, VP and Member of Technical Staff at Microsoft and GitHub, stepped in on X to say the accounts would be fixed. He pointed to the October 2025 blog post (linked earlier) and said the company had been sending emails to affected partners since then.

Scott confirmed he had personally reached out to both Mounir and Jason to get their accounts unblocked, and that fixes were already in progress.

Anyway, this doesn't look good, and leaving developers of critical security software without recourse for weeks only erodes trust. But, in the end, this won't really affect a behemoth like Microsoft, which has a dominant hold on the operating system market.


Suggested Read 📖: Proton Workspace and Meet launched as alternatives to Big Tech offerings

  •  

I Tried Apt Command's New Rollback Feature — Here’s How It Went

APT, or Advanced Package Tool, is the package manager on Debian and its derivatives like Ubuntu, Linux Mint, and elementary OS. On these, if you want to install something, remove it, or update the whole system, you do it via APT.

It has been around for decades, and if you are on a Debian-based distro, then you have almost certainly used it without giving it much thought. That said, it has seen active development in the last couple of years.

We covered the APT 3.0 release this time last year, which kicked off the 3.x series with a colorful new output format, the Solver3 dependency resolver, and a switch from GnuTLS and GnuPG to OpenSSL and Sequoia for cryptographic operations.

The 3.1.x cycle that followed has now closed out with APT 3.2 as the stable release, and it brings some notable changes with it.

What do you get with APT 3.2?

A terminal window showing the output of apt --help: the version number, a brief description of apt, and a list of the most used apt commands.

The biggest additions with this release are transaction history with rollback support, some new commands, and per-repository package filtering.

APT now keeps a log of every package install, upgrade, and removal. You can view the full list with apt history-list, which shows all past operations with an ID assigned to each. To see exactly what packages were affected in a specific operation, you can use apt history-info <ID>.

From there, apt history-undo <ID> can be used to reverse a specific operation, reinstalling removed packages or removing installed ones as needed. If you undo something mistakenly and want it back, run apt history-redo <ID> to reapply it.

For cases where you want to revert everything back to the state at a particular point, apt history-rollback <ID> does that by undoing all operations that happened after the specified ID. Use this with care, as it makes a permanent change.

apt why and apt why-not are another set of new additions that let you trace the dependency chain behind a package. Run apt why <package> and APT will tell you exactly what pulled it onto your system. Run apt why-not <package> and it will tell you why it is not installed.
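Strung together, a typical session with these commands looks something like this (the transaction IDs and package names here are illustrative):

```shell
# List past transactions with their IDs
sudo apt history-list

# Show exactly what transaction 4 changed
sudo apt history-info 4

# Reverse transaction 4, then reapply it
sudo apt history-undo 4
sudo apt history-redo 4

# Undo everything that happened after transaction 3
sudo apt history-rollback 3

# Trace why a package is (or isn't) on the system
apt why vim
apt why-not nala
```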

Similarly, Include and Exclude are two new options that let you limit which packages APT uses from a specific repository. Include restricts a repo to only the packages you specify, and Exclude removes specific packages from a repo entirely.
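As a rough sketch, assuming these land as fields in a deb822-style .sources entry (the exact field placement and syntax is worth checking against the sources.list man page on your system), a repository limited to just a couple of packages might look like:

```
Types: deb
URIs: https://deb.example.org/debian
Suites: sid
Components: main
Include: vim nala
```

Swap `Include:` for `Exclude:` to do the inverse: serve everything from the repo except the listed packages.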

Solver3, which shipped as opt-in with APT 3.0, is now on by default. It also gains the ability to upgrade packages by source package, so all binaries from the same source are upgraded together.

Additionally, your system will no longer go to sleep while dpkg is running mid-install, and JSONL performance counter logging is also in, though that is mostly useful for developers.

If all of that's got you interested, then you can try APT 3.2 on a Debian Sid installation as I did below, or wait for the Ubuntu 26.04 LTS release, which is reportedly shipping it.

How to use rollback in APT?

I almost got lost in the labyrinth of Vim, unable to exit.

After installing some new programs using APT, I tested a few commands to see how rollback and redoing transactions worked. First, I ran sudo apt history-list in the terminal and entered my password to authorize the command.

The output was a list of APT transactions that included the preparatory work I had done to switch to Debian Sid from Stable, as well as the two install commands to get Vim and Nala installed.

Next, I ran sudo apt history-info 4, the number being the ID of the transaction, and I was shown all the key details related to it, such as the start/end times, the requesting user, the command used, and the packages changed.

After that, I ran sudo apt history-undo 4 to revert the Vim installation and sudo apt history-redo 4 to restore the installation; both of these commands worked as advertised.

Finally, I tested sudo apt history-rollback 3 to get rid of Nala, and the process was just about the same as before, with me being asked to confirm changes by typing "Y".

When I tried to run apt history-redo for this one, the execution failed as expected.


💬 Do these new additions look useful to you? Can't be bothered? Let me know below!

  •  

Anthropic Just Handed Apache $1.5M to Secure the Open Source Stack AI Depends On

Anthropic has handed the Apache Software Foundation (ASF) a $1.5 million donation. The money is earmarked for build and security infrastructure, project services, and community support.

If you have used the internet today, you have almost certainly touched something the ASF maintains. Projects like Kafka, Spark, Cassandra, and the Apache HTTP Server are not niche tools but a critical part of modern IT infrastructure.

The ASF does not sell anything. It runs on donations, and without sustained funding, the infrastructure behind all of that software does not maintain itself.

Anthropic's framing for the donation is essentially that AI runs on this stuff and someone has to fund it. As AI development moves forward more quickly, the open source foundations underneath it need to be in good shape to keep up.

On the topic, Ruth Suehle, President of the Apache Software Foundation, added that:

Open source software is the foundation of modern digital life — largely in ways the average person is completely unaware of — and ASF projects are a critical part of that. When it works, nobody notices, and that’s exactly the goal.
But that kind of reliability isn’t a given. It is the result of sustained investment in neutral, community-governed infrastructure by each part of the ecosystem. Support like Anthropic’s helps ensure long-term strength, independence, and security of the systems that keep the world running.

Similarly, Vitaly Gudanets, Chief Information Security Officer at Anthropic, said that:

AI is accelerating rapidly, but it’s built on decades of open source infrastructure that must remain stable, secure, and independent. Supporting the Apache Software Foundation is a direct investment in the resilience and integrity of the systems that modern AI — and the broader software ecosystem — depend on.

Some thoughts

You might remember Anthropic was part of a similar donation campaign back in March, when the Linux Foundation announced $12.5 million in grants to strengthen open source software security. Anthropic was one of seven contributors to that pool, alongside AWS, Google, Google DeepMind, GitHub, Microsoft, and OpenAI.

That funding was managed by Alpha-Omega and the Open Source Security Foundation (OpenSSF), with the goal of helping open source maintainers deal with the growing flood of AI-generated vulnerability reports they simply do not have the bandwidth to handle.

It is great to see open source receiving monetary support, but the smaller players who are equally important in the ecosystem also need to be supported better. Big donations like this tend to flow toward well-established foundations, while the countless smaller projects that hold up just as much critical infrastructure quietly struggle for resources.

  •  

PyTorch Foundation Expands Its Open Source AI Portfolio With Helion and Safetensors

The PyTorch Foundation has taken on two new projects: Helion, a tool for writing machine learning kernels contributed by Meta, and Safetensors, a secure model file format contributed by Hugging Face.

Both were announced at PyTorch Conference Europe in Paris, and the two now join DeepSpeed, Ray, vLLM, and PyTorch itself as foundation-hosted projects.

Moreover, the foundation has confirmed that ExecuTorch, Meta's solution for running PyTorch models on edge and on-device environments, is being merged into PyTorch Core.

If you were looking for the why, it is fairly straightforward.

Both moves come as AI teams increasingly focus on getting models into production rather than just training them. Running kernels efficiently across different hardware and keeping model files safe to load are two problems the ecosystem has been dealing with for a while.

Talking about Helion joining up, Matt White, the CTO of the PyTorch Foundation, added that:

Helion gives engineers a much more productive path to writing high-performance kernels, including autotuning across hundreds of candidate implementations for a single kernel.

As part of the PyTorch Foundation community, this project strengthens the foundation for an open AI stack that is more portable and significantly easier for the community to build on.

Luc Georges, Chief Open Source Officer at Hugging Face, echoed similar excitement:

Safetensors joining the PyTorch Foundation is an important step towards using a safe serialization format everywhere by default. The new ecosystem and exposure the library will gain from this move will solidify its security guarantees and usability. Safetensors is a well-established project, adopted by the ecosystem at large, but we're still convinced we're at the very beginning of its lifecycle: the coming months will see significant growth, and we couldn't think of a better home for that next chapter than the PyTorch Foundation.

What does this mean?

The PyTorch Foundation is a Linux Foundation-hosted organization that acts as the vendor-neutral home for PyTorch and a growing set of open source AI projects. The main goal here is to keep governance and technical direction community-driven rather than tied to any single company's whims.

The Linux Foundation is the broader stewardship body behind over 1,000 open source projects, covering everything from the Linux kernel and Kubernetes to OpenSSF. The PyTorch Foundation sits under that umbrella, giving its projects access to LF's governance infrastructure and oversight.

Helion comes in as a tool that makes writing the low-level code that runs AI models on GPUs significantly less painful. It handles a lot of the tedious groundwork automatically, and finds the best configuration for your hardware on its own.

Safetensors, meanwhile, is a file format for storing and sharing AI model weights that doesn't carry the security baggage of older formats, such as Python's pickle, which can execute arbitrary code when a model is loaded.

  •  

Glass UI Is Making a Comeback on Linux Thanks to KDE Contributors

KDE Plasma's two classic themes, Oxygen and Air, are making a comeback. A group of KDE contributors is actively restoring both ahead of the Plasma 6.7 release, which is scheduled for June 16, 2026.

Both themes trace their roots back to the KDE 4 era. Oxygen shipped as the default theme from KDE 4.0, defined by its dark tones and glassy aesthetic. It held that spot until KDE 4.3, when Air took over as the default, bringing a lighter look built around transparency and white as its base color.

While Oxygen stuck around into the Plasma 5 and 6 eras, it did so in an increasingly broken condition, and Air eventually got dropped from Plasma entirely.

Now, both are getting a second shot thanks to the restoration effort led by KDE contributor Filip Fila, alongside the original Oxygen designer Nuno Pinheiro and several other KDE developers.

On the Oxygen side, the panel has been fully reworked and is now orientation-aware, so vertical panels actually behave correctly. A minimized window indicator and a proper switch design were both missing entirely and have now been added.

Similarly, adaptive opacity is now supported and enabled by default, and the color scheme bug that was causing readability issues in widgets like System Monitor has been fixed.

Air needed its transparency restored to match its original KDE 4 character. That is done now, with blur added behind widgets, improving readability and visual appeal in the process. The panel has also been reworked, a new header and footer design has been added, and Air now has its own switch SVGs.

Why now after all this time? Well, KDE's 30th anniversary coincides with the Plasma 6.7 release, and the people behind this want to ship these historically significant themes for the occasion.

As of writing, 26 of 40 checklist items have been completed (linked below), with some pending work including gradient banding fixes in Oxygen, missing SVGs for checkmarks, radio buttons, toolbar, and menubar items across both themes, and a timer SVG for Air.

And if you want to see what that progress looks like, continue reading! 😬

How do these compare to Breeze?

From left to right, we have Breeze, Oxygen, and Air.

I checked out how Plasma's default Breeze theme compared to Oxygen and Air on a KDE Neon setup, and I must say, things are looking promising. The themes have things like the panel styling, widget backgrounds, and the new switch designs in place.

I specifically took a look at the panel and widgets, and these looked very clean, feeling like they belonged in the modern Plasma experience, which is not something you would expect from themes this old.

One thing worth noting is that the icons stayed as Breeze regardless of which Plasma Style I picked.

As for the difference between them, Breeze is flat by design: minimal, no frills, the gets-out-of-your-way kind. Oxygen and Air are not like that, bringing visible depth and some bling to the desktop, but in different ways.

While Air leans hard into transparency, making panels and widgets look light and barely there, Oxygen goes the other direction with darker gradients and more visual weight across the board.

Personally, I prefer Oxygen as it looks a lot like Windows 7's Aero, which I quite liked back in the day.

You can try these out too!

This screenshot shows how to install custom themes on a KDE Plasma system. Two app windows are visible: a file manager and the Plasma Style menu under System Settings.

First, you have to download the files for Oxygen and Air on a KDE Plasma-equipped system. Next, you have to go into System Settings > Appearance > Colors & Themes > Plasma Style.

Here, click on "Install from File..." and select a theme file to install it. Repeat for the other one, then select whichever theme you want and hit "Apply" on the bottom-right.
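If you prefer the terminal, the same install can likely be done with KDE's package tool, assuming the themes are distributed as standard Plasma theme archives (the filenames below are placeholders):

```shell
# Install downloaded Plasma Style archives (filenames are placeholders)
kpackagetool6 --type Plasma/Theme --install ./oxygen-restored.tar.gz
kpackagetool6 --type Plasma/Theme --install ./air-restored.tar.gz

# List installed Plasma Styles to confirm they show up
kpackagetool6 --type Plasma/Theme --list
```

Either way, the themes should then appear in the Plasma Style picker described above.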

If you want to stay in sync with the development of these, you can keep an eye on the GitLab issue tracker and Telegram group for this project.

  •  

Opera GX on Linux is for Gamers Who Put Stickers on Their Laptop

I have been gaming (not to be confused with gambling ☠️) for quite some time now. In that time, I have seen my fair share of gaming-centric platforms, storefronts, and applications, ranging from the genuinely useful to the elaborate solutions to problems that nobody really had.

Opera is a name that never fully disappeared from my mind because it has been consistent in delivering a browsing experience that many people prefer. I used it for a while, mostly for the free built-in VPN, before eventually moving to Firefox when I felt it was time for a change.

However, they also have a gaming-focused browser called Opera GX, which has been available on Windows and macOS for some time now. Earlier this year, we got word that a Linux port was in the works, and it did eventually arrive.

Curious about what took them so long, I asked Maciej Kocemba, Product Director at Opera GX, and he had this to add:

Bringing Opera GX to Linux has been a priority for us for some time now, especially since we've seen such public support among the community. One group even launched an online petition that collected several hundred signatures, which was pretty cool to see.
With gaming on Linux growing so fast right now, this is the perfect moment for us to bring a browser designed for gamers to a platform that values customization and control as much as we do. We’re so happy to finally make it available to this community of users, and we're eager to see how they'll take advantage of the GX features they’ve been waiting for.

That got me hyped enough to see for myself what a gaming browser actually feels like and whether there was anything a regular browser couldn't already do.

Non-FOSS Warning! The application mentioned here is not open source. It has been covered because it's available for Linux.

A Gamer's Browser?

The About page of Opera GX on Linux, showing a bunch of information related to the release and the system it is running on.

I took Opera GX for a run on a Nobara Linux 43 setup, using an Early Access version to test it across various scenarios ranging from general browsing to playing YouTube content to running internet speed tests.

On first launch, the browser asked me if I wanted to send telemetry; I declined and moved on to the initial setup. It asked me to pick a theme, and I went with the default GX Classic since it felt more Opera than the rest.

It then asked whether I wanted background music that reacts to my browsing, sound effects for in-browser actions, and keyboard sounds as I typed. I left all of these at their defaults since, honestly, they went right over my head (don't call me a boomer pls).

Opera GX on Linux new user onboarding.

I could also enable a bunch of sidebar integrations for Telegram, X, Instagram, and so on, but I left those turned off. Opera GX even asked me to import data from another web browser, but it failed to detect Vivaldi, which was already installed.

The next step involved me manually turning on the ad blocker (and later the block trackers option), but the other two features, GX Control and GX Cleaner, were toggled on by default.

Opera GX on Linux new user onboarding continued.

I also had to disable Opera AI from the hamburger menu, as I didn’t need it. What did catch my eye, though, were the many pre-installed sponsored speed dials I had to clear out.

And, to little surprise, the default search engine was set to Google, but that is changeable from the Settings menu.

On the left are the toggles to configure the ad blocker and Opera AI. On the right are the sponsored speed dials.

Interestingly, while the ad blocker does work, it fails to show data on how many ads and trackers were blocked when I clicked on the widget for it in the top bar (the shield-looking icon).


Left: The ad blocker widget. Right: The Privacy and Security settings.

I headed into the Privacy and Security section of the settings and found quite a few things enabled by default: automatic sending of crash reports, fetching images for suggested news sources, displaying promotional notifications, and receiving promotional speed dials, bookmarks, and campaigns.

Not great for anyone who takes their privacy seriously and just doesn't want to be bombarded with spammy notifications and speed dial suggestions.

A demo of GX Control on Linux.

I then fired up Arc Raiders and tried my hand at GX Control. It is Opera GX's built-in resource management panel that lets you cap how much RAM, CPU, and network bandwidth the browser can use with individual toggles and sliders for each limiter. It worked as advertised and even threw up warnings when I set the limits too low.

GX Cleaner in action.

Similarly, GX Cleaner is the browser's built-in cleanup tool that clears out cache, cookies, history, tabs, downloads, and more. It has three handy presets (MIN, MED, and MAX) that control how deep the sweep goes, from a light clear of recent temporary files all the way to a full wipe of just about everything. It worked as expected during my use.

A few things I skipped testing were the staples most browsers share, like bookmarks and extensions; the latter Opera GX supports from the Opera catalog.

Then there are the other GX-specific bits: account sync for carrying your data across devices, and the sidebar web apps for Twitch and ChatGPT, which let you keep a stream or an AI assistant open without leaving your current tab.

GX Mods is also there, giving you access to over 10,000 community-made themes, sounds, shaders, and UI tweaks, though you will need an Opera account to get into it.

Wasted?

Depends. For someone like me, who closes any unnecessary background apps before launching a game, a gaming browser holds little appeal. For casual browsing, the occasional Alt+Tab to a regular browser does just fine, and the Steam overlay's built-in browser is handy too (albeit very barebones).

That said, if you are the kind of person who RGBs everything in sight and already has a riced-out Linux setup, Opera GX could be a decent addition to the collection.

Just go through the default settings before you do anything else. A lot of what's enabled out of the box won't sit well with most Linux users.

You can grab the DEB and RPM binaries for Opera GX from the official website.

  •  

The Linux Kernel is Finally Letting Go of i486 CPU Support

Plenty of CPU architectures have come and gone over the last few decades. The x86 family alone has seen a long line of chips rise to prominence and fade away as newer generations took over.

The i486 is one such chip, and it has held on in the Linux kernel far longer than most people expected. Intel launched it in 1989 as the successor to the i386.

It was faster, smarter, and arrived right as personal computers were making their way from offices into living rooms. For many people, a 486-powered PC was their first computer.

By the early 1990s, the chip was everywhere. It was so dominant that AMD, Cyrix, and IBM all jumped in with their own compatible versions to grab a slice of the market. Intel kept producing the i486 well past its prime too, with embedded versions rolling off the line until 2007.

Most major platforms dropped i486 support a long time ago. Microsoft's last operating systems to officially support it were Windows 98 and Windows NT 4.0. The Linux kernel, however, has kept the lights on for i486 users well into the 2020s.

But that is now changing. 😅

What's happening?

Back in April 2025, kernel maintainer Ingo Molnár posted an RFC patch series to the Linux Kernel Mailing List, proposing to raise the minimum supported x86-32 CPU. The new floor would require chips with both a Time Stamp Counter (TSC) and CMPXCHG8B (CX8) instruction support.

Anything short of that, including the i486 and some early Pentium variants, would be out.
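A quick way to see where a machine stands is to look for those two flags in its CPU feature list. Here is a minimal shell sketch (the helper name is my own) that checks a /proc/cpuinfo-style flags string for both:

```shell
# Minimal sketch: check a CPU "flags" line for the TSC and CX8 features
# that the proposed x86-32 baseline requires. On real hardware, you would
# feed it the "flags" line from /proc/cpuinfo.
has_new_baseline() {
    flags="$1"
    case " $flags " in
        *" tsc "*) ;;          # Time Stamp Counter present
        *) return 1 ;;
    esac
    case " $flags " in
        *" cx8 "*) return 0 ;; # CMPXCHG8B instruction present
        *) return 1 ;;
    esac
}

# A Pentium-class flags line passes; an i486-class one does not.
if has_new_baseline "fpu vme de pse tsc msr cx8"; then
    echo "meets the new baseline"
fi
if ! has_new_baseline "fpu vme de pse"; then
    echo "i486-class CPU: below the new baseline"
fi
```

Most Pentium-class and later chips report both flags, which is why the real-world impact is expected to be minimal.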

Prior to that, Linus Torvalds had already made his position clear on the mailing list, saying that:

I really get the feeling that it's time to leave i486 support behind. There's zero real reason for anybody to waste one second of development effort on this kind of issue.

Ingo's RFC had covered a fair amount of ground. The full cleanup would touch 80 files and remove over 14,000 lines of legacy code, including the entire math-emu software floating-point emulation library.

Now, the first of those patches removes the CONFIG_M486, CONFIG_M486SX, and CONFIG_MELAN Kconfig build options. It has been committed and is queued for Linux 7.1. Once it lands, building a Linux kernel image for i486-class hardware will no longer be possible.

Ingo noted in the commit that no mainstream x86 32-bit distribution has shipped an M486=y kernel package in some time, so the real-world impact on active users should be close to zero.

Unsupported but not unusable

If you have an i486 machine tucked away somewhere, it is not suddenly useless. Older kernel releases will continue to run on the hardware just fine.

Yes, those older kernels are not getting security patches. But if you are keeping a decades-old machine around for historical or educational purposes, it will not be your daily driver.

Just keep it off the internet, pair it with an older LTS kernel, and it will do what you need it to do without much fuss.

  •  

A New Linux Kernel Driver Wants to Catch Malicious USB Devices in the Act

A patch has been submitted to the Linux kernel mailing list proposing a new HID driver that would passively monitor USB keyboard-like devices and flag the ones that look like they're up to no good.

The driver is called hid-omg-detect, and it was proposed by Zubeyr Almaho.

The way it works is fairly clever. Rather than blocking anything outright, the module sits quietly in the background and scores incoming HID devices on three signals: keystroke timing entropy, plug-and-type latency, and USB descriptor fingerprinting.

The idea is that a real human typing on a real keyboard behaves very differently from a device purpose-built to inject keystrokes the moment it's plugged in.

If a device's score crosses a configured threshold, the module fires off a kernel warning and points toward USBGuard as a userspace tool to actually do the blocking. Zubeyr adds that the driver itself does not interfere with, delay, or modify any HID input events.
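For reference, USBGuard does its blocking through a rule file. A rough sketch of the kind of policy it supports follows (the device ID is hypothetical; the interface rule mirrors an example from USBGuard's rule-language documentation):

```
# Allow a specific, known keyboard by vendor:product ID (hypothetical ID).
allow id 046d:c31c

# Reject devices that expose both a mass-storage (08:*:*) and a
# keyboard-style HID (03:00:*) interface, a combination typical of
# BadUSB-style implants.
reject with-interface all-of { 08:*:* 03:00:* }
```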

This is already the second revision of the patch. The first pass got feedback on things like global state management and logging inside spinlock-held regions, all of which have been addressed in v2.

Is there a real threat?

The short answer is yes. The proposal explicitly calls out two threats, BadUSB and O.MG; both are worth knowing about.

BadUSB is the broader class of attack that was first disclosed back in 2014 by security researchers. It works by reprogramming the firmware on a USB device to impersonate a keyboard.

The operating system sees it as a perfectly normal input device, trusts it completely, and lets it do whatever its payload tells it to, be it open terminals, download malware, or exfiltrate data.

The O.MG Cable takes the same idea and hides it inside something that looks exactly like a regular USB cable. There's a tiny implant built into the connector that can inject keystrokes, log them, spoof USB identifiers to dodge detection, and be controlled remotely over WiFi.

Neither of these makes headlines as often as it once did, but that doesn't mean the threat has gone away. Such tools have only gotten more refined and accessible, and malicious actors in 2026 are not getting any less creative or aggressive.

However, there's a big 'but' (not that, you pervert) here. This is only a proposal, and while it looks good on the surface, the kernel maintainers have the final say on whether it makes it into Linux.

Via: Phoronix

  •  

Proton Launches Workspace and Meet, Takes Aim at Google and Microsoft

If you are a regular reader of ours, then you know that Proton is one of the privacy-focused services we usually vouch for. I have been using their various services personally for quite a while now, and I can confidently say that they know what they are doing.

Of course, I am just a random person on the internet yapping about how good it is. If you haven't ever tried their offerings, then you can decide for yourself, as they have launched two new services that could make your move away from Big Tech easier.

Two Big Launches

The various Proton services included in Proton Workspace.

Proton Workspace is a comprehensive suite that pulls all of Proton's services together under one roof, aimed at businesses and teams that want a privacy-first alternative to Google Workspace and Microsoft 365.

It brings together Mail, Calendar, Drive, Docs, Sheets, VPN, Pass, Lumo, and the newly launched Proton Meet (more on it later). Businesses (both small and big) that want Proton's full suite without having to manage a separate subscription for every service and team member can go for this.

As an added bonus, being on a Swiss platform means the US government can't compel Proton to hand over your data the way it can with Google or Microsoft under the CLOUD Act.

📋
The URLs for some Proton services above are partner links.
The three Proton Workspace pricing tiers: Workspace Standard ($12.99 per user per month, billed annually), Workspace Premium ($19.99 per user per month, billed annually), and Enterprise (contact the sales team).

If Proton Workspace interests you, then you can opt for one of the two paid plans.

Workspace Standard, at $12.99/month per user on an annual plan or $14.99/month per user if you pay monthly, gets you Mail, Calendar, Drive, Docs, Sheets, Meet, VPN, and Pass. It also includes 1 TB of storage per user and support for up to 15 custom email domains.

Workspace Premium bumps that up to 3 TB of storage per user, 20 custom email domains, higher Meet capacity (250 participants vs. 100 on Standard), access to Lumo, and email data retention policies at $19.99/month per user annually or $24.99/month per user on a monthly plan.

Large organizations can also reach out to Proton directly for a specially tailored Enterprise plan, and if you are already a Proton Business Suite member, then you get a free upgrade to Workspace Standard.

A demo of Proton Meet with many participants in a video call.

On the other hand, Proton Meet is their new end-to-end encrypted video conferencing tool, and it goes up directly against the likes of Zoom and Google Meet.

Every call, including audio, video, screen shares, and chat, is encrypted using the open source Messaging Layer Security (MLS) protocol. Thanks to that, not even Proton can see what goes on in your meetings, and there are no logs either.

The three Proton Meet pricing tiers: Meet Professional ($7.99 per user per month, billed annually), Workspace Standard ($12.99), and Workspace Premium ($19.99).

As for the pricing, the Free tier lets anyone host calls with up to 50 participants for up to an hour without requiring a Proton account. For more headroom, the Meet Professional plan costs $7.99/user/month and raises the participant cap to 100, with meeting durations of up to 24 hours.

Teams that want Meet bundled with the rest of Proton's suite can opt for Workspace Standard or Premium instead, which is the better deal if you are already switching over from Google or Microsoft.

You have many options to use Meet. It is available on the Web, but also ships with native apps for Linux (yeah, you read that right), Android, Windows, macOS, and iOS.

  •  

LibreOffice Drama: TDF Removes Collabora Developers in One Sweep

TDF's Membership Committee has removed all Collabora staff and partners from membership in one move, covering over 30 developers. This includes, per Collabora's own count, seven of LibreOffice's all-time top ten core committers who are still active.

To make things more complicated, this is only the latest in a series of departures. Several of TDF's original founders have already stopped being members over recent years, and of the remaining active founders, three of the last four are now paid TDF staff who aren't writing core code.

And it doesn't stop there. Collabora takes aim at a series of governance decisions it considers indefensible. Board appointments, it says, favored non-technical staff over experienced contributors, and the revival of shelved online code now puts TDF in direct competition with its own biggest contributor.

Then there are the legal proceedings against former volunteer board members, which were reportedly bankrolled by donor money, and trademark complaints directed at contributors while others are misusing the LibreOffice name freely without any pushback.

This comes from Michael Meeks, CEO of Collabora Productivity and one of the founders of The Document Foundation. He published this on April 1, and before you ask, no, it's not an April Fool's joke.

As for what Collabora plans to do next, Meeks has laid out plans for a new, lighter Collabora Office product, rebuilt from a cleaner base with less legacy code baggage and a web-based toolkit. Apart from that, their Classic product is not going away, with support set to continue for the foreseeable future.

On their future relationship with LibreOffice, Michael adds:

We will continue to make contributions to LibreOffice where that makes sense (if we are welcome to), but it clearly no longer makes much sense to continue investing heavily in building what remains of TDF’s community and product for them – while being excluded from its governance.
In this regard, we seem to be back where we were fifteen years ago. Meanwhile TDF continues to hire developers, sells LibreOffice and starts to act more like a staff-controlled collective than a Free Software project.

Collabora is also calling for developers to get involved in this new endeavor. If you have the relevant skills, you can head to their community page.

The response

The Document Foundation's official reply came from Italo Vignoli, a founder Collabora lists as having already exited TDF membership.

He has kept it short, confirming that the removals happened, pointing to TDF's recently adopted Community Bylaws as the basis. Those bylaws include a clause requiring anyone affiliated with a company in an active legal dispute with TDF to step down from membership.

The stated rationale is that past situations saw people put their employer's interests ahead of the foundation's, and the clause exists to stop that from happening again. Neither party has detailed the specifics of the legal dispute between TDF and Collabora.

On Collabora's plans and the wider fallout, the post keeps things brief, laying out that this kind of split is not unheard of in the world of FLOSS, and nothing in the MPL license stops Collabora from building whatever it wants.

TDF also makes clear that a membership revocation is not a ban from contributing, with the project remaining open to anyone, and expects Collabora to keep contributing "when the time comes."


Suggested Read 📖: ONLYOFFICE gets forked

  •  

Proposal to Centralize Per-User Environment Variables Under Systemd in Fedora Rejected

A contributor named Faeiz Mahrus put forward a change proposal for Fedora 45 that would rework how per-user environment variables are managed on the system. Right now, Fedora handles this through shell-specific RC files: ~/.bashrc for Bash users, ~/.zshrc for Zsh users.

These files are responsible for things like adding ~/.local/bin and ~/bin to your $PATH, which is the list of directories your system searches when you run a command.

The problem Faeiz pointed to was that Fedora ships a number of alternative shells (Fish, Nushell, Xonsh, and Dash among them), but none of those have packaged RC files that do the same job.

So if you switch your default shell to Fish, any scripts or programs you've installed in ~/.local/bin suddenly stop being found by the system. They're still there, but your shell doesn't know where to look for them.

The proposed fix was to move this responsibility to systemd's environment-generator functionality, using drop-in configuration files placed in the /etc/skel/.config/environment.d/ directory.

Since systemd manages user sessions on Fedora, the idea was that it could apply these environment variables to all user processes regardless of which shell you're running. One config file would cover all shells, with no per-shell fixing required.
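As a sketch of how that could look, a single drop-in file (name and contents illustrative, following the environment.d(5) format) would restore the $PATH entries for every shell at once:

```ini
# ~/.config/environment.d/50-local-bin.conf (hypothetical file name)
# Read by systemd's user environment generator, so the setting applies
# to the whole session regardless of which shell is in use.
PATH=$HOME/.local/bin:$HOME/bin:$PATH
```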

The vote

The proposal went to FESCo for a vote and came back with six votes against and three abstentions. The key objection was that it didn't adequately account for environments where systemd isn't running.

Committee member Neal Gompa (ngompa) voted against it, pointing out that containers don't guarantee systemd is present, which would make the change quietly disruptive for anyone running Fedora-based container images. Kevin Fenzi (kevin), another member, said that the proposal wasn't convincing enough yet.

If you didn't know, FESCo, the Fedora Engineering Steering Committee, is the governing body that reviews and approves all significant proposed changes to Fedora Linux before they land in a release.

Contributors submit change proposals, FESCo members deliberate, and the committee votes on whether a proposal is ready to ship, needs revision, or should be turned away. It is essentially the gatekeeper for what makes it into a Fedora release.

While the FESCo has marked the ticket as rejected, they haven't fully shut the door on the idea. Committee member Michel Lind (salimma) noted in the closing comment that the proposal owner is welcome to resubmit once the gaps around systemd-less environments are addressed and more concrete configuration examples are provided.

Via: Phoronix


Suggested Read 📖: Fedora project leader suggests using Apple's age verification API

  •  

Arch Installer Goes 4.0 With a New Face and Fewer 'Curses'

Arch Linux needs no introduction around here. It is the distro people flock to for its no-nonsense, rolling release approach and, of course, the right to say "I use Arch, btw" at every given opportunity.

Setting it up used to mean having the wiki open in one window and a terminal in another, hoping you didn't miss a step. Arch Installer (archinstall) changed that.

It is Arch's official guided installer that is bundled with the live ISO. It takes you through the whole process, from disk partitioning to desktop environment selection, without requiring you to memorize yet another command. I have used it while installing an Arch-based distro in the past (Omarchy), and it was quite reliable.

The developers have now introduced Arch Installer 4.0, and it is a major overhaul.

What to expect?

Video courtesy of Sreenath.

We begin with the most obvious change, where Arch Installer has ditched curses, the old C library powering most terminal interfaces you've come across, in favor of Textual, a Python TUI framework by Textualize.io.

This brings a cleaner look, and menus are now async too, with the installer running as a single persistent Textual app throughout rather than spinning up a new instance for each selection. This means the user interface won't freeze or stall between selections while the installer is doing work in the background.

Moving on, you can now set up a firewall during installation, with firewalld available right from the menu. GRUB also picks up Unified Kernel Image (UKI) menu entry support. A Btrfs bug that had the installer choking on partitions with no mountpoints assigned has been fixed too.

On the translation front, Galician and Nepali are in as new languages, and a good chunk of the existing ones, Italian, Japanese, Turkish, Hungarian, Ukrainian, Czech, Finnish, Spanish, and Hindi included, have been refreshed.

Worth noting too is that Arch Installer 4.1 has already arrived shortly after, and it drops the NVIDIA proprietary driver option since nvidia-dkms is no longer in the Arch repos.

Closing words

You can grab the latest Arch Linux ISO to try the new installer, or update the installer inside an existing live session by running pacman -Syu. For the full changelog, head to the releases page on GitHub.


Suggested Read 📖: Wayland’s most annoying bug is getting fixed

  •  

GNOME 50 Drops Google Drive Integration (For a Valid Reason)

Almost two weeks ago, someone on GNOME's Discourse forum asked whether the missing Google Drive support in GNOME 50 was a bug or a deliberate decision.

GNOME developer Emmanuele Bassi replied, confirming that Drive was no longer supported.

He went on to say that libgdata, the library that coordinates communication between GNOME apps and Google's APIs, has gone without a maintainer for nearly four years. Furthermore, GVFS dropped its libgdata dependency about ten months ago, and GNOME Online Accounts now checks for that before offering the Files toggle under its Google provider settings at all.

Emmanuele suggested that anyone wanting to restore the feature should reach out to the GVFS maintainer. Chiming in on this, Michael Catanzaro, another GNOME developer, said that libgdata has since been archived on GitLab (linked above), leaving nothing to even contribute to at this point.

Further explaining that:

GNOME had already disabled this functionality years ago, but distros sometimes move slowly. If Fedora had disabled it sooner, then perhaps users would have noticed the problem before the project was archived rather than after. Oh well.

Back in December 2022, Catanzaro had already put out a public call for someone to take over libgdata, warning that the integrations depending on it would eventually stop working if nobody did. That was over three years ago, and nobody ever stepped up.

The issue was not just libgdata itself. It was the only remaining reason libsoup2 was still present in the GNOME stack, at a time when libsoup2 was already being phased out ahead of the GNOME 44 release.

Currently, Debian's security tracker lists many open CVEs against it, covering everything from HTTP request smuggling to authentication flaws. Keeping libgdata around meant keeping all of those spicy vulnerabilities around too.

A long shot, but…

I like to be delulu every so often, so here's a thought: maybe Google could officially step in? Assigning a developer or two to bring back Drive support could get things rolling; they have no shortage of talent, after all.

Plus, they are already known to be supporters of open source. Seeing their recent f*ckups, this could be a good win for both their PR team and GNOME users who rely on such support.


Suggested Read 📖: GNOME 50 is here, but ditches X11

  •  

After 5 Years, PineTime Gets a Major Upgrade with AMOLED, GPS, and More

PINE64 has built a reputation for delivering open source hardware to people who actually care about what runs on their devices. From single-board computers like the ROCKPro64 and the RISC-V powered STAR64 to Linux smartphones like the PinePhone, the company has been pretty consistent.

One of their offerings is the PineTime, a compact, inexpensive open source smartwatch that has been around since 2019. It started as a community side project, inspired partly by the simplicity of the old Pebble, and is priced at $26.99.

Years later, PINE64 has revealed what comes next. Announced at FOSDEM 2026 and detailed in a new blog post, the PineTime Pro is the open source smartwatch's next step up.

PineTime Pro is Coming

Pics courtesy of the PINE64 team.

The PineTime Pro is a significant hardware upgrade over the original, and the spec sheet makes that known right away.

At its core is a dual-core Cortex-M33 SoC, with an application core running at up to 200 MHz and a separate dedicated Bluetooth core. RAM goes up from the original's 64 KB of SRAM to 800 KB of SRAM plus 8 MB of PSRAM. The display jumps from a 240x240 pixel 1.3-inch LCD to a 410x502 pixel 2.13-inch AMOLED panel with touch support.

Beyond that, the Pro comes with GPS, a heart rate sensor with blood oxygen measurement, a 6D IMU, Bluetooth 5.2 with both Classic and Low Energy support, a microphone, a speaker, and a vibration motor. It also has a digital crown that doubles as a button.

External storage is delivered via 8 MB of QSPI flash, and there is a 4-pin connector for power, debugging, and programming purposes.

Additionally, PINE64 is calling this one the PineTime Pro and not the PineTime 2 for a deliberate reason. The original is not being discontinued as it is still doing well.

The Pro is meant to sit alongside it as a more capable option, not replace it. If the original PineTime was built to be approachable, the Pro is built for those who want to push things further.

On the software side, developers from both InfiniTime and Wasp-OS are involved, with the groundwork for it already being laid. The extra hardware headroom also means features that were never realistic on the original could actually happen here.

When to expect?

As for where things stand right now, it is early. The first two watch prototypes arrived toward the end of 2025, but a non-functional SWD port made loading and debugging software harder than expected.

A second batch showed up just before FOSDEM 2026 but ran into a flash memory issue, which meant the demo at the event had to run on a development board rather than the actual watch hardware.

A third hardware revision is expected in early April, and the team is optimistic this one will finally clear the remaining hurdles.

There is no release date yet, and PINE64 is not claiming otherwise. But after years of hardware iteration, the PineTime Pro is finally starting to feel like something we might actually one day wear on our wrists.

  •  

Ubuntu 26.04 LTS Requires More RAM Than Windows 11?

Ubuntu 26.04 LTS "Resolute Raccoon" is not out yet, but its release notes contain an unexpected change that had completely slipped past me: Canonical has bumped the minimum RAM requirement for Ubuntu Desktop to 6 GB for this upcoming LTS release.

While it is a major shift for desktop users, on the server side, things remain far more flexible. Ubuntu Server's documentation lists a minimum of 1.5 GB for ISO installs, with a suggested minimum of 3 GB to account for real-world workloads.

Ubuntu 26.04 LTS system requirements on the left, Ubuntu 24.04 LTS' on the right.

Ubuntu 24.04 LTS, the current long-term support release, lists 4 GB of RAM alongside a 2 GHz dual-core processor and 25 GB of storage as its minimum requirements. Those requirements carried over to Ubuntu 25.10 as well, so the jump to 6 GB in 26.04 marks the first time in a while that Canonical has raised the desktop RAM floor.

But Windows requires less?

Microsoft lists 4 GB as the minimum RAM for Windows 11, which on paper looks more generous than what Ubuntu 26.04 is asking for. That number is worth a closer look, though.

The Windows 11 system requirements, listing processor, RAM, storage, system firmware, TPM, graphics card, and display.

I say that because Windows 11 also mandates Trusted Platform Module (TPM) version 2.0. If you didn't know (or care), TPM is a dedicated security chip built into your motherboard that handles cryptographic keys used by features like Windows Hello and BitLocker.

The thing is, most computers that have shipped with TPM in the past few years (at least the Windows-focused ones) come with at least 8 GB of RAM. Factor in how poorly Windows 11 performs on a 4 GB install (check the comments), and that minimum starts to look sloppy.

Canonical appears to be taking the more straightforward approach here. Ubuntu with GNOME has been known to be fairly hungry on RAM once you start actually using it.

Open a browser, load a handful of tabs, and the available memory disappears quickly. The 4 GB figure that covered Ubuntu 24.04 was closer to a bare technical floor than a practical baseline, and moving it to 6 GB in 26.04 reflects that reality more honestly.

The TLDR is that both operating systems need headroom well above their listed minimums the moment you do anything beyond light use; one lists it honestly, while the other doesn't.

What about systems with 4 GB of RAM?

If your machine has 4 GB of RAM, Ubuntu 26.04 LTS should still be a decent fit. But if you are a power user who likes to multitask, Lubuntu, the official Ubuntu flavor built on the LXQt desktop environment, can be a better choice; it runs comfortably with a minimum of 1 GB of RAM (2 GB recommended). Xubuntu is also a good candidate here.

For systems where even that is a stretch, opting for a window manager like i3 or bspwm instead of a full desktop environment will give you a functional Linux setup on hardware that a standard Ubuntu install would likely struggle with.


Suggested Read 📖: Best lightweight Linux distributions

  •  

Ubuntu 26.04 LTS Beta Shows You There's Potential in the Stable Release

For regulars in the open source space, Ubuntu is kind of a household name, one that introduced many to the diverse world of Linux, where you have all kinds of flavors. Want some work done? You have Fedora. Want to earn the right to say "I use Arch, btw," and get work done? You have Arch Linux.

We are now weeks away from Ubuntu's next long-term support release, and Canonical has provided everyone with a beta build for testing purposes. Let's see what it delivers.

Ubuntu 26.04 LTS Beta: A Functional Upgrade

The Ubuntu 26.04 LTS beta desktop, with the quick settings dropdown open in the top-right.

While the beta release offers variants like Server, WSL, and Cloud images, alongside the official Ubuntu flavors, I have only focused on Desktop here. We start with the new boot animation that looks clean but will be easy to miss if you have a decently powerful system.

Powering the release is Linux 7.0, whose development is still wrapping up, but it marks a significant jump from the Linux 6.8 kernel that shipped with Ubuntu 24.04 LTS. For the desktop, it ships with GNOME 50, which has finally managed to let go of X11.


The new boot animation.

The shell picks up a power mode indicator in the top bar, better screen time controls, and fixes for some long-standing annoyances like deleted default folders reappearing after a reboot. Variable refresh rate and fractional scaling are now stable features and are no longer buried behind an experimental flag.

Resources replaces System Monitor for hardware monitoring and process management. It is built on GTK4 and libadwaita, so it slots in naturally with the rest of the desktop, and was picked over Mission Center largely because of its stronger accessibility support.

GNOME 50 demo and the Resources app on the left; on the right is the App Center.

The App Center has also been updated to show and manage Deb packages installed from Ubuntu's repositories, not just snaps. You can head over to the Manage section to find a new package type filter that lets you view them separately.

On the graphics side, Mesa 26.0 is on board, bringing OpenGL 4.6 and Vulkan 1.4 support along with a broad set of driver improvements across Intel, AMD, and NVIDIA hardware.

Docker Engine 29 is also included, which makes the Containerd image store the default for fresh installs and adds experimental nftables firewall backend support.
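For anyone on an existing install rather than a fresh one, the containerd image store has been an opt-in feature in earlier Docker Engine releases. A minimal `/etc/docker/daemon.json` sketch to enable it (this is documented Docker configuration, not anything specific to Ubuntu's packaging):

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```

After saving the file, restart the daemon (`sudo systemctl restart docker`) and check the Storage Driver section of `docker info` to confirm the containerd-backed store is active.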

There are more new features in Ubuntu 26.04 that we have covered here.

Start Testing

You can download the beta builds for x86 systems from the release portal (which is also where the stable release will go live), and for other platforms like ARM64, you will have to visit the image mirror.
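If you do grab an image, it's worth verifying the download before flashing it. The sketch below demonstrates the standard `sha256sum -c` workflow using a stand-in file; in practice, the ISO filename shown here is an assumption, and the real SHA256SUMS list comes from the release page.

```shell
# Stand-in for a downloaded beta image; in practice you would download
# the real ISO plus the SHA256SUMS file from the release portal.
echo "beta image contents" > ubuntu-26.04-beta-desktop-amd64.iso
sha256sum ubuntu-26.04-beta-desktop-amd64.iso > SHA256SUMS

# Verification step: prints "<filename>: OK" when the hash matches.
sha256sum -c SHA256SUMS
```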

Stay tuned here and on our socials for more detailed coverage when the stable release comes out!

Why Ubuntu 26.10 Might Drop ZFS, RAID & Encryption Support From GRUB

Canonical engineer Julian Andres Klode, who works on Ubuntu's secure boot signing, has put forward a proposal on Ubuntu's community forums to significantly cut down the GRUB bootloader for the upcoming Ubuntu 26.10.

The proposal takes aim at GRUB's parsers, which Julian describes as a "constant source of security issues," and proposes cutting a number of features from signed builds to reduce the attack surface in the pre-boot environment.

a cropped screenshot of a post by julian andres klode on ubuntu's discourse forum that lays out a proposal to remove certain features from grub on ubuntu 26.10

What is meant to get the axe? On the filesystem side, Btrfs, HFS+, XFS, and ZFS would all be dropped, leaving only ext4, FAT, ISO 9660, and SquashFS for Snaps. Support for image formats used in boot themes would go too, alongside the Apple partition table, LVM, most md-RAID modes (RAID1 is retained), and LUKS-encrypted disks.

In practice, that means Ubuntu 26.10 systems running Secure Boot would need to boot from a plain, unencrypted ext4 partition on a GPT or MBR disk. No ZFS, no Btrfs, no encrypted /boot. Those features would still be available through unsigned GRUB builds, but you'd lose Secure Boot entirely in exchange.
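To gauge whether a given machine would be caught by this, you can compare the filesystem that signed GRUB has to read against the keep-list above. The helper below is a hypothetical sketch, not anything from the proposal: the function name and the `KEPT` set are mine, inferred from the feature list as summarized here.

```python
# Assumed keep-list per the proposal: ext4, FAT (vfat), ISO 9660, SquashFS.
KEPT = {"ext4", "vfat", "iso9660", "squashfs"}


def boot_fs_ok(proc_mounts_text):
    """Return (fstype, ok) for the filesystem signed GRUB would have to
    read: /boot when it is a separate mount, otherwise /.

    Note: this only checks the filesystem type. LVM, RAID, or LUKS layers
    underneath (also dropped by the proposal) are not visible in
    /proc/mounts and would need a separate lsblk check.
    """
    fstype = {}
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            fstype[fields[1]] = fields[2]  # mountpoint -> filesystem type
    fs = fstype.get("/boot", fstype.get("/"))
    return fs, fs in KEPT


# Example: a Btrfs root with a separate ext4 /boot stays bootable, since
# GRUB only needs to read /boot; a Btrfs root with no separate /boot does not.
print(boot_fs_ok("/dev/sda2 / btrfs rw 0 0\n/dev/sda1 /boot ext4 rw 0 0\n"))
print(boot_fs_ok("/dev/sda2 / btrfs rw 0 0\n"))
```

On a live system you would feed it the real mount table, e.g. `boot_fs_ok(open("/proc/mounts").read())`.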

He pitches this as a meaningful security improvement and also as a step toward eventually moving to newer boot solutions down the line.

Now, here's the catch. If your current setup relied on any of the features being dropped, the release upgrader would block you from moving to Ubuntu 26.10 at all. Those systems would stay on 26.04 LTS by default.

There's resistance

Neal Gompa, a well-known name in Linux spaces and contributor to Fedora, openSUSE, and several other distributions, pushed back on a couple of points right away.

On Btrfs, he argued that GRUB's driver for it is read-only and actively maintained upstream, and that users running boot-to-snapshot setups depend on it being there.

He also disputed Julian's suggestion that native /boot RAID setups are uncommon, saying that software RAID1 is "incredibly common" in his experience, and that removing it would be a substantial loss, not a minor one.

When a community member questioned whether there was a need to support older systems, Neal laid out that a large chunk of web hosting, cloud, and VPS environments still don't support UEFI, and that plenty of UEFI implementations predating 2017 were too broken to be practically useful.

Another Ubuntu community member, Paddy Landau, raised a different concern. Dropping PNG and JPEG support in signed builds would kill boot menu theming, something he's had running on his Ubuntu setup for years.

He also questioned the security case, noting that the known vulnerabilities appear to affect GRUB versions before 2.12 and that the TGA format doesn't carry the same risk.

The sharpest response came from Thomas Ward, an Ubuntu Technical Board member, who pointed out that Ubuntu's own default installers, including the server installer, set up LVM by default, and that LUKS encryption on Ubuntu currently requires LVM.

Canonical's own recommended installation configuration would, under this proposal, end up incompatible with Secure Boot on 26.10. He's asking for a clear, per-feature public justification before anything moves forward and argues that without it, dropping features that users and compliance environments actively depend on is simply not justifiable.

And I agree with him. If you can't provide convincing reasons to remove each one of those features, then don't bother proposing it. Simple as that.


Suggested Read 📖: Fedora's project leader has suggested something to tackle age verification
