Can Free VPN and AI Save Firefox From Decline?

April 5, 2026 at 04:54

It's no secret that Firefox has been steadily losing ground over the past decade or so. Despite efforts to revitalize this once beloved titan of the internet, the market share just hasn't returned, and Mozilla's recent choices haven't been helping the cause. That being said, Mozilla hasn't given up, and after many false starts, it seems like current leadership is ready to give it a go at regaining ground.

The recently introduced built-in Firefox VPN feature is an example of this, as are the (admittedly controversial) AI-powered enhancements shipped in recent releases. But are these enough to give Firefox a real chance to claw its way back to the top, or at least make it relevant enough to survive?

Let's talk about it, and see where things might be headed for our favourite red panda.

Is Firefox really dying?

A screenshot of the latest browser stats from Statcounter
Firefox hasn't been faring well on Statcounter in recent years

Since we’re asking whether Firefox can be resurrected, it shouldn’t come as a shock that, by the numbers, Firefox is not in a particularly good place. Since the launch of Google Chrome, Firefox has gradually, and then more rapidly, fallen from its former position to the point where it now accounts for just 2.29% of global browser market share, according to Statcounter. That’s down from 7.97% in 2016 (itself already a modest figure), a drop of roughly 5.7 percentage points in the last decade alone.

Of course, a low market share does not mean an open-source project is literally “dying”. But Firefox is not just a project. It is also a product, and as a product, it has an incentive not just to exist or survive, but to thrive. Right now, the long-term trend suggests it is doing neither especially well.

What happened to Firefox's popularity anyway?

A screenshot of the about dialog from Firefox 149
Firefox is still getting regular releases despite the falling market share

It’s easy to snigger and say “Chrome happened, heh!” but that wouldn’t do the whole story justice. It’s unfair to say that the resignation of former Mozilla CEO Brendan Eich in 2014 and the subsequent creation of Brave were responsible for Firefox’s decline, even if that episode is sometimes cited as one more nail in the Firefox coffin.

Instead, the reality is a bit more complicated, and it’s worth paying attention to before we answer the questions posed by our overall premise.

For starters, Firefox has reinvented itself a bit too often in a relatively short timeframe, and unfortunately, these reinventions have at times blindsided loyal users. From Australis to Quantum/Photon, and later Proton, Mozilla has seemed to be in a relentless search for a new Firefox aesthetic. On the surface, no pun intended, this may not seem like a big deal, because after all, “a UI is just another coat of paint”, right?

💡
Did you know? Firefox is gearing up for yet another interface change. You can learn all about it in our coverage on Firefox Nova.

The problem with change is friction

A photo of a person stuck under a bunch of boxes
Too many changes in a short time can leave users feeling overwhelmed Pexels / cottonbro studio

Every change is another experience for users to get used to, and adjusting to change brings friction. The more change, the more friction, and the more friction the greater the frustration. Eventually, users get tired and move on.

By contrast, Chrome and most of Firefox’s major competitors have remained comparatively stable in their core look and feel over time, which reduces the friction users feel when moving from one version to the next. Furthermore, Firefox lost its legacy extension system and full browser theming in 2017, and before that, the standout Panorama tab groups feature in 2016. You can see the Firefox 57 transition point in Mozilla’s own release notes.

Simply put, Firefox has been fighting a war of attrition of its own making. So the question becomes: can its new features heal the scars those old wounds left behind?

Why the new VPN matters, if they get it right

Mozilla VPN

Of all the moves Mozilla has been making in Firefox recently, this one perhaps has the greatest potential to be the sleeper hit Firefox has needed for a long time. After all, Mozilla has long positioned itself as a champion of privacy and security, and Firefox still retains a stronger reputation for privacy than many of its mainstream rivals.

Unlike AI features, which many users may ignore, distrust, or actively avoid, built-in privacy tools solve a problem people already understand.

That said, Mozilla needs to be careful not to make some of the same obvious mistakes that have hurt other browsers in the past. Just as importantly, it needs to resist the temptation to keep this feature restricted to only a select few in the long run.

Don’t give us a glorified proxy

A screenshot from the Opera VPN page
Opera VPN has come under fire in the past for not being a true VPN service

Opera tried this, and to my knowledge, it is still essentially that, despite carrying the name of a VPN. If Mozilla is serious about this effort, then it needs to make sure that what it is calling a VPN actually delivers on what the term implies.

If this is going to matter, it cannot feel like a half-step, a marketing hook, or a dressed-up proxy with a more fashionable label. It needs to be useful, absolutely trustworthy (a very hard sell), and accessible enough that ordinary users can feel the benefit without having to decode the fine print first.

It needs to be for everyone, or it shouldn’t exist at all

A silhouette of five people posing in front of a sunset sky while standing on what appears to be a hill
Pexels / Olha Ruskykh

That stance may sound a little hardline, but it is the stance Firefox needs if Mozilla truly intends to make this feature matter on the global stage. A privacy feature cannot meaningfully strengthen Firefox’s position if large parts of the world are excluded from using it.

The world is not limited to the US, UK, Europe, and Canada. It never was. If Mozilla is going to introduce a feature like this, it needs to be available worldwide, or it risks sending the message that a large subset of highly connected users, many of whom also contribute to the open-source technologies that make these features possible, do not matter enough to be included. Mozilla, of all companies, needs to prove that this is not its position.

AI: Not for everyone, but maybe enough for some

A screenshot of the AI Controls in Firefox preferences
The AI settings in Firefox Preferences show Mozilla is leaning heavily towards local solutions

It's important to understand the approach Mozilla is taking here, since this is an area where things often get framed through sensationalism rather than reality. Yes, Mozilla is adding AI features to Firefox, and at a fairly brisk pace. However, these features are still optional, though Mozilla choosing to make them opt-out rather than opt-in might leave a bad taste in some users' mouths. Mozilla’s current AI controls are part of that wider balancing act.

That being said, some users not only won't mind these features, but may sincerely expect them to be present in any modern browser, and be disappointed without them. After all, there's a very real market for the likes of Microsoft's Copilot and Google's Gemini: casual users who aren't too deeply concerned how something works so much as whether they can use it or not.

Striking the balance

A screenshot of the marketing for Firefox, showing the line "Control without complexity" and a number of images and associated points
Mozilla is trying to market Firefox with a more balanced approach, but will it work?

The key here isn't so much about whether Mozilla/Firefox should abandon AI altogether. It's clearly a direction Mozilla is dead set on exploring, even as privacy concerns continue to dominate the conversation. The real trick is to find a way for these features to exist while also doing something genuinely useful.

Poor article summaries and gimmicky integrations are just not going to win many people over, certainly not in the long run. But on-device tools that provide translations, help users conduct better research, navigate their browsing history more intelligently, or just generally get real work done faster without sending their data off into the void? Now that's a story most people can confidently get behind.

That's where Mozilla may have a real opening. Sure, AI isn't likely to be the thing that single-handedly "saves" Firefox, even if done "right". Yet, if it's handled carefully, it could help Firefox feel current, capable, and competitive to the kinds of users who now expect these conveniences to exist.

Counterpoint: What about the competition? Is everyone doing it?

A screenshot of Vivaldi showing the "keep browsing human" announcement post
Vivaldi is known for its bells and whistles. AI isn't one of them

No, and if we're looking at benchmarks of success, this really matters. For example, Vivaldi, the "spiritual successor" to the pre-Chromium-clone Opera, has firmly chosen not to integrate generative AI features into the browser. They've been quite explicit about this stance with their "keep browsing human" messaging.

In a world where it seems every major browser vendor is diving in head-first, this is a bold decision that helps Vivaldi stand apart from a market increasingly saturated by the same talking points and "checklist features" that feel like mere buzzword copycatting. This is also one of the reasons why Firefox forks like Waterfox and others have continued to hold solid, faithful communities.

Truthfully, Firefox has often been chosen because it's not like the crowd: it's not Chrome, it's not a clone (it still uses its own Gecko engine), and it's the one major browser that has historically dared to remain not only independent but substantively different. So while some users won't mind a little assistance here and there, the Firefox faithful may be more likely to be the ones turned off by the "AI everywhere" trend that's taken over the internet. For those users, restraint can be a selling point in itself.

What this means for Firefox

A screenshot from Firefox.com showing "Fast to switch. Easy to settle in."
Mozilla is clearly trying to keep the Firefox brand relevant and alive. Will these new efforts be enough?

What Mozilla is pursuing here is still quite the gamble. They're walking a fine line between the privacy-focused legacy of Firefox and the "assisted future" that the world is headed towards. It may look like the right way forward for some, but might very well be a death knell to others.

Mozilla may believe in striking a balance by keeping these features flexible, optional, and in some cases locally driven. The problem is that balance is hard to achieve, and even harder to effectively communicate.

So Firefox's real challenge isn't just adding new features. It's in convincing people that it still knows where to draw the line. If Mozilla gets that balance right, Firefox may come across as modern without feeling overstuffed. If they get it wrong, it risks alienating users who just wanted a browser with boundaries.

The secret benefit of drawing attention

A photo of a loudspeaker with an orange base, white hand, and white flange with a silver rim, sitting on a lightly coloured stool
"AI", "privacy", and "VPN" sure are great ways to stir up conversation, if this is the aim Pexels / Mikhail Nilov

It would be remiss of me to close out without addressing the one thing that this new strategy by Mozilla may be most succeeding at: getting us to talk about Firefox again. Sure, not all the talk around Mozilla's recent decisions has been positive, and if we're being fair, they have given us some reasons for pause. However, if there's one thing attention does well, it's getting people to see what all the fuss is about, even if they're otherwise not sold or even all that interested.

Maybe that's what Mozilla is angling for with Firefox after all - and if they can manage to stick the landing, all this increased attention and coverage might just be the key to getting new (and old) users to try this new flavour of Firefox ice cream and find that they like it.

Is it all enough?

A screenshot from firefox.com showing more of the new branding for Firefox
Will the new features keep up with the ambitious branding and fresh energy?

Frankly, it's a bit too early to tell, though the reality is that trends can often be shifted by the most unexpected winds of change. No one expected Chromebooks to become a success, until they were. At one time, no one saw smartphones coming; now they're everywhere. What drove those trends? Tiny, seemingly innocuous factors, and simple, seemingly unimportant features. The same can happen with Firefox and its ambitions to recapture its position in the hearts and minds of users around the world. Could the new VPN, along with more carefully handled AI integration, be the secret sauce to push things over the line?

Only time will tell, but maybe, there's a chance this time.

Git Isn’t Just for Developers. It Might Be the Best Writing Tool Ever

April 4, 2026 at 06:49

In 2019, I watched a fellow writer almost lose her life’s work.

We were working in an advertising agency. Like most writers who end up in advertising, we were both secretly working on our novels. One afternoon, after lunch, I noticed her pacing around the office, rifling through her bag, checking every desk. Her irritation quickly turned into panic.

Her pen drive was missing.

Hours later, on the verge of tears, she told us why this particular pen drive mattered: it held the only copy of her manuscript.

My first reaction was disbelief. Only copy?

No emailed draft to herself, no Google Drive or Dropbox, no backup anywhere? The answer was simple: she hadn’t thought about it. Relative tech illiteracy had put an entire novel at the mercy of a misplaced USB stick.

My reaction was part heartbreak, part annoyance, and part dread. That night I sat down to audit my own practice—how I recorded, recalled, and stored my work.

At the time, the source of truth for my fiction was a single folder on Dropbox, with dozens of subdirectories by project. All the manuscripts were .doc or .docx. I took regular backups of that folder, zipped them, and emailed them to myself with dates and times in the subject line. If something went wrong, I could theoretically roll back to a recent version.

On paper, that sounded reasonable. In my body, it felt wrong. I couldn’t articulate why, but I knew “not losing everything” was not the same as “leaving behind a studio that someone else could actually use.”

A few weeks later, on a whim, I decided to relearn programming after almost twenty years. Maybe, I thought, programming in 2019 would be kinder than it had been in 2001.

The first lesson on The Odin Project was on Git.

I went through it expecting boilerplate developer lore and came out with something else: a way to resolve the unease I had been carrying about my writing. Git didn’t just promise safety from catastrophic loss; it offered a way to keep a living, navigable history of my writing. It suggested that my studio didn’t have to be a pile of files.

It could be a time machine instead.

I remember feeling irritated that night: why was Git not being taught to writers?

The Timelessness of Plain Text

Sociologist Kieran Healy wrote a guide for “plain people” on using plain text to produce serious work. Neither he nor I are the first non‑programmers to come to this realization, and hopefully not the last: plain text is the least glamorous, most important infrastructure upon which I build my work. I use the word infrastructure intentionally: plain text forms the substrate that underlies, connects, and outlives higher-level applications. For people like you and me---whether we are writers or not---choosing to work with plain text is a political choice about memory and power, not a mere nerdy preference about file types.

It has been over six years since I moved all my writing to plain text and Git. Before that, my life’s work sat in one folder, spread over a handful of .doc and .docx files. Now, plain text is the lifeblood of everything I write—a choice to live closer to the infrastructure layer where I retain power over time, interoperability, and preservation. The alternative is renting them from whoever owns the fancy app.

An extract of the writer's git commit history © Theena Kumaragurunathan

Why does this matter?

In my last two columns, I spoke about how Emacs interfaces with my work and how I use it for writing my next novel; put simply, why I choose to work in Emacs in the age of AI tools. None of my Emacs-fu would be possible without plain text and Git sitting underneath.

Most of us are told that platforms will take care of our work. “Save to cloud” is the default. Drafts live in Google Docs, outlines in Notion, images in someone else’s “Photos,” notes in an app that syncs through servers we don’t control. It feels safe because it is convenient. It feels like progress: softer interfaces, smarter features, less friction.

The cost is deliberately obfuscated.

You pay it when the app changes its business model and the export button slips behind a subscription.

You pay it when comments you believed were part of the record are actually trapped inside an interface that will be sunsetted in ten years.

You pay it when a future collaborator has to sign up for a dead service—if that’s even possible—just to open a reference document.

You pay it when your own older drafts become psychologically “far away,” not because you are ashamed of them, but because the path to them runs through expired logins and abandoned software.

A repository of written work hosted entirely on proprietary, cloud‑bound software is a studio that dies when the companies behind it do—or when they decide that their future no longer includes you.

If you want your studio to outlive you, you cannot outsource its memory to platforms that see your work as a data source, a training set, or a metric. You need materials and tools that privilege longevity over lock‑in.

The Studio as a Text Forest

Showing my writing studio built on Git

Plain text works because it is not sexy. It is not “disruptive.” Good. That is precisely why it is so important.

A text file is one of the most durable digital objects we have. It has remained readable, without elaborate translation, across decades of hardware, operating systems, and software ecosystems. It is trivial to convert into other formats: PDF, EPUB, HTML, printed book, subtitles. It compresses well. It plays well with search. It fails gracefully.
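The "plays well with search" point can be demonstrated with nothing more than standard POSIX tools. A minimal sketch (the file and directory names are hypothetical; the pandoc line is one common, optional conversion route):

```shell
# Create a small plain-text draft (hypothetical file) and search it.
mkdir -p drafts
printf 'Chapter 1\n\nThe rain began at dusk.\n' > drafts/chapter-01.txt

# Searching needs nothing more than grep:
grep -rn "rain" drafts/

# Conversion to other formats is typically a single command, e.g. with pandoc:
# pandoc drafts/chapter-01.txt -o chapter-01.html
```

No database, no proprietary viewer: any machine with a shell can read, search, and transform these files.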

When I began moving my practice into plain text, I was not thinking about posterity. I was thinking about control. I wanted to pick up my work on any machine and carry on. I wanted to stop worrying that an update to a writing app would quietly rearrange my files. I wanted my drafts to be mine, not licensed to me through someone else’s interface.

The result is a studio structured less like a warehouse of finished products and more like a forest of living documents.

Each project—work‑in‑progress novels, screenplays, this very series of essays, research trails—lives in its own directory inside a single mono‑repo for all my writing. Inside each directory are text files that do one thing each: a chapter, a scene, a note, a log of cuts and revisions. The structure is legible at a glance. You don’t need me to draw a diagram or sell you a course. Anyone who knows how to open a folder can navigate it.
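The layout described above can be sketched in a few commands. All project and file names here are invented for illustration; the point is one directory per project, one file per chapter, scene, or note:

```shell
# Hypothetical sketch of a writing mono-repo:
# one directory per project, one plain-text file per unit of work.
mkdir -p studio/novel-in-progress studio/essays studio/research
printf 'Chapter one.\n' > studio/novel-in-progress/ch01.txt
printf 'Scene notes.\n' > studio/novel-in-progress/notes.txt
printf 'Essay draft.\n' > studio/essays/plain-text.txt

# Legible at a glance, with no special software:
ls -R studio
```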

This is not nostalgia for a simpler computing era. It is about lowering the barrier for future humans—future me, future collaborators, future scholars, future strangers—to enter the work without first having to resurrect my software stack.

Plain text gives us a chance to build archives with the same openness as a box of annotated manuscripts, without the paper slowly turning to dust.

But text alone is not enough. A studio that outlives the writer needs a memory of how the work changed.

Version Control as Time Machine and Conversation

Linus Torvalds probably never intended Git for use by writers. And perhaps that is why I view it as almost possessing magical powers. You see, with Git I can talk to my future self, and my future self can talk to my past self.

In software, version control lets teams collaborate on code without stepping on each other’s toes. In a solo writing practice, it becomes something else: a time machine, a ledger of decisions, a slow, ongoing conversation between different iterations of the writer.

Every time I hit a significant point in a project—adding a chapter, making a painful cut, restructuring a section—I make a commit. I write a short message explaining what I did and why. Over months and years, these messages accumulate into a meta-narrative: not the story itself, but a veritable documentary of how my stories came to be.
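In Git terms, that habit is just `git add` followed by `git commit` with a message written for a future human reader. A minimal sketch (the repository name, file names, and messages are invented for illustration):

```shell
# One-time setup: a repository for the manuscript.
git init -q my-novel
git -C my-novel config user.name "Writer"
git -C my-novel config user.email "writer@example.com"

# A milestone: record it, and say why.
printf 'Chapter 3: the storm breaks.\n' > my-novel/chapter-03.txt
git -C my-novel add chapter-03.txt
git -C my-novel commit -q -m "Add chapter 3: first pass at the storm sequence"

# A painful cut, with the reasoning preserved in the message:
printf 'Chapter 3, tightened.\n' > my-novel/chapter-03.txt
git -C my-novel add chapter-03.txt
git -C my-novel commit -q -m "Cut the flashback in ch. 3; it stalled the pacing"

# Reading the log back is the conversation with your past self:
git -C my-novel log --oneline
```

The commit messages, not the diffs, are what make this a ledger of decisions rather than just a backup.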

When I open the log of a book or a long essay, I can scroll through those messages and see the ghost of my own thinking. I see the point where I abandoned a subplot, the week I rewrote an ending three times, the day I split a single swelling document into a modular structure that finally made sense. It is humbling and reassuring in equal measure: it shows me that good writing isn't a result of strokes of inspiration but sitting down consistently to wrangle my writing brain.

At some point, selected manuscripts from this mono‑repo will be made publicly available under a Creative Commons license.

When that happens, I will not just be publishing a final text. I will be publishing its making. A reader in another part of the world, years from now, will be able to trace how a scene evolved. A young writer will see that the book they admire was once a mess. A collaborator will be able to fork the repo, experiment with adaptations, translations, or critical editions, and perhaps send those changes back.

Version control turns my writing studio into something that can be forked, studied, and extended, not just consumed.

This stands in stark contrast to the way most digital platforms treat creative work today: as a stream of “content” to be scraped, remixed anonymously into generic output, and resurfaced as something merely “like” you. When your drafts live inside a proprietary system, you are not only dependent on that system to access them; you are also feeding an apparatus whose incentives diverge sharply from your own.

A Git repository of plain‑text work, mirrored in places you control, is not magically immune to scraping. Mine has been private from the moment I created it, and it will remain so until I am ready to open parts of it on an instance whose values align with my own. Even then, determined actors can copy anything that is accessible. The point is not perfect protection. The point is to design for humans first: to make the work legible and usable to future people on terms that you have thought about, instead of leaving everything at the mercy of opaque platforms.

Designing for the Long Afterlife

What does it mean, practically, to design a studio that outlives you?

It does not mean embalming your work in an imaginary final state. The texts we now call “classical” did not survive because someone froze them. They survived because people kept copying, translating, annotating, arguing with them. They survived because they were malleable, not because they were pristine.

If I want my work to have any chance at a similar afterlife—not in scale, but in spirit—I need to make it easy for future people to touch it.

For me, that means:

  • The core materials of my work live in plain text, organized in a directory structure that makes sense without me.
  • The history of that work is kept in Git, with commit messages written for humans, not machines.
  • The repositories I want to be accessible are published under licenses that explicitly permit study, remixing, and adaptation.
  • The studio is mirrored in more than one place, including at least one I self‑host, so its existence is not tied to a single company’s fortunes.
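The mirroring point in the last bullet is plain Git: a second remote that you push to alongside the first. A sketch using a local bare repository to stand in for a self-hosted server (all names are hypothetical):

```shell
# A bare repository standing in for a self-hosted mirror
# (in practice this would live on a machine you control).
git init -q --bare mirror.git

# The working studio repository, with one commit.
git init -q writing-repo
git -C writing-repo config user.name "Writer"
git -C writing-repo config user.email "writer@example.com"
printf 'Notes.\n' > writing-repo/notes.txt
git -C writing-repo add notes.txt
git -C writing-repo commit -q -m "Start the studio"

# Register the mirror and push; add as many remotes as you have mirrors.
git -C writing-repo remote add mirror ../mirror.git
git -C writing-repo push -q mirror HEAD
```

Because every clone carries the full history, each mirror is a complete copy of the studio, not just the latest files.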

Notice what this does not require. It does not forbid me from using GUI tools, publishing platforms, or even proprietary software where necessary. I am not pretending to live in a bunker with only a terminal and a text editor. I am saying that the source of truth for my work is kept somewhere that does not depend on the goodwill of companies for whom my creative life is just another asset.

This is not an overnight migration. It took me years to get from a single Dropbox folder of .docx files to my current setup. The important part was the direction of travel. Every project I started in plain text, every journal I kept as a folder of files instead of a locked‑down app, every book I moved into a Git repo rather than an opaque project bundle, was a step toward a studio that a future human could actually enter.

A Quiet Resistance to Big Tech's Power

We are entering an era where large AI systems are trained on whatever they can scrape. The default fate of most creative work is to be swallowed, blurred, and regurgitated as undifferentiated “content.” It becomes harder to tell where a particular voice begins and the training data ends. As more of the public web fills with machine‑generated sludge, it becomes harder for human readers to find specific, intentional work without passing through the filters of a few large intermediaries.

A self‑hosted, plain‑text, version‑controlled studio will not stop any of this by itself. But it is a form of quiet resistance. And at this point in our collective history, where the same infrastructures that mediate our creative lives are entangled with surveillance, automated propaganda, and the machinery of war, even small acts of refusal matter.

Moving a novel into plain text will not topple a platform. Hosting your own Git server will not end a conflict. But these choices shape who ultimately has their hands on the levers of our personal and collective memories.

CW or Morse code?

March 16, 2026 at 04:19

Unpacking the FAA's Boeing 787 Transponder Directive

As SARC Communicator editor I read a lot of blogs, club websites and other sources of amateur radio news. This one particularly caught my eye.

The source: https://www.paddleyourownkanoo.com/2026/03...

Why Linux Users Love to Hate Ubuntu

March 5, 2026 at 11:37

These days, it’s become fashionable to make fun of Ubuntu.

Whether it’s jokes about Snap packages or criticism of Canonical’s decisions, mocking Ubuntu often feels like the default attitude in parts of the Linux community.

To be fair, Canonical has made decisions over the years that have not always been well received, and some of the criticisms of Ubuntu and the direction it’s taken have their own merit. Yet, the derisive way Ubuntu is often talked about online isn’t particularly fair and, frankly, misses the point.

Ubuntu didn’t become the “face of Linux” by accident, nor did it gain its popularity and mass appeal (both on the desktop and servers) without real, solid reasons behind it. For many, it is in fact these same reasons that cause them to feel so passionately about the shift in direction since the early days.

Ubuntu’s speciality: Linux for Human Beings

A slightly customized Ubuntu desktop with the "About" panel of the system settings open
Ubuntu's simplicity and ease of use have always been its strengths

Ubuntu was once widely seen as the easiest Linux distro for beginners and a solid choice for both casual and “power” users alike. Many Linux enthusiasts (myself included) recommended it without hesitation because it was straightforward and opinionated in a way that just felt sensible for regular people. From the time you popped in a live CD, you got a sane, uncomplicated experience that felt like a breath of fresh air compared to Windows, and it made you feel like Linux could actually feel like home. All you had to do was install it, update it when necessary, and get on with your life.

The slogan “Linux for human beings” was more than a branding choice. Ubuntu embodied this motto in a very real way by reducing friction for everyday people and never being afraid to match form to function. It hasn’t always lived up to that purpose in ways that everyone agreed with, but the underlying mission has never truly changed, if we’re being fair.

Even with the shift towards a more developer-focused ecosystem, it has remained just as easy to download, install, learn the ropes (if you’re new) and get on with your life. Drivers are still a breeze to set up for most hardware. The default themes are still designed with a polished aesthetic taste in mind, and yes, installing apps easily and swiftly is still a major feature. Whether you’re deep in DevOps or a casual desktop user who wants a stable system that doesn’t demand constant babysitting, Ubuntu remains one of the most practical choices in the Linux world. In other words, the memes and tropes are loud and often funny, but reality still begs to differ because Ubuntu still delivers.

So why all the hate? What happened to our once beloved flagship among Linux distros?

From darling to punching bag (and why that happened)

Snapstore for Ubuntu

In order to understand why Ubuntu has been falling from its place of overwhelming popularity among Linux users, it’s important to remember that Ubuntu was not just a community effort, as is the case with many other distros. Ubuntu is both a community effort and a product of Canonical, and it is the latter first and foremost. While the community has some say in what happens through feedback, bug reports, feature requests, and other standard open-source infrastructure, Canonical ultimately makes the call for what defines Ubuntu as a whole.

Like any company, Canonical makes decisions based on factors that aren’t always known or agreed with by the broader public. While many of these decisions have ultimately worked out well, just about as many have proven not to work out in the long run. This fluctuation between success and, well, failure is a natural part of the lifecycle of any long-running product.

Ubuntu is no exception to this rule.

However, from the perspective of the community, many of these decisions started steering Ubuntu in directions that many users found puzzling and, at times, concerning. The backlash didn’t come suddenly, nor did it stem from a single decision. It came from a notable pattern: Ubuntu choosing its own path, even when the broader Linux community preferred a different direction. While this isn’t inherently “bad”, it’s unfortunately created friction within the community. To be fair, some of these decisions, such as introducing Amazon affiliate links during the Unity era, or the decision to keep the Snap Store closed on the backend, haven’t followed the expected ethos of the Linux/open-source world.

Furthermore, with the Linux desktop constantly fighting the challenges of “fragmentation”, the decisions to use snaps over Flatpaks for a containerised solution, AppArmor over SELinux, and so on, have brought on accusations of ‘NIH’ (Not Invented Here) syndrome. While Canonical has reversed course on some of its more controversial choices and attempted to show goodwill and engage more collaboratively, the resulting reputation and distrust are hard to shake. Yet, in spite of these difficulties, Ubuntu itself has largely settled into a steady state, even becoming, in the eyes of some, “boringly stable”.

But whether or not this accusation is fair, it’s a sign that Ubuntu is largely doing its job. A boring desktop is often a reliable desktop, and for most people, especially people trying to work, play, study, or just have a functional computer, reliability beats novelty any day.

Taking a path less travelled, and yet…

An Ubuntu desktop showing the GNOME dash and Ubuntu's panel interface

Ubuntu is often criticised for “driving in its own lane”, but that independence is also why it has remained so relevant and popular. Many of the distros that have taken its crown in the ranks of popularity and ease of use are still Ubuntu derivatives. Even if they look different on the surface, or choose not to include technologies that have become synonymous with Ubuntu (like snaps), they’re still Ubuntu at heart.

This isn’t a mistake. Ubuntu is a solid base for the likes of Mint, Zorin, AnduinOS, and others because it’s stable, widely supported, and consistent, even while Canonical is willing to take the heat for making strong platform decisions.

Like any other distro, Ubuntu is a reflection of choices and decisions, whether those are made by the community, upstream maintainers, or the entity curating and tying everything together. It represents the collective work of everyone who contributes, packages, and builds. As such, it’s not just “another Canonical product”, even if the influence of a product mindset is evident. That combination of open-source philosophy and community culture, alongside the stability and direction of a commercially stewarded platform, is what makes Ubuntu unique.

Ubuntu’s mission is simple: ship something cohesive, make it consistent, and keep it well supported over time. Sure, it’s not always going to please everyone, especially those of us who would prefer a more decentralised decision-making process or more community consensus. But if we’re being honest, it’s also why we so often assume Ubuntu when we’re writing tutorials and install instructions.

That’s no accident, either. Ubuntu may not be perfect (no distro is), but it makes enough of the right choices to remain a dependable foundation, not only for users, but for an entire ecosystem built on top of it.

More than a desktop OS

A Digital Ocean dashboard screenshot showing information for an Ubuntu-based droplet
Ubuntu is popular as a server OS, with many platforms offering pre-built images for various applications

Ubuntu and its ecosystem are often easy to reduce to the realm of “beginner distro,” but that view is outdated, and I’d even argue it’s never really been true. Granted, I personally started using Ubuntu because I wanted to see what the hype was about where the likes of Compiz, Beryl, and other flashy effects were concerned. Yet, I never even got to try any of the whiz-bang features until I was a few years into my Linux experience, due to hardware limitations. So what kept me here? It was recognising that Ubuntu is so much more than a desktop.

Ubuntu is a serious platform across the server space, cloud platforms, embedded environments, and infotainment systems, and it even lives on in the mobile space thanks to the efforts of UBPorts. Personally, I’ve never run a VPS on any other distro, not because I couldn’t, but because I haven’t found any reason to choose another. Ubuntu just works, and when your mission is to keep servers reliably online and updated for yourself and clients, that’s exactly what you need it to do.

Ironically, many of the same reasons Ubuntu gets flak on the desktop are the reasons it’s preferred in development and server spaces today. For instance, using a snap to install and configure a web service like Nextcloud is often far simpler than using a better-known solution like Docker. Some snaps don’t even require any configuration beyond setting up basic admin credentials and settings through a web-based UI.
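As a rough sketch of what that simplicity looks like in practice (the `nextcloud` snap is a real package, but treat this as an outline rather than a full deployment guide; the admin password here is obviously a placeholder):

```shell
# Install the Nextcloud snap; its services (web server, database,
# PHP, Redis) are bundled and started automatically by snapd.
sudo snap install nextcloud

# Optionally create the initial admin account from the CLI; the
# web-based first-run page offers the same step in a browser.
sudo nextcloud.manual-install admin 'change-this-password'

# From here on, everything is managed through the web UI.
```

Compare that with a typical Docker deployment, where you would wire up separate database and app containers, volumes, and networking yourself.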

Ubuntu’s LTS cadence is a lifeline for server stability. Once you’ve successfully deployed a complex server environment, it’s often preferable to keep it “as is” for as long as humanly possible, while still getting the security upgrades and minimal feature changes you need to keep it up to date. With an Ubuntu LTS, that kind of stability isn’t even a challenge to solve, because there again, it just works. You get the flexibility and familiarity of a Debian-based system, with the freshness and stability that Ubuntu brings to the table.
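On a stock Ubuntu install, you can check exactly where you stand in that support window with the distribution’s own tooling (assuming the standard `lsb_release` and `ubuntu-security-status` utilities are present, as they are on recent releases):

```shell
# Confirm which release you're running and whether it's an LTS.
lsb_release -d

# Summarise how long your installed packages receive security
# updates under the LTS (and Expanded Security Maintenance, if enabled).
ubuntu-security-status
```

That second command is what makes the "keep it as-is for years" approach feel safe: you can see, per package, what is still covered.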

Another important point is that a lot of production environments, containers, tutorials, and automation examples are written with Ubuntu (or Ubuntu-like) systems in mind. By matching what’s common in the field, you spend less time fighting your environment and more time understanding and using the tools you need to get actual work done.

Giving Snaps a fair shake

The Ubuntu App Center on the "Explore" tab
Ubuntu's App Center is the default "app store" for snaps

While “just getting work done” is one of Ubuntu’s hallmarks, that’s not typically what people think of when they think of snaps, and let’s be honest: snaps are a big part of why Ubuntu gets mocked. This criticism isn’t completely imaginary either. While the tech has come a long way, snaps still have some real-world challenges. But the same can be said for just about any containerised packaging system. For the sake of fairness, let’s get some of the remaining issues out of the way.

Theming inconsistencies still persist, especially if you are using an app built with a toolkit that your desktop isn’t built on. Snaps still take significantly more storage space than “native” packages, because they often depend on other “foundational” snaps. Also, there’s no open or decentralised software store, so we have to trust Canonical’s stewardship. These are real trade-offs, and it’s only fair to acknowledge them.

Usually, the discourse stops right here, as if “Snaps exist” is the same thing as “Ubuntu is unusable.” If I had a dollar for every time I’ve seen someone say “First thing I do is remove snap from the system”, I could end world hunger overnight. Yet, realistically, most people don’t choose an OS to make a statement about packaging decisions. They just want to be able to install what they need, do it quickly, keep it updated, and avoid breaking things in the process. Whether some in the community like them or not, snaps deliver on this promise.

By providing a consistent delivery mechanism for newer app releases, a simple rollback method, and a clean way to clear app settings and data once an app is removed, snaps reduce dependency stress across different Ubuntu releases. For most types of software, they simplify maintenance for developers and users alike. Plus, many of the issues that led to snaps being so heavily disparaged, such as slow startup times and poor desktop integration, have been massively improved since their introduction, and continue to improve with time.
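Those rollback and cleanup mechanisms are built into the `snap` CLI itself. A minimal sketch, using the Firefox snap purely as an illustrative example:

```shell
# List all installed (and cached previous) revisions of a snap.
snap list --all firefox

# Roll back to the previously installed revision after a bad update.
sudo snap revert firefox

# Removing a snap also removes its data; --purge skips the
# automatic snapshot that snapd would otherwise keep for restore.
sudo snap remove --purge firefox
```

Getting an equivalent "undo this update" workflow out of traditional deb packages usually means hunting down old package versions by hand.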

Besides, even if you absolutely detest snap as a technology, Ubuntu is still flexible enough that you can make your own choices about where you get your apps and what package distribution formats you prefer. Case in point: most of the apps I use on my Ubuntu system today are Flatpaks and native applications, not because I avoid snaps (I actually use quite a few), but simply because that’s how the latest versions of the apps I need are currently packaged.
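Setting up that kind of mixed environment is straightforward. A hedged sketch of adding Flatpak support alongside snaps and debs (GIMP is just an example app; any Flathub ID works the same way):

```shell
# Install the Flatpak runtime from Ubuntu's own repositories.
sudo apt install flatpak

# Add Flathub, the de-facto community Flatpak repository.
flatpak remote-add --if-not-exists flathub \
    https://flathub.org/repo/flathub.flatpakrepo

# Install an app from Flathub; it coexists with snaps and debs.
flatpak install flathub org.gimp.GIMP
```

Nothing about Ubuntu prevents this; snaps are the default, not a lock-in.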

Why you can safely ignore the noise

Many of the arguments against Ubuntu these days are essentially identity- or philosophy-based, not practical positions. For most people, a better question is simply: what do you need your computer to do?

Ubuntu is still a strong choice if you’re new to Linux and want something straightforward, different from Windows and macOS, but familiar enough to not be a complete shock to the system. If you’re a developer seeking the friendly environment of a Linux-based workflow, choosing Ubuntu means you’ll have a system that matches the majority of guides and tutorials you’ll encounter online. The same is true if you work in DevOps or system administration.

The point is, whether you’re a casual desktop user or a seasoned denizen of SSH terminals, Ubuntu still meets the mark, offering stability, broad app availability, and the ability to Google a problem and find answers quickly.

Why it’s never going to be for everyone

A screenshot of the GNOME dash in Ubuntu showing multiple applications running on a virtual desktop
Not everyone likes Ubuntu, and that's perfectly okay

It goes without saying, but Ubuntu can’t be everyone’s cup of tea either, and even some long-time users might find it no longer fits their needs. For instance, if you prefer ultra-minimal systems that let you build everything your own way, or even if you just want to avoid Canonical’s decisions on principle, Ubuntu won’t fit the bill, and that’s perfectly okay.

With the move to deliver more core components as snaps, it’s also understandable that some of us might be forced to choose other distros to avoid this fundamental change in direction.

What really matters here is that none of this is a matter of moral judgement, though I’m sure some folks would argue otherwise (and hey, I respect it, even if I disagree). At the end of the day, it’s all about freedom and finding the right tools to get the job done, whatever that means for you.

Final thoughts

Long story short, Ubuntu often gets the most backlash because it’s one of the most visible and durable targets. It’s a distro many of us have long outgrown, but it’s also the distro where we “cut our teeth” on everything Linux has to offer. It’s no surprise then, that it’s the distro many people now love to dunk on and poke fun at.

Love it or hate it though, Ubuntu remains. It’s still quietly doing what many people actually need, still serving its age-old role as many folks’ first foray into Linux, still pushing innovation and momentum across spaces where we need it most, and still helping the collective to gain market share. The work Ubuntu does behind the scenes may not always be exciting, but it’s undoubtedly invaluable. It doesn’t have to be perfect, and sure, it would be nice to see it reclaim its former glory, even just for a bit of nostalgia.

But Ubuntu has earned its place among the Linux giants, and continues to prove itself every day. So maybe, just maybe, it doesn’t deserve our hate.
