Those who don’t have the time or appetite to tweak/modify/troubleshoot their computers: What is your setup for a reliable and low-maintenance system?
Context:
I switched to Linux a couple of years ago (Debian 11/12). It took me a little while to learn new software and get things set up how I wanted, which I did and was fine.
I’ve had to replace my laptop though, and install a distro (Fedora 41) with a newer kernel to make it work, but even so I have had to fix a number of issues. This has also coincided with me having a lot less free time and being less interested in crafting my system and more interested in using it efficiently for tasks and creativity. I believe Debian 13 will have a new enough kernel to support my hardware out of the box, and although it will still be a hassle to reinstall my OS again, I like the idea of getting it over with, starting again with something thoroughly tested and then not having to really touch anything for a couple of years. I don’t need the latest software at all times.
I know there are others here who have similar priorities, whether due to time constraints, age, etc.
Do you have any other recommendations?
Doesn’t ucore also have to restart to apply updates?
Not super ideal for a server as far as maintenance and uptime to have unexpected, frequent restarts as opposed to in-place updates - unless one’s startup is completely automated and the drives are decrypted with an on-device keyfile, though that probably fits some threat models for security.
The desktop versions are great!
Run k3s on top and run your stateless services on a lightweight kubernetes, then you won’t care you have to reboot your hosts to apply updates?
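For reference, a minimal sketch of that idea - the install one-liner is the documented k3s installer, while the deployment name and image below are only placeholders:

```
# install single-node k3s with the upstream install script
curl -sfL https://get.k3s.io | sh -

# run a stateless service as a Deployment; after a host reboot
# k3s simply brings it back up without manual intervention
sudo k3s kubectl create deployment whoami --image=docker.io/traefik/whoami
sudo k3s kubectl expose deployment whoami --port=80 --type=NodePort
```

Because the service keeps no state on the host, rebooting for an OS update just means a few seconds of rescheduling.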
deleted by creator
Not super ideal for a server as far as maintenance and uptime to have unexpected, frequent restarts
This is such a weird take given that 99.9% of people here are just running this on their home servers, which aren’t dictated by an SLA, so it’s not like people need to worry about reboots. Just reboot once a month unless there’s some odd CVE you need to hit sooner rather than later.
So why would somebody run that on their homeserver compared to tried and true staples with tons of documentation? 🍿
You’re right, they should be running Windows Server as God intended 😆
It’s just Fedora CoreOS with some small quality-of-life packages added to the build.
There’s tons of documentation for CoreOS and it’s been around for more than a decade.
If you’re running a container workload, it can’t be beat in my opinion. All the security and configuration issues are handled for you, which is especially ideal for a home user who is generally not a security expert.
That is very fair!!
But on the other hand, 99.9% of users don’t read all of the change notes for their packages and don’t have notifications for CVEs. In that case, in my opinion just doing updates as they come would be easier and safer.
They won’t apply unexpectedly, so you can reboot at a time that suits. Unless there’s a specific security risk there’s no need to apply them frequently. Total downtime is the length of a restart, which is also nice and easy.
It won’t fit every use-case, but if you’re looking for a zero-maintenance containerized-workload option, it can’t be beat.
Running exotic niche server images out in the wild…
It’s just Fedora CoreOS with some QoL packages added at build time. Not niche at all. The very minor changes made are all transparent on GitHub.
Choose CoreOS if you prefer, it’s equally zero maintenance.
Yeah, sure. I was running Bluefin-DX. One day the image maintainers decided to replace something and things broke. UBlue is an amazing project and the team is trying hard, but it’s definitely not zero maintenance. I fear they are chasing so many uBlue flavours - recently an LTS one based on CoreOS - and spreading themselves thin.
If you depend on third party modules you’ll end up with third party maintenance - we didn’t purposely decide to break this; we don’t work at Nvidia.
Jorge, OP asked about “not having to really touch anything for a couple of years”. I am just sharing my experience. Big fan of containers and really appreciate your efforts of pulling containers tech into Linux desktop. Thank you!
I don’t understand the answer though. Maybe I am missing something here. There’s an official Bluefin-DX-Nvidia ISO. nvidia-container-toolkit was part of that ISO.
On a separate note, I liked the idea of the GTS edition. A few weeks ago the ISO became unavailable pending some fix. At the same time I see loads of buzz about the new LTS edition, which is still in alpha though. I feel confused.
I don’t understand the answer though.
The answer is: if you’re depending on software that is closed and out of your control (aka you have an Nvidia card), then you should have support expectations around that hardware and Linux.
There are no GTS ISOs because we don’t have a reliable way to make ISOs (the ones we have now are workarounds) but that should be finished soon.
Thanks for clarifying, Jorge. I wish I lived in a perfect world where all hardware and software follow FOSS principles. Until then I will have to rely on the other distros that embrace an imperfect reality. I cannot reconcile how Bluefin targets developers when Nvidia, unfortunately, is not something many of those developers can afford to ignore. Good luck with your project!
I cannot reconcile
It’s like a saving throw in a video game, most times you can make it, but every once in a while you don’t lol.
🤷 I’ve been running Aurora and uCore for over a year and have yet to do any maintenance.
You can roll back to the previous working build by simply restarting, it’s pretty much the easiest fix ever and still zero maintenance (since you didn’t have to reconfigure or troubleshoot anything, just restart).
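For anyone curious, a sketch of the manual equivalent, assuming an rpm-ostree based image (as the Fedora Atomic family underneath these uBlue images is):

```
# check which deployments exist, flag the previous one as default, then reboot
rpm-ostree status
sudo rpm-ostree rollback
sudo systemctl reboot
```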
This is the way. The uBlue derivatives benefit from the most shared knowledge and problem-solving skills being delivered directly to users.
Between that, and using a declarative distrobox config, I get an actually reliable system with packages from any distro I want.
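For illustration, roughly what a declarative distrobox setup can look like - the box name, image, and package list here are just placeholders:

```
# write a manifest describing the container (distrobox "assemble" format)
cat > ~/distrobox.ini <<'EOF'
[devbox]
image=registry.fedoraproject.org/fedora-toolbox:41
additional_packages="git neovim"
EOF

# create (or recreate) the box from the manifest
distrobox assemble create --file ~/distrobox.ini
```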
You’re not going to believe this, but I’ve found Arch is it. My desktop install was in December 2018: Sway with Gnome apps. Save for Gnome rolling dice on every major update, it’s been perfectly boring and dependable.
There are two camps of Arch users:
- Use it despite it breaking on every update, because of AUR and other benefits
- What? Arch breaks?
Debian. Unattended upgrades. Maybe flatpaks if your (GUI) stuff isn’t on Debian.
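For completeness, enabling that on Debian is a two-command affair using the stock package:

```
# install and enable unattended upgrades
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# afterwards /etc/apt/apt.conf.d/20auto-upgrades should contain roughly:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```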
fedora has been this for me. maybe tweaking every now and then to fix whatever edge cases I’ve run into, but it’s the least painful distro I’ve used so far
NixOS?
in my app the post starts with this sentence:
Those who don’t have the time or appetite to tweak/modify/troubleshoot their computers […]
Yeah just use the default setup. Some minor tweaks at first, then it stays the same forever.
A minor tweak on another system, like an obscure driver, can be a huge headache on nix
NixOS was troubleshoot central for me. Not all programs behaved as expected with Nix’s unique design.
Once you get it set up tho, it works the same forever.
Ubuntu. Or, get a Mac - which is even more “boring”.
As someone who just had to bandaid an unexplained battery draw on his wife’s MacBook - no, macOS no longer “just works”. Apple buries some of the most basic settings inside a command-line-only tool called pmset, and even then those can be arbitrarily overridden by other processes. And even after a fresh reinstall and new battery, it still drains the battery faster in hibernation mode than my Thinkpad T14 G1 running LMDE does while sleeping. Yeah, that was a fun discovery.
That Thinkpad is by far one of my most dependable machines.
If you have battery drain, make sure you’ve disabled the option to regularly wake up and do some background processing (check for emails, sync photos, etc.). Settings → Battery → Options… → Wake for network access. (Or search for “Power Nap” in the System Settings dialog.)
No need to use pmset for that.
So here’s the thing - if you can think of it, I’ve already tried it 😅 I spent a week and a half sifting through countless forum posts on Apple’s own support center, Macrumors, reddit, and a host of other forums.
The “Wake for network access” setting was the first thing I disabled after I wiped and reinstalled the OS. Among a number of other settings, including “Power Nap”. Still got the fucking “EC.DarkPME (Maintenance)” process firing off every ~45 seconds, no matter what I did, causing excessive insomnia and draining the battery within 12 hours.
What I ended up doing was using a little tool called “FluTooth” to automatically disable wifi/Bluetooth on sleep (the built-in OS settings did fuck-all), set hibernatemode to 25, and made a few other tweaks with pmset that currently escape me (edit: disabled networkoversleep, womp, ttyskeepawake, and powernap - which was still set to 1 even though the setting in System Settings was disabled 🤨 - and a couple others I can’t remember as it’s not here in front of me).
I put several full charge cycles on the brand new battery before it finally calmed the fuck down.
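For anyone hitting the same wall, those pmset tweaks look roughly like this on macOS (the values are just what reportedly worked here, not general recommendations):

```
# apply to all power sources (-a); run as root
sudo pmset -a hibernatemode 25
sudo pmset -a networkoversleep 0
sudo pmset -a womp 0
sudo pmset -a ttyskeepawake 0
sudo pmset -a powernap 0

# verify the current values
pmset -g
```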
I feel you. I still use an Intel MacBook with tweaks I cannot remember, plus 3rd-party utils like Turbo Boost Switcher. That experience alone has kept me from upgrading to newer models.
In retrospect my PowerBook G4 (Ti) and OS 9 were peak computing.
My Thinkpad T14 running Linux Mint (LMDE) gets better battery life on “Suspend” than that damn MBP does when hibernated. It’s the 2017 A1706, too - out of ALL the variants it had to be that one 😂
Oh no. Maybe some Incense to cleanse the demons? (⊙_⊙)
Edit: I just remembered I had a similar problem after changing the battery on my 2015. This thread at Macrumors helped me tremendously, especially the last entry (did it on three separate days before it had an effect), but I’m sure you already tried all of that. Just on the off chance.
these Intel Macs were such a bad experience.
That thread was a godsend. Turning off tcpkeepalive was the other one that I couldn’t remember, but that seemed to help out as well.
My wife has had multiple MacBooks over the years (I set up her old 2009-era A1278 with Linux Mint for the kids to do homework), and after I “fixed” it and talked about the longer wake-up process, she told me that’s what she was used to already and the “super fast wake up” was a very new thing for her when she bought it. So no complaints from her, and the battery performs better. Win/win.
The fact that you’re even saying such things as “time constraints” or “to learn new software” suggests an attitude to computing shared by about 0.01% of the population. It cannot be stressed enough to the (sadly shrinking) bubble that frequents this community: the vast majority of people in the world have never touched a laptop, let alone a desktop computer. Literally everything now happens on mobile, where FOSS is vanishingly insignificant, and soon AI is going to add a whole new layer of dystopia. But that is slightly offtopic.
It’s a good question IMO. Choosing software freedom - to the small extent that you still can - should not just be about the freedom to tinker, it should also just be easy.
The answer is Ubuntu or Mint or Fedora.
fedora with gnome for me.
Get a big mainstream distro and stop tinkering with it.
i want to try another distro than ubuntu, but the damn thing isn’t giving me a single excuse to format my system. it doesn’t break if you don’t fuck with it.
This really is the answer. The more services you add, the more of your attention they will require. Granted, for most services already integrated into the distro’s repo, the added admin overhead will likely be minimal, but it can add up. That’s not to say the admin overhead can’t be addressed. That’s why scripting and crons, among some other utilities, exist!
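As a hypothetical example of keeping that overhead scripted rather than manual (the script path and schedule below are made up):

```
# edit the root crontab with `sudo crontab -e` and add something like:
# run a maintenance script every Monday at 04:00, logging its output
0 4 * * 1  /usr/local/bin/weekly-maintenance.sh >> /var/log/weekly-maintenance.log 2>&1
```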
i think it’s more about modifying the system behavior, esp on desktop OSes. i have many local services running on my server, and if set up right, it’s pretty much no maintenance at all.
Such a bad comment, what does tinkering mean? Not use any software besides the default one? So only browsing and text apps? facepalm
Tinkering, in my personal definition, would mean installing third-party repositories for the package manager (or something like the AUR on Arch) or making configuration changes at the system level… Staying away as much as possible from the root user (including su/sudo) is generally good advice, I would say.
Keeping away from sudo, got it.
If you want to take that from my text then feel free.
Linux Mint Debian Edition (LMDE) is my pick.
I’ve got two study laptops and apart from Tailscale giving me some grief very recently with DNS resolution, I literally haven’t had any problems with either machine. Both have been going for 1.5 years.
I like the LMDE route for the DE already having pretty decent defaults and not requiring much tweaking from the get-go. Xfce (as it ships by default in Debian) absolutely works, but I end up spending an hour theming it and adding panel applets and rearranging everything so that it… ends up looking similar to Cinnamon anyway, because default Xfce looks horrible in my opinion
every system is only as stable as the user. anybody can break Debian or any other “stable” distro of renown the second they go tinkering, adding PPAs or anything else
I am a longtime fan of Debian Stable, for exactly that reason. I installed the XFCE version using the custom installer about 8 years ago and have had very few issues.
Initially my GPU wasn’t well supported so I had to use the installer from Nvidia, forcing me to manually reinstall the driver after every kernel update. That issue has been fixed in recent years so now I can just use the driver from the Debian repos.
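For reference, on current Debian stable the repo driver is a single metapackage once the non-free components are enabled in your apt sources:

```
# requires non-free (and non-free-firmware on Debian 12+) enabled in sources
sudo apt update
sudo apt install nvidia-driver
```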
I installed the unattended-upgrades package about 2 years ago and it has been smooth sailing since
Use Timeshift, it saved my ass like 3 times
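A quick sketch of how that looks from the command line (Timeshift also has a GUI; the snapshot comment is just an example):

```
# take a manual snapshot before risky changes
sudo timeshift --create --comments "before upgrade"

# list snapshots and restore one if something breaks
sudo timeshift --list
sudo timeshift --restore
```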
PopOS is very stable as a desktop. It also keeps up to date with packages better than base Ubuntu in my opinion.
Debian stable is as hassle-free as you’ll get.
It sounds like your issue is more with having to migrate to a new laptop. Firstly - buy laptops that are more linux compatible and you’ll have fewer niggles with things like sound, suspend and drivers.
Secondly - use “dpkg --get-selections” and “--set-selections” to transfer your list of installed software across to your new laptop. Combined with transferring your /home directory, user migration can be sped up.
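Concretely, that migration looks something like this (the file name is arbitrary):

```
# on the old laptop: dump the list of installed packages
dpkg --get-selections > my-packages.txt

# on the new laptop, after copying the file over:
sudo dpkg --set-selections < my-packages.txt
sudo apt-get dselect-upgrade
```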
Firstly - buy laptops that are more linux compatible
This is the thing: The laptop is from Starlabs, supposedly made for Linux…