Yes, in principle what you've said about the Unix approach here is correct: if you upgrade one half of a system and not the other half, and now they're talking different protocols, that might not work.
But keep in mind that if your system can't cope with this, what you've done there is engineer in unreliability: you've made a system that's deliberately not very robust. Unless it's very, very tightly integrated (e.g. two sub-routines inside the same running program), the cost savings had better be _enormous_, or what you're doing is just amplifying a problem and handing it to somebody else, like "solving" a city's waste problem by just dumping all the raw sewage into a neighbouring city's rivers.
Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.
But (present year argument) this is 2018. Everybody's main file systems are journalled. Both systems _can_ write a record to the journal that will cause the blocks to be freed on replay, and then remove that journal entry if the blocks actually get freed up before then. The difference is that Windows doesn't bother doing this.
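For anyone who hasn't watched it happen, here's a minimal C sketch of the Unix semantics in question (the path is just an example): the name goes away immediately, but the data and its blocks stick around until the last descriptor is closed.

```c
/* Minimal sketch of the Unix semantics under discussion: unlink(2)
 * removes the name immediately, but the data stays readable, and the
 * blocks stay allocated, until the last open descriptor is closed. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "/tmp/unlink_demo";
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "still here\n", 11);

    unlink(path);                       /* the name is gone... */

    struct stat st;
    fstat(fd, &st);
    printf("link count after unlink: %ld\n", (long)st.st_nlink);  /* prints 0 */

    char buf[32] = {0};
    pread(fd, buf, sizeof buf - 1, 0);  /* ...but the data isn't */
    printf("read back: %s", buf);

    close(fd);  /* only now are the blocks actually freed */
    return 0;
}
```

Run it and the link count reads 0 while the contents are still perfectly readable.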
> Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.
Unix semantics were IIRC in place as far back as v7 (1979), possibly earlier - granted, a PDP disk from that time was bigger (~10-100MB) than the corresponding PC disk from a few years later (~1-10MB), but an appeal to technological progress in this particular example case is a moot point.
> But keep in mind that if your system can't cope with this what you've done there is engineer in unreliability
It's weird that you're blaming my operating system's problems on me. "My system" is something a ton of other people wrote, and this is the case for pretty much every user of every OS. I'm not engineering anything into (or out of) my system so I don't get the "you've made a system that [basically, sucks]" comments.
> [other arguments]
I wasn't trying to go down this rabbit hole of Linux-bashing (I was just trying to present it as objective a flexibility-vs.-reliability trade-off as I could), but given the barrage of comments I've been receiving: I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run. I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents. All I know is it happens and it's less stable than what you (or I) would hope or expect. Yet from everyone's comments here I'm guessing I must be the only one who encounters this. Sad for me, but I'm happy for you guys, I guess.
> ...but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot...
Ubuntu developer here. This doesn't happen to me in practice. Most updates don't cause system instability. I rarely reboot.
Firefox is the most noticeable thing. After updating Firefox (usually it's a security update), Firefox often starts misbehaving until restarted. But I am very rarely forced to restart the login session or the entire system. I should, to get updates to actually take effect, but as a developer I'm usually aware of the specifics, so I can afford to be more selective than the average user.
Are you sure you aren't comparing apples to oranges here, and are actually complaining about the stability of updates while running the development release, which involves ABIs changing and so forth?
Development release? Currently I'm on 16.04, and I've never been on a development release of anything on Ubuntu. I'm just describing the behavior I usually see in practice (which it seems someone attributed to "D-BUS" [1]). Obviously the logon session doesn't get messed up if all I'm updating is something irrelevant like Firefox, but if I update stuff that would actually affect system components then there's a good chance I'll have to reboot after the update or I'll start seeing weird behavior. This has generally been my experience ever since... any Ubuntu version, really. It's almost ironic that the most robust thing to update in practice is the OS kernel.
All I can say is that, based on everything I know, that's not the current experience of the majority of users, so it doesn't seem fair for you to generalize this to some architectural problem. I don't know if you unknowingly have some edge case setup or what.
Are you reading the same comments I'm writing? I was literally point-by-point saying the opposite of what you seem to have read me writing:
> You: All I can say is that, based on everything I know, that's not the current experience of the majority of users
>> Me: Yet from everyone's comments here I'm guessing I must be the only one who encounters this.
???
> You: It doesn't seem fair for you to generalize this to some architectural problem.
>> Me: I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents.
I have been running Ubuntu since 2004, and except for Firefox, which tends to destabilize on update, I've observed this twice in 14 years. I update weekly or more often, and reboot every few months (usually on a kernel update I want to take hold).
Maybe it's because you update often, so there are fewer changes in between? I update far less frequently; it's not my primary OS, so it's not like I'm even on it every day (or week). I use it whenever I need to.
Have you filed a bug report, or can you point to one? All I have seen is a lot of handwaving about "IPC mismatch"; things will not fix themselves unless people actively report and help fix issues.
Here, on Arch, Firefox updates don't cause me any grief. The only time I've ever needed to reboot is after a kernel or DKMS module update.
For systemd updates, I can just reload it. For core components like bash, and for major DE updates, I can just lazily use loginctl to terminate all of my sessions and start fresh.
I'm not sure why Firefox would be causing instability until you restart (reboot?), though.
Firefox with e10s enabled (all current releases) detects version differences between the parent process and a child process started at a later point in time. Until recently it aborted the entire browser when that happened. I think now they have some logic that tries to keep running with the already open processes and abandon the incompatible child.
Ideally they'd just prefork a template process for children and open fds for everything they need; that way such detection wouldn't be necessary.
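Something like this hypothetical sketch of the prefork/"zygote" idea (made-up names, not Firefox's actual implementation): fork a template early, while the binary on disk still matches the running version, and fork all later children from it.

```c
/* Hypothetical prefork/"zygote" sketch. The template is forked at
 * startup, while the binary on disk still matches the running version;
 * children forked from it later inherit the pre-update image and fds,
 * so no parent/child version mismatch can occur. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>

static void zygote_loop(int ctrl_fd) {
    char cmd;
    /* Wait for "spawn a child" requests from the parent. */
    while (read(ctrl_fd, &cmd, 1) == 1 && cmd == 'S') {
        pid_t child = fork();
        if (child == 0) {
            /* Same executable image and open fds as the template,
             * regardless of what's on disk by now. */
            printf("child %d running pre-update image\n", (int)getpid());
            _exit(0);
        }
        waitpid(child, NULL, 0);
    }
    _exit(0);
}

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) { perror("socketpair"); return 1; }

    pid_t zygote = fork();              /* fork the template early */
    if (zygote == 0) { close(sv[0]); zygote_loop(sv[1]); }
    close(sv[1]);

    /* ...time passes; a package update may replace the binary on disk... */
    write(sv[0], "S", 1);               /* ask the template for a fresh child */
    write(sv[0], "S", 1);
    close(sv[0]);
    waitpid(zygote, NULL, 0);
    return 0;
}
```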
Or you could explicitly design the package so that your pre/postinstall scripts ensure that you install to a separate directory, and rename-replace the old directory, so you can’t get half-finished updates.
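In C terms, the usual way to get that atomicity is the "current" symlink trick, since rename(2) can't replace a non-empty directory. A sketch with made-up paths:

```c
/* Sketch of the rename-replace idea (paths are made up). A directory
 * can't be atomically renamed over a non-empty one, so the usual trick
 * is a "current" symlink swapped with rename(2), which is atomic:
 * readers see the old tree or the new one, never half of each. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Assume the postinstall script already unpacked the new version: */
    const char *new_release = "/opt/myapp/releases/2.1";

    unlink("/opt/myapp/current.tmp");   /* clear any leftover from a failed run */
    if (symlink(new_release, "/opt/myapp/current.tmp") != 0) {
        perror("symlink"); return 1;
    }
    /* Atomic swap: /opt/myapp/current now points at 2.1. */
    if (rename("/opt/myapp/current.tmp", "/opt/myapp/current") != 0) {
        perror("rename"); return 1;
    }
    return 0;
}
```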
Regarding the rest, if your code has incompatible API breaks between two patch or minor version changes, you’ll need to rethink your development model.
Ubuntu user here. Ubuntu is less stable than my second girlfriend, and she tried to stab me once.
Lately, every time my co-worker has updated Ubuntu, it has broken his system. He's like my canary in the coalmine. I wait for his system to not fall over before I will update mine.
Maybe it's time to consider an OS with better maintainers, like Debian. I've had fewer issues on unstable/sid over the past few years than I had on the last Ubuntu LTS release (which was what spurred me to Debian). On my other machine, Debian Stretch (and Jessie prior to upgrading) has treated me well; there just isn't breakage when upgrading to the latest stable release or when applying security patches.
I chose Ubuntu because it was more widely supported by 3rd party software vendors and support companies than Debian. But this doesn't matter, because I still ran into hardware and software compatibility issues, and Ubuntu is more up to date than Debian, meaning Debian would have been even more broken by default.
I don't know of a single Linux distro that works out of the box with my laptops. Maybe if I bought a $2,000 laptop that shipped with Linux it would work. It would still be a pain in the ass to update, though.
I kind of hate Linux as a desktop now. I've been using it as such for 14 years, and it's only gotten worse.
I had very similar reasons for starting with Ubuntu, but when it came right down to it, all the software that I thought would only work on Ubuntu works just fine on Debian.
Hardware support wise, newer kernels generally come to Debian sooner too, as the latest stable kernel generally gets into Sid a week or so after release, then gets added to backports for Debian Stable after a few weeks. Currently you can nab 4.14 from backports on Debian Stable, and 4.15 should be coming down the pike shortly (seeing as it's just a few days old).
Depends on what you need, of course; a lot of people buy the newest and fastest but don't need it. Most (90%+) of my dev work works fine on an X220, which I can pick up for $80, has stellar Linux support, and still gets really good (14+ hour) battery life. Depends on the use case, of course, but when I see what most people around me do on their $2k+ laptops, they could have saved most of that. Also, Ubuntu Unity is just not very good, but Ubuntu or Debian with i3 is perfect. Cannot imagine a better desktop.
This is unfortunate. I've been using Linux, and suffering (for lack of a better word) through its warts, since 2002.
There was a period between 2013-2016 where Linux was great as my main operating system. It was more stable than OS X and was much better for development.
Is hardware support your main issue with desktop Linux?
No, it's mostly software, but hardware is a big problem.
The software (especially Ubuntu's desktop) is lacking basic features from even 10 years ago. Maybe there's a way to get it to do what it used to do, but I can't figure it out, and I'm not going to research for two days to figure it out. I just live with a lack of functionality until I can replace this thing.
Not only that, but things are more complicated, with more subsystems applying more constraints (in the name of compatibility, or security, or whatever) that I never asked for and that constantly get in my way. Just trying to control sound output and volume gives me headaches. Trying to get a new piece of software to work requires working out why some shitty subsystem is not letting the software work, even though it is installed correctly. Or whining about security problems. You installed the software, Ubuntu, don't fucking whine to me that there's a SELinux violation when I open my browser!
Hardware is a big problem because modern software requires more and more memory and compute cycles. All of my old, stable laptops can no longer perform the web browsing workloads they used to. Browsers just crash from lack of memory, or churn from too much processing. If you don't use modern browsers, pages just won't load.
Aside from the computing power issue, drivers are garbage. Ignoring the fact that some installers simply don't support the most modern hard disks, and UEFI stupidity, I can't get video to work half the time. When I can, there are artifacts everywhere, and I have to research for three days straight to decipher what mystical combination of graphics driver and firmware and display server and display configuration will give me working graphics. Virtually every new laptop for several years uses hybrid graphics, and you can't opt-out or you get artifacts or crashing. Even my wifi card causes corruption and system crashing, which I can barely control if I turn off all the features of the driver and set it to the lowest speed! Wifi!!! How do you screw that up, seriously?
Modern Linux is just a pain in the ass and I'm way too old to spend my life trying to make it work.
> I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot
Which is almost true. In fact, you were unable to use programs whose runtime dependencies changed or that conflicted with current user sessions, init processes, or kernel modules. You can often still use other programs, just not ones that touched the ones you upgraded in any way, for one reason or another.
If you have to upgrade, say, a command line utility, that almost always doesn't require rebooting. If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.
Then there are system init processes, kernel modules, firmware, system daemons and the like. You can reload those without rebooting, but it's certainly not easy - you will probably have to change to runlevel 1, which kills almost everything running. You can reload the kernel without rebooting, too - very handy for live patching - but really, why the hell would anyone want to do this unless they were afraid to power off their system?
So, technically, rebooting is not required to update in many cases in Linux, just like in Windows. But it is definitely the simplest and most reliable way.
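One way to see this concretely, as a Linux-specific sketch in C: after an update, a long-running process still maps the old, now-unlinked files, and the kernel tags those entries "(deleted)" in /proc/<pid>/maps, so you can tell which processes are still running on pre-update code.

```c
/* Linux-specific sketch: list mappings of files that have been
 * replaced on disk, which the kernel marks "(deleted)".
 * Usage: ./stale_maps <pid> */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/maps", argv[1]);
    FILE *maps = fopen(path, "r");
    if (!maps) { perror("fopen"); return 1; }

    char line[4096];
    while (fgets(line, sizeof line, maps))
        if (strstr(line, "(deleted)"))  /* stale mapping: file replaced on disk */
            fputs(line, stdout);

    fclose(maps);
    return 0;
}
```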
> If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.
Thanks, I'm glad at least one person agrees I'm not hallucinating. The vast majority of people here are telling me I'm basically the only one this happens to.
I've long since abandoned "bare metal" Linux in favor of VirtualBox and Windows for my home machine and VirtualBox and macOS on my laptop.
Mondays I merge last week's snapshot, take a new one, and run all my updates. Then I do my dev work in my VM. Before I head out on trips, I just ship the entire machine over the network to my MacBook Pro.
This is mostly because... have you literally ever tried to install any Linux on laptops? It's always Russian roulette with those $+#&ing Broadcom wireless chipsets. >.<
So you're not hallucinating. Linux as a desktop/laptop had a sweet spot from like... 2012-ish till 2016. Then 802.11ac went mainstream, so Broadcom released new chipsets, graphics cards had a whole thing with new drivers, and Ubuntu's packagers (the people) lost their minds or something.
Nothing feels right, at least in Ubuntu/Arch land right now.
> I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run.
This is less of an issue with Linux, per se, and more to do with proprietary video drivers.
I have multiple systems in my home with various GPUs. The systems running Intel and AMD GPUs with open source drivers don't have this problem. The two desktops with Nvidia GPUs have this problem whenever the Nvidia driver is updated.
I also had the same exact problem with my AMD system back when it was running the proprietary fglrx driver.
>This is less of an issue with Linux, per se, and more to do with proprietary video drivers.
Actually, it IS a problem with Linux. I don't get this behavior on my Windows or OSX machines where NVIDIA has been reliably (modulo obvious caveats) shipping "evil proprietary" drivers for a decade.
Linux is great, but it doesn't need to be coddled.
> It's weird that you're blaming my operating system's problems on me.
No one is blaming you specifically; it is a common way of saying, “if you write operating systems, and you do $THING, you will get $RESULT.” Common, but wrong, which is why your high school English teacher will ding you for phrasing something that way.
Why would Linux need ‘defending’ for superior flexibility? The fact that files work like this is an advantage, not a disadvantage. I have never seen the flaw you’ve pointed out actually occurring in practice.
Well, it's not always an advantage. It's just a consequence of a different locking philosophy.
Windows patches are a much bigger pain in the ass to deal with on a month-to-month basis, but Linux patches can really bite you.
Example 1:
Say I have application 1, which uses shared library X, and application 2, which every 5 minutes spawns an external process that uses library X and communicates in some way with application 1. Now let's say that library X v2.0 and v2.1 are incompatible, and I need to apply an update.
On Windows, if I apply this update, everything keeps running as before until the system is rebooted. Updates, although they take significant time due to restarts, are essentially atomic: the update either applies to the entire system or to none of it. The system will continue to function in the unpatched state until after it reboots.
On Linux, it's possible for application 1 to continue to run with v2.0 of the shared library, while application 2 will load v2.1, and suddenly your applications stop working. You have to know that your security update is going to cause this breaking change and you need to deal with it immediately after applying the update.
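A hedged illustration of the mechanism in C (libX.so is the hypothetical library from the example; substitute any real .so, and build with -ldl): each process resolves the library from disk at the moment it loads it, and keeps that mapping for its lifetime even if the file is replaced afterwards.

```c
/* Hedged sketch: print which copy of a library this process actually
 * loaded. "libX.so" is a hypothetical placeholder from the example. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>

int main(void) {
    /* App 1 did this before the update and got v2.0; a process started
     * after the update does the same call and gets v2.1. */
    void *h = dlopen("libX.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    struct link_map *lm = NULL;
    if (dlinfo(h, RTLD_DI_LINKMAP, &lm) == 0)
        printf("this process loaded: %s\n", lm->l_name);

    dlclose(h);
    return 0;
}
```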
Example 2:
A patch is released which, unbeknownst to you, causes your system to be configured in a non-bootable state.
On Windows, you'll find out immediately that your patch broke the system. It's likely (but not certain) to reboot again, roll back the patch, and return to the pre-patched state. In any event, you will know that the breaking patch is one that was in the most recently applied batch.
On Linux, you may not reboot for months. There may be dozens or hundreds of updates applied before you reboot your system and find that it's not in a bootable state, and you'll have no idea which patch caused your issue. If you want your system in a known-working state, you'll have to restore it to a point prior to the last system reboot. And God help you if you made any configuration changes or updates to applications that are not in your distro's repository.
No lie. After all, nothing is stopping you from updating once every Tuesday and rebooting after updates. You just won't have to do it 8 times in succession or stop in the middle of doing useful work to do so.
I just don't update Nvidia or my kernel automatically, and magically I only have to reboot less than once a month, always on my schedule.
I have! We had a log shipping daemon that wasn't always releasing its file handles properly and kept taking out applications by running the box out of disk space. That said, I drastically prefer the Unix behaviour.
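For anyone curious, that failure mode is easy to reproduce; a rough C sketch, with made-up paths and sizes:

```c
/* Rough sketch of that failure mode (paths and sizes are made up):
 * the log file's name is unlinked, but the daemon still holds the
 * descriptor, so the blocks are never freed and df keeps shrinking. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/statvfs.h>

static unsigned long long free_bytes(const char *path) {
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0) { perror("statvfs"); exit(1); }
    return (unsigned long long)vfs.f_bavail * vfs.f_frsize;
}

int main(void) {
    const char *log = "/tmp/app.log";   /* stand-in for the rotated log */
    int fd = open(log, O_CREAT | O_WRONLY, 0600);
    if (fd < 0) { perror("open"); return 1; }

    int err = posix_fallocate(fd, 0, 64 * 1024 * 1024);  /* really allocate blocks */
    if (err != 0) { fprintf(stderr, "posix_fallocate: %d\n", err); return 1; }

    unlink(log);   /* "logrotate" removes the file... */
    printf("free after unlink: %llu bytes\n", free_bytes("/tmp"));  /* space still gone */

    close(fd);     /* ...but space only returns when the daemon lets go */
    printf("free after close:  %llu bytes\n", free_bytes("/tmp"));
    return 0;
}
```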
It is a common tactic of Linux evangelists to state that they never have the problems you're experiencing and thereby disregard any criticism. You'll probably also get variants of "you're using the wrong distribution".