Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive. And as for the whipper-snappers: people don't get very far writing programming languages/video games/operating systems without knowing their stuff.
The real boogeyman is feature combinatorics. When making a tightly-integrated product (which people tend to expect these days), adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.
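To put back-of-the-envelope numbers on that (a rough sketch in Python; the function name pairs is made up, and it pessimistically assumes every feature can interact pairwise with every other):

    # Rough sketch: if every feature can in principle interact with every other,
    # the number of pairs you have to think about grows quadratically.
    def pairs(n: int) -> int:
        return n * (n - 1) // 2

    print(pairs(100))               # 4950 existing pairwise interactions
    print(pairs(101) - pairs(100))  # adding feature #101 means 100 new pairs to consider

Even if only a fraction of those pairs actually interact, the work of ruling the rest out still scales with the number of features you already have.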
Take OpenBSD for example: When you have a volunteer project by nerds for nerds, prioritizing getting it right (over having the fastest benchmark or feature-parity with X) is still manageable.
Bring that into a market scenario (where buyers have a vague to non-existent understanding of what they're even buying), and we get what we get. Software companies live and die by benchmark and feature parity, and as long as it crashes and frustrates less than the other guy's product, the cash will keep coming in.
I tend to agree that OpenBSD is hitting the spot a lot better, but the problem I have is that there's not enough momentum for it to keep up with hardware releases. They had been maintaining kernel drivers for AMD GPUs for a while, but they seem to have stopped updating them regularly. I now own no hardware from the last decade that OpenBSD can get accelerated graphics on, and I need accelerated graphics to drive the displays that let me be productive (by showing me enough information at once that I can understand what I'm doing).
I was having a conversation with somebody the other day about a privacy concern they were addressing: a company was offering to monitor cell signals for some retail analytics purpose, and it was genuinely surprising to them that mobile phones broadcast and otherwise leak information that can be used to fingerprint the device. I find it rather shocking how much ignorance people allow themselves when it comes to things like this. Furthermore, the way she was talking about it, it seemed she thought it was the responsibility of basically anyone but the owners of these devices to consider things like this, or even to ask the questions that would tell you something like this exists.
> When making a tightly-integrated product (which people tend to expect these days)
Do they? It was my impression that the recent evolution of user-facing software (i.e. the web, mostly) was about less integration, due to the reduced scope and capabilities of any single piece of software.
> adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.
This sounds true on first impression, but I'm not sure how true it really is. Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore. A feature is to a program what a program is to an OS, and yet most software doesn't involve extensive use of, or changes to, the operating system.
The most complex and feature-packed software I've seen (e.g. 3D modelling tools, Emacs, or hell, Windows or Linux) doesn't trigger combinatorial explosion; every new feature is developed almost in isolation from all others, and yet tight integration is achieved.
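Roughly the shape I have in mind (a hypothetical sketch, not any particular program's API; add_hook and run_hooks are made-up names): each feature registers against a small, stable core and never needs to know the other features exist.

    # Hypothetical sketch of "features in isolation": each feature talks only to a
    # small, stable core API (here, a hook registry), never to another feature.
    from typing import Callable, Dict, List

    hooks: Dict[str, List[Callable[[str], str]]] = {}

    def add_hook(event: str, fn: Callable[[str], str]) -> None:
        hooks.setdefault(event, []).append(fn)

    def run_hooks(event: str, text: str) -> str:
        for fn in hooks.get(event, []):
            text = fn(text)
        return text

    # Two independent "features"; neither knows the other exists.
    add_hook("on-save", lambda text: text.rstrip() + "\n")       # trim trailing whitespace
    add_hook("on-save", lambda text: text.replace("\t", "    ")) # expand tabs

    print(run_hooks("on-save", "hello\tworld   "))

Tight integration here comes from everyone agreeing on the core API, not from every feature knowing about every other feature.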
And this is actually more the rule than the exception: once you have more than two plugins, plugins colliding, or blocking future updates of the main software, is more or less the norm.
Not an issue if there are no third-party plugins. It's hard to resist allowing third-party plugins when you already have the architecture, though. It's also hard to resist feature bloat when adding new features is seemingly free.
If there are no third-party plugins, then you don't have plugins; it's an internal architectural decision, not relevant to end users.
Having plugins means anybody should be able to create one.
I remember that Vagrant supports plugins written against all historic API versions, no matter what the current API version is. That's rare goodness, but it only prevents one type of problem: the inability to update the core.
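The general shape of that (a hypothetical sketch, not Vagrant's actual code; PluginV1, V1Adapter, load, etc. are made-up names): the core keeps a thin adapter per old plugin API version, so plugins written against an old API keep working against the current core.

    # Hypothetical sketch: the core only ever speaks its current plugin API, and
    # wraps plugins written against older API versions in thin adapters at load time.
    class PluginV1:
        """Old API: plugins implemented run(config_dict)."""
        def run(self, config: dict) -> None:
            print("v1 plugin ran with", config)

    class PluginV2:
        """Current API: plugins implement execute(name, options)."""
        def execute(self, name: str, options: dict) -> None:
            print(f"v2 plugin {name} ran with", options)

    class V1Adapter:
        """Makes a v1 plugin look like a v2 plugin to the core."""
        def __init__(self, plugin: PluginV1) -> None:
            self.plugin = plugin

        def execute(self, name: str, options: dict) -> None:
            self.plugin.run({"name": name, **options})

    def load(plugin):
        # Upgrade old plugins at load time instead of breaking them.
        return V1Adapter(plugin) if isinstance(plugin, PluginV1) else plugin

    for p in (PluginV1(), PluginV2()):
        load(p).execute("example", {"verbose": True})

It keeps old plugins alive, but it does nothing about the other problem: plugins stepping on each other.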
Or, just think about your phone. If I put my head to the speaker, a sensor detects that, and the OS turns off the screen to save power. If I'm playing music to my Bluetooth speaker, and a call comes in, it pauses the song. When the call ends, the song automatically resumes.
KT's UNIX 0.1 didn't do audio or power management or high-level events notification.
> Pretty sure making emacs render smoothly in 2016 was not an isolated change--even if the code change were only a single line.
This was a corner case. What I meant is the couple dozen packages I have in my Emacs that are well-interoperating but otherwise independent, and can be updated independently.
> Or, just think about your phone. If I put my head to the speaker, a sensor detects that, and the OS turns off the screen to save power. If I'm playing music to my Bluetooth speaker, and a call comes in, it pauses the song. When the call ends, the song automatically resumes.
These each affect a small fraction of the code that's running on your phone. None of them is concerned with, e.g., screen colors/color effects like night mode, or with phone orientation, or with notifications, or with the countless other things that run on your phone in near-complete independence.
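Concretely, the kind of decoupling I mean (a hypothetical sketch, nothing like the actual phone OS code; subscribe and publish are made-up names): the proximity handler and the call handlers each subscribe to their own events and never reference each other.

    # Hypothetical sketch: independent subscribers on an event bus. Each handler
    # touches only its own small slice of the system and never sees the others.
    from collections import defaultdict

    subscribers = defaultdict(list)

    def subscribe(event, handler):
        subscribers[event].append(handler)

    def publish(event):
        for handler in subscribers[event]:
            handler()

    subscribe("proximity.near", lambda: print("screen: off"))
    subscribe("call.incoming",  lambda: print("media: pause"))
    subscribe("call.ended",     lambda: print("media: resume"))

    for event in ("proximity.near", "call.incoming", "call.ended"):
        publish(event)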
Buttery smooth emacs is not a corner case. When working on API-level things (for those who don't know--emacs is practically an operating system), if we care about not breaking things, we must be highly cognizant of all the ways that API is consumed. Even if the final change ends up being very small code-wise, the head-space required to make it is immense. That is why we have (relatively) so few people who make operating systems and compilers that are worth a damn.
Most emacs plug-ins aren't operating at that level of interdependence. This one is working on a text buffer at t0, and that other one is working on a text buffer at t1. Of course, the whole thing can be vastly simplified if we can reduce interdependence, but that is not the way the world works. Typical end user doesn't want emacs. Typical end user wants MS Word.
Even if I accepted the replacement of my word "feature" with your word "program" (not that I do), one only needs to look at Docker's prevalence to see the point still holds. Interdependence is hard, and sometimes unavoidable.
> Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore.
Turns out when you change the words of a statement, it changes the meaning of that statement.
> Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive.
I think he means generations in the workplace/politics sense, where a generational change can happen every few years: most of the old guys leave and new guys come in. Technically you could ask the old guys, since most are still alive, but that doesn't happen, for a lot of different reasons.