I agree with this. One of the best changes we ever made was embracing a branch-heavy, merge-centric workflow for literally everything. There was considerable pushback initially because it was perceived as 'complicated', but that dried up after a month once people were no longer fighting to push their hurriedly-rebased work to the canonical master branch.
It also means we can put an extra layer of QA testing between dev branch and master, which works nicely as a means of forcing communication between devs and QA too, and helps with onboarding (because the product featureset is huge).
Automated tests are great, but having a person read the feature ticket description and testing steps and then check that the functionality actually makes sense before it hits the master branch...
One more tool in the box, and it's a great culture correction mechanism too. Devs who don't test their work before submitting it get it sent back for fixing and feel less productive, so doing things thoroughly the first time feels quicker than rushing it. Doesn't even require management to monitor rates of reopening tickets, etc: people just don't like having to redo things.
When something needs doing, I just get it done. If you wait for everyone to agree that it needs planning to be done at some point in the future, it'll never get done because the feature firehose never relents.
This has cost me a few weekends, but a) it meant that I kept my sanity in the long run and b) despite said fixes reducing the company's dependency on me (i.e. supposedly damaging my job security), taking the initiative has allowed me to demonstrate a different and entirely more significant kind of value.
I'm sure this works fine for you, but in general I would not recommend that people do work in their spare time. The reason not to do that is, as you point out, that you take all the risk and your company gets all the profit. That's not, generally, a healthy employment relationship, and as an employer I would be pissed if I found out someone did that.
Just carve out an hour every day from your salaried time to chip away at these projects. Nobody will ever know unless you tell them.
Eh. The effect could be the same. I average less than 8 hours of work a day. But once every 2 months or so I throw myself at a problem at work I personally feel strongly about. This might spill into a late night or a weekend. But I like the rush of it, I get a lot of appreciation at work, and it has materially benefited my career. Once I even got an on-the-spot bonus from the CTO for fixing a perf problem.
I actually invested some time a while back in building a nicer API for C# to invoke shell commands and process the results. The only downside IMO is the Rx library dependency for STDOUT/STDERR; I personally try to avoid depending on libraries which themselves have extra dependencies.
Since I did this at work it belongs to my employer, so I can't currently publish it freely, but it's not part of our core product so they may be open to publishing it under MIT licence or similar at some stage.
So it can be done, and has been done, but I guess most people are sufficiently happy with Powershell, Python, etc to not bother bridging the gap for C# too.
As always, the public debate (and witchhunt) is focused on the tool, the symptom, the observation, and never the underlying cause.
How are these technologies being abused, and why are the perpetrators allowed to do so?
The surveillance tech already exists, as a component of personalised advertising, because it makes lots of money. It's all any government needs as well. It's flown under the radar as far as the masses are concerned, because most people have absolutely no comprehension of how that kind of thing can work and will ignore anyone talking about it because it's an SEP.
But facial recognition is something which the masses can recognise, if not understand, and therefore react to as a threat.
Personally I think that as long as we haven't solved the first problem then we should avoid piling more possible problems upon it, so roadblocking facial recog tech is not something I would object to. But it does miss the point by rather a large margin.
I don't believe that it'll be as 'awareness-raising' as some people seem to believe either, for the reason implied above: people only care about the comprehensible threat.
I learned to program without the Internet. There was just a blocky image on the TV, a rubber keyboard, and 16K of RAM. The overwhelming majority of the available memory resided in the programmer's head.
While I'd never consider trying to impose that approach on someone, I'd certainly suggest it. It really encourages attention to detail and total focus, which are increasingly valuable skills these days.
Another way to look at it is that only infrastructure should be locking stuff, and infrastructure should be a very tiny part of the codebase. That infrastructure should probably be responsible for layering a different concurrency paradigm on top of threads...
Most platforms these days provide such things as part of the language, or in the standard library, or as a freely-available package. Writing one's own concurrency infrastructure is usually unnecessary, but when it is needed, it needs to be kept as small and as easily-auditable as possible.
A bit like `unsafe` in Rust, in fact.
Locking all over the place generally indicates that someone's trying to shotgun-debug concurrency bugs. I've had to use libraries which did that, and wished horrible things upon those responsible.
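For illustration, a minimal sketch (C with pthreads; the names and sizes are invented) of the kind of tiny infrastructure piece I mean: one bounded message queue owns all the locks, and the 'application' threads never touch a mutex, they just pass messages:

```c
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* All of the locking lives inside this one small "infrastructure" queue.
 * Application code only ever calls queue_push()/queue_pop(). */
#define QUEUE_CAP 64

typedef struct {
    void           *items[QUEUE_CAP];
    size_t          head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} queue_t;

static void queue_init(queue_t *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

static void queue_push(queue_t *q, void *item)   /* blocks while full */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

static void *queue_pop(queue_t *q)               /* blocks while empty */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *item = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return item;
}

/* "Application" code: no locks in sight, just messages. */
static void *worker(void *arg)
{
    queue_t *q = arg;
    for (;;) {
        char *msg = queue_pop(q);
        if (msg == NULL)                         /* NULL = shut down */
            return NULL;
        printf("worker got: %s\n", msg);
    }
}

int main(void)
{
    queue_t q;
    queue_init(&q);

    pthread_t t;
    pthread_create(&t, NULL, worker, &q);

    queue_push(&q, "hello");
    queue_push(&q, "world");
    queue_push(&q, NULL);

    pthread_join(t, NULL);
    return 0;
}
```

If a concurrency bug does turn up, there's exactly one small, boring file to audit.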
Indeed, I do like scripting languages which are specific. Whatever the language, I do insist that people spell stuff out in a properly readable and standard fashion if writing a script. Aliases and shorthand are fine at a prompt but by damn if I ever see them in a Git commit...
For the record, most other shells seem to understand autocompletion of command parameters these days, so hitting <tab> after --re should complete --recurse.
Tangentially related and not aimed at parent post:
<rant>
My main complaint with Powershell is that it's a mess. I've had to write scripts in Powershell since version 1. We have to work with servers which are still running Windows 2008, and trying to get a Powershell script to run the same way across 2008 - 2019 is next to impossible. People use Powershell because it's there and it's 'object-oriented', and the resulting maintenance is a terror, because the language started as a hack and actually fixing that would only make the compatibility story worse (some say this has already happened).
(That option to run it in a versioned compatibility mode? Doesn't seem to work properly in practice. I have a suspicion that the differences in this case were down to some quirk of the console/terminal behaviour on varying Windows versions rather than Powershell itself, but that doesn't cut a lot of ice when there is literally no way to fix such-and-such-a-system in a way which works across all relevant machines without, you know, replacing 10 megabytes of PS with something else entirely. Unix shells tend not to have this problem because the integration points tend to just be STDIN, STDOUT and STDERR, and the shell does very little to mangle them.)
If you write something for POSIX shell (no, not Bash) then it'll basically work on pretty much any Unix from the last few decades. No guarantees regarding the other things you invoke from it, but the shell itself is fairly minimal and consistent. If it's confining, use a more powerful language; you're probably outside its use case.
My other complaint is that I find the object-oriented nature of Powershell to be nothing but a bloody nuisance. I can see how it might be useful sometimes (when wandering around at the prompt it's OK for discoverability) but trying to debug any nontrivial script is a fucking nightmare. 80% of any script ends up just being sanity checks producing useful error messages, because the errors provided by PS itself are rarely useful and usually misleading.
(Actually that's inaccurate. More commonly there are no sanity checks at all, and the output of the script bears no relation to its purpose whatsoever, because one function called early on yielded two values instead of one, so a variable was an array where the rest of the script assumed a scalar. The rest of the script then blithely carried on as if nothing was wrong, because some combination of properties on the result was missing/null (because it's an array, not a single object). The POSIX shell behaviour in such cases is usually to pass two paths instead of one to a command at some stage, which might produce an interesting error message, kill the script (`set -e` is your friend), or erase your hard disk; any of which may raise a ticket and maybe even get someone to fix the script.)
Object types are great with statically-typed compiled languages (because your compiler/IDE will usually yell at your mistakes) and they're fine with any project which has a proper build process (because your test suite should yell at your mistakes) but for standalone scripts they're a spectacular example of a foot-seeking tactical nuke launcher.
Net debugging time is less with a text-oriented shell because, when necessary, I can pipe STDIN or a text file to an individual command to see what it does, and what it prints to the terminal is exactly what it gave to my script.
Some or all of the above might be fixed by hiring whatever someone's personal definition of 'competent programmers' is, but fewer ways for things to go wrong before the code is actually run would be a good start.
And no, 'Kubernetes' and 'Cloud' are not general solutions. Some of us still work with individual, separate machines in many legally distinct environments which do not permit the 'herd of cattle' and 'staged rollout' approach to server management.
</rant>
But it's really not. Level 4 will require the vehicle to take you from door to door without driver intervention in normal driving conditions. Level 4 is mostly what people think of when they think of autonomous driving because you can take a nap, work on your laptop, whatever. You get in your car, tell it where you want to go, and it does everything.
Level 5 basically removes the steering wheel so you never drive. But once you're at level 4, almost all of the hard problems have been solved.
The complaint is more about bad interfaces. One of the examples (unix command line tools) has a very clumsy interface if you're trying to use it from within a lower level program: create pipes for input & output, fork, exec, use i/o on the pipes to feed data in and get results out, waitpid, figure out how the tool exited... this is slow, annoying to implement, has many failure modes that would not exist if the interface were a direct function call, and it's inflexible in that you'll be unable to use the core functionality provided by the tool if your data cannot be realized as a simple text stream that still makes sense to the receiving tool. And you can't extend the core functionality e.g. by passing another function to it. That's a huge amount of friction to using code that is already there, because of bad interface. And that's why people reimplement those tools instead of composing programs using those existing tools.
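For anyone who hasn't had the pleasure, this is roughly what that dance looks like in C (most of the error handling omitted, and `tr` standing in for whatever tool you actually wanted to call):

```c
/* The ceremony needed to treat an external tool as if it were a function:
 * pipes, fork, exec, I/O on the pipes, waitpid, decode the exit status. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int in_pipe[2], out_pipe[2];           /* parent->child, child->parent */
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: wire pipes to stdio, exec */
        dup2(in_pipe[0], STDIN_FILENO);
        dup2(out_pipe[1], STDOUT_FILENO);
        close(in_pipe[1]); close(out_pipe[0]);
        execlp("tr", "tr", "a-z", "A-Z", (char *)NULL);
        _exit(127);                        /* exec failed */
    }

    /* parent: feed data in, read the result out, then reap the child */
    close(in_pipe[0]); close(out_pipe[1]);
    const char *input = "hello, world\n";
    write(in_pipe[1], input, strlen(input));
    close(in_pipe[1]);                     /* EOF so the tool can finish */

    char buf[256];
    ssize_t n = read(out_pipe[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
    close(out_pipe[0]);

    int status;
    waitpid(pid, &status, 0);              /* figure out how the tool exited */
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```

And that's the happy path: push more than a pipe buffer's worth of data without interleaving the reads and writes and the whole thing deadlocks, which is exactly the kind of failure mode a direct function call doesn't have.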
Likewise, FFIs exist, but often it's just easier to reimplement something in your language instead of trying to use the existing implementation from another language.
It's not about interfaces being an impediment but about there being too many composition models, none of which seem to scale very well. A composition model is more than an interface. E.g. the way you compose a C program is you write the pieces (functions, structs) that are designed to fit together in certain ways, and then run the compiler, which binds them together. The way you compose a web application is you invoke multiple processes that have a pre-shared notion of the protocols, and they bind with each other. We're inventing model upon model, stacking them up like a wall of rocks. I'm saying we should look for models that are powerful but compact and can scale up. It should also be something that leads to less reimplementation, because composition within it becomes easier. I don't think we have such models and I believe this needs deeper study.
I'm not a game dev, but rough guess: the framerate is artificially limited to some 'sane' range by the equivalent of a short sleep() which is cancelled by an input event. More devices, more events, more chance to prompt the next frame early?
It's very common to constrain your central "game loop" for several reasons. A game like Hotline presumably has some basic loop like "process inputs, process events, update AI plans, process AI actions, redraw screen". When the logical tasks are easy (e.g. most of the AIs are dead and you're standing still), that's essentially just a busy-wait that redraws your screen as fast as it can. Constantly maxing system resources is obnoxious, and it can be jarring when things slow back down. 40FPS might look just fine, but you can still 'feel' the change when you ramp up and down from 60FPS, so it's nicer to just cap the whole thing at 40FPS. Mouse inputs probably don't adjust the cap itself, but to avoid laggy responses they're usually interrupts which might refresh the screen.
A fun aside: some games fill out the time until the next redraw with more "thinking time" instead of sleep(), which leads to bizarre behaviors like an AI that gets smarter when you turn down the graphics settings.
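A rough sketch of that 'short sleep cancelled by an input event' mechanism, in C, with poll() on stdin standing in for a real input/event source (a real game would wait on its windowing library's event queue, but the shape is the same):

```c
/* Capped game loop sketch: do the frame's work, then wait out the rest of
 * the frame budget with poll(), which returns early if input arrives --
 * so a new event can prompt the next frame sooner. */
#include <poll.h>
#include <time.h>
#include <unistd.h>

#define FRAME_MS 25                         /* ~40 FPS cap */

static long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

int main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };

    for (;;) {                              /* runs until the game quits (omitted) */
        long start = now_ms();

        /* process inputs, process events, update AI plans, redraw screen ... */

        long budget = FRAME_MS - (now_ms() - start);
        if (budget > 0 && poll(&pfd, 1, (int)budget) > 0) {
            char buf[64];
            read(STDIN_FILENO, buf, sizeof buf);   /* drain/handle the early input */
        }
    }
}
```

The wait doubles as the frame cap, and an incoming event cuts it short; more devices generating more events means more frames starting early, which matches the parent's guess.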