Automated code formatting, in my experience, never decreases diff sizes, and frequently increases them. Some of those diff size increases support git-blame, some of them hinder it. Around the boundary between two possible formattings, they’re terrible.
Code formatters do tend to force some patterns that make line-oriented git blame more useful, such as splitting function calls across many lines with a single argument on each; but that's not about the code formatter, just the convention. (The automatic formatters choose it because they have no taste, and taste is what it takes to make other styles consistently good. If you have taste, you can do better than that style, sometimes far better.)
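For illustration, here's that convention in Python (the layout black/ruff keep when a call has a magic trailing comma); create_order is a made-up function, the point is only how the diff behaves:

```python
def create_order(customer_id, items, discount=0.0, currency="USD", dry_run=True):
    """Stub, purely for illustration."""
    return dict(customer_id=customer_id, items=items,
                discount=discount, currency=currency, dry_run=dry_run)

customer_id, items = 42, ["book"]

# Single-line call: editing any one argument rewrites the whole line,
# so git blame afterwards points every argument at that commit.
order = create_order(customer_id, items, discount=0.1, currency="EUR", dry_run=False)

# One argument per line: the same edit is a one-line diff,
# and blame on the untouched arguments survives.
order = create_order(
    customer_id,
    items,
    discount=0.1,
    currency="EUR",
    dry_run=False,
)
```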
Depends on the language and the available formatters, really. I find the black/ruff formatting style in Python to be very consistent, which helps with git blame. For C++ there's no good default formatter for "small diffs", since they all, as you say, add random line breaks depending on the position of parameters and such.
With style you can do better, but a consistent style is impossible in anything except single-developer projects.
Halting is sometimes preferable to thrashing around and running in circles.
I feel like if LLMs "knew" when they're out of their depth, they could be much more useful. The question is whether knowing when to stop can be meaningfully learned from examples with RL. From everything we've seen, the hallucination problem and this stopping problem boil down to the same issue: you can teach the model to say "I don't know", but if that answer is part of the training data, it might just spit out "I don't know" to random questions, because it's a likely response in the space of possible responses, rather than saying "I don't know" when it actually doesn't know.
SocratesAI is still unsolved, and LLMs are probably not the path to knowing that you know nothing.
> if LLMs "knew" when they're out of their depth, they could be much more useful.
I used to think this, but I'm no longer sure.
Large-scale tasks just grind to a halt with more modern LLMs because of this perception of impassable complexity.
And it's not that the tasks need extensive planning: the LLM knows what needs to be done (it'll even tell you!). It's just more work than will fit within a "session" (an arbitrary boundary), so it would rather refuse than get started.
So you're now looking at TODOs, and hierarchical plans, and all this unnecessary pre-work, even when the task scales horizontally very well (if it would just jump into it).
At this point it's AI discussing AI with AI. AI is really good at this; it's much easier to keep that discourse going than to solve deep technical problems with it.
OP doesn't like working at companies that mandate bad tools, so he uses a proxy measure to determine this beforehand.
The other poster had problems with people like OP because they won't use the (bad) tools provided by the company.
Neither side sounds wrong. It's actually a win-win if they never meet, which would make OP's strategy great for both. It might preclude OP from some opportunities, though, if the filter is too wide.
I personally do think that if you mandate the wrong tools you will never get the best developers, because great developers are very picky about the tools they use. It can get a bit extreme in some cases, but I've rarely seen anybody who is good at this job and not very opinionated in one way or another.
In most cases the problem is the mandating, though. If you give a recommendation but allow deviations from it within reason, you can usually keep everybody happy.
It absolutely would. I can even tell you what type of laptop/dev equipment you’d likely get.
Hard to say what the actual office environment would end up like (plenty of toxic nerds out there), but I've worked for CEOs who were devs, and even when they were terrible people, I never once hated the development part of the job.
Also German here: if you've worked through our robotic bureaucracy, you know there is some valid criticism here.
Germany is criticized all the time. You can read it, ignore it, or disagree with parts of it, but I don't think anybody should be above criticism, lest they come to believe they're #1 in everything.
>However, code that is well-designed by humans tends to be easier to understand than LLM spaghetti.
Additionally, you may have institutional knowledge accessible. I can ask a human and they can explain what they did. I can ask an LLM too, and it will give me a plausible-sounding explanation of what it did.
I can't speak for others, but if you ask me about code I wrote >6 months ago, you'll also be stuck with a plausible-sounding explanation. I'll have a better answer than the LLM, but it will be because I am better at generating plausible-sounding explanations for my behavior, not because I can remember my thought processes for months.
This is where stuff like git history often comes in handy. I cannot always reliably explain why some code is the way it is when looking at a single diff of my own from years ago, but give me the history of that file and the issue tracker where I can look up references from commits and read the comments, and I can reconstruct it with a very high degree of certainty.
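A rough sketch of that reconstruction in Python, for concreteness. The git commands are real; the PROJ-1234 ticket pattern and the file/line in the usage comment are assumptions to adapt:

```python
import re
import subprocess

def explain_line(path: str, line_no: int) -> None:
    """Find the commit that last touched a line, then surface its message
    and any issue-tracker references mentioned in it."""
    # git blame a single line in porcelain format; the first token is the commit SHA.
    blame = subprocess.run(
        ["git", "blame", "-L", f"{line_no},{line_no}", "--porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    commit = blame.split()[0]

    # The full commit message often links to the ticket or design doc.
    message = subprocess.run(
        ["git", "show", "-s", "--format=%B", commit],
        capture_output=True, text=True, check=True,
    ).stdout

    # Ticket keys like PROJ-1234 (assumed pattern; adjust to your tracker).
    tickets = re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", message)
    print(f"commit {commit}\n{message}tickets referenced: {tickets}")

# explain_line("src/billing.py", 120)  # hypothetical file and line
```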
There might also be a high-level design page about the feature, or Jira tickets you can find through git commit messages, or an architectural decision record that this new engineer could look over even if you forgot. The LLM doesn't have that.
The weights won't have that by default, true; that's not how they were built.
But if you're a developer and can program things, there is nothing stopping you from giving LLMs access to those details if you feel that's what's missing.
I guess that's why they call LLMs "programmable weights": you can definitely add a bunch of that material to the context so the model can use it when needed.
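As a minimal sketch of what "adding that material to the context" could look like, assuming you just shell out to git and paste the result into the prompt; ask_llm() and the example file path are placeholders, not any particular vendor's API:

```python
import subprocess

def build_context(path: str, question: str) -> str:
    """Assemble a prompt that carries the file's commit history alongside the question."""
    # Real git flags: follow renames, one line per commit (hash, date, subject).
    history = subprocess.run(
        ["git", "log", "--follow", "--format=%h %ad %s", "--date=short", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout

    return (
        "Answer questions about this file using its history as context.\n"
        f"File: {path}\n"
        f"Commit history:\n{history}\n"
        f"Question: {question}\n"
    )

# prompt = build_context("src/billing.py", "Why is the retry count set to 7?")
# answer = ask_llm(prompt)  # placeholder for whatever client you use
```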
>But for asking a clarifying question during a training class?
LLMs can barely do 2+2, and humans don't understand the weights even when they can see them. LLMs could have all the access they want to their own weights and they still wouldn't be able to explain their thinking.