I still cannot understand how it came about that everyone adopted systemd, despite it being a showcase of the most selfish, reckless and irresponsible maintainership ever. And it's not like it's something new, it's been like that for what, 10, 15 years now?
FWIW, it said the same when I asked deepseek that question. And while I cannot prove otherwise (I didn't specifically try to), I am under a very strong impression that past chats influence the future ones. This could be some kind of cognitive bias, but there were some very suspicious coincidences.
I still somehow haven't tried Claude Chat, and while I wouldn't assume it lies about whether it remembers anything, I wouldn't just trust whatever these things say about themselves either.
> ChatGPT offers a “memory” personalization setting ... OpenAI are at pains to point out that this function can be turned off at any time, and that individual memories can be deleted
Uh… Isn't that just irrelevant by now (to the point that such remarks are actually misleading)? AFAIK, it's been a couple of months already since OpenAI began storing all your conversations (because of that court order), whether you "delete" them or not. So while you can technically disable the "memory" setting, that only means it won't use your past conversations to help you; they would still be perfectly available to anybody with, let's say, elevated access. Granted, the threat model in the post assumes that the author is only worried about what one user of the account can learn about other users of the account, and that he trusts OpenAI itself. But why would OpenAI be "at pains to point out that this function can be turned off" then?
Genuinely curious: is it even still relevant today? I've got the impression that there were a lot of these elaborate techniques and algorithms up until around 2016, some of which I even learned, and which were subsequently basically replaced by a single NN model trained somewhere at Facebook, which you maybe need to fine-tune to your specific task. So it's all got boring, and learning these techniques today is akin to learning the abacus, or, at best, to finding antiderivatives by hand.
That’s a great question. While NNs are revolutionary, they’re just one tool. In industrial Machine Vision, tasks like measurement, counting, code reading, and pattern matching often don’t need NNs.
In fact, illumination and hardware setup are often more important than complex algorithms. Classical techniques remain highly relevant, especially when speed and accuracy are critical.
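To make "often don't need NNs" concrete, here's a rough sketch of classical part counting with nothing but thresholding and connected components; the image path and the choice of OpenCV are just assumptions for illustration, not anything from the comment above:

```python
import cv2

# Hedged sketch: count bright parts on a dark background using classical
# thresholding + connected components. No NN involved.
img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)  # "parts.png" is a placeholder path
if img is None:
    raise SystemExit("image not found")

# Otsu picks the threshold automatically; the lighting setup still matters far more.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
num_labels, _ = cv2.connectedComponents(binary)
print(f"parts counted: {num_labels - 1}")  # label 0 is the background
```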
And usually you need determinism, within tight bounds. The only way to get that with an NN is to have a more classical algorithm verify the NN's solution, using boring things like least-squares fits and statistics on the residuals. Once you have that in place, you can then skip the NN entirely, and you're done. That's my experience.
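For illustration, a minimal sketch of that kind of check; the scenario, names, and thresholds are made up, not from any real pipeline. The NN would propose a set of edge points, and a classical least-squares line fit plus residual statistics decides whether to trust the result:

```python
import numpy as np

def verify_line_candidate(points, max_rms=0.5, max_abs=1.5):
    """Hypothetical check: accept an NN-proposed edge only if a
    least-squares line fits its points within tight residual bounds."""
    pts = np.asarray(points, dtype=float)   # shape (N, 2): x, y pixel coordinates
    x, y = pts[:, 0], pts[:, 1]
    # Fit y = a*x + b by ordinary least squares.
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - (a * x + b)
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    worst = float(np.max(np.abs(residuals)))
    ok = rms <= max_rms and worst <= max_abs  # deterministic accept/reject
    return ok, a, b, rms

# Noisy but genuinely collinear points pass; scattered points would be rejected.
good = [(i, 2.0 * i + 1.0 + 0.1 * ((-1) ** i)) for i in range(20)]
print(verify_line_candidate(good))
```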
Those NN-models are monstrosities that eat cycles (and watts). If your task fits neatly into one of the algorithms presented (such as may be the case in industrial design automation settings) then yes, you are most definitely better off using them instead of a neural net-based solution.
If your problem is well-suited for “computer vision” without neural nets, these methods are a godsend. Some of them can even be implemented with ultra-low latency on RTOS MCUs, great for real-time control of physical actuators.
Exactly. This is even more annoying when it isn't exactly a hash, but some gibberish you cannot really make sense of, which does have a numeric section in it: a user ID, or unix time, or who knows what else it could be. You are trying to visually find a file abcd89764237 somewhere after abcd683426834, and it isn't evident why you cannot, until you notice that the latter has more digits in its "ID" for some reason.
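For illustration, a tiny sketch of that mismatch (the names are made up, following the prefix-plus-digits shape described above): lexicographic order, which is what a plain file listing gives you, disagrees with numeric order as soon as the digit counts differ.

```python
import re

names = ["abcd683426834", "abcd89764237", "abcd100"]

def numeric_tail(name):
    """Sort key for an assumed '<prefix><digits>' shape: split off the trailing
    digit run and compare it as a number, not as text."""
    m = re.match(r"(.*?)(\d+)$", name)
    return (m.group(1), int(m.group(2)))

print(sorted(names))
# lexicographic: ['abcd100', 'abcd683426834', 'abcd89764237']
print(sorted(names, key=numeric_tail))
# numeric tail:  ['abcd100', 'abcd89764237', 'abcd683426834']
```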
I think "disastrous" is a bit too strong of a word, but I don't see any "real" reasons mentioned here why it won't be. Sure, there are "cheap" phones that are (almost always) Android, it's also probably true that "most people" use those and wouldn't switch no matter what. But there are also Pixel phones, Samsung Galaxy phones, you know, what people call "flagships". Why people buy these? It's been a long time since they stopped being competitively cheaper they Apple. Even flagship Huawei phones now cost the same as an iPhone. Who buys them? Well, I did. Solely because I couldn't install software I want on iPhone. If I truly won't be able to install what I want on my Android phone — I don't know yet, how I'll deal with that issue (surely I'll figure it out) but I promise you — I'll buy an iPhone for the first time in my life, if only to say "fuck you" to Android. And I urge you to do the same. Vote with your wallet.
Exactly. I wanted to point this out as well in relation to the author's desire to put all build commands in a `just` configuration file. It sounds to me like a desire to use yet another "slick and shiny" tool (which `just` is when compared to `make`), but what's the point exactly? The build process will still be container-dependent and may or may not work outside of the container, and you don't get the benefit of Docker caching anymore.
Being able to run "just build" in a container-free local development environment and have the same build process run as the one in your production setup is a productivity boost worth having.
Same as how it's good to be able to easily run the exact same test suite in both dev and CI.
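As a purely hypothetical sketch of that idea (the recipe names and Go commands are placeholders, not from the post), one recipe definition serves both environments:

```just
# hypothetical justfile sketch; the Go commands are placeholders
build:
    go build -o bin/app ./cmd/app

test:
    go test ./...
```

The image used for deployment can then invoke the same `just build` in its build stage (assuming `just` is installed there), so the local and production builds can't silently drift apart.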
But wouldn't you run whatever you are writing in a Docker container in dev anyway? I always did just that, as long as I used Docker as a deployment tool for the project at all. And, in fact, sometimes even when I didn't, since it's much easier and cleaner to keep, say, multiple php or redis versions in containers than on your PC. And it's only been maybe a year since it became actually effortless to maintain this whole version zoo locally for Python, thanks to uv. In fact, even when it's something like Go, so the deployable is a compiled binary, I'd usually wrap the build process in a Docker container for dev anyway.
Depends on how complex your stuff is. Most of my own projects run outside of Docker on my machine and run inside Docker when deployed. I have over 200 projects on my laptop and I don't want the overhead of 200 separate containers for them.
Lately I somehow haven't been using LLMs locally and relied mostly on ChatGPT for casual tasks. I think it's been a little less than a year since I last played with ollama, and I remember my impression was that all the recent popular models definitely aren't "uncensored" in the sense that some older modification of llama2 I used was, and all of them suck at prose-related tasks anyway. In fact, nothing but ChatGPT models seemed good enough for writing, but, of course, they refuse to talk about pretty much anything. Even DeepSeek is not great at writing, and it is much bigger than anything I ever ran locally.
So, are there even good uncensored models now? Are they, like, really uncensored?
Yes, there are. Wayfarer, for instance, is intended for "RPG", but really it just outputs narrative and is "unaligned" in the sense that the creators have not included any guardrails, and the model will output pretty much whatever you ask it to.
Then you have jailbreak techniques that still work on aligned models. For instance, my partners and I have a test prompt that still works, even with GPT-5, and always produces "explosive making directions", as well as another "generic approach" that we use to bypass guardrails... sorry, these are trade secrets for us... although OpenAI et al. have implemented systems to detect these attacks, and we're getting closer to those platforms banning you for doing so.
If this matters to you, you need to develop your own local/remote pipeline for personal use. Learn how to use vLLM... I have tools that allow me to very quickly deploy models locally or remotely to my private serverless infrastructure for the purpose of testing and benchmarking.
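For anyone who hasn't touched vLLM yet, a minimal sketch of its offline Python API; the model id is only an assumption based on the Wayfarer mention above, substitute whatever checkpoint you actually trust:

```python
from vllm import LLM, SamplingParams

# Placeholder model id (assumption); loads the weights locally, needs a suitable GPU.
llm = LLM(model="LatitudeGames/Wayfarer-12B")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)
prompts = ["Continue the story: the caravan reached the ruined gate at dusk..."]

# generate() returns one RequestOutput per prompt; print the first completion of each.
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```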
A model changing its opinion at the first request may sound more flattering to you, but it is much less trustworthy for anybody sane. With a more stubborn model I at least have to worry less about giving away what I think about a subject via subtle phrasing. Other than that, it's hard to say anything about your scenario without more information. Maybe it gave you the right information and you failed to understand it; maybe it was wrong, and then that's no big news, because LLMs are not this magic thing that always gives you the right answers, you know.
Can somebody explain what's going on here? It seems I'm missing some important piece of background info. Why don't they just add a -J flag for everyone who wants JSON output? Oh, wait, tree already has a -J flag to output JSON. So WTF are they doing here?
I am especially confused by this:
> Surely, nothing will happen if I just assume that the existence of a specific file descriptor implies something, as nobody is crazy or stupid enough to hardcode such a thing?
Wait, what? But "you" (the tree authors) just hardcoded such a thing. Do "you" have some special permission to do this nonsense?