A toy example but it's an interesting read nonetheless. We also host our monolith app on a few bare metal machines (considerably beefier than the example however) and it works well, although the app does considerably more queries (and more complex queries) than this. Transaction locking issues are our bane though.
I love stuff like this. It always tickles my brain to try and find the optimal way (or, as optimal a way as I can) of solving puzzles. Sometimes it's easy, sometimes it's really hard. Oftentimes you can get something decent with not too much effort, and the dopamine hit is great when you see it working.
Purchasing F35s is paying tribute to the empire so it doesn't come down on you harder with tariffs and compliance burdens. It's not meant to actually be useful.
Yep, anyone paying billions in what is effectively tribute to this admin is only playing themselves considering the stable genius seems to flip the game board every 5 minutes.
They're betting on things going "back to normal" eventually. They have neither the imagination nor the courage to think otherwise.
If it does, the new president/ruling party will probably look favorably on those who respected the crown even when they hated the guy who wore it. Because that's how "back to normal" works.
Finland just joined NATO, which means it gave up its neutrality and its independence in foreign policy. It is now very difficult to do anything completely different from, or unfriendly toward, the U.S., because of that one country with the shared border.
If the U.S. really takes more steps toward taking Greenland and works against a NATO ally, then it might be a very complicated situation for both Denmark and Finland, and for the whole alliance. But until then, Finland at least is stuck with the U.S.
They can, and they already do it somewhat. We've found enough to know that.
As the most well known example: Anthropic examined their AIs and found that they have a "name recognition" pathway - i.e. when asked about biographic facts, the AI will respond with "I don't know" if "name recognition" fails.
This pathway is present even in base models, but it only produces a consistent "I don't know" if the AI was trained for reduced hallucinations.
AIs are also capable of recognizing their own uncertainty. If you have an AI-generated list of historic facts that includes hallucinated ones, you can feed that list back to the same AI and ask it how certain it is about each fact listed. Hallucinated entries will consistently get lower certainty. This latent "recognize uncertainty" capability can, once again, be used in anti-hallucination training.
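That "ask it again how sure it is" loop is easy to wire up yourself, by the way. Rough Python sketch below - ask_model is just a placeholder for whatever LLM client you actually use, and the prompts and threshold are made up, so treat it as the shape of the loop rather than a real hallucination filter:

    # Sketch of the self-verification loop described above.
    # ask_model is a hypothetical stand-in for your LLM client;
    # nothing here is tied to a specific provider or API.

    def ask_model(prompt: str) -> str:
        """Placeholder: send `prompt` to your LLM and return its reply."""
        raise NotImplementedError("wire this up to your own LLM client")

    def generate_facts(topic: str, n: int = 10) -> list[str]:
        """Step 1: ask the model for a list of facts (some may be hallucinated)."""
        reply = ask_model(f"List {n} historical facts about {topic}, one per line.")
        return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]

    def score_certainty(fact: str) -> float:
        """Step 2: feed each fact back to the same model and ask for a 0-100
        confidence score. Hallucinated entries tend to come back lower."""
        reply = ask_model(
            "On a scale of 0-100, how certain are you that the following "
            f"statement is true? Answer with a number only.\n\n{fact}"
        )
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0  # unparseable answer -> treat as no confidence

    def flag_likely_hallucinations(topic: str, threshold: float = 60.0):
        """Step 3: keep only the facts the model itself is unsure about."""
        return [(fact, score) for fact in generate_facts(topic)
                if (score := score_certainty(fact)) < threshold]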
Those anti-hallucination capabilities are fragile, easy to damage in training, and do not fully generalize.
Can't help but think that limited "self-awareness" - and I mean that in a very mechanical, no-nonsense "has information about its own capabilities" way - is a major cause of hallucinations. An AI has some awareness of its own capabilities and how certain it is about things - but not nearly enough of it to avoid hallucinations consistently across different domains and settings.