rokkamokka's comments | Hacker News

Same, am I making QR codes wrong? Like you, I've never seen a simple HTTPS link in a QR code fail

A toy example, but it's an interesting read nonetheless. We also host our monolith app on a few bare-metal machines (considerably beefier than the example, however) and it works well, although the app runs far more queries (and more complex ones) than this. Transaction locking issues are our bane, though.

How many queries do you usually handle? Why a few machines? Wouldn't one suffice? What resources do they have?

Clearly ChatGPT is streets ahead

I love stuff like this. It always tickles my brain to try and find the optimal way (or, as optimal a way as I can) of solving puzzles. Sometimes it's easy, sometimes it's really hard. Oftentimes you can get something decent with not too much effort, and the dopamine hit is great when you see it working

A scary proposition considering what the US is rapidly becoming

Purchasing F35s is paying tribute to the empire so it doesn't come down on you harder with tariffs and compliance burdens. It's not meant to actually be useful.

Yeah, and the tariffs are still there anyway, so I don't understand why we aren't following suit and cancelling those orders.

Yep, anyone paying billions in what is effectively tribute to this admin is only playing themselves considering the stable genius seems to flip the game board every 5 minutes.

They're betting on things going "back to normal" eventually. They have neither the imagination nor the courage to think otherwise.

If it does, the new president/ruling party will probably look favorably on those who respected the crown even when they hated the guy who wore it. Because that's what normal looks like.


Finland just joined NATO, which means it lost its neutrality and its independence in foreign policy. It is now very difficult to do anything completely different from, or unfriendly toward, the U.S., because of that one country we share a border with.

If the U.S. really takes more steps toward taking Greenland and works against a NATO ally, then it might become a very complicated situation for both Denmark and Finland, and for the whole alliance. But until then, Finland at least is stuck with the U.S.


You don't have the leverage you think you have.

It still did not work, tho.

It did; they could have ended up like the Swiss

What bad happened to the Swiss?

Some of the worst US tariffs on Earth

I was taught about this incident at university many years ago. It's undeniably an important lesson that shouldn't be forgotten

Unlimited? Do you think Sub-Saharan Africa would have a higher or lower quality of life if they had accepted unlimited Japanese immigration?

Of course a literally unlimited number of people would be a bad thing. Did you mean unregulated?


I think you know very well what he means and you're latching onto semantics for no good reason.

No need to be pedantic; we know that physical limits on the number of people exist. Obviously, we're talking about uncapped immigration.

Regarding your question: it's one for you to answer in both directions, the result being a way of interrogating the hidden information.


The interesting question here is whether a statistical model like a GPT can actually encode this in a meaningful way. Nobody has quite found it yet, if it can.


They can, and they already do it somewhat. We've found enough to know that.

As the best-known example: Anthropic examined their AIs and found that they have a "name recognition" pathway - i.e. when asked about biographical facts, the AI will respond with "I don't know" if "name recognition" has failed.

This pathway is present even in base models, but it only results in a consistent "I don't know" if the AI was trained to reduce hallucinations.

AIs are also capable of recognizing their own uncertainty. If you have an AI-generated list of historical facts that includes hallucinated ones, you can feed that list back to the same AI and ask it how certain it is about each fact listed. Hallucinated entries will consistently get lower certainty scores (a rough sketch of that check is below). This latent "recognize uncertainty" capability can, once again, be used in anti-hallucination training.

Those anti-hallucination capabilities are fragile, easy to damage in training, and do not fully generalize.

Can't help but think that limited "self-awareness" - and I mean that in a very mechanical, no-nonsense "has information about its own capabilities" way - is a major cause of hallucinations. An AI has some awareness of its own capabilities and how certain it is about things - but not nearly enough of it to avoid hallucinations consistently across different domains and settings.
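
To make the "feed it back and rate your certainty" step concrete, here's a minimal sketch of that loop in Python, assuming an OpenAI-style chat completions client. The model name, the prompts, and the naive JSON parsing are placeholders I picked for illustration; this isn't how any lab actually runs anti-hallucination training, just the self-scoring check described above:

    import json
    from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder; any chat model will do for the sketch

    def generate_facts(topic: str, n: int = 10) -> list[str]:
        """Ask the model for n short facts, one per line."""
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user",
                       "content": f"List {n} short historical facts about {topic}, one per line."}],
        )
        return [ln.strip() for ln in resp.choices[0].message.content.splitlines() if ln.strip()]

    def score_certainty(facts: list[str]) -> list[tuple[str, int]]:
        """Feed the facts back to the same model and ask it to rate its certainty 0-100 for each."""
        prompt = (
            "For each numbered statement below, rate how certain you are that it is true, "
            "from 0 to 100. Reply with only a JSON list of integers, one per statement.\n\n"
            + "\n".join(f"{i + 1}. {f}" for i, f in enumerate(facts))
        )
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        scores = json.loads(resp.choices[0].message.content)  # naive: assumes bare JSON, no code fences
        return list(zip(facts, scores))  # assumes the model returned one score per fact

    if __name__ == "__main__":
        facts = generate_facts("the Hanseatic League")
        for fact, score in sorted(score_certainty(facts), key=lambda p: p[1]):
            print(f"{score:3d}  {fact}")  # low-scoring entries are the ones worth double-checking

In practice, the low-certainty entries become candidates for filtering or for "I don't know" targets in fine-tuning, which is roughly the training signal described above.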


One half of carrot mooners believe it's the tip, the other half believe it's the butt. I'm a butt man myself


Who are you who are so wise in the ways of science?


God I love blame for use cases like this

