Hacker News | pegasus's comments

Social media won't die, but it can be replaced with something that is better in every way - especially better at actually enriching our lives, rather than at gluing us to screens and feeding us ads. It falls to us to create these decentralized systems. I think local-first software and some ideas from crypto are good first guideposts on the way there. AI can surely help too, if used judiciously.


I don't understand the downvotes - parent's right. All regimes today like to trumpet themselves as (exceedingly!!) democratic; the question is: are they? In my estimation, communist countries have, overall, done significantly worse in this department. And yes, rule by the many is the definition of democracy.


"rule by the many people is the definition of democracy"

Well, with elections we have "rule by the few".

I wonder whether elections couldn't be replaced by an app.


I think the claim would be that an LLM would only ever pass a strict subset of the questions testing a particular understanding. As we gather more and more text to feed these models, finding those questions will necessarily require more and more out-of-the-box thinking... or an (un)lucky draw. Giveaways will always be lurking just beyond the inference horizon, ready to yet again deflate our high hopes of having finally created a machine which actually understands our everyday world.

I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand. An approximation of our own understanding of that world, itself imperfect, but at least aiming for the real thing.

The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he's coming from with that as well: given hardware sensors, the machine would not have to simulate the outside world on top of the inner one.

The inner world can more easily be imagined as finite, at least. Many people seem to take this as a given, actually, but there's no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.


I take it you don't subscribe to Popper then?


It's definitely bucket-like for me, and I can attest meditation empties it. Whenever I stop meditating, mental busyness and subconscious anxiety slowly build up. Half an hour a day is enough to keep it away. I just keep bringing my attention back to the breath, trying to feel into the physiological need to breathe (which is usually occluded or distorted by mental activity). Whenever I feel I am actively holding on to some tension, I allow myself to release it. That's all in terms of instructions, and for me it works wonders. I look at it as the equivalent of flossing for the brain ;)


Colons separating key and value pairs enhance readability in my book, so KDL goes too far for me. Other options are JSON5 (https://json5.org/) and CUE (https://cuelang.org/docs/introduction/). The latter is maximalist rather than minimalist, but very well thought out and worth checking out.
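To make the contrast concrete, here's a minimal sketch of CUE's flavor (an illustrative fragment, not a real config): it keeps JSON-style "key: value" pairs, and its maximalist side shows in letting you unify a value with type constraints.

```cue
server: {
	name: "web"                // plain data, colon-separated as in JSON
	port: int & >=1 & <=65535  // a type constraint with bounds...
	port: 8080                 // ...unified with the concrete value
}
```

Running `cue vet` on a file like this checks the concrete values against the constraints, which is the kind of thing neither JSON5 nor KDL attempts.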


I like it a lot: a superset of JSON that fixes the most annoying warts of the format - most annoying because most gratuitous, in that it would have cost nothing to include those obvious features, and they would have greatly increased usability. It's just a very pragmatic approach to improving JSON, compared to creating a brand-new language, as YAML and a slew of other contenders have vainly tried. I wish the name and presentation reflected this better. Something like FJSON (Fixed JSON) or JSONF (JSON Fixed), or some other name making it clear this is extended JSON rather than yet another language, would avoid the confusion which has already engulfed the comment section here.


Optional unquoted keys. Basically, a superset of JSON that fixes the most annoying warts of the format - most annoying because most gratuitous, in that it would have cost nothing to include those obvious features, and they would have greatly increased usability. I like it a lot; it's just a very pragmatic approach to improving JSON, compared to creating a brand-new language, as YAML and a slew of other contenders have vainly tried. I wish the name and presentation reflected this better. Something like FJSON (Fixed JSON) or JSONF (JSON Fixed), or some other name making it clear this is extended JSON rather than yet another language, would avoid the confusion which has already engulfed the comment section here.
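For anyone who hasn't seen it, a small illustrative fragment - every "fix" shown here is standard JSON5, and all of it is invalid in plain JSON:

```json5
{
  // comments are allowed
  unquotedKey: 'single quotes work too',
  hex: 0xDECAF,
  trailing: [1, 2, 3,],  // trailing commas are fine
}
```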


JSON++


Exactly. It's not about skeuomorphism, it's about saving space. Yes, it's unintuitive, and they could have made it work with a circular swipe as well, and probably should have, but it makes sense design-wise.


A counter provides more information but takes longer to read and appreciate than a simple angular magnitude.


Did you read TFA? It gives concrete advice on how to change the training and evaluation of these models in order to decrease the error rate. Sure, these being stochastic models, the rate will never reach zero, but given that they are useful today, decreasing the error rate is a worthy cause. All this complaining about semantics is just noise to me. It stems from being fixated on some airy-fairy ideas of AGI/ASI, as if anything else doesn't matter. Does saying that a model "replied" to a query mean we are unwittingly anthropomorphizing them? It's all just words; we can extend their use as we see fit. I think "confabulation" would be a more fitting term, but beyond that, I'm not seeing the problem.


We can call it whatever, and yes, the answer is training - just like all things regarding the quality of LLM output per parameter count. The problem is that many people understand “hallucination” as a gross malfunction of an otherwise correctly functioning system, i.e. a defect that can/must be categorically “fixed”, not understanding that it is merely a function of trained weights, inference parameters, and prompt context, which they can:

A: probably work around by prompting and properly structuring tasks

B: never completely rule out

C: not avoid at all in certain classes of data transformations where it will creep in in subtle ways and corrupt the data

D: not intrinsically detect, since it lacks the human characteristic of “woah, this is trippy, I feel like maybe I’m hallucinating”

These misconceptions stem from the fact that in LLM parlance, “hallucination” is often conflated with a same-named, relatable human condition that is largely considered completely discrete from normal conscious thought and workflows.

Words and their meanings matter, and the failure to properly label things often is at the root of significant wastes of time and effort. Semantics are the point of language.
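In the spirit of points A-C above, here is a minimal sketch of the kind of workaround available for extraction-style tasks (the helper and the data are hypothetical, not from any particular library): a cheap grounding check that flags any "extracted" value that does not appear verbatim in the source text. It catches only the crudest confabulations, and per point B it can never rule hallucination out.

```python
def flag_ungrounded(source: str, extracted: dict[str, str]) -> list[str]:
    """Return the keys whose values do not occur verbatim in `source`."""
    return [key for key, value in extracted.items() if value not in source]

# Toy example: the model invented a currency that never appears in the source.
source = "Invoice 1042, issued 2024-03-01, total due: 150.00 EUR."
extracted = {"invoice_id": "1042", "total": "150.00", "currency": "USD"}
print(flag_ungrounded(source, extracted))  # -> ['currency']
```

A check like this is no substitute for human review, but it makes the silent data corruption of point C loud, which is often the best one can do.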

