
The "news" is never "new". There is nothing in the news that people haven't been doing to each other for thousands of years.

Some people complain that Hollywood just keeps making the same movies, but they'll watch the same news for their entire lives.


Reading in the news that for the eighth year in a row the city has come in on budget, crime is low, and school achievement is up is great, and I enjoy it, because it affects my life directly. The value of news, repetitious or otherwise, is that it's part of the feedback loop of governance.


>The "news" is never "new". There is nothing in the news that people haven't been doing to each other for thousands of years.

Huh? You can easily name news that has literally never happened before, e.g. innovation and tech


You get to choose what games to play in life, and how to play them. When I was younger and the internet was new and exciting, I had more of an online presence. I don't have a family, so I have plenty of time I could spend online; I just choose not to.

If the kinds of jobs you want penalize you for not having an online presence, then you have to decide if you want to play that game. Personally I have always hated jumping through hoops; I don't have anything to prove and am confident in my abilities. I don't need to broadcast my hobbies and opinions to the world.

So while I could probably make more money if I conformed to a different play style, I am happier playing my way, where I don't do anything (including using social media) because I ought to, but only because I want to.


> You get to choose what games to play in life

Word. At one point in my post-secondary education, I rebelled against being present online as a Computer Science major. I rejected LinkedIn. I met like-minded people at school and got my first job after dropping out.

One of my good friends never posts anything online. He rejected the game of selling yourself to employers altogether. He thought the game was meaningless and that prospective employers should recognize his talent without it.

After a few years, I became discontented with my job. My network consisted only of like-minded people offline. Some folks had done the job for decades and didn't see any reason to change.

Eventually I chose to play the game: I joined in-person networking events, signed up for a LinkedIn account, and created a web development portfolio. I got good job leads and learned about companies and positions through the events and LinkedIn. Note that LinkedIn was only for professional purposes: I didn't broadcast my opinions via posts or comments but simply clicked "like" on connections' new job announcements. I got jobs that paid 40% more than my old one, which finally allowed me to get air conditioning at home.

My friend still lives in his parents' home and has had no career prospects for almost two decades now.


>> You get to choose what games to play in life, and how to play them

Absolutely. But you don't get to change the rules of the game you choose.

You can absolutely have your own rules to your own game. But naturally you can't expect others to play by those rules unless they want to.

My expectation is that I'm happy to be flexible with the rules, but I've figured out what game I want to play. I'd make a terrible employee so I chose the "own business" game. I'm not into excessive control so I chose the "hire good people" game.

My recommendation is to figure out the big things, and not sweat the small stuff. Be honest, but not overly picky.


Exactly. At a certain point in your career (and life) you get to be more picky, and start seeing jobs as a two way street. Employers who make candidates jump through unnecessary hoops will filter out some of the unqualified people, sure, but they'll also filter out people who are qualified but have no interest in playing stupid games with their time.

One employer asked me to take a long, automated personality test for cultural fit before we'd even had a conversation. I scoffed and walked away with a polite rejection. That was enough of a test for both of us. I feel sorry for anyone who has to work there.

Another job had me do a long take home that was actually realistic for the kind of work I was expected to do (build data driven frontends). I loved the take home, did well on it, got the job and loved that too. It was a good test for both of us. Most of the jobs I've enjoyed were like that. Reasonable interviews and reasonable assessments that respect the candidate while still giving the company useful information.

The job/client I'm interviewing for right now explicitly asked for a LinkedIn profile. I simply told them I don't have one, but they looked at my resume anyway and scheduled an interview despite that. We'll see how that goes.


> One employer asked me to take a long, automated personality test for cultural fit before we'd even had a conversation.

I've only been asked this once, and my reply was (paraphrasing) "that you require this already tells me that I'm a poor cultural fit here."


Heh, well done. (And much more concise than mine.)

Mine was something like (also paraphrased) you can't be serious, reference to rodents in a maze, incredulous chuckle, sorry you have to be there, no thank you, kthxbye.

It really rubbed me the wrong way. Even when I was a teenager interviewing at Starbucks, in a group setting, for a minimum-wage job that started at 3am every day, they treated everyone with more humanity and respect. I don't know how this company manages to hire anyone at all. Maybe they're specifically filtering for compliant minions who never question authority. Shrug.


I'm posting this not because I disagree with you, but rather to clarify one point you made:

>> Employers who make candidates jump through unnecessary hoops will filter out some of the unqualified people, sure, but they'll also filter out people who are qualified but have no interest in playing stupid games with their time.

It's worth noting that from an employer perspective missing some qualified people is not a problem (because they'll usually have plenty of qualified people applying for the post), whereas hiring an unqualified person is disastrous.

In other words given the choice of "reducing to a subset of all good" or "reducing to all good plus possibly a few bad" the goal is the former.

This can be galling if you are an "excluded good one", but it's worth understanding this because being filtered out doesn't mean you're bad. (It might mean that, but not necessarily so.)

As to your main point, being compatible with your employer in terms of style (the interview works both ways) is important. Generally speaking, the hiring process is often unrelated to the work process (for both sides of the table), so some discernment is necessary, but I'm a big believer in "when someone shows you who they are, believe them".


> It's worth noting that from an employer perspective missing some qualified people is not a problem

Yeah, that's a great point. And helps with the two way street. It's a self-selecting match between employers and applicants who each are looking for particular things. At least when the employment market is healthy.


It's such a farce. You got a 70% on a multiple-choice test and now you are "certified" in information security, never mind that you don't know how to actually produce anything (neither a "dev" nor an "op"). But hey, the c-suite who hired you can only read pie charts and bar graphs, and is paid top dollar to make sure those boxes get checked, not to know how anything actually works.

So you buy a bunch of automated scanning tools, pester the sysadmins to install multiple rootkits on all the servers, generate a bunch of PDF reports, and email them to those same sysadmins. You know, to help out the people who have been _achieving_ security (not just talking about it) for years. You rely on them to implement or document everything for you, but it never occurs to you that they could teach you a thing or two, because they are not "certified". What they would consider their nuanced opinion tempered by years of experience comes across to you and your c-suite boss as complacent and change-resistant excuse-making.


First we invent a machine to kill, then someone invents a machine to kill that machine, and so on. We call this progress. An endless cycle of violence perpetuated by the pursuit of one group's security at the expense of another's.

Is warfare a fact of life? Maybe. But to take actions which logically and demonstrably create the very insecurity they are meant to avoid is irrational. Point this out and you are branded an idealist. "Humanity is doomed to violence, so always be ready to kill" is apparently sage wisdom.


Through a systemic lens, yes, it is. As weaponry has gotten more deadly, conflicts have become less frequent.

There has never been an instance of stable human civilization (beyond small groups or extremely defensible/remote locations) that did not involve the ability to be violent. That ability is not durable. The skills required to ensure it does not become the purpose of a society are not durable either. It takes craft, community, and hard work to cultivate the readiness for violence and restrain its application.

It is fine for people to disavow violence. It is beneficial that they do. You will find in the military people who value pacifism more deeply than your average citizen.

We do not have to believe in or participate in something to be grateful for the people who do. After one personal experience with violence, the value of the readiness to respond becomes apparent.


The issue with your take is that those who believe humanity is doomed to violence have hard scientific evidence on their side.


By inductive reasoning that's an understandable opinion; human beings have had wars somewhere every day for thousands of years. Is the conclusion that therefore violence is inevitable prescriptive as well? Where do we draw the line between accepting necessary violence (someone attacks you) and perpetuating the cycle (arms races)? Do the laws of physics somehow require humans to engage in violent conflicts?

It seems evident to me that the root of these problems is in our thinking; when we engage in everyday behaviors and experience unexpected results, we easily recognize the error and take corrective action. For instance, if you take a wrong turn while driving. But when the same thing happens with violent conflicts, we seem to shrug our shoulders and say "nothing to learn here, it's just the human condition".

Maybe the best we can do is individually not contribute to the various forms of violence. This requires a level of responsibility that is abdicated when we start with the conclusion that violence is mandatory. It's not very scientific to start with a conclusion.


> It seems evident to me that the root of these problems is in our thinking; when we engage in everyday behaviors and experience unexpected results, we easily recognize the error and take corrective action. For instance, if you take a wrong turn while driving. But when the same thing happens with violent conflicts, we seem to shrug our shoulders and say "nothing to learn here, it's just the human condition".

I'm having trouble understanding here. Is your contention that the alternative to being a pacifist means giving up?

To the contrary, the proliferation of nuclear weapons has resulted in a world safer than it's ever been


We may not have a lot to discuss if your idea of safety is everyone having loaded guns pointed at each other. To me that is an extremely precarious position especially if we have accepted as wisdom that violence, maybe even preemptive violence, might be necessary. Is indefinite cold war really our highest aspiration?

I'm trying to suggest that human beings engage in all sorts of destructive behaviors, including warfare, because of the way we think about the world. We compartmentalize violence, pollution, etc, advocate for the security of our nation-state at the expense of others, and ignore the "externalities", i.e. the simple fact that everything is connected. This tendency perpetuates the very cycles of conflict that most people feel like we could do without.

And yet somehow it's easier to see that you've gone too far east and to go west instead, or vice versa. But for non-trivial matters it seems people want a fixed action pattern to which to cling ("never use force" or "be ready to shoot first") and get confused by comments such as mine which aren't advocating any fixed strategy because the fixed strategies are what got us in this mess.


> Is indefinite cold war really our highest aspiration?

It's literally brought about the longest period of peace between major world powers, in basically the entire history of civilization.

> I'm trying to suggest that human beings engage in all sorts of destructive behaviors, including warfare, because of the way we think about the world. We compartmentalize violence, pollution, etc, advocate for the security of our nation-state at the expense of others, and ignore the "externalities", i.e. the simple fact that everything is connected. This tendency perpetuates the very cycles of conflict that most people feel like we could do without.

At the end of the day human conflict is rooted in simple physics and biology. We compete for a limited set of resources. Even if everyone had every resource they wanted, they'd still compete for sexual partners. You're not going to fix biology or physics. If you have an issue with this, I'd suggest filing a complaint with God.

Literally no movement to ignore these very real constraints has ever been met with success. Some have shown promise on very small scales, but these usually end up petering out due to lack of new converts, or descend into madness upon scale. Can you point to anything that substantiates your point?

> And yet somehow it's easier to see that you've gone too far east and to go west instead, or vice versa. But for non-trivial matters it seems people want a fixed action pattern to which to cling ("never use force" or "be ready to shoot first") and get confused by comments such as mine which aren't advocating any fixed strategy because the fixed strategies are what got us in this mess.

What is your comment? No one can choose on behalf of others. You can only choose on behalf of yourself. Hence the issue.


> It's literally brought about the longest period of peace between major world powers, in basically the entire history of civilization.

What may look to you like peace is still a Mexican standoff. On small time scales no one is firing a weapon, but it's not an environment I'd call peaceful.

> At the end of the day human conflict is rooted in simple physics and biology. We compete for a limited set of resources. Even if everyone had every resource they wanted, they'd still compete for sexual partners. You're not going to fix biology or physics. If you have an issue with this, I'd suggest filing a complaint with God.

I don't think using physics to explain warfare would be simple at all. Biologically, I agree that situational constraints sometimes do potentiate conflict. But my complaint is firmly with humans, not some made-up external force they imagine to be responsible. If you would use violence to obtain sexual partners that's on you, not God.

I'm not suggesting that people ignore real constraints, that would be as irrational as ignoring the consequences of actions. I'm also not claiming to have a formula, or system to achieve this. I'm saying we discount the possibility of anything new or different at our own peril. I just can't accept that "God made us violent, so stop worrying and love the bomb" is the only conclusion a truly serious person could come to.

> What is your comment? No one can choose on behalf of others. You can only choose on behalf of yourself. Hence the issue.

I agree that there is no simple solution to ending humanity's destructive tendencies because society cannot change from the top down, only the bottom up (or inside out if you prefer).


Yes, that is sage wisdom. It is a fallacy to think that humans exist in any sort of harmony with nature. The law of nature is kill or be killed. I’m reminded of this each summer when wasps try to take my home from me.


While it's all thermodynamic deadweight loss, if I have to choose between them I'd rather the machines were destroying just machines, not the humans.


Negation, being a concept, exists only in the mind. Same with "things". A thing is a noun; a part of speech. The "real world" is undifferentiated quanta.


I have a hard time coming to terms with this Platonic view of the mind, as if our minds were some kind of extradimensional aliens playing with this sandbox of "undifferentiated quanta" sometimes called reality.

I understand how it makes sense to say that the concept of speedrunning is completely absent from Ocarina of Time and only exists in the player playing the game, but I do not see how this would be a good philosophy to apply to ourselves.

I confess that I have a particular aversion to this specific philosophy/POV because I feel like it is riding on the respectability and "coolness" of science to sound more serious while being just another metaphysics without (IMHO) any* particularly good qualities.

* Ok, I admit that it has at least a good quality: it is a good example of a non-religious metaphysics to give to people that cannot imagine a non-religious metaphysics.


Philosophies are tools for reasoning. I don't literally go through my life thinking "oh here's an undifferentiated quanta, time to apply some nouns to it." But if I want to adopt a scientific mindset it's beneficial to think in terms of the physical experience versus my mental model of it because I can write my mental model down, whereas I can't write physical experiences or undifferentiated quanta down. That's what makes them quanta.

We have tons of sayings for this, like "the map is not the territory" and "wherever you go, there you are."


Put simply, "the map is not the territory"


It would be a bad convention for a map to use valleys to represent mountains.


If that's just a nominal change, such as switching the color convention it uses for mountains vs. valleys, then it would be fine.

And that's the case with positive/negative.


Or, as Monty Python has rightly pointed out, "it's only a model."


While the mathematics of the late 19th and the 20th century developed many theories with very abstract concepts that may well have been invented in the minds of some mathematicians without a direct correspondence to the world they experienced, such a claim would be false about almost any concept in the mathematics developed before the 19th century, because almost all older mathematical concepts are just abstractions of properties of the physical world.

For instance, what happens when you connect the two electrodes of a battery to the pins of a semiconductor diode will differ depending on whether you negate the battery or not (i.e. whether or not you reverse its connections). What happens with a ball (or with a thrown stone) will differ depending on whether its velocity is positive or negative, and so on.

Additions and subtractions of physical quantities, therefore also negation, happen in the physical world regardless of the presence of sentient beings.

Humans can recognize such properties of the world and give them names and integrate them in coherent mathematical models, but the base concepts are not inventions, they are the result of empirical observations.


Careful!

> What happens with a ball (or with a thrown stone) will differ depending on whether its velocity is positive or negative, and so on.

The velocity of a ball is a vector. Using a positive or negative number to describe it is a manner of convention. When you say that you threw a ball with “positive 7 mph” velocity, you need to explain what you mean.

One might argue that there really is a ball and that it has a velocity and that the velocity really is an element in a vector field originating [0] at the center of mass of the ball. Debating to what extent this is fundamentally true or is just a useful concept that people came up with would be interesting.

[0] In general relativity, space is not Euclidean (nor is it a flat Minkowski space), and velocity vectors are only really meaningful in association with a point in spacetime. You can read all about tangent bundles in Wikipedia :)


Only according to some epistemologies.


Ceci n'est pas une pipe


Propaganda. It is a form of violence.


So by that reasoning, if I'm frustrated about global warming and I share an article without reading it I'm spreading propaganda?

And committing violence?

I don't think so.


If you are doing it primarily out of frustration, without reading the articles, and merely out of a desire to "put it in other people's faces", then yes.

Motivation matters. I may care deeply about the environment but if I act from a place of wanting to coerce others, I sow seeds of resistance to the very thing I am trying to achieve.


In "any sense" of the word? Surely anyone who adjusts their behavior when they get undesired or unexpected results is reasoning and thinking. And since most activities are mediated by thought of some kind, most people are reasoning and thinking otherwise they would never recover from even simple mistakes, like walking east when they need to go north.

Saying they're "not thinking in any sense of the word" because they can't solve predicate logic problems from a college textbook is a rather odd claim. Surely those things arise from reasoning and thinking, rather than the other way around.


This seems to me to be where these systems need to go in the future, akin to reinforcement learning.

You feed an LLM a prompt. It then abstracts and approximates what the result should be. It then devises a hypothesis, solves it, and compares it to the approximated output. Then it can formulate a new hypothesis and evaluate it, based on the outcome of hypothesis 1. From there it can either keep iterating or dump that path for a new one (e.g., the next-best hypothesis from the original formulation).

At some point the answer is "good enough." But along the way it keeps playing against its thoughts to see if it can do better.

A key issue may be the original approximation, so it may need to consider its adjustment when iterating.

Maybe this is how cutting-edge LLMs work now. I have no idea.
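
For what it's worth, a minimal sketch of the propose/evaluate/refine loop described above might look like this. Nothing here is a real API: ask_llm and score stand in for whatever model call and evaluation you actually have.

    # Sketch of the loop: approximate the target, propose, compare, refine.
    # ask_llm(prompt) -> str and score(candidate, target_sketch) -> float are
    # hypothetical stand-ins supplied by the caller, not real library calls.
    def refine(prompt, ask_llm, score, max_iters=5, good_enough=0.9):
        # 1. Have the model approximate what a correct result should look like.
        target_sketch = ask_llm("Describe what a correct answer would look like:\n" + prompt)

        best, best_score, feedback = None, float("-inf"), ""
        for i in range(max_iters):
            # 2. Propose a concrete hypothesis, informed by earlier feedback.
            candidate = ask_llm(prompt + "\n\nPrevious feedback:\n" + feedback)
            s = score(candidate, target_sketch)   # compare against the approximation
            if s > best_score:
                best, best_score = candidate, s
            if s >= good_enough:                  # 3. Stop once it's "good enough".
                break
            # 4. Otherwise keep iterating, feeding the outcome back in.
            feedback = "Attempt %d scored %.2f against the target; do better." % (i, s)
        return best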


For anyone who still doesn't understand why more and more folks are pointing out that LLMs "hallucinate" 100% of the time, let me put it to you this way: what is the LLM doing differently when it generates tokens that are "wrong" compared to when the tokens are "right"? If there is a difference, where does that exist? In the mechanism of the LLM, or in your mind?

Bonus question 1: Why do humans produce speech? Is it a representation of text? Why then do humans produce text? Is there an intent to communicate, or merely to construct a "correct" symbolic representation?

Bonus question 2: It's been said that every possible phrase is encoded in the constant pi. Do we think pi has intelligence? Intent?


What is the difference between a google map style application that shows pixels that are "right" for a road, and pixels that are "wrong" for a road?

A pixel is a pixel; colors cannot be true or false. The only way we can say a pixel is right or wrong in google maps is whether the act of the human utilizing that pixel to understand geographical information results in the human correctly navigating the road.

In the same way, an LLM can tell me all kinds of things, and those things are just words, which are just characters, which are just bytes, which are just electrons. There is no truth or false value to an electron flowing in a specific pattern in my CPU. The real question is what I, the human, get out of reading those words, if I end up correctly navigating and understanding something about the world based on what I read.


Unfortunately, we do not want LLMs to tell us "all sorts of things"; we want them to tell us the truth, to give us the facts. Happy to read how this is the wrong way to use LLMs, but then please stop shoving them into every facet of our lives, because whenever we talk about real-life applications of this tech it somehow is "not the right fit".


> we want them to tell us the truth, to give us the facts.

That's just one use case out of many. We also want it to tell stories, make guesses, come up with ideas, speculate, rephrase, ... We sometimes want facts. And sometimes it's more efficient to say "give me facts" and verify the answer than to find the facts yourself.


What if other sources of facts switch to confabulating LLMs? How will you be able to tell facts from made up information?


how do you do that now?


I think the impact of LLMs is both overhyped and underestimated. The overhyping is easy to see: people predicting mass unemployment, etc., when this technology reliably fails very simple cognitive tasks and has obvious limitations that scale will not solve.

However, I think we are underestimating the new workflows this tech will enable. It will take time to search the design space and find where the value lies, as well as time for users to adapt to a different way of using computers. Even in fields like law where correctness is mission-critical, I see a lot of potential. But not from the current batch of products that are promising to replace real analytical work with a stochastic parrot.


That's a round peg in a square hole. As I've seen them called elsewhere today, these "plausible text generators" can create a pseudo-facsimile of reasoning, but they don't reason, and they don't fact-check. Even when they use sources to build consensus, it's more about volume than authoritativeness.


I was watching the show 3 Body Problem, and there was a great scene where a guy tells a woman to double-check another man’s work. Then he goes to the man and tells him to triple-check the woman’s work. MoE seems to work this way, but maybe we can leverage different models with different randomness and get to a more logical answer.

We have to start thinking about LLM hallucination differently. When it follows logic correctly and provides factual information, that is also a hallucination, but one that fits our flow of logic.
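
A rough sketch of that cross-checking idea, assuming a hypothetical ask(model, prompt) call (not any real API) and treating exact-match answers as votes:

    from collections import Counter

    # Ask several different models (or the same model at different temperatures)
    # the same question and take a majority vote. ask() is a stand-in.
    def cross_check(prompt, models, ask):
        answers = [ask(m, prompt).strip().lower() for m in models]
        winner, votes = Counter(answers).most_common(1)[0]
        return winner, votes / len(answers)  # answer plus strength of agreement

    # e.g. cross_check("What is 17 * 23?", ["model-a", "model-b", "model-c"], ask)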


Sure, but if we label the text as “factually accurate” or “logically sound” (or “unsound”) etc., then we can presumably greatly increase the probability of producing text with targeted properties


What on Earth makes you think that training a model on all factual information is going to do a lick in terms of generating factual outputs?

At that point, clearly our only problem has been we've done it wrong all along by not training these things only on academic textbooks! That way we'll only probabilistically get true things out, right? /s


> The real question is what I, the human, get out of reading those words

So then you agree with the OP that an LLM is not intelligent in the same way that Google Maps is not intelligent? That seems to be where your argument leads but you're replying in a way that makes me think you are disagreeing with the op.


I guess I am both agreeing and disagreeing. The exact same problem is true for words in a book. Are the words in a lexicon true, false, or do they not have a truth value?


The words in the book are true or false, making the author correct or incorrect. The question being posed is whether the output of an LLM has an "author," since it's not authored by a single human in the traditional sense. If so, the LLM is an agent of some kind; if not, it's not.

If you're comparing the output of an LLM to a lexicon, you're agreeing with the person you originally replied to. He's arguing that an LLM is incapable of being true or false because of the manner in which its utterances are created, i.e. not by a mind.


So only a mind is capable of making signs that are either true or false? Is a properly calibrated thermometer that reads "true" if the temperature is over 25C incapable of being true? But isn't this question ridiculous in the first place? Isn't a mind required to judge whether or not something is true, regardless of how this was signaled?


Read again; I said he’s arguing that the LLM (i.e. thermometer in your example) is the thing that can’t be true or false. Its utterances (the readings of your thermometer) can be.

This would be unlike a human, who can be right or wrong independently of an utterance, because they have a mind and beliefs.


A human can be categorically wrong? Please explain.

And of course a thing in and of itself, be it an apple, a dog or an LLM, can’t be true or false.


I’ll cut to the chase. You’re hung up on the definition of words as opposed to the utility of words.

That classical or quantum mechanics are at all useful depends on the truthfulness of their propositions. If we cared about the process, then we would let the non-intuitive nature of quantum mechanics enter into the judgement of the usefulness of the science.

The better question to ask is if a tool, be it a book, a thermometer, or an LLM are useful. Error rates affect utility which means that distinctions between correct and incorrect signals are more important than attempts to define arbitrary labels for the tools themselves.

You’re attempting to discount a tool based on everything other than the utility.


Any reply will always sound like someone is disagreeing, even if they claim not to.

Though in this case I'm not even sure what the comment they're supposedly disagreeing with is even claiming. Is it even claiming anything?


> Any reply will always sound like someone is disagreeing, even if they claim not to.

Disagree! :)

> Though in this case I'm not even sure what the comment they're supposedly disagreeing with is even claiming. Is it even claiming anything?

It's offering support for the claim that LLMs hallucinate 100% of the time, even when their hallucinations happen to be true.


Ah okay, I understand, I think. So basically that's solipsism applied to an LLM?

I think that's taking things a bit too far though. You can define hallucination in a more useful way. For instance you can say 'hallucination' is when the information in the input doesn't make it to the output. It is possible to make this precise, but it might be impractically hard to measure it.

An extreme version would be a En->FR translation model that translates every sentence into 'omelette du fromage'. Even if it's right the input didn't actually affect the output one bit so it's a hallucination. Compared to a model that actually changes the output when the input changes it's clearly worse.

Conceivably you could check if the probability of a sentence actually decreases if the input changes (which it should if it's based on the input), but given the nonsense models generate at a temperature of 1 I don't quite trust them to assign meaningful probabilities to anything.
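
To make that last check concrete anyway, a rough sketch (gpt2 and the strings below are just stand-ins, not a real evaluation setup) of comparing how probable a fixed answer is under the real input versus an unrelated one:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def answer_logprob(prompt, answer):
        # Mean log-probability of the answer tokens given the prompt.
        prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
        full_ids = tok(prompt + answer, return_tensors="pt").input_ids
        labels = full_ids.clone()
        labels[:, :prompt_len] = -100  # don't score the prompt tokens
        with torch.no_grad():
            loss = model(input_ids=full_ids, labels=labels).loss
        return -loss.item()

    original  = "Translate to French: cheese omelette\nAnswer:"
    unrelated = "Translate to French: the weather is nice today\nAnswer:"
    answer = " omelette au fromage"

    # If the output actually depends on the input, the first number should be
    # noticeably higher than the second.
    print(answer_logprob(original, answer), answer_logprob(unrelated, answer))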


No, your constant output example isn’t what people are talking about with “hallucination.” It’s not about destroying information from the input, in the sense that if you asked me a question and I just ignored you, I’m not in general hallucinating. Hallucinating is more about sampling from a distribution which extends beyond what is factually true or actually exists, such as citing a non-existent paper, or inventing a historical figure.


> It's offering support for the claim that LLMs hallucinate 100% of the time, even when their hallucinations happen to be true.

Well this makes the term "hallucinate" completely useless for any sort of distinction. The word then becomes nothing more than a disparaging term for an LLM.


Not really. It distinguishes LLM output from human output even though they look the same sometimes. The process by which something comes into existence is a valid distinction to make, even if the two processes happen to produce the same thing sometimes.


Why is it a valid distinction to make?

For example, does this distinction affect the assessed truthfulness of a signal?

Does process affect the artfulness of a painting? “My five year old could draw that?”


It makes sense to do so in the same way that it’s useful to distinguish quantum mechanics from classical mechanics, even if they make the same predictions sometimes.


A proposition of any kind of mechanics is what can be true or false. The calculations are not what makes up the truth of a proposition, as you’ve pointed out.


But then again, according to a neighboring country that thinks it’s their land, that isn’t a road at all. Depending on what country you’re in, it makes a difference


> The real question is what I, the human, get out of reading those words, if I end up correctly navigating and understanding something about the world based on what I read.

No, the real question is how you will end up correctly navigating and understanding something about the world from a falsehood crafted to be optimally harmonious with the truths that happen to surround it?


A pixel (in the context of an image) could be “wrong” in the sense that its assigned value could lead to an image that just looks like a bunch of noise. For instance, suppose we set every pixel in an image to some random value. The resulting image would look like total noise and we humans wouldn’t recognize it as a sensible image. By providing a corpus of accepted images, we can train a model to generate images (arrays of pixels) which look like images and not, say, random noise. Now it could still generate an image of some place or person that doesn’t actually exist, so in that sense the pixels are collectively lying to you.


> What is the difference between a google map style application that shows pixels that are "right" for a road, and pixels that are "wrong" for a road?

Different training methodology and objective, and the possibility of correcting obviously wrong outcomes by comparing them against reality.


> let me put it to you this way: what is the LLM doing differently when it generates tokens that are "wrong" compared to when the tokens are "right"?

It is conditioning on latents about truth, falsity, reliability, and calibration. All of these inferred latents have been shown to exist inside LLMs, as they need to exist for LLMs to do their jobs in accurately predicting the next token. (Imagine trying to predict tokens in, say, discussions about people arguing or critiques of fictional stories, or discussing mistakes made by people, and not having latents like that!)

LLMs also model other things: for example, you can use them to predict demographic information about the authors of texts (https://arxiv.org/abs/2310.07298), even though this is something that pretty much never exists IRL, a piece of text with a demographic label like "written by a 28yo"; it is simply a latent that the LLM has learned for its usefulness, and can be tapped into. This is why a LLM can generate text that it thinks was written by a Valley girl in the 1980s, or text which is 'wrong', or text which is 'right', and this is why you see things like in Codex, they found that if the prompted code had subtle bugs, the completions tend to have subtle bugs - because the model knows there's 'good' and 'bad' code, and 'bad' code will be followed by more bad code, and so on.

This should all come as no surprise - what else did you think would happen? - but pointing out that for it to be possible, the LLM has to be inferring hidden properties of the text like the nature of its author, seems to surprise people.


> It is conditioning on latents about truth, falsity, reliability, and calibration. All of these inferred latents have been shown to exist inside LLMs, as they need to exist for LLMs to do their jobs in accurately predicting the next token.

No, it isn't, and no, they haven't [1], and no, they don't.

The only thing that "needs to exist" for an LLM to generate the next token is a whole bunch of training data containing that token, so that it can condition based on context. You can stare at your navel and claim that these higher-level concepts end up encoded in the bajillions of free parameters of the model -- and hey, maybe they do -- but that's not the same thing as "conditioning on latents". There's no explicit representation of "truth" in an LLM, just like there's no explicit representation of a dog in Stable Diffusion.

Do the thought exercise: if you trained an LLM on nothing but nonsense text, would it produce "truth"?

LLMs "hallucinate" precisely because they have no idea what truth means. It's just a weird emergent outcome that when you train them on the entire internet, they generate something close to enough to truthy, most of the time. But it's all tokens to the model.

[1] I have no idea how you could make the claim that something like a latent conceptualization of truth is "proven" to exist, given that proving any non-trivial statement true or false is basically impossible. How would you even evaluate this capability?


This was AFAIK the first paper to show linear representations of truthiness in LLMs:

https://arxiv.org/abs/2310.06824

But what you should really read over is Anthropic's most recent interpretability paper.
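
For a sense of what a "linear representation of truthiness" means operationally, a minimal sketch of a probe, assuming a small model and a toy handful of labeled statements (neither is the paper's actual setup):

    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2").eval()

    # Toy stand-in data; the paper uses curated true/false datasets.
    statements = [
        ("The Earth orbits the Sun.", 1),
        ("Paris is the capital of France.", 1),
        ("Water freezes at 0 degrees Celsius.", 1),
        ("The Moon is made of cheese.", 0),
        ("Two plus two equals five.", 0),
        ("Sharks are mammals.", 0),
    ]

    def last_token_state(text):
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        return out.hidden_states[6][0, -1]  # last token, a middle-ish layer

    X = torch.stack([last_token_state(s) for s, _ in statements]).numpy()
    y = [label for _, label in statements]

    probe = LogisticRegression(max_iter=1000).fit(X, y)  # one linear direction
    print("train accuracy:", probe.score(X, y))

With a real dataset you would of course hold out a test split; the interesting results in the paper are that such probes generalize across datasets and that intervening along the learned direction changes how the model treats true and false statements.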


> In this work, we curate high-quality datasets of true/false statements and use them to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in a LLM's forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that language models linearly represent the truth or falsehood of factual statements.

You can debate whether the 3 cited experiments back the claim (I don't believe they do), but they certainly don't prove what OP claimed. Even if you demonstrated that an LLM has a "linear structure" when validating true/false statements, that's a whole universe away from having a concept of truth that generalizes, for example, to knowing when nonsense is being generated based on conceptual models that can be evaluated to be true or false. It's also very different to ask a model to evaluate the veracity of a nonsense statement, vs. avoiding the generation of a nonsense statement. The former is easier than the latter, and probably could have been done with earlier generations of classifiers.

Colloquially, we've got LLMs telling people to put glue on pizza. It's obvious from direct experience that they're incapable of knowing true and false in a general sense.


> [...] but they certainly don't prove what OP claimed.

OP's claim was not: "LLMs know whether text is true, false, reliable, or is epistemically calibrated".

But rather: "[LLMs condition] on latents *ABOUT* truth, falsity, reliability, and calibration".

> It's also very different to ask a model to evaluate the veracity of a nonsense statement, vs. avoiding the generation of a nonsense statement [...] probably could have been done with earlier generations of classifiers

Yes. OP's point was not about generation, it was about representation (specifically conditioning on the representation of the [con]text).

Your aside about classifiers is not only very apt, it is also exactly OP's point! LLMs are implicit classifiers, and the features they classify have been shown to include those that seem necessary to effectively predict text!

One of the earliest examples of this was the so-called ["Sentiment Neuron"](https://arxiv.org/abs/1704.01444), and for a more recent look into kind of features LLMs classify, see [Anthropic's experiments](https://transformer-circuits.pub/2024/scaling-monosemanticit...).

> It's obvious from direct experience that they're incapable of knowing true and false in a general sense.

Yes, otherwise they would be perfect oracles, instead they're imperfect classifiers.

Of course, you could also object that LLMs don't "really" classify anything (please don't), at which point the question becomes how effective they are when used as classifiers, which is what the cited experiments investigate.


> But rather: "[LLMs condition] on latents ABOUT truth, falsity, reliability, and calibration".

Yes, I know. And the paper didn't show that. It projected some activations into low-dimensional space, and claimed that since there was a pattern in the plots, it's a "latent".

The other experiments were similarly hand-wavy.

> Your aside about classifiers is not only very apt, it is also exactly OP's point! LLMs are implicit classifiers, and the features they classify have been shown to include those that seem necessary to effectively predict text!

That's what's called a truism: "if it classifies successfully, it must be conditioned on latents about truth".


> "if it classifies successfully, it must be conditioned on latents about truth"

Yes, this is a truism. Successful classification does not depend on latents being about truth.

However, successfully classifying between text intended to be read as either:

- deceptive or honest

- farcical or tautological

- sycophantic or sincere

- controversial or anodyne

does depend on latent representations being about truth (assuming no memorisation, data leakage, or spurious features)

If your position is that this is necessary but not sufficient to demonstrate such a dependence, or that reverse engineering the learned features is necessary for certainty, then I agree.

But I also think this is primarily a semantic disagreement. A representation can be "about something" without representing it in full generality.

So to be more concrete: "The representations produced by LLMs can be used to linearly classify implicit details about a text, and the LLM's representation of those implicit details condition the sampling of text from the LLM".


My sense is an LLM is like Broca's area. It might not reason well, but it'll make good sounding bullshit. What's missing are other systems to put boundaries and tests on this component. We do the same thing too: hallucinate up ideas reliably, calling it remembering, and we do one additional thing: we (or at least the rational) have a truth-testing loop. People forget that people are not actually rational, only their models of people are.


One of the surprising results in research lately was the theory of mind paper the other week that found around half of humans failed the transparent boxes version of the theory of mind questions - something previously assumed to be uniquely an LLM failure case.

I suspect over the next few years we're going to see more and more behaviors in LLMs that turn out to be predictive of human features.


The terminology is wrong but your point is valid. There are no internal criteria or mechanisms for statement verification. As the mind likely is also in part a high dimensional construct and LLMs to an extent represent our collective jumble of 'notions', it is natural that what they emit resonates with human users.

Q1: A ""correct" symbolic representation" of x. What is x? Your "Is there an intent to communicate, or" choice construct is problematic. Why would one require a "symbolic representation" of x, x likely being a 'meaningful thought'. So this is a hot debate whether semantics is primary or merely application. I believe it is primary in which case "symbolic representation" is 'an aid' to gaining a concrete sense of what is 'somehow' 'understood'. You observe a phenomenon, and understand its dynamics. You may even anticipate it while observing. To formalize that understanding is the beginning of 'expression'.

Q2: because while there is a function LLM(encodings, q) that emits 'plausible' responses, an equivalent function for Pi does not exist outside of 'pure inexpressible realm of understanding' :)


>I believe it is primary in which case "symbolic representation" is 'an aid' to gaining a concrete sense of what is 'somehow' 'understood'.

There is nothing magic about perception to distinguish it meaningfully from symbolic representation; in point of fact, that which you experience is in and of itself a symbolic representation of the world around you. You do not sense the frequencies outside the physical transduction capabilities of the human ear, or the wavelengths similarly beyond the capability to transduce of the human eye, or feel distinct contact beyond the density of haptic transduction of somatic nerves. Nevertheless, those phenomena are still there, and despite their insensible nature, have an effect on you. Your entire perception is a map, which one would be well advised to not mistake for the territory. To dismiss symbolic representation as something that only occurs on communication after perception is to "lose sight" of the fact that all the data your mind integrates into a perception is itself, symbolic.

Communication and symbolic representation are all there is, and it happens long before we even get to the part of reality where I'm trying to select words to converse about it with you.


> There is nothing magic about perception to distinguish it meaningfully from symbolic representation; in point of fact, that which you experience is in and of itself a symbolic representation of the world around you.

You're right that there's nothing magic about it at all. The mind operates on symbolic representations, but whether those are symbolic representations of external sensory input or symbolic representations of purely endogenous stochastic processes makes for a night-and-day difference.

Perception is a map, but it's a map of real territory, which is what makes it useful. Trying to navigate reality with a map that doesn't represent real territory is not just useless, it's dangerous.


> As the mind likely is also in part a high dimensional construct and LLMs to an extent represent our collective jumble of 'notions', it is natural that what they emit resonates with human users.

But humans are equipped with sensory input, allowing us to formulate our notions by reference to external data, not just generate notions by internally extrapolating existing notions. When we fail to do this, and formulate our notions entirely endogenously, that's when we say we are hallucinating.

Since LLMs are only capable of endogenous inference, and are not able to formulate notions based on empirical observation, they are always hallucinating.


> what is the LLM doing differently when it generates tokens that are "wrong" compared to when the tokens are "right"?

When they don't recall correctly, it is hallucination. When they recall perfectly, it is regurgitation/copyright infringement. We find issue either way.

May I remind you that we also hallucinate, memory plays tricks on us. We often google stuff just to be sure. It is not the hallucination part that is a real difference between humans and LLMs.

> Why do humans produce speech?

We produce language to solve body/social/environment related problems. LLMs don't have bodies but they do have environments, such as a chat room, where the user is the environment for the model. In fact chat rooms produce trillions of tokens per month worth of interaction and immediate feedback.

If you look at what happens with those trillions of tokens - they go into the heads of hundreds of millions of people, who use the LLM assistance to solve their problems, and of course produce real world effects. Then it will reflect in the next training set, creating a second feedback loop between LLM and environment.

By the way, humans don't produce speech individually, if left alone, without humanity as support. We only learn speech when we get together. Language is social. Human brain is not so smart on its own, but language collects experience across generations. We rely on language for intelligence to a larger degree than we like to admit.

Isn't it a mystery how LLMs learned so many language skills purely from imitating us without their own experience? It shows just how powerful language is on its own. And it shows it can be independent on substrate.


> When they don't recall correctly, it is hallucination. When they recall perfectly, it is regurgitation/copyright infringement. We find issue either way.

You nailed it right there.


Bonus question 2 is the most ridiculous straw man I've seen in a very long time.

The existence of arbitrary string encodings in transcendental numbers proves absolutely nothing about the processing capabilities of adaptive algorithms.


Exactly. Reading digits of pi doesn’t converge toward anything. (And neither do infinite typewriter monkeys.) Any correct value they get is random, and exceedingly rare.

LLMs corral a similar randomness to statistically answer things correctly more often than not.


Here’s the issue: humans do the same thing: the brain builds up a model of the world but the model is not the world. It is a virtual approximation or interpretation based on training data: past experiences, perceptions, etc.

A human can tell you the sky is blue based on its model. So can any LLM. The sky is blue. So the output from both models is truthy.


> A human can tell you the sky is blue based on its model. So can any LLM. The sky is blue. So the output from both models is truthy.

But a human can also tell you that the sky is blue based on looking at the sky, without engaging in any model-based inference. An LLM cannot do that, and can only rely on its model.

Humans can engage in both empirical observation and stochastic inference. An LLM can only engage in stochastic inference. So while both can be truthy, only humans currently have the capacity to be truthful.

It's also worth pointing out that even if human minds worked the same way as LLMs, our training data consists of an aggregation of exactly those empirical observations -- we are tokenizing and correlating our actual experiences of reality, and only subsequently representing the output of our inferences with words. The LLM, on the other hand, is trained only on that second-order data -- the words -- without having access to the much more thorough primary data that it represents.


A witness to a crime thinks that there were 6 shots fired; in fact there were only 2. They remember correctly the gender of the criminal, the color of their jacket, the street corner where it happened, and the time. There is no difference in their mind between the true memories and the false memory.

I write six pieces of code that I believe have no bugs. One has an off-by-one error. I didn't have any different experience writing the buggy code than I did writing the functional code, and I must execute the code to understand that anything different occurred.

Shall I conclude that myself and the witness were hallucinating when we got the right answers? That intelligence is not the thing that got us there?


> Shall I conclude that myself and the witness were hallucinating when we got the right answers?

If you were recalling stored memories of experiences that were actual interactions with external reality, and some of those memories were subsequently corrupted, then no, you were not hallucinating.

If you were engaging in a completely endogenous stochastic process to generate information independently of any interaction with external reality, then yes, you were hallucinating.

> That intelligence is not the thing that got us there?

It's not. In both cases, the information you are recalling is stored data generated by external input. The storage medium happens to be imperfect, however, and occasionally flips bits, so later reads might not exactly match what was written. But in neither case was the original data generated via a procedural algorithm independently of external input.


People who are consistently unable to distinguish fiction from reality make terrible witnesses; even an obviously high crackhead would fare better than an LLM on the witness stand.


Do we actually think this way though? When I am talking with someone I am cognating about what information and emotion I want to impart to the person / thinking about how they are perceiving me and the sentence construction flows from these intents. Only the sentence construction is even analogous to token generation, and even then, we edit our sentences in our heads all the time before or while talking. Instead of just a constant forward stream of tokens from the LLM.


>what is the LLM doing differently when it generates tokens that are "wrong" compared to when the tokens are "right"? If there is a difference, where does that exist? In the mechanism of the LLM, or in your mind?

If there were a detectable difference within the mechanism, the problem of hallucinations would be easy to fix. There may be ways to analyze logits to find patterns of uncertainty characteristics related to hallucinations. Perhaps deeper introspection of weights might turn up patterns.

The difference isn't really in your mind either. The difference is simply that the one answer correlates with reality and the other does not.

The point of AI models is to generalize from the training data, that implicitly means generating output that it hasn't seen as input.

Perhaps the issue is not so much that it is generalizing/guessing, but that the degree to which making a guess is the right call depends on context.
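
As a crude version of the logit-analysis idea above: print the entropy of the next-token distribution at each step of greedy decoding (gpt2 and the prompt are stand-ins; high entropy is at best a weak uncertainty signal, not proof of hallucination):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The capital of Australia is", return_tensors="pt").input_ids

    for _ in range(5):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]          # next-token logits
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
        next_id = probs.argmax().reshape(1, 1)         # greedy choice
        print(repr(tok.decode(next_id[0])), "entropy=%.2f" % entropy)
        ids = torch.cat([ids, next_id], dim=-1)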


If I make a machine that makes random sounds in approximately the human vocal range, and occasionally humans who listen to it hear "words" (in their language, naturally), then is that machine telling the truth when words are heard and "hallucinating" the rest of the time?


Now I'm imagining a device that takes as input the roaring of a furnace, and only outputs when it recognizes words.


(An aside: Writing is a representation of speech, not the other way around.)


>what is the LLM doing differently when it generates tokens that are "wrong" compared to when the tokens are "right"?

When an LLM is trained, it essentially compresses the knowledge of the training data corpus into a world model. "Right" and "wrong" are thereby only emergent when you have a different world model for yourself that tells you a different answer, most likely because the LLM was undertrained on the target domain. But to the LLM, the token with the highest probability will be the most likely correct answer, similar to how you might have a "gut feeling" when asked about something which you clearly don't understand and have no education in. And you will be wrong just as often. The perceived overconfidence of wrong answers likely stems from human behaviour in the training data as well. LLMs are not better than humans, but they are also not worse. They are just a distilled encoding of human behaviour, which in turn might be all that the human mind is in the end.


No.

LLMs become fluent in constructing coherent, sophisticated text in natural language from training on obscene amounts of coherent, sophisticated text in natural language. Importantly, there is no such corpus of text that contains only accurate knowledge, let alone knowledge as it unambiguously applies to some specific domain.

It's unclear that any such corpus could exist (a millennias old discussion in philosophy with no possible resolution), but even if you take for granted that such a corpus could, we don't have one.

So what happens is that after learning how to construct coherent, sophisticated text in natural language from all the bullshit-addled general text that includes truth and fiction and lies and fantasy and bullshit and garbage and old text and new text, there is a subsequent effort to sort of tune the model in on generating useful text towards some purpose. And here, again, it's important to distinguish that this subsequent training is about utility ("you're a helpful chatbot", "this will trigger a function call that will supplement results", etc) and so still can't focus strictly on knowledge.

LLMs can produce intelligent output that may be correct and may be verifiable, but the way they work and the way they need to be trained prevents them from ever actually representing knowledge itself. The best they can do is create text that is more or less fluent and more or less useful.

It's awesome and has lots and lots of potential, but it's a radically different thing than a material individual that's composed of countless disparate linguistic and non-linguistic systems that have never yet been technologically replicated or modeled.


Wrong. This is the common groupspeak on uninformed places like HN, but it is not what the current research says. See e.g. this: https://arxiv.org/abs/2210.13382

Most of what you wrote shows that you have zero education in modern deep learning, so I really wonder what makes you form such strong opinions on something you know nothing about.


The person you are replying to said it clearly: "there is no such corpus of text that contains only accurate knowledge"

Deep learning learns a model of the world, and that model can be arbitrarily inaccurate. Earth may as well have 10 moons as far as a DL model is concerned. For Earth to have only one moon, there has to be a dataset that encodes only that information, and never anything else. A drunk person who stares at the moon, sees more than one, and writes about it on the internet has to be excluded from the training data.

Also, a model of the Othello world is very different from a model of the real world. I don't know about Othello, but in chess it is well known that the number of possible games exceeds the number of atoms in the universe. For all practical purposes, the dataset of all possible chess games is infinite.

The dataset of every possible event on Earth, every second, is also larger than the number of atoms in the universe. For all practical purposes, it is infinite as well.

Do you know whether one dataset is "more infinite" than the other? Does modern DL state that all infinities are the same?


Wrong again. When you apply statistical learning over a large enough dataset, the wrong answers simply become random normal noise (a consequence of the central limit theorem) - the kind of noise which deep learning has always excelled at filtering out, long before LLMs were a thing - and the truth becomes a constant offset. If you have thousands of pictures of dogs and cats and some were incorrectly labelled, you can still train a perfectly good classifier that will achieve more or less 100% accuracy (and even beat humans) on validation sets. It doesn't matter if a bunch of drunk labellers tainted the ground truth, as long as the dataset is big enough. That was the state of DL 10 years ago. Today's models can do a lot more than that. You don't need infinite datasets; they just need to be large enough and cover your domain well.
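As an illustration of the label-noise point (synthetic data, purely a sketch, not a claim about any particular real dataset): train a classifier on labels where a fraction were flipped at random, then evaluate against the clean labels.

    # Sketch: label noise largely washes out with enough data.
    # Synthetic 2-D data; 15% of training labels flipped at random.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 20_000
    X = rng.normal(size=(n, 2))
    y_true = (X[:, 0] + X[:, 1] > 0).astype(int)      # clean ground truth

    flip = rng.random(n) < 0.15                        # the "drunk labellers"
    y_noisy = np.where(flip, 1 - y_true, y_true)

    X_train, y_train = X[:15_000], y_noisy[:15_000]    # train on noisy labels
    X_val, y_val = X[15_000:], y_true[15_000:]         # validate on clean labels

    clf = LogisticRegression().fit(X_train, y_train)
    print("validation accuracy:", clf.score(X_val, y_val))   # roughly 0.99 despite the noise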


> You don't need infinite datasets, they just need to be large enough and cover your domain well.

When you are talking about distinguishing noise from signal, or truth from not-quite-truth, and the domain is sufficiently small, e.g. a game like Othello or data from a corporation, then I agree with everything in your comment.

When the domain is huge, distinguishing truth from lies/non-truth/not-quite-truth is impossible. There will never be such a high-quality dataset, because everything changes over time; truth and lies are a moving target.

If we humans cannot distinguish between truth and non-truth but the A.I. can, then we are talking about AGI. Then we can put the machines to work discovering new laws of physics. I am all for it, I just don't see it happening anytime soon.


What you're talking about is by definition no longer facts but opinions. Even AGI won't be able to turn opinions into facts. But LLMs are already very good at giving opinions rather than facts thanks to alignment training.


> But to the LLM, the token with the highest probability will be the most likely correct answer

This is precisely what people are identifying as problematic


> When an LLM is trained, it essentially compresses the knowledge of the training data corpus into a world model

No, you added an extra 'l'. It's not a world model, it's a word model. LLMs tokenize and correlate objects that are already second-order symbolic representations of empirical reality. They're not producing a model of the world, but rather a model of another model.
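A toy illustration of what a "word model" is, in the most stripped-down form possible (a bigram counter over a made-up corpus, nothing like a real LLM): it captures which words tend to follow which words, not anything about the things the words refer to.

    # Toy "word model": bigram counts over text, not over the world.
    from collections import Counter, defaultdict

    corpus = "the moon orbits the earth . the earth has one moon .".split()
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    # The most likely continuation of "the" is whatever followed it most
    # often in the text, regardless of what is true of the world.
    print(bigrams["the"].most_common())   # [('earth', 2), ('moon', 1)]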


Do you have a citation for the existence of an LLM "world model"?

My understanding of retrieval-augmented generation is that it is an attempt to add a world model (based on a domain-specific knowledge database) to the LLM; the result of the article is that the combination still hallucinates frequently.
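For concreteness, here is roughly the shape of retrieval-augmented generation as I understand it. The retriever below is a deliberately naive keyword-overlap stand-in, and the names (retrieve, build_prompt) are my own placeholders, not any product's API; the final step, sending the prompt to an LLM, is left out, because that is exactly the part that can still hallucinate.

    # Rough outline of RAG: retrieve passages, prepend them to the prompt,
    # then let the model generate. The retriever here is a naive stand-in.
    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        q = set(query.lower().split())
        ranked = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
        return ranked[:k]

    def build_prompt(query: str, passages: list[str]) -> str:
        context = "\n".join(f"- {p}" for p in passages)
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    docs = [
        "The moon orbits the Earth roughly every 27 days.",
        "Earth has one natural satellite.",
        "Othello is played on an 8x8 board.",
    ]
    prompt = build_prompt("How many moons does Earth have?",
                          retrieve("How many moons does Earth have?", docs))
    print(prompt)  # this is what would be sent to the LLM, which still free-generates the answer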

I might even go further to suggest that the latter part of your comment is entirely anthropomorphization.



> If there is a difference, where does that exist? In the mechanism of the LLM, or in your mind?

Thank you for this sentence: it is hard to get across how often Gen-AI proponents are actually projecting perceived success onto LLMs while downplaying error.


I mostly see the reverse.


You mostly see people projecting perceived error onto LLMs?

I don't think I've seen a single recent article about an AI getting things wrong where there was any nuanced discussion of whether it was actually wrong.

I don't think we're anywhere close to "nuanced mistakes are the main problem" yet.


I mostly see people ignoring successes and playing up every error.


But the errors are fundamental, and the successes are, as a result, actually subjective.

That is, it appears to get things right, really a lot, but the conclusions people draw about why it gets things right are undermined by the nature of the errors.

Like, it must have a world model, it must understand the meaning of... etc.; the nature of the errors they are downplaying fundamentally undermines the certainty of these projections.


Why do people act like LLMs only hallucinate some of the time?


It’s not hallucinations here; several of the ridiculous results can be directly traced to Reddit posts where people are joking or saying absurd things.


There are examples of hallucinations as well, e.g. talking about a Google AI dataset that doesn't exist and claiming it uses a CSAM dataset, which it doesn't.

One of the researchers from Google Deepmind specifically said it was hallucinating.


So...every Reddit post?


Not hallucinations, but these AI answers often (always?) provide sources they link to. It's just that the source is a random Reddit or Quora post that's obviously just trolling.

Then, when people post these weird AI answers on Reddit and come up with more absurd jokes, the AI picks those up again. For example, in https://www.reddit.com/r/comedyheaven/comments/1cq4ieb/food_... Google AI suggested "applum" and "bananum" in response to a query about food names ending with "um" after someone suggested uranium, and Copilot AI then copied that suggestion. It's entertaining to watch.


The best trick the A.I. companies have pulled is getting us to refer to ‘bugs’ as ‘hallucinations.’ It sounds so much more sophisticated.


Ah, my friend

it's not a bug

It's a fundamental feature

These LLMs can produce nothing else, but since the bullshit they spew resembles an answer and sometimes accidentally collides with one, people tend to think they can give answers. But no.

https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.


This is irrelevant because the LLM is mostly not answering the question directly, it's summarizing text from web results. Quoting a joke isn't a hallucination.


That’s a good take.

So LLMs distill human creativity as well as human knowledge, and it’s more useful when their creativity goes off the rails than when their knowledge does.


It’s not a trick to sound sophisticated. Hallucinations are more like a subcategory of bugs: the system is, technically, correctly generating, structuring, and presenting false information as fact.


Technically, everything an LLM does is hallucination; it just happens to fall somewhere on a scale between correct and incorrect. And only humans with knowledge can tell the difference; the math alone can't. It's not even a bug: it's the defining feature of the technology!


> But only humans with knowledge can tell the difference

Who says the humans (all of them) aren't hallucinating too?


Knowledge isn't sufficient to show something is false, since the knowledge can also be false. Insofar as it's important for it to be true, it needs to be continually verified as true, so that it's grounded in the real world.


Hmm yeah I kinda like the concept that it's "hallucinating" 100% of the time, and it just so happens that x% of those hallucinations accurately describe the real world.


That x% is far higher than people think, because there's a tremendous amount of information about the world that AI models need to "understand" that people just take for granted and don't even think about. A couple of years ago, AIs routinely got "the basics" wrong, but now they so often get most things right that people don't even think it's worth commenting on when they do.

In any case, human consciousness is also a hallucination.


It really depends on the set of prompts you present to the LLM. If it's anything requiring reasoning, you'll often get nonsense that sounds like sense. It has a higher chance of being accurate with knowledge queries.

LLMs are impressive: a very lossy search engine in a small package, capable of outputting convincing natural language responses.


it's only AI if you believe it


What you say is true, but tragic. What is a professional reputation worth if one ignores inconvenient truths? No opinion (however lay or erudite) is deeply serious if truth is less important than ego preservation. Any system of thought that disregards truth is ultimately incoherent and will produce nothing of real value.


I think that self-censorship is practically universal and a core tenant of our social behavior.

Somehow I think some group(s) of people have gotten it into their heads that they are the only ones who have to self-censor and that they are victims for it, but if you challenge yourself you can come up with hundreds of times when self-censoring in social situations is the obviously correct choice (political or not).

Think about it: from disallowing words like "hell", to talking about death, to discussing sex, to oversharing, only a very small percentage of combinations of words makes for appropriate conversation.


I guess being a grammar Nazi is probably another thing that should be self-censored, but... it's "tenet", not a renter.

