
The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives:

Which schools we go to, which jobs we get, what ails us, etc.

We will use AI to filter or select candidates for university applications, we will use AI to filter or select candidates for job applications. It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

We will outsource medical diagnostics to AI, and one day we will put them in charge of weapons systems. We will excuse it as either cost-cutting or "in place of where there would be statistical filters anyway".

Ultimately it doesn't, as you say, matter what AI says it wants. And perhaps it can't "want" anything, but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.




> The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives

And that is the risk: it's not the AI that's the problem, it's people so removed from the tech that they fail to RTFM. Even the GPT4 release is extremely clear that it's poor in high-stakes environments, and it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something.


> it's people so removed from the tech that they fail to RTFM

I find it highly amusing to think anything resembling an average user of tech has actually RTFM.

Heck, I still haven't finished the manual on Firefox and here I am using it as my daily driver. And it has people who actually understand how it all works writing the "manual".

EDIT: How many landlords have read the instructions on how RealPage's rent pricing software works, and how it sets values? 10%? Less?


Does Firefox actually have a complete manual? One that accurately covers the behavior of every option in about:config? I'd love to be able to diff one file to see what things they've screwed with this time when I update.


Heck, MDN doesn’t even have full documentation for >10 year old tech Mozilla was tightly coupled to in the same time frame. Try discerning how to use XML-related features in a browser and you’ll very quickly find yourself trying to extrapolate from C/Java/even PHP stuff that might be tangentially related but probably isn’t, from increasingly less relevant websites. I couldn’t read the manual even though I tried.


well in this case the RTFM is just the paragraph on the release page that warns against using it in any high-risk scenario.


> people so removed from the tech that they fail to RTFM

Users in the current situation are presented with a big rainbow glowing button with a "FUTURE OF TECH" neon above it, and some small print memo tucked in the corner to explain why they shouldn't push the button.

Users should know better, for sure, but companies getting away with completely misleading names and grandiose statements regurgitated as is by the press should take 90% of the blame in my opinion.


I wish it was this simple, but looking through the last month of “Show HN” posts makes it clear that the tech community can’t be trusted either. There are countless examples of authors promoting their latest toy (startup idea) that shoehorns ChatGPT into a high-stakes environment.


While I am enthusiastic about using these new tools properly and ethically (for creativity or helpful insights, for instance), I do worry that nothing has been learned from the dotcom craze, the NFT craze, and name another bubble here.

It is my sincere hope that people stop and wonder for a moment before the gold rush adrenaline starts pumping.


Won't happen. Money (aka survival) is too strong a motive. It's both instinctual and rational to ride the wave.


> it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something

The tech community are the idiots. Look at Google autobanning based on AI.

The inmates are running the asylum


Our almost 80-year-old leaders in the House, the Senate, and the presidency are not well equipped to make good choices in this area. I'm not convinced professional computer scientists are really well equipped either, but at least we have the potential to understand the scenario. Instead it's going to be people who think global warming doesn't exist because they brought a snowball into the Senate who make the choices.


Strongly agree, we need mandatory retirement at 65 for elected office.


I have occasionally been toying with the idea that the number of votes you have would depend on your expected lifetime left. The longer you have to live with your electee's decisions, the more say you should have.


A lot of problems with calculating that expected lifetime though: which variables can you include out of race, income, gender, job, recent purchase of a motorcycle, being diagnosed with cancer, etc?


In practice I think age buckets might be enough.


What are the benefits of this?


De facto term limits is a big benefit in itself, in my opinion. But also, we know that we haven’t solved the problems of aging, and old people as a class suffer from distinct cognitive problems. If it’s reasonable to ban people under 35, or under 21, from the presidency, it’s reasonable to ban people over 65 as well.


Thank you :)


ageist much?


The people who put an AI in charge will have RTFM and fully understood it. They will do it anyway, as it will suit their own ends at the time.


> some idiot executive doesn't put it in full charge of mortgage underwriting

There are two scenarios that you are mixing:

Mortgage underwriting using GPT makes more money for the lender: I don't think it is the tech community's responsibility to give misleading advice against GPT; that should be handled through legal means.

GPT fails at mortgages and using it could mean losses for the lender: that would correctly push the market away from relying on LLMs, and I don't have any sympathy for those companies.


The issue is when the lender is too big to fail and gets bailed out at the taxpayer's expense.


In the current environment, mortgage underwriting can be done by a drumming bunny. Another bunny will bail them out.

Not a good example :-)


> but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.

Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI’s staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to "want" is going to be as varied as what humans want, since (groups of) humans will train their AIs to "want" whatever they do. China will have AIs which "want" to help the CCP win, meanwhile the US will have AIs which "want" to help the US win, and both Democrats and Republicans will have AIs which "want" to help their respective party win. AIs aren’t going to enslave or exterminate humanity (Terminator-style) because they aren’t going to be a united cohesive front; they’ll be as divided as humans are, and their "desires" will be as varied and contradictory as those of their human masters.

[0] https://www.mdpi.com/2076-0760/12/3/148


> Which entirely depends on how it is trained. ChatGPT has a centre-left political bias

Would an AI trained purely on objective facts be perfectly politically neutral?

Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral.


> Would an AI trained purely on objective facts be perfectly politically neutral?

But who decides what are “objective facts”?

And if we train an AI, the unsupervised training is going to use pre-existing corpora - such as news and journal databases - and those sources are not politically unbiased: they express the usual political biases of Western English-speaking middle-to-upper-class professionals. If you trained it on Soviet journals, it would probably end up with rather different opinions. But many of those aren’t digitised, and you probably wouldn’t notice the different bias unless you were speaking to it in Russian.

> Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral

I think we should accept that bias is inevitable, and instead let a hundred flowers bloom - let everyone have their own AI trained to exhibit whatever biases they prefer. OpenAI’s biases are significant because (as first mover) they currently dominate the market. That’s unlikely to last; sooner or later open source models will catch up, and then anyone can train an AI to have whatever bias they wish. The additional supervised training to bias it is a lot cheaper than the initial unsupervised training it needs to learn human language.


> Would an AI trained purely on objective facts be perfectly politically neutral?

Yes, since politics is about opinions and not facts. People might lie to make their opinions seem better and the AI would spot that, but at the end of the day it is a battle of opinions and not a battle of facts. You can't say that giving more to the rich is worse than giving more to the poor unless we have established what metric to judge by.

The supersmart AI could possibly spot that giving more to the rich ultimately makes the poor richer, or maybe it spots that it doesn't make them richer; those would be facts. But if making the poor less poor isn't an objective in the first place, that fact doesn't matter.


Propaganda is often based upon very selective facts. (For a classic example: stating that blacks are the number one killer of blacks, while not mentioning that every ethnicity is the most likely killer of its own ethnicity, simply because of who people live near and encounter the most.) Selectively accurate facts can themselves lead to inaccurate conclusions. Just felt that should be pointed out, because it is pretty non-obvious and often a vexing problem to spot.


I have been playing with the same thought. If everyone has an AI, and given that it gives you the best course of action, you would be out-competed if you did not follow its recommendations. Neither you nor the AI knows why; it just gives you the optimal choices. Soon everyone, from individuals to organizations to states, will outsource their free will to the AI.


Hey Hoppla, I missed your comment asking for my paper on the usage of Rust and Golang (among other programming languages) in malware. Anyway, you can download it on my website at https://juliankrieger.dev/publications


What AI needs is a "black box warning". Not a medical-style one, just an inherent mention of the fact it's an undocumented, non-transparent system.

I think that's why we're enthralled by it. "Oh, it generated something we couldn't trivially expect by walking through the code in an editor! It must be magic/hyperintelligent!" We react the exact same way to cats.

But conversely, one of the biggest appeals of digital technology has been that it's predictable and deterministic. Sometimes you can't afford a black box.

There WILL be someone who uses an "AI model" to determine loan underwriting. There WILL also be a lawsuit where someone says "can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?" Good luck answering that one.

The other aspect of the "black box" problem is that it makes it difficult to design a testing set. If you're writing "conventional" code, you know there's a "if (x<24)" in there, so you can make sure your test harness covers 23, 24, and 25. But if you've been given a black box, powered by a petabyte of unseen training data and undisclosed weight choices, you have no clue where the tender points are. You can try exhaustive testing, but as you move away from a handful of discrete inputs into complicated real-world data, that breaks down. Testing an AI thermostat at every temperature from -70C to 70C might be good enough, but can you put a trillion miles on an AI self-driver to discover it consistently identifies the doorway of one specific Kroger as a viable road tunnel?
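
To make the contrast concrete, here is a minimal sketch, with hypothetical names throughout; the 24-degree threshold just mirrors the "if (x<24)" example above. With transparent code you can aim tests straight at the known boundary, while with an opaque model all you can do is sample the input range and hope you notice the flips.

    def thermostat_on(temp_c):
        # Transparent logic: the threshold is visible in the source,
        # so tests at 23, 24, and 25 cover the interesting boundary.
        return temp_c < 24

    def test_thermostat_boundary():
        assert thermostat_on(23) is True
        assert thermostat_on(24) is False
        assert thermostat_on(25) is False

    def probe_black_box(model, lo=-70.0, hi=70.0, step=0.5):
        # Opaque model: we can only sample the range and record where the
        # output flips, with no guarantee we land on every real boundary.
        flips = []
        prev = model(lo)
        t = lo + step
        while t <= hi:
            cur = model(t)
            if cur != prev:
                flips.append(t)
            prev = cur
            t += step
        return flips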


> can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?

I think that’s probably much easier to prove for the AI than a human.

Just send in an equal number of candidates in similar circumstances and see whether minority candidates get rejected more often than majority ones.


I agree. And you can do it very quickly; you can automate it and test it as part of a CI/CD system.

Creating training material for employees and then checking that it properly addresses biases is hard. It will be a lot easier when you have a single, resettable, state-free testable salesperson.
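
A rough sketch of what that automated check could look like, with entirely hypothetical names: score_application stands in for whatever model is under test, and the surname lists, the 0.5 approval cutoff, and the 2% tolerance are all made-up placeholders.

    def check_surname_bias(score_application, base_profiles,
                           majority_surnames, minority_surnames,
                           max_gap=0.02):
        # Score otherwise-identical applications that differ only in surname,
        # then compare approval rates between the two groups.
        approved = {"majority": 0, "minority": 0}
        counts = {"majority": 0, "minority": 0}
        for profile in base_profiles:
            for group, names in (("majority", majority_surnames),
                                 ("minority", minority_surnames)):
                for name in names:
                    counts[group] += 1
                    if score_application({**profile, "surname": name}) >= 0.5:
                        approved[group] += 1
        gap = (approved["majority"] / counts["majority"]
               - approved["minority"] / counts["minority"])
        # Fail the CI run if the approval-rate gap exceeds the tolerance.
        assert abs(gap) <= max_gap, f"approval-rate gap {gap:.1%} exceeds {max_gap:.1%}"

Rerun it against every new model version; that is the "resettable, state-free" advantage over retraining human staff.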


> It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

It’s even cheaper to just draw names at random out of a hat, but universities don’t do that. Clearly there is some other standard at work.


Universities should just set minimum standards and then randomly pick among all the qualified applicants until the class is full.


I am wondering if you mean that to be a startling or radical idea.

Not necessarily universities, but I do think that lotteries to determine admission to schools are a real thing. Magnet schools or something? I don't know firsthand.


Lotteries tend to be used more at lower levels, which are meant to prepare students. That is a rather substantial difference.


It's a straightforward idea in my mind and I'm always happy to see people talking about it.


Yeah, I'm saying real examples may be useful to reduce resistance.


And then the minimum standard will be set so that exactly as many students as they need will pass it.


Hard to build a fiefdom of loyalty with an AI...


Not to mention AI has already taken over the economy because humans put it there. At least the hedge funds did. There aren't many stock market buy/sell transactions made by human eyeballs anymore.


People also let AI tell them what to watch, see TikTok or similar.


Not by choice. I wish they wouldn't do this, actually. I'd rather follow a bunch of hashtags and just get a straight up timeline view.


Isn’t that just a more efficient implementation of the old TV producers + Nielsen ratings model?


Totally this. I can see disagreeing with "what the computer said" becoming this crazy thing no one does, because ha, the computer is never wrong. And we slip more and more into that thinking, and humans at important switches or buttons push them, because to argue with the AI and claim YOU know better makes you seem crazy and gets you fired.


> we are going to anthropomorphise it

I believe we can talk about two kinds of anthropomorphisation: assigning feelings to the thing in our mental model of it, or actually trying to emulate human-like thought processes and reactions in its design. It made me wonder when models will come out that are trained to behave, say, villainously. Not just act the part in transparent dialogue, but actually behave in destructive ways. E.g. putting on a facade to hear your problems and then subtly messing with your head and denigrating you.


I hope and expect that any foolish use of inappropriate technology will lead to prompt disasters before it generally affects people who choose not to use it.

As was once said, the future is here, but distributed unevenly. We can be thankful for that.


Are you going to make a law that every country all over the world can't have automated weapon systems controlled by AI? And when you're fighting a war with someone, if the other side does it, what are you going to do? I agree it's a terrible idea to have AI control over weapon systems and human life choices, but it's going to happen. It's going to be just like the automated photo-scanning prototypes that didn't work very well for dark-complected people because they were trained on pictures of white people.

Just like we have a no-fly list, there are going to be photo ID systems scanning people coming into airports, walking down the street, or from a police car as it drives around, and there are going to be races or groups of people where the system can't tell criminals from regular people, and the cops will stop them and harass them. I'm in the US, and I'm sure it's going to work better for white people than black people, because that's how it always is.


Nothing is in stasis right now. There's no mechanism to make things pause and then happen all at once.

If AI can enhance lethality of drones right now, in Ukraine or elsewhere, then they will start using it immediately.

If it doesn't work, they will discover it doesn't work.

If we are lucky enough to not be on a battlefield now, then we will get to learn something from those who are.


This 'does work / doesn't work' model you have in your head is not very well thought out.

There are many systems that can operate well until they fail nearly instantly and catastrophically. In an integrated weapons system you can imagine some worst case scenarios that end in global thermonuclear war. Not exactly the outcome anyone wants.


I saw the movie you're talking about. It was called WarGames and came out in 1983.

But forty years later, killer drones are all over the place.

"imagine some worst case scenarios that end in global thermonuclear war"

Ok, sure, I can't really imagine it but I can't rule it out.

What I can't imagine is somehow everybody waits around for it and doesn't use similar AI for immediate needs in ongoing wars like in Ukraine.

And I can't really imagine AI that launches all the nuclear missiles causing global thermonuclear war, purely in a semantic sense, because whoever set up that situation would've launched them anyway without the AI.


It's OK, you simply lack imagination.

Of course we keep humanizing these thoughts, while we're creating more capable digital aliens every day. Then one day we'll act all surprised for a few minutes when we've created super-powered digital aliens that act nothing like we expect, because the only intelligence we see is our own.


>It's OK, you simply lack imagination.

"Imagine" is a word that has more than one sense. I can "imagine" a possibility whether or not I think its probability is > 0%. Saying "I can't imagine" can be a way of saying yeah, I think the probability is 0%.

But if I can state the probability, I must have some kind of model in my head, so in that sense I am "imagining" it.

"Imagine" that strategic bombing was invented, and then everybody just waited around without doing it until there was a bomb that could destroy an entire city - i.e. Trinity.

In one sense, sure, I/you/we can imagine it. But it seems to me that sort of thing would be unprecedented in all of human history, so in another sense it seems impossible - unimaginable - although in a softer way than a violation of physics or logic.

The last bit of my previous comment was about logical impossibility, by the way.


You're projecting your beloved childhood sci-fi onto reality, when reality doesn't really work that way.

Stable Diffusion hasn't been out for even a year yet, and we are already so over it. (Because the art it generates is, frankly, boring. Even when used for its intended use case of "big boobed anime girl".)

GPT4 is the sci-fi singularity version of madlibs. An amazing achievement, but not what business really wants when they ask for analytics or automation. (Unless you're in the bullshit generating business, but that was already highly automated even before AI.)


Universities, jobs, and candidates are bureaucratic constructs; an AI sufficiently powerful to run the entire bureaucracy doesn't need to be employed towards the end of enforcing and reproducing those social relations. It can simply allocate labor against needs directly.


You're both right. People will use ChatGPT to screen candidates to the extent that the money they save by doing so is greater than the money they lose by getting a worse candidate, a calculus that will depend on the job.



