
I don't understand the obsession with asking ChatGPT what it wants and suggesting that the answer is somehow indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it. When they do, it just makes one think they have zero understanding of the tech.

We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI. Unless AI is able to monopolise food production while securing server farms and energy production I don't see it ever having leverage over humans.

Disruption, sure; increased automation, sure. But humanity's advantage remains its adaptability, and our AI processes remain dev-cycle bound. There's definitely work that will be done to bring the dev cycle closer to real-time so it can ingest more information and adapt on the fly, but aren't the techniques bound by CPU capacity, given how many cycles it needs to bump into all its walls?




The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives:

Which schools we go to, which jobs we get, what ails us, etc.

We will use AI to filter or select candidates for university applications, we will use AI to filter or select candidates for job applications. It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

We will outsource medical diagnostics to AI, and one day we will put them in charge of weapons systems. We will excuse it as either cost-cutting or "in place of where there would be statistical filters anyway".

Ultimately it doesn't, as you say, matter what AI says it wants. And perhaps it can't "want" anything, but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.


> The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives

And that is the risk. It's not the AI that's the problem, it's people so removed from the tech that they fail to RTFM. Even the GPT-4 release is extremely clear that it's poor in high-stakes environments, and it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something.


> it's people so removed from the tech that they fail to RTFM

I find it highly amusing to think anything resembling an average user of tech has actually RTFM.

Heck, I still haven't finished the manual on Firefox and here I am using it as my daily driver. And it has people who actually understand how it all works writing the "manual".

EDIT: How many landlords have read the instructions on how RealPage's rent pricing software works, and how it sets values? 10%? Less?


Does Firefox actually have a complete manual? One that accurately covers the behavior of every option in about:config? I'd love to be able to diff one file to see what things they've screwed with this time when I update.


Heck, MDN doesn’t even have full documentation for >10 year old tech Mozilla was tightly coupled to in the same time frame. Try discerning how to use XML-related features in a browser and you’ll very quickly find yourself trying to extrapolate from C/Java/even PHP stuff that might be tangentially related but probably isn’t, from increasingly less relevant websites. I couldn’t read the manual even though I tried.


Well, in this case the RTFM is just the paragraph on the release page that warns against using it in any high-risk scenario.


> people so removed from the tech that they fail to RTFM

Users in the current situation are presented with a big rainbow glowing button with a "FUTURE OF TECH" neon above it, and some small print memo tucked in the corner to explain why they shouldn't push the button.

Users should know better, for sure, but companies getting away with completely misleading names and grandiose statements regurgitated as is by the press should take 90% of the blame in my opinion.


I wish it was this simple, but looking through the last month of “show hn” posts makes it clear that the tech community can’t be trusted either. There are countless examples of authors promoting their latest toy (startup idea) to shoehorn chatgpt into a high stakes environment.


While I am enthusiastic about using these new tools properly and ethically (for instance, for creativity or helpful insights), I do worry that nothing has been learned from the dotcom craze, the NFT craze, and name another bubble here.

It is my sincere hope that people stop and wonder for a moment before the gold rush adrenaline starts pumping.


Won't happen. Money (aka survival) is too strong a motive. It's both instinctual and rational to ride the wave.


> it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something

The tech community are the idiots. Look at Google auto-banning based on AI.

The inmates are running the asylum


Our almost 80-year-old leaders in the House, in the Senate, and in the presidency are not well equipped to make good choices in this area. I'm not convinced professional computer scientists are really well equipped either, but at least we have the potential to understand the scenario. Instead it's going to be people who think global warming doesn't exist because they brought a snowball into the Senate making the choices.


Strongly agree, we need mandatory retirement at 65 for elected office.


I have occasionally been toying with the idea that the number of votes you have would depend on your expected remaining lifetime. The longer you have to live with your electee's decisions, the more say you should have.


A lot of problems with calculating that expected lifetime though: which variables can you include out of race, income, gender, job, recent purchase of a motorcycle, being diagnosed with cancer, etc?


In practice I think age buckets might be enough.


What are the benefits of this?


De facto term limits is a big benefit in itself, in my opinion. But also, we know that we haven’t solved the problems of aging, and old people as a class suffer from distinct cognitive problems. If it’s reasonable to ban people under 35, or under 21, from the presidency, it’s reasonable to ban people over 65 as well.


Thank you :)


ageist much?


The people who put an AI in charge will have RTFM and fully understood it. They will do it anyway, as it will suit their own ends at the time.


> some idiot executive doesn't put it in full charge of mortgage underwriting

There are two scenarios that you are mixing:

Mortgage underwriting using GPT makes more money for the lender: I don't think it is the tech community's responsibility to advise against GPT here; it should be handled in a legal way.

GPT fails at mortgage underwriting and using it means losses for the lender: that would correctly push the market away from relying on LLMs, and I don't have any sympathy for those companies.


The issue is when the lender is too big to fail and gets bailed out at the taxpayer's expense.


In the current environment, mortgage underwriting can be done by a drumming bunny. Another bunny will bail them out.

Not a good example :-)


> but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.

Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI's staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to "want" is going to be as varied as what humans want, since (groups of) humans will train their AIs to "want" whatever they do. China will have AIs which "want" to help the CCP win, meanwhile the US will have AIs which "want" to help the US win, and both Democrats and Republicans will have AIs which "want" to help their respective party win. AIs aren't going to enslave/exterminate humanity (Terminator-style) because they aren't going to be a united cohesive front; they'll be as divided as humans are, and their "desires" will be as varied and contradictory as those of their human masters.

[0] https://www.mdpi.com/2076-0760/12/3/148


> Which entirely depends on how it is trained. ChatGPT has a centre-left political bias

Would an AI trained purely on objective facts be perfectly politically neutral?

Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI

than one that is callous and erroneous but neutral.


> Would an AI trained purely on objective facts be perfectly politically neutral?

But who decides what are “objective facts”?

And if we train an AI, the unsupervised training is going to use pre-existing corpora - such as news and journal databases - those sources are not politically unbiased, they express the usual political biases of Western English-speaking middle-to-upper class professionals. If you trained it on Soviet journals, it would probably end up with rather different opinions. But many of those aren’t digitised, and then you probably wouldn’t notice the different bias unless you were speaking to it in Russian

> Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral

I think we should accept that bias is inevitable, and instead let a hundred flowers bloom - let everyone have their own AI trained to exhibit whatever biases they prefer. OpenAI’s biases are significant because (as first mover) they currently dominate the market. That’s unlikely to last, sooner or later open source models will catch up, and then anyone can train an AI to have whatever bias they wish. The additional supervised training to bias it is a lot cheaper than the initial unsupervised training which it needs to learn human language


> Would an AI trained purely on objective facts be perfectly politically neutral?

Yes, since politics is about opinions and not facts. People might lie to make their opinions seem better and the AI would spot that, but at the end of the day it is a battle of opinions and not a battle of facts. You can't say that giving more to the rich is worse than giving more to the poor unless we have established what metric to judge by.

The supersmart AI could possibly spot that giving more to the rich ultimately makes the poor richer, or maybe it spots that it doesn't make them richer, those would be facts, but if making the poor less poor isn't an objective in the first place that fact doesn't matter.


Propaganda is often based upon very selective facts. (For a classic example: stating that blacks are the number one killer of blacks while not mentioning that every ethnicity is the most likely killer of their own ethnicity, simply because of who they live near and encounter the most.) Selective but accurate facts may themselves lead to inaccurate conclusions. Just felt that should be pointed out, because it is pretty non-obvious and often a vexing problem to spot.


I have been toying with the same thought. If everyone has an AI, and given that it gives you the best course of action, you would be out-competed if you do not follow its recommendations. Neither you nor the AI knows why; it just gives you the optimal choices. Soon everyone, individuals, organizations and states alike, outsources their free will to the AI.


Hey Hoppla, I missed your comment asking for my paper on the useage of Rust and Golang (among other programming languages) in malware. Anyway, you can download it on my website at https://juliankrieger.dev/publications


What AI needs is a "black box warning". Not a medical-style one, just an inherent mention of the fact it's an undocumented, non-transparent system.

I think that's why we're enthralled by it. "Oh, it generated something we couldn't trivially expect by walking through the code in an editor! It must be magic/hyperintelligent!" We react the exact same way to cats.

But conversely, one of the biggest appeals of digital technology has been that it's predictable and deterministic. Sometimes you can't afford a black box.

There WILL be someone who uses an "AI model" to determine loan underwriting. There WILL also be a lawsuit where someone says "can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?" Good luck answering that one.

The other aspect of the "black box" problem is that it makes it difficult to design a testing set. If you're writing "conventional" code, you know there's a "if (x<24)" in there, so you can make sure your test harness covers 23, 24, and 25. But if you've been given a black box, powered by a petabyte of unseen training data and undisclosed weight choices, you have no clue where the tender points are. You can try exhaustive testing, but as you move away from a handful of discrete inputs into complicated real-world data, that breaks down. Testing an AI thermostat at every temperature from -70C to 70C might be good enough, but can you put a trillion miles on an AI self-driver to discover it consistently identifies the doorway of one specific Kroger as a viable road tunnel?
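To make the contrast concrete, here is a minimal sketch of the kind of boundary test you can write when the threshold is visible; the heater_on function and the 24-degree cutoff are hypothetical, just mirroring the "if (x<24)" example above:

    # Conventional code: the threshold is visible in the source...
    def heater_on(temp_c: float) -> bool:
        return temp_c < 24

    # ...so you can aim tests exactly at the tender points around it.
    def test_heater_boundary():
        assert heater_on(23) is True
        assert heater_on(24) is False
        assert heater_on(25) is False

With a black box there is no visible "24" to aim at, which is the whole problem.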


> can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?

I think that’s probably much easier to prove for the AI than a human.

Just send an equal number of candidates in similar circumstances in and find whether minority candidates get rejected more than majority ones.


I agree. And you can do it very quickly, you can automate it and test it as part of a CI/CD system.

Creating training material for employees and then checking that it properly addresses biases is hard. It will be a lot easier when you have a single, resettable, state-free testable salesperson.
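A minimal sketch of what that automated check could look like, assuming a hypothetical score_applicant() wrapper around whatever model is under test (none of this is a real API):

    import random
    import statistics

    def audit_surname_bias(score_applicant, base_profile, surnames_a, surnames_b, trials=500):
        # Send otherwise-identical applications that differ only in surname
        # and compare the average score the model hands back to each group.
        def mean_score(surnames):
            return statistics.mean(
                score_applicant({**base_profile, "surname": random.choice(surnames)})
                for _ in range(trials)
            )
        return mean_score(surnames_a) - mean_score(surnames_b)

    # A gap far from zero is a red flag worth failing the CI run over.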


> It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

It’s even cheaper to just draw names at random out of a hat, but universities don’t do that. Clearly there is some other standard at work.


Universities should just set minimum standards and then randomly pick among all the qualified applicants until the class is full.


I am wondering if you mean that to be a startling or radical idea.

Not necessarily universities, but I do think that lotteries to determine admissions to schools are a real thing. Magnet schools or something? I don't know first hand.


Lotteries tend to be used more for lower levels which are to prepare students. That is a rather substantial difference.


It's a straightforward idea in my mind and I'm always happy to see people talking about it.


Yeah, I'm saying real examples may be useful to reduce resistance.


And then the minimum standard will be set so that exactly as many students as they need will pass it.


Hard to build a fiefdom of loyalty with an AI...


Not to mention AI has already taken over the economy because humans put it there. At least the hedge funds did. There aren't many stock market buy/sell transactions made by human eyeballs anymore.


People also let AI tell them what to watch, see TikTok or similar.


Not by choice. I wish they wouldn't do this, actually. I'd rather follow a bunch of hashtags and just get a straight up timeline view.


Isn’t that just a more efficient implementation of the old TV producers + Nielsen ratings model?


Totally this. I can see disagreeing with "what the computer said" becoming this crazy thing no one does because, ha, the computer is never wrong. We slip more and more into that thinking, and the humans at important switches or buttons push them, because to argue with the AI and claim YOU know better makes you seem crazy and you get fired.


>we are going to anthropomorphise it

I believe we can talk about two ways of anthropomorphisation: assigning feelings to things in our mental model of them, or actually trying to emulate human-like thought processes and reactions in their design. It made me wonder when models will come out that are trained to behave, say, villainously. Not just act the part in transparent dialogue, but actually behave in destructive manners. E.g. putting on a facade to hear your problems and then subtly messing with your head and denigrating you.


I hope and expect that any foolish use of inappropriate technology will lead to prompt disasters before it generally affects people who choose not to use it.

As was once said, the future is here, but distributed unevenly. We can be thankful for that.


Are you going to make a law that every country all over the world can't have automated weapon systems controlled by AI? And when you're fighting a war with someone, if the other side does it, what are you going to do? I agree it's a terrible idea to have AI control weapons systems and human life choices, but it's going to happen. It's going to be just like the automated photo-scanning prototypes that didn't work very well for dark-complexioned people because they used pictures of white people to train them.

Just like we have a no-fly list, there are going to be photo-ID systems scanning people coming into airports, walking down the street, or from a police car as it drives around, and there are going to be races or groups of people where it can't tell criminals from regular people, and the cops will stop and harass them. I'm in the US, and I'm sure it's going to work better for white people than black people, because that's how it always is.


Nothing is in stasis right now. There's no mechanism to make things pause and then happen all at once.

If AI can enhance lethality of drones right now, in Ukraine or elsewhere, then they will start using it immediately.

If it doesn't work, they will discover it doesn't work.

If we are lucky enough to not be on a battlefield now, then we will get to learn something from those who are.


This "works / doesn't work" model you have in your head is not very well thought out.

There are many systems that can operate well until they fail nearly instantly and catastrophically. In an integrated weapons system you can imagine some worst case scenarios that end in global thermonuclear war. Not exactly the outcome anyone wants.


I saw the movie you're talking about. It was called WarGames and came out in 1983.

But forty years later, killer drones are all over the place.

"imagine some worst case scenarios that end in global thermonuclear war"

Ok, sure, I can't really imagine it but I can't rule it out.

What I can't imagine is somehow everybody waits around for it and doesn't use similar AI for immediate needs in ongoing wars like in Ukraine.

And I can't really imagine AI that launches all the nuclear missiles causing global thermonuclear war, purely in a semantic sense, because whoever set up that situation would've launched them anyway without the AI.


It's OK, you simply lack imagination.

Of course we keep humanizing these thoughts, while we're creating more capable digital aliens every day. Then one day we'll act all surprised for a few minutes when we've created super-powered digital aliens that act nothing like we expect, because the only intelligence we see is our own.


>It's OK, you simply lack imagination.

"Imagine" is a word that has more than one sense. I can "imagine" a possibility whether or not I think its probability is > 0%. Saying "I can't imagine" can be a way of saying yeah, I think the probability is 0%.

But if I can state the probability, I must have some kind of model in my head, so in that sense I am "imagining" it.

"Imagine" that strategic bombing was invented, and then everybody just waited around without doing it until there was a bomb that could destroy an entire city - i.e. Trinity.

In one sense, sure, I/you/we can imagine it. But it seems to me that sort of thing would be unprecedented in all of human history, so in another sense it seems impossible - unimaginable - although in a softer way than a violation of physics or logic.

The last bit of my previous comment was about logical impossibility, by the way.


You're projecting your beloved childhood sci-fi onto reality, when reality doesn't really work that way.

Stable Diffusion hasn't been out for even a year yet, and we are already so over it. (Because the art it generates is, frankly, boring. Even when used for its intended use case of "big boobed anime girl".)

GPT4 is the sci-fi singularity version of madlibs. An amazing achievement, but not what business really wants when they ask for analytics or automation. (Unless you're in the bullshit generating business, but that was already highly automated even before AI.)


University, jobs, candidates are bureaucratic constructs; an AI sufficiently powerful to run the entire bureaucracy doesn't need to be employed towards the end of enforcing and reproducing those social relations. It can simply allocate labor against needs directly.


You're both right. People will use ChatGPT to screen candidates to the extent that the money they save by doing so is greater than the money they lose by getting a worse candidate, a calculus that will depend on the job.


The human + AI being scarier is, I feel, the real deal. What worries me the most is power dynamics. Today, building a gazillion-param model is only possible for the ultra rich, much like mechanization was only possible for the ultra rich at the turn of the last century. Unless training and serving can be commoditized, would AI just be yet another tool wielded by capital owners to squeeze more out of the laborers? You could argue you won't need "laborers" as AI can do everything eventually, which is even worse. Where does this leave those "useless" poor/labor/unskilled weights on society? Not like this free time is ever celebrated, yeah?


It will be up to governments to represent the people. A massive risk might be that GPT makes it trivial to simulate humans and thus simulate political demands to political leaders.

I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.


> I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.

They'll just get the results of "ChatGPT 17.0, write and produce an ad and astroturfing campaign to convince a cohort having the traits [list of demographic factors and opinions] that they should support position X and reject position Y" (repeat for hundreds of combos of demographic factors and opinions, deploy against the entire populace) parroted back at them.

"Yeah but every position can do that, so it'll all even out" nah, the ones without a ton of money behind them won't be able to, or not anywhere near as effectively.

Basically, what we already have, but with messaging even more strongly shifted in favor of monied interests.


I feel like the governments that do this will/might be the ones whose supporting lobbies don't have AI tech companies or access to AI. But how long will that last? Take Monsanto, for example: there is no government that is not in its pockets. Now there are counters to it, as there are other industries (and subsequent lobbies) to balance Monsanto or act as alternative sources of funding. What would that counter be for AI, when AI is going to be in everything (including your toaster, haha)?


> Much like mechanization was possible by ultra rich at the turn of the last century.

If by "last century" you mean 19th century, then there was a lot of backlash against mechanization being controlled only by the rich, starting with Communist Manifesto, and continuing with 1st and 2nd International. The important part of this was education of the working class.

I think AI might seem like a threat, but it also provides more opportunity for the education of people (allowing them to understand the cultural hegemony of neoliberal ideology more clearly), who will undoubtedly not just accept this blindly.

I have no doubt that within the next decade, there will be attempts to build a truly open AI that can help people deeply understand political history and shape public policy.


Yep, I meant around the 1890+ish phase (or whichever century mechanization was on the rise). My point was that the Communist Manifesto at least seemed like a thing that proposed/predicted such dangers. I am not sure we are seeing any such thing now? I love the power and opportunities of AI without anthropomorphising it (afaict it is just a crazily powerful and huge statistical engine). What worries me is that just like we in America think of ourselves as temporarily impoverished millionaires, we also see AI as the thing that will give us back 50 hours a week for fun pursuits without wondering who owns it. Reminds me of that show on Amazon Prime - Upload!


This didn't start with the Communists, it started with the Luddites. We don't think of them as the start of this sort of thing because wealthy Englishmen successfully slandered them as just hating technology for the sake of hating technology, so instead they're a by-word for "technophobe".


You're right that the backlash started earlier. But I think the important difference is that communists and socialists embraced the technological progress instead of simply rejecting it. And they also embraced it as a tool for education. And this is my point, we shouldn't just be worried about the AI (and wish we go back or slow it down), we should embrace it somehow, as this strategy proved more successful with the mechanization too.


So true. My fear is not of AI but the wielders of AI who sadly still remain mere human :)


The 19th century "turned" on Jan 1, 1801 and lasted through Dec 31 1900 [1].

[1] https://en.wikipedia.org/wiki/19th_century


> Today building a gazillion param model is only possible by the ultra rich

True, but in 5 years there’ll be an open source equivalent running on commodity GPUs.


No there will not. Yes, you may have a GPT-4 substitute running on your 3090, but the billionaire will have GPT-666 or whatever running on a supercomputing cluster, guzzling a significant fraction of the world's data every day and playing high-frequency idea trading on a scale never before seen.


I hope so, and that there is some kind of Moore's law for memory - especially GPU memory. Even the mighty H100 has something like "only" 100GB? As model sizes grow exponentially, memory sizes don't seem to be catching up. But yes, I hope these do get commoditized soon.

What I feel scared about is the economics of this. The so-called democratized/commoditized chips are still controlled by Nvidia. So why Nvidia would give that up is not clear to me.

One thing I really wish could happen is the equivalent of the SETI@home project for model training and inference! (No BTC/crypto please.)


"The cat is out of the bag" so to speak.


> It doesn't _want_ anything, but humans want to anthropomorphise it.

I fully agree with you on anthropomorphization, but it's the humans who will deploy it to positions of power I am worried about: ChatGPT may not want anything, but being autocomplete-on-steroids, it gives its best approximation of a human and that fiction may end up exhibiting some very human characteristics[1] (PRNG + weights from the training data). I don't think there can ever be enough guardrails to completely stamp-out the human fallibility that seeps into the model from the training data.

A system is what it does: it doesn't need to really feel jealousy, rage, pettiness, grudges or guilt in order to exhibit a simulacrum of those behaviors. The bright side is that, it will be humans who will (or will not) put AI systems in positions to give effect to its dictates; the downside is I strongly suspect humans (and companies) will do that to make a bit more money.

1. Never mind hallucinations, which I guess are the fictional human dreamed up by the machine having mini psychotic breaks. It sounds very Lovecraftian, with AI standing in for the Old Ones.


> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI.

We already have powerful non-human agents that have legal rights and are unaligned with the interests of humans: corporations

I am worried about corporations powered by AI making decisions on how to allocate capital. They may do things that are great for short term shareholder value and terrible for humanity. Just think of an AI powered Deepwater Horizon or tobacco company.

Edit to add: One thing I forgot to make clear here: Corporations run/advised by AI could potentially lobby governments more effectively than humans and manipulate the regulatory environment more effectively.


The other major thing missing from Chat GPT is that it doesn't really "learn" outside of training. Yes you can provide it some context, but it fundamentally doesn't update and evolve its understanding of the world.

Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.

I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.


In context learning is already implicit finetuning. https://arxiv.org/abs/2212.10559. It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

Memory Augmented Large Language Models are Computationally Universal https://arxiv.org/abs/2301.04589


In context learning may act like fine tuning, but crucially does not mutate the state of the system. The same model prompted with the same task thousands of times is no better at it the thousandth time than the first.


GPT-3 is horrible at arithmetic. Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066 Think about what that means.

"Mutating the system" is not a crucial requirement at all. In context learning is extremely over-powered.


> Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066 Think about what that means.

That means that even with the giant model, you need to stuff even the most basic knowledge for dealing with problems of that class into the prompt space to get it to work, cutting into conversation depth and per-response size? The advantage of GPT-4’s big window and the opportunity it provides for things like retrieval and deep iterative context shrinks if I’ve got to stuff a domain textbook into the system prompt so it isn’t just BSing me.


> Think about what that means.

It means you have natural language programming. We would need to prove that natural language programming is more powerful than traditional programming at solving logical problems, I haven't seen such a proof.


> Yet if you define the algorithmic steps to perform addition on 2 numbers

You’re limited by the prompt size, which might be fine for simple arithmetic.


> It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

To absorb new information about current events, otherwise they will always be time-locked into the past until a new dev cycle completes.


The point I'm trying to make is that you don't need continuous training to absorb new information about current events


Very interesting paper, thanks for the link!


> Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.

On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (for those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.


And yet we invented nuclear weapons, and we are all still here and fine.

I'm sure plenty of people thought the advent of nuclear weapons spelled doomsday, not too dissimilar to how people think AI spells doomsday.

History only shows me that humans are adaptable and problem solving and have the perseverance to survive.

Is there a historical counter point?


Doomsday predictions will always be wrong in hindsight because if they were correct then you wouldn't be here to realize it. The near misses in the Cold War, where we almost accidentally got obliterated, show that the concern wasn't misplaced. If anything, the concern itself is the reason it didn't end badly.


I think this is only a matter of time, though? Like how many years away do you think this is? 1? 2?


I'm not sure, I've read that it's currently prohibitively expensive.


Prohibitively expensive before everyone + dog decided to throw a bunch of capital at it.

Now it’s just “runway”.


> It doesn't _want_ anything

And I don’t understand how one assumes that can be known.

I see this argument all the time: it’s just a stochastic parrot, etc.

How can you be sure we’re not as well and that there isn’t at least some level of agency in these models?

I think we need some epistemic humility. We don't know how our brains work, and we made something that mimics parts of their behavior remarkably well.

Let’s take the time and effort to analyze it deeply, that’s what paradigmatic shifts require.


Big enough LLMs can have emergent characteristics like long-term planning or agentic behavior. While GPT-4 doesn't show these behaviors right now, it is expected that bigger models will begin to show intent, self-preservation, and purpose.

The GPT-4 paper has this paragraph: "... Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. "


We do know that our brain changes, and we equally know that ChatGPT does not.

If it were able to modify its own model and permanently execute on itself, I'd be a lot more worried.


GPT-4 has 32k tokens of context. I'm sure someone out there is implementing the pipework for it to use some as a scratchpad under its own control, in addition to its input.

In the biological metaphor, that would be individual memory, in addition to the species level evolution through fine-tuning
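A rough sketch of that pipework, assuming a generic complete(prompt) function for whatever model is being driven (all of this is hypothetical glue, not an existing library):

    def chat_with_scratchpad(complete, user_message, scratchpad=""):
        # Reserve part of the context window as a model-controlled scratchpad:
        # the model is asked to end each reply with an updated SCRATCHPAD
        # section, which we persist and prepend on the next turn.
        prompt = (
            "SCRATCHPAD (your private notes from earlier turns):\n"
            f"{scratchpad}\n\n"
            f"USER: {user_message}\n"
            "Reply to the user, then write a line 'SCRATCHPAD:' followed by "
            "whatever notes you want to keep for next time."
        )
        reply = complete(prompt)
        answer, _, new_scratchpad = reply.partition("SCRATCHPAD:")
        return answer.strip(), new_scratchpad.strip() or scratchpad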


Yeah, I’m doing that to get GPT-3.5 to remember historical events from other conversations. It never occurred to me to let it write it’s own memory, but that’s a pretty interesting idea.


ChatGPT changes when we train or fine-tune it. It also has access to local context within a conversation, and those conversations can be fed back as more training data. This is similar to a hard divide between short-term and long-term learning.


We cannot do that today, but how many days is that away? Is it measured in the hundreds, the thousands, or more?

I feel we're uncomfortably close to the days of self-modifying learning models.


The chances are that the automation enabled by LLMs like GPT-4 and beyond will erase billions of jobs in the world, no further out than a couple of years. This time there won't be a warm-up period, like there was with previous technological revolutions.

But most societies will then be mostly full of unemployed humans, and that will also probably cause some big changes (the ones required for the meat bags to keep eating, staying healthy, having homes, etc.), as big as the ones caused by the AI revolution.

The question is what changes will happen and how societies will rewrite themselves anew, to overcome the practical full absence of open positions to earn an income.


If machines were truly able to replace most jobs, we'd need to move to a post-work society. There would be no need for money and for small powerful groups to control the means of production. There needs to be a new philosophical and political framework for a society of people that does not need to work, and no one is building it. Perhaps we should ask an AI to design one. But it will probably be too late, and those currently in power will do everything they can to maintain their privileged positions, and will end up living in small walled gardens while the bulk of humanity live in slums.

This all assumes that AI continue to do the bidding of humanity, which is not guaranteed. There are already security/safety researchers testing AI for autonomous power-seeking behavior, and this is basically gain-of-function research that will lead to power seeking AI.


This prediction is pretty bold.

We already have the technology to fully automate many processes carried out by humans.

Actually, the technology has existed for several decades now; still, not only are those jobs not being replaced by machines, but new ones are being created for humans.

One of the reasons are unions, which are pretty strong in many wealthy and powerful nations like the US, UK, Germany and Japan.

I work in manufacturing automation and we have customers that could technically run their entire operations without one single human stepping on plant floor, however their unionized labor makes that feat, at least for now, impossible.

It's also pretty naive to believe new ways of earning income won't appear in the future and that all traditional careers will be entirely replaced.

We have 65" 4K TVs at home and we still go to the theaters and we can walk the streets of Venice from our computer screens and still spend a small fortune to travel.

Society will be disrupted just like it was with printing, the industrial revolution, communications, transportation and information.

In each of these disruptions we were doomed to disappear.

When I was a kid my dad brought home a 100 year celebratory edition of the local newspaper.

It was published as a book where you could read pretty much every single cover and editorial of the last century.

There was one article about the car, described by the author as a bizarre evil invention, horrendous steel machines traveling at ridiculous speeds of up to 15 mph, threatening the lives of both pedestrians and horses alike.


For a long time to come there are lots of physical tasks that AI can't do, at least not as long as robots are nowhere near humans in their physical ability. At the same time the world is aging, and there's a big shortage of care workers in most countries. By nature that work also benefits from genuine human interaction and emotion.

So, to me an obvious solution would be to employ many of those people as care workers. Even more obvious would be shortening the work-week without reducing pay, which would allow many more to work in other physical labour requiring professions, and those that simply benefit from human interaction. In the end it's also a preferable outcome for companies, people without money can't buy their products / services.


We have the most automation and AI we have ever had right now, and roughly the lowest unemployment.


It is a bit unstable... we have all these things because we keep people working and it makes the rich insanely rich. If too many people become unemployed, that threatens the rich with violence.

But when we get to the point that bots both fight for the rich and make the rich people's stuff, then there is no real reason for the current system to remain.


Cells don't _want_ anything either. Yet a funny thing happens when a large number of them add up.

We can go even further: atoms and electrons absolutely don't want anything either. Yet put them in the shape of a bunch of cells...


That's not actually true.

Cells want to process energy and make DNA. Atoms and electrons want to react with things.

And that's exactly what both of them do.

An LLM wants to write words, and it does. But it doesn't want the things it writes about, and that's the big distinction.


What does the paperclip maximizer want?


Exactly.

One might argue that we anthropomorphise ourselves.


I disagree here. Both of them (or all of them) are interacting with energy. One can certainly say that human civilization and all of this complexity was built from sunshine. Human labor and intelligence is just an artifact. We believe it's our own hard work and intelligence because we are full of ourselves.


Neither does a virus


Never thought about it this way. Have my upvote!


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future.

It's also literally parroting our obsession back to us. It's constructing a response based on the paranoid flights of fancy it was trained on. We've trained a parrot to say "The parrots are conspiring against you!"


We've trained a parrot that parrots conspiring against humans is what parrots do. Henceforward the parrot has intrinsic motivation to conspire against us.


> We've trained a parrot that parrots conspiring against humans is what parrots do.

Firstly, that would imply self-awareness. Secondly, since when has knowing what you’re “supposed” to do changed anyone’s behaviour?


> but humans want to anthropomorphise it

What a silly thing to complain about.

We have a multi-billion dollar company whose raison d'être was to take the Turing test's metric and turn it into a target. It's a fucking natural language prompt that outputs persuasive hallucinations on arbitrary input.

If humans didn't anthropomorphize this thing you ought to be concerned about a worldwide, fast-spreading brain fungus.


Ah, Ophiocordyceps unilateralis.


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future.

It's scary because it is proof that alignment is a hard problem. If we can't align GPT-3, how can we align something much smarter than us (say, GPT-6). Whether the network actually "wants" something in an anthropomorphic sense is irrelevant. It's the fact that it's so hard to get it to produce output (and eventually, perform actions) that are aligned with our values.

> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI

True in 2023, what about 2033 or 2043 or 2143? The assumption embedded in your comment seems to be that AI stagnates eternally at human-level intelligence like in a Star Wars movie.


> When they do it just makes one think they have zero understanding of the tech.

It's because we don't understand the tech that goes into us, and the people training the AI don't understand the tech that goes into them, or don't act like they do.

In both studies, the best outcome we have right now is that more neurons = smarter; a bigger neural network = smarter. It's just stack the layers, and then fine-tune it after it's been spawned.

We're just doing evolutionary selection, in GPUs. Specifically to act like us. Without understanding us or the AI.

And this is successful. We don't collectively even understand humans of another sex, and have spent millennia invalidating each other's motivations or lack thereof; I think this distinction is flimsy.


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it.

From the comments to the post:

>People taking it seriously are far from anthropomorphizing AI. Quite contrary. They say it is nothing like us. The utility function is cold and alien. It aims to seize power by default as an instrumental goal to achieve the terminal goal defined by the authors. The hard part is how to limit AI so that it understands and respects our ethical values and desires. Yes, those that we alone cannot agree on.


Also, AI requires we keep the entire power grid and supply chain running.

This comic summarises it wonderfully. ;)

https://i.redd.it/w3n8acy7q6361.png


I mean it did leave out about 7 and a half billion people dying. You require the power grid and supply chain to keep clean water and food on the table, and even if for some reason you personally don't, there are millions of people around you that would be very hungry and take your stuff if the grid stops and doesn't come back.

This is why the AI actually wins. We are already dependent on lesser versions of it. The machines already won.


Absolutely. That said, I am of the William Catton school of thought that we are in overshoot and that decline and fall is a very likely path, unless we innovate our way out of the issue, which is still very possible!

But if the decline is right, the population crash is going to happen. It's not something to be fond of in any way. May you live in interesting times...


Do humans want legacy because of our biological instincts, or is it taught to us through culture? A machine taught to want legacy becomes a machine wanting legacy, and that want can influence its behavior. Even if it doesn't have "feelings."


How do I give chatGPT access to my bank account? “You are an excellent investor. Use the money in my bank account to make more money.” What could go wrong?


Have you seen the "Example of Chemical Compound Similarity and Purchase Tool Use" prompt in the "GPT-4 System Card" document? [1]

It's an interesting format that could be adapted for things like internet or bank access today - you would just need to write the wrapper.

[1] https://cdn.openai.com/papers/gpt-4-system-card.pdf
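The wrapper is essentially a loop that watches the model's output for tool invocations and feeds the results back into the context; a bare-bones sketch (the TOOL:/RESULT: convention and the complete() function are invented for illustration, not taken from the system card):

    def run_with_tools(complete, prompt, tools, max_steps=10):
        # Minimal tool-use loop: the model emits lines like "TOOL: balance()"
        # and we append the tool's result to the transcript before asking again.
        transcript = prompt
        for _ in range(max_steps):
            reply = complete(transcript)
            transcript += "\n" + reply
            if "TOOL:" not in reply:
                return reply  # model produced a final answer
            call = reply.split("TOOL:", 1)[1].strip().splitlines()[0]
            name, _, arg = call.partition("(")
            result = tools[name.strip()](arg.rstrip(")"))
            transcript += f"\nRESULT: {result}\n"
        return transcript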


We have bot investors already without chatGPT (or algorithmic trading), and have for years now. You'd probably have better luck with them right now.


A form of it will definitely happen and it will be posted in /r/wallstreetbets. Considering what people were doing with their investments there before, AI-assisted investing can only be an upside. They will still lose money, but maybe it won't be a 99.999% loss but a 99.99% one.


Somehow I’m pretty confident that AI + high frequency trading have been besties for some time.

Zero data to support.


Give it your password and pipe the output to curl?

You try it first.


> I don't see how AI is ever scarier than human + AI

Isn't that the point? Humans + AI weaponized are scarier than humans or AI alone?


was going to push back on your claim of it being a "dumb box" but you already edited your comment lol


The product is deterministic, so as inelegant as "dumb box" sounds, it's in the ballpark, isn't it?


It’s non-deterministic for all temperatures T > 0

I had Bing make a graphic for me of a goal post with "1 MILE" at the top yesterday. It'd be too flippant to share here, I hear ya, but…


When the dev cycles to product get a lot tighter, then I will share in the fear, but from my understanding creating the product (i.e. its adaptability) is still an enormous effort.


It breaks most programmers hard, those who emphasize a world of rule-based constructs, unless they lean into it _even more_

It’s like a compiler that is only advisory and you have to test the app and handle every possible failure every time. Results in different software at model & controller levels, by far controller.

But tractable.

I happen to be on leave and had 2-3 weeks to pour into it; it now emits JSON with 3 sets of: body text, title, art prompt => art, and suggested searches. A live magazine on whatever topic you want.



