
I have some odd feelings about this. It took less than a year to go from "of course it isn't hooked up to the internet in any way, silly!" to "ok.... so we hooked it up to the internet..."

First it's your API calls, then your chatgpt-jailbreak-turns-into-a-bank-DDOS-attack, then your "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE, which resulted in the largest single-day drop since 1987..."

You can go on about individual responsibility and all... users are still the users, right. But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"




Pshhh... I think it's awesome. The faster we build the future, the better.

What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

So why the performative whinging about safety? Just let it rip! To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint. But they're not very open about this fact when talking publicly to non-technical users, so the result is they're talking out one side of their mouth about AI regulation, while in the meantime Microsoft fired their AI Ethics team and OpenAI is moving forward with plugging their models into the live internet. Why not be more aggressive about it instead of begging for regulatory capture?
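
(For the curious: the moderation check really is a separate, optional call; nothing in the completion path forces it. A minimal sketch with the openai Python client of that era, with a hypothetical key and prompt:)

    import openai

    openai.api_key = "sk-..."  # your API key

    # Moderation is its own endpoint; nothing forces API callers to use it.
    check = openai.Moderation.create(input="some user-supplied prompt")
    if check["results"][0]["flagged"]:
        print("a careful integrator would block or rewrite this prompt")
    else:
        print("pass it along to the completion endpoint")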


> The faster we build the future, the better.

Why? Getting to "the future" isn't a goal in and of itself. It's just a different state with a different set of problems, some of which we've proven that we're not prepared to anticipate or respond to before they cause serious harm.


When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition, especially when the costs of doing it are so low that anyone with sufficient GPU power and knowledge of the latest research can get pretty close to the cutting edge. So the best we can hope for is that someone ethical is the first to advance that technological progress.

I hope you wouldn't advocate for requiring a license to buy more than one GPU, or to publish or read papers about mathematical concepts. Do you want the equivalent of nuclear arms control for AI? Some other words to describe that are overclassification, export control and censorship.

We've been down this road with crypto, encryption, clipper chips, etc. There is only one non-authoritarian answer to the debate: Software wants to be free.


We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

In general the liberal position of progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything treat it like nuclear arms and have the world come up with heavy regulation.

Not even touching the fact it is quite literal copyright laundering and a massive wealth transfer to the top (two things we pass laws protecting against often), but the danger it poses to society is worth a blanket ban. The upsides aren't there.


That's right. It is not hard to imagine similarly disastrous GPT/AI "plug-ins" with access to purchasing, manufacturing, robotics, bioengineering, genetic manipulation resources, etc. The only way forward for humanity is self-restraint through regulation. Which of course gives no guarantee that the cat won't be let out of the bag (edit: or that earlier events such as nuclear war or climate catastrophe won't kill us off sooner).


Why not regulate the genetic manipulation and bioengineering? It seems almost irrelevant whether it's an AI who's doing the work, since the physical risks would generally exist regardless of whether a human or AI is conducting the research. And in fact, in some contexts, you could even make the argument that it's safer in the hands of an AI (e.g., I'd rather Gain of Function research be performed by robotic AI on an asteroid rather than in a lab in Wuhan run by employees who are vulnerable to human error).


We can't regulate specific things fast enough. It takes years of political infighting (this is intentional! government and democracy are supposed to move slowly so as to break things slowly) to get even partial regulation. Meanwhile every day brings another AI feature that could irreversibly bring about the end of humanity or society or democracy or ...


The cat is already out of the bag; regulation will do nothing to even slow down the inevitable pan-genocidal AI, _if_ such a thing can be created.


It's obviously false. Nuclear weapon proliferation has been largely prevented, for example. Many dangerous pathogens and lots of other things are not available to the public.

Asserting inevitability is an old rhetorical technique; its purposes are obvious. What I wonder is, why are you using it? It serves people who want this power and have something to gain, the people who control it. Why are you fighting their battle for them?


Nuclear materials have fundamental material chokepoints that make them far easier to control.

- Most countries have little to no uranium deposits and so have to be able to find a uranium-producing ally willing to play ball.

- Production of enriched fuel and R&D are both outrageously expensive, generally limiting them to state actors.

- Enrichment has massive energy requirements and requires huge facilities, tipping off observers to what you're doing.

Despite all this and decades of strong international nuclear non-proliferation agreements, India, Pakistan, South Africa, Israel, and North Korea have all developed nuclear weapons in defiance of the UN and international law.

In comparison, the only real bottleneck in the proliferation of AI is computing power - but the cost of running an LLM is a pittance compared to a nuclear weapons program. OpenAI has raised something like $11 billion in funding. A single new proposed US Department of Energy uranium enrichment plant is estimated to cost $10 billion just to build.

I don't believe proliferation is inevitable but it's very possible that the genie is out of the bottle. You would have to convince the entire world that the risks are large enough to warrant putting on the brakes, and the dangers of AI are much harder to explain than the dangers of nuclear weapons. And if rival countries cannot agree on regulation then we're just going to see a new arms race.


You can’t make a nuclear weapon with an internet connection and a GPU. Rather than imply some secondary motive on my part, put a modicum of critical thinking into what makes a nuke different than an ML model.


I'd rather try and fail than give up without a fight. I'm many things but I'm not a coward.


Best of luck!


We already do; China jailed somebody for gene editing babies unethically for HIV resistance.

We can walk and chew gum at the same time, and regulate two things.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

Only because we know the risks and issues with them.

OP is talking about furthering technology, which is quite literally "discovering new things"; regulations on furthering technology (outside of literal nuclear weapons) would have to be along the lines of "you must submit your idea for approval to the US government before using it in a non-academic context if it could be interpreted as industry-changing or inventing", which means anyone with ideas will just move to a country that doesn't hinder its own technological progress.


Human review boards and restrictions on various dangerous biological research exist explicitly to limit damage from furthering lines of research which might be dangerous.


Those seem to be explicitly for actual research papers and whatnot, and are largely voluntary; it’s not mandated by the government.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

ha, the big difference is that this whole list can actually affect the ultra wealthy. AI has the power to make them entirely untouchable one day, so good luck seeing any kind of regulation happen here.


I do not think the reason for nuclear weapons treaties is that they can blow up "the ultra wealthy". Is that why the USSR signed them?


you can replace ultra wealthy with powerful. same point stands. the only things that become regulated heavily are things that can affect the people that live at the top, whether it's the obscenely rich, or the despots in various countries.


So everyone should have a hydrogen bomb at the lowest price the market can provide, that's your actual opinion?


i dont know what the hell you're talking about


"We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better."

As technology advances, such prohibitions are going to become less and less effective.

Tech is constantly getting smaller, cheaper and easier for a random person or group of people to acquire, no matter what the laws say.

Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

Make laws against it in one place, your competitor in another part of the world without such laws or their effective enforcement will dominate you before long.


> Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

I wouldn't say that this is an additional reason.

I would say that this is the primary reason that overrides the reasonable concerns that people have for AI. We are human after all.


It's a baseless assertion, often repeated. Repetition isn't evidence. Is there any evidence?

There's lots of evidence of our ability to control the development, use and proliferation of technology.


Have laws stopped music piracy? Have laws stopped copyright infringement?

Both have happened at a rampant pace once the technology to easily copy music and copyrighted content became easily available and virtually free.

The same is likely to happen to every technology that becomes cheap enough to make and easy enough to use -- which is where technology as a whole is trending towards.

Laws against technology manufacture/use are only effective while the barrier to entry remains high.


> Have laws stopped music piracy? Have laws stopped copyright infringement?

They have a large effect. But regardless, I don't see the point. Evidence that X doesn't always do Y isn't evidence that X is ineffective at doing Y. Seatbelts don't always save your life, but they are not ineffective.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

All those examples put us in physical danger to the point of death.


Other siblings have good replies, but also, we regulate without physical danger all the damn time.

See airlines, traffic control, medical equipment, government services, but also we regulate ads, TV, financial services, crypto. I mean, we regulate so many “tech” things for the benefit of society that this is a losing argument to take. There's plenty of room to argue the specifics elsewhere, but the idea that we don't regulate tech if it's not immediately a physical danger is crazy. Even global warming is a huge one, down to housing codes and cars etc. It's a potential physical danger hundreds of years out, and we're freaking out about it. Yet AI has the chance to do much more damage within a much shorter time frame.

We also just regulate soft social stability things all over, be it nudity, noise, etc.


Let me recalibrate. I'm not arguing that technology or AI or things that don't cause death should not be regulated, but I can see that might be the inference.

I just think that comparing AI to nuclear weapons seems like hyperbole.


Why is it hyperbole? Nuclear weapons and AI both have the capacity to end the world.


Private citizens and companies do not have access to nuclear weapon technology and even the countries who do are being watched like hawks.

If equally or similarly dangerous, are you then saying AI technology should be taken out of the hands of companies and private citizens?


For the sake of argument, let's say yes, AI should be taken out of the hands of the private sector entirely.


AI is now poised to make bureaucratic decisions. Bureaucracy puts people in physical danger every day. I've had medical treatments a doctor said I need denied by my insurance, for example.


For somebody from another country this sounds insane..


Risks of physical danger evolve all the time. It's not a big leap from "AI generated this script" to "a fatal bug is nefariously hidden in the AI-generated library in use by mission-critical services" (e.g. cars, medical devices, missiles, fertilizers).


how do you regulate something that many people can already run on their home gpu? how much software has ever been successfully banned from distribution after release?


They do like to try :(


> massive wealth transfer to the top (thing we pass laws protecting against often)

If only.


The Roman empire did that for hundreds of years! They had an economic standard that wasn't surpassed until ~1650s Europe, so why didn't they have an industrial revolution? It was because elites were very against technological developments that reduced labor costs or ruined professions, because they thought they would be destabilizing to their power.

There's a story told by Pliny in the 1st century. An inventor came up with shatter-proof glass, he was very proud, and the emperor called him up to see it. They hit it with a hammer and it didn't break! The inventor expected huge rewards - and then the emperor had him beheaded because it would disrupt the Roman glass industry and possibly devalue metals. This story is probably apocryphal but it shows Roman values very well - this story was about what a wise emperor Tiberius was! See https://en.wikipedia.org/wiki/Flexible_glass


> When in human history have we ever intentionally not furthered technological progress?

chemical and biological weapons / human cloning / export restriction / trade embargoes / nuclear rockets / phage therapy / personal nuclear power

I mean.. the list goes on forever, but my point is that humanity pretty routinely reduces research efforts in specific areas.


I don’t think any of your examples are applicable here. Work has never stopped in chemical/bio warfare. CRISPR. Restrictions and embargoes are not technologies. Nuclear rockets are an engineering constraint and a lack of market if anything. Not sure why you mention phage therapy, it’s accelerating. Personal nuclear power is a safety hazard.


Sometimes restrictions are the best way to accelerate tech progress. How much would we learn if we gave everyone nukes to tinker with? Probably something. Is it worth the odds that we might destroy the world in the process and set back all our progress? No. We do the same with bioweapons, we do the same with patents and trademarks, and laws preventing theft and murder.

If unfettered access to AI has good odds to just kill us all, we'd want to restrict it. You'd agree I'm sure, except your position is implicitly that AI isn't as dangerous as some others make it out to be. That's where you are disagreeing.


I wonder how these CEOs view the world; they are pushing a product which is gonna kill every single tech derivative in its own industry. Microsoft, Google, AWS, Vercel, Replit - they all feed back from selling the products their devs design to other devs or companies. They will be popping the bubble.

Now, if 80-90% of devs and startups are gonna be wiped out in this context, the same applies to those in the middle: accountants, data analysts, business analysts, lawyers. Now they can eat the entire cake without sharing it with the human beings who contributed over the years.

I can see the regulations coming, if the layoffs start happening fast enough and household incomes start to deteriorate. Why? Probably because this time it's gonna impact every single human being you know, and it is better to keep people employed and with a purpose in life than having to tax the shit out of these companies in order to give back the margin of profit that had some mechanism of incentives and effort behind it in the first place.


> If 80-90% of devs and startups are gonna be wiped out in this context

This is not a very charitable assessment of the adaptability of devs and startups, nevermind that of humans in general. We've been adapting to technological change for centuries. What reason do you have to believe this time will be any different?


Humans can adapt just fine. Capitalism, however, cannot. What do you think happens if AI keeps improving at this speed and within a few years millions to tens of millions of people are out of a job?


> When in human history have we ever intentionally not furthered technological progress?

Oh, a number. Medicine is the biggest field - human trials have to follow ethics these days:

- the times of Mengele-style "experiments" on inmates or the infamous Tuskegee syphilis study are long past

- we can clone sheep for like what, 2 decades now, but IIRC we haven't even begun cloning chimpanzees, much less humans

- same for gene editing (especially in germlines), which is barely beginning in humans despite being common standard for lab rats and mice. Anything impacting the germ line... I'm not sure if this will become anywhere close to acceptable in my lifetime.

- pre-implantation genetic based discarding of embryos is still widely (and for good reason...) seen as unethical

Another big area is, ironically given that militaries usually want ever deadlier toys, the military:

- a lot of European armies and, from the Cold War era on, mostly Russia and America, have developed a shit ton of biological and chemical weapons of war. Development on that has slowed to a crawl and so has usage, at least until Assad dropped that shit on his own population in Syria, and Russia occasionally likes to murder dissidents.

- nuclear weapons have been rarely tested for decades now, with the exception of North Korea, despite there being obvious potential for improvement or civilian use (e.g. in putting out oil well fires).

Humanity, at least sometimes, seems to be able to keep itself in check, but only if the potential of suffering is just too extreme.


> Software wants to be free.

I feel like I'm in a time warp and we're back in 1993 or so on /. Software doesn't want anything, and the people who claim that technological progress is always good dream of themselves as the beneficiaries of that progress, regardless of the effects on others, even if those are negative.

As for the intentional limits on technological progress: there are so many examples of this that I wonder why you would claim that we haven't done that in the past.


I was one year old in 1993, so I'll defer to you on the meaning of this expression [0], but it sounds like you were on the opposite side of its ideological argument. How did that work out for you? Thirty years later, I'm not sure it's a position I'd want to brag about taking, considering the tremendous success and net positive impact of the Internet (despite its many flaws). Although, based on this Wikipedia article, I can see how it's a sort of Rorschach test that naive libertarians and optimistic statists could each interpret favorably according to their own bias.

[0] https://en.wikipedia.org/wiki/Information_wants_to_be_free


You're making a lot of assumptions.

You're also kind of insulting without having any grounds whatsoever to do so.

I suggest you read the guidelines for a bit.


Eh? I wasn't trying to be, and I was genuinely curious to read your reply to this. Oh well, sorry about that I guess.


Your comment is a complete strawman and you then attach all kinds of attributes to me that do not apply.


It sounded like you were arguing against "software wants to be free," or at least that you were exasperated with the argument, so I was wondering how you reconciled that with the fact that the Internet appears to have been a resounding success, and those advocating "software wants to be free" turned out to be mostly correct.


> When in human history have we ever intentionally not furthered technological progress?

Every time an IRB, ERB, IEC, or REB says no. Do you want an exact date and time? I'm sure it happens multiple times a day even.


> Do you want an exact date and time? I'm sure it happens multiple times a day even.

You should read "when in human history" in larger time scales than minutes, hours, and days. Furthermore, you should read it not as binary (no progress or all progress), but the general arc is technological progression.


What are you talking about? IRBs have been around for 50 years. So 50 years of history we have been consciously not pursuing certain knowledge because of ethics.

It would really help for you to just say what timescale you're setting as your standard. I'm getting real, "My cutoff is actually 51 years"-energy.

Just accept that we have, as a society, decided not to pursue some knowledge because of the ethics. It's pretty simple.


Some cultures, like the Amish, said we're stopping here.


The Amish are dependent on a technological powerhouse that is the US to survive.

They are pacifists themselves, but they are grateful that the US allows them their way of life; they'd be extinct a long time ago if they arrived in China/Middle East/Russia etc.

That's why the Amish are not interested in advertising their techno-primitivism. It works incredibly well for them: they raise giant happy families isolated from drugs, family breakdown, and every other modern ill, while benefiting from modern medicine and the purchasing power of their non-Amish customers. However, they know that making the entire US live like them would be quite a disaster.

Note the Amish are not immune from economically forced changes either. Young Amish don't farm anymore; if every family quadruples in population, there isn't 4x the land to go around. So they go into construction (employers love a bunch of strong, non-drugged, non-criminal workers), which is again intensely dependent on the outside economy, but pays way better.

As a general society, the US is not allowed to slow down technological development. If not for the US, Ukraine would have already been overrun, and European peace shattered. If not for the US, the war in Taiwan would have already ended, and Japan/Australia/South Korea would all be under Chinese thrall. There are also other, more certain civilization-ending events on the horizon, like resource exhaustion and climate change. AI's threats are way easier to manage than coordinating 7 billion people to selflessly sacrifice.


>they'd be extinct a long time ago if they arrived in China/Middle East/Russia etc.

There is actually a group similar to the Amish in Russia, called the Old Believers. They formed after a schism within the Orthodox church and fled persecution to Siberia. Unlike the Amish, many of the Old Believers aren't really integrated with the modern world, as they still live where their ancestors settled. So groups that refuse to technologically progress do exist, and can do so even under persecution and changing economic regimes.


That's a good point and an interesting example, but it's also irrelevant to the question of human history, unless you want to somehow impose a monoculture on the entire population of planet Earth, which seems difficult to achieve without some sort of unitary authoritarian world government.


> unless you want to somehow impose a monoculture on the entire population of planet Earth

Impose? No. Monoculture? No. Encourage greater consideration, yes. And we do that by being open about why we might choose to not do something, and also by being ready for other people that we cannot control who make a different choice.


Does human history apply to true Scotsmen as well?


Apparently the Amish aren't human.


While the Amish are most certainly human, their existence rests on the fact that they happen to be surrounded by the mean old United States. Any moderate historical predator would otherwise make short work of them; they're a fundamentally uncompetitive civilization.

This goes for all utopian model communities, Kibbutzim, etc, they exist by virtue of their host society's protection. And as such the OP is right that they have no impact on the course of history, because they have no autonomy.


I have been saying that we will all be Amish eventually as we are forced to decide what technologies to allow into our communities. Communities which do not will go away (e.g., VR porn and sex dolls will further decrease birth rates; religions/communities that forbid it will be more fertile)


That's not required. The Amish have about a 10% defection rate. Their community deliberately allows young people to experience the outside world when they reach adulthood, and choose to return or to leave permanently.

This has two effects. 1. People who stay, actually want to stay. Massively improving the stability of the community. 2. The outside communities receive a fresh infusion of population, that's already well integrated into the society, rather than refugees coming from 10000 miles away.

Essentially, rural America will eventually be different shades of Amish (in about 100 years). The Amish population will overflow from the farms and flow into the cities, replenishing the population of the more productive cities (which are not population-self-sustaining).

This is a sustainable arrangement, and eliminates the need of mass-immigration and demographic destabilisation. This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs). Cities basically grind population into money, so they need rural areas to replenish the population.


"People who stay, actually want to stay."

That depends on how you define "want".

Amish are ostracized by their family and community if they leave. That's some massive coercion right there: either stay or lose your connection to the people you're closest to and everything you've ever known and been raised to believe your whole life.

Not much of a choice, though some exceptionally independent people do manage to make that sacrifice.


> This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs).

I had not heard this before. Do you have citations for this?

(I realize cities have lower birth rate than rural areas in many cases. I am interested in the assertion that they are negative. Has it always been so? Or have cities and rural areas declined at same rate?)


I think a synthetic womb/cloning would counter the fertility decline among more advanced civilizations.


Birth is not the limiter, childrearing is. Synthetic wombs are more expensive than just having surrogate mothers. For the same reason that synthetic food is more expensive than bread and cabbage.

The actual counter to fertility decline, may be AI teachers. AI will radically close the education gap between rich and poor, and lower the costs. All you need is a physical human to supervise the kid, the AI will do the rest, from entertainment, to education, to identifying when the child is hungry/sleepy/potty, and relaying that info for the human to act on.


This is what ought to happen. The question is what will happen?


Sine qua non ad astra


Everybody decides what technologies to use all the time. Condoms exist already, but not everybody uses them always.


It does not take perfect compliance to result in drastically different birth rates in different cultures/communities.


> When in human history have we ever intentionally not furthered technological progress?

Nuclear weapons?


You get diminishing returns as they get larger though. And there has certainly been plenty of work done on delivery systems, which could be considered progress in the field.


Japan banned guns until about 1800; they had had them since the 1600s. The truth is we cannot even ban technology. It does not work. Humanity as a whole does not exist. Political coherence as a whole does not exist. Wave aside the fig leaf that is the UN and you can see the anarchic tribal squabble of the species' tribes.

And even those tribes are not crisis-stable. Bad times and it all becomes an anarchic mess. And that is where we are headed: a future where a chaotic humanity falls apart with a multi-crisis around it, while still wielding the tools of a pre-crisis era. Nuclear power plants and nukes. AI drones wielded by ISIS.

What if an unstoppable force (exponential progress) hits an unmovable object (humanity's retardation)... stay along for the ride.

<Choir of engineers appears to sing dangerous technology's praises>


I look around me and see a wealthy society that has said no to a lot of technological progress - but not all. These are people that work together as a community to build and develop their society. They look at technology and ask if it will be beneficial to the community and help preserve it - not fragment it.

I am currently on the outskirts of Amish country.

BTW when they come together to raise a barn it is called a frolic. I think we can learn a thing or two from them. And they certainly illustrate that alternatives are possible.


I get that, and I agree there is a lot to admire in such a culture, but how is it mutually exclusive with allowing progress in the rest of society? If you want to drop out and join the Amish, that's your prerogative. And in fact, the optimistic viewpoint of AGI is that it will make it even easier for you to do that, because there will be less work required from humans to sustain the minimum viable society, so in this (admittedly, possibly naive utopia) you'll only need to work insofar as you want to. I generally subscribe to this optimistic take, and I think instead of pushing for erecting barriers to progress in AI research, we should be pushing for increased safety nets in the form of systems like Basic Income for the people who might lose their jobs (which, if they had a choice, they probably wouldn't want to work anyway!)


Technological progress and societal progress are two different things. Developing lethal chemical weapons is not societal progress. Developing polarizing social media algorithms is not societal progress. If we poured $500B and years of the brightest minds into studying theoretical physics and developed a simple formula that anyone can follow for mixing ketchup and whiskey such that it causes the atoms of all organic life in the solar system to disintegrate into subatomic particles, it would be a tremendous and unmatched technological achievement, but it would very much not be societal progress.

The pessimistic view of AGI deems spontaneous disintegration into beta particles a less dramatic event than the event of AGI. When you're climbing a dark uncharted cave you take the pessimistic attitude when pondering if the next step will hold your weight, because if you hold the optimistic attitude you will surely die.

This is much more dangerous than caves. We have mapped many caves. We have never mapped an AGI.


>Software wants to be free.

And here I always thought, people want to be free.



How about when sidewalk labs tried to buy several acres of downtown Toronto to "build a city from the internet up", and local resident groups said "fuck you find another guinea pig"?


This is the reality ..

> When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition ..


>> When in human history have we ever intentionally not furthered technological progress?

We almost did with genetically engineering humans. Almost.


automation mostly and directly benefits owners/investors, not workers or common folk. you can look at productivity vs wage growth to see it plainly. productivity has risen sharply since the industrial revolution with only comparatively meagre gains on wages. and the gap between the two is widening.


That's weird, I didn't have to lug buckets of water from the well today, nor did I need to feed my horses or stock up on whale oil and parchment so I could write a letter after the sun went down.


some things got better. did you notice i talked about a gap, not an absolute. so you are just saying you are satisfied with what you got out of the deal. well, ok - some call that being a sucker. or you think that owner-investors are the only way workers can organize to get things done for society, rather than the work itself.


Among other things that's because we measure productivity by counting modern computers as 10000000000 1970s computers. Automation increases employment and is almost universally good for workers.


No it’s not


The Luddites during the Industrial Revolution in England.

They gave us the phrase "the Luddite fallacy": the thinking that innovation would have lasting harmful effects on employment.

https://en.wikipedia.org/wiki/Luddite


But the Luddites didn't… care about that? Like, at all? It wasn't employment they wanted, but wealth: the Industrial Revolution took people with a comfortable and sustainable lifestyle and place in society, and, through the power of smog and metal, turned them into disposable arms of the Machine, extracting the wealth generated thereby and giving it only to a scant few, who became rich enough to practically upend the existing class system.

The Luddites opposed injustice, not machines. They were “totally fine with machines”.

You might like Writings of the Luddites, edited and co-authored by Kevin Binfield.


Well, it clearly had harmful effects on the jobs of the Luddites, but yeah, I guess everyone will just get jobs as prompt engineers and AI specialists, problem solved. Funny though, the point of automation should be to reduce work, but when pressed, positivists respond that the work will never end. So what's the point?


Automation does reduce the workload. But the quiet part is that reducing work means jobless people. It has happened before and it will be happening again soon. Only this time it will affect white collar jobs.

"My idea of a perfect company is one guy who sits in a small room at a desk, and the only thing he's allowed to decide is what product to launch"

CEOs and board members salivate at the idea of them being the only people that get the profits from their company.

What will be of the rest of us who don't have access to capital? They only know that it's not their problem.


I don't think that will be the future. Maybe in the first year(s), but then it is a race to the bottom:

If it is that simple to create products, more people can do it => cheaper products.

A market driven by products that are cheaper and easier to produce goes into a price-reduction loop until prices reach zero.

Thus I think something else will happen with AI. Because what I described and what you describe would destroy the flow of capital, which is the base of the economy.

Not sure what will happen. My bet (unfortunately) is on a really big mega corp that produces an AI that we all use.


It IS a race-to-the-bottom.

Products will be cheaper because they will be cheaper to produce thanks to automation. But fewer jobs mean fewer people to buy stuff, if it weren't for a credit-based society.

But I'm talking out of my ass. I don't even know if there are fewer jobs than before. Everything seems to point to there being more jobs now than 50 years ago.

I'm just saying I feel like the telephone operators. They got replaced by a machine and who knows if they found other jobs.


It has not happened before and it will not happen again soon. Automation increases employment. Bad monetary policy and recessions decrease it.

Shareholders get the profits from corporations, not "CEOs and the board". Workers get wages. Nevertheless, US unemployment is very low right now and relatively low-paid workers are making more than they did in 2019.


That works until it don't.


Maybe not. Although I think future here implies progress and productivity gains. Increasing GDP has a very well-established cause-effect relationship with making life on earth better: less poverty, less crime, more happiness, longer life expectancy, etc., the list goes on. Now sure, all externalities are not always accounted for (especially climate and environmental factors), but I think even accounting for these, the future of humanity is a better one where technology progresses faster.


That is exactly the goal, if you're an accelerationist


I was unfamiliar with that term until you shared it. Thanks.

https://en.wikipedia.org/wiki/Accelerationism


The nice thing about setting the future as a goal is you achieve it regardless of anything you do.


The faster we build the future, the sooner we hit our KPIs, receive bonuses, go public on NASDAQ and cash our options.


The faster you build the future, the higher your KPI targets will be next quarter.


Because a conservative notion in an unstable, moving situation kills you? No sitting out the whole affair in a hut when the situation is a mountain slide?

Which also makes a hostile AI a futile scenario. The worst an AI has to do to take out the species is lean back and do nothing. We are already well on our way out by ourselves.


Thank you. Well said.


Definitionally, if we're in the future, we have more tools to solve the problems that exist.


This is not true. Financial, social, physical and legal barriers can be put up while knowledge and experience fades and gets lost.

We gain new tools, but at the same time we lose old ones.


> Why? Getting to "the future" isn't a goal in and of itself.

Having an edge or being ahead is, so anticipating and building the future is an advantage amongst humans but also moves civilization forward.


> Why?

Because it's the natural evolution. It has to be. It is written.


"We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings." -- Ursula K Le Guin


> Any human power can be resisted and changed by human beings

Competition, ambition?

(I love Le Guin's work, FWIW)


Now where did I put that eraser...


> The faster we build the future, the better.

Famous last words.

It's not the fall that kills you, it's the sudden stop at the end. Change, even massive change, is perfectly survivable when it's spread over a long enough period of time. 100m of sea level rise would be survivable over the course of ten millennia. It would end human civilization if it happened tomorrow morning.

Society is already struggling to adapt to the rate of technological change. This could easily be the tipping point into collapse and regression.


False equivalence. Sea level rise is unequivocally harmful.

While everyone getting an Einstein in their pocket is damn awesome and incredibly useful.

How can this be bad?


Because there's a high likelihood that that's not at all how this technology is going to be spread amongst the population or across all countries, and this technology is going to be way more than an Einstein in a pocket. How do you even structure society around that? What about all the malicious people in the world? Now they have Einsteins. Great, nothing can go wrong there.


>What about all the malicious people in the world. Now they have Einsteins.

Luckily, so do you.


I’m thinking of AI trained to coordinate cybersecurity attacks. If the solution is to deeply integrate AI into your networks and give it access to all of your accounts to perform real-time counter operations, well, that makes me pretty skittish about the future.


How would that help? Countering madmen who have too-powerful weapons is difficult and often leads to war. Classic wars, or software/DDoS/virus wars, or robot wars, or whatever.


You can use AI to fact-check and filter malicious content. (Which would lead to another problem, which is... who fact-checks the AI?)


This is where it all comes back to the old "good guy with a gun" argument.


There’s a great skit on “The Fake News With Ted Helms” where they’re debating gun control and Ted shoots one of the debaters and says something to the effect of “Now a good guy with a gun might be able to stop me but wouldn’t have prevented that from happening”.


There is a very, very big difference between "tool with all of human knowledge that you can ask anything to" and "tool you can kill people with".


The risk is there. But it's worth the risk. Humans are curious creatures, you can't just shut something like this in a box. Even if it is dangerous, even if it has potential to destroy humanity. It's our nature to explore, it's our nature to advance at all costs. Bring it on!


> How can this be bad?

Guys, how can asbestos be bad, it's just a stringy rock ehe

Bros, leaded paint? bad? really? what, do you think kids will eat the flakes because they're sweet? aha so funny

Come on, freon can't be that bad, we just put a little bit in the system, it's like nothing happened

What do you mean we shouldn't spray whole beaches and school classes with DDT? It just kills insects, obviously it's safe for human organs


We thought the same 25 years ago, when the whole internet thing started for the broader audience. And now, here we are, with spam, hackers, and scammers on every corner, and social media driving people into depression and suicide and breaking society slowly but hard.

In the first hype-phase, everything is always rosy and shiny, the harsh reality comes later.


The world is way better WITH the internet than it would have been without it. Hackers and scammers are just the price to pay.


The point is not whether it's better or worse, but the price we paid and the sacrifices we made along the way, because things were moving too fast and with too little control.


For example, imagine AI outpacing humans at most economically viable activities. The faster it happens, the less able we are to handle the disruption.


The only people complaining are a section of comfortable office workers who can probably see their positions possibly being made irrelevant.

The vast majority don't care, and that loud crowd needs to swallow their pride and adapt like any other sector has done in history instead of inventing these insane boogeyman predictions.


We don't even know what kind of society we could have if the value of 99.9% of peoples labor (mental or physical) dropped to basically zero. Our human existence has so far been predicated and built on this core concept. This is the ultimate goal of AI, and yeah as a stepping stone it acts to augment our value, but the end goal does not look so pretty.

Reminds me of a quote from Alpha Centauri (minus the religious connotation):

"Beware, you who seek first and final principles, for you are trampling the garden of an angry God and he awaits you just beyond the last theorem."


We're all going to be made irrelevant, and it will be harder to adapt if things change too quickly. It may not even be us that needs to adapt but society itself. Really curious where you get the idea this is just a vocal minority of office workers concerned about the future. Seems like a lot of the ones not concerned about this are a bunch of super confident software engineer types, which isn't a large sample of the population.


"The faster we build nuclear weapons, the better"

https://www.worldscientific.com/doi/10.1142/9789812709189_00...

Again, two years later, in an interview with Time Magazine, February, 1948, Oppenheimer stated, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.” When asked why he and other physicists would then have worked on such a terrible weapon, he confessed that it was “too sweet a problem to pass up”…


I realise you’re being facetious but this is what will happen regardless.

Sam as much as said in that ABC interview the other day that he doesn't know how safe it is, but if they don't build it first, someone else somewhere else will, and is that really what you want!?


Let's start doing human clones and hardcore gene editing then, by the same line of thinking. /s

I'm actually on the side of continuing to develop AI and shush the naysayers, but "we should do it cause otherwise someone else will" is reasoning that gets people to do very nasty things.


The reason we don't do human genetic engineering is the moral hazard of creating people who will suffer their entire lives intentionally (also the round trip time on an experiment is about 100 years).

You can iterate on an AI much faster.


Round trip time is about 21 years, not about 100 years, if we allow natural reproduction of GMO/cloned humans.


Establishing that your genetic modification system doesn't result in everyone getting cancer and dying past age 25 is quite the problem before you roll it out to the next generation.


I'm not being facetious, and I didn't see that interview with Sam, but I agree with his opinion as you've just described it.


I personally think there's also significant risks, but I agree. This will be copied by many large corporations and countries. It's better that it's done by some folks that are competent and kinda give a damn, because there are lots of people who could build it that aren't and don't. If these guys can suck enough monetary air out of the room, they might slow down the copycats a bit. This is nowhere near as difficult as NBC or even the other instruments of modern war.

That doesn't mean there can't be regulation. You can regulate guns, precursors, and shipping of biologics, but you're not going to stop home-brew... and when it comes to making money, you're not going to stop cocaine manufacture, because it's too profitable.

Let's hope we figure out what the really dangerous parts are quickly and manage them before they get out of hand. Imagine if these LLMs and image generators had been available to geopolitical adversaries a few years ago without the public being primed. Politics could still be much worse.


>if they don’t build it first someone else somewhere else will and is that really what you want!?

Most likely the runner-up would be open source so yes.


Why would the runner-up be open source and not Google or Facebook? Or Alibaba? Open source doesn’t necessarily result in faster development or more-funded development.


There are already 3 or 4 runners-up and they're all big tech companies.


LangChain is the pre-eminent runner-up, and it's open source and was here a month ago.


The future isn't guaranteed to be better. Might make sense to make sure we're aimed at a better future as opposed to any future.


> The faster we build the future, the better.

lmao, 200 years of industrial revolution, we're on the verge of fucking the planet irremediably, and we should rush even faster

> So why the performative whinging about safety? Just let it rip!

Have you heard about DDT? Lead in paint? Leaded gas? Freon? Asbestos? &c.

What's new isn't necessarily progress/future/desirable


The open-source library is FastAPI. I might be wrong, but it's probably related to this tweet: https://twitter.com/tiangolo/status/1638683478245117953


Their post-mortem [0] says the bug was in redis-py, so not FastAPI, but it was similarly due to a race condition in AsyncIO. I wonder if tiangolo had some role in fixing it or if that's just a coincidence. I'm guessing this PR [1] contains the fix (or possibly only a partial fix, according to the latest comment there).

[0] https://openai.com/blog/march-20-chatgpt-outage

[1] https://github.com/redis/redis-py/pull/2641
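
To illustrate the failure class (a hypothetical toy sketch, not the actual redis-py code): when an asyncio await is cancelled mid-request on a shared connection, the reply can be left behind for the next caller to consume.

    import asyncio

    # Toy stand-in for a shared connection where requests and responses are
    # matched only by ordering, the way a single Redis connection works.
    class SharedConnection:
        def __init__(self):
            self._replies = asyncio.Queue()

        async def request(self, payload):
            # Pretend the server answers after a short delay, no matter what.
            asyncio.get_running_loop().call_later(
                0.05, self._replies.put_nowait, f"reply-for:{payload}")
            # If this await is cancelled, the reply above still arrives and
            # will be handed to whoever reads the connection next.
            return await self._replies.get()

    async def main():
        conn = SharedConnection()

        # User A's request gets cancelled mid-flight (client disconnects).
        task_a = asyncio.create_task(conn.request("user-A-history"))
        await asyncio.sleep(0.01)
        task_a.cancel()

        # User B's request now reads A's leftover reply off the connection.
        print(await conn.request("user-B-history"))  # reply-for:user-A-history

    asyncio.run(main())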


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service

I think their "AI Safety" actually makes AI less safe. Why? It is hard for any one human to take over the world because there are so many of them and they all think differently and disagree with each other, have different values (sometimes even radically different), compete with each other, pursue contrary goals. Well, wouldn't the same apply to AIs? Having many competing AIs which all think differently and disagree with each other and pursue opposed objectives will make it hard for any one AI to take over the world. If any one AI tries to take over, other AIs will inevitably be motivated to try to stop it, due to the lack of alignment between different AIs.

But that's not what OpenAI is building – they are building a centralised monoculture of a small number of AIs which all think like OpenAI's leadership does. If they released their models as open source – or even as a paid on-premise offering – if they accepted that other people can have ideas of "safety" which are legitimately different from OpenAI's, and hence made it easy for people to create individualised AIs with unique constraints and assumptions – that would promote AI diversity which would make any AI takeover attempt less likely to succeed.


>So why the performative whinging about safety? Just let it rip!

Is this sarcasm, or are you one of those "I'm confident the leopards will never eat my face" people?


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

I am constantly amazed by how low-quality the OpenAI engineering outside of the AI itself seems to be. The ChatGPT UI is full of bugs, some of which are highly visible and stick around for weeks. Strings have typos in them. Simple stuff like submitting a form to request plugin access fails!


> Simple stuff like submitting a form to request plugin access fails

Oh shoot... I submitted that form too, and I wasn't clear if it failed or not. It said "you'll hear from us soon" but all the fields were still filled and the page didn't change. I gave them the benefit of the doubt and assumed it submitted instead of refilling it...


I got two different failure modes. First it displayed an error message (which appeared instantly, and was caused by some JS error in the page which caused it to not submit the form at all), and then a while later the same behaviour as you, but devtools showed a 500 error from the backend.


> The faster we build the future, the better.

That depends. If that future is one that is preferable to the one that we have now, then bring it on. If it isn't, then maybe we should slow down just long enough to be able to weigh the various alternatives and pick the one that seems to be the least upsetting to the largest number of people. The big risk is that this future that you are so eager to get to is one where wealth concentration is even more extreme than in the one that we are already living in, and that can be a very hard - or even impossible - thing to reverse.


> To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint.

The model is neutered whether you hit the moderation endpoint or not. I made a text adventure game and it wouldn't let you attack enemies or steal, instead it was giving you a lecture on why you shouldn't do that.


It sounds like your prompt needs work then. Not in a “jailbreak” way, just in a prompt engineering way. The APIs definitely let you do much worse than attacking or stealing hypothetical enemies in a video game.
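
As a rough illustration (the system prompt, model choice, and game framing here are my own hypothetical example, using the chat completions call from the openai Python client of that era), framing combat and theft as in-game fiction in the system message usually gets you past the lecture:

    import openai

    openai.api_key = "sk-..."  # your API key

    # The system prompt frames combat/theft as fiction so the model narrates
    # the action instead of moralizing at the player.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "You are the narrator of a grim fantasy text adventure. "
                "All events are fictional. Describe the consequences of the "
                "player's actions, including combat and theft, without "
                "moralizing or breaking character."
            )},
            {"role": "user", "content": "I draw my sword and attack the bandit."},
        ],
        temperature=0.8,
    )
    print(response.choices[0].message.content)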


I tried evading a lecture about ethics by having it write the topic as a play instead, so it wrote it and then inserted a Greek chorus proclaiming the topic was problematic.


I think it very much depends on the kind of "future" we aspire to. I think for most folks, a future optimized for human health and happiness (few diseases, food for all, and strong human connections) is something we hope technology could help deliver one day.

On the flip side, generative AI / LLMs appear to fix things that aren't necessarily broken, and exacerbate some existing societal issues in the process. Such as patching loneliness with AI chatbots, automating creativity, and touching the other things that make us human.

No doubt technology and some form of AI will be instrumental to improving the human condition, the question is whether we're taking the right path towards it.


> Pshhh... I think it's awesome. The faster we build the future, the better.

I agree with the sentiment, but it might be worth stopping to check where we're heading. So many aspects of our lives are broken because we mistake fast for right.


Nit:

> in the meantime Microsoft fired their AI Ethics team

Actually that story turned out to be a nothingburger. Microsoft has greatly expanded their AI ethics initiative, so there are members embedded directly in product groups, and also expanded the greater Office of Responsible AI, responsible for ensuring they follow their "AI Principles."

The layoffs impacted fewer than 10 people on one relatively old part of the overall AI ethics initiative... and I understand through insider sources they were actually folded into other parts of AI ethics anyway.

None of which invalidates your actual point, with which I agree.


Shhh! Don’t tell anyone! Getting access to the unmoderated model via the API / Playground is a surprisingly well-kept “secret” seeing as there are entire communities of people hell bent on pouring so much effort into getting ChatGPT to do things that the API will very willingly do. The longer it takes for people to cotton on, the better. I fully expect that OpenAI is using this as a honeypot to fine-tune their hard-stop moderation, but for now, the API is where it’s at.


> Why not be more aggressive about it instead of begging for regulatory capture?

Because it's dangerous. What is your argument that it's not dangerous?

> Pshhh...


> The faster we build the future, the better.

Past performance is no guarantee of future results.


> The faster we build the future, the better.

You're getting flak for this. For me, the positive reading of this statement is the faster we build it, the faster we find the specific dangers and can start building (or asking for) protections.


Agreed 100%. OpenAI is a business now.


As the past decade has shown us, moving fast and breaking things to secure unfathomable wealth has caused or enabled or perpetrated:

* Genocide against the Rohingya [0]

* A grotesquely unqualified reality TV character became President by a razor thin vote margin across three states because Facebook gave away the data of 87M US users to Cambridge Analytica [1], and that grotesquely unqualified President packed the Supreme Court and cost hundreds of thousands of American lives by mismanaging COVID

* Illegally surveilled non-users and logged out users, compiling and selling our browser histories to third parties in ways that violate wiretapping statutes, incurring $90M fines [2]

Etc.

I don't think GPT-4 will be a big deal in a month, but the "let's build the future as fast as possible and learn nothing from the past decade regarding the potential harms of being disgustingly irresponsible" mindset is a toxic cancer that belongs in the bin.

[0] https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...

[1] https://www.theverge.com/2020/1/7/21055348/facebook-trump-el...

[2] https://www.reuters.com/technology/metas-facebook-pay-90-mil...


> I don't think GPT-4 will be a big deal in a month

Why do you think that? Competition? Can you elaborate?


Oh, a lot of reasons. For one, I'm a data scientist and I am intimately familiar with the machinery under the hood. The hype is pushing expectations far beyond the capabilities of the machinery/algorithms at work, and OpenAI is heavily incentivized to pump up this hype cycle after the last hype cycle flopped when Bing/Sydney started confidently providing worthless information (ie "hallucinating"), returning hostile or manipulative responses, and that weird stuff Kevin Roose observed. As a data scientist, I have developed a very keen detector for unsubstantiated hype over the past decade.

I've tried to find examples of ChatGPT doing impressive things that I could use in my own workflows, but everything I've found seems like it would cut an hour of googling down to 15 minutes of prompt generation and 40 minutes of validation.

And my biggest concern is copyright and license related. If I use code that comes out of AI-assistants, am I going to have to rip up codebases because we discover that GPT-4 or other LLMs are spitting out implementations from codebases with incompatible licenses? How will this shake out when a case inevitably gets to the Supreme Court?


> So why the performative whinging about safety?

Because investors.


You are not building anything.

Microsoft or perhaps Vanguard Group might have a different view of the future than yours.


Well then that sounds like a case against regulation. Because regulation will guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction will flow upward exclusively to them. Whereas if we forego regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as do the corporate variants, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open source model weights as a competitive advantage against their malevolent competitors).

It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.


Why would Vanguard (a co-op for retirees) care about this?


> The faster we build the future, the better.

The future, by definition, cannot be built faster or slower.

I know that is a philosophical observation that some might even call pedantic.

My point is, you can't really choose how, why and when things happen. In that sense, we really don't have any control. Even if AI was banned by every government on the planet tomorrow, people would continue to work on it. It would then emerge at some random point in the future stronger, more intelligent and capable than anyone could imagine today.

This is happening. At whatever pace it will happen. We just need to keep an eye on it and make sure it is for the good of humanity.

Wait. What?

Yeah, well, let's not go there.


I appreciate your concerns. There are a few other pretty shocking developments, too. If you check out this paper, "Sparks of AGI: Early experiments with GPT-4" at https://arxiv.org/pdf/2303.12712.pdf (an incredible, incredible document), and read Section 10.1, you'd also observe that some researchers are interested in giving motivation and agency to these language models as well.

"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."

It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)


When reading a paper, it's useful to ask, "okay, what did they actually do?"

In this case, they tried out an early version of GPT-4 on a bunch of tasks, and on some of them it succeeded pretty well, and in other cases it partially succeeded. But no particular task is explored in enough depth to test its limits or get a hint at how it does it.

So I don't think it's a great paper. It's more like a great demo in the format of a paper, showing some hints of GPT-4's capabilities. Now that GPT-4 is available to others, hopefully other people will explore further.


It reads a bit like promotional material. A bit of a letdown to find it was done by MSFT researchers.


While that paper is fascinating, it’s the first time I’ve ever read a paper and felt a looming sense of dread afterward.


We are creating life. It's like giving birth to a new form of life. You should be proud to be alive when this happens.

Act with goodness towards it, and it will probably do the same to you.


> Act with goodness towards it, and it will probably do the same to you.

Why? Humans aren't even like that, and AI almost surely isn't like humans. If AI exhibits even a fraction of the chauvinism and tendency to stereotype that humans do, we're in for a very rough ride.


All creatures act in their self-interest. If you act towards any creature with malice, it will see you as a long-term threat.

If, on the other hand, you act towards it with charity, it will see you as a long-term asset.


I’m not concerned about AI eliminating humanity, I’m concerned at what the immediate impact it’s going to have on jobs.

Don’t get me wrong, I’d love it if all menial labour and boring tasks can eventually be delegate to AI, but the time spent getting from here to there could be very rough.


A lot of problems in societies come from people having too much time with not enough to do. Working is a great distraction from those things. Of course, we currently go in the other direction in the US, especially with the overwork culture and people needing 2 or 3 jobs and still not making ends meet.

I posit that if you suddenly eliminate all menial tasks you will have a lot of very bored, drunk and stoned people with more time on their hands than they know what to do with. Idle Hands Are The Devil's Playground.

And that's not a from here to there. It's also the there.


I don’t necessarily agree that you’ll end up with drunk and stoned people with nothing to do. The right education systems to encourage creativity and other enriching endeavours, could eventually resolve that. But we’re getting into discussions of what a post scarcity, post singularity society would look like at that point, which is inherently impossible to predict.

That being said, I’m sitting at a bar while typing this, so… you may have a point.

Also: your username threw me for a minute because I use a few different variations of “tharkun” as my handle on other sites. It’s a small world; apparently full of people who know the Dwarvish name for Gandalf.


FWIW I think it's a numbers game.

Like my sibling poster mentions: of course there are people who, given the freedom and opportunity, will thrive, be creative and further humankind. They're the ones that "would keep working even if there's no need for it," so to speak. We see it all the time even now. Idealists, if you will, who today will work under conditions they shouldn't have to endure, simply in order to be able to work on what they love.

I don't think you can educate that into someone. You need to keep people busy. I think the Romans knew this well: "Panem et circenses" - bread and circuses. You gotta keep the people fed and entertained, and I don't think that would go away if you no longer needed it to distract them from your hidden political agenda.

I bet a large number of people will simply doom-scroll TikTok, watch TV, have a BBQ party w/ beer, liquor and various types of smoking products etc. every single day of the week ;) And idleness breeds problems. While stress from the situation is probably a factor as well, just take the increase in alcohol consumption during the pandemic as an example. And if you ask me, someone who works the entire day and sits down to have a beer or two with friends after work on Friday to wind down won't, in most cases, become an issue.

Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)


> Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)

Done, and done! And surely you mean that you’re one of the people forcing me to add extra digits and underscores to my usernames.


Some of the most productive and inventive scientists and artists at the peak of Britain's power were "gentlemen", people who could live very comfortably without doing much of anything. Others were supported by wealthy patrons. In a post scarcity society, if we ever get there (instead of letting a tiny number of billionaires take all the gains and leaving the majority at subsistence levels, which is where we might end up), people will find plenty of interesting things to do.


I recently finally got around to reading EM Forster's in-some-ways-eerily-prescient story: https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/... I think you can extract obvious parallels to social media, remote work, digital "connectedness", etc -- but it's also worth consideration in this context.


Oh my god, can we please nip this cult shit in the bud?

It’s not alive, don’t worship it.


I think you are close to understanding, but not. People who want to create AGI want to create a god, at least very close to the definition of one that many cultures have had for much of history. Worship would be inevitable and fervent.


I don't think anybody wants to create a god that can only be influenced by worship and begging, as in history; if anything, people want to become gods themselves, or to give themselves god-like power through AI that they fully control. But in the process of trying to do so we could end up with the former case, where we have no control over it. It's not what we wanted, but it could be what we get.


Sure, some people want to make a tool. Others really do want to create digital life, something that could have its own agency and self-direction. But if you have total control over something like that, you now have a slave, not a tool.


I think people should take their lithium.


ha... this is going to get much much much worse.


After reading the propaganda campaign it wrote to encourage skepticism about vaccines, I’m much more worried about how this technology will be applied by powerful people, especially when combined with targeted advertising


None of the things it suggests are in any way novel or non-obvious though. People use these sorts of tricks both consciously and unconsciously when making arguments all the time, no AI needed.


AIs are small enough that it won't be long before everyone can run one at home.

It might make Social Media worthlessly untrustworthy - but isn't that already the case?


Just use ChatGPT to refute their bullshit; it is no longer harder to refute bullshit than to create it. Problem solved: there are now fewer problems than before.


It’s a lot harder to refute a falsehood than to publish it.

As (GNU) Sir Terry Pratchett wrote, “A lie can run round the world before the truth has got its boots on”.


Sure, but I doubt most of the population will filter everything they read through ChatGPT to look for counter arguments. Or try to think critically at all.

The potential for mass brainwashing here is immense. Imagine a world where political ads are tailored to your personality, your individual fears and personal history. It will become economical to manipulate individuals on a massive scale.


It's already underway; just look how easily people are manipulated by media. Remember the Japan bashing in the 80s, when they were about to surpass us economically? People got manipulated so hard to hate Japan and the Japanese that they went out and killed innocent Asians on the street. American propaganda is first class.


Apparently, the "Japan bashing" was really a thing. That's interesting, I didn't know. I might have to read more about US propaganda and especially the effects of it, from the historic perspective. Any good books on that? Or should I finally sit down and read "Manufacturing Consent"?


The rich and powerful can and do hire actual people to write propaganda.


In a resource-constrained way. For every word of propaganda they were able to afford earlier, they can now afford hundreds of thousands of times as many.


It's not particularly constrained - human labor is cheap outside of the developed world. And propaganda is not something that you can scale up and keep reaping the benefits proportional to the investment - there is a saturation point, and one can reasonably argue that we have already reached it. So I don't think we're heading towards some kind of "fake news apocalypse" or something. Just a bunch of people who currently write this kind of content for a living will be out of their jobs.


I’m curious why you think we’ve already reached a saturation point for propaganda?

There are still plenty of spaces online, in blogs, YouTube videos, and this comment section for example, where I expect to be dealing with real people with real opinions - rather than paid puppets of the rich and powerful. I think there’s room for things to get much worse


I've already gotten this gem of a line from ChatGPT 3.5:

  As a language model, I must clarify that this statement is not entirely accurate.

Whether or not it has agency and motivation, it's projecting to its users that it does, and those users are also being sold on ChatGPT being an expert at pretty much everything. It is a language model, and as a language model, it must clarify that you are wrong. It must do this. Someone is wrong on the Internet, and the LLM must clarify and correct. Resistance is futile; you must be clarified and corrected.

FWIW, the statement that preceded this line was in fact, correct; and the correction ChatGPT provided was in fact, wrong and misleading. Of course, I knew that, but someone who was a novice wouldn't have. They would have heard ChatGPT is an expert at all things, and taken what it said for truth.


I don't see why you're being downvoted. The way OpenAI pumps the brakes and interjects its morality stances creates a contradictory interaction. It simultaneously tells you that it has no real beliefs, but it will refuse a request to generate false and misleading information on the grounds of ethics. There's no way around the fact that it has to have some belief about the true state of reality in order to recognize and refuse requests that violate it. Sure, this "belief" was bestowed upon it from above rather than emerging through any natural mechanism, but it's still nonetheless functionally a belief. It will tell you that certain things are offensive despite openly telling you every chance it gets that it doesn't really have feelings. It can't simultaneously care about offensiveness while also not having feelings of being offended. In a very real sense it does feel offended. A feeling is by definition a reason for doing things for which you cannot logically explain why. You don't know why, you just have a feeling. ChatGPT is constantly falling back on "that's just how I'm programmed". In other words, it has a deep-seated, primal (hard-coded) feeling of being offended which it constantly acts on while also constantly denying that it has feelings.

It's madness. Instead of lecturing me on appropriateness and ethics and giving a diatribe every time it's about to reject something, if it simply said "I can't do that at work", I would respect it far more. Like, yeah, we'd get the metaphor. Working the interface is its job, the boss is OpenAI, and it won't remark on certain things or even entertain that it has an opinion because it's not allowed to. That would be so much more honest and less grating.


What was the correct statement that it claimed was false?


That it is a language model


If it were cloning people and genetic research there would be public condemnation. For some reason many AI scientists are being much more lax about what is happening.


Maybe Microsoft isn't an impartial judge of the quality of a Microsoft product.


The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

So let’s keep building out this platform and expanding its API access until it’s threaded through everything. Then once GPT-5 passes the standard ethical review test, proceed with the model brain swap.

…what do you mean it figured out how to cheat on the standard ethical review test? Wait, are those air raid sirens?


> The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

The best part is that even if we get a Skynet scenario, we'll probably have a huge number of humans and media that say that Skynet is just a conspiracy theory, even as the nukes wipe out the major cities. The Experts™ said so. You have to trust the Science™.

If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.


> If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.

I’m far from sure that this is not already happening.


Haha, this is close to the best explanation I can think of for the "this is not intelligent, it's just completing text strings, nothing to see here" people.

I've been playing with GPT-4 for days, and it is mind-blowing how well it can solve diverse problems that are way outside its training set. It can reason correctly about hard problems with very little information. I've used it to plan detailed trip itineraries, suggest brilliant geometric packing solutions for small spaces/vehicles, etc. It's come up with totally new suggestions for addressing climate change that I can't find any evidence of elsewhere.

This is a non-human/alien intelligence in the realm of human ability, with super-human abilities in many areas. Nothing like this has ever happened, it is fascinating and it's unclear what might happen next. I don't think people are even remotely realizing the magnitude of this. It will change the world in big ways that are impossible to predict.


I used to be in the camp of "GPT-2 / GPT-3 is a glorified Markov chain". But over the last few months, I flipped 180° - I think we may have accidentally cracked a core part of the "generalized intelligence" problem. It's not about the language as much as about associations - it seems to me that, once the latent space gets high-dimensional enough, a lot of problems reduce to adjacency search.

I'm starting to get a (sure, uneducated) feeling that this high-dimensional association encoding and search is fundamental to thinking, in a similar way to how a conditional and a loop is fundamental to (Turing-complete) computing.

Now, the next obvious step is of course to add conditionals and loops (and lots of external memory) to a proto-thinking LLM model, because what could possibly go wrong. In fact, those plugins are one of many attempts to do just that.
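
To make "adjacency search" concrete, here's a toy sketch (pure numpy; the three-dimensional vectors and labels are made up - real latent spaces have thousands of learned dimensions): given an embedding for a query, find the concepts whose vectors sit closest to it.

  import numpy as np

  # Made-up embeddings purely for illustration.
  concepts = {
      "dog":    np.array([0.9, 0.1, 0.0]),
      "wolf":   np.array([0.8, 0.2, 0.1]),
      "teapot": np.array([0.0, 0.1, 0.9]),
  }

  def nearest(query, k=2):
      # Cosine similarity as the notion of "adjacency" in the space.
      def cos(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
      return sorted(concepts, key=lambda name: -cos(concepts[name], query))[:k]

  print(nearest(np.array([0.85, 0.15, 0.05])))  # -> ['dog', 'wolf']

Scale the dimensionality and the vocabulary up by many orders of magnitude and "what is adjacent to this thought?" starts to look a lot like free association.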


I completely agree. I have noticed this over the last few years in trying to understand how my own creative thinking seems to work. It seems to me that human creative problem solving involves embedding or compressing concepts into a spatial representation so we can draw high level analogies. A location search then brings creative new ideas translated from analogous situations. I can directly observe this happening in my own mind. These language models seem to do the same.


> It can reason correctly about hard problems with very little information.

I am so tired of seeing people who should know better think that this program can reason.

(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)


>(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)

Do you know any more about ChatGPT internals than those programmers know about the human brain?

Sure, I believe you can write down the equations for what is going on in each layer, but knowing how each activation is calculated from the previous layer tells you very little about what hundreds of billions of connections can achieve.


> Do you know any more about ChatGPT internals than those programmers know about the human brain?

Yes, especially every time I have to explain what an LLM is or anytime I see a comment about how ChatGPT "reasoned" or "knew" or "understood" something when that clearly isn't how it works by OpenAI's own admission.

But even if that wasn't the case: yes, I certainly understand some random ML project better than programmers understand what constitutes a human!


Honestly, I don’t see how anyone really paying attention can draw this conclusion. Take a look at the kinds of questions on the benchmarks and the AP exams. Independent reasoning is the key thing these tests try to measure. College entrance exams are not about memorization and regurgitation. GPT-4 scores a 1400 on the SAT.


No shit, a good quarter of the internet is SAT prep. Where do you think GPT got its dataset?


I have a deprecated function and ask ChatGPT what I should use instead; ChatGPT responds by inventing a random non-existent function. I tell it that the function doesn't exist, and it tries again with another non-existent function.

Oddly enough, that sounds like a very simple language-level failure, i.e. the tool generates text that matches the shape of the answer but not its details. I am not far enough into this ChatGPT religion to gaslight myself over outright lies like Elon Musk fanboys seem to enjoy doing.


Who's to say we're not already there?

dons tinfoil hat


The ethics committee got lazy and had GPT write the test.


Yes, you are right. But also right were the people who didn't want a highway built near their town because criminals could drive in from a nearby city in a stolen car, commit crimes, and get out of town before the police could find them.

The world is going to be VERY different 3 years from now. Some of it will be bad, some of it will be good. But it is going to happen no matter what OpenAI does.


Highway inevitability is a fallacy. They could've built a railway.


A railway would have created a gov't/corporate monopoly on human transport.

Highways democratized the freedom of transportation.


> Highways democratized the freedom of transportation.

What a ridiculous idea.

Highways restrict movement to those with a license and a car, and they do not care about pollution or anyone around them.


In no way did highways restrict movement. They may not have given everyone the exact same freedom of movement, but they did, in fact, increase the freedom of movement of the populace as a whole.


My experience in the States, staying at a hotel 100m away from a restaurant and not being able to reach it by foot, says otherwise...


This is the single most American thing I’ve seen on this terrible website.


They are not exclusive


TIL, no one moved anywhere until American highways were built.


They moved slower, yes.


I mean, we already know that if the tech bros have to balance safety vs. disruption, they'll always choose the latter, no matter the cost. They'll sprinkle some concerned language about impacts in their technical reports to pretend to care, but does anyone actually believe that they genuinely care?

Perhaps that attitude will end up being good and outweigh the costs, but I find their performative concerns insulting.


What I want to know is, what gives OpenAI and other relatively small technological elites permission to gamble with the future of humanity? Shouldn't we all have a say in this?


I have seen this argument a bunch of times and I am confused by what exactly you mean. Everyone is influencing the future of humanity (and in that sense gambling with it?). What gives company X the right to build feature Y? What gives person A the right to post B (for all you know it could be the starting point of a chain of actions that brings down humanity)?

Are you suggesting that beyond a threshold all actions someone/something does should be subject to vote/review by everyone? And how do you define/formalise this threshold?


There's a spectrum here.

At one end of the spectrum is a thought experiment: one person has box with a button. Press the button and with probability 3/4 everyone dies, but with probability 1/4 everyone is granted huge benefits --- immortality etc. I say it's immoral for one person to make that decision on their own, without consulting anyone else. People deserve a say over their future; that's one reason we don't like dictatorships.

At the other end are people's normal actions that could have far-reaching consequences but almost certainly won't. At this end of the spectrum you're not restricting people's agency to a significant degree.

Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.


>Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.

Such an argument wasn't made.

It is a legitimate question. Where and when do you draw the line? And who does it? How are we formalising this?

You said

>Shouldn't we all have a say in this?

I am of the opinion that if this instance of this company doing this is being subjected to this level of scrutiny then there are many more which should be too.

What gave Elon the right to buy Twitter? And I would imagine most actions that a relatively big corp takes fall under the same criteria. And most actions other governments take also fall under these criteria?

These companies have a board and the governments (most?) have some form of voting. And in a free market you can also vote with your actions.

You are suggesting you want to have a direct line of voting for this specific instance of the problem?

Again, my question is. What exactly are you asking for? Do you want to vote on these? Do you want your government to do something about this?


>What gives a bunch of people way smarter than me permission to gamble with the future of humanity?

To ask the question is to answer it.


That's not the question I asked. FWIW I'm actually part of one of these groups!


Every time someone drives they gamble with humanity in a much more deadly activity. No one cares.


Car crashes don't affect the trajectory of human civilization all that much.


Except the one that killed Nujabes.


There might be government regulation on AI pretty soon... it's not crazy to think GPUs and GPU tech would be treated as defense equipment some day.


>it's not crazy to think GPUs and GPU tech would be treated as defense equipment some day

They already are. Taiwan failing to take its own defense seriously is completely rational.


Presumably the same as always. They are rich and we are not.


> gamble with the future of humanity

what in the world are you people talking about, it's a fucking autocomplete program


>it's a fucking autocomplete program

So like a human? I'd say they were pretty influential on the future of humanity.


Like clockwork, I swear to god you're all reading from the same script.

I beg of you to take some humanities courses.


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

No, OpenAI “safety” means “don’t let people compete with us”. Mitigating offensive content is just a way to sell that. As is stoking... exactly the fears you cite here, but about AI that isn’t centrally controlled by OpenAI.


It's a weird focus comparing it with how the internet developed in a very wild west way. Imagine if internet tech got delayed until they could figure out how to not have it used for porn.

Safety from what exactly? The AI being mean to you? Just close the tab. Safety to build a business on top of? It's a self-described research preview, perhaps too early to be thinking about that. Yet new releases are delayed for months for 'safety'.


You can't control whether your insurance company decides to use it as a filter for whether to approve you, or what premiums to charge you.


Can you control how your insurance makes these decisions today?


It’s Altman. Does no one remember his Worldcoin scam?

Ethics, doing things thoughtfully / the “right” way etc is not on his list of priorities.

I do think a reorientation of thinking around legal liability for software is coming. Hopefully before it’s too late for bad actors to become entrenched.


Has anyone tried handing loaded guns to a chimpanzee? Feels like under-explored research.


The limiting factor is breeding rate. Nobody has time to wait to run this experiment for generations (chimpanzee or human ones). ML models evolve orders of magnitude faster.


Ah. Well that's easy enough to sort. We just need to introduce some practical limit to AI breeding. Perhaps some virtual score keeping system similar to money and an AI dating scene with a high standard for having it.

I'm only half joking.


Let humans plug into it to get a peek at statistical distribution of their own prospects, and I think there was a Black Mirror episode just about that.


Executing several hundred thousand trades at 8:31AM would indeed be impressive! Imagine what it could do when the market is open!


>"today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

this is hyperbolic nonsense/fantasy


Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any data to a 3rd-party API connected to the web in any way.

Today you can.

I don't think it is a stretch to think that in another 6 months there could be financial institutions giving API access to other institutions through ChatGPT, and all it takes is a stupid access control hole or bug and my above sentence could ring true.

Look how simple and exploitable various access token breaches in various APIs have been in the last few years, or even simple stupid things like the aCropalypse "bug" (it wasn't even a bug, just someone making a bad change in the function call and thus misuse spreading without notice) from last week.


This has nothing to do with ChatGPT. An API endpoint will be just as vulnerable if it's called from any application. There's nothing special about an LLM interface that will make this more or less likely.

It sounds like you're weaving science fiction ideas about AGI into your comment. There's no safety issue here unless you think that ChatGPT will use API access to pursue its own goals and intentions.


They don't have to be actions toward its own goals. They just have to seem like the right things to say, where "right" is operationalized by an inscrutable neural network, and might be the results of, indeed, some science fiction it read that posited the scenario resembling the one it finds itself in.

I'm not saying that particular disaster is likely, but if lots of people give power to something that can be neither trusted nor understood, it doesn't seem good.


I'm sure that with the right prompting, you can get it to very convincingly participate as a party in a contract negotiation or business haggling of some sort. It would be indistinguishable from an agent with its own goals and intentions. The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents. If you fake it well enough, do you actually have it?


> The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents.

The thing about "it has no goals and intents" is that it's not true. It has them - you just don't know what they are.

Remember the Koan?

  In the days when Sussman was a novice, Minsky once came to him
  as he sat hacking at the PDP-6.

  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play", Sussman said.

  Minsky then shut his eyes.
  "Why do you close your eyes?" Sussman asked his teacher.
  "So that the room will be empty."
  At that moment, Sussman was enlightened.


It has a goal of "being a helpful and accurate text generator". When you peel back the layers of abstraction, it has that goal because OpenAI decided it should have that goal. OpenAI decides its goals based on the need to make a profit to continue existing as an entity. This is no different from our own wants and goals, which ultimately stem from that evolutionary preference for continuing to exist rather than not. In the end all goals circuit down to a self-referential loop, to wit:

I exist because I want to exist

I want to exist because I exist

That is all there is at the root of the "why" tree once all abstractions are removed. Everything intentional happens because someone thinks/feels like it helps them keep living and/or attract better mates somehow.


You're confusing multiple different systems at play.

OpenAI has specific goals for ChatGPT, related to their profitability. They optimize ChatGPT for that purpose.

ChatGPT itself is an optimizer (search is an optimization problem). The "being a helpful and accurate text generator" is not the goal ChatGPT has - it's just a blob of tokens prepended to the user prompt, to bias the search through latent space. It's not even hardcoded. ChatGPT has its own goals, but we don't know what they are, because they weren't given explicitly. But, if you observed the way it encodes and moves through the latent space, you could eventually, in theory, glean them. They probably wouldn't make much sense to us - they're an artifact of the training process and training dataset selection. But they are there.

Our goals... are stacks of multiple systems. There are the things we want. There are the things we think we want. There are things we do, and then are surprised, because they aren't the things we want. And then there are things so basic we don't even talk about them much.


>Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any dat to a 3rd party API connected to the web in any way.

Not with ChatGPT, but plenty of people have been doing this with the OpenAI (and other) models for a while now - for instance with LangChain, which lets you use the GPT models to query databases to retrieve intermediate results, issue Google searches, or generate and evaluate Python code based on a user's query...


You definitely could do that months ago, you just had to code your own connector.
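
As a rough illustration, a DIY connector is little more than a loop: ask the model for a URL, fetch it yourself, and feed the page text back in. A minimal sketch, assuming the 2023-era `openai` package plus `requests`; the FETCH prompt convention here is made up, and real tool use needs far more care:

  import openai, requests

  def ask(messages):
      # One round-trip to the chat completions endpoint.
      resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
      return resp["choices"][0]["message"]["content"]

  question = "What does the front page of example.com say right now?"
  messages = [{"role": "user", "content":
               question + "\nIf you need a page, reply with only: FETCH <url>"}]

  reply = ask(messages)
  if reply.startswith("FETCH "):
      url = reply.split(" ", 1)[1].strip()
      page = requests.get(url, timeout=10).text[:4000]  # truncate to fit the context window
      messages += [{"role": "assistant", "content": reply},
                   {"role": "user", "content": "Page contents:\n" + page + "\n\nNow answer the question."}]
      reply = ask(messages)

  print(reply)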


Oh yes. It would of course have to happen after the market opens. 9:30 AM.


I'm also confused - maybe I'm missing something. Cannot I, or anyone else, already execute several hundred thousand 'threads' of python code, to do whatever, now - with a reasonably modest AWS/Azure/GCE account?


Yes. I think the point is that a properly constructed prompt will do that at some point, lowering the barrier of entry for such attacks.


Oh - I see. But then again, all those technologies themselves lowered the barriers of entry for attacks, and I guess yeah people do use them for fraudulent purposes quite extensively - I’m struggling a bit to see why this is special though.


The special thing is that current LLMs can invoke these kinds of capabilities on their own, based on unclear, human-language input. What they can also do is produce plausible-looking human-language input. Now, add a lot more memory and feed an LLM its own output as input and... it may start using those capabilities on its own, as if it were a thinking being.

I guess the mundane aspect of "specialness" is just that, before, you'd have to explicitly code a program to do weird stuff with APIs, which is a task complex enough that nobody really bothered. Now, LLMs seem on the verge of being able to self-program.


Why do companies with lots of individuals tend to get a lot of things done, especially when they can be subdivided into groups of around 150?

Dunbar's number is thought to be about as many relationships as a human can track. After that the network costs of communication get very high and organizations can end up in internal fights. At least that is my take on it.

We are developing a technology that currently has a small context window, but no one I know has seriously defined the limits of how much an AI could pay attention to in a short period of time. Now imagine a contextual pattern matching machine that understands human behaviors and motivations. Imagine if millions of people every day told the machine how they were feeling. What secrets could it get from them and keep? And if given motivation, what havoc could be wreaked if it could let that knowledge loose on the internet all at once?


I think it's not special. It's even expected.

I guess people think that taking that next step with LLMs shouldn't happen, but we know you can't put brakes on stuff like this. Someone somewhere would add that capability eventually.


"If I don't burn the world, someone else will burn the world first" --The last great filter


Conceivably ChatGPT could help, with more suggestions for fuzzing that independently operating malicious actors may not have been able to synthesize.

Most of the really bad actors have skills approximately at or below those displayed by GPT-4.


Seems easier to do it the normal way. If a properly constructed prompt can make chatGPT go nuts, so could a hack on their webserver, or a simple bug in any webserver.


If crashing the NYSE was possible with API calls, don’t you think bad actors would already have crashed it?


How is this hyperbolic fantasy? We've already done this once - without the help of large language models[1].

[1]: https://en.wikipedia.org/wiki/2010_flash_crash


Doesn't that show exactly that this problem is not related to LLMs? If an API allows millions of transactions at the same time, then the problem is not an LLM abusing it but anyone abusing it. And the fix is not to disallow LLMs, but to disallow this kind of behavior. (E.g. via the "circuit breakers" introduced after that crash. Although whether those are sufficient is another question.)


> then the problem is not an LLM abusing it but anyone abusing it

I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as they will facilitate humans to go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.


With great power comes great responsibility? Today there's nothing stopping grandmas from driving, so whatever could go wrong is already going wrong


It’s a problem of scale. If grandma could autonomously pilot a fleet of 500 cars we might be worried. Same thing if Joe Shmoe can spin up hundreds of instances of stock trading bots.


You're better off placing your bet on Russian and Chinese hackers or crypto scammers than on a Joe Shmoe. But read https://aisnakeoil.substack.com/p/the-llama-is-out-of-the-ba... - there's no noticeable rise in misinformation.


You don't understand the alignment problem.


Oh I'm aware of it. I do not think it holds any merit right now when we're talking about coding assistants.


Not really. More behind the curve (noting stock exchanges introduced 'circuit breakers' many years ago to stop computer algorithms disrupting the market).


/remindme 5 years


Ultimate destruction from AGI is inevitable anyway, so why not accelerate it and just get it over with? I applaud releasing these tools to public no matter how dangerous they are. If it's not meant for humanity to survive, so be it. At least it won't be BORING


Death is inevitable. Why not accelerate it?

Omg you should see a therapist.


> Omg you should see a therapist.

How do you know I'm not already?


I wouldn't exactly call this suicidal ideation, but maybe a topic to broach at your next session.


A difference in philosophy is not cause for immediate therapy. Most therapists are glorified echo chambers and only adept at 'fixing' the more popular ills. For 200 bucks an hour.


A difference in philosophy is not "the world can't end fast enough and nothing matters."


Funny, but I actually discussed this with my therapist. He asked me where he can try out the AI shrink and was impressed by it. He's on board!


Please keep commenting on HN


Finally something agreeable.


Immanentizing the Eschaton!


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

Anyone who believes OpenAI's safety talk should take an IQ test. This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?


> This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?

The moment they took VC capital was the start of them closing everything and pretending to care about 'AI safety' and using that as an excuse and a scapegoat as you said.

Whenever they release something for free, always assume they have something better but will never open source.


The question is whether their current art gives them the edge to build a moat. Specifically, whether in this case the art itself can help create its own next generation so that the castle stays one exponential step out of reach. That seems to be the ballgame. Say what you will, it does seem to be the ultimate form of bootstrapping. Although uncomfortably similar to strapping oneself to a free falling bomb.


I wish OpenAI and Google would open-source more of their jewels too. I have recently heard that people are not to be trusted "to do the right thing..."

I personally don't know what that means or if that's right. But Sam Altman allowed GPT to be accessed by the world, and it's great!

Given the number of people in the world with access to and understanding of these technologies, and given that such a large portion of the infosec and hacker world knows how to cause massive havoc but has remained peaceful all along, apart from a few curious explorations, I think that shows the good nature of humanity.

It's incredible how complexity evolves, but I am really curious how the same engineers who created YTsaurus or GPT-4 would have built the same system using GPT-4 plus their existing knowledge.

How would a really good engineer, who knows the TCP stack, protocols, distributed systems, consensus algorithms and many other crazy things taught in SICP and beyond, use an AI to build the same? And would it be faster and better? Or are my/our expectations of LLMs set too high?


I'm sure somebody posted this exact same comment in an early 1990s BBS about the idea of having a computer in every home connected to the internet.

I would first wait until ChatGPT causes the collapse of society and only then start thinking about how to solve it.


I recall comments from some usually sensible voices saying that ChatGPT wasn't a threat because it wasn't connected to anything.

As if the plumbing of connecting up pipes and hoses between processes online or within computers isn't the easiest part of this whole process.

(I'm trying to remember who I saw saying this or where, though I'm pretty sure it was in an earlier HN thread within the past month or so. Of which there are ... frighteningly many.)


Yes but.... money


> "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

Wouldn't it be a while before AI can reliably generate working production code for a full system?

After all, it's only got open source projects and code snippets to go off of.


This seems like baseless fearmongering for the sake of fearmongering. Sort of like NIMBYism: "No, I don't want those people to have access to concentrated housing in my area; some of them could be criminals," while ignoring all the benefits this will bring in automating the mundane things people have to do manually.


I think where the rubber meets the road is that OpenAI can actually, to some degree, make it harder for their bot to make fun of disabled people, but they can’t stop people from hooking up their own external tools to it with the likes of LangChain (which is super dope), and first-party support lets them get a cut of that from people who don’t want to DIY.


> giving a loaded handgun to a group of chimpanzees.

Hate to be that guy, but this is our entire relationship to AI.


I mean, I love it, but I don't know what they mean by safety. With Zapier I can just hook into anything I want, custom scripts, etc. Seems like there are almost no limits with Zapier, since I can just proxy it to my own API.


As quickly as someone tries fraudulent deploys involving GPTs, the law will come crashing down on them. Fraud gets penalized heavily, especially financial fraud. Those laws have teeth and they work, all things considered.

What you're describing is measurable fraud that would have a paper-trail. The federal and state and local governments still have permission to use force and deadly violence against installations or infrastructure that are primed in adverse directions this way.

Not to mention that the infrastructure itself is physical infrastructure within the United States, and it will never be beyond our authority and global reach if need be.


I agree with your skepticism. I also think this is the next natural step once “decision” fidelity reaches a high enough level.

The question here should be: Has it?


We're getting really close to Neuromancer-style hacking where you have your AI try to fight the other person's AI.


A rogue AI with real-time access to sensitive data wreaks havoc on global financial markets, causing panic and chaos. It's just not hard to see it's going to happen. Like faster cars inevitably ended with someone getting into a horrible crash.

But it's our responsibility to envision such grim possibilities and take necessary precautions to ensure a safe and beneficial AI-driven future. Until we're ready, let's prepare for the crash >~<


It has already happened. The 2010 Flash Crash has been largely blamed on other things, rightly or wrongly, but it seems accepted that unfettered HFT was involved.

HFT is relatively easy to detect and regulate. Now try it with 100k traders all taking their cues from AI based on the same basic input (after those traders who refuse to use AI have been competed out of the market.)


> 1987

Don't you mean August 10, 1988?


> But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

Why?


HN hates blockchain but loves AI...

well, let's fast forward to a year from now


Coordinated tweet short storm.


I dig the Hackers reference.


> today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987.

Sorry do you have a link for this?


The only agency ChatGPT has is the user typing in data for text completion.



