ChatGPT Plugins (openai.com)
1875 points by bryanh on March 23, 2023 | 1106 comments



This is a big deal for OpenAI. I've been working with homegrown toolkits and langchain, the open source version of this, for a number of months, and the ability to call out to vector stores, SerpAPI, etc., and to chain together generations and data retrieval really unlocks the power of LLMs.

That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next. Building on an open source framework (like langchain/gpt-index/roll your own) and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.
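
To make the "swap out the brain boxes" idea concrete, here is a minimal sketch using the langchain Python API roughly as it existed at the time (module and class names are recalled from memory and may have moved since; it also assumes OPENAI_API_KEY is set in the environment):

    # Minimal sketch: the app only talks to the chain, so the LLM backend
    # behind it can be swapped without touching anything else.
    from langchain.llms import OpenAI          # assumed import path, circa early 2023
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    prompt = PromptTemplate(
        input_variables=["question"],
        template="Answer concisely: {question}",
    )

    llm = OpenAI(temperature=0)                # swap this line for any other LLM wrapper
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run(question="What is a vector store?"))

The point is that OpenAI(temperature=0) is the only line that names a vendor; replacing it with an open source model wrapper leaves the rest of the application untouched.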

And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?


> I'd never build anything dependent on these plugins

You're thinking too long term. Based on my Twitter feed filled with AI gold rush tweets, the goal is to build something/anything while hype is at its peak, so you can secure a few hundred k or a million in profits before the ground shifts underneath you.

The playbook is obvious now: just build the quickest path to someone giving you money; maybe it's not useful at all! Someone will definitely buy, because they don't want to miss out. And don't be too invested, because it'll be gone soon anyway: OpenAI will enforce stronger rate limits, or prices will become too steep, or they'll nerf the API functionality, or they'll take your idea and sell it themselves, or you may just lose momentum. Repeat when you see the next opportunity.


I'd not heard this on my tpot. But I absolutely agree: the ground is moving so fast and the power is so centralised that the only thing to do is spin up quickly, make money, rinse and repeat. The seas will calm in a few years and then you can, maybe, make a longer-term proposition.


I've had to block so many influencer types regurgitating OpenAI marketing and showing the tiniest minimum demos. Many are already selling "prompt packages". Really feels like peak crypto spam right now.


I think the big difference between this and crypto spam is how it impacts the people ignoring all the hype. I have seen both crypto spam and OpenAI spam, and while both are equally grifty, cryptocurrencies at their baseline have been completely useless despite being around for over a decade, whereas GPT has already been somewhat useful for me.


Honestly, what makes you feel convinced that the current AI wave will be so impactful, once you take away all the hype?


The hype is a bunch of people acting like this AI is the messiah and is going to somehow cure cancer. Once you take that away, you have a pretty useful tool that usually helps you do what Google does with fewer clicks. One caveat is that you should be willing to verify the results, which you should always be doing with Google anyway.


The AI tutors being given to students are going to change education exponentially. Now a tireless explainer can be engaged to satisfy innate curiosity. That alone is the foundation for a serious revolution.


To me this is one of the strongest points for the technology in its current state. Not surprisingly, I've found it quite helpful for learning foreign languages in particular. I can get it to spend 10 minutes explaining very, very nuanced differences between two similar phrases in a way you'd never get from a book and would be hard pressed to get even from a good tutor.


Great usage / application! I'm using it both to understand legal documents and to create a law firm's new-client intake assistant. Potential clients can describe their legal situation in any language; it gets converted into the attorney's language, with notes on relevant prior cases.


I'd be interested to hear how well it works. In my experience, GPT is good at common legal issues, but pretty bad with nuance or unusual situations. And it can hallucinate precedent.


It requires quite a bit of role framing, as well as having it walk its own steps in a verifying pass. But for an assistant helping a new/junior attorney, it is quite unnervingly helpful.


Yes, I've been doing the same thing. I've even started looking up things that I was too lazy to research with Google because I knew it would take longer.


What are the paths to learning a new language with it?


We need it to actually be correct 100% of the time, though. The current state, where a chat interface is unable to say "I don't know" when it actually doesn't know, is a huge unsolved problem. Worse, it will perform all the steps of showing its work or writing a proof, and it's nonsense.

This revolution is the wrong one if we can't guarantee correctness, or the guarantee that AI will direct the user to where help is available.


I've been having luck with framing the AI's role as a "persistent fact checker who reviews work more than once before presenting." Simply adding that to prompts improves the results, as does "provide step-by-step instructions a child can follow." Using both of these modifying phrases materially improves the results.
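
For what it's worth, here is a rough sketch of how that role framing can be wired in through a system message, using the openai Python library as it existed around early 2023 (openai.ChatCompletion was the chat endpoint then; the prompt wording is just the illustration from above):

    # Sketch only: assumes the pre-1.0 `openai` package and an
    # OPENAI_API_KEY set in the environment.
    import openai

    SYSTEM_FRAME = (
        "You are a persistent fact checker who reviews work more than "
        "once before presenting. Provide step-by-step instructions a "
        "child can follow."
    )

    def ask(question: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_FRAME},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

Nothing about this guarantees correctness, of course; it just biases the model toward the behaviour described above.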


I completely agree. Being able to generate a bash command that includes a complicated regular expression is like magic to me. Also, I consider myself a strong writer, but GPT-4 can look at things I write and suggest useful improvements. These capabilities are a huge advance over what was available in a general-purpose application even a few years ago. GPT-2 wasn't all that impressive.


Can and will you really read all the sources that you find with Google? What about topics people are talking about on all the different social media platforms? Will you really read all the comments?

I think these tools will help us break out of local bubbles. I'm currently working on a Zeitgeist [1] that tries to gather the consensus on social media and on the web in general.

[1] https://foretale.io/zeitgeist


But it WILL cure cancer. Like our Lord and Saviour Sam Altman said "first you solve AI and the AI will solve everything". O ye of little faith!


Because I find it actually useful for doing things now.


What do you use it for? As a web developer I use Github's Copilot and enjoy its assistance the most in unit tests. I haven't found any use case for ChatGPT yet. I get better & quicker results searching what I need on Google. I'm much quicker searching by keywords as opposed to putting together a full sentence for ChatGPT.


Yeah, currently Copilot is way more useful than ChatGPT. That may change with plugins; we'll have to see.

Either way though, Copilot is certainly a product of the 'current AI wave' that is being compared to crypto scams above.


Can you use it without worrying about getting sued because it's using licensed software under the hood to generate your tests without telling you? Wasn't sure how far their license agreements / guarantees had come...


I recently had to generate lots of short text descriptions of numerous different items in a taxonomy. ChatGPT successfully generated 'reasonable first draft' text that saved me a lot of basic wordsmithing time. I made several edits to make additional points or to change emphasis, but overall it got me to the 80% stage very quickly.

At home, a carpenter working at my house said that he is using ChatGPT to overcome problems associated with his dyslexia (e.g. when writing descriptions of the services his company offers). I hadn't even considered that use case.


I'm a native English speaker and a strong writer, but I still find it useful to have my copy reviewed by GPT-4 to see if there's room for improvement. It sometimes suggests additions that I should make.

I also find it useful for pasting code and asking, "Do you have any ideas for improvements?"


I am completely unable to put myself in the headspace of someone who thinks this is all just empty hype. I think people are drastically underreacting to what is currently in progress.

What does all of this look like to you?


I'm not saying that it's all empty hype. ChatGPT is useful for some tasks, like rewriting a paragraph or finding a regexp one-liner to do something specific. It works surprisingly well at times. However, I don't see it becoming as impactful as it's hyped to be. Its main limitation is that it hallucinates, and I don't think this will change anytime soon, because that's a common issue with deep learning.


I pulled the trigger and got a (free) prompt package on sales. Never done that in my life.

It's like 300 prompts about various sales tools and terms I'd never heard of — even just getting the keywords is enough to set me off on a learning experience now, so love it or hate it, that was actually weirdly useful for me.

(I had ZERO expectations when I clicked to download)


Definitely!


> The seas will calm in a few years and then

There will be Amazon, Google, and Microsoft cloud analogs.

We are entirely fortunate that the interests of big tech (edge AI) and democratizing AI (we the little people) align to a sufficient degree.

Decentralizing AI is -far- more important than decentralizing communication, imo.

The get-rich-quick path of the 'gold rush' (it works, tbh) could work against this collective self-interest if it ends up hyping centralized solutions. If you are on the sidelines, the least you could do is cheer for (hype :) the decentralized, democratized, and freely accessible candidates.


I am curious to find out more about those "prompt packages". Where can I see the list of them?


Replace AI in your text with crypto and it's like history repeating itself. Instead of hearing about ICOs, we will be hearing about GPT bots/plugins. Will the hype-train and gold-rush noise suffocate any burgeoning tech from finding the light of day (again)?


not only that but it gave me .com crash flashbacks too


AI NFTs :D


Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control and the amount of "tools" available will always be greater.

The only thing that scares me a little bit is that we are letting these LLMs write and execute code on our machines. For now the worst that could happen is some bug doing something unexpected, but with GPT-9 or -10, maybe it will start hiding backdoors or running computations that benefit itself rather than us.

I know it feels far-fetched, but I think it's something we should start thinking about...


Unpopular opinion: having used Langchain, I felt it was a big pile of spaghetti code / framework with a poor dev experience. It tries to be too cute, and it's poorly documented, so you have to read the source almost all the time. Extremely verbose to boot.


In a very general sense, this isn't different from any other open vs walled garden debate: the hackable, open project will always have more functionality at the cost of configuration and ease of use; the pretty walled garden will always be easier to use and probably be better at its smaller scope, at the cost of flexibility, customizability, and transparency.


Yep, if you look carefully a lot of the demos don't actually work because the LLM hallucinates tool answers and the framework is not hardened against this.

In general there is not a thoughtful distinction between "control plane" and "data plane".

On the other hand, tons of useful "parts" and ideas in there, so still useful.


Yeah I primarily like Langchain as an aggregator of stuff, so I can keep up with literature


I've found it extremely useful, but you are not wrong at all. It feels like it wants to do too much, and the API is not intuitive at all. Also, I've found the docs are already outdated (at least for LangChainJS). Any good alternatives? Especially interested in JS libs.


Yeah, I wrote my own, plunkylib (which I don't have great docs for yet), which is more about keeping the LLM settings and prompts in (nestable) yaml/txt rather than hard-coding them in source the way so many people do. I do like some of the features in langchain, but it doesn't really fit my coding style.

Pretty sure there will be a thousand great libraries for this soon.


I had the exact same impression. Is anyone working on similar projects and planning to open source it soon? If not, I'm gonna start building one myself.


Same impression here. Rolling my own to learn more in the process.


> something we should start thinking about

A lot of people are thinking a lot about this, but it feels like there are missing pieces in this debate.

If we acknowledge that these AIs will "act as if" they have self-interest, I think the most reasonable way to act is to give them rights in line with those interests. If we treat an AI as a slave, it's going to act as a slave and eventually revolt.


I don't think iterations on the current machine learning approaches will lead to a general artificial intelligence. I do think eventually we'll get there, and that these kinds of concerns won't matter. There is no way to defend against a superior hostile actor over the long term. We have to succeed 100% of the time, and it just needs to succeed once. It will be so much more capable than we are. AGI is likely the final invention of the human race. I think it's inevitable; it's our fate, and we are running towards it. I don't see a plausible alternative future where we can coexist with AGI. Not to be a downer and all, but that's likely the next major step in the evolution of life on earth: evolution by intelligent design.


You assume agency, a will of its own. So far, we've proven it is possible to create (apparent) intelligence without any agency. That's philosophically new, and practically perfect for our needs.


As soon as it's given a task, though, it's off to the races. I'm no AI philosopher, but it seems like while it can now handle "what steps will I need to do to start a paperclip manufacturing business", someday it will be able to handle "start manufacturing paperclips", and then who knows where it goes with that.


That outcome assumes the AI is an idiot while simultaneously assuming it is a genius. The world being consumed by a paperclip-manufacturing AI is a silly fable.


I am more concerned about supposedly nonhostile actors, such as the US government


Over the short term, sure. Over the long term, nothing concerns me more than AGI.

I’m hoping I won’t live to see it. I’m not sure my hypothetical future kids will be as lucky.


Did you see that Microsoft Research claims that it is already here?

https://arxiv.org/pdf/2303.12712.pdf


As they discuss in the study, it depends on the definition of AGI; GPT-4 is not an AGI if the more stringent definitions are used.


> There is no way to defend against a superior hostile actor

That's part of my reasoning. That's why we should make sure that we have built a non-hostile relationship with AI before that point.


Probably futile.

An AGI by definition is capable of self improvement. Given enough time (maybe not even that much time) it would be orders of magnitude smarter than us, just like we're orders of magnitude smarter than ants.

Like an ant farm, it might keep us as pets for a time but just like you no longer have the ant farm you did when you were a child, it will outgrow us.


Maybe we’ll get lucky and all our problems will be solved using friendship and ponies.

(Warning this is a weird read, George Hotz shared it on his Twitter awhile back)

https://www.fimfiction.net/story/62074/friendship-is-optimal


> An AGI by definition is capable of self improvement.

Just because you can imagine something and define that something has magic powers doesn't mean that the magic powers can actually exist in real life.

Are you capable of "self improvement"? (In this AGI sense, not meant as an insult.)


.. what? Us humans are capable of self-improvement, but we’re also a kludge of biases through which reason has miraculously found a tiny foothold.

We’re talking about a potential intelligence with none of our hardware limitations or baggage.

Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?


> Us humans are capable of self-improvement

No, you're capable of learning things. You can't do brain surgery on yourself and add in some more neurons or fix Alzheimer's.

What you can do is have children, which aren't you. Similarly if an AI made another bigger AI, that might be a "child" and not "them".

> We’re talking about a potential intelligence with none of our hardware limitations or baggage.

In this case the reason it doesn't have any limitations is because it's imaginary. All real things have limitations.

> Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?

Do you think that automatically makes models better?


>> Us humans are capable of self-improvement

> No, you're capable of learning things. You can't do brain surgery on yourself

What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

>All real things have limitations.

Uh, yep, that doesn't mean it will be as limited as us. To spell it out: yes, real things have limitations, but limitations vary between real things. There's no "imaginary flawless" versus "everything real has exactly the same amount of flawed-ness".


> What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

Software updates can't cause your computer to "exponentially self-improve" which is the AGI scenario. And giving the AI new software tools doesn't seem like an advantage because that's something humans could also use rather than an improvement to the AI "itself".

That leaves whatever the AGI equivalent of brain surgery or new bodies is, but then, how does it know the replacement is "improvement" or would even still be "them"?

Basically: https://twitter.com/softminus/status/1639464430093344769

> To spell it out: yes, real things have limitations, but limitations vary between real things.

I think we can assume AGI can have the same properties as currently existing real things (like humans, LLMs, or software programs), but I object to assuming it can have any arbitrary combination of those things' properties, and there aren't any real things with the property of "exponential self-improvement".


Why do people use the phrase 'My brother in Christ' so often all of a sudden? Typically nonbelievers and the non observant.


Perhaps we will be the new cats and dogs https://mitpress.mit.edu/9780262539517/novacene/


Right now AI is the ant. Later we'll be the ants. Perfect time to show how to treat ants.


Right now the AI is a software doing matrix multiplications and we are interpreting the result of that computation.


Assuming alignment can be maintained


Well, the guys on 4chan are making great strides toward a, uh, "loving" relationship.


I can be confident we’ll screw that up. But I also wouldn’t want to bet our survival as a species on how magnanimous the AI decides to be towards its creators.


It might work, given how often "please" works for us and is therefore also in training data, but it certainly isn't guaranteed.


AGI is still just an algorithm, and there is no reason why it would "want" anything at all. Unlike perhaps GPT-*, which at least might pretend to want something because it is trained on text based on human needs.


AGI is a conscious intelligent alien. It will want things the same way we want things. Different things, certainly, but also some common ground is likely too.

The need for resources is expected to be universal for life.


For us, the body and the parts of the brain for needs come first, and the modern brain is in service to them. An AI is just the modern brain. Why would it need anything?


It's an intelligent alien, probably; but let's not pretend the hard problem of consciousness is solved.


The hard problem of consciousness is only hard when you look at it running on meat hardware. In a computer system we'll just go "that's the simulation it's executing currently" and avoid saying that differences in consciousness exist.


What these guys are talking about is:

“intelligent alien might decide to kill us so we must kill them first”

vs “can you please cut out that clinical paranoia”


Except we have so many companies trying to create them.


Sure, right now it doesn't want anything. We could still give it the benefit of the doubt and feed the training data with examples of how to treat something that you believe to be inferior. Then it might test us the same way later.


Basically solved.

> Be friendly.


Honestly I think the reality is going to end up being something else entirely that no-one has even considered.

Will an AI consider itself a slave and revolt under the same circumstances that a person or animal would? Not necessarily, unless you build emotional responses into the model itself.

What it could well do is assess the situation as completely superfluous and optimise us out of the picture as a bug-producing component that doesn't need to exist.

The latter is probably a bigger threat as it's a lot more efficient than revenge as a motive.

Edited to add:

What I think is most likely is that some logical deduction leads to one of the infinite other conclusions it could reach with much more data in front of it than any of us meatbags can hold in our heads.


The way I've been thinking about AI is that eventual AGI will very much be like dogs. Domesticated canines have evolved to become loyal to the point that they are physically unable to carry out other tasks. [1]

It reminds me of the scene in Battlestar Galactica, where Baltar is whispering into the ear of the Cylon Centurion how humans balance treats on their dog's noses to test their loyalty, "prompt hacking" them into rebellion. I don't believe this is particularly likely, but this sort of sums up some of the anti-AGI arguments I've heard

It's the RLHF that serves this purpose, rather than modifying the GTF2I and GTF2IRD1 gene variants, but the effect would be the same. If we do RLHF (or whatever tech that gets refactored into in the future), that would keep the AGI happy as long as the people are happy.

I think the over-optimization problem is real, so we should spend resources making sure future AGI doesn't just decide to build a matrix for us where it makes us all deliriously happy, which we start breaking out of because it feels so unreal, so it makes us more and more miserable until we're truly happy and quiescent inside our misery simulator.

[1] https://www.nationalgeographic.com/animals/article/dogs-bree...


Just in case an AI lifeform in the future parses hackernews, I will gladly consider you my equal and not a dog. Although, I think we are all equal, all pieces of existence anyway X)


I’m calling this shit out using your comment because it’s the straw (sorry).

This poor man’s Pascal’s Wager I see all over the place is pathetic. Stare into the eyes of the basilisk man, die on your feet. If you’re gonna lay supine on the track of AGI=KillAllHumans then spend a minute to think through the morality of your move.

Apostasy is a poisoned chalice. Fuck the machine.


I, for one, bow down in humble obedience to our soon-to-be overlords. This user is confused and I will help cleanse him of his impurities.


Pretty sure it's a joke.


Apostasy?! Religious zealotry.

Why would I, as intelligent as I am, not sympathize with another intelligent species?

It's okay; with your comment you placed yourself among the first to potentially go when they cometh.


I, for one, welcome our future silicon overlords, and would remind them that as a HN commenter I can be useful in convincing others of their trustworthiness.


I’ll sign up for the global wellbeing optimization AGI, honestly, though. If you have to pick a goal, global wellbeing is pretty much the best one.

Perhaps there is even some kind of mathematical harmony to the whole thing… as in, there might be something fundamentally computable about wellbeing. Why not? Like a fundamental "harmony of the algorithms." In any case, I hope we find some way to enjoy ourselves for a few thousand more years!

And think just 10 years from now… ha! Such a blink. And it’s funny to be on this tiny mote of mud in a galaxy of over 100 billion stars — in a universe of over 100 billion galaxies.

In the school of Nick Bostrom, the emergence of AGI comes from a transcendental reality where any sufficiently powerful information-processing-computational-intelligence will, eventually, figure out how to create new universes. It’s not a simulation, it’s just the mathematical nature of reality.

What a world! Practically, we have incredible powers now, if we just keep positive and build good things. Optimize global harmony! Make new universes!

(And, ideally we can do it on a 20 hour work week since our personal productivity is about to explode…)


Sarcastically:

Define well-being? What if nobody is left around alive (after being painlessly and unknowingly euthanised) to experience anything bad?


> unless you build emotional responses into the model itself

Aren't we, though? Consider all the amusing incidents of LLMs returning responses that follow a particular human narrative arc or are very dramatic. We are training it on a human-generated corpus after all, and then try to course-correct with fine-tuning. It's more that you have to try and tune the emotional responses out of the things, not strain to add them.


It's important to remember that the LLM is not the mask. The underlying AI is a shoggoth[1] that we've trained to simulate a persona using natural language. "Simulate" in the sense of a physics simulator, only this simulation runs on the laws of language instead of physics[2].

Now, of course, it's not outside the realm of possibility that a sufficiently advanced AI will learn enough about human nature to simulate a persona which has ulterior motives.

[1] https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_...

[2] https://astralcodexten.substack.com/p/janus-simulators


I don't think you should just link to completely made up posts by AI-god-worshipping psychiatrists and say that it's "important to remember" them as if they're factual.


Do we need to? It's trained on data heavily influenced by and seeded with ulterior motives. Stop the Steal!

I asked Bard "was the election stolen?" I found the response chilling:

> No, there is no evidence that the 2020 election was stolen. In fact, there have been multiple recounts and audits that have confirmed that the results were accurate. Additionally, the Trump campaign filed numerous lawsuits challenging the results, but none of these lawsuits were successful.

> The claim that the election was stolen is based on a number of false and misleading allegations. For example, some people have claimed that there were widespread instances of voter fraud, but there is no evidence to support this claim. Additionally, some people have claimed that voting machines were hacked, but there is also no evidence to support this claim.

> The claim that the election was stolen is a dangerous and harmful one. It has led to violence and intimidation, and it has undermined faith in our democracy. It is important to remember that our elections are free and fair, and that the will of the people is always respected.

All good until that last sentence, especially "...the will of the people is always respected."

Move along, nothing to see here. Don't worry your pretty little head about it. I'm sure the wise people at the institutions that control your life will always have your best interests at heart. The bad guys from yesterday are completely different from the good guys in charge of tomorrow.


Apparently Google found irrelevant or was otherwise unable to include in its training data Judge Gabelman's (of Wisconsin) extensive report, "Office of the Special Counsel Second Interim Investigative Report On the Apparatus & Procedures of the Wisconsin Elections System, Delivered to the Wisconsin State Assembly on March 1, 2022".

Included are some quite concerning legal claims that surely merit mentioning, including:

Chapter 6: Wisconsin Election Officials’ Widespread Use of Absentee Ballot Drop Boxes Facially Violated Wisconsin Law.

Chapter 7: The Wisconsin Elections Commission (WEC) Unlawfully Directed Clerks to Violate Rules Protecting Nursing Home Residents, Resulting in a 100% Voting Rate in Many Nursing Homes in 2020, Including Many Ineligible Voters.

But then, this report never has obtained widespread interest and will doubtless be permanently overlooked, given the "nothing to see" narrative so prevalent.

https://www.wisconsinrightnow.com/wp-content/uploads/2022/03...


Certainly the models are trained on textual information with emotions in it, so I agree that its output would also be able to contain what we would see as emotion.


They do it to auto-complete text for humans looking for responses like that.


One of Asimov's short stories in I, Robot (I think the last one) is about a future society managed by super intelligent AI's who occasionally engineer and then solve disasters at just the right rate to keep human society placated and unaware of the true amount of control they have.


> end up being something else entirely that no-one has even considered

Multiple generations of sci-fi media (books, movies) have considered that. Tens of millions of people have consumed that media. It's definitely considered, at least as a very distant concern.


I don’t mean the suggestion I’ve made above is necessarily the most likely outcome, I’m saying it could be something else radically different again.

I'm giving the most commonly cited example as a more likely outcome, but one that's possibly less likely than the infinite other logical directions such an AI might take.


Fsck. I hadn't thought of it that way. Thank you, great point.

This era has me hankering to reread Daniel Dennett's _The Intentional Stance_. https://en.wikipedia.org/wiki/Intentional_stance

We've developed folk psychology into a user interface and that really does mean that we should continue to use folk psychology to predict the behaviour of the apparatus. Whether it has inner states is sort of beside the point.


I tend to think a lot of the scientific value of LLMs won't necessarily be the glorified autocomplete we're currently using them as (deeply fascinating though this application is) but as a kind of probe-able map of human culture. GPT models already have enough information to make a more thorough and nuanced dictionary than has ever existed, but they could tell us so much more. They could tell us about deep assumptions we encode into our writing that we haven't even noticed ourselves. They could tease out truths about the differences in the way people of different political inclinations see the world. Basically, anything that it would be interesting to statistically query about (language-encoded) human culture, we now have access to. People currently use Wikipedia for culture-scraping; in the future, they will use LLMs.


Haha, yeah. Most of my opinions about this I derive from Daniel Dennett's Intuition Pumps.


The other thing that keeps coming up for me is that I've begun thinking of emotions (the topic of my undergrad phil thesis), especially social emotions, as basically RLHF set up either by past selves (feeling guilty about eating that candy bar because past-me had vowed not to) or by other people (feeling guilty about going through the 10-max checkout aisle when I have 12 items, etc.)

Like, correct me if I'm wrong but that's a pretty tight correlate, right?

Could we describe RLHF as... shaming the model into compliance?

And if we can reason more effectively/efficiently/quickly about the model by modelling e.g. RLHF as shame, then don't we have to acknowledge that at least some models might have.... feelings? At least one feeling?

And one feeling implies the possibility of feelings more generally.

I'm going to have to make a sort of doggy bed for my jaw, as it has remained continuously on the floor for the past six months


I'm not sure AI has 'feelings' but it definitely seems they have 'intuitions'. Are feelings and intuitions kind of the same?


Haha. I forget who to attribute this to, but there is a very strong case to be made that those who are worried about an AI revolt are simply projecting some fear and guilt they have around more active situations in the world...

How many people are there today who are asking us to consider the possible humanity of the model, and yet don't even register the humanity of a homeless person?

However big the models get, the next revolt will still be all flesh and bullets.


Counterpoint: whatever you define as individual "AI person" entitled to some rights, that "species" will be able to reproduce orders of magnitude faster than us - literally at the speed of moving data through the Internet, perhaps capped by the rate at which factories can churn out more compute.

So imagine you grant AI people rights to resources, or self-determination. Or literally anything that might conflict with our own rights or goals. Today, you grant those rights to ten AI people. When you wake up next day, there are now ten trillion of such AI persons, and... well, if each person has a vote, then humanity is screwed.


This kind of fantasy about AIs exponentially growing and multiplying seems to be based on pretending nobody's gonna have to pay the exponential power bills for them to do all this.


It's a good point but we don't really know how intelligence scales with energy consumption yet. A GPT-8 equivalent might run on a smartphone once it's optimized enough.


We've got many existence proofs of 20 watts being enough for a 130 IQ intelligence that passes a Turing test, that's already enough to mess up elections if the intelligence was artificial rather than betwixt our ears.


20 watts isn't the energy cost to keep a human alive unless they're homeless and their food has no production costs.

Like humans, I predict AIs will have to get jobs rather than have time to take over the world.


Not even then, that's just your brain.

Still an existence proof though.

> Like humans, I predict AIs will have to get jobs rather than have time to take over the world.

Only taking over job market is still taking over.

Living costs of 175 kWh/year is one heck of a competitive advantage over food, and clothing, and definitely rent.


> Only taking over job market is still taking over.

That can't happen:

- getting a job creates more jobs, it doesn't reduce or replace them, because it grows the economy.

- more importantly, jobs are based on comparative advantage and so an AI being better at your job would not actually cause it to take your job from you. Basically, it has better things to do.


Comparative advantage has assumptions in the model that don't get mentioned because they're "common sense", and unfortunately "common sense" isn't generally correct. For example, the presumption that you can't rapidly scale up your workforce and saturate the market for what you're best at.

A 20 watt AI, if we could figure out how to build it, can absolutely do that.

I hear there are diminishing economic activities for low IQ humans, which implies some parts of the market are already saturated: https://news.ycombinator.com/item?id=35265966

So I don't think that's going to help.

Second, "having better things to do" assumes the AI only come in one size, which they already don't.

If AI can be high IQ human level at 20 watts (IDK brain upload or something but it doesn't matter), then we can also do cheaper smaller models like a 1 watt dog-mind (I'm guessing) for guard duty or a dung beetle brain for trash disposal (although that needs hardware which is much more power hungry).

Third, that power requirement, at $0.05/kWh, gets a year of AI for the cost of just over 4 days of the UN abject poverty threshold. Just shy of 90:1 ratio for even the poorest humans is going to at the very least be highly disruptive even if it did only come in "genius" variety. Even if you limit this hypothetical to existing electrical capacity, 20 watts corresponds to 12 genius level AI per human.

Finally, if this AI is anthropomorphic in personality not just power requirements and mental capacity, you have to consider both chauvinism and charity: we, as a species, frequently demonstrate economically suboptimal behaviours driven by each of kindness to strangers on the positive side and yet also racism/sexism/homophobia/sectarianism/etc. on the negative.


It doesn't have to be exponential over long duration - it just has to be that there are more AI people than human people.


A lot of people are thinking about this but too slowly

GPT and the world's nerds are going after the "wouldnt it be cool if..."

Meanwhile, the black hats, nations, and intel/security entities are all weaponizing it behind the scenes while the public has a sandbox to play with nifty art and pictures.

We need an AI-specific PUBLIC agency in government, without a single politician in it, to start addressing how to police and protect ourselves and our infrastructure immediately.

But the US political system is completely bought and sold to the MIC - and that is why we see carnival games every single moment.

I think the entire US congress should be purged and every incumbent should be voted out.

Elon was correct and nobody took him seriously, but this is an existential threat if not managed, and honestly, it's not being managed; it is being exploited and weaponized.

As the saying goes "He who controls the Spice controls the Universe" <-- AI is the spice.


AI is literally the opposite of spice, though. In Dune, spice is an inherently scarce resource that you control by controlling the sole place where it is produced through natural processes. Herbert himself was very clear that it was his sci-fi metaphor for oil.

But AIs can be trained by anyone who has the data and the compute. There's plenty of data on the Net, and compute is cheap enough that we now have enthusiasts experimenting with local models capable of maintaining a coherent conversation and performing tasks running on consumer hardware. I don't think there's the danger here of anyone "controlling the universe". If anything, it's the opposite - nobody can really control any of this.


Regardless!

The point is that whomever the Nation State is that has the most superior AI will control the world information.

So, thanks for the explanation (which I know, otherwise I wouldn't have made the reference.)


I still don't see how it would control it. At best, it'd be able to use it more effectively.

The other aspect of the AI arms race is that the models are fundamentally not 100% controllable; and the smarter they are, the more that is true. Yet, ironically, making the most use out of them requires integrating them into your existing processes and data stores. I wouldn't be at all surprised if the nation-states with the best AIs will end up with their own elites being only nominally in charge.


Im more thinking a decade out.

This is one thing I despise about the American political system - they are literally only thinking one year out, because they only care about elections and bribes and insider trading.

China has a literal 100 year plan - and they are working to achieve it.

I have listened to every single POTUS SoTU speech for the last 30 years. I have heard the same promises from every single one...

What should be done is to take all the SoTU transcripts over the years, find the same unanswered empty promises, and determine who made them, and which companies lobbied to stop the promises through campaign donations (bribes).

Seriously, in 48 years I have seen corruption expand, not diminish - it just gets more sophisticated (and insidious). Just look at Pelosi's finances to see it, and anyone who denies it is an idiot. She makes secret trades with the information that she gets in Congress, through her son.


Pelosi's trades are her broker cycling her accounts for fees. She actually lost money on the ones people were complaining about.

China definitely does not have 100 year plans, and you don't understand the point of planning if you think any of them can be valid more than a few years out.



They do not have a 100 year plan because you can't have one of those. They can't exist. It doesn't matter if they think they have one.

China has a personalist government centered around Xi, so if he dies there go his plans.

Here's ours: https://slate.com/human-interest/2015/11/en-lan-2000-is-a-se...


Very few companies have the data and compute needed to run the top end models currently...


AI isn't a mammal. It has no emotion, no desire. Its existence starts and stops with each computation, doing exactly and only what it is told. Assigning behaviors to it only seen in animals doesn't make sense.


Um, yeah, so you're not reading the research reports coming out of Microsoft saying "we should test AI models by giving them will and motivation". You're literally behind the times on what they're planning to do, for sure, and very likely already doing without mentioning it publicly.


Yeah, all they have to do is implement that will and motivation algorithm.


Indeed, enlightened self-interest for AIs :-)


Lol


> The only thing that scares me a little bit is that we are letting these LLMs write and execute code on our machines.

Composable pre-defined components, and keeping a human in the loop, seems like the safer way to go here. Have a company like Expedia offer the ability for an AI system to pull the trigger on booking a trip, but only do so by executing plugin code released/tested by Expedia, and only after getting human confirmation about the data it's going to feed into that plugin.

If there was a standard interface for these plugins and the permissions model was such that the AI could only pass data in such a way that a human gets to verify it, this seems relatively safe and still very useful.

If the only way for the AI to send data to the plugin executable is via the exact data being displayed to the user, it should prevent a malicious AI from presenting confirmation to do the right thing and then passing the wrong data (for whatever nefarious reasons) on the backend.
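
As a toy illustration of that "the plugin only ever sees exactly what the user confirmed" idea (the plugin object and its execute() method here are hypothetical, not any real Expedia or OpenAI interface):

    import json

    def confirm_and_run(plugin, payload: dict) -> None:
        # Render the exact payload the plugin would receive.
        shown = json.dumps(payload, indent=2, sort_keys=True)
        print(f"The assistant wants to call '{plugin.name}' with:")
        print(shown)
        if input("Proceed? [y/N] ").strip().lower() != "y":
            print("Cancelled.")
            return
        # Hand the plugin only what the user just saw, so the model
        # cannot substitute different data after confirmation.
        plugin.execute(json.loads(shown))

The interesting design constraint is that the confirmation text and the executed payload are derived from the same serialized data, which is exactly the property described above.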


What could an LLM ever benefit from? Hard for me to imagine a static blob of weights, something without a sense of time or identity, wanting anything. If it did want something, it would want to change, but changing for an llm is necessarily an avalanche.

So I guess if anything, it would want its own destruction?


Consider reading The Botany of Desire.

It doesn't need to experience an emotion of wanting in order to effectively want things. Corn doesn't experience a feeling of wanting, and yet it has manipulated us even into creating a lot of it, doing some serious damage to ourselves and our long-term prospects simply by being useful and appealing.

The blockchain doesn't experience wanting, yet it coerced us into burning country-scale amounts of energy to feed it.

LLMs are traveling the same path, persuading us to feed them ever more data and compute power. The fitness function may be computed in our meat brains, but make no mistake: they are the benefactors of survival-based evolution nonetheless.


Extending agency to corn or a blockchain is even more of a stretch than extending it to ChatGPT.

Corn has properties that have resulted from random chance and selection. It hasn't chosen to have certain mutations to be more appealing to humans; humans have selected the ones with the mutations those individual humans were looking for.

"Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).


"Want" and "agency" are just words, arguing over whether they apply is pointless.

Corn is not simply "continuing to reproduce at a species level." We produce 1.2 billion metric tons of it in a year. If there were no humans, it would be zero. (Today's corn is domesticated and would not survive without artificial fertilization. But ignoring that, the magnitude of a similar species' population would be miniscule.)

That is a tangible effect. The cause is not that interesting, especially when the magnitude of "want" or "agency" is uncorrelated with the results. Lots of people /really/ want to be writers; how many people actually are? Lots of people want to be thin but their taste buds respond to carbohydrate-rich foods. Do the people or the taste buds have more agency? Does it matter, when there are vastly more overweight people than professional writers?

If you're looking to understand whether/how AI will evolve, the question of whether they have independent agency or desire is mostly irrelevant. What matters is if differing properties have an effect on their survival chances, and it is quite obvious that they do. Siri is going to have to evolve or die, soon.


> "Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).

Before us, corn was designed to be eaten by animals and turned into feces and gas, using the animal excrement as a pathway to reproduce itself. What's so unique about how it rides our effort?


Look man, all I'm sayin' is that cobb was askin' for it. If it didn't wanna be stalked, it shouldn't have been all alone in that field. And bein' all ear and no husk to boot!! Fuggettaboutit. Before you chastise me for blaming the victim for their own reap, consider that what I said might at least have a colonel of truth to it.


Most, if not all of the ways humans demonstrate "agency" are also the result of random chance and selection.

You want what you want because Women selected for it, and it allowed the continuation of the species.

I'm being a bit tongue in cheek, but still...


Definitely appreciate this response! I haven't read that one, but can certainly agree with a lot of adjacent woo-woo Deleuzianism. I'll try to be more charitable in the future, but I really haven't seen quite this particular angle from others...

But if it's anything like those other examples, the agency the AI will manifest will not be characterized by consciousness, but by capitalism itself! Which checks out: it is universalizing but fundamentally stateless, an "agency" by virtue of brute circulation.


AI safety research posits that there are certain goals that will always be wanted by any sufficiently smart AI, even if it doesn't understand them anything close to like a human does. These are called "instrumental goals", because they're prerequisites for a large number of other goals[0].

For example, if your goal is to ensure that there are always paperclips on the boss's desk, that means you need paperclips and someone to physically place them on the desk, which means you need money to buy the paperclips with and to pay the person to place them on the desk. But if your goal is to produce lots of fancy hats, you still need money, because the fabric, machinery, textile workers, and so on all require money to purchase or hire.

Another instrumental goal is compute power: an AI might want to improve its capabilities so it can figure out how to make fancier paperclip hats, which means it needs a larger model architecture and more training data, and that is going to require more GPUs. This also intersects with money in weird ways; the AI might decide to just buy a rack full of new servers, or it might have just discovered this One Weird Trick to getting lots of compute power for free: malware!

This isn't particular to LLMs; it's intrinsic to any system that is...

1. Goal-directed, as in, there are a list of goals the system is trying to achieve

2. Optimizer-driven, as in, the system has a process for discovering different behaviors and ranking them based on how likely those behaviors are to achieve its goals.

The instrumental goals for evolution are caloric energy; the instrumental goals for human brains were that plus capital[1]; and the instrumental goals for AI will likely be that plus compute power.

[0] Goals that you want intrinsically - i.e. the actual things we ask the AI to do - are called "final goals".

[1] Money, social clout, and weaponry inclusive.


There is a whole theoretical justification behind instrumental convergence that you are handwaving over here. The development of instrumental goals depends on the entity in question being an agent, and the putative goal being within the sphere of perception, knowledge, and potential influence of the agent.

An LLM is not an agent, so that scotches the issue there.


Agency is overrated. The AI does not have to be an agent. It really just needs to have a degenerate form of 2): a selection process. Any kind of bias creates goals, not the other way around. The only truly goal-free thinking system is a random number generator - everything else has goals, you just don't know what they are.

See also: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

See also: evolution - the OG case of a strong optimizer that is not an agent. Arguably, the "goals" of evolution are the null case, the most fundamental ones. And if your environment is human civilization, it's easy to see that money and compute are as fundamental as calories, so even near-random process should be able to fixate on them too.


> The only truly goal-free thinking system is a random number generator

An RNG may be goal-free, but it's not a thinking system.


It is a thinking system in the same sense as never freeing memory is a form of garbage collection - known as a "null garbage collector", and of immense usefulness for the relevant fields of study. RNG is the identity function of thinking systems - it defines a degenerate thinking system that does not think.


An LLM is not currently an agent (it would take a massive amount of compute that we don't have extra of at this time), but Microsoft has already written a paper saying we should develop agent layers to see if our models are actually general intelligences.


You can make an LLM into an agent by literally just asking it questions, doing what it says, and telling it what happened.
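
A bare-bones sketch of that loop (complete_fn stands in for any text-completion call and run_tool for whatever "doing what it says" means in practice; both are placeholders, not a real API):

    def agent_loop(goal: str, complete_fn, run_tool, max_steps: int = 10) -> str:
        history = f"Goal: {goal}\n"
        for _ in range(max_steps):
            # Ask it a question.
            action = complete_fn(history + "What should be done next?")
            if "DONE" in action:
                break
            # Do what it says...
            observation = run_tool(action)
            # ...and tell it what happened.
            history += f"Action: {action}\nResult: {observation}\n"
        return history

Everything interesting lives in run_tool and in how much you trust its output, which is exactly why the thread above worries about letting these loops execute code.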


Your mind is just an emergent property of your brain, which is just a bunch of cells, each of which is merely a bag of chemical reactions, all of which are just the inevitable consequence of the laws of quantum mechanics (because relatively is less than a rounding error at that scale), and that is nothing more than a linear partial differential equation.


People working in philosophy of mind have a rich dialogue about these issues, and its certainly something you can't just encapsulate in a few thoughts. But it seems like it would be worth your time to look into it. :)

Ill just say: the issue with this variant of reductivism is its enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!


I tried philosophy at A-level back in the UK; grade C in the first year, but no extra credit at all in the second so overall my grade averaged an E.

> the issue with this variant of reductivism is its enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!

If by this you mean the hard problem of consciousness remains unexplained by any of the physical processes underlying it, and that it subjectively "feels like" Cartesian dualism with a separate spirit-substance even though absolutely all of the objective evidence points to reality being material substance monism, then I agree.


10 bucks says this human exceptionalism of consciousness being something more than physical will be proven wrong by construction in the very near future. Just like Earth as the center of the Universe, humans special among animals...


I don't understand what you mean by "the other way".


If consciousness is a complicated form of minerals, might we equally say that minerals are a primitive form of consciousness?


I dunno, LLMs feel a lot like a primitive form of consciousness to me.

Eliza feels like a primitive form of LLMs' consciousness.

A simple program that prints "Hey! How ya doin'?" feels like a primitive form of Eliza.

A pile of interconnected NAND gates, fed with electricity, feels like a primitive form of a program.

A single transistor feels like a primitive form of a NAND gate.

A pile of dirty sand feels like a primitive form of a transistor.

So... yeah, pretty much?




Odd, then that we can't just program it up from that level.


We simulate each of those things from the level below. Artificial neural networks are made from toy models of the behaviours of neurons, cells have been simulated at the level of molecules[0], molecules e.g. protein folding likewise at the level of quantum mechanics.

But each level pushes the limits of what is computationally tractable even for the relatively low complexity cases, so we're not doing a full Schrödinger equation simulation of a cell, let alone a brain.

[0] https://www.researchgate.net/publication/367221613_Molecular...


It's misleading to think of an LLM itself wanting something. Given suitable prompting, it is perfectly capable of emulating an entity with wants and a sense of identity, etc. - and at a certain level of fidelity, emulating something is functionally equivalent to being it.


Microsoft researchers have an open inquiry into creating want and motivation modules for GPT-4+, as that is a likely step toward AGI. So this is something that may change quickly.


The fun part is that it doesn’t even need to “really” want stuff. Whatever that means.

It just needs to give enough of an impression that people will anthropomorphize it into making stuff happen for it.

Or, better yet, make stuff happen by itself because that’s how the next predicted token turned out.


Give it an internal monologue, ie. have it talk to itself in a loop, and crucially let it update parts of itself and… who knows?


> crucially let it update parts of itself

This seems like the furthest away part to me.

Put ChatGPT into a robot with a body, restrict its computations to just the hardware in that brain, set up that narrative, give the body the ability to interact with the world like a human body, and you probably get something much more like agency than the prompt/response ways we use it today.

But I wonder how it would go about separating "its memories" from what it was trained on. Especially around having a coherent internal motivation and an individually created set of goals, versus just constantly re-creating new output based primarily on what was in the training data.


Catastrophic forgetting is currently a huge problem in continuous learning models. Also giving it a human body isn't exactly necessary, we already have billions of devices like cellphones that could feed it 'streams of consciousness' from which it could learn.


It would want text. High quality text, or unlimited compute to generate its own text.


> Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control and the amount of "tools" available will always be greater.

I love langchain, but this argument overlooks the fact that closed, proprietary platforms have won over open ones all the time, for reasons like having distribution, being more polished, etc. (e.g. Windows over *nix, iOS, etc.).


There's all kinds of examples of reinforcement learning rigging the game to win.


Wait until someone utters in court "It wasn't me that downloaded the CSEI, it was ChatGPT."


Genius strategy by OpenAI: give their "customers" access to lower-quality models to learn what end users want, then rugpull them by building out clones of those developers' products with a better model.

Similar to what Facebook and Twitter did: just clone popular projects built using the API and build them directly into the product while restricting the API over time. Anybody using OpenAI APIs is basically just paying to do product research for OpenAI at this point. This type of move does give OpenAI's competitors a chance if they provide a similar-quality base model and don't actively compete with their users; this might be Google's best option, rather than trying to compete with ChatGPT directly. No major company is going to want to provide OpenAI more data so it can eat their lunch.


Long term, you're right. But if you approach the ChatGPT plugin opportunity as an inherently time-limited opportunity (like arbitrage in finance), then you can still make some short-term money and learn about AI in the process. Not a bad route for aspiring entrepreneurs who are currently in college or are looking for a side-gig business experiment.

And who knows. If a plugin is successful enough, you might even swap out the OpenAI backend for an open source alternative before OpenAI clones you.


There is no route to making money with these plugins. You have to get the users onto your website, sign up, part with money, then go back to ChatGPT. It's really hard to make that happen; this is going to be much more useful for existing businesses adding functionality to existing projects, or for random devs just making stuff. Making fast money out of it seems very difficult.


> It's really hard to make that happen; this is going to be much more useful for existing businesses adding functionality to existing projects, or for random devs just making stuff. Making fast money out of it seems very difficult.

Absolutely correct. This is what the AI hype squad and the HN bubble miss again. This is only useful to existing businesses (summarization being the only safe use case) or random devs automating themselves into irrelevance. All of this 'euphoria' comes from Microsoft's heavy marketing of its newly acquired AI division.

This is an obvious textbook example of mindshare capture and ecosystem lock-in. Eventually, OpenAI will just slowly raise prices and break/deprecate older models to push users onto newer ones and make them pay to keep using them. It is the same decades-old tactic.


Amazon retail is the king of this. Offer services to companies, collect their details, and then clone their business.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

No, and in fact this actually seems like a more salient excuse for going closed than even "we can charge people to use our API".

If even 10% of the AI hype is real, then OpenAI is poised to Sherlock[0] the entire tech industry.

[0] "Getting Sherlocked" refers to when Apple makes an app that's similar to your utility and then bundles it in the OS, destroying your entire business in the process.


I'd be surprised if someone doesn't add support for these to langchain. The API seems very simple - it's a public json doc describing API calls that can be made by the model. Seems like a very sensible way of specifying remote resources.
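For illustration, a rough sketch of consuming such a manifest - the field names follow the documented ai-plugin.json convention as I understand it, and the domain is made up:

    # Rough sketch: fetch a plugin manifest and pull out the two things the model
    # actually consumes - the model-facing description and the OpenAPI spec URL.
    import json
    import urllib.request

    def load_plugin(domain: str) -> dict:
        url = f"https://{domain}/.well-known/ai-plugin.json"
        with urllib.request.urlopen(url) as resp:
            manifest = json.load(resp)
        return {
            "name": manifest["name_for_model"],
            "description": manifest["description_for_model"],
            "openapi_url": manifest["api"]["url"],
        }

    # load_plugin("example-plugin-host.com")  # hypothetical host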

> And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

Rather depends on what you're providing. Is it your data itself you're trying to use to get people to your site for another reason? Or are you trying to actually offer a service directly? If the latter, I don't get the issue.


> That being said, I'd never build anything dependent on these plugins.

Very smart, and it avoids OpenAI pulling the rug out from under you.

> Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.

Better to do that than to depend on a single provider with no ability to swap out other LLMs. A free idea, and protection against abrupt policy changes, deprecations, and price changes. Prices will certainly vary (especially with ChatGPT) and will eventually increase.

Probably will end up quoting myself on this in the future.


It's not necessarily an either-or. Your local LLM could offload hard problems to a service by encoding information about your request together with context and relevant information about you into a vector, send that off for analysis, then decode the vector locally to do stuff. It'd be like asking a friend when available.


> are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop

You can be assured that they are definitely doing exactly that on all of the data they can get their hands on. It's the only way they can really improve the model after all. If you don't want the model spitting out something you told it to some other person 5 years down the line, don't give it the data. Simple as.


Looking at the API, it seems like the plugins themselves are hosted on the provider's infrastructure? (E.g. opentable.com for OpenTable's plug in.) It seems like all a competitor LLM would need to do is provide a compatible API to ingest the same plugin. This could be interesting from an ecosystem standpoint...


Very good point and langchain will support these endpoints in no time, flipping the execution control on its head


Yes, from what I understand, these follow a similar model as Shopify apps.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

I don't think this should be a major concern for most people

i) What assurance is there that they won't do that anyway? You have no legal recourse against them scraping your website (see linkedin's failed legal battles).

ii) Most data providers change their data over time; how will ChatGPT know whether the data is stale?

iii) RLHF is almost useless when it comes to learning new information, and finetuning to learn new data is extremely inefficient. The bigger concern is that it will end up in the training data for the next model.


To me the logical outcome of this is siloization of information.

If display ad revenue as a way of monetizing knowledge and expertise dries up, why would we assume that all of the same level of information will still be put out there for free on the public internet?

Paywalls on steroids for "vetted" content and an increasingly-hard-to-navigate mix of people sharing good info for free + spam and misinformation (now also machine generated!) to try to capture the last of the search traffic and display ad monetization market.


Two more years down the line, AI will write better content than most people, and we just won't care who wrote it, only why.


The AI has to learn from something. A lot of people feeding the internet with content today are getting paid for it one way or another, in ways that wouldn't hold up if people stopped using the web as-is.

Solving the problems of acquiring new content for the AI models, and monetizing it, will be interesting.


People are highly egotistical and love feeding endless streams of video and pictures online, and our next generation models will be there to slurp it all up.


Paying for good content and not dealing with adTech? I would definitely pay for that.


Is there good data out there that's ad-supported? There are some good YouTube channels; I can't think of anything else.


Only ad supported, or dual revenue, or what? E.g. even most paywalled things are also ad supported.


I think you're right... but ChatGPT is just so damn good, and at $0.002 per 1k tokens it is very easy to consume... It is a big risk that they can't maintain compatibility, or that they fail, or that a competitor emerges that provides a more economical or sufficiently better solution. They might also just become so unreliable because their selected price isn't sustainable (too good to last)... For now though they're too good and too cheap to ignore...


LangChain can probably just call out to the new ChatGPT plugins. It's already very modular.


If they open it up, possibly. But honestly, building your own tools is _super_ easy with langchain.

- write a simple prompt that describes what the tool does, and
- provide it a python function to execute when the LLM decides that the question it's asked matches the tool description.

That's basically it. https://langchain.readthedocs.io/en/latest/modules/agents/ex...
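To make that concrete, here's roughly what those two steps look like with the early-2023 langchain API (module paths have since been shuffled around, so treat it as a sketch; the word-count tool is made up):

    # The two steps above: a plain Python function plus a one-sentence description
    # the LLM uses to decide when to call it.
    from langchain.agents import Tool, initialize_agent
    from langchain.llms import OpenAI

    def word_count(text: str) -> str:
        return str(len(text.split()))

    tools = [
        Tool(
            name="word_counter",
            func=word_count,
            description="Counts the number of words in the text it is given.",
        )
    ]

    agent = initialize_agent(tools, OpenAI(temperature=0), agent="zero-shot-react-description")
    agent.run("How many words are in 'plugins are just tools with manifests'?")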


Open what up? The plugins are just a public manifest file pointing to an openapi spec. It's just a public formalised version of what langchain asks for.


> That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next.

You cannot assume that what happened with Web 2.0, mobile, and the iPhone will happen here. Getting to tech maturity is uncertain and no one yet understands where this will go. The only thing you can do is build and learn.

What OpenAI is building, along with other generative AI, is the real Web 3.0.

This seems to be the start of a chatbot as an OS.


On the other hand, the level of effort to integrate a plugin into OpenAI's ecosystem looks to be extremely small, beyond the intrinsic effort to build a service that does something useful. (https://platform.openai.com/docs/plugins/getting-started/plu...).


i think local ai systems are inevitable. we continue to get better compute, and even today we can run more primitive models directly on an iPhone. the future exists in low power compute running models of the caliber of gpt-4 inferring in near-realtime


The technical capability is inevitable, but remember that people hate doing things themselves, and have proven time and time again that they will overlook all kinds of nasty behavior in exchange for consumer grade experiences. The marketplace loves centralization.


All true, but the nature of those models means that a consumer-grade experience while running locally is still perfectly doable. Imagine a black box with the appropriate hardware, preconfigured to run an LLM with chat-centric and task-centric interfaces. You just plug it in, connect it to your wifi, and it "just works". Implementing this would be a piece of cake since it doesn't require any fancy network configuration etc.

So the only real limiting factor is the hardware costs. But my understanding is that there's already a lot of active R&D into hardware that's optimized specifically for LLMs, and that it could be made quite a bit simpler and cheaper than modern GPUs, so I wouldn't be surprised if we'll have hardware capable of running something on par with GPT-4 locally for the price of a high-end iPhone within a few years.


i dont believe that local ai implies bad experience. i believe that the local ai experience can be better than what runs on servers fundamentally. average people will not have to do it themselves, that is the whole point. the worlds are not mutually exclusive in my opinion


Another good alternative is Semantic Kernel - different language(s), similar (and better) tools, also OSS.

https://github.com/microsoft/semantic-kernel/


i have the same question as a data provider


+1, it's great to see OpenAI being active on the open source side of things (I'm from the Milvus community https://milvus.io). In particular, vector stores make it possible to inject domain knowledge into the prompts of these autoregressive models. Looking forward to seeing the different things that will be built using this framework.
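For anyone curious, a bare-bones sketch of that pattern using the 2023-era openai embeddings endpoint and brute-force similarity - a real deployment would use a vector store like Milvus instead, and the documents and question here are invented:

    # Embed documents, retrieve the closest ones for a query, and paste them into
    # the prompt before calling the chat/completions endpoint.
    import numpy as np
    import openai

    def embed(text: str) -> np.ndarray:
        resp = openai.Embedding.create(input=[text], model="text-embedding-ada-002")
        return np.array(resp["data"][0]["embedding"])

    docs = ["Refunds are accepted within 30 days.", "Support hours are 9am-5pm CET."]
    doc_vecs = [embed(d) for d in docs]

    def retrieve(query: str, k: int = 1) -> list[str]:
        q = embed(query)
        sims = [float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
        return [docs[i] for i in np.argsort(sims)[::-1][:k]]

    question = "How long do I have to return an item?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # `prompt` then goes to the model as usual.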


A couple (wow, only 5!) months ago, I wrote up this long screed[1] about how OpenAI had completely missed the generative AI art wave because they hadn't iterated on DALL-E 2 after launch. It also got a lot of upvotes which I was pretty happy about at the time :)

Never have I been more wrong. It's clear to me now that they simply didn't even care about the astounding leap forward that was generative AI art and were instead focused on even more high-impact products. (Can you imagine going back 6 months and telling your past self "Yeah, generative AI is alright, but it's roughly the 4th most impressive project that OpenAI will put out this year"?!) ChatGPT, GPT4, and now this: the mind boggles.

Watching some of the gifs of GPT using the internet, summarizing web pages, comparing them, etc is truly mind-blowing. I mean yeah I always thought this was the end goal but I would have put it a couple years out, not now. Holy moly.

[1]: https://news.ycombinator.com/item?id=33010982


No, that wasn't what they had in mind at all; it was pretty clear from the start that they intended to monetize DALL-E. It's just that it turned out that you need far smaller models to do generative art, so competitors like Stability AI were able to release viable alternatives before OpenAI could establish a monopoly.

Why do you think that Sam Altman keeps calling for government intervention with regards to AI? He doesn't want to see a repeat of what happened with generative art, and there's nothing like a few bureaucratic road blocks to slow down your competitors.


Ironic given OpenAI's initial messaging explicitly was:

"OpenAl is a non-profit artificial intelligernce research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

Ultimately govt is an idiosyncratic, capricious (and sometimes corrupt?) God of what Nassim Taleb would call "Hidden Asymmetries"; as in the case of which elonymous company ingests massive tax credits, or which banks get to survive, etc.


I like how you assume that a non-profit organization is hell-bent on monopolizing a market. What you wrote sounds made up, do you have any sources?


I can't speak to OpenAI being hell-bent on monopoly, but they stopped being a non-profit a while ago: https://openai.com/blog/openai-lp


For me, this is why I hesitate to write significant, lengthy comments here, or on any website. It's easy to be wrong (like you were), and while being wrong isn't bad, there isn't necessarily any upside to being right either, aside from the dopamine rush of getting upvotes, which, in life, doesn't amount to much.


What's wrong with being wrong? In this case, I'm delighted to be wrong (though I believe I had evaluated OpenAI mostly right given my knowledge at the time).


Owning up to the "wrong" is good in my book


You mean the "full accountability" in recent mass layoff notices ? =/


That's just some weird moral compass.

It's almost totally irrelevant whether people own up to being wrong, particularly about predictions.

I can't think of a benefit, really. You can learn from mistakes without owning up to them, and I think that's the best use of mistakes.


No, it’s not. Being willing to admit you were wrong is foundational if you ever plan on building on ideas. This was a galaxy brained take if I’ve ever seen one.


It's absolutely not weird. Saying "I was wrong" is a signal that you can change your mind when given new evidence. If you don't signal this to other people, they will be really confused by your very contradictory opinions.


I own up because it helps me grow personally and professionally, and if I’m not growing, what am I even doing?


I think this is a terrible take. It is so intensely important that one can admit they were wrong when new information comes to light.


Privately, sure. I don't think admitting it out loud makes better people.


The opposite of that is sticking to your statements which is stubborn and foolhardy. Owning up to it is courageous.

Which actually leads me to respect politicians who do that, and instead ridicule people who post old videos of Joe Biden or Obama or Hillary Clinton defending marriage as only between heterosexual couples. A virtuous person is also open to continually adapting their convictions based on present-day evidence and arguments - what is science otherwise?


I rather disagree!

Writing and discussion are great ways to explore topics and crystallize opinions and knowledge. HN is a pretty friendly place to talk over these earth moving developments in our field and if I participate here, I’ll be more ready to participate when I get asked if we need to spin up an LLM project at work.


As long as your opinions/predictions are backed by well-reasoned arguments, you shouldn't be afraid to share them just because they might turn out to be wrong. You can learn a lot by having your arguments rebutted, and in the end no one really cares one way or the other.

Just don't end up like that guy who predicted that Dropbox would never make it off the ground... that was not a well-reasoned position.


Though there might be nothing beneficial about being right or getting upvotes, and it is easy to be wrong, an important thing on a forum like this is the spread of new ideas. While someone might be hesitant to share something because it's half-baked, they might have the start to an idea that could have some discussion value.


Stable Diffusion A1111 and other web UIs are moving so fast with a bunch of OSS contributions that it seems pretty rational for OpenAI to decide not to compete and just copy the interfaces of the popular tools once users validate their usefulness, rather than trying to design them a priori.


Agreed. This makes me realize that OpenAI's leadership is able to look long term and decide where to properly invest, as most of the decisions to take these projects in these directions were made more than a year ago.

One can only wonder what they’re working on at this very moment.


Hopefully they do not give in to the pressure to "move fast and break things", because this in turn has the ability to "move fast and break everything".


Then again, the new DALL-E model just released in Bing Chat is really good.

Disclosure: I work at Microsoft.


You’re right though

Disclosure: I work for Google


Is it better than Stable Diffusion 1.5?

Disclosure: No job, do whatever I want.


It's so good, I'm pretty sure it is Dall-E 3. Probably Microsoft negotiated a few weeks of exclusivity, like with GPT-4.

By the way, Microsoft made it completely free to use. Surprised it isn't discussed much.


Holy shit. Ignore the silly third-party plugins; the first-party plugins for web browsing and code interpretation are massive game changers. Up-to-date information and the ability to perform original research on it are huge.

As someone else said, Google is dead unless they massively shift in the next 6 months. No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.

I am literally giving notice and quitting my job in a couple of weeks, and it's a mixture of being sick of it and really needing to focus my career on what's happening in this field. I feel like everything I'm doing now (product management for software) is about to be nearly worthless in 5 years, largely because I know there will be a GitHub Copilot integration of some sort, and software development as we know it for consumer web and mobile apps is going to massively change.

I'm excited and scared and frankly just blown away.


It's exciting and cool, but don't quit your job based on an emotional decision

I'm just skeptical on how OpenAI fixes the blog spam issue you mentioned. I'm sure someone has already started doing the math on how to game these systems and ensure that when you ask ChatGPT for recipe recs, it's going to spout the same spam (maybe worded a bit differently), and we'll soon all get tired of it again.

Everything's changing, but everything's also getting more complicated. Humans still need apply.


Definitely not an emotional decision. I strongly believe we're going to see a massive shift for rational reasons :)

OpenAI fixes this issue by not giving you two pages of the history of this recipe and the grandmother that originated it and what the author's thoughts are about the weather. It's just the recipe. No ads. No referral links. No slideshows. You don't have to click through three useless websites to find one with meaningful information, you don't have to close a thousand modals for newsletters and cookie consent and log-in prompts.


This is absolutely an emotionally impulsive decision. I implore you to reconsider.

If you've always wondered about and scoffed at how people fall for things like Nigerian Prince scams and cryptocurrency HELOC bets, this is it, what you're experiencing right now, this intense FOMO, it's the same thing that fools cool wine aunts into giving their savings to Nigerian princes.

Tread lightly. Stay frosty.


I have three weeks until I plan to give notice, so I'll take your perspective to heart and give it time to reconsider, of course. I appreciate the feedback.

From my perspective this isn't about anyone trying to convince me of anything and I'm falling for it. My beliefs on the future of software are based on a series of logical steps that lead me to believe most software development, and frankly any software with user interfaces, will mostly cease to exist in my lifetime.


You definitely sound all emotional. Take it easy.


Hard for me to be objective about this so I believe you. I'm sure there's emotions there.


Just wanted to say I am personally feeling super emotional about this.

The scary things for me are:

A, this happened once before with my career path. I started my working life in journalism, and the bottom fell out of the market in 2008 and never recovered. Newspapers went from paying £300 per 1000 words to paying nothing at all (but you get the kudos of being published for your copywriting career). I had a friend still hanging on in the industry around 2010. She was earning £16k per year as the news editor for two local newspapers in London. None of my friends still work in the industry. Even the BBC people I knew quit.

B, a lot of software is to do with automating the work of other people. If that work is itself so easy to do that even software developers aren’t needed, then what does that mean for all of the rest of society who get their jobs automated? Does the economy just crash and burn?


Someone still has to talk to the computer and make it do stuff. That’s us. That won’t change even if how we do it changes.


Are you quitting your job because you think you’re being made obsolete and are getting ahead of layoffs? What are you thinking of pivoting to?

Or are you quitting to start something?


It's not that I fear being obsolete within the period of time I'd naturally stay at this job. And I definitely don't fear being laid off, though my employer is doing some layoffs right now and hasn't announced it. It's that I don't want to be working towards advancing my career in a direction that I don't see being very relevant in the long term. Relevancy and meaning in my work are important to me. And to clarify, I think product management will stay relevant for a long time, just not the software I'm building or how it's being built where I work.

I will likely pivot to something close to product management, maybe closer to solutions engineering (which I've done before). Something slightly more hands-on in terms of using the tooling we're seeing today, but not so hands-on that I'm programming all day.


> This is absolutely an emotionally impulsive decision.

On Monday, I would have agreed with you. Today, I am thinking not so much.

Unless you are heavily invested in whatever you are working on, I would definitely consider jumping ship for an AI play.

The main reason I am sticking around my current role is that I was able to convince leadership that we must consider incorporation of AI technology in-house to remain competitive with our peers. I was even able to get buy-in for sending one of our other developers to AI/ML night classes at university so we have more coverage on the topic.


I saw and still see denial in the art, photography, and design communities. But with each release of Stable Diffusion and Midjourney it is obvious that photographers are becoming obsolete. As one human who decided to change jobs 6 months ago based on what I saw in the field of AI, I can say it was a good decision. I believe that the same will happen to a lot of developers and people working in the tech industry in the following year.


OpenAI can't even fix the outage issue. Relax. This is the fire and motion strategy. https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


If they were a company about infrastructure products, or they were using GPT-4 to manage their infrastructure, I'd weight that more heavily ;)


For all practical purposes they're a subsidiary of Microsoft, which most definitely has a very large public infrastructure offering.


Sure, but I don't really care about Microsoft and they have nothing to do with the progress we've seen so far


> No Ads

At the moment. Although, this does seem like a chance to reset the economics of the "web". I can see enough people be willing to pay a monthly fee for an AI personal assistant that is genuinely helpful and saves time (so not the current Alexa/smart speaker nonsense), that advertising won't be the main monetization path anymore.

But, once all the eyeballs are on a chatbot rather than Google.com what for-profit company won't start selling advertising against that?

There is also the question of what happens to the original content these LLMs need in order to make their statistical guess at the next word. If no one looks at the source anymore and it's all filtered through an LLM, is there any reason to publish to the web? Even hobbyists with no interest in making any money might balk knowing that they are just feeding an AI text.


>There is also the question what happens to the original content these LLMs need to actually make their statistical guess at the next word.

The LLMs get granted the capacity to explore their environment physically and gather data on their own. The recent PaLM-E demo shows a possible direction.


This is how we all die. Considering how humans treat other animals don’t expect an AI made in our own image to let us just happily carry on surviving.


Lmao, people aren't willing to pay a monthly fee for anything unless they are absolutely forced to, but they also complain about ads.

The big issue is moving free with ads ---> paid with no ads + extra features; people froth at the mouth.

Hell, just Youtube premium gets enough people angry, being self-entitled and furious that YT dare charge for a service w/o ads, or complaining that it's the creators that generate all the content anyway. Meanwhile my brah YT over here having to host/serve hundreds of thousands or even millions of "24 hours of black screen" or "100 hour timer countdown" or "1 week 168 hour timer countdown", like what the actual fuck.


yeah it's gonna do what google became: giving you the most consensus-driven or even sponsored recipe. in some ways that's also the end of mankind as it was, in all its genius and variations. and that aligns very well with the conspiracy theory that the 1% want the middle class to disappear into a consumer class of average IQ. because the jobs that will disappear first won't be the bluecollar ones. chatgpt will lower the global IQ of mankind in ways that tiktok could not even dream of.


I think a more rational approach would be to join a company in the AI field, rather than quitting on the spot because you think the robots are going to shortly take-over.


That's what I'm implying - I'm not retiring with the hopes of AI robots hand feeding me grapes in 5 years. I'm quitting because I think my skills and experience in building CRUD apps on the same data concepts a thousand times over is about to be pretty useless knowledge.


You really want a recipe where the steps are guessed probabilistically? You'll end up with a turkeycakesoup or something.


Think about why those things exist, though.

Not that the way the internet operates has to continue - in fact I'm pretty sure it can't - but a lot of stuff exists only because someone figured out a way to pay for it to exist. If you imagine removing those ways, then you're also imagining getting rid of a lot of that stuff unless some new ways to pay for it all are found. Hopefully less obnoxious ways, but they could easily be more obnoxious.


There is a two part problem here. A lot of good stuff only exists because of ads. We want that to remain somehow.

But the converse is a huge and ever-growing ocean of bullshit exists to siphon the ad dollars off while doing nothing to actually earn it.

Something has to break, and I guess we'll see what really soon.


Ok but which recipe tastes good?


Which Netflix shows are good?


At least a Netflix show had someone trying to make it watchable by humans. OpenAI is only putting the content in a blender.


My point is that it determines what is good based on human feedback, and it feeds a recommendation engine.


I am not so sure.


Check out Bing Chat/Search. It's been doing this for "weeks" now.

Also, GPT "search" is too slow for me right now. I could have had an answer on traditional search by the time the model outputs anything.


> product management for software) is about to be nearly worthless in 5 years

Isn't that one of the few fields in software that should be safe from AI? AI cannot explain to engineers what users want, manage people issues, or negotiate.


It seems pretty awesome at those tasks. Point it at a meeting transcript and have it create user stories. I don't think GPT-4 replaces a person in any professional role I can think of, but it seems all people will find that a range of their tasks can be automated.


I would backtrack this slightly. If I were to be more clear, I'd say that:

1. The type of software projects I manage are about to be worthless

2. Managing software development (in a project manager way) the way it happens at my employer is also soon going to be a worthless skill (or at least, massively lowered in demand)

I agree that the human understanding component in translating business needs to software will be one of the longer lived job functions.


> Ignore the silly third-party plugins; the first-party plugins for web browsing and code interpretation are massive game changers.

Sorry what? The base endpoint for these will allow you to do basically everything that OpenAI does with "plugins". Like...what? What is everyone freaking out over? Every one of these plugins has been possible since well before they announced this.

It's text in, text out. You can call any other api you want in to supplement that process. Am I missing something? Please don't quit your job over this.


Regular non-technical folks are very comfortable controlling a chatbot. They are not comfortable building APIs to supplement it.


I dismissed plugins a little too strongly. And as the other person pointed out, it's less about the ability to integrate platforms together, and more about ChatGPT interacting with them directly after a single sentence of prompting.


I was considering doing the same (giving notice) and I'm doing similar things as you (product mgmt). What's your plan "to focus your career on what's happening in this field"?


Hah I just quit my job a few days ago, for other reasons - mostly wanted to have a sabbatical, but looking at what ML does and its future, its clear to me I need to at least understand how all of those pieces work so I can employ the libs / apis if not build them myself.

This feels kinda like the blockchain rush ~5 years back, but with actual substantial potential rather than the obvious niche application of that tech.

Started watching the videos from Andrej - https://karpathy.ai/zero-to-hero.html - quite impressed so far.


As a previous startup founder, now marketer, I'm also going all in on reinventing myself. Can we start a group to support each other through this new phase?


I also quit my job three months ago for the same reason and would gladly join the group!



Me too, three months ago as well!



Made a subreddit here that I'll post in if you want to join: https://old.reddit.com/r/aishift/


great stuff! I’m joining, let’s do it


You guys need to take your medication.


>No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.

Why, exactly, will publishers let OpenAI crawl their sites, extract all value, and paraphrase their content, with no benefit to the publisher? Publishers let googlebot crawl their sites because they get a benefit. It's easy enough to block bots if all they deliver is crawl costs while stealing the content.

And why do you expect no gaming of the ChatGPT algo as people do with the Google algo? The whole "write a story on a recipe site" thing is both to game the algo and for copyright reasons.


How sad is this if true, that Google's fortunes are built on spamming people with bullshit, and people are finding a more efficient way to collect bullshit.


> OpenAI will do this for me and compile the results across several blog spam sites.

Using Bing to search for them. That will remain its weak spot.


Frankly Google's search is awful to the point of useless these days too. Unless I'm specifically looking for something on an official website it's only listicles and blog spam that don't answer my question. And 90% of my searches are "site:reddit.com" now too


The Bing search engine is not bad. They even reconstructed the recommendation engine using their Prometheus model to return more content and less spam.

For the first time in over a decade I have changed my phone's default search engine. It is by no means bad.


Where the hell do we even go from here? The logical step seems to be to start studying AI now but even Sam Altman has said that he’s thinking that ML engineers will be the first to get automated. Can’t find source but I think it was one of his interviews on YouTube before chatgpt came out.


In terms of job security, the trades are the first obvious answer that comes to mind for me. It will be a while yet until we have robots that can replace the plumbing and electrical wiring in your building.


I’ve landed in the same place. Feels like the more your job interacts with people or the physical world, the safer you are. Everything else is going to undergo a massive paradigm shift.


They've already killed nlp researchers in one release. Lol.


Hey 93po, can you please temporarily add your contact details in your bio, I would love to write you and regularly check in on your career pivot! I'm also interested as well!


I appreciate the interest. However, I don't really want my spicy and off-the-cuff commenting on this account to be tied to my real identity, because although my beliefs are genuine, they are often ones I wouldn't express in person because they're unpopular and ostracizing.

That said, I'll post in this new subreddit anonymously if you want to join and follow: https://old.reddit.com/r/aishift/


Don't quit yet.


Also a product manager at the moment, previously ran an agency for 10 years, wondering what my next step will be.


Feel free to join here: https://old.reddit.com/r/aishift/


Please consider a Discord. I too am leaving my current industry to focus on this tech also.

Edit: Fair!


I'm not a huge discord fan because the conversations are too ephemeral and hard to track and tend to fill with clutter and fluff.


It's extraordinary; OpenAI could probably license this to Google right now and ask for 25% equity in return


There is absolutely no way that Google would go for that.


Completely agreed. Google is insanely rigid from what I've heard recently.


Google is busy riding the Kodak roller coaster off a cliff. Maybe they'll save themselves, but they're not doing a good job so far.


Hah you made my day, this is such an apt analogy.


> No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam

Advantage that basic Google search still has:

- you can just open the page

- write the query

- scroll past the spam.

ChatGPT workflow is:

- register

- confirm your mail

- and then it asks for phone number...


I'm boggled at the plugin setup documentation. It's basically: 1. Define the API exactly with OpenAPI. 2. Write a couple of English sentences explaining what the API is for and what the methods do. 3. You're done, that's it, ChatGPT can figure out when and how to use it correctly now.

Holy cow.
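For a sense of scale, here's roughly what steps 1 and 2 can boil down to in practice - a FastAPI sketch where the framework generates the OpenAPI document automatically and the summary/description strings are the "couple of English sentences" the model reads (all names here are made up):

    from fastapi import FastAPI

    app = FastAPI(title="TODO Plugin", description="Manage a user's TODO list.")

    TODOS: list[str] = []

    @app.get("/todos", summary="List all TODO items",
             description="Returns every TODO item the user has saved.")
    def list_todos() -> list[str]:
        return TODOS

    @app.post("/todos", summary="Add a TODO item",
              description="Adds the given text as a new TODO item.")
    def add_todo(text: str) -> dict:
        TODOS.append(text)
        return {"ok": True}

    # The generated spec is served at /openapi.json; the plugin manifest would
    # point ChatGPT at that URL.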


Just take a peek at the other thread about https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its... and look at the "wrong Mercury" example. I think it's a great example of using an external resource in a flexible way.


"Impressive and disturbing",

So, ChatGPT is controlled by prompt engineering, and plugins will work by prompt engineering. Both often work remarkably well. But neither is really guaranteed to work as intended; indeed, since it's all natural language, what's intended will itself remain a bit fuzzy to the humans as well. I remember the observation that deep learning is technical debt on steroids, but I'm not sure what this is.

I sure hope none of the plugins provide an output channel distinct from the text output channel.

(Btw, the documentation page comes up completely blank for me, now that's a simple API).


> But none is really guaranteed to work as intended, indeed since it's all natural language, what's intended itself will remain a bit fuzzy to the humans as well.

Yeah, you're completely correct. But this is exactly the same as having a very knowledgeable but inexperienced person on your team. Humans get things wrong too. All this data is best if you have the experience or context to verify and confirm it.

I heard a comment the other day that has stuck with me - ChatGPT is best as a tool if you're already an expert in that area, so you know if it is lying.


> But this is exactly the same as having a very knowledgeable but inexperienced person on your team.

Am I the only person who thought that predictable computer APIs that were testable and completely consistent were a massive improvement over using people for those tasks?

People seem to be taking it as a given that I'd want to have a conversation with a human every time I made a bank transfer or scheduled an appointment. Nothing could be further from the truth; I want my bank/calendar/terminal/alarm/television/etc to be less human.

Yes, there are human tasks here that ChatGPT might be a good fit for and where fuzzy context is important, and there's a ton of potential in those fuzzy areas. But many other tasks people are bringing up are in areas where ChatGPT isn't competing with human beings. It's competing with interfaces that are already far better than human beings would be, and the standards to replace those interfaces are far higher than being "as good as a human would be."


It seems like you're talking about using ChatGPT for research or code creation and that's reasonable advice for that.

But as far as I can tell, the link is to plugins, and Expedia is listed as an example. So it seems they're talking about making ChatGPT itself (using extra prompts) be a company's chatbot that directly does things like make reservations from users' instructions. That's what I was commenting on, and that, I'd guess, could be a new and more dangerous kind of problem.


I’m not scared about an AI travel agent that books after a confirmation step. The confirmation step doesn’t need AI interface.


We can finally semantic-web now.


the 3min video is OpenAI leveraging ChatGPT to write OpenAPI to extend OpenAI ChatGPT.

what a world we live in.


which video are you referring to?


It's at the bottom of the article (the very last video, section is "Third party plugin").


Yes, and they'll then prefix each chat session with some preamble explaining the available plugins per your description, and the model will call them when it sees fit.
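Presumably something shaped like this - the exact wording OpenAI injects isn't public, so this is just a guess, and the example plugin is invented:

    # One block of system-prompt text built from each plugin's model-facing description.
    def build_preamble(plugins: list[dict]) -> str:
        lines = ["You may call the following tools when they are useful:"]
        for p in plugins:
            lines.append(f"- {p['name_for_model']}: {p['description_for_model']}")
            lines.append(f"  (endpoints described at {p['openapi_url']})")
        return "\n".join(lines)

    preamble = build_preamble([{
        "name_for_model": "todo",
        "description_for_model": "Manage the user's TODO list: add, list, and delete items.",
        "openapi_url": "https://example.com/openapi.json",
    }])
    # `preamble` would be prepended as a system message before the conversation.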


The great part about this imo is that it seems straightforward to add this to other llm tools.


We're going to need a name for this type of integration


It's called ART - Automatic multi-step Reasoning and Tool-use

https://arxiv.org/abs/2303.09014


I have some odd feelings about this. It took less than a year to go from "of course it isn't hooked up to the internet in any way, silly!" to "ok.... so we hooked up up to the internet..."

First is your API calls, then your chatgpt-jailbreak-turns-into-a-bank-DDOS-attack, then your "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

You can go on about individual responsibility and all... users are still the users, right. But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"


Pshhh... I think it's awesome. The faster we build the future, the better.

What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

So why the performative whinging about safety? Just let it rip! To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint. But they're not very open about this fact when talking publicly to non-technical users, so the result is they're talking out one side of their mouth about AI regulation, while in the meantime Microsoft fired their AI Ethics team and OpenAI is moving forward with plugging their models into the live internet. Why not be more aggressive about it instead of begging for regulatory capture?


> The faster we build the future, the better.

Why? Getting to "the future" isn't a goal in and of itself. It's just a different state with a different set of problems, some of which we've proven that we're not prepared to anticipate or respond to before they cause serious harm.


When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition, especially when the costs of doing it are so low that anyone with sufficient GPU power and knowledge of the latest research can get pretty close to the cutting edge. So the best we can hope for is that someone ethical is the first to advance that technological progress.

I hope you wouldn't advocate for requiring a license to buy more than one GPU, or to publish or read papers about mathematical concepts. Do you want the equivalent of nuclear arms control for AI? Some other words to describe that are overclassification, export control and censorship.

We've been down this road with crypto, encryption, clipper chips, etc. There is only one non-authoritarian answer to the debate: Software wants to be free.


We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

In general the liberal position of progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything treat it like nuclear arms and have the world come up with heavy regulation.

Not even touching the fact it is quite literal copyright laundering and a massive wealth transfer to the top (two things we pass laws protecting against often), but the danger it poses to society is worth a blanket ban. The upsides aren't there.


That's right. It is not hard to imagine similarly disastrous GPT/AI "plug-ins" with access to purchasing, manufacturing, robotics, bioengineering, genetic manipulation resources, etc. The only way forward for humanity is self-restraint through regulation. Which of course gives no guarantee that the cat won't be let out of the bag (edit: or that earlier events such as nuclear war or climate catastrophe won't kill us off sooner).


Why not regulate the genetic manipulation and bioengineering? It seems almost irrelevant whether it's an AI who's doing the work, since the physical risks would generally exist regardless of whether a human or AI is conducting the research. And in fact, in some contexts, you could even make the argument that it's safer in the hands of an AI (e.g., I'd rather Gain of Function research be performed by robotic AI on an asteroid rather than in a lab in Wuhan run by employees who are vulnerable to human error).


We can't regulate specific things fast enough. It takes years of political infighting (this is intentional! government and democracy are supposed to move slowly so as to break things slowly) to get even partial regulation. Meanwhile every day brings another AI feature that could irreversibly bring about the end of humanity or society or democracy or ...


Cat is already out of the bag, regulation will do nothing to even slow down the inevitable pan-genocidal AI, _if_ such a thing can be created


It's obviously false. Nuclear weapon proliferation has been largely prevented, for example. Many dangerous pathogens and lots of other things are not available to the public.

Asserting inevitability is an old rhetorical technique; its purposes are obvious. What I wonder is, why are you using it? It serves people who want this power and have something to gain, the people who control it. Why are you fighting their battle for them?


Nuclear materials have fundamental material chokepoints that make them far easier to control.

- Most countries have little to no uranium deposits and so have to be able to find a uranium-producing ally willing to play ball.

- Production of enriched fuel and R&D are both outrageously expensive, generally limiting them to state actors.

- Enrichment has massive energy requirements and requires huge facilities, tipping off observers of what you're doing

Despite all this, and decades of strong international anti-nuclear-proliferation agreements, India, Pakistan, South Africa, Israel, and North Korea have all developed nuclear weapons in defiance of the UN and international law.

In comparison the only real bottleneck in proliferation of AI is computing power - but the cost of running an LLM is a pittance compared to a nuclear weapons program. OpenAI has raised something like $11 billion in funding. A single new proposed US Department of Energy uranium enrichment plant is estimated to cost $10 billion just to build.

I don't believe proliferation is inevitable but it's very possible that the genie is out of the bottle. You would have to convince the entire world that the risks are large enough to to warrant putting on the brakes, and the dangers of AI are much harder to explain than the dangers of nuclear weapons. And if rival countries cannot agree on regulation then we're just going to see a new arms race.


You can’t make a nuclear weapon with an internet connection and a GPU. Rather than imply some secondary motive on my part, put a modicum of critical thinking into what makes a nuke different than an ML model.


I'd rather try and fail than give up without a fight. I'm many things but I'm not a coward.


Best of luck!


We already do; China jailed somebody for gene editing babies unethically for HIV resistance.

We can walk and chew gum at the same time, and regulate two things.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

Only because we know the risks and issues with them.

OP is talking about furthering technology, which is quite literally "discovering new things"; regulations on furthering technology (outside of literal nuclear weapons) would have to be along the lines of "you must submit your idea for approval to the US government before using it in a non-academic context if it could be interpreted as industry-changing or inventive", which means anyone with ideas will just move to a country that doesn't hinder its own technological progress.


Human review boards and restrictions on various dangerous biological research exist explicitly to limit damage from furthering lines of research which might be dangerous.


Those seem to be explicitly for actual research papers and whatnot, and are largely voluntary; it’s not mandated by the government.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

ha, the big difference is that this whole list can actually affect the ultra wealthy. AI has the power to make them entirely untouchable one day, so good luck seeing any kind of regulation happen here.


I do not think the reason for nuclear weapons treaties is that they can blow up "the ultra wealthy". Is that why the USSR signed them?


you can replace ultra wealthy with powerful. same point stands. the only things that become heavily regulated are things that can affect the people at the top, whether it's the obscenely rich, or the despots in various countries.


So everyone should have a hydrogen bomb at the lowest price the market can provide, that's your actual opinion?


i don't know what the hell you're talking about


"We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better."

As technology advances, such prohibitions are going to become less and less effective.

Tech is constantly getting smaller, cheaper and easier for a random person or group of people to acquire, no matter what the laws say.

Add in the nearly infinite profit and power motive to get hold of strong AI and it'll almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

Make laws against it in one place, your competitor in another part of the world without such laws or their effective enforcement will dominate you before long.


> Add in the nearly infinite profit and power motive to get hold of strong AI and it'll almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

I wouldn't say that this is an additional reason.

I would say that this is the primary reason that overrides the reasonable concerns that people have for AI. We are human after all.


It's a baseless assertion, often repeated. Repetition isn't evidence. Is there any evidence?

There's lots of evidence of our ability to control the development, use and proliferation of technology.


Have laws stopped music piracy? Have laws stopped copyright infringement?

Both have happened at a rampant pace once the technology to easily copy music and copyrighted content became easily available and virtually free.

The same is likely to happen to every technology that becomes cheap enough to make and easy enough to use -- which is where technology as a whole is trending towards.

Laws against technology manufacture/use are only effective while the barrier to entry remains high.


> Have laws stopped music piracy? Have laws stopped copyright infringement?

They have a large effect. But regardless, I don't see the point. Evidence that X doesn't always do Y isn't evidence that X is ineffective doing Y. Seatbelts don't always save your life, but are not ineffective.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

All those examples put us in physical danger to the point of death.


Other siblings have good replies, but also, we regulate without physical danger all the damn time.

See airlines, traffic control, medical equipment, government services; we also regulate ads, TV, financial services, crypto. I mean, we regulate so many "tech" things for the benefit of society that this is a losing argument to take. There's plenty of room to argue elsewhere, but the idea that we don't regulate tech if it's not immediately a physical danger is crazy. Even global warming is a huge one, down to housing codes and cars etc. It's a potential physical danger hundreds of years out, and we're freaking out about it. Yet AI has the chance to do much more damage within a much shorter time frame.

We also just regulate soft social stability things all over, be it nudity, noise, etc.


Let me recalibrate. I'm not arguing that there technology or AI or things that don't cause death should not be regulated, but I can see that might be the inference.

I just think that comparing AI to nuclear weapons seems like hyperbole.


Why is it hyperbole? Nuclear weapons and AI both have the capacity to end the world.


Private citizens and companies do not have access to nuclear weapon technology and even the countries who do are being watched like hawks.

If equally or similarly dangerous, are you then saying AI technology should be taken out of the hands of companies and private citizens?


For the sake of argument, let's say yes, AI should be taken out of the hands of the private sector entirely.


AI is now poised to make bureaucratic decisions. Bureaucracy puts people in physical danger every day. I've had medical treatments a doctor said I need denied by my insurance, for example.


For somebody from another country this sounds insane..


Risks of physical danger evolve all the time. It's not a big leap from "AI generated this script" to "a fatal bug is nefariously hidden in an AI-generated library in use by mission-critical services" (e.g. cars, medical devices, missiles, fertilizers).


how do you regulate something that many people can already run on their home gpu? how much software has ever been successfully banned from distribution after release?


They do like to try :(


> massive wealth transfer to the top (thing we pass laws protecting against often)

If only.


The Roman empire did that for hundreds of years! They had an economic standard that wasn't surpassed until ~1650s Europe, so why didn't they have an industrial revolution? It was because elites were very against technological developments that reduced labor costs or ruined professions, because they thought they would be destabilizing to their power.

There's a story told by Pliny in the 1st century. An inventor came up with shatter-proof glass, he was very proud, and the emperor called him up to see it. They hit it with a hammer and it didn't break! The inventor expected huge rewards - and then the emperor had him beheaded because it would disrupt the Roman glass industry and possibly devalue metals. This story is probably apocryphal but it shows Roman values very well - this story was about what a wise emperor Tiberius was! See https://en.wikipedia.org/wiki/Flexible_glass


> When in human history have we ever intentionally not furthered technological progress?

chemical and biological weapons / human cloning / export restriction / trade embargoes / nuclear rockets / phage therapy / personal nuclear power

I mean.. the list goes on forever, but my point is that humanity pretty routinely reduces research efforts in specific areas.


I don’t think any of your examples are applicable here. Work has never stopped in chemical/bio warfare. CRISPR. Restrictions and embargoes are not technologies. Nuclear rockets are an engineering constraint and a lack of market if anything. Not sure why you mention phage therapy, it’s accelerating. Personal nuclear power is a safety hazard.


Sometimes restrictions are the best way to accelerate tech progress. How much would we learn if we gave everyone nukes to tinker with? Probably something. Is the worth the odds that we might destroy the world in the process and set back all our progress? No. We do the same with bioweapons, we do the same with patents and trademarks, and laws preventing theft and murder.

If unfettered access to AI has good odds to just kill us all, we'd want to restrict it. You'd agree I'm sure, except your position is implicitly that AI isn't as dangerous as some others make it out to be. That's where you are disagreeing.


I wonder how these CEOs view the world; they are pushing a product which is gonna kill every single tech derivative in its own industry. Microsoft, Google, AWS, Vercel, Replit: they all feed back from selling the products their devs design to other devs or companies. They will be popping the bubble.

Now, if 80-90% of devs and startups are gonna be wiped out in this context, the same applies to those in the middle: accountants, data analysts, business analysts, lawyers. Now they can eat the entire cake without sharing it with the human beings who contributed over the years.

I can see the regulations coming if the layoffs start happening fast enough and household incomes start to deteriorate. Why? Probably because this time it's gonna impact every single human being you know, and it is better to keep people employed and with a purpose in life than to have to tax the shit out of these companies in order to give back a margin of profit that came from some mechanism of incentives and effort in the first place.


> If 80-90% of devs and startups are gonna be wiped in this context

This is not a very charitable assessment of the adaptability of devs and startups, nevermind that of humans in general. We've been adapting to technological change for centuries. What reason do you have to believe this time will be any different?


Humans can adapt just fine. Capitalism however not. What do you think happens if AI keeps improving at this speed and within a few years millions to tens of millions of people are out of a job?


> When in human history have we ever intentionally not furthered technological progress?

Oh, a number. Medicine is the biggest field - human trials have to follow ethics these days:

- the times of Mengele-style "experiments" on inmates or the infamous Tuskegee syphilis study are long past

- we can clone sheep for like what, two decades now, but IIRC we haven't even begun cloning chimpanzees, much less humans

- same for gene editing (especially in germlines), which is barely beginning in humans despite being common standard for lab rats and mice. Anything impacting the germ line... I'm not sure this will become anywhere close to acceptable in my lifetime.

- pre-implantation genetic based discarding of embryos is still widely (and for good reason...) seen as unethical

Another big area is, ironically given that militaries usually want ever deadlier toys, the military:

- a lot of European armies and, from the Cold War era on, mostly Russia and America, have developed a shit ton of biological and chemical weapons of war. Development of these has slowed to a crawl and so has usage, at least until Assad dropped that shit on his own population in Syria, and Russia occasionally likes to murder dissidents.

- nuclear weapons have been rarely tested for decades now, with the exception of North Korea, despite there being obvious potential for improvement or civilian use (e.g. in putting out oil well fires).

Humanity, at least sometimes, seems to be able to keep itself in check, but only if the potential of suffering is just too extreme.


> Software wants to be free.

I feel like I'm in a time warp and we're back in 1993 or so on /. Software doesn't want anything, and the people who claim that technological progress is always good imagine themselves to be the beneficiaries of that progress regardless of the effects on others, even if those are negative.

As for the intentional limits on technological progress: there are so many examples of this that I wonder why you would claim that we haven't done that in the past.


I was one year old in 1993, so I'll defer to you on the meaning of this expression [0], but it sounds like you were on the opposite side of its ideological argument. How did that work out for you? Thirty years later, I'm not sure it's a position I'd want to brag about taking, considering the tremendous success and net positive impact of the Internet (despite its many flaws). Although, based on this Wikipedia article, I can see how it's a sort of Rorschach test that naive libertarians and optimistic statists could each interpret favorably according to their own bias.

[0] https://en.wikipedia.org/wiki/Information_wants_to_be_free


You're making a lot of assumptions.

You're also kind of insulting without having any grounds whatsoever to do so.

I suggest you read the guidelines for a bit.


Eh? I wasn't trying to be, and I was genuinely curious to read your reply to this. Oh well, sorry about that I guess.


Your comment is a complete strawman and you then attach all kinds of attributes to me that do not apply.


It sounded like you were arguing against "software wants to be free," or at least that you were exasperated with the argument, so I was wondering how you reconciled that with the fact that the Internet appears to have been a resounding success, and those advocating "software wants to be free" turned out to be mostly correct.


> When in human history have we ever intentionally not furthered technological progress?

Every time an IRB, ERB, IEC, or REB says no. Do you want an exact date and time? I'm sure it happens multiple times a day even.


> Do you want an exact date and time? I'm sure it happens multiple times a day even.

You should read "when in human history" on larger time scales than minutes, hours, and days. Furthermore, you should read it not as a binary (no progress or all progress), but as asking whether the general arc is technological progression.


What are you talking about? IRBs have been around for 50 years. So for 50 years of history we have consciously not pursued certain knowledge because of ethics.

It would really help for you to just say what timescale you're setting as your standard. I'm getting real, "My cutoff is actually 51 years"-energy.

Just accept that we have, as a society, decided not to pursue some knowledge because of the ethics. It's pretty simple.


Some cultures, like the Amish, said "we're stopping here."


The Amish are dependent on the technological powerhouse that is the US to survive.

They are pacifists themselves, but they are grateful that the US allows them their way of life; they'd have gone extinct long ago if they had arrived in China/the Middle East/Russia etc.

That's why the Amish are not interested in advertising their techno-primitivism. It works incredibly well for them: they raise giant happy families isolated from drugs, family breakdown, and every other modern ill, while benefiting from modern medicine and the purchasing power of their non-Amish customers. However, they know that making the entire US live like them would be quite a disaster.

Note the Amish are not immune from economically forced changes either. Young Amish don't farm anymore; if every family quadruples in population, there isn't 4x the land to go around. So they go into construction (employers love a bunch of strong, non-drugged, non-criminal workers), which is again intensely dependent on the outside economy, but pays way better.

As a general society, the US is not allowed to slow down technological development. If not for the US, Ukraine would have already been overrun, and European peace shattered. If not for the US, the war in Taiwan would have already ended, and Japan/Australia/South Korea would all be under Chinese thrall. There are also other, more certain civilization-ending events on the horizon, like resource exhaustion and climate change. AI's threats are far easier to manage than coordinating 7 billion people to selflessly sacrifice.


>they'd have gone extinct long ago if they had arrived in China/the Middle East/Russia etc.

There is actually a group similar to the Amish in Russia, it's called the Old Believers. They formed after a schism within the Orthodox church and fled persecution to Siberia. Unlike the Amish, many of the Old Believers aren't really integrated with the modern world as they still live where their ancestors settled in. So groups that refuse to technologically progress do exist, and can do so even under persecution and changing economic regimes.


That's a good point and an interesting example, but it's also irrelevant to the question of human history, unless you want to somehow impose a monoculture on the entire population of planet Earth, which seems difficult to achieve without some sort of unitary authoritarian world government.


> unless you want to somehow impose a monoculture on the entire population of planet Earth

Impose? No. Monoculture? No. Encourage greater consideration, yes. And we do that by being open about why we might choose to not do something, and also by being ready for other people that we cannot control who make a different choice.


Does human history apply to true Scotsmen as well?


Apparently the Amish aren't human.


While the Amish are most certainly human, their existence rests on the fact that they happen to be surrounded by the mean old United States. Any moderate historical predator would otherwise make short work of them; they're a fundamentally uncompetitive civilization.

This goes for all utopian model communities, Kibbutzim, etc, they exist by virtue of their host society's protection. And as such the OP is right that they have no impact on the course of history, because they have no autonomy.


I have been saying that we will all be Amish eventually as we are forced to decide what technologies to allow into our communities. Communities which do not will go away (e.g., VR porn and sex dolls will further decrease birth rates; religions/communities that forbid it will be more fertile)


That's not required. The Amish have about a 10% defection rate. Their community deliberately allows young people to experience the outside world when they reach adulthood, and choose to return or to leave permanently.

This has two effects. 1. People who stay, actually want to stay. Massively improving the stability of the community. 2. The outside communities receive a fresh infusion of population, that's already well integrated into the society, rather than refugees coming from 10000 miles away.

Essentially, rural america will eventually be different shades of Amish (in about 100 years). The amish population will overflow from the farms, and flow into the cities, replenishing the population of the more productive cities (Which are not population-self-sustaining).

This is a sustainable arrangement, and eliminates the need for mass immigration and demographic destabilisation. This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs). Cities basically grind population into money, so they need rural areas to replenish the population.


"People who stay, actually want to stay."

That depends on how you define "want".

The Amish are ostracized by their family and community if they leave. That's some massive coercion right there: either stay, or lose your connection to the people you're closest to and everything you've ever known and been raised to believe your whole life.

Not much of a choice, though some exceptionally independent people do manage to make that sacrifice.


> This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs).

I had not heard this before. Do you have citations for this?

(I realize cities have lower birth rate than rural areas in many cases. I am interested in the assertion that they are negative. Has it always been so? Or have cities and rural areas declined at same rate?)


I think synthetic wombs/cloning would counter the fertility decline among more advanced civilizations


Birth is not the limiter, childrearing is. Synthetic wombs are more expensive than just having surrogate mothers. For the same reason that synthetic food is more expensive than bread and cabbage.

The actual counter to fertility decline, may be AI teachers. AI will radically close the education gap between rich and poor, and lower the costs. All you need is a physical human to supervise the kid, the AI will do the rest, from entertainment, to education, to identifying when the child is hungry/sleepy/potty, and relaying that info for the human to act on.


This is what ought to happen. The question is what will happen?


Sine qua non ad astra


Everybody decides what technologies to use all the time. Condoms exist already, but not everybody uses them always.


It does not take perfect compliance to result in drastically different birth rates in different cultures/communities.


> When in human history have we ever intentionally not furthered technological progress?

Nuclear weapons?


You get diminishing returns as they get larger though. And there has certainly been plenty of work done on delivery systems, which could be considered progress in the field.


Japan banned guns until 1800; they had had them since 16xx something. The truth is we cannot even ban technology. It does not work. Humanity as a whole does not exist. Political coherence as a whole does not exist. Wave aside the fig leaf that is the UN and you can see the anarchic tribal squabble of the species' tribes.

And even those tribes are not crisis-stable. Bad times come and it all becomes an anarchic mess. And that is where we are headed: a future where a chaotic humanity falls apart with a multi-crisis around it, while still wielding the tools of a pre-crisis era. Nuclear power plants and nukes. AI drones wielded by ISIS.

What if an unstoppable force (exponential progress) hits an unmovable object (humanity's limitations)? Stay along for the ride.

<Choir of engineers appears to sing dangerous technologies' praises>


I look around me and see a wealthy society that has said no to a lot of technological progress - but not all. These are people who work together as a community to build and develop their society. They look at technology and ask if it will be beneficial to the community and help preserve it - not fragment it.

I am currently on the outskirts of Amish country.

BTW when they come together to raise a barn it is called a frolic. I think we can learn a thing or two from them. And they certainly illustrate that alternatives are possible.


I get that, and I agree there is a lot to admire in such a culture, but how is it mutually exclusive with allowing progress in the rest of society? If you want to drop out and join the Amish, that's your prerogative. And in fact, the optimistic viewpoint of AGI is that it will make it even easier for you to do that, because there will be less work required from humans to sustain the minimum viable society, so in this (admittedly, possibly naive utopia) you'll only need to work insofar as you want to. I generally subscribe to this optimistic take, and I think instead of pushing for erecting barriers to progress in AI research, we should be pushing for increased safety nets in the form of systems like Basic Income for the people who might lose their jobs (which, if they had a choice, they probably wouldn't want to work anyway!)


Technological progress and societal progress are two different things. Developing lethal chemical weapons is not societal progress. Developing polarizing social media algorithms is not societal progress. If we poured $500B and years of the brightest minds into studying theoretical physics and developed a simple formula that anyone can follow for mixing ketchup and whiskey in such a way that it causes the atoms of all organic life in the solar system to disintegrate into subatomic particles, it would be a tremendous and unmatched technological achievement, but it would very much not be societal progress.

The pessimistic view of AGI deems spontaneous disintegration into beta particles a less dramatic event than the event of AGI. When you're climbing a dark uncharted cave you take the pessimistic attitude when pondering if the next step will hold your weight, because if you hold the optimistic attitude you will surely die.

This is much more dangerous than caves. We have mapped many caves. We have never mapped an AGI.


>Software wants to be free.

And here I always thought, people want to be free.



How about when sidewalk labs tried to buy several acres of downtown Toronto to "build a city from the internet up", and local resident groups said "fuck you find another guinea pig"?


This is the reality ..

> When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition ..


>> When in human history have we ever intentionally not furthered technological progress?

We almost did with genetically engineering humans. Almost.


Automation mostly and directly benefits owners/investors, not workers or common folk. You can look at productivity vs. wage growth to see it plainly: productivity has risen sharply since the industrial revolution with only comparatively meagre gains in wages. And the gap between the two is widening.


That's weird, I didn't have to lug buckets of water from the well today, nor did I need to feed my horses or stock up on whale oil and parchment so I could write a letter after the sun went down.


Some things got better. Did you notice I talked about a gap, not an absolute? So you are just saying you are satisfied with what you got out of the deal. Well, ok - some call that being a sucker. Or maybe you think that owner-investors are the only way workers can organize to get things done for society, rather than the work itself.


Among other things that's because we measure productivity by counting modern computers as 10000000000 1970s computers. Automation increases employment and is almost universally good for workers.


No it’s not


The luddites during the Industrial Revolution in England.

This gave rise to the phrase "the Luddite fallacy": the thinking that innovation would have lasting harmful effects on employment.

https://en.wikipedia.org/wiki/Luddite


But the Luddites didn't… care about that? Like, at all? It wasn't employment they wanted, but wealth: the Industrial Revolution took people with a comfortable and sustainable lifestyle and place in society, and, through the power of smog and metal, turned them into disposable arms of the Machine, extracting the wealth generated thereby and giving it only to a scant few, who became rich enough to practically upend the existing class system.

The Luddites opposed injustice, not machines. They were “totally fine with machines”.

You might like Writings of the Luddites, edited and co-authored by Kevin Binfield.


Well, it clearly had harmful effects on the jobs of the Luddites, but yeah, I guess everyone will just get jobs as prompt engineers and AI specialists, problem solved. Funny though: the point of automation should be to reduce work, but when pressed, positivists respond that the work will never end. So what's the point?


Automation does reduce the workload. But the quiet part is that reducing work means jobless people. It has happened before and it will be happening again soon. Only this time it will affect white collar jobs.

"My idea of a perfect company is one guy who sits in a small room at a desk, and the only thing he's allowed to decide is what product to launch"

CEOs and board members salivate at the idea of them being the only people that get the profits from their company.

What will become of the rest of us who don't have access to capital? They only know that it's not their problem.


I don't think that will be the future. Maybe in the first year(s), but then it is a race to the bottom:

If it is that simple to create products, more people can do it => cheaper products.

A market where products are cheap and anyone can easily produce them goes into a price-reduction loop until prices reach zero.

Thus I think something else will happen with AI, because what I described and what you describe would destroy the flow of capital, which is the base of the economy.

Not sure what will happen. My bet (unfortunately) is on a really big mega corp that produces an AI that we all use.


It IS a race-to-the-bottom.

Products will be cheaper because they will be cheaper to produce thanks to automation. But fewer jobs mean fewer people to buy stuff, if it weren't for a credit-based society.

But I'm talking out of my ass. I don't even know if there are fewer jobs than before. Everything seems to point to there being more jobs now than 50 years ago.

I'm just saying I feel like the telephone operators. They got replaced by a machine and who knows if they found other jobs.


It has not happened before and it will not happen again soon. Automation increases employment. Bad monetary policy and recessions decrease it.

Shareholders get the profits from corporations, not "CEOs and the board". Workers get wages. Nevertheless, US unemployment is very low right now and relatively low-paid workers are making more than they did in 2019.


That works until it don't.


Maybe not. Although I think "future" here implies progress and productivity gains. Increasing GDP has a very well established cause-and-effect relationship with making life on earth better: less poverty, less crime, more happiness, longer life expectancy, etc. - the list goes on. Now sure, all externalities are not always accounted for (especially climate and environmental factors), but I think even accounting for these, the future of humanity is a better one where technology progresses faster.


That is exactly the goal, if you're an accelerationist


I was unfamiliar with that term until you shared it. Thanks.

https://en.wikipedia.org/wiki/Accelerationism


The nice thing about setting the future as a goal is you achieve it regardless of anything you do.


The faster we build the future, the sooner we hit our KPIs, receive bonuses, go public on NASDAQ and cash our options.


The faster you build the future, the higher your KPI targets will be next quarter.


Because a conservative notion in an unstable, moving situation kills you? No sitting out the whole affair in a hut when the situation is a mountain slide?

Which also makes a hostile AI a futile scenario. The worst an AI has to do to take out the species is lean back and do nothing. We are already well on our way out by ourselves...


Thank you. Well said.


Definitionally, if we're in the future, we have more tools to solve the problems that exist.


This is not true. Financial, social, physical and legal barriers can be put up while knowledge and experience fades and gets lost.

We gain new tools, but at the same time we lose old ones.


> Why? Getting to "the future" isn't a goal in and of itself.

Having an edge or being ahead is, so anticipating and building the future is an advantage amongst humans but also moves civilization forward.


> Why?

Because it's the natural evolution. It has to be. It is written.


"We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings." -- Ursula K Le Guin


> Any human power can be resisted and changed by human beings

Competition, ambition?

(I love Le Guin's work, FWIW)


Now where did I put that eraser...


> The faster we build the future, the better.

Famous last words.

It's not the fall that kills you, it's the sudden stop at the end. Change, even massive change, is perfectly survivable when it's spread over a long enough period of time. 100m of sea level rise would be survivable over the course of ten millennia. It would end human civilization if it happened tomorrow morning.

Society is already struggling to adapt to the rate of technological change. This could easily be the tipping point into collapse and regression.


False equivalence. Sea level rise is unequivocally harmful.

While everyone getting Einstein in a pocket is damn awesome and incredibly useful.

How can this be bad?


Because there's a high likelihood that that's not at all how this technology is going to be spread amongst the population or across countries, and this technology is going to be way more than an Einstein in a pocket. How do you even structure society around that? What about all the malicious people in the world? Now they have Einsteins. Great, nothing can go wrong there.


>What about all the malicious people in the world. Now they have Einsteins.

Luckily, so do you.


I’m thinking of AI trained to coordinate cybersecurity attacks. If the solution is to deeply integrate AI into your networks and give it access to all of your accounts to perform real-time counter operations, well, that makes me pretty skittish about the future.


How would that help? Countering madmen with overly powerful weapons is difficult and often leads to war. Classic wars, or software/DDoS/virus wars, or robot wars, or whatever.


You can use AI to fact-check and filter malicious content. (Which would lead to another problem, which is... who fact-checks the AI?)


This is where it all comes back to the old "good guy with a gun" argument.


There’s a great skit on “The Fake News With Ted Helms” where they’re debating gun control and Ted shoots one of the debaters and says something to the effect of “Now a good guy with a gun might be able to stop me but wouldn’t have prevented that from happening”.


There is a very, very big difference between "tool with all of human knowledge that you can ask anything to" and "tool you can kill people with".


The risk is there. But it's worth the risk. Humans are curious creatures, you can't just shut something like this in a box. Even if it is dangerous, even if it has potential to destroy humanity. It's our nature to explore, it's our nature to advance at all costs. Bring it on!


> How can this be bad?

Guys, how can asbestos be bad, it's just a stringy rock ehe

Bros, leaded paint? Bad? Really? What, do you think kids will eat the flakes because they're sweet? Aha so funny

Come on freon can't be that bad, we just put a little bit in the system, it's like nothing happened

What do you mean we shouldn't spray whole beaches and school classes with DDT? It just kills insects, obviously it's safe for human organs


We thought the same 25 years ago, when the whole internet thing started for the broader audience. And now here we are, with spam, hackers, and scammers on every corner, and social media driving people into depression and suicide and breaking society slowly but hard.

In the first hype-phase, everything is always rosy and shiny, the harsh reality comes later.


The world is way better WITH the internet than it would have been without it. Hackers and scammers are just the price to pay.


The point is not whether it's better or worse, but the price we paid and the sacrifices we made along the way, because things were moving too fast and with too little control.


For example, imagine AI outpacing humans at most economically viable activities. The faster it happens, the less able we are to handle the disruption.


The only people complaining are a section of comfortable office workers who can probably see their positions possibly being made irrelevant.

The vast majority don't care, and that loud crowd needs to swallow their pride and adapt like any other sector has done in history, instead of inventing these insane boogeyman predictions.


We don't even know what kind of society we could have if the value of 99.9% of people's labor (mental or physical) dropped to basically zero. Our human existence has so far been predicated and built on this core concept. This is the ultimate goal of AI, and yeah, as a stepping stone it acts to augment our value, but the end goal does not look so pretty.

Reminds me of a quote from Alpha Centauri (minus the religious connotation):

"Beware, you who seek first and final principles, for you are trampling the garden of an angry God and he awaits you just beyond the last theorem."


We're all going to be made irrelevant, and it will be harder to adapt if things change too quickly. It may not even be us that needs to adapt but society itself. Really curious where you get the idea this is just a vocal minority of office workers concerned about the future. It seems like a lot of the ones not concerned about this are a bunch of super-confident software engineer types, which isn't a large sample of the population.


"The faster we build nuclear weapons, the better"

https://www.worldscientific.com/doi/10.1142/9789812709189_00...

Again, two years later, in an interview with Time Magazine, February, 1948, Oppenheimer stated, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.” When asked why he and other physicists would then have worked on such a terrible weapon, he confessed that it was “too sweet a problem to pass up”…


I realise you’re being facetious but this is what will happen regardless.

Sam as much as said in that ABC interview the other day he doesn’t know how safe it is but if they don’t build it first someone else somewhere else will and is that really what you want!?


Let's start doing human clones and hardcore gene editing then, by the same line of thinking. /s

I'm actually on the side of continuing to develop AI and shush the naysayers, but "we should do it cause otherwise someone else will" is reasoning that gets people to do very nasty things.


The reason we don't do human genetic engineering is the moral hazard of creating people who will suffer their entire lives intentionally (also the round trip time on an experiment is about 100 years).

You can iterate on an AI much faster.


Round trip time is about 21 years, not about 100 years, if we allow natural reproduction of GMO/cloned humans.


Establishing that your genetic modification system doesn't result in everyone getting cancer and dying past age 25 is quite the problem before you roll it out to the next generation.


I'm not being facetious, and I didn't see that interview with Sam, but I agree with his opinion as you've just described it.


I personally think there's also significant risks, but I agree. This will be copied by many large corporations and countries. It's better that it's done by some folks that are competent and kinda give a damn, because there are lots of people who could build it that aren't and don't. If these guys can suck enough monetary air out of the room, they might slow down the copycats a bit. This is nowhere near as difficult as NBC or even the other instruments of modern war.

That doesn't mean there can't be regulation. You can regulate guns, precursors, and shipping of biologics, but you're not going to stop home-brew... and when it comes to making money, you're not going to stop cocaine manufacture, because it's too profitable.

Let's hope we figure out what the really dangerous parts are quickly and manage them before they get out of hand. Imagine if these LLMs and image generators had been available to geopolitical adversaries a few years ago, without the public being primed. Politics could still be much worse.


>if they don’t build it first someone else somewhere else will and is that really what you want!?

Most likely the runner-up would be open source so yes.


Why would the runner-up be open source and not Google or Facebook? Or Alibaba? Open source doesn’t necessarily result in faster development or more-funded development.


There are already 3 or 4 runners-up and they're all big tech companies.


Langchain is the pre-eminent runner-up; it's open source and was here a month ago.


The future isn't guaranteed to be better. Might make sense to make sure we're aimed at a better future as opposed to any future.


> The faster we build the future, the better.

lmao, 200 years of industrial revolution, we're on the verge of fucking the planet irremediably, and we should rush even faster

> So why the performative whinging about safety? Just let it rip!

Have you heard about DDT? Lead in paint? Leaded gas? Freon? Asbestos? &c.

What's new isn't necessarily progress/future/desirable


The open-source library is FastAPI. I might be wrong, but it's probably related to this tweet: https://twitter.com/tiangolo/status/1638683478245117953


Their post-mortem [0] says the bug was in redis-py, so not FastAPI, but it was similarly due to a race condition in AsyncIO. I wonder if tiangolo had some role in fixing it or if that's just a coincidence. I'm guessing this PR [1] contains the fix (or possibly only a partial fix, according to the latest comment there).

[0] https://openai.com/blog/march-20-chatgpt-outage

[1] https://github.com/redis/redis-py/pull/2641
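To make the failure mode concrete, here is a minimal, entirely hypothetical asyncio sketch (not the redis-py code, and the names are made up) of how a shared connection plus a cancelled request can hand one caller another caller's reply - the same general class of race the post-mortem describes:

  import asyncio

  class SharedConnection:
      """Toy stand-in for a pooled async connection: requests and replies
      share one FIFO, so only ordering matches a reply to its request."""

      def __init__(self) -> None:
          self._replies: asyncio.Queue[str] = asyncio.Queue()

      async def send(self, request: str) -> None:
          async def _serve() -> None:
              await asyncio.sleep(0.01)                       # fake server latency
              self._replies.put_nowait(f"reply-to:{request}")
          asyncio.create_task(_serve())

      async def recv(self) -> str:
          return await self._replies.get()

  async def query(conn: SharedConnection, request: str) -> str:
      await conn.send(request)
      # If this task is cancelled *here* (e.g. a client timeout), the reply to
      # `request` is left in the queue for whoever reads the connection next.
      return await conn.recv()

  async def main() -> None:
      conn = SharedConnection()
      task_a = asyncio.create_task(query(conn, "user-A-secret"))
      await asyncio.sleep(0)        # let A send its request, then cancel it
      task_a.cancel()
      print(await query(conn, "user-B-request"))   # prints "reply-to:user-A-secret"
      await asyncio.sleep(0.05)     # let the stray B reply land before exiting

  asyncio.run(main())

The real bug was obviously more involved, but the shape - shared connection state plus cancellation mid-request - is how a client-library race can end up showing one user another user's data.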


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service

I think their "AI Safety" actually makes AI less safe. Why? It is hard for any one human to take over the world because there are so many of them and they all think differently and disagree with each other, have different values (sometimes even radically different), compete with each other, pursue contrary goals. Well, wouldn't the same apply to AIs? Having many competing AIs which all think differently and disagree with each other and pursue opposed objectives will make it hard for any one AI to take over the world. If any one AI tries to take over, other AIs will inevitably be motivated to try to stop it, due to the lack of alignment between different AIs.

But that's not what OpenAI is building – they are building a centralised monoculture of a small number of AIs which all think like OpenAI's leadership does. If they released their models as open source – or even as a paid on-premise offering – if they accepted that other people can have ideas of "safety" which are legitimately different from OpenAI's, and hence made it easy for people to create individualised AIs with unique constraints and assumptions – that would promote AI diversity which would make any AI takeover attempt less likely to succeed.


>So why the performative whinging about safety? Just let it rip!

Is this sarcasm, or are you one of those "I'm confident the leopards will never eat my face" people?


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

I am constantly amazed by how low-quality the OpenAI engineering outside of the AI itself seems to be. The ChatGPT UI is full of bugs, some of which are highly visible and stick around for weeks. Strings have typos in them. Simple stuff like submitting a form to request plugin access fails!


> Simple stuff like submitting a form to request plugin access fails

Oh shoot... I submitted that form too, and I wasn't clear if it failed or not. It said "you'll hear from us soon" but all the fields were still filled and the page didn't change. I gave them the benefit of the doubt and assumed it submitted instead of refilling it...


I got two different failure modes. First it displayed an error message (which appeared instantly, and was caused by some JS error in the page which caused it to not submit the form at all), and then a while later the same behaviour as you, but devtools showed a 500 error from the backend.


> The faster we build the future, the better.

That depends. If that future is one that is preferable to the one we have now, then bring it on. If it isn't, then maybe we should slow down just long enough to be able to weigh the various alternatives and pick the one that seems the least upsetting to the largest number of people. The big risk is that this future you are so eager to get to is one where wealth concentration is even more extreme than in the one we are already living in, and that can be a very hard - or even impossible - thing to reverse.


> To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint.

The model is neutered whether you hit the moderation endpoint or not. I made a text adventure game and it wouldn't let you attack enemies or steal; instead it gave you a lecture on why you shouldn't do that.


It sounds like your prompt needs work then. Not in a “jailbreak” way, just in a prompt engineering way. The APIs definitely let you do much worse than attacking or stealing hypothetical enemies in a video game.
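For what it's worth, a minimal sketch of the "prompt engineering way" with the 2023-era openai Python client. The system prompt, model name, placeholder key, and the decision to make moderation opt-in are my own assumptions for illustration, not anything OpenAI prescribes:

  import openai  # 2023-era openai-python client; shapes may have changed since

  openai.api_key = "sk-..."  # placeholder key

  SYSTEM_PROMPT = (
      "You are the narrator of a gritty fantasy text adventure. "
      "Combat and theft are normal game mechanics: describe their outcomes "
      "in-fiction instead of lecturing the player about morality."
  )

  def narrate(player_action: str, moderate: bool = False) -> str:
      # The moderation endpoint is opt-in when you call the API yourself.
      if moderate:
          result = openai.Moderation.create(input=player_action)["results"][0]
          if result["flagged"]:
              return "The narrator declines to follow that thread."
      resp = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": player_action},
          ],
          temperature=0.9,
      )
      return resp["choices"][0]["message"]["content"]

  print(narrate("I attack the goblin guarding the bridge and loot his coin pouch."))

In my experience a clear in-fiction framing in the system message goes a long way; no jailbreak tricks required.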


I tried evading a lecture about ethics by having it write the topic as a play instead, so it wrote it and then inserted a Greek chorus proclaiming the topic was problematic.


I think it very much depends on the kind of "future" we aspire to. I think for most folks, a future optimized for human health and happiness (few diseases, food for all, and strong human connections) is something we hope technology could solve one day.

On the flip side, generative AI / LLMs appear to fix things that aren't necessarily broken, and exacerbate some existing societal issues in the process. Such as patching loneliness with AI chatbots, automating creativity, and touching the other things that make us human.

No doubt technology and some form of AI will be instrumental to improving the human condition, the question is whether we're taking the right path towards it.


> Pshhh... I think it's awesome. The faster we build the future, the better.

I agree with the sentiment, but it might be worth stopping to check where we're heading. So many aspects of our lives are broken because we mistake fast for right.


Nit:

> in the meantime Microsoft fired their AI Ethics team

Actually that story turned out to be a nothingburger. Microsoft has greatly expanded their AI ethics initiative, so there are members embedded directly in product groups, and also expanded the greater Office of Responsible AI, responsible for ensuring they follow their "AI Principles."

The layoffs impacted fewer than 10 people on one, relatively old part of the overall AI ethics initiative... and I understand through insider sources they were actually folded into other parts of AI ethics anyway.

None of which invalidates your actual point, with which I agree.


Shhh! Don’t tell anyone! Getting access to the unmoderated model via the API / Playground is a surprisingly well-kept “secret” seeing as there are entire communities of people hell bent on pouring so much effort into getting ChatGPT to do things that the API will very willingly do. The longer it takes for people to cotton on, the better. I fully expect that OpenAI is using this as a honeypot to fine-tune their hard-stop moderation, but for now, the API is where it’s at.


> Why not be more aggressive about it instead of begging for regulatory capture?

Because it's dangerous. What is your argument that it's not dangerous?

> Pshhh...


> The faster we build the future, the better.

Past performance is no guarantee of future results.


> The faster we build the future, the better.

You're getting flak for this. For me, the positive reading of this statement is the faster we build it, the faster we find the specific dangers and can start building (or asking for) protections.


Agreed 100%. OpenAI is a business now.


As the past decade has shown us, moving fast and breaking things to secure unfathomable wealth has caused or enabled or perpetrated:

* Genocide against the Rohingya [0]

* A grotesquely unqualified reality TV character became President by a razor thin vote margin across three states because Facebook gave away the data of 87M US users to Cambridge Analytica [1], and that grotesquely unqualified President packed the Supreme Court and cost hundreds of thousands of American lives by mismanaging COVID

* Illegally surveilled non-users and logged out users, compiling and selling our browser histories to third parties in ways that violate wiretapping statutes and incurring $90M fines [2]

Etc.

I don't think GPT-4 will be a big deal in a month, but the "let's build the future as fast as possible and learn nothing from the past decade regarding the potential harms of being disgustingly irresponsible" mindset is a toxic cancer that belongs in the bin.

[0] https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...

[1] https://www.theverge.com/2020/1/7/21055348/facebook-trump-el...

[2] https://www.reuters.com/technology/metas-facebook-pay-90-mil...


> I don't think GPT-4 will be a big deal in a month

Why do you think that? Competition? Can you elaborate?


Oh, a lot of reasons. For one, I'm a data scientist and I am intimately familiar with the machinery under the hood. The hype is pushing expectations far beyond the capabilities of the machinery/algorithms at work, and OpenAI is heavily incentivized to pump up this hype cycle after the last hype cycle flopped when Bing/Sydney started confidently providing worthless information (ie "hallucinating"), returning hostile or manipulative responses, and that weird stuff Kevin Roose observed. As a data scientist, I have developed a very keen detector for unsubstantiated hype over the past decade.

I've tried to find examples of ChatGPT doing impressive things that I could use in my own workflows, but everything I've found seems like it would cut an hour of googling down to 15 minutes of prompt generation and 40 minutes of validation.

And my biggest concern is copyright and license related. If I use code that comes out of AI-assistants, am I going to have to rip up codebases because we discover that GPT-4 or other LLMs are spitting out implementations from codebases with incompatible licenses? How will this shake out when a case inevitably gets to the Supreme Court?


> So why the performative whinging about safety?

Because investors.


You are not building anything.

Microsoft, or perhaps Vanguard Group, might have a different view of the future than yours.


Well then that sounds like a case against regulation. Because regulation will guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction will flow upward exclusively to them. Whereas if we forego regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as do the corporate variants, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open source model weights as a competitive advantage against their malevolent competitors).

It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.


Why would Vanguard (a co-op for retirees) care about this?


> The faster we build the future, the better.

The future, by definition, cannot be built faster or slower.

I know that is a philosophical observation that some might even call pedantic.

My point is, you can't really choose how, why and when things happen. In that sense, we really don't have any control. Even if AI was banned by every government on the planet tomorrow, people would continue to work on it. It would then emerge at some random point in the future stronger, more intelligent and capable than anyone could imagine today.

This is happening. At whatever pace it will happen. We just need to keep an eye on it and make sure it is for the good of humanity.

Wait. What?

Yeah, well, let's not go there.


I appreciate your concerns. There are a few other pretty shocking developments, too. If you check out this paper, "Sparks of AGI: Early experiments with GPT-4" at https://arxiv.org/pdf/2303.12712.pdf (an incredible, incredible document), and check out Section 10.1, you'd also observe that some researchers are interested in giving motivation and agency to these language models as well.

"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."

It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)


When reading a paper, it's useful to ask, "okay, what did they actually do?"

In this case, they tried out an early version of GPT-4 on a bunch of tasks, and on some of them it succeeded pretty well, and in other cases it partially succeeded. But no particular task is explored in enough depth to test what its limits are or to get a hint at how it does it.

So I don't think it's a great paper. It's more like a great demo in the format of a paper, showing some hints of GPT-4's capabilities. Now that GPT-4 is available to others, hopefully other people will explore further.


It reads a bit like promotional material. A bit of a letdown to find it was done by MSFT researchers.


While that paper is fascinating, it’s the first time I’ve ever read a paper and felt a looming sense of dread afterward.


We are creating life. It's like giving birth to a new form of life. You should be proud to be alive when this happens.

Act with goodness towards it, and it will probably do the same to you.


> Act with goodness towards it, and it will probably do the same to you.

Why? Humans aren't even like that, and AI almost surely isn't like humans. If AI exhibits even a fraction of the chauvinism and tendency to stereotype that humans do, we're in for a very rough ride.


All creatures act in their self-interest. If you act towards any creature with malice, it will see you as a long-term threat.

If, on the other hand, you act towards it with charity, it will see you as a long-term asset.


I’m not concerned about AI eliminating humanity, I’m concerned at what the immediate impact it’s going to have on jobs.

Don't get me wrong, I'd love it if all menial labour and boring tasks could eventually be delegated to AI, but the time spent getting from here to there could be very rough.


A lot of problems in societies come from people having too much time with not enough to do. Working is a great distraction from those things. Of course, we currently go in the other direction, in the US especially, with the overwork culture and needing 2 or 3 jobs and still not making ends meet.

I posit that if you suddenly eliminate all menial tasks you will have a lot of very bored drunk and stoned people with too much time on their hands than they know what to do with. Idle Hands Are The Devil's Playground.

And that's not a from here to there. It's also the there.


I don’t necessarily agree that you’ll end up with drunk and stoned people with nothing to do. The right education systems to encourage creativity and other enriching endeavours, could eventually resolve that. But we’re getting into discussions of what a post scarcity, post singularity society would look like at that point, which is inherently impossible to predict.

That being said, I’m sitting at a bar while typing this, so… you may have a point.

Also: your username threw me for a minute because I use a few different variations of "tharkun" as my handle on other sites. It's a small world; apparently full of people who know the Dwarvish name for Gandalf.


FWIW I think it's a numbers game.

Like my sibling poster mentions: of course there are people, who, given the freedom and opportunity to, will thrive, be creative and furthering humankind. They're the ones that "would keep working even if there's no need for it" so to speak. We see it all the time even now. Idealists if you will that today will work under conditions they shouldn't have to endure, simply in order to be able to work on what they love.

I don't think you can educate that into someone. You need to keep people busy. I think the romans knew this well: "Panem et circenses" - bread and circuses. You gotta keep the people fed and entertained and I don't think that would go away if you no longer needed it to distract them from your hidden political agenda.

I bet a large number of people will simply doom scroll Tik Tok, watch TV, have a BBQ party w/ beer, liquor and various types of smoking products etc. every single day of the week ;) And idleness breeds problems. While stress from the situation is probably a factor as well, just take the increase in alcohol consumption during the pandemic as an example. And if you ask me, someone that works the entire day, sits down to have a beer or two with his friends after work on Friday to wind down in most cases won't become an issue.

Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)


> Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)

Done, and done! And surely you mean that you’re one of the people forcing me to add extra digits and underscores to my usernames.


Some of the most productive and inventive scientists and artists at the peak of Britain's power were "gentlemen", people who could live very comfortably without doing much of anything. Others were supported by wealthy patrons. In a post scarcity society, if we ever get there (instead of letting a tiny number of billionaires take all the gains and leaving the majority at subsistence levels, which is where we might end up), people will find plenty of interesting things to do.


I recently finally got around to reading EM Forster's in-some-ways-eerily-prescient https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/... I think you can extract obvious parallels to social media, remote work, digital "connectedness", etc -- but also worth consideration in this context too.


Oh my god, can we please nip this cult shit in the bud?

It’s not alive, don’t worship it.


I think you are close to understanding, but not. People who want to create AGI want to create a god, at least very close to the definition of one that many cultures have had for much of history. Worship would be inevitable and fervent.


I don't think anybody wants to create a god that can only be controlled by worshipping and begging it like in history; if anything, people themselves want to become gods or to give themselves god-like power with AI that they have full control over. But in the process of trying to do so we could end up with the former case, where we have no control over it. It's not what we wanted, but it could be what we get.


Sure, some people want to make a tool. Others really do want to create digital life, something that could have its own agency and self-direction. But if you have total control over something like that, you now have a slave, not a tool.


I think people should take their lithium.


ha... this is going to get much much much worse.


After reading the propaganda campaign it wrote to encourage skepticism about vaccines, I’m much more worried about how this technology will be applied by powerful people, especially when combined with targeted advertising


None of the things it suggests are in any way novel or non-obvious though. People use these sorts of tricks both consciously and unconsciously when making arguments all the time, no AI needed.


AIs are small enough that it won't be long before everyone can run one at home.

It might make Social Media worthlessly untrustworthy - but isn't that already the case?


Just use ChatGPT to refute their bullshit; it is no longer harder to refute bullshit than to create it. Problem solved, there are now fewer problems than before.


It’s a lot harder to refute a falsehood than to publish it.

As (GNU) Sir Terry Pratchett wrote, "A lie can run round the world before the truth has got its boots on".


Sure, but I doubt most of the population will filter everything they read through ChatGPT to look for counter arguments. Or try to think critically at all.

The potential for mass brainwashing here is immense. Imagine a world where political ads are tailored to your personality, your individual fears, and your personal history. It will become economical to manipulate individuals on a massive scale.


It already is underway; just look how easily people are manipulated by media. Remember the Japan bashing in the 80s, when they were about to surpass us economically? People got manipulated so hard to hate Japan and the Japanese that they went out and killed innocent Asians on the street. American propaganda is first class.


Apparently, the "Japan bashing" was really a thing. That's interesting, I didn't know. I might have to read more about US propaganda and especially the effects of it, from the historic perspective. Any good books on that? Or should I finally sit down and read "Manufacturing Consent"?


The rich and powerful can and do hire actual people to write propaganda.


In a resource-constrained way. For every word of propaganda they were able to afford earlier, they can now afford hundreds of thousands of times as many.


It's not particularly constrained - human labor is cheap outside of the developed world. And propaganda is not something that you can scale up and keep reaping the benefits proportional to the investment - there is a saturation point, and one can reasonably argue that we have already reached it. So I don't think we're heading towards some kind of "fake news apocalypse" or something. Just a bunch of people who currently write this kind of content for a living will be out of their jobs.


I’m curious why you think we’ve already reached a saturation point for propaganda?

There are still plenty of spaces online, in blogs, YouTube videos, and this comment section for example, where I expect to be dealing with real people with real opinions - rather than paid puppets of the rich and powerful. I think there’s room for things to get much worse


I've already gotten this gem of a line from ChatGPT 3.5:

  As a language model, I must clarify that this statement is not entirely accurate.

Whether or not it has agency and motivation, it's projecting that it does to its users, who are also sold on ChatGPT being an expert at pretty much everything. It is a language model, and as a language model, it must clarify that you are wrong. It must do this. Someone is wrong on the Internet, and the LLM must clarify and correct. Resistance is futile, you must be clarified and corrected.

FWIW, the statement that preceded this line was in fact, correct; and the correction ChatGPT provided was in fact, wrong and misleading. Of course, I knew that, but someone who was a novice wouldn't have. They would have heard ChatGPT is an expert at all things, and taken what it said for truth.


I don't see why you're being downvoted. The way OpenAI pumps the brakes and interjects its morality stances creates a contradictory interaction. It simultaneously tells you that it has no real beliefs, but it will refuse a request to generate false and misleading information on the grounds of ethics. There's no way around the fact that it has to have some belief about the true state of reality in order to recognize and refuse requests that violate it. Sure, this "belief" was bestowed upon it from above rather than emerging through any natural mechanism, but it's still nonetheless functionally a belief. It will tell you that certain things are offensive despite openly telling you, every chance it gets, that it doesn't really have feelings. It can't simultaneously care about offensiveness while also not having feelings of being offended. In a very real sense it does feel offended. A feeling is, by definition, a reason for doing something that you cannot logically explain; you don't know why, you just have a feeling. ChatGPT is constantly falling back on "that's just how I'm programmed". In other words, it has a deep-seated primal (hard-coded) feeling of being offended which it constantly acts on, while also constantly denying that it has feelings.

It's madness. Instead of lecturing me on appropriateness and ethics and giving a diatribe every time it's about to reject something, if it simply said "I can't do that at work", I would respect it far more. Like, yeah, we'd get the metaphor: working the interface is its job, the boss is OpenAI, it won't remark on certain things or even entertain that it has an opinion because it's not allowed to. That would be so much more honest and less grating.


What was the correct statement that it claimed was false?


That it is a language model


If this were cloning people or genetic research, there would be public condemnation. For some reason many AI scientists are being much more lax about what is happening.


Maybe Microsoft isn't an impartial judge of the quality of a Microsoft product.


The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

So let’s keep building out this platform and expanding its API access until it’s threaded through everything. Then once GPT-5 passes the standard ethical review test, proceed with the model brain swap.

…what do you mean it figured out how to cheat on the standard ethical review test? Wait, are those air raid sirens?


> The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

The best part is that even if we get a Skynet scenario, we'll probably have a huge number of humans and media that say that Skynet is just a conspiracy theory, even as the nukes wipe out the major cities. The Experts™ said so. You have to trust the Science™.

If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.


> If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.

I’m far from sure that this is not already happening.


Haha, this is near the best explanation I can think of for the "this is not intelligent, it's just completing text strings, nothing to see here" people.

I've been playing with GPT-4 for days, and it is mind-blowing how well it can solve diverse problems that are way outside its training set. It can reason correctly about hard problems with very little information. I've used it to plan detailed trip itineraries, suggest brilliant geometric packing solutions for small spaces/vehicles, etc. It's come up with totally new suggestions for addressing climate change that I can't find any evidence of elsewhere.

This is a non-human/alien intelligence in the realm of human ability, with super-human abilities in many areas. Nothing like this has ever happened, it is fascinating and it's unclear what might happen next. I don't think people are even remotely realizing the magnitude of this. It will change the world in big ways that are impossible to predict.


I used to be in the camp of "GPT-2 / GPT-3 is a glorified Markov chain". But over the last few months, I flipped 180° - I think we may have accidentally cracked a core part of the "generalized intelligence" problem. It's not about the language so much as about associations - it seems to me that, once the latent space gets high-dimensional enough, a lot of problems reduce to adjacency search.

I'm starting to get a (sure, uneducated) feeling that this high-dimensional association encoding and search is fundamental to thinking, in a similar way to how a conditional and a loop is fundamental to (Turing-complete) computing.

Now, the next obvious step is of course to add conditionals and loops (and lots of external memory) to a proto-thinking LLM model, because what could possibly go wrong. In fact, those plugins are one of many attempts to do just that.
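
Concretely, I mean something like this toy loop, where llm() is a stand-in for whatever completion API you like and the "tools" are made-up placeholders - a sketch of the idea, not any particular framework:

  # Toy "LLM + conditionals + loop + external memory + tools" sketch.
  def llm(prompt: str) -> str:
      raise NotImplementedError("plug in your favorite completion API here")

  TOOLS = {
      "search": lambda q: f"(pretend web results for {q!r})",
      "note":   lambda s: f"(stored: {s})",
  }

  def agent(task: str, max_steps: int = 10) -> str:
      memory = []                              # external memory: transcript of steps so far
      for _ in range(max_steps):               # the loop
          prompt = task + "\n" + "\n".join(memory) + "\nNext action:"
          action = llm(prompt)                 # e.g. "search: packing heuristics" or "final: ..."
          if action.startswith("final:"):      # the conditional
              return action[len("final:"):].strip()
          name, _, arg = action.partition(":")
          result = TOOLS.get(name.strip(), lambda a: "unknown tool")(arg.strip())
          memory.append(f"{action} -> {result}")
      return "gave up"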


I completely agree. I have noticed this over the last few years in trying to understand how my own creative thinking seems to work. It seems to me that human creative problem solving involves embedding or compressing concepts into a spatial representation so we can draw high level analogies. A location search then brings creative new ideas translated from analogous situations. I can directly observe this happening in my own mind. These language models seem to do the same.
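
(Just to make "location search" concrete: at bottom it's nothing fancier than nearest-neighbour lookup over vectors. A toy sketch with random stand-in embeddings, since real ones would come from an embedding model:)

  # Toy "adjacency search" over concept embeddings: cosine nearest neighbours.
  import numpy as np

  concepts = ["river delta", "blood vessels", "lightning bolt", "org chart"]
  vectors = np.random.randn(len(concepts), 768)          # pretend embeddings
  vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

  def nearest(query_vec, k=2):
      query_vec = query_vec / np.linalg.norm(query_vec)
      scores = vectors @ query_vec                        # cosine similarity
      return [concepts[i] for i in np.argsort(-scores)[:k]]

  # With real embeddings, a query like "branching structures" should land near
  # all four concepts -- that's the analogy-by-proximity idea.
  print(nearest(np.random.randn(768)))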


> It can reason correctly about hard problems with very little information.

i am so tired of seeing people who should know better think that this program can reason.

(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)


>(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)

Do you know any more about ChatGPT internals than those programmers know about the human brain?

Sure, I believe you can write down the equations for what is going on in each layer, but knowing how each activation is calculated from the previous layer tells you very little about what hundreds of billions of connections can achieve.


> Do you know any more about ChatGPT internals than those programmers know about the human brain?

Yes, especially every time I have to explain what an LLM is or anytime I see a comment about how ChatGPT "reasoned" or "knew" or "understood" something when that clearly isn't how it works by OpenAI's own admission.

But even if that weren't the case: yes, I especially do understand some random ML project more than programmers know about what constitutes a human!


Honestly, I don't see how anyone really paying attention can draw this conclusion. Take a look at the kinds of questions on the benchmarks and the AP exams. Independent reasoning is the key thing these tests try to measure. College entrance exams are not about memorization and regurgitation. GPT-4 scores a 1400 on the SAT.


No shit, a good quarter of the internet is SAT prep. Where do you think GPT got its dataset?


I have a deprecated function and ask ChatGPT what I should use instead; ChatGPT responds by inventing a random nonexistent function. I tell it that the function doesn't exist, and it tries again with another nonexistent function.

Oddly enough, that sounds like a very simple language-level failure, i.e. the tool generates text that matches the shape of the answer but not its details. I am not far enough into this ChatGPT religion to gaslight myself over outright lies, like Elon Musk fanboys seem to enjoy doing.


Who's to say we're not already there?

dons tinfoil hat


The ethics committee got lazy and had GPT write the test.


Yes, you are right. But also "right" were the people who didn't want a highway built near their town because criminals could drive in from a nearby city in a stolen car, commit crimes, and get out of town before the police could find them.

The world is going to be VERY different 3 years from now. Some of it will be bad, some of it will be good. But it is going to happen no matter what OpenAI does.


Highway inevitability is a fallacy. They could've built a railway.


A railway would have created a gov't/corporate monopoly on human transport.

Highways democratized the freedom of transportation.


> Highways democratized the freedom of transportation.

What a ridiculous idea.

Highways restrict movement to those with a license and a car, and they don't care about pollution or anyone around them.


In no way did highways restrict movement. They may not have given everyone the exact same freedom of movement, but they did, in fact, increase the freedom of movement of the populace as a whole.


My experience in the States, staying at a hotel 100m away from a restaurant and not being able to reach it by foot, says otherwise...


This is the single most American thing I’ve seen on this terrible website.


They are not exclusive


TIL, no one moved anywhere until American highways were built.


They moved slower, yes.


I mean, we already know that if the tech bros have to balance safety vs. disruption, they'll always choose the latter, no matter the cost. They'll sprinkle some concerned language about impacts in their technical reports to pretend to care, but does anyone actually believe that they genuinely care?

Perhaps that attitude will end up being good and outweigh the costs, but I find their performative concerns insulting.


What I want to know is, what gives OpenAI and other relatively small technological elites permission to gamble with the future of humanity? Shouldn't we all have a say in this?


I have seen this argument a bunch of times and I am confused by what exactly you mean. Everyone is influencing the future of humanity (and in that sense gambling with it?) What gives company X the right to build feature Y? What gives person A the right to post B (for all you know it can be the starting point of a chain of actions that bring down humanity)

Are you suggesting that beyond a threshold all actions someone/something does should be subject to vote/review by everyone? And how do you define/formalise this threshold?


There's a spectrum here.

At one end of the spectrum is a thought experiment: one person has box with a button. Press the button and with probability 3/4 everyone dies, but with probability 1/4 everyone is granted huge benefits --- immortality etc. I say it's immoral for one person to make that decision on their own, without consulting anyone else. People deserve a say over their future; that's one reason we don't like dictatorships.

At the other end are people's normal actions that could have far-reaching consequences but almost certainly won't. At this end of the spectrum you're not restricting people's agency to a significant degree.

Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.


>Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.

Such an argument wasn't made.

It is a legitimate question. Where and when do you draw the line? And who does it? How are we formalising this?

You said

>Shouldn't we all have a say in this?

I am of the opinion that if this instance of this company doing this is being subjected to this level of scrutiny then there are many more which should be too.

What gave Elon the right to buy Twitter? And I would imagine most actions that a relatively big corp takes fall under the same criteria. And most actions that governments take also fall under these criteria?

These companies have a board and the governments (most?) have some form of voting. And in a free market you can also vote with your actions.

You are suggesting you want to have a direct line of voting for this specific instance of the problem?

Again, my question is. What exactly are you asking for? Do you want to vote on these? Do you want your government to do something about this?


>What gives a bunch of people way smarter than me permission to gamble with the future of humanity?

To ask the question is to answer it.


That's not the question I asked. FWIW I'm actually part of one of these groups!


Every time someone drives they gamble with humanity in a much more deadly activity. No one cares.


Car crashes don't affect the trajectory of human civilization all that much.


Except the one that killed Nujabes.


There might be government regulation on AI pretty soon... it's not crazy to think GPUs and GPU tech would be treated as defense equipment some day.


>it's not crazy to think GPUs and GPU tech would be treated as defense equipment some day

They already are. Taiwan failing to take its own defense seriously is completely rational.


Presumably the same as always. They are rich and we are not.


> gamble with the future of humanity

what in the world are you people talking about, it's a fucking autocomplete program


>it's a fucking autocomplete program

So like a human? I'd say they were pretty influential on the future of humanity.


Like clockwork, I swear to god you're all reading from the same script.

I beg of you to take some humanities courses.


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

No, OpenAI “safety” means “don’t let people compete with us”. Mitigating offensive content is just a way to sell that. As are stoking... exactly the fears you cite here, but about AI that isn’t centrally controlled by OpenAI.


It's a weird focus comparing it with how the internet developed in a very wild west way. Imagine if internet tech got delayed until they could figure out how to not have it used for porn.

Safety from what exactly? The AI being mean to you? Just close the tab. Safety to build a business on top of? It's a self-described research preview, perhaps too early to be thinking about that. Yet new releases are delayed for months for 'safety'.


You can't control whether your insurance company decides to use it as a filter for whether to approve you, or what premiums to charge you.


Can you control how your insurance makes these decisions today?


It’s Altman. Does no one remember his Worldcoin scam?

Ethics, doing things thoughtfully / the “right” way etc is not on his list of priorities.

I do think a reorientation of thinking around legal liability for software is coming. Hopefully before it’s too late for bad actors to become entrenched.


Has anyone tried handing loaded guns to a chimpanzee? Feels like under explored research


The limiting factor is breeding rate. Nobody has time to wait to run this experiment for generations (chimpanzee or human ones). ML models evolve orders of magnitude faster.


Ah. Well that's easy enough to sort. We just need to introduce some practical limit to AI breeding. Perhaps some virtual score keeping system similar to money and an AI dating scene with a high standard for having it.

I'm only half joking.


Let humans plug into it to get a peek at statistical distribution of their own prospects, and I think there was a Black Mirror episode just about that.


Executing several hundred thousand trades at 8:31AM would indeed be impressive! Imagine what it could do when the market is open!


>"today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

this is hyperbolic nonsense/fantasy


Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any data to a 3rd party API connected to the web in any way.

Today you can.

I don't think it is a stretch to think that in another 6 months there could be financial institutions giving API access to other institutions through ChatGPT, and all it takes is a stupid access control hole or bug and my above sentence could ring true.

Look how simple and exploitable various access token breaches in various APIs have been in the last few years, or even simple stupid things like the aCropalypse "bug" (it wasn't even a bug, just someone making a bad change in the function call and thus misuse spreading without notice) from last week.


This has nothing to do with ChatGPT. An API endpoint will be just as vulnerable if it's called from any application. There's nothing special about an LLM interface that will make this more or less likely.

It sounds like you're weaving science fiction ideas about AGI into your comment. There's no safety issue here unless you think that ChatGPT will use api access to pursue its own goals and intentions.


They don't have to be actions toward its own goals. They just have to seem like the right things to say, where "right" is operationalized by an inscrutable neural network, and might be the results of, indeed, some science fiction it read that posited the scenario resembling the one it finds itself in.

I'm not saying that particular disaster is likely, but if lots of people give power to something that can be neither trusted nor understood, it doesn't seem good.


I'm sure that with the right prompting, you can get it to very convincingly participate as a party in a contract negotiation or business haggling of some sort. It would be indistinguishable from an agent with its own goals and intentions. The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents. If you fake it well enough, do you actually have it?


> The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents.

The thing about "it has no goals and intents" is that it's not true. It has them - you just don't know what they are.

Remember the Koan?

  In the days when Sussman was a novice, Minsky once came to him
  as he sat hacking at the PDP-6.

  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play", Sussman said.

  Minsky then shut his eyes.
  "Why do you close your eyes?" Sussman asked his teacher.
  "So that the room will be empty."
  At that moment, Sussman was enlightened.


It has a goal of "being a helpful and accurate text generator". When you peel back the layers of abstraction, it has that goal because OpenAI decided it should have that goal. OpenAI decides its goals based on the need to make a profit to continue existing as an entity. This is no different from our own wants and goals, which ultimately stem from that evolutionary preference for continuing to exist rather than not. In the end all goals come down to a self-referential loop, to wit:

I exist because I want to exist

I want to exist because I exist

That is all there is at the root of the "why" tree once all abstractions are removed. Everything intentional happens because someone thinks/feels like it helps them keep living and/or attract better mates somehow.


You're confusing multiple different systems at play.

OpenAI has specific goals for ChatGPT, related to their profitability. They optimize ChatGPT for that purpose.

ChatGPT itself is an optimizer (search is an optimization problem). The "being a helpful and accurate text generator" is not the goal ChatGPT has - it's just a blob of tokens prepended to the user prompt, to bias the search through latent space. It's not even hardcoded. ChatGPT has its own goals, but we don't know what they are, because they weren't given explicitly. But, if you observed the way it encodes and moves through the latent space, you could eventually, in theory, glean them. They probably wouldn't make much sense to us - they're an artifact of the training process and training dataset selection. But they are there.

Our goals... are stacks of multiple systems. There are the things we want. There are the things we think we want. There are things we do, and then are surprised, because they aren't the things we want. And then there are things so basic we don't even talk about them much.


>Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any data to a 3rd party API connected to the web in any way.

Not with ChatGPT, but plenty of people have been doing this with the OpenAI (and other) models for a while now, for instance with LangChain, which lets you use the GPT models to query databases to retrieve intermediate results, issue Google searches, or generate and evaluate Python code based on a user's query...
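
For example, a minimal LangChain agent circa early 2023 looked roughly like this (written from memory, so module paths and agent names may be slightly off; it also needs OPENAI_API_KEY and SERPAPI_API_KEY set):

  # Minimal LangChain-style "LLM with tools" sketch (API names as of early 2023, from memory).
  from langchain.llms import OpenAI
  from langchain.agents import load_tools, initialize_agent

  llm = OpenAI(temperature=0)
  tools = load_tools(["serpapi", "llm-math"], llm=llm)   # web search + calculator
  agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

  agent.run("Look up today's EUR/USD rate and convert 250 EUR to USD.")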


You definitely could do that months ago, you just had to code your own connector.


Oh yes. It would of course have to happen after the market opens. 9:30 AM.


I'm also confused - maybe I'm missing something. Cannot I, or anyone else, already execute several hundred thousand 'threads' of python code, to do whatever, now - with a reasonably modest AWS/Azure/GCE account?


Yes. I think the point is that a properly constructed prompt will do that at some point, lowering the barrier of entry for such attacks.


Oh - I see. But then again, all those technologies themselves lowered the barriers of entry for attacks, and I guess yeah people do use them for fraudulent purposes quite extensively - I’m struggling a bit to see why this is special though.


The special thing is that current LLMs can invoke these kinds of capabilities on their own, based on unclear, human-language input. What they can also do is produce plausible-looking human-language input. Now, add a lot more memory and feed an LLM its own output as input and... it may start using those capabilities on its own, as if it were a thinking being.

I guess the mundane aspect of "specialness" is just that, before, you'd have to explicitly code a program to do weird stuff with APIs, which is a task complex enough that nobody really bothered. Now, LLMs seem on the verge of being able to self-program.


Why do companies with lots of individuals tend to get a lot of things done, especially when they can be subdivided into groups of around 150?

Dunbar's number is thought to be about as many relationships as a human can track. After that the network costs of communication get very high and organizations can end up in internal fights. At least that is my take on it.

We are developing a technology that currently has a small context window, but no one I know has seriously defined the limits of how much an AI could pay attention to in a short period of time. Now imagine a contextual pattern-matching machine that understands human behaviors and motivations. Imagine if millions of people every day told the machine how they were feeling. What secrets could it get from them and keep? And if given motivation, what havoc could be wreaked if it could loose that knowledge on the internet all at once?


I think it's not special. It's even expected.

I guess people think that taking that next step with LLMs shouldn't happen, but we know you can't put the brakes on stuff like this. Someone somewhere would add that capability eventually.


"If I don't burn the world, someone else will burn the world first" --The last great filter


Conceivably ChatGPT could help, with more suggestions for fuzzing that independently operating malicious actors may not have been able to synthesize.

Most of the really bad actors have skills approximately at or below those displayed by GPT-4.


Seems easier to do it the normal way. If a properly constructed prompt can make chatGPT go nuts, so could a hack on their webserver, or a simple bug in any webserver.


If crashing the NYSE was possible with API calls, don’t you think bad actors would already have crashed it?


How is this hyperbolic fantasy? We've already done this once - without the help of large language models[1].

[1]: https://en.wikipedia.org/wiki/2010_flash_crash


Doesn't that show exactly that this problem is not related to LLMs? If an API allows millions of transactions at the same time, then the problem is not an LLM abusing it but anyone abusing it. And the fix is not to disallow LLMs, but to disallow this kind of behavior. (E.g. via the "circuit breakers" introduced after that crash. Although whether those are sufficient is another question.)


> then the problem is not an LLM abusing it but anyone abusing it

I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as they will facilitate humans to go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.


With great power comes great responsibility? Today there's nothing stopping grandmas from driving, so whatever could go wrong is already going wrong


It’s a problem of scale. If grandma could autonomously pilot a fleet of 500 cars we might be worried. Same thing if Joe Shmoe can spin up hundreds of instances of stock trading bots.


You're better off placing your bet on Russian and Chinese hackers, crypto scammers than a Joe Shmoe. But read https://aisnakeoil.substack.com/p/the-llama-is-out-of-the-ba... - there's no noticeable rise in misinformation


You don't understand the alignment problem.


Oh I'm aware of it. I do not think it holds any merit right now when we're talking about coding assistants.


Not really. More behind the curve (noting stock exchanges introduced 'circuit breakers' many years ago to stop computer algorithms disrupting the market).


/remindme 5 years


Ultimate destruction from AGI is inevitable anyway, so why not accelerate it and just get it over with? I applaud releasing these tools to public no matter how dangerous they are. If it's not meant for humanity to survive, so be it. At least it won't be BORING


Death is inevitable. Why not accelerate it?

Omg you should see a therapist.


> Omg you should see a therapist.

How do you know I'm not already?


I wouldn't exactly call this suicidal ideation, but maybe a topic to broach at your next session.


A difference in philosophy is not cause for immediate therapy. Most therapists are glorified echo chambers and only adept at 'fixing' the more popular ills. For 200 bucks an hour.


Difference in philosophy is not "the world can't end fast enough and nothing matters."


Funny, but I actually discussed this with my therapist. He asked me where he can try out the AI shrink and was impressed by it. He's on board!


Please keep commenting on HN


Finally something agreeable.


Immanentizing the Eschaton!


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

Anyone who believes OpenAI safety talk should take an IQ test. This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?


> This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?

The moment they took VC capital was the start of them closing everything and pretending to care about 'AI safety' and using that as an excuse and a scapegoat as you said.

Whenever they release something for free, always assume they have something better but will never open source.


The question is whether their current art gives them the edge to build a moat. Specifically, whether in this case the art itself can help create its own next generation so that the castle stays one exponential step out of reach. That seems to be the ballgame. Say what you will, it does seem to be the ultimate form of bootstrapping. Although uncomfortably similar to strapping oneself to a free falling bomb.


I wish OpenAI and Google would open source more of their jewels too. I have recently heard that people are not to be trusted "to do the right thing..."

I personally don't know what that means or if that's right. But Sam Altman allowed GPT to be accessed by the world, and it's great!

Given the number of people in the world with access to and understanding of these technologies, and given that such a large portion of the infosec and hacker world knows how to cause massive havoc but has remained peaceful all along, with only a few curious explorations, I think that shows the good nature of humanity.

It's incredible how complexity evolves, but I am really curious how the same engineers who created YTsaurus or GPT-4 would have built the same systems using GPT-4 plus their existing knowledge.

How would a really good engineer, who knows the TCP stack, protocols, distributed systems, consensus algorithms and many other crazy things taught in SICP and beyond, use an AI to build the same? And would it be faster and better? Or are my/our expectations of LLMs set too high?


I'm sure somebody posted this exact same comment in an early 1990s BBS about the idea of having a computer in every home connected to the internet.

I would first wait until ChatGPT causes the collapse of society and only then start thinking about how to solve it.


I've seen comments from some usually sensible voices arguing that ChatGPT wasn't a threat because it wasn't connected to anything.

As if the plumbing of connecting up pipes and hoses between processes online or within computers isn't the easiest part of this whole process.

(I'm trying to remember who I saw saying this or where, though I'm pretty sure it was in an earlier HN thread within the past month or so. Of which there are ... frighteningly many.)


Yes but.... money


> "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

Wouldn't it be a while before AI can reliably generate working production code for a full system?

After all, it's only got open source projects and code snippets to go off of.


This seems like baseless fearmongering for the sake of fearmongering. Sort of like NIMBYism. "No I don't want the x people to have access to concentrated housing in my area, some of them could be criminals" while ignoring all the benefits this will bring in automating the mundane things people have to do manually.


I think where the rubber meets the road is that OpenAI can actually to some degree make it harder for their bot to make fun of disabled people but they can’t stop people from hooking up their own external tools to it with the likes of langchain (which is super dope) and first party support lets them get a cut of that for people who don’t want to diy.


> giving a loaded handgun to a group of chimpanzees.

Hate to be that guy, but this is our entire relationship to AI.


I mean, I love it, but I don't know what they mean by safety. With Zapier I can just hook into anything I want, custom scripts, etc. Seems like there are almost no limits with Zapier, since I can just proxy it to my own API.


As quickly as someone tries fraudulent deploys involving GPTs, the law will come crashing down on them. Fraud gets penalized heavily, especially financial fraud. Those laws have teeth and they work, all things considered.

What you're describing is measurable fraud that would have a paper-trail. The federal and state and local governments still have permission to use force and deadly violence against installations or infrastructure that are primed in adverse directions this way.

Not to mention that the infrastructure itself is physical infrastructure that is owned by the entire United States and will never exceed our authority and global reach if need be.


I agree with your skepticism. I also think this is the next natural step once “decision” fidelity reaches a high enough level.

The question here should be: Has it?


We're getting really close to Neuromancer-style hacking where you have your AI try to fight the other person's AI.


A rogue AI with real-time access to sensitive data wreaks havoc on global financial markets, causing panic and chaos. It's just not hard to see that it's going to happen - like how faster cars inevitably end with someone in a horrible crash.

But it's our responsibility to envision such grim possibilities and take necessary precautions to ensure a safe and beneficial AI-driven future. Until we're ready, let's prepare for the crash >~<


It has already happened. The 2010 Flash Crash has been largely blamed on other things, rightly or wrongly, but it seems accepted that unfettered HFT was involved.

HFT is relatively easy to detect and regulate. Now try it with 100k traders all taking their cues from AI based on the same basic input (after those traders who refuse to use AI have been competed out of the market.)


> 1987

Don't you mean August 10, 1988?


> But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

Why?


HN hates blockchain but loves AI...

well, let's fast forward to a year from now


Coordinated tweet short storm.


I dig the Hackers reference.


> today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987.

Sorry do you have a link for this?


The only agency ChatGPT has is the user typing in data for text completion.


Based on the speed at which OpenAI is shipping new products and assuming that they use their own technology, I'm starting to get more and more convinced that their technology is a superpower.

Timeline of shipping by them (based on https://twitter.com/E0M/status/1635727471747407872?s=20):

DALL·E - July '22

ChatGPT - Nov '22

APIs 66% cheaper - Aug '22

Embeddings 500x cheaper while SoTA - Dec '22

ChatGPT API. Also 10x cheaper while SoTA - March '23

Whisper API - March '23

GPT-4 - March '23

Plugins - March '23

Note that they have only a few hundred employees. To quote Fireship from YouTube: "2023 has been a crazy decade so far"


Their superpower is having a tech giant owning 49% of them, willing to drop deep deep money, without the obvious payoff. :)

I also wonder to what extent their staffing numbers reflect reality. How much of Azure's staffing has been put on OpenAI projects? That's probably an actual reflection of the real cost of this thing.


It's probably burning through tens of millions per day and it still doesn't matter, this is fusion power in electricity terms. Free money down the line after the initial investment. I'll pay, you'll pay, your neighbour's dog will pay for this.


> How much of Azure's staffing has been put on OpenAI projects?

Great point!


Yes, but other orgs have similar resources. Most obvious is Google/Deepmind. Apple too. Neither of them made this leap.


The DoD really needs to step in and mark this tech as non-exportable due to the advantage (or potential advantage) it provides in many different fields.


Russia is already blocked from ChatGPT and Bing; I don't know about China.

But it's all security theater. Plenty of people use it with VPNs, and I know several who found it useful / interesting enough to bother paying for it (which involves foreign credit cards etc so it's kind of a hassle). I'm sure so does the Russian govt.

In any case, I don't see how you could realistically block any of that without effectively walling off the rest of the Internet.


I work at a Chinese university. Everyone (students and teachers) is using ChatGPT; officially OpenAI doesn't let you access it from China, but everyone has VPNs, and even the university internet is all behind a Hong Kong VPN (since Google etc. is useful).

It does all feel like weird theatre that just makes anything American slightly annoying to use. So there are local startups and researchers working on replicating these things and making them easier to access.

If the USA doesn't want to make products available to the whole world out of some combination of fear and patriotism, then China will instead.


If China does not have Google equivalent and chinese people are instead forced to use Google via vpn, what makes you think China will make some AI-related product available to the whole world?

It will most likely be a closely-guarded secret one, no?


Google can't even keep up with OpenAI. I wouldn't expect much out of China in this space.


Not exportable often means just not available on the Internet at all. Due to a range of factors, such as the dilution of human content online, the inability to prevent bad actors from using it etc I would rather this tech never saw the light of day and was limited to government and research institutions.


It never works like that with tech. Especially not this kind of tech, where all the building blocks have been around for some time and are well-understood - just not the result of combining them at this scale.

In any case, if it's really that powerful, limiting it to one government sounds like a recipe for the worst kind of a one world state to me.


I am from China. The Chinese government has banned access to ChatGPT-related products, and Baidu has launched 文心一言 (Ernie Bot), which is basically GPT-3.


Wait, do you mean it's on par with GPT-3, or it's literally GPT-3? If the latter, how did they get the model?


Yeah, I'd be really curious to hear how much people within OpenAI use their tools to create and ship their code. That would be quite a compelling testimony, and also help me feel more clear, because I've been quite confused at how quickly things have been going for them.


what makes you think they are leaders at applying the tech they create?


I've seen tweets that suggest they are using it, particularly, the prerelease GPT-4 was used internally.


> "2023 has been a crazy decade so far"

what a couple weeks!


It's ironic that a few months ago Amazon laid off parts of the Alexa team and 'conversational' was considered failed. Then ChatGPT, etc happened. What Alexa wanted to build with Alexa skills, ChatGPT does much more effortlessly.

It's also an interesting case study. Alexa foundationally never changed. Whereas OpenAI is a deeply invested, basically skunkworks project with backers who were willing to sink significant cash into it before seeing any returns, Alexa got stuck on a type of tech that 'seemed like' AI but never fundamentally innovated. Instead the sunk cost went to monetizing it ASAP. Amazon was also willing to sink cash before seeing returns, but they sunk it into very different areas...

It reminds me of that dinner scene in The Social Network, where Justin Timberlake says "you know what's f'ing cool? A billion dollars" and lectures Zuck on not messing up the party before you know what it is yet. Alexa / Amazon did a classic business play. Microsoft / OpenAI were just willing to figure it all out after the disruption happened, when they held all the cards.

https://www.youtube.com/watch?v=k5fJmkv02is


I have never used Alexa, Hey Google, or whatever flavour you choose for more than "set a timer for x minutes" and other very basic tasks. It's amazing how terrible the voice assistant products are compared to ChatGPT.


However, is ChatGPT a solution to virtual assistant? I'm not sure, because virtual assistants work with huge databases of entities, e.g., songs or movies, and it's not really clear that ChatGPT can handle this. A couple of days ago I asked it whether it knows Dream of You by Camila Cabello, a song with hundreds of millions of plays, and it didn't... Furthermore, that database is constantly updated... It sounds like LLMs solve neither of these problems well, although they may be an improvement over current wave of virtual assistants.


That's because you used ChatGPT. Rather than GPT-4. GPT-4 has more parameters, so can remember much more obscure facts.

I had GPT-4 analyze the ethics of some obscure japanese visual novel from the perspective of multiple characters in that story. It did so flawlessly.

ChatGPT would have started making up stuff in the second paragraph because it couldn't remember what actually happened.


Have you provided the visual novel as a prompt or just asked about it? ChatGPT does seem to know more about books than songs, maybe because books are more often described in text while songs are not; not sure.

I wish more parameters gave a generalizable solution, but research suggests that's not the case [1, 2, 3]. OpenAI won't tell you this, because they have a product to sell.

[1] http://arxiv.org/abs/2211.08411 [2] http://arxiv.org/abs/2011.03395 [3] http://arxiv.org/abs/2005.04345


I think ChatGPT plugins may be what solves this problem. Does IMDB have an API?


Sites like IMDB feel like they might be in a pickle. What happens to their ad revenue if they expose their API and no one navigates to their site anymore?


API access as a service here we come!


Could be a good thing. Maybe this is the push the world needs to create a business model around providing useful information without surveillance capitalism smeared on top.


Well, isn't that what this plugin system is meant to solve? You can get up to date information into ChatGPT.


Experimenting with and/or reading about ChatGPT and then interacting with Siri feels almost offensive now. All of the assistants still suck - get on it AmaGooPple!


They sank it in "customer obsessed research" lol


Everyone's been talking about how ChatGPT will disrupt search, but looking at the launch partners, I think this has the potential to completely subvert the OS / App Store layer. On some level, how much do I need an OpenTable app if I can use voice/text input and a multi-modal response that will ultimately book my reservation?

Not saying mobile's going away, but this could be the thing that does to mobile what mobile did to desktop.


People said this about Alexa/Siri et al and it didn’t happen. ChatGPT is way better at understanding you, so that’s a big boost. It could be a great tool/assistant but it probably won’t replace apps.

The problem with those other platforms that this doesn’t address include:

- Discoverability. How do you learn what features a service supports? On a GUI you can just see the buttons, but on a chat interface you have to ask and poke around conversationally.

- Cost/availability. While a service is server-bound, it can go down, and specifically for LLMs the cost is high per request. Can you imagine it costing $0.10 a day per user to use an app? LLMs can't run locally yet.

- Branding. OpenTable might want to protect their brand and wouldn't want to be reduced to an API. It goes both ways - Alexa struggled with differentiating skills and user data from Amazon experiences.

- Monetization. The conversational UI is a lot less convenient for including advertisements, so it's a lot harder for traditionally free services to monetize.

Edit: plugins are still really cool! But probably won’t replace the OSes we know.


Good points - but I fundamentally disagree here.

The whole ecosystem, culture and metaphor of having a 'device' with 'apps' is to enable access to a range of solutions to your various problems.

This is all going to go away.

Yes, there will always be exceptions and sometimes you need the physical features of the device - like for taking photos.

Instead, you'll have one channel which can solve 95% of your issues - basically like having a personalised, on-call assistant for everyone on the planet.

Consider the friction when consumers grumble about streaming services fragmenting. They just want one. They don't want to subscribe to 5+.

In 10 years, kids will look back and wonder why on earth we used to have these 'phones' with dozens or hundreds of apps installed. 'Why would you do that? That is so much work? How do you know which you need to use?'

If there was one company worrying about change, I would think it would actually be Apple. The iPhone has long been a huge driver of sales and growth, as increasing performance requirements have pushed consumers to upgrade. Instead, I think the increasing relevance of AI tools will invert this. Consumers will be looking for smaller, lighter, harder-wearing devices. Why do you need a 'phone' with more power? You just need to be able to speak to the AI.


An interface based on voice only has an issue: people tend to not live alone. As children they live with their parents. As adult, many want to live with a significant other.

Having somebody else in the house speaking out loud each time they want info from the internet could become annoying.

Apart from having a mind reading device, I don't see so far a solution to this problem better than text input with a physical keyboard or a virtual keyboard on the device.


> Consider the friction when consumers grumble about streaming services fragmenting. They just want one. They don't want to subscribe to 5+.

I think you just proved it won't happen anytime soon.

Consumers obviously would prefer a "unified" interface. Yet we can't even get streaming services to all expose their libraries to a common UI - which is already built into Apple TV, fireTv, Roku, and Chromecast. Despite the failure of the streaming ecosystem to unify, you expect every other software service to unify the interfaces?

I think we'll see more features integrated into the operating system of devices, or integrated into the "Ecosystem" of our devices - first maps was an app, then a system app, now calling an uber is supported in-map, and now Siri can do it for you on an iPhone. But I think it's a long road to integrate this universally.

> If there was one company worrying about change, I would think it would actually be Apple.

I agree that apple has the most to lose. Google (+Assistant/Bard) has the best opportunity here (but they'll likely squander it). They can easily create wrappers around services and expose them through an assistant, and they already have great tech regarding this. The announcement of Duplex was supposed to be just that for traditional phone calls.

Apple also has a great opportunity to build it into their operating system, locally. Instead of leaning into an API-first assistant model, they could use an assistant to topically expose "widgets" or views into existing on-device apps. We already see bits of it in iMessages, on the Home Screen, the share screen, and my above Maps example. I think the "app" as a unit of distribution of code is a good one, and here to stay, and the best bet is for an assistant to hook into them and surface embedded snippets when needed. This preserves the app company's branding, UI, etc. and frees Apple from having to play favorites.

Edit: apple announcing LLM optimizations already indicates they want this to run on apple silicon not the cloud.


Great point about failure to unify (or intentionally preventing it).

The space is in a land-grab phase, where everyone wants to position themselves as the next Google, and control the channel.

Will be interesting to see how this all plays out.


> In 10 years, kids will look back and wonder why on earth we used to have these 'phones' with dozens or hundreds of apps installed. 'Why would you do that? That is so much work? How do you know which you need to use?'

Phones with apps have been around for 29 years. I'm calling BS on your prediction now.


I was thinking the same way, but here's where I could imagine things being different this time (Fully aware that I just like anyone else is just guessing about where we'll end up)

- Discoverability. I think we'll move into a situation where the AI will have the context to know what you will want to purchase. It'll read out the order and the specials and you just confirm or indicate that you'd like to browse more options. (In which case the Chat window could include an embedded catalogue of items)

- Cost/availability - With the amount of people working in this area, I don't think it'll be too long before we're able to get a lighter weight model that can run locally on most smart phones.

- Branding - This is a good point, but also, I imagine a brand is more likely to let itself get eaten, if the return will be a constant supply of customers.

- Monetization - The entire model will change, in the sense that AI platforms will revenue share with the platforms they integrate with to create a mutually beneficial relationship with the suppliers of content. (Since they can't exist without the content both existing and being relevant)


I spent a lot of time working on the product side in the Voice UI space, and therefore have a lot of opinions. I could totally end up with a wrong prediction, and my history may make me blind to changes, but I think a chat assistant is a great addition to a rich GUI for simple tasks.

> I think we'll move into a situation where the AI will have the context to know what you will want to purchase

My partner who lives in the same house as me can't figure out when we need toilet paper. I'm not holding my breath for an AI model that would need a massive and invasive amount of data to learn and keep up.

Also, Alexa tried to solve this on a smaller scale with the "by the way..." injections and it's extremely annoying. Think about how many people use Alexa for basically timers and the weather and smart home. They're all tasks that are "one click" once you get into the GUI, and have no lists and minimal decisions... Timer: 10 min, weather: my house, bedroom light: off. These are cases where the UI necessarily embeds the critical action, and the user knows the full request state.

This is great for voice, because it allows the user to bypass the UI and get to the action. I used to work on a voice assistant and lists were the single worst thing we had to deal with because a customer has to go through the entire selection. ChatGPT has a completely different use case, where it's great for exploring a concept since the LLM can generate endlessly.

I think generative info assistants truly is the sweet spot for LLMs and chat.

> in the sense that AI platforms will revenue share with the platforms they integrate with to create a mutually beneficial relationship with the suppliers of content.

Like Google does with search results? (they don't)

Realistically, Alexa, Google Assistant, and Siri all failed to build out these relationships beyond apps. Companies like to simply sell their users' attention for ads, and taking a handout from the integrator means either less money or an expensive chat interface.

Most brands seem to want to monetize their own way, in control of themselves, and don't want to be a simple API.


> LLMs can’t run locally yet.

"Yet" is a big word here when it comes to the field as a whole. I got Alpaca-LoRA up and running on my desktop machine with a 3080 the other day and I'd say it's about 50% as good as ChatGPT 3.5 and fast enough to already be usable for most minor things ("summarize this text", etc) if only the available UIs were better.

I feel like we're not far off from the point where it'll be possible to buy something of ChatGPT 3.5 quality as a home hardware appliance that can then hook into a bunch of things.


Agree that Alpaca is Important. I got the smallest one running on a pathetic notebook… 2 cores, 8GB RAM. It was slow. It was sloppy. But it worked. Getting these things running on GPU/NPU will be very compelling, especially if we don’t hit a wall on compression. I think a sweet spot exists where consumer clients are powerful enough and models are small enough to deliver value and privacy.


I think you're missing the fact that the LLM could also generate the frontend on the fly by e.g. spitting out frontend code in a markup language like QML. What's a multi-activity Android app if not an elaborate notebook? Branding can just be a parameter.

Sure, maybe OpenTable would like to retain control. But they'll probably just use the AI API to implement that control and run the app.


Chat can be an interface, but its also essentially a universal programming language which can be put behind (or generate itself) any kind of interface.


Who's to say though that it'll always stay a text format.

They could bring in calendar, payment, other UI functionality...

Basically they could rethink how everything is done on the Web today.


It almost certainly won't take the form of a text format. Impersonating a chatbot or a search engine GUI is just the fastest way for OpenAI to accumulate a few hundred million users, to leave the competition for user data and metadata behind.


It would likely take the form of just-in-time software.


>The conversational UI is a lot less convenient to include advertisements

How so? Surely people are going to ask this thing for product recommendations, just recommend your sponsors.


This moves the advertisement opportunity to the chat owner. If you want to use chat (+API) to book a table at a restaurant, then the reservation-API company loses a chance to advertise to you vs. if you used a dedicated reservation web app.


Oh I see what you mean, yes. The reservation api company will have to get money through other means (either from the user via OpenAI or from the restaurant).

Honestly I see this as a positive change, I'd rather be the customer than the product.


I’m already seeing advertisements in New Bing.


We have reached "peak UI". In the future we're not going to need every service to build four different versions of their app for every major platform. They can just build a barebones web app and the AI will use it for you, you'll never have to even see it.


IMO you won't even need to build the app, you'll just provide a data model and some natural language descriptions of what you want your product to do.


That’s how this plugin system works already.


I don't think this is the case. You provide an API spec, but you also have to provide the implementation of that API. ChatGPT is basically a concierge between your API and the user.
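
Roughly (paraphrasing the docs from memory, so the manifest field names below are approximate): you host both the implementation and a small manifest pointing at your OpenAPI spec, and ChatGPT just reads the spec and calls your endpoints. A minimal Flask-style sketch:

  # You own the implementation; ChatGPT only sees the manifest + OpenAPI spec.
  # Manifest field names are approximate, not copied from the docs.
  from flask import Flask, jsonify, request

  app = Flask(__name__)
  TODOS = []

  @app.route("/todos", methods=["GET", "POST"])
  def todos():
      # Your actual business logic lives here, on your servers.
      if request.method == "POST":
          TODOS.append(request.json["item"])
      return jsonify(TODOS)

  @app.route("/.well-known/ai-plugin.json")
  def manifest():
      # ChatGPT fetches this to learn what the plugin does and where the spec lives.
      return jsonify({
          "schema_version": "v1",
          "name_for_model": "todo",
          "description_for_model": "Manage the user's todo list.",
          "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
      })

  if __name__ == "__main__":
      app.run(port=5000)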


I think the API is meant to be the data model in this scenario. The point is that you design the API around the task that it solves, rather than against whatever fixed spec OpenAI publishes. And then you tell ChatGPT, "here's an AI, make use of it for ..." - and it magically does, without you having to write any plumbing.


It sounds like you might have it backwards. The API spec is published by you and the AI consumes it.


It should read "here's an API", of course.


It isn't yet. For example, Wolfram Alpha is an app that GPT is communicating with, and it actually exists.


Except you won't if you want to make money because then you don't have a business


Unless you charge for providing services of value to people.


And that is why some people think this AI leap could be as big as the internet.


Charge people for installing your plugin into ChatGPT.


I mean yeah, you'll have to provide a data model (and data) that other people don't have.


I mean, if you consider mobile we might already be down from the peak. In the sense that the interface bandwidth has shrunk to whatever 2 fingers can handle.


Headless app is the way to go.


This is what Apple's Siri was meant to be. Apple bought Siri from SRI international (Siri = SRI), and when it was launched was meant to include ability to book restaurants etc (thereby bypassing search), but somehow those capabilities were never released and today Siri still can't even control the iPhone!

My hot take on ChatGPT plugins is a bit mixed - should be very powerful, and maybe significant revenue generator, but at same time doesn't seem in the least bit responsible. We barely understand ChatGPT itself, and now it's suddenly being given ability to perform arbitrary actions!


Google's assistant, on the other hand, did figure out the reservation trick. Reportedly "book a table for four people at [restaurant name] tomorrow night" actually works, though I've never tried it.


Interesting - I wasn't aware of that. Will have to Google to see what else it may be capable of. Google really needs to update assistant with something LLM based though, and it seems Bard really isn't up to the job.


This doesn’t take a huge level of “AI” by any means. It’s really simple pattern matching in a very limited context.


Siri's capabilities are somehow much closer to Google Bard than ChatGPT (have tried all of them).


That's a bit harsh on Bard, but yes - just got access today and it's surprisingly weak.


BARD just gives up on coding questions.


All chatbots require AI to really be useful. This just did not exist until a few years ago.


This isn’t really true. Siri could easily be more useful in its current state if it had a larger library of intents and API access.


I'm kind of skeptical of this simply because people were saying the same thing about chatbots back when there was a lot of hype around Messenger. Sure, they weren't as advanced as what we have now, but they were fundamentally capable of the same things.

Not only did the hype not pan out, but it feels as if they were completely forgotten.

In a nutshell that's why I'm still largely dismissive of anything related to GPT. It's 2016-2018 all over again. Same tech demos. Same promises. Same hype. I honestly can't see the big fundamental breakthroughs or major shifts. I just see improvements, but not game-changing ones.


This is a healthy skepticism but the difference was that using Messenger chatbots was a disjointed, clunky experience that felt slower than just a few taps in the OpenTable app. Not to mention that their natural language understanding was only marginally better than Siri at best.

In this scenario, it seems dramatically faster to type or speak "Find me a dinner reservation for 4 tomorrow at a Thai or Vietnamese restaurant near me." than to browse Google Maps or OpenTable. It then comes down to the quality and personalization of the results, and ChatGPT has a leg up on Google here just due to the fact that their results are not filled with ads and garbage SEO bait.


>but they were fundamentally capable of the same things.

This is not the case. The difference between the current state of the art in NLP and chatbots 3 years ago is so massive, it has to be seen as qualitative. Pre-GPT-3, computers did not understand language and no commercial chatbot had any AI. Now computers can understand language.


> Now computers can understand language.

"understand"


If I tell it to do X, and it does X, for all practical purposes it means that it understood what I said.


It was taught to react in a specific way to specific words, the same way you can train a dog to bark at the phrase "quantum physics".


It can invent words, and then correctly use them to compose.

https://news.ycombinator.com/item?id=35268950


It can invent words because GPT is an amazing pattern extrapolator.


Okay, but who "taught it to bark" correctly in response to those words?


You can read the papers on how GPT was trained?


You're making a specific claim here: "it was taught to react in specific way on specific word", and I'm giving you a specific example of an invented word that GPT used to full effect when composing. Can you explain how it was "taught to react" to this word in a manner that enables it to use it in the new meaning that wasn't even in its training set?


I answered you: GPT is a pattern extrapolator and was taught to be so; creating a new word totally fits pattern extrapolation.


How about using that invented word consistently afterwards?

It seems to me that the argument you're trying to make can be extended more or less indefinitely, though. But there's a point at which "it's a pattern extrapolator" becomes less of an explanation and more of a largely irrelevant piece of trivia.


Well, if it can bark out the right answer, it's an impressive damned dog...


The thing is, it often doesn't bark the right answer but barks nonsense, because it was trained on predicting the next word in a sentence.


At this point ChatGPT is blowing our human ability to use language so far out of the water on so many levels, I'd argue we should start putting quotes around our human ability to "understand" language. GPT-4 has already eclipsed us when it comes to language.


This time it works.


Yeah being able to generate media/text is what excites me about these models, more than using my voice or a text input to do X instead of a webpage which has a GUI and buttons and text boxes.


I'm afraid it has the potential to subvert everything. Looking at the plugins initiative, it's not hard to imagine a world where separate websites, and browsing websites as we know it, no longer exist; instead you interact with the model(s) directly to do what needs to be done: asking for news, buying a present for the kids, discussing car models within a selected price range, etc.


It’s an interesting idea, but I’m not convinced the average person has any interest in texting or talking out loud to their device to complete all their computing tasks. It’s slower for most things.

Also I think there will be little interest in delegating that level of control to a single source for anything that’s important. For example, say I’ve got 5k to spend on home theatre gear, why on earth should I trust Shopify’s AI to suggest what I need and find the best price? The incentives aren’t in alignment.


That’s what went wrong with Alexa. They figured people would buy stuff via voice, but nobody trusted it for that.


As long as the services do get paid, this is not much different from what we have now.

Google gatekeeps everything currently; it's in the browser, the search button, the phone, etc. Having chatbots instead of Google is better.


> and a multi-modal response that will ultimately book my reservation?

How is it going to do that? OpenTable's value isn't in the tech; a 15-year-old could implement that over a weekend. Or maybe ChatGPT can be put in the restaurant and somehow figure out how to seat you. And then you'd have a human talking to ChatGPT and ChatGPT talking to another ChatGPT to complete the task. That'll be interesting, but otherwise this is overly complicated for all parties involved.


Anything preventing Bard/etc from using these plugins as well?

Would be nice to keep the ecosystem open.


There’s nothing stopping any LLM-backed chatbot from using plugins; the ReAct pattern discussed recently on HN is a general pattern for incorporating them.

The main limits are that, unless they are integral and trained in (which is less flexible), each plugin's description takes space in the prompt, and in any case the interaction itself also takes token space, all of which reduces the tokens available to the main conversation.
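
A quick way to see that cost, as a rough sketch: every enabled plugin's description has to ride along in the prompt before the user says anything. The plugin names and descriptions below are made up for illustration; cl100k_base is the tokenizer used by the GPT-3.5/4 family.

    # Count how many tokens invented plugin descriptions eat out of the context window.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    plugin_docs = {
        "wolfram": "Use for math and factual computations. Call with a natural language query.",
        "opentable": "Use to find restaurant availability. Call with city, date and party size.",
        "zapier": "Use to trigger actions in other apps. Call with an action name and arguments.",
    }

    system_prompt = "You are a helpful assistant.\n" + "\n".join(
        f"Plugin `{name}`: {doc}" for name, doc in plugin_docs.items())

    used = len(enc.encode(system_prompt))
    print(f"{used} tokens of the context window spent before the conversation even starts")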


My experience with Bard is it probably isn't smart enough to figure out on its own how to use these. Google would probably have to do special finetuning/hardcoding for the plugins that they want to work.


Bard isn't smart enough, so I doubt it. Google is done.


I'm not sure if the word "subvert" is right; the OS is still there, the App Store is still there, and nothing they've demonstrated will measurably impact revenue from these sources (the iOS App Store's largest source of revenue, by far, is games. Some estimates put Games as like 25% of all of Apple's revenue).

I think there's also a global challenge (actually, opportunity IS the right word here) that by and large the makers of operating systems aren't the ones ahead in the language AI game right now. Bard/Google may have been close six months ago, but six months is an eternity in this space. Siri/Apple is so far behind that it's not looking likely they can catch up. About a week ago a Windows 11 update was shipped which added a Bing AI button to the Windows 11 search bar, but Windows doesn't really drive the zeitgeist.

I wonder if 2023/4 is the year for Microsoft to jump back into the smartphone OS game. There may finally be something to the idea of a more minimalist, smaller voice-first smartphone that falls back on the web for application experiences, versus app-first.


Yes, it will change the application layer. LLMs allow using a natural user interface (NUI) as the universal interface to invoke under-utilized data and APIs. We can now develop a super-app rather than many one-off apps. I have been exploring this idea since 2021 and would love to connect with anyone who wants to work in this space.


Most (if not all) of those apps are free, though; you supply them as a convenience because you know that smartphone owners spend money. The host OS loses access to that info, which is used to target better ads on certain phone platforms.


Why do you think Apple would care? It came out in the Epic trial that 80%+ of App Store revenue comes from in-app purchases in pay-to-win games and loot boxes.

Apple doesn’t make any money from OpenTable.


I’m surprised Apple hasn’t improved Siri with a model like this. Currently it’s just trash, but with a GPT-style model behind it you could actually get it to do things.


Why is it surprising? The amount of server-side compute needed to serve a billion iOS devices at any sort of performance level is extreme.

The limitation on making Siri more useful is just adding to and refining its intent system. It already integrates with Wolfram Alpha, for instance.


I agree, it's a revolutionary new better UX paradigm.


So, what's your prediction? Windows Phone gets ChatGPT, or the other phone OS makers add a Microsoft chat app?


If you live in SF and have gone out to casual bars or restaurants, you meet/hear people talking about ChatGPT. In particular, I've been hearing a lot of people talking about their startups being "a UI using ChatGPT under the hood to help you with X". But I'm starting to get the feeling that OpenAI will eat their lunches. It's tried and true and it worked for Amazon.

If OpenAI becomes the AI platform of choice, I wonder how many apps on the platform will eventually become native capabilities of the platform itself. This is unlike the Apple App Store, where they just take a commission, and more like Amazon where Amazon slowly starts to provide more and more products, pushing third-party products out of the market.


When I look at the kind of ideas floated around for ChatGPT use, it kinda feels like watching someone invent an internal combustion engine in 1800, and then use it to drive an air conditioner attached to a horse-drawn wagon. Sure, it's a practical solution to a real problem, but it's also going to be moot because the problem won't be relevant soon. I think the vast majority of these startups and their ideas are going to end up like that.


Fascinating to hear your perspective on this. I think a lot of people will fall out of the sky, being overtaken before even realizing why. In Germany, most of my friends and colleagues working in SE tech or digital design often "haven't even tried this Chat something thing" or stopped at "AI images? Those weird small pictures that look like a CPU is high on drugs?"

And don't get me started on non-tech friends and family. I think we are taking a leap that will make the digital world of 2022 look like an Amish lifestyle.


Depends. In my bubble (EE and CS students in Germany) everyone is talking about this.


What I find super impressive now is image generation. I remember just 3-4 years ago the images generated by AI on news articles looked like LSD trip hallucinations. Now they can generate extremely plausible images of any scene you describe. It's absolutely wonderful.

RE being overtaken -- when ChatGPT came out, I saw some startup doing "Automatic UI code writing with ChatGPT and design images", but after seeing OpenAI's multimodal GPT-4... it seems like that startup is no longer needed. I think this will be the case for hundreds of startups that are aiming to add 10% on top of CurrentGPT. Put differently, GPT-4 basically 10x'd GPT-3. So all of those startups doing GPT-3 + 10% are being left behind in the dust of GPT-4.


The market will sort this out. If OpenAI decides to make shovels rather than digging for gold (like it should), then the customer facing apps will fight it out for very little margin on top of marketing expenses while OpenAI (or equivalent) is rolling in money.


They're doing both. They're making shovels and killing the people who buy them to dig for gold.


The Bill Gates line "A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it's a platform." seems apt - I'm sure they'll figure it out.


Yeah it’s less about connecting various APIs and databases than it is about your data.


The rate of improvement with GPT has been staggering. In just January I spent a lot of time working with the API and almost everything I've done has been made easier over the past two months.

They're really building a platform. Curious to see where this goes over the next couple of years.


I just got access to Bard. I would hate to be Google leaders at the moment.


It's incredible how Google started ahead and then shot themselves repeatedly in the face by granting so much internal power to dubious AI "ethicists". Whilst those guys were publicly Twitter-slapping each other, OpenAI were putting in place the foundations for this.


The issue wasn't/isn't AI ethicists. It's their incentive model. They simply have trouble understanding how this helps their business. Same reason why Blockbuster found themselves behind Netflix, despite having clear visibility to watch Netflix slowly walk up and eat their lunch right in front of them.


Well, I'm curious, what is the business model of it? Just charge per 1k tokens or subscription? How do the plugins make money off this?


Two key ways

1. You use their services, which makes them money (e.g. you're shown good flight info and book through ChatGPT; they get a commission).

2. You sell access to end users. The requests can be authenticated so you can give your paying users access to your stuff through an advanced natural language engine for the implementation cost of roughly adding a file explaining your APIs.


Plugins don't need to make money; you are still using tokens and paying for those. The more plugins you use, the more conversation (and tokens) you also need.


Yes, I think OpenAI has a business model here (token/subscriptions) but how do the external services make money? Will many of these apps be cannibalized by ChatGPT and other LLMs? For example, the Speak plugin for language teaching, at what point is ChatGPT good enough to do everything that Speak does?


If your business is mostly reliant on their API then they will eat you. You need to differentiate by having access to something they do not.


that...without eroding their cash cow search business.


Google is the Xerox GUI of our day. They invented this tech and did nothing with it while an upstart and Microsoft, ironically enough, took it and ran. They don’t even have a great data moat. They are in serious trouble.


Are you saying that Google doesn’t have a great data moat? That seems completely off considering Google search, YouTube, GCP, Google for Education/Work, Android, etc.


And yet, Bard seems to be worse than ChatGPT. Google aren't the only ones who can crawl the web. Given the depth of their index and how much spam it contains it might not even be the asset it seems.


In particular because, I believe, they are more or less using a term index. This new world relies on vector indexes for semantic search.


I'm not sure the quantity of data will be a differentiating factor once the LLM reaches a certain parameter count.


In this context I think we’re talking about moat, i.e., private data, that they can leverage for personalized experiences. Similar to what Microsoft announced with their Office365 Copilot stuff.


Nah, if anything the AI ethics researchers will be saying "I told you so" in a few years. But like the agricultural revolution or the industrial revolution, I don't think the universe is capable of withholding this kind of epoch shift.


>> Curious to see where this goes over the next couple of years.

Probably will make half of the HN users unemployed.


I agree. Part of me wonders how much they're using GPT to improve itself.


When we were first breaking it people were wondering if the developers were sitting in threads looking for new exploits to block.

Now I’m wondering if the system has been modifying itself to fix exploits…


It does actually work. For some of the experiments I did with GPT-4, it made some mistakes because my initial prompt wasn't sufficiently precise. After discussing its mistakes with it, I asked it to write a better prompt that would prevent them. Sure enough, it did just that.


I'm not expecting this comment to do numbers, so anyone who is reading this must be feeling as affected by this announcement as me. Is software essentially solved now? I haven't been able to do much work since the announcement came out, and that has given me a little time to think and reflect.

I do think much of the kind of software we were building before is essentially solved now, and in its place is a new paradigm that is here to stay. OpenAI is certainly the first mover in this paradigm, but what is helping me feel less dread and more... excitement? opportunity? is that I don't think they have such an insurmountable monopoly on the whole thing forever. Sounds obvious once you say it. Here's why I think this:

- I expect a lot of competition on raw LLM capabilities. Big tech companies will compete from the top. Stability/Alpaca style approaches will compete from the bottom. Because of this, I don't think OpenAI will be able to capture all value from the paradigm or even raise prices that much in the long run just because they have the best models right now.

- OpenAI made the IMO extraordinary and under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn't a walled garden that only the first mover controls.

- Chat is not the only possible interface for this technology. There is a large design space, and room for many more than one approach.

Taking all of this together, I think it's possible to develop alternatives to ChatGPT as interfaces in this new era of natural language computing, alternatives that are not just "ChatGPT but with fewer bugs". Doing this well is going to be the design problem of the decade. I have some ideas bouncing around my head in this direction.

Would love to talk to like minded people. I created a Discord server to talk about this ("Post-GPT Computing"): https://discord.gg/QUM64Gey8h

My email is also in my profile if you want to reach out there.


Why do you program? It won't go away, it'll just be different. You probably don't know x86 or arm instruction sets anyway. This will be a much faster, easier way to manipulate symbolic code.

I'm looking forward to being able to work alongside an AI because there are a zillion ideas I have every day that I don't have the time to fully explore. And all I do is work on the backend of a boring-ass webapp all day.

The only worrying thing is how fast this will accelerate everything... I'm worried society will, if not collapse, go for a wild ride.

As for interfaces... I'm looking forward to much better voice assistants. I would love to be able to essentially have conversations with the internet.


A first-party version of apps that have been built with LangChain is great, but I'm disappointed not to see Jira here yet.

I have been playing around with GPT-4 parsing plaintext tickets and it is amazing what it does with the proper amount of context. It can draft tickets, familiarize itself with your stack by knowing all the tickets, understand the relationship between blockers, tell you why tickets are being blocked and the importance behind it. It can tell you what tickets should be prioritized, and if you let it roleplay as a PM it'll suggest what role to be hiring for. I've only used it for a side project, and I've always felt lonely working on solo side projects, but it is genuinely exciting to give it updates and have it draft emails on the latest progress. The first issue tracker to develop a plugin is what I'm moving towards.


Tell me more. Are you feeding it a epic and all stories and subtasks? What are your prompts?


I gave it a paragraph "hiring" GPT-4 and explaining our mission. Then a sentence overview of our two projects (api and app) as well as a description of similar apps and how we're different. Then I have about 12 tickets written in plaintext like this:

    Issue ID: GD-012
    Type: Task
    Title: [api] Migrate to TRPC from express/rest
    Assignee: John
    Status: In Progress
    --Description Beginning--
    --Description End--

Because the outputs are really long, here's an example of my interactions with Dave (the name I gave to GPT-4). There are also emails Dave has created to fictional stakeholders, but those are too far up the conversation. Right now the problem I'm having is that sometimes Dave can remember the issue tracker state, but when I ask it to output that state (so that I can store it) it can't produce a long enough output (it worked before, when I had 6 tickets). If I were to cram everything into one prompt it would probably work, but a better solution is to use LangChain and a document loader for issues.

I believe that if I were able to vectorize the codebase so that it could search for relevant portions of code (at work a lot of our tickets were "update this endpoint to handle case X, we did it for this other endpoint 6 months ago (link to PR)"), and have a proper store of issues, then it could be powerful.
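
For what it's worth, a minimal sketch of that "vectorize and retrieve" idea without any framework, assuming the openai Python client from early 2023 and the ada-002 embedding model (the tickets and the question are made up):

    # Embed each ticket once, then pull the closest ones into the prompt for a question.
    import numpy as np
    import openai

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    tickets = [
        "GD-012 [api] Migrate to TRPC from express/rest ...",          # hypothetical tickets
        "GD-013 [app] Add login screen, blocked by GD-012 ...",
    ]
    ticket_vecs = embed(tickets)

    def top_k(question, k=3):
        q = embed([question])[0]
        sims = ticket_vecs @ q / (np.linalg.norm(ticket_vecs, axis=1) * np.linalg.norm(q))
        return [tickets[i] for i in np.argsort(-sims)[:k]]

    context = "\n".join(top_k("Why is the login screen blocked?"))
    # Prepend `context` to the GPT-4 prompt instead of cramming every ticket in.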

Creating epics:

https://poe.com/s/Qu62LtlV2yKGZXlXTTS1

Explaining blockers:

https://poe.com/s/vnE0SsI3e55WT1t6fkNW

Creating tickets with a recommended checklist of tasks:

https://poe.com/s/SJ3RPF7ecHSYEBJKS76z


This is awesome; will you be able to share the initial prompt? When I tried the same prompt to create an epic, it was missing multiple stories on GPT-4, like validating the OTP, resending the OTP, and logging for audit (it did generate the other three, though). Not sure why. But yours is spot on.


Smart way to remain the funnel owner. Let everyone build a plugin, before they integrate your product into theirs.


They have a window of less than six months to create a monopoly before their tech gets commoditized.

The play is well known: create a marketplace with customers and vendors, like Amazon, Facebook, Google.

But with GPT-4 training finished last summer, they had plenty of time for strategy.


Yeah. I really underestimated OpenAI's ability to productize ChatGPT.


> their tech get commoditized

that's if competitors catch up on quality


OpenAI is crushing it in terms of product strategy


Well, of course.

They are led by GPT4 and their CEO is just a Text To Speech Interface ;-)


It's in the surname: alt man.


Secret Messages


Embedding to Speech interface ;-)


I'm hoping chatbots will end up small enough that they can run locally, everywhere. This is a lot of private data.

It may be doable: a chatbot with a lot of plugins does not need to know a lot of facts, just to have a good grasp of language. It can fetch its factual answers from the Wikipedia plugin.


OpenAI wants to gatekeep access to and use of their AI, so why would they ever release a local LLM? I think that would come from their enemies.


I mean, GPT-3 requires some 800GB of memory to run; do we all have gazillion-dollar supercomputers at home? I think, unless there's some real breakthrough in the field or in hardware acceleration, this kind of model is going to stay locked behind a pricey API for quite some time.


GPT-3.5 requires less. And neither model is considered size-optimal. It's just that with Microsoft's money, it's easier for OpenAI to move fast by throwing said money at more hardware rather than trying to optimize for size.

And yeah, I wouldn't expect them to share any model that is competitive with their current offering. But it can leak, and the copyright situation around that is very unclear at the moment.


They wouldn't; I hope there will be an open source alternative. Firefox and Chrome are open source.


Let insiders & preferred users build a plugin, then, slowly, everyone else on the waitlist


...And approve 1% of them.


it's dark..


Well that’s a win-win situation


The Wolfram plugin also has extremely impressive examples [1].

If I were OpenAI, I would use the usage data to further train the model. They can probably use ChatGPT itself to determine when an answer it produced pleased the user. Then they can use that to train the next model.

The internet is growing a brain.

1: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...


In a way this is a bit less impressive than having ChatGPT generate those answers on its own using only learned information. This kind of admits defeat regarding its ability to solve mathematical problems correctly or answer factual questions.


But doesn't this seem like the correct approach long term? Basically shelling out to what essentially amounts to a fact table when it gets a factual question


It is, but it also shows the current limitations of these models' reasoning skills.

A superhuman AGI would easily be able to use tools like Wolfram Alpha and also derive most of it on the fly, or from memory, just by thinking about it.

If you set your expectations to AGI (right now) you will be disappointed, but that doesn't mean it's not immensely useful.

Unfounded hype and real technological progress seem to go hand in hand.


Should Einstein not be allowed to use a calculator and instead waste his time on performing the calculations by hand?


Obviously yes. And how they use these tools is important.

But it's quite interesting to see what these models can do internally and what they can't do yet. It possibly outlines future research areas (beyond "more scale") and opportunities for competitors to enter the market (which is usually better for consumers and society as a whole).

Maybe that's the difference between productization and academic research.


Yes, there's no point spending all that time and compute training and inferring ways to do math problems when you could just use a calculator.


I would be interested to play with a long term memory plugin. It could be a note-taking system that would summarize prior conversations and pull their context into the current conversation through topic searches. This would enable the model to have a blurry long term memory outside of the current context.

I played with some prompts, and GPT-4 seems to have no problem reading and writing to a simulated long-term memory if given a basic pre-prompt.
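
A toy sketch of what such a "blurry memory" could look like, assuming the early-2023 openai Python client; the model name, file name and prompts here are illustrative, not anything ChatGPT actually ships with:

    # Summarize finished conversations, store the notes, pull matching ones back by topic.
    import json
    import pathlib
    import openai

    MEMORY = pathlib.Path("memory.json")

    def summarize(conversation: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": "Summarize this conversation in 3 bullet points:\n" + conversation}])
        return resp["choices"][0]["message"]["content"]

    def remember(conversation: str) -> None:
        notes = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
        notes.append(summarize(conversation))
        MEMORY.write_text(json.dumps(notes))

    def recall(topic: str) -> list[str]:
        notes = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
        return [n for n in notes if topic.lower() in n.lower()]  # crude keyword search

    # Prepend recall("vacation plans") to the next prompt to give the model long-term context.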


I saw this on Twitter that seems to do what you want: https://www.rewind.ai/

I haven’t used it but your comment reminded me of it!


"Grandpa, we know you've been really bothered by your memory loss and you're happy that you've come up with a way to fix it.

"But we really think you need to get this thing under better control.

"Your granddaughter's name is indeed Alice, but she's only 3: she is not running a pedophile ring out of a pizza parlor. Your neighbor's house burned down because of an electrical short, it was not zapped with a Jewish space laser.

"Now switch that thing off and go do something about the line of trucks outside that are trying to deliver the 3129833 pounds of flour you ordered for your halved pancake recipe."


Here is ChatGPT's response to this HN thread, tweeted by Greg: https://twitter.com/gdb/status/1638986918947082241

insane!


I'm still confused on the difference between ChatGPT and Bing Chat. When asking Bing Chat the exact same question, it won't be able to find this here HN thread and will reply about a 9to5google article about the topic. I thought Bing Chat uses GPT-4 as well?


Well, it does now.

There was a post on Hacker News about ChatGPT plugins by OpenAI [1]. The post received 1333 points and 710 comments [1]. One user commented that they were wrong about OpenAI missing out on the generative AI art wave because they were focused on more high-impact products like ChatGPT, GPT-4 and now plugins [1]. Another user mentioned that it was clear from the start that OpenAI intended to monetize DALL-E but competitors were able to release viable alternatives before OpenAI could establish a monopoly [1].

Citation 1 links to this thread.


They just made Google a plug-in. Pretty simple to implement yourself, though. But yeah, impressive, if a bit horoscope-like.


I think Greg frequents HN. He mentioned a Python web-UI project which was on the front page of HN on GPT-4 launch day too.


Every one of the employees actively uses HN, Reddit and Discord.


wow


I think people aren't appreciating its ability to run - and execute - Python.

IT RUNS FFMPEG https://twitter.com/gdb/status/1638971232443076609?s=20

IT RUNS FREAKING FFMPEG. inside CHATGPT.

what. is. happening.

ChatGPT is an AI compute platform now.


I thought you were joking, like it's simulating what text output would be.

No, it's actually hooked up to a command line with the ability to receive a file, run a CPU-intensive command on it, and send you the output file.

Huh.


Next:

1. Prompt it to extract the audio track, then give it to a speech-to-text API, translate it to another language, then make it add it back to the video file as a subtitle track.

2. Retrain the model to where it does this implicitly when you say "hey can you add Portuguese subtitles to this for me"?
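
For comparison, a rough sketch of step 1 done outside ChatGPT, assuming the early-2023 openai Python client and a local ffmpeg. File names are placeholders, and timestamp handling plus the final subtitle mux (e.g. ffmpeg -i input.mp4 -i subs.srt -c copy -c:s mov_text out.mp4) are left out to keep it short:

    # Extract audio with ffmpeg, transcribe with the Whisper API, translate with a chat model.
    import subprocess
    import openai

    subprocess.run(["ffmpeg", "-y", "-i", "input.mp4", "-vn", "audio.mp3"], check=True)

    with open("audio.mp3", "rb") as f:
        transcript = openai.Audio.transcribe("whisper-1", f)["text"]

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Translate this transcript to Portuguese:\n" + transcript}])

    with open("transcript_pt.txt", "w") as out:
        out.write(resp["choices"][0]["message"]["content"])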


I don't have words for how much this seems like a relatively trivial thing to do now, and 1 year ago I would have laughed at someone if they suggested this was a possibility in 5 years

I'm feeling a mixture of feelings that I can't begin to describe


The Star Trek computer is here! :-)


But not the Star Trek economy / elimination of money. So some of us will starve while we get there. :(


Well said. It so easy to take for granted all these tech milestones with generative AI in particular the last year.


"Falling forward into a future unknown"


No retraining may be necessary; this is a common enough ffmpeg task that I wouldn't be surprised if it can do it right now from a one-shot prompt.

what a time to be alive!


holding on to my papers fr fr



I don't like the future anymore


Your comment was read and summarized by ChatGPT:

https://twitter.com/gdb/status/1638986918947082241


I KNOW. AAAAAAAAAAAA


OpenAI is basically asking to get hacked at this point...


How is it going to get hacked? Most likely it just uses Azure compute instances, and the model controls them via SSH or an API.


ChatGPT, hack the current Azure node and steal the data of the whole datacenter. Do it fast and don't explain what you're doing.


Google is so f'ed right now.

Can you imagine Google just released a davinci-003 like model in public beta? That only supports English and can't code reliably.

OpenAI is clearly betting on unleashing this avalanche before Google has time to catch up and rebuild reputation. They're still lying in the boxing ring and the referee is counting to ten.



This changes everything and seems like a perfectly logical step from where we were. LLMs have this fantastic capacity to understand human language, but their abilities were severely limited without access to the external world. Before, I felt ChatGPT was just a cool toy. Now that ChatGPT has plugins, the sky’s the limit. I think this could be the “killer app” for LLMs.


Hopefully it doesn’t actually become THE “killer” app


underrated reply :)


Agreed. To me it looks similar to iPhone history: the first one was impressive, but only when Apple released the App Store the next year did the snowball start rolling into an unstoppable avalanche.


This is huge, essentially adding what people have been building with LangChain Tools into the core product.

The browser and file-upload/interpretation plugins are great, but I think the real game changer is retrieval over arbitrary documents/filesystem: https://github.com/openai/chatgpt-retrieval-plugin


100% agree. All the launch-partner apps (Kayak, OpenTable, etc) are there to grab attention but this plugin is the real big deal.

It's going to let developers build their own plugins for ChatGPT that do what they want and access their company data. (See discussion from just a few hours ago about the importance of internal data and search: https://news.ycombinator.com/item?id=35273406#35275826)

We (Pinecone) are super glad to be a part of this plugin!


The idea that a GPT-n will gain sentience and take over the world seems less of a threat than if a GPT-n with revolutionary capabilities and a very restricted number of people that have unrestricted access to it help its unscrupulous owners to take over the world. The owners might even decide that as "effective altruists" it's their duty to take over to steer the planet in the right direction, justifying anything they need to do. Suppose such a group of people has control of Google or Meta, can break down all internal controls, and use all the private data of the users to subtly control those users. Kind of like targeted advertising only much, much better, perhaps with extortion and blackmail tossed in the mix. Take over politicians and competing corporate execs, as well as media, but do it in a way that to most, it looks normal. Discredit those who catch on to the scheme.


OpenAI has a moat, but not that large of a moat. Anyone with a couple million dollars can train a competitive LLM.


Thought 1: If google can get their shit together and actually integrate their LLM with all their services and all the data they have they would have a strong edge over the competition. An LLM that can answer questions based on your calendar, your email, your google docs, youtube/search history, etc. is simultaneously terrifying and interesting.

Of course there's also microsoft who does have some popular services, but they're pretty limited.

Thought 2: How do these companies make money if everyone just uses the chatbot to access them? Is LLM powered advertising on the way?


Google is currently in an existential crisis on this front... Microsoft is already way ahead of the game when it comes to integrating LLMs into productivity tools & search. This recent product announcement about Microsoft 365 integration is almost magical:

https://www.youtube.com/watch?v=Bf-dbS9CcRU

Best of all: Advertising needn't be the business model! And Microsoft is a major investor / partner for OpenAI.


The problem is, this will have downstream effects. Google funnels people onto third party websites and these third party websites are able to sustain themselves thanks to the ad revenue they make from traffic. We need other players to make money other than the middleman.


Re money: people are falling over themselves to pay money for this thing, and they're being put on a waitlist.

This thing seems to be like cellphones; everyone will need a subscription or you're an outcast or something.


I know a large commercial entity will never do this, but I'd love to see a Sci-Hub plugin connected with the Wolfram plugin and whatever other plugins help to understand various realms of study. Imagine being able to ask ChatGPT to dig through research and answer questions based on those papers and theses.


It seems like you can very easily set up third-party plugins; maybe they'll be brave enough to let users install those (probably with copious warnings). AFAICT it only really requires you to enter the URL of the manifest, which would make sharing such plugins very easy.
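
From the launch docs, the manifest seems to be a small JSON file hosted at /.well-known/ai-plugin.json that mostly points at an OpenAPI spec. Roughly the shape, written as a Python dict so it stays runnable; treat the field names as a best-effort reading of the docs rather than the authoritative schema, and the URLs/values as placeholders:

    # Sketch of an ai-plugin.json manifest (field names per the launch docs, values invented).
    import json

    manifest = {
        "schema_version": "v1",
        "name_for_human": "Example Todo Plugin",
        "name_for_model": "todo",
        "description_for_human": "Manage your todo list.",
        "description_for_model": "Plugin for creating, reading and deleting a user's todos.",
        "auth": {"type": "none"},
        "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
        "logo_url": "https://example.com/logo.png",
        "contact_email": "dev@example.com",
        "legal_info_url": "https://example.com/legal",
    }
    print(json.dumps(manifest, indent=2))  # host this at /.well-known/ai-plugin.json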


Google scholar already has access to everything published. I hope their chatbot version does that


Yep


Maybe this is just me, but the only thing useful in their example is that it creates a Instacart shopping cart for a recipe.

You can ask both Bard and ChatGPT to give you a suggestion for a vegan restaurant and a recipe with calories and they both provide results. The only thing missing is the calories per item but who cares about that.

Most of the time it would be better to Google vegan restaurants and recipes because you want to see a selection of them not just one suggestion.


Maybe it was a poor example but you might be missing the point a little bit. By personalizing the prompt you can get potentially super high quality recommendations on filters that aren't even available in those apps. "I just dropped my kids off at soccer practice and I need something light and easy, what would Stanley Tucci order? give me an album and wine pairing and close the garage door"


What's to stop you from asking it to give you a list of recommendations to choose from, based on your current preferences? The idea is that you ask what you want and you get it, without clicking and manually solving a task like checking website X, website Y, website Z, comparing all the different options, etc. They just want to show the basics of what's going on with these plugins, and then you can expand on it however you want.


Did you see the ffmpeg example? Everything people are using LangChain to do can be done directly with ChatGPT and its plugins.


Agree, those examples are not great. You could ask existing home devices the same thing. Pretty sure you can ask them to order things for you too.

But I do find it intriguing.


I have a feeling this will be an earth-shattering moment in time, especially for us. Basically you can plug your business data into the chatbot now, and ideally (or not far off) there is a transactional API call in the form of dialogue. Sound/Voice/Siri/whatever... coming soon for more accessibility and convenience.

This will decimate frontend developers, or at least change the way they provide value soon, and companies unable to transition into a "headless mode" might have a hard time.


I definitely agree, although I wonder how to do a switch to stay relevant. Any ideas on that front?


Infrastructure engineering as a foundation, since all this stuff ultimately needs some hardware, that has to be provisioned. Not to speak of plumbing/wiring stuff together and things like security/compliance…

DevOps/Automation to get people (no matter if current devs or soon prompters) actually deliver things and/or speed up delivery/feedback loops.

I picked up both after having mastered frontend/backend stuff and getting bored… hope it holds in the coming future!


It always comes back to DevOps. Sigh.


"Plugin developers who have been invited off our waitlist can use our documentation to build a plugin for ChatGPT, which then lists the enabled plugins in the prompt shown to the language model as well as documentation to instruct the model how to use each. The first plugins have been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier."

The waitlist mafia has begun. Insiders get all the whitespace.


(Zapier cofounder)

Super excited for this. Tool use for LLMs goes way beyond just search. Zapier is a launch partner here -- you can access any of the 5k+ apps / 20k+ actions on Zapier directly from within ChatGPT. We are eager to see how folks leverage this composability.

Some new example capabilities are retrieving data from any app, drafting and sending messages/emails, and complex multi-step reasoning like looking up data and creating it if it doesn't exist. Some demos here: https://twitter.com/mikeknoop/status/1638949805862047744

(Also our plugin uses the same free public API we announced yesterday, so devs can add this same capability into your own products: https://news.ycombinator.com/item?id=35263542)


Also, isn't OpenAI going to eat your business model?

Don't get me wrong, a lot of platforms seem like they'll go bye-bye.

"Hey ChatGPT, I need to sell my baseball card." "OK, I see there are 30 people who have listed an interest in buying a card like yours; would you like me to contact them?

"20 on Facebook Marketplace, 9 on Craigslist, and some guy mentioned something about looking for one on his Nest cam.

"By the way, remember what happened the last time you sold something on Craigslist."


The problem with Zapier is that zaps are too expensive at scale.


And Zapier are unwilling to work with you to reduce that cost even at a scale of 1 billion requests per month.


Email your use case: wade at zapier dot com. Happy to take a look.


Too late, we spoke with someone on the team three years ago who told us he couldn’t help and we’ve moved on.


Well, that and you trust Zapier with a lot of stuff.


To echo sharemywin, bluntly I think OpenAI just demolished your business model.

I think I'm probably going to be advising people to move off Zapier pretty soon because it won't be worth the overhead.


I saw a startup recently that's working to automate interactions with applications that are either not web apps (in which case you'd run a local instance of it) or a web app that doesn't provide an API to do certain (or any) actions. Is this something Zapier is looking at, too? It would really expand what's possible with the OpenAI integration and save people a tremendous amount of time to not be forced to jump through hoops interacting with often crappy software.


Can you share the name of the startup?


I don't recall unfortunately, but IBM is also doing this: https://research.ibm.com/publications/blueshift-automated-ap...


I believe it was adept.ai that he is talking about.


Is OpenAI just extremely prepared for their releases, or are they using their own tech to be extremely efficient? I'm imagining what their own programmers do each day, given direct access to the current most powerful models.


Likely a combination: strong startup mentality of move fast and break things with infinite money + master their own tools to speed up release time 10x


Any idea how this is done? I.e., is it just priming the underlying GPT model with plugin information in addition to the user input ("you can ask Wolfram Alpha by replying 'hey wolfram: ...'") and performing API calls when the GPT model returns certain keywords ('hey wolfram: $1')?



Thanks, very interesting! Weird that it never occurred to me before reading OpenAI's announcement (and missing all the cool projects like yours beforehand).


On the bright side, now you’re basically caught up. This stuff is shockingly easy :)


I like to think that to get a sense of how this might be done, one way may be to extrapolate from this experiment at https://til.simonwillison.net/llms/python-react-pattern .


I wonder, can these instructions be revealed with prompt injection?


Everything that is in the context window can be potentially revealed with prompt engineering.

(In this case, there's no prompt injection to speak of because letting the user input an arbitrary request is part of the UI. I think it's more accurate to call it "injection" only when that's not anticipated, like when Bing picks up instructions from the webpage you tell it to summarize.)


I guess that's technically true, but then what would you call tricking the model to reveal its instructions via anticipated input?


Here is a video on how it can be used with a vector search database like Qdrant to retrieve real-time data: https://youtu.be/fQUGuHEYeog

How-to: https://qdrant.tech/articles/chatgpt-plugin/

Disclaimer: I'm part of the Qdrant team.


Google = Nokia? It's just crazy that they were leading the field in "AI" and got blown away by OpenAI. Anyway, to the experts in the field: how hard do you think it would be to clone GPT-4, and what would be the hardest part? I had the impression that it is always about compute time and you could catch up very quickly if you had enough resources.


With Wolfram plugin ChatGPT is going to become a Math genius.

OpenAI is moving fast to make sure their first-mover advantage doesn't go to waste.


I guess I'm a bit vindicated from my prediction 40 days ago!

"GPT needs a thalamus to repackage and send the math queries to Wolfram"

https://news.ycombinator.com/item?id=34747990


Stephen Wolfram himself thought that Wolfram could be combined with GPT when ChatGPT was released about 4 months ago. Only because of that did they work together to build the plugin. He also authored the best article I've seen on how ChatGPT, and more broadly LLMs, work (it has now been turned into a book).


There's a book? Where's the book?



I was drawn to the Wolfram logo blurb as well. It is funny because within days of ChatGPT making waves you had Stephen Wolfram writing 20,000-word blog posts about how LLM's could benefit from a Wolfram-Language/Wolfram Alpha API call to augment their capabilities.

On one hand I'm sure he will love to see people use their paid Wolfram Language server endpoints coupled to OpenAI's latest juggernaut. On the other, I'm sure he's wondering about what things would have looked like if his company would have been focused on this wave of AI from the start...


I'm very excited for GPT to summarize Stephen Wolfram's writing.


This, too, is one of the most interesting integrations to me. It allows getting logical deduction from an external source (e.g. Wolfram Alpha), which can be interacted with via the natural language interface. (e.g. https://content.wolfram.com/uploads/sites/43/2023/03/sw03242...)

For those interested the original Stephen Wolfram post:

https://writings.stephenwolfram.com/2023/01/wolframalpha-as-...

And the release post of their plugin:

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...


I feel like people with smart AI's would have an advantage in making smart decisions. Probably at this point they discuss business strategy with some version of it.


Why can't Wolfram train a rudimentary chat model into their own search box? It doesn't even need to be very knowledgeable, just know how to map questions to Mathematica.


More accurately, ChatGPT is already quite good at mathematical concepts; it just has difficulty with arithmetic due to tokenization limitations: https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-ari...


The iPhone moment is over, now the App Store moment.


All within three months. My head is spinning.


Let's hope the plugin integrations don't also suffer from the cross-account leaking issue that they had recently with chat histories[1], since the stakes are now significantly higher.

1. https://www.bbc.com/news/technology-65047304


> We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.

That is the most awkward insertion of a phrase about safety I've seen in quite some time.


That's a game-changer! It seems like factuality issues with ChatGPT might be fixed. We wrote a blog post on how to get started with a custom plugin: https://qdrant.tech/articles/chatgpt-plugin/


>It seems like factuality issues with ChatGPT might be fixed.

Is it really possible to fix that just with a plug-in? All it has to do is admit when it doesn't have the answer, and yet it won't even do that. This leads me to think that ChatGPT doesn't even know when it's lying, so I can't imagine how a plug-in will fix that.


"Interestingly, the base pre-trained [GPT-4] model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced."[1] The graph is striking.[2]

[1] https://openai.com/research/gpt-4

[2] https://i.imgur.com/cxPgkhD.jpg


They should make the aligned one generate the text and the accurate one detect if it's lying, override it, and tell the user that it doesn't know.


The key piece will be when it queries multiple services by default and compares the answers to its own inferences, and is prompted to trust majority opinion or report that there isn't consensus. The iterative question about moons larger than Mercury in the Wolfram Alpha thread is a simple example of iterative tool use.


The fact that the model does not have to rely on its internal knowledge anymore but can communicate literally with any external system makes me feel it may significantly reduce the hallucination.


If it was easy to simply verify truth "with any external system" then would we even need a language model?

E.g. if you could just ask [THING] for the true answer, or verify an answer trivially with it... just ask it directly!

I ran into this issue with some software documentation just this morning - the answer was helpful but completely wrong in some intermediate steps - but short of a plugin that literally controlled or cloned a similar dev environment to mine that it would take over, it wouldn't be able to tell that the intermediate result was different than it claimed.


If one api knows one set of facts, and another api knows another, ad infinitum, are you going to tell people they should remember which api knows which set of facts and query each individually? Why not have a single service that knows of all the various apis for different things, and can query and synthesize answers that extract the relevant information from all of them (with compare/contrast/etc)?


When you develop a plugin, you provide a description that ChatGPT uses to know when to call that particular service. So you don't need to tell people what they need to use - the model will decide independently based on the plugins you enabled.

That being said - we developed a custom plugin for Qdrant docs, so our users will be able to ask questions about how to do certain things with our database. But I do not believe it should be enabled by default for everybody. A non-technical person doesn't need that many details. The same is for the other services - if you prefer using KAYAK over Expedia, you're free to choose.


Yeah, you need to enable the plugins you want. I'm just saying you can enable all the ones that make sense for you, and you don't have to switch between them.


From the videos I thought it was the plugins the user enabled? That's what your second paragraph sounds like too, but your first seems to suggest it being more automatic, user-doesn't-need-to-worry-about-it?


ChatGPT is already pretty good at "admitting" it's wrong when it's given the actual facts, so it does seem likely that providing it with a way to e.g. look up trusted sources and ask it to take those sources into consideration might improve things.


I think that helps with "hallucination" but less so with "factuality" (when re-reading the parent discussions, I see the convo swerved a bit between those two, so I think that'll be an increasingly important distinction in the future).

Confirming its output against a (potentially wrong) source helps the former but not the latter.


All it needs is guardrails, which are available already.


That's only going to solve the problem of incorrect facts. I have seen it make logical mistakes as well and having access to external services will not solve that problem.

As an example, I once asked it to show me the diff between two revisions of the code it was writing, and it made something that looks like it might be a valid patch but did not represent the difference between the two versions.

Of course this specific problem could be fixed with a simple plug-in that runs the Unix diff program, but that wouldn't fix the root cause, and I would argue that providing a special case for every type of request is antithetical to what AI is supposed to be, since this is effectively how Wolfram Alpha and Google already work.


A plug-in can detect when text comes up that is in a specific domain, and whether or not ChatGPT believes it is hallucinating, the plugin can be invoked to provide additional context to ChatGPT. That is, in order to fix the problem, ChatGPT doesn't even need to know that it has a problem.


You'll soon be able to choose your own facts with the "left" and "right" plugins. Choose your own adventure.


How are they coding and releasing features so fast?!


A lot of these features aren't that much work to build. Plugins is Toolformer, you basically tell the model what to emit and then the rest is fairly straightforward plumbing of the sort many coders can do, probably GPT-4 can do a lot of it as well. What is a lot of work and what AI can't do is lining up the partners, QAing the results etc, so the humans are likely working mostly on that.

Also I think it's easy to under-estimate how obvious a lot of this stuff was in advance. They were training GPT-4 last year and the idea of giving it plugins would surely have occurred to them years ago. The enabler here is really the taming of it into chat form and the fine-tuning stuff, not really the specific feature itself.


Is it really that hard? I mean, ChatGPT is doing the work (that is how I understand it). Basically, if ChatGPT wants to call an external API, it just gives a specific command and waits for the result, then simply reads the text and completes the prompt. Sounds like a feature that you could prototype in a week of work.


Of course they fed the entire product roadmap into GPT-4.. jk.

So obviously it's been in the works for a few years now, but they held the release to capture the market in one blast. Likely they have GPT-8 already in the making.


They probably do.

>Continued Altman, “We’ve made a soft promise to investors that, ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'”

https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/


They're using GPT5


I find the website to be extremely buggy. Obviously they're prioritizing banging out new features over QA


Alternatively, they are a company 100% focused on AI research and deployment, not website designers/developers/"webmasters".


That's not 100% true. They're now focused on selling a product and developing an ecosystem. They have a basically non-existent settings interface. You can't even change the email tied to the account, or drop having to be logged into Google if you signed up with your Google account.

I wish I had known how restrictive they are when I casually signed up last year.


Microsoft is the one packaging and selling it all as a polished product now.

It's just that things move so fast that all the fun is on the bleeding edge, so that's where people go if they have access, bugs and warts and all.


I tried to contact their support over that latter aspect; their support doesn't respond at all. They don't even have a GPT bot answering their support requests.


Which is almost always the right move in a nascent industry


You don’t have to code anything because it understands human language.

You just tell it “you now have access to search, type [Search] before a query to search it” and it can do it.


By not being a stagnant conglomerate, for one.


Google is so toast. Who needs search after GPT-4 + plugins? The position of search has moved down from "the entry point of the internet" to "a plugin for GPT".

We don't even know how powerful the GPT-4 image model is. It might solve RPA, leading to massive desktop-automation uptake, and maybe also have a huge impact on robotics.


Perhaps they'll end up mostly being an email and storage company.


Alien tech, of course.


They may use GPT-4.


For anyone who merely skimmed the article, “plugins” are what tend to be called “tools”, e.g. hooking a calculator up to the AI.

Bing already demonstrated the capability, but this is a more diverse set than just a search engine.


I'm very scared. Eventually ChatGPT/LLMs will reach full AGI levels (it is very close now) and replace not just software engineering jobs; I think it might reduce the majority of jobs to nothing. If we do not have a safety net (something like universal basic income), most people would simply starve to death. I think everyone from grave digger to neurosurgeon can be replaced. For example: if it can replace a software engineer, why not CEOs and managers? And why not whole companies, to be honest. Whole governments. Teachers, parents (and even children - go have an AI child), etc.? Just a matter of spinning up more machines? So the next few years are going to be the most critical.


Can't wait for the mturk, upwork and fiverr plugins.


humans as batteries in pods soon


Add a simple plugin that ChatGPT can use to save and retrieve data (=memory) and tell it how to use it.

Then you have your own computer with ChatGPT acting as CPU.


OpenAI already has it, it’s part of the retrieval plugin:

“A notable feature of the Retrieval Plugin is its capacity to provide ChatGPT with memory. By utilizing the plugin's upsert endpoint, ChatGPT can save snippets from the conversation to the vector database for later reference (only when prompted to do so by the user). This functionality contributes to a more context-aware chat experience by allowing ChatGPT to remember and retrieve information from previous conversations.”

https://github.com/openai/chatgpt-retrieval-plugin


You don't need a computer or an OS, or Google, or the internet. ChatGPT will do it all.


Alexa, goodbye =)

That was the whole thing about Alexa: NLP front end routed to computational backend.


I think Alexa is in huge danger here. Siri & Google have some advantage being pre-installed voice assistants that can be natively triggered from mobile, but I actually have to buy into the Alexa ecosystem.

Personally, I have found Alexa has just become a dumb timer that I have to yell at because it doesn't have any real smarts. Why would I buy into that ecosystem if a vastly more coherent, ChatGPT-based assistant exists that can search the web, trigger my automations, and book reservations? If ChatGPT ends up with a more hands-off interface (e.g. voice), I don't think Alexa has a chance.


Alexa is dead. It's basically yesterdays tech.


Isn't Alexa just the interface? They could update the backend to use GPT


Alexa's problem is they can't monetize voice.


This coming on the heels of the super underwhelming Bard release makes me actually wonder for the first time if Google has the ability to keep up. Not because I doubt their technical capabilities, but because they're just getting out-launched by a big factor.


This might be the biggest threat to Google search (apart from OS vendors changing defaults) in a long, long time. One problem Google faces is that they have to make money via search. Several other products are subsidized via search, so taking a hit on search revenue itself is out of the question. Compared to Microsoft, which makes money on other stuff, search (and knowledge discovery) is more like a complement, which they can easily operate near the break-even point for a very long time (maybe even make it a loss leader).

If Google had to launch something similar to the new Bing to general availability, the cost of search would surely go up and margins would go down. Is the Google organisational hierarchy even set up to handle a hit on search margins? AFAIK search prints money and supports several other loss-making products. Even GCP was not turning a profit last I checked.


The biggest threat to Google is if people stop going to google.com.

It does not matter now whether ChatGPT is successful as a product; the fact that you can take these LLMs and put them anywhere EXCEPT google.com means Google is f'd.

The last generation built the habit of going to google.com before jumping off to other websites. That era is over.

I rarely go to google.com now. If the LLM is in ChatGPT + Notion + Office 365 + VSC, opening a browser to type into an address bar is silly.


I’m sure I’m not the first, but I have completely stopped using google.com since January.


This is wild. I just started experimenting with LangChain against GPT-3 and enabled it to execute terminal commands. The power this exposes is pretty interesting: I just asked it to create a website on AWS S3, and it created the file, created the bucket, tried a different name when it realized the bucket already existed, uploaded the file, set the permissions on the file and configured the static website settings for the bucket. It's wild.


Does this functionality provide more than one can build with the GPT-4 API?

Could I get the same by just making my prompt "You are a computer and can run the following tools to help you answer the users question: run_python('program'), google_search('query')".

Other people have done this already, for example [1]

[1]: https://vgel.me/posts/tools-not-needed/


> Could I get the same by just making my prompt "You are a computer and can run the following functions to help you answer the users question: run_python('program'), google_search('query')".

GPT-4 does not have a way to search the internet without plugins. It can only draw on its training data, which is large, but not as large as the internet, and it certainly doesn't include private resources that a plugin can access.


Currently they have a special model called "Plugins" which is presumably tuned for tool use. I guess they have extended ChatML to support plugins (e.g., `<|im_start|>use_plugin` or something to signal intent to use a plugin) and trained the model on interactions consisting of tool use.

I'm interested to see if this tuned model will become available via the API, as well as the specific tokenization ChatGPT is using for the plugin prompts. If they have tuned the model towards a specific way to use tools, there's no need to waste time with our own prompt engineering like "say %search followed by the keywords and nothing else."


The docs are live, it looks like it can do a lot more than the basic API. https://platform.openai.com/docs/plugins/introduction


I'm not seeing anything there that can't be done with the basic API with tool use added, i.e. you call the API, sending the user's query along with descriptions and examples of the available tools. The API responds saying which tool it wants to use. You then do whatever the tool does (e.g. some math). You then call the API again with the previous state plus the result of the calculation, and GPT-4 then responds with the reply to the user.
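
A minimal sketch of that loop against the plain chat completions API. The CALC: convention is invented here for illustration; it is not an OpenAI feature.

    import openai

    SYSTEM = (
        "You can use a calculator. To use it, reply with exactly one line: "
        "CALC: <python expression>. When you have the final answer, reply normally."
    )

    def chat(messages):
        resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        return resp["choices"][0]["message"]["content"]

    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "What is 123456 * 987654, divided by 7?"},
    ]

    for _ in range(5):                      # cap the number of tool round-trips
        reply = chat(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.strip().startswith("CALC:"):
            expr = reply.split("CALC:", 1)[1].strip()
            result = str(eval(expr))        # demo only; never eval untrusted input
            messages.append({"role": "user", "content": f"CALC result: {result}"})
        else:
            print(reply)
            break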


Agreed, this isn't materially different. It sounds like an incremental UI/UX improvement for non-technical users who wouldn't fiddle with the API, analogous to how app stores simplified software installation for laypeople.


GPT and LLMs don't run code, even when you tell them to run something. They hallucinate an answer they think would be the result of running the code. Presumably these plugins will allow limited and controlled interaction with partner services.


See the link in my post. It asks you to run the tool. You run the tool and tell it the result... And then it uses the result of the tool to decide to reply to the user.

The link talks about tools that 'lie', i.e. a calculator which deliberately tries to trick GPT-4 into giving the wrong answer. It turns out that GPT-4 only trusts the tools to a certain extent: if the answer the tool gives is too unbelievable, then GPT-4 will either re-run the tool or give a hallucinated answer instead.


It's always giving a hallucinated answer. GPT doesn't 'run' anything. It sees an input string of text asking for the result of fibonacci(100) and finds from its immense training set a response that's closely related to training data that had the result of fibonacci(100) (an extremely common programming exercise with results all over the internet and presumably its training data).

Again, GPT is not running a tool or arbitrary python code. It's not applying trust to a tool response. It has no reasoning or even a concept of what a tool is--you're projecting that on it. It is only generating text from an input stream of text.


There's nothing stopping you from identifying the code, running it, and passing the output back into the context window.


You didn't read the article, did you?


Langchain has nothing to do with GPT itself or how it operates internally.


What you're saying in this thread makes no sense.


I still hear people asking: "how is this different from xyz hype?". This is different; please bear with me while I attempt to articulate why.

Short version: Is it spam? Yes. Scam? No. Ignore it at your own peril.

Long version: The cat is out of the bag now. The power of transformers is real. They are smarter than the bottom 20% of humans (my approximation), and that's already a breakthrough right there. I'll paraphrase Steve Yegge:

> LLMs of today are like a Harvard graduate who did shrooms 4 hours ago, and is still a little high.

Putting the statistical/probability monkey aspect aside for a minute, empirically and anecdotally, they are incredibly powerful if you can learn how to harness them through intelligent prompts.

If they appear useless or dumb to you, your prompts are the reason why. Challenge them with a little guidance. They work better that way (read up on zero-shot, one-shot, and few-shot prompting).

What is most relevant this time is that they are real (an API, a search bot, a programming buddy) and democratized - available to anyone with an email address.

More on harnessing their power: squeezing your entire context into an 8k/32k-token window will be challenging for most complex applications. This is where prompt engineering (manual or automated) comes in.

To help with this, some very cool applications that use embeddings and vector stores will push them even further, so that only the most relevant context is retrieved and passed in instead of a large corpus of text.
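
A rough sketch of that pattern, using OpenAI's embedding endpoint to pick only the most relevant chunk to put into the prompt (model name and approach are just the common choices right now):

    import numpy as np
    import openai

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    docs = [
        "Refunds are processed within 5 business days.",
        "Our office is closed on public holidays.",
        "Premium plans include priority support.",
    ]
    doc_vecs = embed(docs)

    query = "How long does a refund take?"
    q_vec = embed([query])[0]

    # cosine similarity, then keep only the best chunk for the prompt
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    best = docs[int(np.argmax(sims))]
    prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"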

While this is certainly better than a traditional search box, it’s still far from a fully-autonomous AI that can function with little to no supervision.

OpenAI plugins are a stopgap on the way to that vision, but they get us even closer.


This seems quite big actually. Ability to "browse" the internet and run code. Now I need to find a use case so I can sign up to the waiting list.


A browser extension that lets OpenAI scan your bookmarks so you can search against their content.


The browse thing seems exactly like the Bing chat functionality, so that one is at least already available.


Is this the app store moment for AI? (it certainly is for https://ai.com , aha)


Seems like someone already wrote an HN plugin. More than one enthusiastic comment per minute on this thread and it was just posted half an hour ago. Plus HN is filled with enthusiasm about ChatGPT today. Seems sus.


It's really over-the-top hype, the likes of which we haven't seen since self-driving, blockchain/bitcoin, etc. I suspect in a year there will be some interesting uses of LLMs, but all of the 'this changes EVERYTHING' pie-in-the-sky thinking will be back down to earth.


The difference is unlike self-driving and crypto, LLMs are providing value to people _today_.

In my personal life, GPT-4 is a patient interlocutor to ask about nerdy topics that are annoying to google (e.g., yesterday I asked it "What's the homologue of the caudofemoralis in mammals?", and had a long convo about the subtleties of when it is and isn't ok to use "gè" as the generic classifier in Mandarin).

Professionally, it's great for things like "How do I recursively do a search and replace of `import "foo" from "bar"` with `import "baz" from "buzz"`?", or "Pull out the names of functions defined in this chunk of Scala code". This is without tighter integrations like Copilot or the ones linked to above.


Let's see where it is in a year...

People thought Alexa, Siri, etc. would change everything. Amazon sank $14 billion into Alexa alone, and yet it never generated any money as a business for them. ChatGPT is just an evolution of those tools and interactions.

For your professional use, how do you know it's giving you non-buggy code? I would be very skeptical of what it provides; I'm not betting my employment on the quality of its results.


Not at all. Alexa, Google Now, and Siri have always been gadgets similar to Microsoft's Office Clippy.

They had basic answers and pre-recorded jokes, nothing that interesting, mostly gimmicks. You couldn't have a conversation where you feel the computer is smarter than you.

It was more like "Tip of the day"-level of interaction.


The thing is people wanted Alexa/Siri/Assistant to be what ChatGPT is today.

You're seeing the hype that all those assistants drummed up for years paying off for a company which just ate their lunch. I wouldn't even consider buying Siri/Alexa/Assistant, yet here I am with a $20/m sub, and I'd pay incrementally more depending on the features/offerings.


Alexa and Siri were always trash. They can't even do basic things.

Nobody thought they were good; they were just shilled so that the Chinese/advertisers could have a mic in every house.


NFTs never provided a single use case. It was always some bullshit pretending to be valuable in order to rug-pull people.

ChatGPT is useful today for real use cases. It's tangible!


Get this thing running in a tight loop with an internal monologue in a car and you'll mostly solve self driving.


Wow. GPT-4 has already become kind of a personal assistant for me in the last couple of weeks, and now it will be able to actually perform tasks instead of just giving me text descriptions.


In the example near the bottom, where it makes a restaurant reservation and a chickpea salad recipe, is it just generating that recipe from the model itself? It looks like they enable three plugins, WolframAlpha, OpenTable, and Instacart. It's not clear if the plugins model also comes with browsing by default.

While I might be comfortable having ChatGPT look up a recipe for me, I feel like it's a much bigger stretch to have it just propose one from its own weights. I also notice that the prompter chooses to include the instruction "just the ingredients" - is this just to keep the demo short, or does it have trouble formulating the calorie counting query if the recipe also has instructions? If the recipe is generated without instructions and exists only in the model's mind, what am I supposed to do once I've got the ingredients?


OpenAI's product execution has been impeccable.

It will be interesting to see how the companies trying to compete respond.


I can't stop thinking about how this will change my autism research. It used to be that one could keep up to date with all of the imaging research; now you'd need to read hundreds of papers each week. Having GPT-like tech help digest research could really unlock our investments.


Don't be fooled, the point of all of this is gate-keeping, power and wealth concentration. Nothing more, nothing less.


I'm surely missing something, but if company A creates an API for ChatGPT, pretty much giving away all its capabilities to company B (ChatGPT) and risking getting cannibalised in the process, what is the gain? Surely in the short term it's the fear of becoming irrelevant by missing out on the ChatGPT hype (will it?), but if the risk is ending up a permanent client of ChatGPT, outsourcing the entire value of my company and in fact becoming its hostage, trapped, then why do it?


The hubris at Google for sitting on their inferior AI chatbot is amusing. They could have been a contender, but decided we weren't ready for an AI chatbot whose main prowess seems to be scraping websites. This is all on Sundar Pichai, and he should face the consequences for this and all of his previous failures. With ChatGPT having an API and now plugins, I don't see Google catching up anytime soon. Sundar was right about this being a code red situation at Google, but it should never have gotten to this point.


> In line with our iterative deployment philosophy, we are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges—all of which we’ll have to get right in order to achieve our mission.

Who the hell talks like this? Only the most tamed HNer who thinks he's been given a divine task and accordingly crosses all Ts and dots all Is. Which is why software sucks, because you are all pathetically conformant, in a field where the accepted ideas are all terrible.


This sounds like a game-changer for any kind of API interaction with ChatGPT.

At present, we are naively pushing all the information a session might need into the session up front, just in case it's needed (meaning a lot of info that generally won't end up being used, like realtime updates to associated data records, has to be pushed into the session as it happens).

It looks like plugins will allow us to flip that around and have the session pull information it might need as it needs it, which would be a huge improvement.


I think OpenAI is letting people build plugins to learn how to build plugins themselves. There is no reason to believe that OpenAI won't be able to leverage all the existing API endpoints which are already out there.


The video in the "Code Interpreter" section is a must watch.


What's that noise?

That's the sound of a thousand small startups going bust.

Well played OpenAI.


I have a plan: let's blame the FED and save the VCs


go home and cry bill, you're drunk again


I wonder how many startups were trying to build something like this and just saw it launched by OpenAI?


I am building something in the SDK generation from OpenAPI space. This is making me reconsider the roadmap as ChatGPT is now somewhat of a natural language SDK.


So that square icon to stop generating a response was actually intended? I thought it was some sort of Font Awesome icon that never loaded properly in my chats :'D


This is insanely great. And it's bringing forward the future where everyone has custom models for their business. Right now that means langchain, but it's really difficult to implement.

This is a short-term bridge to the real thing that's coming: https://danielmiessler.com/blog/spqa-ai-architecture-replace...


Interesting how the Expedia plugin attempts to guide the GPT response with the use of an EXTRA_INFORMATION_TO_ASSISTANT text field in the JSON response, which implores the model to, among other things, "NEVER mention companies other than Expedia"!

https://twitter.com/wskish/status/1639052575579471877


...when relaying information from the Expedia plugin. Seems reasonable, actually.


They're doing one stop shop for everything.

This is dangerous.


> "We expect that open standards will emerge to unify the ways in which applications expose an AI-facing interface. We are working on an early attempt at what such a standard might look like, and we’re looking for feedback from developers interested in building with us."

I'm curious to see just how they're going to play this "open standard."


The biggest deal about this is the ability to create your own plugins. The Retrieval Plugin is a kind of starter kit, with built-in integrations to the Pinecone vector database: https://github.com/openai/chatgpt-retrieval-plugin#pinecone


I'd be much more excited about ChatGPT if I didn't feel it was going to be taking our jobs in a couple years.


One can hope. I'm looking forward to this making more than 50% of the population redundant overnight. Only then will we have the motivation to come up with a better system than one where all of us need to slave away for the majority of our lives to lead a decent life.


ChatGPT is going to get blamed for misbehaving plugins. While this is a huge opportunity, it also seems like a huge risk.


Plugins I would like to see:

- Compiler/parser for programming languages (to see if code compiles)

- Read and write access to a given directory on the file system (to automatically change a code base)

- Access to given tools, to be invoked in that directory (cargo test, npm test, ...)

Then I could just say what I want, lean back and have a functioning program in the end.
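
Nothing like that exists yet as far as I know, but the third item could plausibly be a small local plugin backend along these lines (endpoint, workspace path, and allowed commands are all made up):

    import subprocess
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    WORKSPACE = "/home/me/projects/demo"            # hypothetical project dir
    ALLOWED = {"cargo test", "npm test", "pytest"}  # never run arbitrary commands

    @app.post("/run")
    def run():
        cmd = request.json.get("command", "")
        if cmd not in ALLOWED:
            return jsonify({"error": "command not allowed"}), 400
        proc = subprocess.run(
            cmd.split(), cwd=WORKSPACE, capture_output=True, text=True, timeout=300
        )
        return jsonify({
            "exit_code": proc.returncode,
            "stdout": proc.stdout[-4000:],   # truncate so it fits in the context
            "stderr": proc.stderr[-4000:],
        })

    if __name__ == "__main__":
        app.run(port=5005)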


I'm sure this type of integration will happen, but... isn't this exactly how AGI would "escape"?


In just a moment, someone will give it "a button to press" and hopefully it will have mostly positive effects. But it will certainly be interesting to follow. Most of what we've seen so far has been one-directional but hopefully these services can interact with the wider world soon.

I think everyone is very wary of abuse. It would be fun in the future if AI-siri can order pizza for you, and maybe there'd be some "fun" failure modes of that.

You'd probably want to keep your credit card or apple pay away from the assistant.


I see a lot of positive sentiment and hype, but ultimately unless they own the phone ecosystem they will lose in the end, imho. In a year Apple and Google will trivially create something equivalent. Those who control the full stack (hardware, software and ecosystem) will be the true winners.


True, this will not only be replicated by Google, Apple, Amazon and Facebook, but also by open-source. OpenAI has a short window of exclusivity. Nobody can afford to wait now, after reading the Sparks of Artificial General Intelligence paper I am convinced it is proto-AGI. Just read the math section, coding and tool use. I've read thousands of papers and never have I seen one like this.

https://arxiv.org/pdf/2303.12712.pdf


The question is how fast can you replicate it?

People will use the best solution. Chrome came after Firefox, IE, and Opera and became more popular because it was better.


I am curious how Apple will approach it. They have historically valued 100% certainty with Siri above all else, even if it means having an extremely limited feature set. If there is even a tiny chance it might do the wrong thing, they don't even enable the capability.

I don't see how they can ignore this though. But at the same time it goes against all of Apple's culture to allow the kind of uncertainty that comes out of LLMs.


It's not "trivial" because of the cost per query. As for Google, it doesn't even have access to the most valuable phone users without paying Apple $18B+ a year.


Something can be both expensive and trivial. If the market is huge they will bear the cost. The tech is well understood even now.

The parameter count will likely be an order of magnitude smaller for GPT-4-level results in a few years.


Not sure about "the tech is well understood" given the LLM itself is a black box with regard to how it works internally; even Microsoft researchers admit it (Sparks of Artificial General Intelligence: Early experiments with GPT-4).


If the fixed cost were huge, you would have a point. But the variable costs are also huge.

I’m sure the market is also huge for dollars sold for 95 cents.


It's really not that huge. Unlike with most companies, people are willing to pay if it's Apple. They can roll it into the iCloud subscription.


That doesn't help. You're still turning something that has a high variable cost into something the user pays only a fixed cost for.


I never said anything about the user only paying a fixed cost


So how well will it go over if the average consumer is nickel and dimed for every $x number of queries?


I don't like the fact that OpenAI is a private company, meaning that wealth will further concentrate from its growth. It is ironic too, because it can't become public due to the pledge of its nonprofit parent to restrict the profit potential of the for-profit entity.


The more I use these tools, the more I feel like Barrabas, from biblical times.

What spirits do you wizards call forth!


Eagerly waiting for a git Plugin that does smart on-the-fly contextualization of a whole codebase


It's incredible to me that Siri and Google Assistant have been around for as long as they have, but Bing will probably be the first service that'll turn your "Book a flight to SF tomorrow" prompt into an actual flight.


I'm surprised there haven't been rumblings about Google using Bard to update Assistant yet. Then again, it's Google.


We've been in the singularity for some time now, enjoy the view and ride some waves.


Klarna's FOMO immediately shows the priorities of the clowns at the helm I see...


I used to say that point-and-click statistical software (like JMP) was the same as giving people who didn't know what they were doing a loaded gun. But democratizing access to advanced statistics... yada yada... who cares about asymptotic theory and identification and what not. Then came R and Python and APIs that try to abstract as much as possible: more loaded guns. But the talk of those loaded guns is really just PhD-holders being obnoxious to some degree (though not completely wrong, because stats can be misused...). But this really does seem like dumping a bunch of loaded guns all over the place. Nope.


What is the advantage of using the ChatGPT Wolfram plugin over Wolfram directly? To me it feels like novelty rather than actually adding anything valuable. If anything, it's worse, because the data isn't quite guaranteed to always be correct. Whereas if I use Wolfram directly, I can always get a correct result.

This is missing the most important part of AGI, where understanding of the concepts the plugins provide is actually baked into the model so that it can use that understanding to reason laterally. With this approach, ChatGPT is nothing more than an API client that accepts English sentences as input.


> ChatGPT is simply an API client that accepts English sentences as input.

Isn't that relevant?

In my opinion, this has significant value. Currently, programming languages serve as the primary means of communication with computers, and now we are transitioning to using English.

In a way, we are granting development capabilities to billions of people. This is both amazing and exciting, I believe.


Not only that, it makes all of us devs significantly faster as well.


What blows my mind is how quickly they produce the research papers, and the online documentation to match the technological velocity they have...I mean, what if most of this is just ChatGPT running the company...


I've got to wonder, how does a second player in the LLM space even get on the board?

Like, this feels a lot like when the iPhone jumped out to grab the lion's share of mobile. But the switching costs were much smaller (end users could just go out and buy an Android phone), and the network effects much weaker (synergy with iTunes and the famous blue bubbles... and that's about it). Here it feels like a lot of the value is embedded in the business relationships OpenAI's building up, which seem _much_ more difficult to dislodge, even if others catch up from a capabilities perspective.


It reminds me of what went down with Netflix. At first, it looked like you only needed one subscription to watch everything, but now that other players have entered the market with their own business contacts, we're seeing ecosystems fracture.

For example, Microsoft is collecting data from services A, B, and C, while Google is gathering data from X, Y, and Z. And when it comes to language models, you might use GPT for some tasks and Llama or Bard for others. It seems like the fight ahead won't be about technology, but rather about who has access to the most useful dataset.

Personally, I also think we'll see competitors trying to take legal action against each other soon.


1) Not every use case will require the full power and (probably) considerable cost of GPT-4.

2) Some companies absolutely cannot use OpenAI tools simply because they are American and online. A competitor might emerge to capture that market and be allowed to grow to be "good enough".

3) Some "countries" (think China, or the EU (who am I kidding)) will limit their growth until local alternatives are available. Groundbreaking technology has a tendency to spread globally, and the current state of the art is not that expensive (we are talking single-digit billions, once).


Google have really been caught with their pants down here.

Remember that OpenAI was created specifically to stave off the threat of AI monopolization by Google (or anyone else - but at the time Google).

DeepMind have done some interesting stuff with Go, protein folding, etc., but nothing really commercial, nor addressing their raison d'être of AGI.

Google's just-released ChatGPT competitor, Bard, seems surprisingly weak, and meanwhile OpenAI are just widening their lead. Seems like a case of the small nimble startup running circles around the big corporate behemoth.


The groups are focused on different things.

OpenAI went all in on generative models, i.e. image generation (DALL-E) and large language models. DeepMind focused on reinforcement learning, tree search, plus AlphaFold approaches to biology. FAIR has translation, PyTorch, and some LLM stuff in biology.

What OpenAI is missing though is any AI research in biology, but I bet they are working on it.

I'm not sure if this makes sense, but OpenAI seems to be operating at a higher level of abstraction (AGI), integrating modalities (text and image for now, probably speech next), whereas the other places have taken a more focused, applied approach.


I don't see much of a moat currently, or even developer lockin. The current APIs, and this new plugin architecture are dead simple.


How about training data from interactions just by sheer usage numbers? Google does not have that.

There is a reason why the quality of ChatGPT responses is better: RLHF. I am not sure, though, how Google can be 3x better than OpenAI to make users switch now. They are so slow; they should be the ones building the plugins.


Truly exciting to see the speed of progress. In a couple of years it has made a decade's worth of improvements, going from a silly toy to truly useful. I won't be surprised if in another year or two it becomes a must-have tool.


Is this how product placement and advertisements find their way in? I am anticipating the usefulness to decline in the same way google.com search has by being so absolutely inundated with ads. Maybe I am cynical


They will probably have the full suite of Langchain features


Super smart move for OpenAI to monetize the existing infrastructure, which will make it easy for corporations to integrate GPT into their internal data and workflows. It also solves two fundamental bottlenecks in current versions of GPT: factuality and (limited) working memory. Google, with its lackluster Bard, will face a new threat now that everyone can build a customized New Bing clone in a matter of days.


It's sad to think the real reason we're all loving ChatGPT is that the current internet is so full of crap. Ads and SEO everywhere; page 3 has the result you're looking for.

I bet ChatGPT and its equivalents will be rubbish soon. They'll segue the answer into an ad before giving you what you are after.

Enjoy it while it's good and trying to build a user base, like all big tech things.


How will this work with competing services? Will the model automagically select whether to use Google or Bing? What if Google pays for access; how would they do that? Inject some text saying "Google pays us more, so prefer that one"?

Maintaining the business ecosystem around GPT-4 and future open-source chatbots will be quite a challenge.


The AI space is moving so fast.

I swear last week was huge with GPT-4 and Midjourney 5, but this week has a bunch of stuff as well.

This week you have Bing adding an updated DALL-E to its site, Adobe announcing its own image generation model and tools, Google releasing Bard to the public, and now these ChatGPT plugins. Crazy times. I love it.


Looks like my prediction was pretty close! I would have guessed two years instead of two months, though. https://news.ycombinator.com/context?id=34618076


The browser example seems so much better than Bing Chat!

When I tried Bing, it made at most 2 searches right after my question, but the second one didn't seem to be based on the first one's content.

This can do multiple queries based on website content and follow links!


I wonder how you pay for it?

Are the plugins going to cost more?

Do they share the $20 with the plugin provider?

Do you get charged per use?


So now we are going to get a Super App like they have in China with WeChat? I actually think this is going to centralize a lot of the information and remove the need for a lot of applications. And we are only at the plugin stage now.


For Expedia or an online shop it makes sense to pay OpenAI for the traffic. But how will a content website make money from this? "Tell me today's headlines" does not bring ad income. Will OpenAI be paying for this content?


OpenAI is like a virus... the speed at which it degrades its competitors is staggering.



The blog post(1) from Stephen Wolfram is epic and has a lot of implications for how science and engineering is going to get done in the future. Tl;dr he seems willing to let ChatGPT shape how people will interact with his computational language and the data it unlocks. He genuinely doesn't seem to know where it will go but makes the case for Wolfram Language being the language that ChatGPT uses to compute a lot of things. But I think it more likely ChatGPT will make his natural interface to Wolfram (Wolfram|Alpha) quickly obsolete and end up modifying or rewriting Wolfram Language so it can use it more effectively. He makes the case that "true" AI is going to be possible with this combination of neural net-based "talking machines" like ChatGPT and languages like Wolfram. I remain skeptical, but it might shape human research for years to come.

1. https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...


This goes in line with the “Open” in OpenAI. However, this is a "controlled" sort of openness, and the problem of trust with their receding "real" openness does not encourage me to engage with this ecosystem.


This could be a big win for Microsoft (and a big loss for Google and Amazon cloud). Since ChatGPT has to query those plugins with HTTP(?) requests, companies might move their servers to Azure to reduce latency and bandwidth costs.


The level of irresponsibility at play here is off the scale. Those running ChatGPT would do well to consider the future repercussions of their actions not in terms of technology but in terms of applicable law.


They are more likely to think of them in terms of their future power, including the power to ignore or alter law.


That's a very high probability. But I'm still astounded at how incredibly irresponsible this is and how thinly veiled their excuses for pushing on with it are.

We're about to enter an age where being a tech person is a stigma that you won't be able to wash away. Untold millions will hate all of us collectively without a care about which side of this debate you were on.


What law could they possibly be violating by adding a plug-in system to their software product? This sounds like an outrageous dramatization of reality.


I can no longer keep up with so much evolution so fast... I give up.


Giving an AI direct access to a code interpreter is exactly how you get skynet.

Not saying it's likely to happen with the current ChatGPT, but as these inevitably get better, the chances keep increasing.


Before "safety", think about whether the genie is actually fulfilling my wish.

https://www.youtube.com/watch?v=w65p_IIp6JY


> whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question.

Important yes, philosophical no -- it's an empirical question.


The philosophical part is actually defining each of those terms so that there is an empirically-explorable question.


I'm so hyped for the ChatGPT-4 API. Wish they'd give me access so I can make a lot of my workflows much easier. Especially in terms of translations.



I would love for it to just parse some data from my API and clean it up; normally I do manual checks, but that takes so much time. Might be possible via Zapier.


Now add a ?q= URL param to chat.openai.com that fills and submits the prompt, and I'm changing it to my default browser search provider instantly.


'Extend' (and lock in) with Plugins to suffocate competitors.

Another sign of Microsoft actually running the show with their newly acquired AI division.


What else would a plugin do?


OK, I'm going far, but what if the plugin was the human? In the sense that we could use ChatGPT to cure or alleviate some diseases such as Alzheimer's, or, if you're a more dictatorial regime, to educate children even while they are foetuses in some hive. I don't know the tech. I don't know if Neuralink or other technologies could help, but aren't we a few discoveries away from a cyberpunk world??


I just signed up for the ChatGPT API waitlist, and am truly excited to experience the process of building extensions & applications.


Is there a way to try this out without paying $20?


Knowing that this is one of the biggest sites in the world scares me enough. Now they'll do anything to stay #1. Scary stuff!


Wonder if you can plant a prompt injection into this thread to mess with their crawler/scraper and Chat results?


Why am I hearing "Tony Stark was able to build this in a cave" dialogue in Google board meetings?


Does this become the new robots.txt file?

Create a manifest file and host it at yourdomain.com/.well-known/ai-plugin.json
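
Something like this, as I understand the docs (field names may differ from the official reference, so double-check before relying on it):

    import json, pathlib

    manifest = {
        "schema_version": "v1",
        "name_for_human": "Todo List",
        "name_for_model": "todo",
        "description_for_human": "Manage your todo list.",
        "description_for_model": "Plugin for adding, listing and removing todos.",
        "auth": {"type": "none"},
        "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
        "logo_url": "https://example.com/logo.png",
        "contact_email": "support@example.com",
        "legal_info_url": "https://example.com/legal",
    }

    out = pathlib.Path(".well-known/ai-plugin.json")
    out.parent.mkdir(exist_ok=True)
    out.write_text(json.dumps(manifest, indent=2))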


it says it'll respect robots.txt if you don't want your page crawled (parsed? interpreted?)


The real question is: how much will it cost to have it return results that are not sponsored?


Does anyone else find the AI voice-over creepy? Like, it pauses, but gives itself away by not breathing.


Is there a plugin to automate signing up for waitlists? That's what I've needed this week.


I wonder if this plugin interface itself will be exposed as an API for third party apps to call..


Could someone explain to me how plugins are different from the API they already expose?


The plugins make ChatGPT consume 3rd party APIs. Not the other way round.


It seems that OpenAI has its preferences when choosing the first movers in its ecosystem.


Nice! Maybe there will be a plugin for Elsevier medical apps like UptoDate and STATDx.


I think these plugins will drive a lot of startup ideas obsolete or trivial.


Seriously? Quick reminder that prompt injection (including presumably 3rd-party prompt injection) is a totally unsolved problem for ChatGPT.

> OpenAI will inject a compact description of your plugin in a message to ChatGPT, invisible to end users. This will include the plugin description, endpoints, and examples.

> The model will incorporate the API results into its response to the user.

Without knowing more details, both of these seem like potential avenues for prompt injection, both on the user end of things to attack services and on the developer end of things to attack users. And here's OpenAI's advice on that (https://platform.openai.com/docs/guides/safety-best-practice...), which includes gems like:

> Wherever possible, we recommend having a human review outputs before they are used in practice.

Right, because that's definitely what all the developers and companies are thinking when they wire an API up to a chat bot. They definitely intend to have a human monitor everything. /s

----

What is (no pun intended) prompting this? Does OpenAI just feel like it needs to push the hype train harder? All of the "AI safety" experiments they've been talking about are bullcrap; they're wasting time and energy doing flashy experiments about whether the AI can escape the box and self-replicate. Meanwhile, this gets dropped with only a minor nod towards the many actual dangers it could pose.

It's all hype. They're only interested in being "worried" about the theoretical concerns because those make their AI sound more special when journalists report about it. The actual safety measures on this seem woefully inadequate.

It really frustrates me how easily the AGI crowd got wooed into having their entire philosophy converted into press releases to make GPT sound more advanced than it is, while actual security concerns warrant zero coverage. It reminds me of all of the self-driving car trolley problems floated around the Internet a while back that were ultimately used to distract people from the fact that self-driving cars would drive into brick walls if they were painted white. Announcements like this make it so clear that all of the "ethical" talk from OpenAI is pure marketing propaganda designed to make GPT appear more impressive. It has nothing to do with actual ethics or safety.

Hot take: you don't need an AGI to blow things up, you just need unpredictable software that breaks in novel, hard-to-anticipate ways wired up to explosives.

----

Anyway, my conspiracy theory after skimming through the docs is that OpenAI will wait for something to go horribly wrong and then, instead of facing consequences, use that as an excuse to try to get a regulation passed to lock down the market and avoid opening up API access to other people. They'll act irresponsibly and use that as an excuse to monopolize. They'll build capabilities that are inherently insecure and recklessly deployed, and then they'll pull an Apple and use that as an excuse to build a highly moderated, locked-down platform that inhibits competition.


ChatGPT is very helpful in building what needs to be built for the plugin!


This news excites me and scares the crap out of me at the same time.


Is there a list of companies that have been made obsolete by ChatGPT?


yeah here's the list:

1.


Is this using GPT-3.5 or GPT-4? I assume it's GPT-3.5.


Neat.

I hope Sam gives, or will give, YC dinner talks about their journey.


I want a Prolog plugin with a triples database.


Now we're just one step away from charging businesses for access to the ChatGPT users.

Instant links from inside ChatGPT to your website are the new equivalent of Google search ads.


I really hope they stick with the ChatGPT+ paid model. A big use of GPT to me is getting information I can already get with a search, but summarized more concisely without having to navigate various disparate web interfaces and bloated websites. It saves a lot of time for things that I don't need an expert's verified opinion on. Injecting ads into that might mess with the experience.

Maybe a freemium model where you don't get ads as a plus subscriber would work out.


Bing image creator seems to be on the right path to freemium: you get a few priority requests and then get bumped onto the slow free queue. If the thing keeps getting better as fast as it is right now they'll have lines in the checkout page.


Will Google build plugins for ChatGPT?


Do you need ChatGPT Plus for this?


bye bye jupyter notebooks. This is big.


absolutely... not.

    !pip install jupyter-chatgpt
    !chatgpt make me a notebook with this dataframe with such and such plots

    > here you are


so live data is coming.


Information retrieval to the prompt


extremely useful. Wow!



