So, ChatGPT is controlled by prompt engineering, and plugins will work by prompt engineering. Both often work remarkably well. But neither is really guaranteed to work as intended; indeed, since it's all natural language, what's intended will itself remain a bit fuzzy to the humans as well. I remember the observation that deep learning is technical debt on steroids, but I'm not sure what this is.
I sure hope none of the plugins provide an output channel distinct from the text output channel.
(Btw, the documentation page comes up completely blank for me, now that's a simple API).
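To make the "controlled by prompt engineering" point concrete: a plugin is wired up through a small manifest whose natural-language description fields effectively become part of the model's prompt. A rough sketch below, with field names following the publicly documented `ai-plugin.json` format (the TODO-list wording and the `example.com` URL are placeholders, not a real plugin):

```json
{
  "schema_version": "v1",
  "name_for_model": "todo",
  "name_for_human": "TODO List",
  "description_for_model": "Plugin for managing a TODO list. Use it to add, remove, and view the user's TODOs.",
  "description_for_human": "Manage your TODO list.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  }
}
```

The model decides when and how to call the API based on `description_for_model` and the operation descriptions in the referenced OpenAPI spec, which is to say: the plugin's behavior is steered by prose, with no formal contract that the model will invoke it as intended.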
> But none is really guaranteed to work as intended, indeed since it's all natural language, what's intended itself will remain a bit fuzzy to the humans as well.
Yeah, you're completely correct. But this is exactly the same as having a very knowledgeable but inexperienced person on your team. Humans get things wrong too. All this output is most useful if you have the experience or context to verify and confirm it.
I heard a comment the other day that has stuck with me - ChatGPT is best as a tool if you're already an expert in that area, so you know if it is lying.
> But this is exactly the same as having a very knowledgeable but inexperienced person on your team.
Am I the only person who thought that predictable computer APIs that were testable and completely consistent were a massive improvement over using people for those tasks?
People seem to be taking it as a given that I'd want to have a conversation with a human every time I made a bank transfer or scheduled an appointment. Nothing could be further from the truth; I want my bank/calendar/terminal/alarm/television/etc to be less human.
Yes, there are human tasks here that ChatGPT might be a good fit for, where fuzzy context is important, and there's a ton of potential in those fuzzy areas. But many of the other tasks people are bringing up are in areas where ChatGPT isn't competing with human beings. It's competing with interfaces that are already far better than human beings would be, and the standard for replacing those interfaces is far higher than being "as good as a human would be."
It seems like you're talking about using ChatGPT for research or code creation and that's reasonable advice for that.
But as far as I can tell, the link is to plugins, with Expedia listed as an example. So it seems they're talking about making ChatGPT itself (using extra prompts) be a company's chatbot that directly does things like make reservations from users' instructions. That's what I was commenting on, and that, I'd guess, could be a new and more dangerous kind of problem.