One additional feature I would like to see: interacting with two or more GPTs in the same conversation, where each performs different tasks based on its specific expertise and capabilities, either in parallel or sequentially, as long as the replies and context of the discussion remain accessible for further interactions, similar to what can be achieved with the Assistants API.
This sounds similar to Microsoft's Autogen, and I think it's possible to replicate a lot of what you're describing by using the rough structure of Autogen alongside the Assistants API.
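A minimal sketch of what that could look like, assuming the Assistants API beta (the assistant names, instructions, and `ask()` helper here are made up for illustration): two assistants attached to one shared thread, so each sees the other's replies as context.

```python
# Two "personas" sharing one conversation via the Assistants API beta.
# Assistant names and instructions are hypothetical placeholders.
import time
from openai import OpenAI

client = OpenAI()

researcher = client.beta.assistants.create(
    name="Research Assistant",
    instructions="You help review academic papers.",
    model="gpt-4-1106-preview",
)
dejargonizer = client.beta.assistants.create(
    name="Dejargonizer",
    instructions="You explain technical jargon in plain language.",
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()  # one shared conversation

def ask(assistant, text):
    """Post a user message, run the chosen assistant on the shared
    thread, and return its latest reply as plain text."""
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=text
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id
    )
    while run.status in ("queued", "in_progress"):
        time.sleep(0.5)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    msgs = client.beta.threads.messages.list(thread_id=thread.id)
    return msgs.data[0].content[0].text.value  # newest message first

# Sequential hand-off: the second call can reference the first reply.
ask(researcher, "Summarize the methods section of this paper: ...")
ask(dejargonizer, "Explain the term 'ablation study' from that summary.")
```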
I know that the use case I mentioned, as well as many of the agentic aspects, can be achieved in code.
But I have to admit that using the UI to easily create GPTs, whether as plain templates/personas or full-featured with actions/plugins, makes the use case much easier, faster, and more shareable. I can just @ a specific GPT to do something.
Take the use case Simon mentions in his blog post, the Dejargonizer: I can have a research GPT that helps with reviewing papers, and I can @Dejargonizer to quickly explain a specific term before resuming the discussion with the research GPT.
Maybe this would require additional research, but I suspect a single GPT with access to all tools would be slower and give worse results, especially if the user knows exactly what they need for a given task and can reach for it quickly.
> The default ChatGPT 4 UI has been updated: where previously you had to pick between GPT-4, Code Interpreter, Browse and DALL-E 3 modes, it now defaults to having access to all three.
> ...
> So I built Just GPT-4, which simply turns all three modes off, giving me a way to use ChatGPT that’s closer to the original experience.
Isn't that what they already have built in, called "ChatGPT Classic"? The description literally says "The latest version of GPT-4 with no additional capabilities".
Which sounds like a pretty terrible idea in itself. At least to me, it would be pretty chaotic if the edit that wins out on Wikipedia were the one that received the most votes from other Wikipedia users: to get something on Wikipedia fixed, you'd need to gather enough people to support your fix over the previous version.
I've run into issues with Wikipedia mods ignoring sources and going off whatever priors they had, but at least then I was able to just passive-aggressively berate them on the talk page to get them to bend.
For questions about specific details, people usually use RAG (Retrieval-Augmented Generation) to ground the model, retrieving relevant documents and providing enough context for the LLM to return the correct answers.
This means additional engineering plumbing and a very specific corpus to query information from.
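A toy sketch of the idea (the corpus, model names, and helper functions here are placeholders, not from any real system): embed the documents once, embed the question, retrieve the closest document, and stuff it into the prompt as grounding context.

```python
# Minimal RAG: nearest-neighbor retrieval over embeddings, then a
# grounded chat completion. Corpus and models are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Invoices are due 30 days after receipt.",
    "Support tickets are answered within 24 hours.",
]

def embed(texts):
    resp = client.embeddings.create(
        model="text-embedding-ada-002", input=texts
    )
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)  # the "plumbing": build this index up front

def answer(question):
    q = embed([question])[0]
    # ada-002 vectors are ~unit length, so a dot product is
    # effectively cosine similarity.
    context = docs[int(np.argmax(doc_vecs @ q))]
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("When are invoices due?"))
```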
It's not. The API is different: GPT-4 is a chat-based model, and davinci isn't (it's a plain completion model). It's not a huge difference, but these sorts of little things add up.
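To make the difference concrete, here are the two call shapes side by side in the Python SDK (model names are illustrative and change over time): completions take one flat prompt string, chat takes a list of role-tagged messages.

```python
from openai import OpenAI

client = OpenAI()

# Completion-style (davinci family): flat prompt in, raw text out.
completion = client.completions.create(
    model="davinci-002",
    prompt="Translate to French: Hello, world.",
)
print(completion.choices[0].text)

# Chat-style (GPT-4): role-tagged messages in, a message object out.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Translate to French: Hello, world."}],
)
print(chat.choices[0].message.content)
```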