Hacker News | mmq's comments

Actually, Microsoft's CEO mentioned in his presentation that OpenAI moved to Azure's vector search and AI search services for ChatGPT.


One additional feature I would like to see: interacting with two or more GPTs at the same time, where they could perform different tasks based on their specific expertise and capabilities, either in parallel or even sequentially, as long as the replies/context of the discussion remain accessible for further interactions, similar to what can be achieved with the Assistants API.


This sounds similar to Microsoft's AutoGen, and I think it's possible to replicate a lot of what you're talking about by using the rough structure of AutoGen alongside the Assistants API.


I know that the use case I mentioned, as well as many of the agentive aspects, can be achieved using code. But I have to admit that using the UI to easily create GPTs, whether using them just as templates/personas or full-featured with actions/plugins, makes the use case much easier, faster, and shareable. I can just @ a specific GPT to do something. Take the use case that Simon mentions in his blog post, the Dejargonizer: I can have a research GPT that helps with reviewing papers, and @Dejargonizer to quickly explain a specific term before resuming the discussion with the research GPT.

Maybe this would require additional research, but I think having a single GPT with access to all tools might be slower and less optimal, especially if the user knows exactly what they need for a given task and can reach for that quickly.
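The @-mention workflow described above can be sketched in a few lines. Everything here is hypothetical (the persona names, the `route` helper, and the `call_model` stub); a real version would invoke the Assistants API instead of the stub, but the routing and shared-context idea is the same:

```python
# Toy sketch of routing an @-mention to a specialized GPT while all
# personas share one conversation context. call_model is a stub standing
# in for a real Assistants API call.

def call_model(persona: str, context: list[str], message: str) -> str:
    # Stub: a real implementation would send `context` + `message` to the API.
    return f"[{persona}] reply to: {message}"

PERSONAS = {"research": "Research GPT", "dejargonizer": "Dejargonizer"}

def route(context: list[str], user_input: str) -> str:
    persona_key = "research"  # default assistant for un-prefixed turns
    message = user_input
    if user_input.startswith("@"):
        mention, _, rest = user_input.partition(" ")
        key = mention[1:].lower()
        if key in PERSONAS:
            persona_key, message = key, rest
    reply = call_model(PERSONAS[persona_key], context, message)
    # Every persona appends to the same shared context.
    context.extend([user_input, reply])
    return reply

ctx: list[str] = []
route(ctx, "Summarize this paper's method section.")
route(ctx, "@dejargonizer what does 'ablation' mean here?")
```

The point of the sketch is that the user, not a single do-everything model, decides which specialist handles each turn, while the transcript stays shared.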


> The default ChatGPT 4 UI has been updated: where previously you had to pick between GPT-4, Code Interpreter, Browse and DALL-E 3 modes, it now defaults to having access to all three. ... So I built Just GPT-4, which simply turns all three modes off, giving me a way to use ChatGPT that’s closer to the original experience.

Isn't that what they already have built in, called "ChatGPT Classic"? The description literally says "The latest version of GPT-4 with no additional capabilities".


ChatGPT Classic still exists for me; I've had this new UI for a few days. https://chat.openai.com/g/g-YyyyMT9XH-chatgpt-classic


I had missed that! I wonder when they added it; has it been there since the launch of the new UI?

(Added it to my post)


Yes, I had it pinned as soon as the UI changed post dev day.


It’s worth mentioning that it’s not entirely classic: it’s still using the 32k-context turbo model.


Users can rate community notes.


90% of the article is about HOW the ratings are taken into consideration...


Which sounds like a pretty terrible idea in itself. At least to me, it would sound pretty chaotic if the edit that wins out on Wikipedia is the one that receives the most votes by other Wikipedia users, and if you wanted to get something on Wikipedia fixed, you'd need to gather enough people to support your fix over the previous version.

I've run into issues with Wikipedia mods ignoring sources and going off whatever priors they had, but at least then I was able to just passive-aggressively berate them on the talk page to get them to bend.


What's the alternative method for credibly neutral decision making on a public internet site anyone can participate in?


An "alternative" method for "credibly neutral" decision making would imply that one already exists.


It seems a large number of HN users, judging by this post, believe that Community Notes is credibly neutral.


You used the model for fact checking. These models are not good at being used as a knowledge base.


I would never use an LLM for fact checking; you'd then have to check again using something else.


Usually, for asking questions about specific details, people use RAG (Retrieval-Augmented Generation) to ground the information and provide enough context for the LLM to return correct answers. This means additional engineering plumbing and a very specific context to query information from.
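The "plumbing" above boils down to: retrieve relevant documents, then stuff them into the prompt. Here is a minimal sketch; the keyword-overlap retriever and the `build_prompt` helper are hypothetical stand-ins (real systems use embeddings plus a vector index, and then an actual LLM call):

```python
# Minimal RAG sketch: retrieve grounding documents, then assemble a
# context-stuffed prompt for the model.

DOCS = [
    "ChatGPT Classic is GPT-4 with no additional capabilities.",
    "The Assistants API supports tools like the code interpreter.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Naive relevance score: count of shared lowercase words with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is ChatGPT Classic?", DOCS)
```

The grounding step is what keeps the model from answering purely from its (possibly stale or hallucinated) parametric knowledge.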


> When OpenAI released GPT-4, we were able to change one parameter, and everything still just worked.

Wouldn't that be the same if you used the OpenAI JS library directly, basically swapping the model parameter?


It's not. The API is different, since GPT-4 is a chat-based model and davinci isn't. It's not a huge difference, but these little things add up.
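The difference is visible in the request shapes. A sketch using payloads in the style of the pre-1.0 OpenAI API (the helper names here are my own): davinci-era models take a flat `prompt` on the completions endpoint, while GPT-4 takes a `messages` list on the chat completions endpoint, so migrating is more than renaming the model:

```python
# Shape of the two request bodies, illustrating why swapping the model
# name alone is not enough when moving from davinci to GPT-4.

def completion_payload(prompt: str) -> dict:
    # Sent to POST /v1/completions
    return {"model": "text-davinci-003", "prompt": prompt}

def chat_payload(prompt: str) -> dict:
    # Sent to POST /v1/chat/completions
    return {"model": "gpt-4", "messages": [{"role": "user", "content": prompt}]}
```

A library that abstracts over both endpoints is what makes the switch a one-parameter change.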


It is a very minor change (I made the changes in minutes and didn't have to bring in a new framework for it).


Agreed. It's a trivial change, certainly not a justification for using langchain.


I see; I thought you were using GPT-3.5 and moved to GPT-4.


Not recent, but the company that runs on top of milvus: https://www.businesswire.com/news/home/20220824005057/en/Vec...


Most demos people share could use numpy arrays, similar to this: https://twitter.com/karpathy/status/1647374645316968449
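The numpy-only approach from that tweet amounts to brute-force cosine similarity over one array, no vector database involved. A sketch with random stand-in embeddings (real ones would come from an embedding model):

```python
# Brute-force nearest-neighbor search over unit-normalized embeddings.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))              # one row per document
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    # On unit vectors, cosine similarity is just a dot product.
    query = query / np.linalg.norm(query)
    scores = embeddings @ query
    return np.argsort(-scores)[:k]                    # indices of best matches

hits = top_k(embeddings[42])                          # row 42 should rank first
```

For demo-sized corpora this is plenty; libraries like Faiss only start to matter at much larger scale or tighter latency budgets.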


It’s true, but I also wouldn’t want any developer besides Karpathy to implement a production service using numpy arrays (I use Faiss).


David Silver's reinforcement learning lecture series is excellent as well:

https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLzuuYNsE1E...


There is information on YC's website that answers your questions:

* https://www.ycombinator.com/about#yc-program-2

* https://www.ycombinator.com/faq

