I have a personal writing app that uses OpenAI's models, and this post is bang on. One of my learnings relates to "Lesson 1: When it comes to prompts, less is more":
I was trying to build an intelligent search feature for my notes and asking ChatGPT to return structured JSON data. For example, I wanted to ask "give me all my notes that mention Haskell in the last 2 years that are marked as draft", and let ChatGPT figure out what to return. This only worked some of the time. Instead, I put my data in a SQLite database, sent ChatGPT the schema, and asked it to write a query to return what I wanted. That has worked much better.
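Roughly, the setup looks like this (a minimal sketch; the schema, table name, and prompt wording are simplified stand-ins, not my real app):

    import sqlite3
    from openai import OpenAI  # official openai Python package

    client = OpenAI()

    # The schema is all the model needs to see -- the note contents stay local.
    SCHEMA = """
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        title TEXT,
        body TEXT,
        status TEXT,      -- 'draft' or 'published'
        created_at TEXT   -- ISO 8601 date
    );
    """

    question = ("give me all my notes that mention Haskell "
                "in the last 2 years that are marked as draft")

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You write SQLite queries for this schema:\n{SCHEMA}\n"
                        "Reply with a single SELECT statement and nothing else."},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content

    # Execute locally. A read-only connection guards against a
    # hallucinated UPDATE/DELETE sneaking through.
    conn = sqlite3.connect("file:notes.db?mode=ro", uri=True)
    for row in conn.execute(sql):
        print(row)

The nice part is that the model only has to get one well-constrained thing right (a SQL statement against a fixed schema) instead of inventing an ad-hoc JSON shape every time.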
I set up a search engine to feed a RAG setup a while back. At the end of the day, I took out the LLM and just used the search engine. That was where the value turned out to be.
I haven't tried response_format; I'll give that a shot. I've had issues with function calling: sometimes it works, sometimes it just returns random Python code.
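From the docs, JSON mode via response_format looks roughly like this (untested sketch on my end; the model name and the JSON keys are just placeholders, and note the API errors out unless the word "JSON" appears somewhere in your messages):

    import json
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # needs a model that supports JSON mode
        response_format={"type": "json_object"},
        messages=[
            # The word "JSON" must appear in the prompt or the request is rejected
            {"role": "system",
             "content": "Extract search filters from the request as JSON "
                        "with keys: keyword, status, since."},
            {"role": "user",
             "content": "notes mentioning Haskell from the last 2 years, drafts only"},
        ],
    )
    filters = json.loads(resp.choices[0].message.content)
    print(filters)

JSON mode only guarantees syntactically valid JSON, not any particular keys, so you still have to validate the parsed result.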