
They probably have a letter-counting tool added to it now that it just knows to call when asked to do this.

You ask it the number of letters and it sends the word off to another tool to count instances of the letter, but they didn't add a placement tool, so it's still guessing those.
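A hypothetical tool split like the one described above is easy to sketch. Nothing here is a real API; the function names and the idea that placement is a separate capability are illustrative assumptions:

```python
def count_letter(word: str, letter: str) -> int:
    # The "count instances" tool the comment imagines: case-insensitive tally.
    return word.lower().count(letter.lower())

def letter_positions(word: str, letter: str) -> list[int]:
    # The "placement" tool the comment says is missing: 1-based positions.
    return [i + 1 for i, ch in enumerate(word.lower()) if ch == letter.lower()]
```

With `"strawberry"` and `"r"`, the first returns 3 and the second returns the positions [3, 8, 9]; a model with only the first tool would get the count right while still guessing where the letters sit.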

edit: corrected some typos and phrasing.

Maybe we'll reach a point where the LLMs are just tool-calling models and not really giving their own reply.




There are only 5 tools it has available to call, and that isn't one of them. A GitHub repo (forgot the URL) stays up to date with the latest dumped system instructions.


I can't speak for all LLMs, but OpenAI has a built-in Python interpreter. Assuming it recognizes the problem as a tokenization/counting one, it doesn't need a dedicated tool.
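If the model does route the question to an interpreter, the generated code could be as simple as a tally over the word. This is a sketch of what such emitted code might look like, not anything OpenAI actually runs:

```python
from collections import Counter

def count_all_letters(word: str) -> dict[str, int]:
    # Tally every letter at once, so any "how many X in Y?" follow-up
    # is a dictionary lookup rather than a fresh guess.
    return dict(Counter(word.lower()))
```

For `"strawberry"` this yields `{'s': 1, 't': 1, 'r': 3, 'a': 1, 'w': 1, 'b': 1, 'e': 1, 'y': 1}`, sidestepping tokenization entirely because the interpreter sees characters, not tokens.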


How do we know they’re the real system instructions? If they’re determined by interrogating the LLM, hallucination is a very real possibility.


They probably just forgot to tell it that humans are 1-indexed and to do the friendly conversion for them.



