Hacker News

It's always giving a hallucinated answer. GPT doesn't 'run' anything. It sees an input string of text asking for the result of fibonacci(100) and produces a response that closely matches its training data, which almost certainly contains the result of fibonacci(100) (an extremely common programming exercise whose answer appears all over the internet).
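For reference, actually computing fibonacci(100) is trivial and exact in Python, since Python integers are arbitrary precision. A minimal sketch (using the usual convention fib(0) = 0, fib(1) = 1):

```python
def fibonacci(n: int) -> int:
    # Iterative Fibonacci; exact for any n thanks to Python's big integers.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(100))  # 354224848179261915075
```

So when the model emits 354224848179261915075, that's consistent with having seen the answer in training data, not with having executed anything.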

Again, GPT is not running a tool or arbitrary python code. It's not applying trust to a tool response. It has no reasoning or even a concept of what a tool is--you're projecting that on it. It is only generating text from an input stream of text.




There's nothing stopping you from identifying the code, running it, and passing the output back into the context window.


You didn't read the article, did you?


Langchain has nothing to do with GPT itself or how it operates internally.


What you're saying in this thread makes no sense.




