I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.
One with ChatGPT about DBMS questions and one with Claude about socket programming.
Looking back, are some questions a little stupid? Yes. But of course they are! I'm coming in with zero knowledge, trying to learn how the socket programming is happening here: which functions are being pulled from which header files, etc.
In the end I just followed along with a random YouTube video. When you say you can get an LLM to do anything, I agree. Now that I know how the socket programming works, for the next assignment question about writing code for CRC with socket programming, I asked it to generate the socket code, made the necessary changes, asked it to generate a separate function for CRC, integrated it manually, and voilà, assignment done.
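For context, the CRC part is a small self-contained function; it looked roughly like this (a sketch from memory, the polynomial, width, and names here are my own and not necessarily what the assignment specified):

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC-16/CCITT (polynomial 0x1021) over a byte buffer.
       The assignment may have used a different polynomial or width. */
    uint16_t crc16(const uint8_t *data, size_t len) {
        uint16_t crc = 0xFFFF;                     /* common initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;         /* bring next byte into the high bits */
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x8000)
                    crc = (crc << 1) ^ 0x1021;     /* shift out the top bit and apply the generator */
                else
                    crc <<= 1;
            }
        }
        return crc;
    }

The "integration" is then just computing the CRC over the message buffer, appending it, and handing the whole thing to send() in the socket code the LLM had already produced.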
But this is the execution phase, when I already have the domain knowledge. During learning, when the user asks stupid questions and the LLM's answers keep getting stupider, using them is not practical.
Also, I'm surprised you even got a usable answer from your first question asking for a socket program if all you asked was the bold part. I'm a human (pretty sure, at least) and had no idea how to answer the first bold question.
I had already established from a previous chat that, upon asking for a server.c file, the LLM's answer worked correctly. The rest of the sentence is just me asking it to use or not use certain header files, which it includes by default when you ask it to generate a server.c file. That's because, from the docs of <sys/socket.h>, I thought it had all the relevant bindings for socket programming to work correctly.
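It turned out <sys/socket.h> alone isn't enough; the headers it includes by default are there for a reason. A minimal server.c is roughly this shape (a from-memory sketch, not the exact file from my chat; the port and buffer size are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>        /* close() */
    #include <sys/socket.h>    /* socket(), bind(), listen(), accept(), recv() */
    #include <netinet/in.h>    /* struct sockaddr_in, htons(), INADDR_ANY */

    int main(void) {
        int server_fd = socket(AF_INET, SOCK_STREAM, 0);
        if (server_fd < 0) { perror("socket"); exit(1); }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);               /* arbitrary port */

        if (bind(server_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); exit(1); }
        if (listen(server_fd, 1) < 0) { perror("listen"); exit(1); }

        int client_fd = accept(server_fd, NULL, NULL);
        if (client_fd < 0) { perror("accept"); exit(1); }

        char buf[1024];
        ssize_t n = recv(client_fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }

        close(client_fd);
        close(server_fd);
        return 0;
    }

So struct sockaddr_in and the byte-order helpers come from <netinet/in.h>, and close() from <unistd.h>, which is why asking it to stick to <sys/socket.h> only doesn't work.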
I had no idea what the question even was. I had ChatGPT (4o) explain it to me, and solve it. I now know what candidate keys are, and that the question asks for AB and BC. I'd share the link, but ChatGPT doesn't support sharing logs with images.
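The actual functional dependencies were in the image so I can't reproduce them here, but for illustration, a hypothetical relation R(A, B, C) with FDs AB -> C and BC -> A comes out the same way:

    (AB)+ = {A, B, C}              => AB determines every attribute, so AB is a candidate key
    (BC)+ = {A, B, C}              => BC is also a candidate key
    A+ = {A}, B+ = {B}, C+ = {C}   => no single attribute is a key, so AB and BC are minimal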
So you did not convince me that LLMs are not working (on the contrary), but I did learn something today! Thanks for that.
I can get an LLM to do almost anything I want. Sometimes I need to add a lot of context. Sometimes I need to completely rewrite the prompt after realizing I wasn't communicating clearly. I almost always have to ask it to explain its reasoning. You can't treat an LLM like a computer. You have to treat it like a weird brain.
The problem with these answers is that they are right but misleading in a way.
Glass is not a pure element, so that temperature is the "production temperature"; as an amorphous material it "melts" the way a plastic material "melts", and it can be worked at temperatures as low as 500-700 °C.
I feel like without a specification the answer is wrong by omission.
What "melts" means when you are not working with a pure element is pretty messy.
This came up in a discussion for a project with a friend who is too obsessed with GPT (we needed that second temperature and I was like, "this can't be right... it's too high").
Yes. This is funny when I know what is happening and I can "guide" the LLM to the right answer. I feel that is the only correct way to use LLMs and it is very productive. However, for learning, I don't know how anyone can rely on them when we know this happens.
I mean, likely yes, but if you have to spend the time to learn to prompt correctly, I'd rather just spend that time learning the material I actually want to learn.