
If the code I wrote based on my newly-acquired insight works, which it does, that's good enough for me.

Beyond that, there seems to be some kind of religious war in play on this topic, about which I have no opinion... at least, none that would be welcomed here.




ML and DSP are both areas where buggy code seems to work, but actually gives suboptimal performance / wrong results. See: https://karpathy.github.io/2019/04/25/recipe/#2-neural-net-t...

> The “possible error surface” is large, logical (as opposed to syntactic), and very tricky to unit test. For example, perhaps you forgot to flip your labels when you left-right flipped the image during data augmentation. Your net can still (shockingly) work pretty well because your network can internally learn to detect flipped images and then it left-right flips its predictions. Or maybe your autoregressive model accidentally takes the thing it’s trying to predict as an input due to an off-by-one bug. Or you tried to clip your gradients but instead […]

> Therefore, your misconfigured neural net will throw exceptions only if you’re lucky; Most of the time it will train but silently work a bit worse.
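
To make the first example in that list concrete, here is a contrived sketch (mine, not from the post) of flip augmentation for a classifier whose label encodes left vs. right. It compiles, training still "works", but the label is never flipped, so the augmented samples are silently mislabeled:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Sample {
        std::vector<std::vector<float>> image;  // image[row][col]
        int label;                              // 0 = points left, 1 = points right
    };

    // Augmentation: mirror the image horizontally.
    Sample flipLeftRight(Sample s) {
        for (auto& row : s.image)
            std::reverse(row.begin(), row.end());
        // BUG: for a left/right label this should also do s.label = 1 - s.label;
        // nothing throws, the net just sees partly mislabeled data.
        return s;
    }

    int main() {
        Sample s{{{0.f, 1.f}, {0.f, 1.f}}, /*label=*/1};
        Sample flipped = flipLeftRight(s);
        std::printf("label after flip: %d\n", flipped.label);  // still 1 ("right")
    }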


Actually Karpathy is a good example to cite. I took a few months off last year and went through his "Zero to hero" videos among other things, following along to reimplement his examples in C++ as an introductory learning exercise. I spent a lot of time going back and forth with ChatGPT to understand various aspects of backpropagation through operations including matmuls and softmax. I ended up well ahead of where I would otherwise have been, starting out as a rank noob.
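
To give a flavor of the kind of detail involved, here is a minimal sketch (an illustration, not my actual code) of the part that took me longest to internalize: when softmax feeds into a cross-entropy loss, the gradient with respect to the logits collapses to "probabilities minus the one-hot target":

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Numerically stable softmax over a vector of logits.
    std::vector<double> softmax(const std::vector<double>& logits) {
        double m = *std::max_element(logits.begin(), logits.end());
        std::vector<double> p(logits.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < logits.size(); ++i) {
            p[i] = std::exp(logits[i] - m);
            sum += p[i];
        }
        for (double& v : p) v /= sum;
        return p;
    }

    // Gradient of cross_entropy(softmax(logits), target) w.r.t. the logits:
    // dL/dlogit_i = p_i - [i == target]
    std::vector<double> softmaxCrossEntropyBackward(const std::vector<double>& probs,
                                                    std::size_t target) {
        std::vector<double> grad = probs;
        grad[target] -= 1.0;
        return grad;
    }

    int main() {
        std::vector<double> logits = {2.0, 1.0, 0.1};
        std::vector<double> probs = softmax(logits);
        std::vector<double> grad = softmaxCrossEntropyBackward(probs, 0);
        for (double g : grad) std::printf("%f\n", g);
    }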

Look: again, this is some kind of religious thing where a lot of people with vested interests (e.g., professors) are trying to plug the proverbial dike. Just how much water there is on the other side remains to be seen. But finding ways to trip up a language model by challenging its math skills isn't the flex a lot of you folks think it is... and when you discourage students from taking advantage of every tool available to them, you aren't doing them the favor you think you are. AI got a hell of a lot smarter over the past few years, along with many people who have found ways to use it effectively. Did you?

With regard to being fooled by buggy code or being satisfied with mistaken understanding, you don't know me from Adam, but if you did you'd give me a little more credit than that.


I'm not a professor and I don't have any vested interest in ChatGPT being good or bad. It just isn't currently useful for me, so I don't use it. In my experience so far it's basically always a waste of my time, but I haven't really put in that much work to find places where it isn't.

It's not a religious thing. If it suddenly becomes significantly better at answering nontrivial questions and stops confidently making up nonsense, I might use it more.


> following along to reimplement his examples in C++ as an introductory learning exercise.

Okay, that? That's not what people are usually doing when they say they used ChatGPT as a tutor. It sounds more like you used it as a rubber duck.


When the duck talks back, you sit up and listen. Or at least, I do.


You are obviously experienced and have knowledge of advanced abstract topics.

For you, using ChatGPT as a natural-language (if flawed) search mechanism is fine, and even more efficient than some alternatives.

Advocating that it would be just as useful and manageable for inexperienced young students, who have far less context in their minds, is disingenuous at best.



