
Maybe, but I doubt it. Spyware-based systems are doomed to failure as other commenters note. There's nothing you can do to prove the text came from a human. Faking inputs is extremely easy. People will sell a $20 USB dongle that does appropriate keyboard/mouse things. Worst case, people can simply type in the AI generated essay by hand and/or crib from it directly.
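
To see how low that bar is: a minimal sketch of such a dongle, assuming a cheap CircuitPython board with native USB and the adafruit_hid library (the file name and timing values are illustrative):

    # enumerate as a USB keyboard and "type" a prepared essay with human-ish timing
    import time
    import random
    import usb_hid
    from adafruit_hid.keyboard import Keyboard
    from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS

    ESSAY = open("essay.txt").read()  # AI output, pre-loaded onto the board

    kbd = Keyboard(usb_hid.devices)
    layout = KeyboardLayoutUS(kbd)

    time.sleep(5)  # let the OS enumerate the "keyboard" first
    for ch in ESSAY:
        layout.write(ch)
        time.sleep(random.uniform(0.05, 0.3))   # jitter between keystrokes
        if ch == " " and random.random() < 0.02:
            time.sleep(random.uniform(1.0, 4.0))  # occasional "thinking" pause

To the host OS this is indistinguishable from a physical keyboard.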

Schools are going to have to look at why take-home work is assigned, and whether it should be part of a grading system at all. My hunch is that it probably shouldn't be, and even though that's a big change, it's probably something they can navigate.

I predict more in-person learning interactions.




It's a cat-and-mouse game for sure. At the first level, any dongle that simply types the AI response through a fake HID device will be easy to detect. No real essay writer just types an entire document in one go, with no edits. They move paragraphs around, expand some, delete others, etc.
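
A first-pass detector doesn't need anything fancy; crude statistics over the keystroke log already catch the naive replay. A sketch in Python (the event format and thresholds are invented for illustration):

    # flag sessions whose typing rhythm looks machine-generated
    # events: list of (timestamp_seconds, key_name) tuples from the logger
    import statistics

    def looks_scripted(events, min_keys=200):
        if len(events) < min_keys:
            return False  # not enough data to judge
        gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
        mean = statistics.mean(gaps)
        cv = statistics.stdev(gaps) / mean if mean > 0 else 0.0
        edits = sum(1 for _, key in events if key in ("BACKSPACE", "DELETE"))
        # humans are bursty (high variance) and constantly self-correct;
        # a naive replay is metronomic and never touches backspace
        return cv < 0.3 or edits / len(events) < 0.01

The jittered dongle sketched above already beats the timing check, of course; hence the escalation that follows.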

So this dongle will have to convincingly start with a worse version that's too short (or too long!). It'll have to pipe the GPT output through another process to mangle it, then un-mangle it the way a human would as they revise and update.
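
Concretely, that means the dongle stops replaying a finished document and starts replaying a fabricated edit history. A toy version of the mangle/un-mangle idea (abstract events a replay engine would turn into keystrokes; all names and rates are illustrative):

    # turn a finished essay into a fake revision history:
    # type a shuffled, typo-ridden draft, then "fix" it back to the original
    import random

    def inject_typos(paragraph, rate=0.03):
        # crude mangling: randomly drop letters
        return "".join(ch for ch in paragraph
                       if not (ch.isalpha() and random.random() < rate))

    def fake_edit_history(final_text):
        paragraphs = final_text.split("\n\n")
        draft_order = paragraphs[:]
        random.shuffle(draft_order)  # the "first draft" is out of order
        events = []
        for p in draft_order:
            events.append(("type", inject_typos(p)))
            events.append(("pause", random.uniform(5, 60)))  # stop and "think"
        for i, p in enumerate(paragraphs):  # second pass: revise like a human
            events.append(("select_paragraph", i))
            events.append(("retype", p))  # replaces the mangled draft text
        return events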

If trained on the user's own previous writings, it can convincingly align the AI's response with the voice and tone of the cheater.
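
That part is cheap, too: stuff a few of the student's past essays into the prompt as few-shot examples. A sketch of the prompt construction (wording invented; the result goes to whatever model the cheater likes):

    # build a few-shot prompt so the model imitates the student's own voice
    def style_prompt(past_essays, assignment):
        samples = "\n\n---\n\n".join(past_essays[:3])  # a few prior writings
        return (
            "Here are writing samples from one author:\n\n"
            f"{samples}\n\n"
            "Write an essay on the assignment below in exactly this author's "
            "voice, vocabulary level, and typical sentence length. Preserve "
            "their quirks, including minor imperfections.\n\n"
            f"Assignment: {assignment}"
        )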

Then the spyware will have to do a cryptographic verification of the keyboard ("Students are required to purchase a TI-498 keyboard. $150 at the bookstore") to prevent the dongles. There will be a black market in mod chips for the TI-498 that allow external input into the traces on the keyboard backplane. TI will release a better model that is full of epoxy and a 5G connection that reports tampering...
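
The attestation the hypothetical TI-498 would run is a bog-standard challenge-response over a key baked in at manufacture. A sketch with HMAC (key handling and framing invented for illustration):

    # challenge-response: host checks it's talking to a genuine TI-498
    import hmac, hashlib, secrets

    DEVICE_KEY = b"\x00" * 32  # placeholder; burned into the keyboard at the factory

    def host_challenge():
        return secrets.token_bytes(32)  # fresh nonce per session

    def keyboard_respond(nonce):
        # runs on the keyboard's microcontroller
        return hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()

    def host_verify(nonce, response):
        expected = hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

Which is exactly why the mod chip wins: the attestation proves the keyboard is genuine, not that the electrical signals on its key matrix came from fingers.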

... Yeah, I also predict more in-person learning :)


Sure, but all of the above regarding making input look human is trivially easy -- because, again, AI.

More stringent hardware-based input systems are likely non-starters due to ADA requirements. For example, disabled students have their own input systems, and a college will have to allow them reasonable accommodations. Then there are the technical challenges. Some authoritarian-minded schools might try this route, but I hope saner heads will prevail and they'll re-evaluate why take-home work exists in the first place, and whether it's actually a problem for students to use AI to augment their education. Perhaps it isn't!


> whether it's actually a problem for students to use AI to augment their education.

To augment? No, but the problem is we can't tell the difference between a student who is augmenting their education with AI, and a student who is replacing their education with AI. Hence things like in-person proctored exams, where we can say and enforce rules like "you're allowed to use ChatGPT for research, but not to write your answers for you".


I'd build a structure/robot that I'd attach to my keyboard, and it would press the keys.

I started to write how it would be possible to control for that, but it got too Orwellian/horrible and I stopped.


> I predict more in-person learning interactions.

Which would be a huge benefit for the overall quality of education. A lot of students can write a passable essay in a word processor with spell check and tutors... but those same students sometimes have absolutely no idea what they've written. Group assignments have taught me this many times over.



