
The motion to dismiss is revealing.

https://regmedia.co.uk/2024/10/15/dismiss.pdf

The AI policy starts at the bottom of page 5. Students have to disclose any use of AI, even for generating ideas. They must include an appendix with "the entire exchange, highlighting the most relevant sections" and explain how and why everything was used.

It seems overly strict to me, perhaps hastily written when ChatGPT first became popular.

Pretty soon most students are going to be saying "Hey Siri, help me with my homework, it's about X" and getting an AI answer - are they all academically dishonest?

Kids need to learn how to think and have their own ideas; then, as adults, if they want to give in to the mediocrity of offloading their thinking to machines, they'll have that chance.

AI is externalized assistance. Used one way, it can be a guide that helps an individual learn. Used differently, it's no different from asking someone else to do your work for you.

The issue here is academic honesty, not necessarily the definition of "AI". Should a student using Grammarly submit all drafts of their work and cite the changes they didn't make? Should students receiving external help in the form of parents and tutors cite that assistance?

I don't think that's necessarily a bad thing in an age when ubiquitous search and the internet democratize access to information resources. It's trivial to duplicate documents today, and it's no more of a burden for students to disclose how they are writing.

When I was growing up, teachers knew that students whose families didn't have a home library, or had only one car and didn't live near the library, were at a disadvantage, so research periods were granted during class. Essays were written and turned in during class periods, and sudden changes in handwriting or style were easy to catch.

Today, the challenges are different, so it seems fair to change the requirements and criteria in response. I've advised friends in education to try assigning AI-generated papers with citations as tests, asking students to correct them and expand on them from sources.

Asking Siri whether information matches a particular source still isn't possible, and if you're going to go through the effort of compiling a bunch of sources for RAG, I think any student equipped to do that would find it more efficient to simply do the work directly.


It's all meaningless anyway. What's the difference between me asking ChatGPT something and using the answer, and using some website as a reference when the website itself could have been written with AI without me even knowing?

I do think that could be academic dishonesty, depending on what came out of ChatGPT and how it was used.

Let's also be perfectly clear that "Can you help me with homework about X" is probably not the question actually being asked. We know that assignment questions are just being pasted in verbatim. That is absolutely academic dishonesty.

This case is a little different. But let's not pretend that teachers in many subjects aren't being taken advantage of and screwed over by these tools and by students willing to use them. I'm sure they are all frustrated and willing to jump the gun at the slightest sign of this sort of thing. It's a tragedy-of-the-commons situation; it's going to ruin academic culture in the US, imo, if extremely strict rules aren't laid down, and quickly.


>Pretty soon most students are going to be saying "Hey Siri, help me with my homework, it's about X" and getting an AI answer - are they all academically dishonest?

I mean, it doesn't matter in the long run, since academia will soon be as entirely AI-driven as education, and the whole concept of "academic integrity" will be nothing but a quaint atavism from the days when the human in the loop was actually relevant. But yes.
