I spent some years as a teacher and so have some first-hand experience here; my take, for what it's worth, is that LLMs have indeed blown a gaping hole in the structure of education as actually practiced in the USA. That hole is: much of education is based on the assumption that the unsupervised production of written artifacts is proof of progress in learning. LLMs can produce those artifacts now, incidentally disrupting the paid essay-writing industry (one assumes).
From this, I agree with the article - since educators now have to figure out another hook to hang evaluation on, the questions "what the hell does it mean to 'learn', anyway?" and "how the hell do we meaningfully measure that 'learning', whatever it is?" have even more salience, and the humanities certainly have something meaningful to say on both of them. I'll (puckishly) predict that recitations and verbal examinations are going to make a comeback - harder to game at this point, but then, who knows how long 'this point' will last?
I think the main subject of that hole is objectivity, and it affects much more than education.
By obsessively structuring education around measurement, we have implied that what is taught is objective fact. That's always been bullshit, but - for the most part - it's been consistent enough with itself to function. The more strict your presentation of knowledge is, the more it can pretend to be truly objective. Because a teacher has total authority over the methodology of learning, this consistency can be preserved.
The reality has always been that anything written is subjective. The only way to learn objective fact is to explore it through many subjective representations. This is obvious to anyone who learns mathematics: you don't just see x^2+y^2=z^2, and suddenly understand the implications of the Pythagorean theorem.
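A toy sketch of what I mean by "many representations of one fact" (my own illustration, in Python, with made-up names): the same 3-4-5 triangle, checked once through the bare algebra and once through distance in the plane.

    # Illustrative sketch: one objective fact, two subjective representations.
    import math

    a, b, c = 3.0, 4.0, 5.0

    # Representation 1: the bare algebraic identity a^2 + b^2 = c^2.
    algebraic = math.isclose(a**2 + b**2, c**2)

    # Representation 2: the distance from the origin to the point (a, b),
    # which is the same theorem dressed up as geometry.
    geometric = math.isclose(math.dist((0.0, 0.0), (a, b)), c)

    print(algebraic, geometric)  # True True

Neither representation is "the" theorem; understanding lives in seeing that both are.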
Because objectivity can't be written, objectivity is not computable. We can't give a computer several subjective representations of a concept and have it figure out the underlying idea. This is the same problem as ambiguity: there is no objectively correct way to compute an ambiguous statement. This is why we write in programming languages: each one has a formal grammar, so everything must be completely and explicitly defined before the machine can act on it. To write a computable statement, we must subject the abstract concept in our minds to the language (grammar and environment) we write it in.
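To make that concrete, here's a toy sketch (my own illustration, plain Python, nothing tied to any particular tool): the English phrase "a plus b divided by two" has two readings, and the code refuses to exist until you commit to exactly one of them.

    # Illustrative sketch: an ambiguous English phrase admits two readings;
    # a programming language makes us pick one explicitly.

    def sum_then_halve(a, b):
        # Reading 1: "(a plus b), divided by two"
        return (a + b) / 2

    def add_half(a, b):
        # Reading 2: "a, plus (b divided by two)"
        return a + b / 2

    print(sum_then_halve(4, 2))  # 3.0
    print(add_half(4, 2))        # 5.0

Neither reading is the "objectively correct" computation of the sentence; the language just won't run until you've picked one.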
Most of the writing in our society is done with software. Because of this, we are implicitly encouraged to establish shared context. If what we write is consistent with the context of others' writing, it can pretend to be objective, and be more computable. Social interactions that are facilitated by software are also implicitly structured by that software's structure. There can be no true free-form dialogue in social media.
The exciting thing about LLMs is that they don't have this problem. They conveniently dodge it, by not doing any logic at all! Now instead of the software subjecting you to a strict environment, it subjects you to a familiar one. There is no truth, only vibes. There is no calculation, only guesses. This feels a lot like objectivity, but it's just a new facade. The difference is that the boundary is somewhere new and unfamiliar.
---
I think the death of objectivity is good progress overall. It's been a long time coming. Strict structure has always been overrated. The best way to learn is to explore as many perspectives as you can find. We humans can work with vibes and logic, together.
For a long time, schools assumed that if a student turned in a written essay, it meant they had learned something.
But now AI can write those essays too, so that assumption doesn’t hold up anymore.
The real question is: if writing alone doesn’t prove learning, then what does?
Maybe we’ll see a return to oral exams or live discussions. Not because they’re perfect, but because they’re harder to fake.
In a way, AI didn’t ruin education; it just exposed problems that were already there.
The definition of learning has not changed. One of our first written records is a man complaining that this newfangled writing thing will make the students lazy: they will no longer have to remember their studies.
> One of our first written records is a man complaining that this newfangled writing thing will make the students lazy: they will no longer have to remember their studies.
And if your summary is correct, he was right. If you don't remember what you've learned (i.e. integrate it into your mind), you haven't learned anything. You're just, say, a book operator.
The particular problem here is the number of staff needed to actually administer and grade these kinds of tests. We're already talking about how expensive education is; just wait till this happens.
Exactly my point. It sometimes takes time for education to adapt to new tools: writing, calculators, search engines, etc. I was in the last semester to learn mechanical drawing; the next semester learned CAD. Education adapts.
Bingo. If you want to just write an essay, then ChatGPT is perfect. If you want to write an essay that says something very particular, then ChatGPT starts to give you issues.