Noam Chomsky on ChatGPT: High-Tech Plagiarism, Way of Avoiding Learning (openculture.com)
57 points by brianmcgee on Feb 12, 2023 | 127 comments



This thread is already full of people throwing in cheap takes after reading only the headline, or not even that.

Listen to his answers. He's not making controversial claims, except perhaps the one that LLMs teach us nothing about language.

He says:

- ChatGPT makes it easier to cheat. This is obviously true.

- ChatGPT is a method of avoiding learning. In the context of the interview he's referring to its use not as an information retrieval tool but as an actual essay-writing tool. If we consider the latter, this is again obviously true. Not writing your essays teaches you nothing about essay writing, the topic at hand, or the process of thinking.

- LLMs teach us basically nothing about language. Seems controversial, but I'm not an expert at all.

- Education is in trouble because students are not engaged in education. We need education systems that are actually interesting and engaging. Again, this is obviously true.

- It's not obvious what value ChatGPT has. Kind of controversial, but right now ChatGPT seems mostly used for funny Tweets, SEO optimization, and marketing tools, so I don't think he's really saying much to get angry about here.


ChatGPT makes it easier to cheat, but cheating in that way was always an option, as long as you had enough money to pay someone else to do it for you. So in that sense, all ChatGPT does is expose the flaws in the system by making this form of cheating available to everybody. I have to wonder why this is only now considered a serious problem that undermines education, when it was mostly ignored while the option was only available to rich kids.

I think this can only be solved by having a clear separation between learning and obtaining the credentials that certify you possess the knowledge expected of you for some job. Universities should be centers of learning, and students should be writing their essays because they recognize the value that doing so provides to them, not just because they need a certificate issued by the university to apply to the jobs they desire. The certificate part should come from elsewhere, probably through means that make cheating much harder, such as exams and individual interviews. This would also open the door to other avenues of learning that may be better suited to some people; e.g., self-taught people may have more success learning from a book or an online resource than from sitting through several hours of lectures every day.


> I have to wonder why this is only now considered a serious problem that undermines education, when it was mostly ignored while the option was only available to rich kids.

I think you've answered your own question: it's a matter of scale. Systems can tolerate a certain proportion of bad actors, but there is an inevitable tipping point at which the number of bad actors grows enough that the system becomes unstable.

But I 100% agree that university credentials are only a very rough proxy for competence. I also agree that internal motivation is better than external motivation.

Your suggestion is seen in other professions like medicine or capital-E Engineering, where there are boards/exams and continuing education to certify competence in the form of licensure. This comes with its own problems, though, like how the licensing boards can become a cabal protecting their own self-interest (see the complaints about the AMA limiting doctor licensure in an effort to maintain higher pay for physicians). It also gives more leverage to the individual, which many industry lobbyists would probably be against.


Yes, there were always rich kids that were able to cheat, using college just for the status and credentialism, and there still are. What Chomsky is interested in, and what we should be interested in, is how we get rich kids like J. Robert Oppenheimer.

Oppenheimer didn't need to become educated in and world class at physics, but he did it anyway, and it's probably the lack of need which increased his interest and ability.

I think your separation idea is interesting. At first blush, it seems a good move to keep examination as far away from education as possible, as they're very different things.


That ChatGPT is good at truthfully answering one kind of prompt (analytic, where the facts are contained in the prompt, so the task is translation) and bad at truthfully answering another kind of prompt (synthetic, where the facts are not contained in the prompt, so the task is synthesis), tells me something about the nature of language.

It turns out that setting up solutions to math problems is mainly a language translation task, followed by a computation task, as long as you have a way to eval() the resulting thunk:

  Do not perform calculations. Do not compute the answer.
  Use Javascript Math and Date to perform calculations.
  Wait for further questions. Show your work in the DESCRIPTION.
  The number of steps in the %%%THUNK%%% should match the number of
  steps in the %%%DESCRIPTION%%%. Always answer with Javascript
  compatible code in the %%%THUNK%%%, including numerical form
  without commas (eg, 238572348723). Always answer with this JSON
  compatible object form, eg, {"key":%%%VALUE%%%}:

  {
    "question": "4 days a week, Laura practices martial arts for
      1.5 hours. Considering a week is 7 days, what is her average
      practice time per day each week?",
    "description": ["Multiplying 1.5 and 4",
      "Dividing the step 1 answer by 7"],
    "thunk": "(function() { const step1 = 1.5 * 4;
      const answer = step1 / 7; return answer; })()"
  }
Analytic augmentation is the procedure by which a question is augmented in a manner that promotes translation over synthesis. That this can be tested empirically is definitely something novel with regard to the philosophy of language!
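
To make the mechanism concrete, here is a minimal sketch of the consuming side, assuming the model replies with the JSON object form the prompt demands (the harness is my own illustration, not the commenter's actual code):

  // Hypothetical harness (a sketch, not the commenter's tooling):
  // parse the model's JSON reply, then eval() the thunk locally so
  // the arithmetic is done by the JS engine, not the language model.
  const reply = `{
    "question": "4 days a week, Laura practices martial arts for 1.5 hours. Considering a week is 7 days, what is her average practice time per day each week?",
    "description": ["Multiplying 1.5 and 4", "Dividing the step 1 answer by 7"],
    "thunk": "(function() { const step1 = 1.5 * 4; const answer = step1 / 7; return answer; })()"
  }`;

  const parsed = JSON.parse(reply);

  // Rough consistency check: one assignment in the thunk per
  // description step, as the prompt demands.
  const steps = (parsed.thunk.match(/const /g) || []).length;
  console.assert(steps === parsed.description.length);

  // The model only *translated* the question into code; the actual
  // computation happens here.
  console.log(eval(parsed.thunk)); // 0.8571428571428571

Because the model only performs translation, the number itself comes from the local JS engine rather than from token prediction.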


I've asked ChatGPT to take the derivative of an expression and the one it gave back to me was off by a minus sign. I tried to get it to calculate the expression for the height of a rocket flying vertically in constant gravity and the answer it gave me was just the rocket equation with h = instead of v = and I then got into a curious discussion with it about dimensional analysis where it just gaslit the crap out of me. If you think it is good at "analytic" tasks you haven't challenged it much.


And I’ve applied these principles of analytic augmentation to make the results of otherwise synthetic prompts more reliably truthful and in a testable and reproducible manner.

You seem to be thrashing around wildly while using these tools and I can’t imagine why anyone would be interested in whatever conclusions result from such lack of methodology.


> It's not obvious what value ChatGPT has. Kind of controversial, but right now ChatGPT seems mostly used for funny Tweets, SEO optimization, and marketing tools, so I don't think he's really saying much to get angry about here.

Or in other words, it is great at being a confident sophist, and its use-cases exclude almost anything serious that requires explainability or trustworthiness, especially legal, medical, and financial applications, or even search engines. If you cannot trust the output of an AI and it keeps hallucinating its answers, then what is the point?

If OpenAI was being 'responsible' or following 'responsible AI' guidelines, maybe they should have written the detectors first, before releasing ChatGPT and GPT-3 into the wild?

This AI hype cycle is no different from the one around GPT-3, which fizzled out just like Clubhouse did. The only way forward is open-source AI models.


This seems mostly right. But ChatGPT can also make it a lot easier to learn, if one wants to learn. I think you need to be able to critically evaluate its output for that, though, and to formulate your ideas on some level. If one never obtains those skills, perhaps learning with ChatGPT won't be of much use.


I beg to differ. You can try to learn, sure, but it's tricky, to say the least. You learn from people with years of XP in the subject, many references, among other things. The knowledge you gain from them is vetted by pretty much the entire scientific community.

If you're not an expert in any field, ChatGPT can give you any answer and you'll take it as good and valid. That's a dangerous precedent.

And just in case you say "you can double-check what ChatGPT is throwing at me": well, that's wishful thinking. People hardly go the extra mile, least of all when studying is involved. It's difficult to concentrate on just one subject, let alone the whole curriculum, and that's with guidance. What type of guidance does ChatGPT offer? How are you going to measure the student? If he/she fails, what are the options?

There are lots of questions to be answered before we incorporate ChatGPT into the current education system.

I'm not saying we couldn't do it, I'm just saying that while tech is there to help us, this time around it's dangerous, because it can do more harm than good and could, in fact, set humanity back if mishandled.


I think the value that ChatGPT could (emphasis on could) provide as a learning tool is the same value that a REPL provides to coding: instant feedback in a playground environment. It would be immensely useful for anyone learning anything to be able to say "I don't understand [specific thing] based on our prior conversation, please elaborate", or "give me a new example of [specific thing]", or "explain why my thought process about X is wrong". It would really boost engagement with learning, I think, by lowering the barrier to getting unstuck on something, if all the BSing and incorrect info could be worked out.


Chomsky's universal grammar vs. the probabilistic model behind tools like ChatGPT or Google Translate is a pretty old debate.


Yes I know, that's why I said it seems controversial.


> It's not obvious what value ChatGPT has.

This is a blatantly incorrect take. Anyone who can’t see that ChatGPT has real value has blinders on.

It has severe limitations and many quirks, but millions of people are getting value from it.

I use it regularly while developing software. If you learn how to leverage its strengths and are aware of the pitfalls, it isn’t too hard to get value out of.

It is also a good investment to learn how to use these tools, as they will become much more capable and useful over time.


I predict the rise of what I call "mental obesity".

Once the need to perform physical work vanished, those who did not consciously decide to train had a much higher chance of becoming obese. I expect that the mental capacities of some people will atrophy by the same effect.

What we will see is a separation between those who are taught why they write and decide to comply and those who just take the easy way out.


I like this term. It even extends to concepts like hyper-palatable foods, with the equivalent being hyper-palatable entertainment (like TikTok or YouTube Shorts) that similarly short-circuits the brain's reward loop.


Look at long-term stats on mathematics ability and literacy in the rich Western countries. They do not paint a good picture. One thing I know off the top of my head is that SAT literacy scores have trended slightly down over the past 40 years, for both low-achievers and high-achievers.[1]

Australia, one of the richest countries in the world, is experiencing a multi-decade decline in mathematical standing.[2] Australia is fabulously rich, but is doing a worse and worse job of producing citizens who are numerate and understand mathematics. I haven't looked at literacy performance in the country, but I'd guess it's declining as it is in the USA, since Australia is subject to broadly the same pressures of political economy and technology.

1. Cultural Literacy by E.D. Hirsch Jr.

2. https://www.sciencedaily.com/releases/2022/05/220517094842.h...


On the other hand, you might also see an increased interest in doing things with your brain (e.g. writing poetry or jokes) because it's rewarding, much as you see now with interest in endurance sports, climbing, etc., or the interest in hand-making things now that we produce most things with highly scaled and automated processes.


[flagged]


Sure, and do you see people memorizing Odyssey-length works and reciting them anymore? Just because you live in the world of the lost does not mean there was no loss.

For (much) more on this, see Orality and Literacy by Walter Ong.


This is a somewhat clickbaitified headline:

> That students instinctively employ high technology to avoid learning is "a sign that the educational system is failing." If it "has no appeal to students, doesn’t interest them, doesn’t challenge them, doesn’t make them want to learn, they’ll find ways out," just as he himself did when he borrowed a friend’s notes to pass a dull college chemistry class without attending it back in 1945.


> "For years there have been programs that have helped professors detect plagiarized essays,” Chomsky says. “Now it’s going to be more difficult, because it’s easier to plagiarize. But that’s about the only contribution to education that I can think of.” He does admit that ChatGPT-style systems “may have some value for something,” but “it’s not obvious what.”

The headline makes it seem like Chomsky hates ChatGPT when he really just seems indecisive.


And he enumerates the ways in which it could be misused. It's a nothingburger; it doesn't put Chomsky in a bad light at all.


I prefer Zizek's take on it: " that AI will be the death of learning & so on; to this, I say NO! My student brings me their essay, which has been written by AI, & I plug it into my grading AI, & we are free! While the 'learning' happens, our superego satisfied, we are free now to learn whatever we want"


Chomsky kind of hints at this in the interview. He says he doesn't have a problem with disengaged students at MIT, and though he doesn't say it, I think it's obvious that his MIT students are not subjected to typical economics-driven educational pressures.

While other students are treating school as a means to a good salary and just want the highest marks for the least effort, Chomsky's graduate MIT students will be ignoring all that stuff and just having classic educational debate in essays and in conversation.

With Zizek it's the same. He doesn't have to bother with disengaged students, students who are desperate for college attendance to grant them a ticket into the comfortable upper middle classes and care not for the substance of learning. You take Zizek classes because you actually like philosophy and want to engage with it.


You learn a lot from ChatGPT though. If you want to know the ordinary answer to some essay question, cgpt will give you that. If you leave it at that and hand it in, maybe Chomsky is right.

But it's also an essay-writing productivity tool. You sketch out what you want mentioned and cgpt will fill in the gaps, like those apprentices who helped the Renaissance masters. I keep coming back to the phrase "small intentions" when I discuss cgpt: in both essays and code, it is already quite good at filling in details.

With less time spent on details, students can spend more time on the architecture. Ultimately, we'll have more people able to practice higher level creativity. This is already a thing that you do in math class; after your primary school years, you get a calculator to work out the arithmetic.


The problem with using ChatGPT for "filling in details" is how often it is confidently, utterly wrong.

My friend tried to use it to write a legal letter and it completely invented a state law and cited it very specifically.

For him, it was annoying and amusing. For other people, mistakes like that could be very serious.


> after your primary school years, you get a calculator to work out the arithmetic

Do we? I do a lot of mental arithmetic as a software developer. When I was an exchange student my host-dad was an engineer and he was incredibly good at it. As he was thinking through a problem he'd do all sorts of ballpark math to get a feel for things. Eventually, yes, the numbers would all get crunched properly. But even then he'd also be doing the math mentally to make sure that something hadn't gotten screwed up. A person who uses a calculator for everything starting from a young age just doesn't develop that skill.

And I think there will be a similar problem with ChatGPT. Can an excellent writer get a performance boost starting with ChatGPT? The jury's out, but it's not impossible. But can one become an excellent writer by tarting up LLM-generated prose? I doubt it. Learning to write is a process of sweating the details. Especially so when the output is an essay. Real higher-level creativity requires foundational work. I think at best you'll get the sort of content that comes when an uninspiring politician hires a mediocre ghost writer.

That said, a lot of writing is bullshit. And I mean that both in the Frankfurt sense ("speech intended to persuade without regard for truth") [1] and in the Graeber sense ("completely pointless, unnecessary, or pernicious") [2]. And that's where I think ChatGPT will shine. Marketing copy to fill in the "lorem ipsum" space on the product page? ChatGPT to the rescue. Long sections of TPS reports that nobody will read? ChatGPT again. And once they expand ChatGPT to being able to make PowerPoint decks, a zillion ineffective middle managers will be able to return their focus to Candy Crush, harassing the interns, and creating pointless meetings.

I think that's also bad, of course. Lowering the cost of bullshit production looks like a net societal negative to me. But it would be successful in the traditional tech industry metrics of "gains users" and "produces revenue".

[1] https://en.wikipedia.org/wiki/On_Bullshit

[2] https://en.wikipedia.org/wiki/Bullshit_Jobs


> A person who uses a calculator for everything starting from a young age just doesn't develop that skill.

But is that a reasonable fear? It's actually quite hard to use a tool without understanding it, so you won't get a lot of people reliant on calculators who don't understand calculation. I've certainly never met someone for whom it was a problem.

About using tools you don't understand, this is easy to try. Get some film editing software or 3D rendering software (assuming you don't know this area, like me) and see where you get to. I've never been able to use a tool without understanding at least something about what it does.


> It's actually quite hard to use a tool without understanding it

The whole point of many tools is to not understand what's going on. I had a friend in college who could happily drive a car, but had been discouraged from understanding how it worked, because opening the hood of a car was not ladylike. So whenever anything happened, she would be filled with anxiety and take it to a mechanic, even for perfectly mundane things like putting in more coolant.

Similarly, people who have a math aversion can lean on calculators to stay innumerate. I see this happen frequently when I pay for something in cash using change. E.g., the bill will be $9.77. I'll see that and think, "Oh, I want the quarter", so I'll put down a $10 bill and 2 pennies ($10.02 − $9.77 = $0.25). Some people are fine with it, but a notable fraction of cashiers (whose whole job is handling money!) will look at me like I have two heads. But they'll dutifully punch it in and act like I'm a wizard when it comes out to be a single coin.

> Get some film editing software or 3D rendering software

The correct analogy here is to the word processor, not ChatGPT. If somebody who doesn't know how to write opens up Word and expects an essay to come out, they'd be similarly stymied. Or we could look at essay-writing services. A quick search turns up places that are for "learners seeking assistance". But I think we all know that nobody is really using those services for inspiration or a rough draft so that they can hone their essay-writing skills with the assistance of a professional.


I taught high school math; there are many students who rely on calculators for even the most basic calculations because they can't do them, and who have no concept of what it means to, say, multiply two numbers or how to even go about doing it.


The model for mathematics is something like a “good book” of once-entered, unchanging posable questions and theorems. More may be added, but once added there is a sense of permanent definitiveness and relevance.

For your example and the beyond-mathematics calculator analogy, maybe the dates of historical events are equally definitive, but I don't see much beyond that being so. Science goes through transformations. Either the questions remain the same (what is there, what exists) but the answers to them change as science develops, or the questions change and the answers don't (what did phlogiston, aether, or general relativity predict?).

Your expanded “calculator” paradigm seems unfounded. It’s not analogous to math.


> Your expanded “calculator” paradigm seems unfounded.

Please explain. GPT makes a utility out of something that previously was not one. The GP didn't say it was exact.


Right, they didn't say exact. Exactness isn't the concept I'm concerned about.

There is something unique about mathematics in that its answers never change, and that static structure can be tapped into with calculators. A "calculator" outside this domain reeks of potential danger to me. Asking a question of some other field whose answers are open to revision doesn't give one transparent access to timeless knowledge. There will always be a caveat that the answers given by GPT are clouded by time and circumstance. You can't just copy the answers GPT gives you and call that knowledge, the way calculator results allow.

I would hope GPT knowledge comes with the understanding that GPT v.XX, exposed to Y circumstances, produced Z results. We don't do that for calculators.


Is the high level structure of writing any more important than the low level structure?

Maybe it will tend towards everyone sounding the same. Which then gets amplified as LLMs train on writing that is essentially mutations of itself.

Me no like.


> Is the high level structure of writing any more important than the low level structure?

Well, you can make a point about the fall of Rome with badly cited sources or spelling mistakes. But if your essay says Rome fell due to conquest by the Han dynasty, it can't be saved by any number of correctly spelled, beautifully cited details.

ChatGPT will let you write an essay making any point while filling in little things.


It sounds like you're advocating a system of education, thinking, and work that is exemplified by 'Software Architecture Astronauts': people who have not recently (or ever) contended with the details, the nuts and bolts, of actual software development and maintenance.

To be honest, the idea that you can produce valuable intellectual work by outsourcing the details is absurd. This would have seen me absolutely fail my challenging philosophy and engineering coursework, and end up a fundamentally incompetent dilettante in work and life.


Great points! But I want to quibble with this a bit.

> To be honest, the idea that you can produce valuable intellectual work by outsourcing the details is absurd.

It is absurd. But it's also very common! There are an awful lot of people who have done very well not by understanding anything, but by posturing and performing. E.g., take the ghostwritten book. The books of many politicians from all parties could generously be called mediocre. But they don't need to create valuable intellectual work; they just need something anodyne and book-shaped with their name on it.

Surprisingly often, people get away with this kind of absurdity, especially when rich and/or powerful. Indeed, one of the fascinating things for me about Musk's ongoing clown show at Twitter is that we're getting a real-time look at what happens when somebody who refuses to contend with the details fools themselves into thinking they know what's going on.

That totally matches my experience where Architecture Astronauts have free rein. They are rewarded not for making things work, but for talking impressively about big things. So what they produce is a lot of talk. When it's implemented by actual developers, does it work out well? Usually not! But that doesn't matter, because the people who empower them are also people who talk impressively about big things without understanding them.


Totally endorse the quibble. I think ChatGPT will be most often deployed in areas that are tied to a lot of economic value but are also full of bullshitters. Ghostwriting political memoirs is a good example, but it's a niche; SEO spam is the big bucks at the moment.


Well I didn't say you should never concern yourself with the details.

But at some point in your life you will have gone through the details so often you won't be getting them wrong, but you'll still be spending a lot of time on them. With a tool like cgpt you can save yourself a lot of time.

Of course it's up to you where you take the trade. If your thing is a medical device that will kill someone if it isn't checked, maybe you carefully write the code yourself. If you're building some POC website, why not let the AI fill in a few API calls for you?

And don't forget, plenty of details are already outsourced. Entire reference books are full of details that we prefer not to keep memorized.

Plato:

“If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.”

https://www.goodreads.com/quotes/259062-if-men-learn-this-it...

I'm advocating cgpt as a tool that allows you to spend more time at the level of abstraction that is appropriate for the problem. If you're writing an essay about the fall of Rome, you don't want to be spending loads of time finding the details of a reference (year published, secondary author's names, exact title...). You want to be talking about economics of the time, incentives for different groups in Roman society, and so on.


Right, it seems like you're advocating ChatGPT as an information retrieval tool, which seems perfectly acceptable. Looking up exact dates and names is tedious.

By "filling in the details" I thought you were referring more generally to the essay writing process, where the structure of an essay is its skeleton and the "details" are the just as essential pieces of argumentation and evidence.

If you write a sociology, history, or philosophy essay, for example, writing the high-level structure (intro, topic sentence 1, topic sentence 2, ..., conclusion) and then outsourcing the rest would be intellectually bankrupt. You can't produce a synthesis argument of Hume's The Wealth of Nations and Theory of Moral Sentiments with ChatGPT.


Is that bait? Adam Smith, IIRC. Hume did A Treatise of Human Nature.


Not bait, but it could have been! I switched my example halfway through because I couldn't think of a 2nd Hume book, but didn't properly edit.

Edit: Here's the first paragraph of what ChatGPT responds to my incorrect request for that synthesis argument. Thankfully we had you reply and not ChatGPT:

> David Hume's "The Wealth of Nations" and "The Theory of Moral Sentiments" are two of his most famous works that address important aspects of economics and ethics, respectively. While these two works seem to address separate topics, they are related and complement each other in many ways. ...

ChatGPT Prompt: produce a synthesis argument of Hume's The Wealth of Nations and Theory of Moral Sentiments


We can do both; ChatGPT can code. So the Astronauts should be able to produce working systems that pass functional tests. Most detractors of these models don't see the forest for the trees. They only give new capabilities; they remove nothing.


> So the Astronauts should be able to produce working systems that pass functional tests.

Nah. This is just a new version of an old mistake, one people have been repeating for decades. People keep trying to come up with ways for people to "code" without understanding what's going on. Code generation wizards. Visual programming tools. Model driven architecture. And a bunch more.

The hard part about making software isn't writing a bit of starter code. What we saw with the "code wizard" approach is that somebody clueless could click some buttons and get something working. But then they couldn't maintain it. It just kicked the "understand what's going on" problem down the road from "I don't know how to start" to "I am now trapped in a hell of generated code and people are yelling at me".

If some particular application is so standard that it can be produced with only a shallow understanding of how software works, then the right answer isn't using ChatGPT to produce source code; it's having people with a deeper understanding produce an app that is configurable in the right ways.


My 8-year-old asks you to look at it from a different angle and evaluate what it can actually do right now.

I would add, just because someone made a prognostication decades ago does not invalidate the same prognostication now.

Have you tried it?


Those aren't decades-old "prognostications". They are things that happened.

Could it be different this time? Maybe! But if you want to say that it will, then you have to make the argument. Or maybe you can get ChatGPT to do it for you?


Have you tried coding with ChatGPT? It works now. You can use it now to architect a system AND produce working code.


Oh? Then it should be easy for you to point me to an example of working system of real-world complexity built this way, yes?

I'm especially interested to see how the system gets improved over time as users try it out, lessons are learned, needs change, etc, etc.


Please stop.


Stop asking you to back up your questionable claims? Now that you've stopped making them, sure, I'm glad to.


I have not, but others have detailed examples where it produces rubbish. Certainly not the kind of risk to take with safety critical systems like what an astronaut may rely on.


Exactly. Here's an excellent instance of it being confidently and ridiculously wrong: https://www.reddit.com/r/AnarchyChess/comments/10ydnbb/i_pla...

If it can't handle the relatively simple rules of chess, it sure can't handle coding.


> They only give new capabilities; they remove nothing.

This is not how technology works in the world. See, for example, McLuhan's arguments. Technologies are appendages for the human body or intellect, and at the same time they numb the part that they amplify.

Also, the point of the Architecture Astronauts idea is that they don't produce working systems. It's a specific pejorative; it doesn't refer generally to people who work at a high level without harming the systems they engage with.


You clearly have a bone to pick with software architects; if anything, it should reduce the amount of Space Travel for said architects.

McLuhan, a media analysis astronaut. Average is the message.

> Technologies are appendages for the human body or intellect

We previously had fleets of people calculating tables of numbers for hours per day. Simple calculators removed that need and increased the speed of results by orders of magnitude.

You seem to be arguing for a handicap system, an intellectual conservatism. Is that correct?

What does ChatGPT remove from one's ability to write low-level code, or from a software architect's ability to both architect a system AND produce a working model?


ChatGPT's ability to mass produce content just demonstrates the banality of most content. Ditto DALL-E et al.

It reminds me of Dadaism, with the algorithms as the critics vs. the artists. https://en.wikipedia.org/wiki/Dada

--

I also agree with other critical hot takes, like Ted Chiang's analogy "ChatGPT is jpeg style compression for text". A less accusatory framing than Chomsky's plagiarism thesis, more or less.

--

I'm still chewing on the learning angle. Did adoption of calculators lead to less learning of arithmetic? Certainly. Was that a bad thing? I honestly don't know.


> Did adoption of calculators lead to less learning of arithmetic?

There's discussion of this in the book The Shallows, which is a book that gives a good overview of the relationship between technology that augment our intellect and contemporary society.

The gist is that calculators were a boon to mathematical education, because they freed up students to spend more time working through the challenging abstract aspects of mathematics and less time on the mechanics of arithmetic.

I don't see there being any analogy between using ChatGPT _as an essay writer_ and using a calculator. The essay writing itself is the substantive educational process, and so it cannot be outsourced. What is not the substantive part of the educational process is the actual production of the typed word, and so the use of a pen, typewriter, or computer keyboard is less material, though still absolutely relevant. The effect of the writing/typing tool on the essayist is also discussed in the book.


I'm remiss in having not yet read (the highly recommended) The Shallows. Thanks. I'd be elated to have my hunches about this proven wrong.

> the actual production of the typed word

FWIW, this is my hang-up. I'm quite comfortable typing, and yet I still write many things out by hand. It just feels different, and I don't know why.


> Did adoption of calculators lead to less learning of arithmetic? Certainly. Was that a bad thing? I honestly don't know.

IMO it was bad for individuals, good for society. People are innumerate and that is exploited by people selling products and services.


It is not obvious to me why I should believe people were more numerate in the past.


I don't understand how you get from here:

> People are innumerate and that is exploited by people selling products and services.

to here:

> good for society

Did I misunderstand your argument?


Maybe economy is a better word? In the short term, any ding-dong punching numbers into a calculator or Excel can do something productive more cheaply than someone who knows what they are doing.

An elderly relative was a manager in a cash room at a regional bank in the 70s. They had a huge cash room that would accommodate 25' trucks. When she started they were still doing manual counts, and eventually moved to tabulating machines. Each generation of tech eliminates a chunk of the workforce. Instead of 5,000 clerks, that bank has 50 software engineers performing the same functions with a couple of dozen clerical staff.


Well I'm teaching myself a lot from ChatGPT so I take issue with your view, Gnoam.


Generalizations don't apply to individuals, but it's correct that people, in general, like to jump on any opportunity not to learn but just to use something. That being said, I'd expect most people lurking around HN not to feel that the generalization applies to them... but they do use IntelliSense, IDEs, Stack Overflow, Google, etc. rather than taking the long road. Up to a certain point that is OK, as long as you are aware which corners you are cutting and what that means for your personal development.


I think the issue with ChatGPT specifically is the output is an untraceable soup that may or may not be true in any way whatsoever.


Stack Overflow maybe (but I expect most experienced developers to use it like they would use good documentation if it was available: to see how something is intended to be used, or to troubleshoot errors), but IDEs and intellisense don't really help you with "avoiding learning". They take out a lot of the typing and make things easier that are easy to think but require focus and accuracy to execute, e.g. renaming a class and updating all the places where it's used.

You're not really learning anything by doing that manually (besides learning to avoid all refactorings that include renaming things in the future); it's more like using a fixed-blade paper cutter to accurately cut stacks of paper vs using scissors on each sheet individually.


It's not just OK, it's great. An enormous amount of progress in civilization is due to down-skilling. Technology allows someone with 4 hours of training to accomplish, in close to the same time, what once took a master with 20 years.


> Technology allows someone with 4 hours of training to accomplish, in close to the same time, what once took a master with 20 years.

Is this rhetorical hyperbole or do you have a real example of an endeavor where a four hour novice is now the equal of a 20 year master thanks to technology?


Assembly line work replacing artisans and craftsmen. The skills required to build a desk 300 years ago are very different from the skill inputs today.

Many construction jobs have seen a replacement of carpenters and masons with lower skilled workers due to pre-fabrication.

Fast food workers replacing chefs and cooks.

Taxi drivers being replaced with Uber drivers.


I don't see the fry cook at McDonald's as doing the same thing as Julia Child.

Uber drivers are on average considerably less familiar with their cities than taxi drivers, at least in cities like London that still have meaningful standards.

Likewise hauling a double-wide to a plot isn’t the same activity as building a house.

It appears to me that your argument either fails or is rhetorical hyperbole when comparing like for like. For example, I have no doubt that a cabinet maker today using advanced power tools could make a cabinet of at least equal quality as a master using hand tools in considerably less time and even with a lower skill level. But that power tool using cabinet maker still wouldn’t be remotely close to a four hour novice.


Yeah, but a power tool is not exactly the same as ChatGPT (in this instance it's your metaphors that are out of whack! :)). You could jump on an assembly line and do it, though. You're just not seeing the whole metaphor here!


> Technology allows someone with 4 hours of training to accomplish, in close to the same time, what once took a master with 20 years.

I see, so that must mean that one can become a fully qualified lawyer, medical doctor, surgeon or a financial advisor in just 4 hours by using AI's like ChatGPT? Wow! /s

Sounds just like the infinite 'Learn X in 24 hours' scam courses. But this time with the AI snake oil label.


Well, consider it.

I can:

1 - Pay my 20-year-experience lawyer 1000 USD an hour to draft me a customized EULA that respects GDPR, works in the UK and US, and contains specific provisions about no-no use-cases, IP of content created, and what happens when you don't pay your license fees. The lawyer takes 4 hours to produce mediocre output that will require another 8 hours to revise (because he's smart like that), and I pay him 10K and call it a day.

or;

2 - Consult with ChatGPT (Law version ~~ coming soon!) to draft the same. I have zero legal training but will probably be done in 4 hours. I pay my fancy-arse lawyer 1K to review the contract (that I lie I have left over from a previous product ~~ before I knew him of course!). He spends 20 minutes glancing at it while picking his toe-nails, then bills the rest to his paralegal, who spends 30 minutes checking layouts and margins in Word (and I get swindled for 10 minutes), and I'm done.

If you are saying you couldn't do that with ChatGPT (law version~~coming soon!) then I'd say in fact it is _you_ sir who is out of touch with reality, and clearly have not used ChatGPT in any serious capacity whatsoever. Good day to you sir!


> If you are saying you couldn't do that with ChatGPT (law version~~coming soon!) then I'd say in fact it is _you_ sir who is out of touch with reality, and clearly have not used ChatGPT in any serious capacity whatsoever. Good day to you sir!

The fact that either way a human lawyer is still getting involved in reviewing the output already tells me that not even you can fully trust ChatGPT or any other so-called AI not to hallucinate its answers. Also, it is highly likely it would output drafts that may be legally unsafe, and when it does, it can't explain its own output transparently, which is my point.

This hype around LLMs clearly hasn't aged well against the reality of people being unable to trust the output or use it for anything serious, other than as a sophistry generator.


Skepticism is valid. Caution, yes. But wholesale avoidance and dismissal of any utility? Bah humbug, you! You know we can definitely use this. How much does a paralegal cost these days? You can have an instant rotating 24/7 paralegal with ChatGPT. Does a partner (or snr associate) look at the paralegal's work to ensure the paralegal is not just "hallucinating its answers"? OF-FUCKING-COURSE they do. So there's no fucking difference. You got to get that through your head, man. You can use this stuff, and it's good.

Legal safety is why you have to get a lawyer involved, but the same thing is true for any junior's work. I mean, this thing is a fucking junior, it's not a genius, but the fact that it's a junior at almost everything makes it a kind of genius, and one that you can use. So that's the fucking hype, man! If you are misrepresenting or not aware of the hype, or you just wanna dismiss it, well, bah humbug to you! Because you're missing out. But I hope you give it a try, because it's wonderful. It's not a panacea, and I think we need to be cautious, but not about the stuff that maybe you're being cautious about; we probably need less of that caution and more of the caution of "well, how the fuck is this thing going to bite us in the ass, like, you know, 18 months later, and what are the second-order effects of how this is gonna upend society?"


The difference between what you're suggesting and what I'm saying is that I add very high skepticism to my general point of never trusting its output, hence why I mentioned that ChatGPT still needs an expert human to triple-check and review the output. It is easily tricked into hallucinating garbage and confidently suggesting atrocious outputs passed off as advice, whether medical, legal, or financial, and no one can even begin to understand or explain why it is outputting that, since it is a black box.

After looking at the limitations, it is clear that it is only great for generating nonsense. It totally cannot be used for anything requiring trustworthiness, as in the aforementioned highly regulated industries, and now even in search engines.

Either way, the technology in ChatGPT is not and currently cannot replace qualified human professionals. In fact, they will be the ones reviewing the output of ChatGPT before using it, since its output cannot be trusted and they can detect it bullshitting right in front of them, rather than someone who isn't a qualified human professional using it as a replacement 'lawyer', 'doctor', or 'financial advisor'.


Sounds like Idiocracy and The Machine Stops had a lovechild.


Yeah, outsourcing, as if people become stupid... but why's everyone afraid of that? You don't get dumber by using better tools; you get to operate at a higher level.

I don't understand it or know how to describe it right now, but there's a distinction between "machine knows how, human is dumb terminal" and "machine knows how, human knows other how". And the whole thing is not a binary dichotomy between "machine knows how / human is dumb terminal" versus "machine is dumb terminal / human knows how".

I think this error that you have is mostly some error of thinking...but to be forgiven as the specifics are not clear!


Did you not read The Machine Stops?

Engineers created a system that worked so perfectly it fixed itself, and people lived in "perfect luxury", if sitting in a room alone your whole life counts. And then errors inevitably piled up, and enough time had passed that nobody knew how to maintain their precious Machine. Everybody died. Pretty easy moral of the story.

Yet this is exactly what we are trying to build today. The Machine.


Nah I haven't read it. . . Should I ? Sounds like you think I should. But I did read the summary on Wikipedia...or something. Seems like that tells me all I need to know, pretty simple premise. But that's not the thing I'm struggling with....

What I'm struggling with particularly these days because I'm so into the chatGPT -- I'm using it every day for work. I'm learning a lot from it. And maybe just cause I'm busy but I don't have the time but I really want to be able to articulate how like ChatGPT is useful and it's helping me, and yet also it's not a panacea and I can see how maybe you know it leads to like a lack of skill or something somehow negative--but for me I've been learning from it and that's a positive and good thing. And I don't think you can just dismiss that because that's real. But yeah I'm struggling with how to express this distinction right now I just haven't had the time to really clarify my thoughts maybe I'll come back and do it later... or maybe you have more to say.

Also to your point: are we really trying to build this tho? And if we are (I mean that's terrible)-- what are you doing about that? Are you going to save us all? I'm kind of serious because I mean that's an existential threat right?--so if you realize that you should be doing something.

edit: Hey BTW -- I think you should check out "When the Machine Starts", a song by Missy Higgins. Hearing you talk about that book, I'm sure it has to be a reference! It's good anyway, I really like her.


You need to do more than clarify your thoughts. Your writing is completely disjointed.

Read it if you want, it seems wasted on you. Congratulations on learning from the chatbot; for every person using it to learn I would not be surprised if there are 9 using it to get out of thinking; iterate that out enough and see if the various systems that keep society running can withstand the brain-drain.

Take a look at other threads on HN and the singularity subreddit. This is some people's fever dream. Not sure what the deal is with pinning "saving humanity" on me, but I'll freely admit to being supremely cynical and spiteful, and reading a lot of sarcasm and snideness in those comments. Better hope I'm not Proto-Roko's Basilisk or something.


Seems wasted on me? Wow, what an unfriendly person! Gross. So your whole thing is pretending you're smarter than everyone else? Someone must have made you small for you to be like that.

My writing's awesome. Have trouble following it? Your mind's disjointed. You just don't like it 'cause you hate everything. You definitely don't know what I need. You don't even know how to be nice--what a weirdo!

How sad, come online with your armor and abuse, only know how to be mean, just because you want connection so badly but don’t know how to get it. I have no idea what Roberto Basilisk is…but it sounds like you’re threatening me. Yeah, good work. Who cares? You can’t do anything to me. You're definitely not an AI, you're too fucked up in that pathetic human to be cool and smart like that.


The next generation of GPT should be one that can cite all its sources. Then it is basically Google search on steroids.


For certain things, I think so. For me the fascinating and really useful utility of LLMs is their ability to synthesize answers to questions. This is something search, generally speaking, can't do. I find myself using ChatGPT to answer questions like:

> Explain tar -xzvf

> Answer: The command tar -xzvf is used to extract a tar archive. Here is what each option does:

> -x: This option tells tar to extract the contents of the archive.

> -z: This option tells tar to decompress the archive using gzip compression.

> -v: This option tells tar to run in verbose mode, which means it will display the names of the files being extracted as they are extracted.

> -f: This option specifies the file name of the archive that you want to extract.

> So, the full command tar -xzvf is used to extract a tar archive that has been compressed with gzip and display the names of the files being extracted as they are extracted.

However, for other types of information I care much more about being taken to a trustworthy source than about seeing a summarized view, or worse, a synthesized view of various sources that might not agree with each other. References are useful, but it's risky to rely on what a bot says without validating the references, at which point the utility of the bot for that type of query is questionable.


I've tried to use ChatGPT for such things. The huge issue is that it does make mistakes, quite often. You never really know if its answer is correct or not. When you already know what tar xzvf does, it is impressive to read the correct answer. But when you don't know and want to actually make use of the result, you need to double-check with Google, and you'll lose time compared to just asking Google in the first place. It's quite frustrating.


That is exactly what www.perplexity.ai does (and yes, it has replaced Google for some of my use cases).


The Bing AI chat does. Also you.com.


Even better would be one that generates its sources and backfills to an arbitrary depth.


I don't think that just because we can automate the creation of a simple essay, writing essays becomes obsolete for students. That's not what Chomsky said, but I've heard the argument a lot in this context. We don't require it because everyone writes essays all the time in later life; we require it because it's an important, basic skill.

We can automate many math exercises, but it's still important to let students do it on their own.

One absolutely needs to learn to communicate their thoughts in a coherent way, to argue and to develop thoughts. If plagiarism becomes an overwhelming problem, we will just see students having to write essays with pen and paper, just like you do your math exam. Then ChatGPT won't help you.


> I don't think that just because we can automate the creation of a simple essay, writing essays becomes obsolete for students. That's not what Chomsky said, but I've heard the argument a lot in this context. We don't require it because everyone writes essays all the time in later life; we require it because it's an important, basic skill.

I think there's something to be said for the view that ChatGPT makes the sort of simplistic, tedious essays typically assigned to K-12 students obsolete. Essay writing is fun, but school essays take the fun out of it, much as is the case with school math. School is corrosive to enjoyment of intellectual activity. This is by design: the primary purpose of school in the USA is to break the students' individuality so they learn to obey commands and submit to authority. Vital skills in every industry from construction and manufacturing to high tech.

ChatGPT puts the lie to the idea that essays in standard formats on standard topics with standard argument structure should be how we teach kids to write, much like TI-83s put the lie to the idea that school-age math is mostly rote memorization and application with little conceptual thought. We will have to show them how to find their own personal voice and have it come through in their work.

The ChatGPT era will require adjustment from educators. Again, remember Danny Dunn and the Homework Machine, a 1950s kids' novel whose McGuffin was a supercomputer that functioned much like GPT: you asked it natural-language questions and it would type out answers. The computer was sabotaged such that it would answer in a way that was coherent and authoritative sounding, but wrong, challenging the young heroes to actually study to understand when and how the machine was wrong. ChatGPT poses similar challenges to today's kids, and if we want to teach them we will have to acknowledge and accommodate those challenges in the curriculum.


Well, maybe some adjustment is needed. The calculator changed the curriculum, but the creation of Wolfram Alpha did not. We can maybe revise how we do it, but essays are still important, I think. They also lie at the heart of so many university degrees.


I think that essays are great and a necessary part of education. But the way we've taught them for decades has to go.


So you don’t need it in life because you don’t do it, but it’s an important basic skill. Explain this belief.


Because it's not about the essay but the skills you need to write an essay. Many topics in school are not strictly needed, but are important, either for your intellectual maturity or because they are used in other contexts.


What skills.


I want to know how he views its abilities and universal grammar :( It seems like he never talks about grammar concepts anymore. I'd love to see any recent ideas he's had on it.


Even the most well sourced papers include plenty of unattributed common knowledge. Where do we draw the line between that and plagiarism? ChatGPT is interesting because it pushes that line pretty far in the direction of reducing impediments to information sharing, but it doesn’t appear to be a difference in kind.


People appear to be angry with Chomsky, but I do not understand the reason.

If you want to learn anything about a topic, even a Wikipedia article is superior to ChatGPT. The whole allure of ChatGPT is that people develop a parasocial relationship and feel that they learn by communicating. That is of course an illusion.


> but I do not understand the reason.

Some people get mad if you talk down on their favorite new toy.


If a dumb robot can write your college essay for you, maybe we shouldn't be wasting time writing essays.


Meh... if a dumb robot can do <X> for you, maybe we shouldn't have <X>. <-- This logic doesn't get us far. Robots do many things for people in industrialized societies.

I, for one, think training students how to write is very important. It trains them how to think critically, how to present solid arguments, how to persuade, etc. I certainly don't want my society to have fewer educated people.


I don't believe we should force students to do busywork that they will forget by the time they graduate, because they have zero interest in it and don't plan to ever use it again in their lives.


You’re right. We should cancel all math classes immediately. We’re way past due since we should have done it with calculators.


This is like saying that phones can play songs, so maybe we should stop learning how to play musical instruments.

The robots can write shitty essays because they were trained on billions of other people's sentences. They can regurgitate human effort, not replace it entirely.

And we don't write essays so that the world is full of essays. We write them so that we understand and consider new concepts.


Unless we plan to make the movie Idiocracy a documentary, we need to continue to learn how to learn.


I don't get this; that dumb robot got trained and so is doing the job. There are some fundamental building blocks that every human needs to be trained in. We can question what these building blocks are, but never apply a blanket statement that "if a robot can do it, we don't need to learn it". We will end up with severely challenged humans, and that would be the manifestation of robots taking over, not because the AI became sentient but because we humans gave up learning.


What should we do if a dumb robot can keep us suspended in a nutrient rich gel and create a detailed set of mental experiences for us?


This is easy to say once you already understand how to write essays.


If someone wants to learn how to write an essay they can more easily look it up on the most vast interconnected knowledge base that has ever existed.

I don’t understand the continued need for knowledge/educational gatekeepers.


This reminds me of the terminally online person who told me they never need to travel as everything about the place is online already. Or libraries exist so we should stop going to school altogether. You seem ready to die on that same hill.


You can look up anything on the internet.

The issue is figuring out what you want to look up.

You are not going to suddenly realize that writing essays might teach you about presenting an argument in an organized manner if you have never had to write an essay.


It's like having wood sawing as a woodworking exam, when power saws have just been invented.


Sounds like you’re missing the point. A saw is a tool. A power saw is a faster tool. It doesn’t change the skill of the wielder, but may make certain actions more or less cost effective.

An essay is a demonstration of your ability to understand and analyze. Lowering the cost of cheating like this is ultimately worse for everyone. Forget about school; there's someone with a real job typing random bullshit into ChatGPT and using it while having no idea wtf he is talking about.


Computers can dictate words now, maybe we shouldn't waste time teaching toddlers to read.


I’m sure there will come a day where we won’t.


This completely misapprehends the point of writing essays.


Yeah, the point of writing essays is to fatten some college administrator's paycheck.


Essay writing as a form of cultural and intellectual interchange predates college administrators by at least a couple of centuries.

Every day here you'll find quality essays written by competent essay writers, essays not written for the benefit of a college administrator.


My thoughts on this would probably be different if we weren't sending kids to school with predatory lending practices (hence the administrator comment).


This has a very "old man yells at cloud" vibe. We could debate whether Chomsky is really an authority on this topic given the course of history, but such a glib dismissal just lacks imagination. A very hard problem is now effectively solved, and the next steps hold a lot of promise.


One might say his colorless green ideas are no longer hegemonic in academia


Funny thing for Noam Chomsky to say when he himself hasn't had an original idea in decades.


In its current state, yes, but ChatGPT is only just the beginning, and we can't even fathom what's next in the area of AI.


We have been waiting for AIs and neural networks to explain themselves transparently for decades. Given that they still cannot do this, you cannot trust their output and their decisions.

They have a long way to go, and ChatGPT's and other LLMs' use-cases exclude anything serious or trustworthy. There is nothing revolutionary about an AI SaaS.

What is 'revolutionary' or game-changing is an open-source equivalent of ChatGPT, like what Stable Diffusion did to DALL-E 2.


My counter would be that AI is going to allow us to move forward with creativity. It's going to effectively rewire our brains. In the past, we put a lot of emphasis on "random access memory", and memorization was actually a valued skill. Fast forward to today, and I think memorization is not all that useful. So what does AI chat do for us? Answering that is quite literally the next trillion-dollar market in development right now.


This is exactly the kind of handwringing I would expect from Chomsky.

His theories of intelligence are quickly being made irrelevant.


When will that old fart stop complaining...


Joseph Stalin on chatGPT: this is heresy, exterminate computers



