The article sounds like a cliché. The progression was always happening; nothing was sudden. It is just like the continuous movement of tectonic plates and the earthquakes it produces: when the tension between the plates reaches a threshold, a rupture happens. But it is not the rupture that causes the tectonic movement; it is the opposite.
Things like electricity, computers, the internet, smartphones, and AI are the earthquakes caused by the tectonic movement toward the dominance of the machine.
The goal of human progress was to make everything easier. Tools arose to augment human abilities, both physical and mental, so that humans could free themselves from the hard work of physical labor and of thinking.
We go to the gym and play sports because the body needs some fake activity to fool it into believing that we still need all that muscle strength. We might end up with gym and sports for the mind too, to give it some fake activity.
Remember, the goal is to preserve ourselves as physical beings while not really doing any hard work.
I know it might be an offensive way to put it, but I honestly believe if AI ends up making people no longer need to use their brains as much, it's a great thing.
Think about it: would we rather live in a world where heavy labor is a necessity to make a living, or a world where we go to the gym to maintain our physique?
If mental labor isn't (as) necessary and people just play Scrabble or build weird Rube Goldberg machines in Minecraft to keep their minds somewhat fit, is this future really that bleak?
It is quite bleak to me. Thinking has always been an important part of what makes us human, much more so than physical labor.
Craftsmanship and tool usage are physical activities that also define us as a species, and you will find no shortage of people lamenting our loss of those skills, too. Both those and thinking are categorically different from water carrying, ditch digging, and other basic heavy labor.
You don't have to stop thinking. Just switch to thinking about other stuff that is more interesting to you and potentially more complicated.
AI will never completely replace human thinking; it will just ease the annoying/boring parts.
The same arguments were made about the invention of the calculator, but we didn't stop thinking about math and calculations; we just make the calculator do the boring computation part (and now you can perform much bigger and more complex operations without having to spend loads of time and paper on them).
> I know it might be an offensive way to put it, but I honestly believe if AI ends up making people no longer need to use their brains as much, it's a great thing.
I'd say it is a constant fight against laziness. Sure, it is convenient to drive everywhere with a car, but at some point you might understand that it makes more sense to walk somewhere once in a while or regularly. Sure, escalators are convenient, but better to take the stairs so you don't need to go to the gym, and you save some money. If you ask me, we should all do more physical labor, and the same goes for mental labor. If we give that up as well, the future really is bleak, to answer your question.
Our society is laid out for cars, not walking or biking.
My wife and I live about a mile from some favorite places to eat, but the walk home in the dark is dicey with a high speed limit and not a lot of lights, doubly so biking (she is not a strong pedaler). We end up driving even in the summer.
Human settlements usually do not exceed a couple of miles in diameter, suiting a pedestrian lifestyle. A person can only cover distances of a mile or two in daily activity. Modern cities are designed for a species called cars. Humans walking in these cities would be so alien and awkward.
Modern cities tend to have means for effective transport called public transit. I have lived in two big city centers and now live in one of their suburbs and I have never felt the need to own a car, even though I do have a driving license. Moving around is as easy as hopping onto a tram, metro, train or bus. In special cases there's always a possibility to call a ride.
It's still more car-centric than it should be here - it's absurd how much space we waste on all these parked cars around - but fortunately it's been improving over time.
In what area of the world do you live? I've been to places like that with highways and fences, and it's terrible. Fortunately it's not like that in most places in Europe.
The analogy here is probably physical exercise. Lack of exertion sounds great until your body falls apart and destroys itself without frequent exercise.
It is paramount to not ignore the state of the world. Poverty, wars, inequality in the distribution of resources, accelerated natural disasters, political instability… Those aren’t going to be solved by a machine thoughtlessly regurgitating words from random text sources.
Even if a world where people don’t use their brains were desirable (that’s a humungous if), the present is definitely not the time to start. If anything, we’re in dire need of the exact opposite: people using their brains to not be conned by all the bullshit being constantly streamed into our eyes and ears.
And in your world, what happens when a natural disaster which wasn’t predicted takes out the AI and no one knows how to fix it? Or when the AI is blatantly and dangerously wrong but no one questions it?
And I responded to your hypothetical in detail and followed up with a request for clarification. That’s how discussions progress and it’s what forums such as HN are designed for.
You were clearly advocating for a particular future (“honestly believe (…) it’s a great thing”), so hiding behind it being a hypothetical feels disingenuous. Of course it’s a hypothetical, because it obviously does not describe the current state of the world. That doesn’t mean the idea is beyond criticism or commentary. On the contrary, that’s exactly what hypotheticals are for.
The study referenced shows a sudden and dramatic drop in brain activity of people using AI to write an essay when compared to people writing an essay themselves.
Writing teaches us to organize our thinking. Failing to learn to organize our thinking makes us dumb.
When questioned about the contents of the essay, those who used ChatGPT to assist were not able to answer any questions about the paper, as though it had never happened.
Imagine waking up one day and realizing that your senior year of college was over and you literally could not remember a single thing you learned that year, as though it never happened.
That’s the idiocratic shift we’re seeing. AI is literally causing us to turn off our brains. A whole generation will learn nothing in school.
Ask an educator how it’s going. I’ve heard from half a dozen. The consensus is “it’s a shit show”
It's always the people who have no clue what they're talking about, whether in physiology or philosophy, who insist on speaking so confidently on matters of physical activity and the meaning of life. No, your reductive argument about the purpose of something like the gym should not be "remembered". People partake in labor and even seek hardships for many individual reasons, most of which do not involve "faking" anything at all. Keep inhaling the data-ist technobro Kool-Aid though, it's awesome to be always online.
Are we living in a golden age of stupidity? That depends on what we mean by "stupidity." If we mean:
Information overload, not matched with critical thinking; short attention spans, driven by algorithmic content; decline in deep reading, writing, and manual creativity
…then yes. There’s a legitimate case that we’re in a period of widespread mental passivity rather than active curiosity. Of course this isn’t a new phenomenon. Every generation feels the next is losing touch with something essential. What is different today is the scale and speed of digital influence.
> Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
Key point. The top use case for "Artificial Intelligence" is lack of natural intelligence.
Not just anyone, people will "naturally" draw the line somewhere, and it will be in a number of different places for different people.
As the article emphasizes, "every technological advance seems to make it harder to work, remember, think and function independently …"
This is exactly what it takes for there to be a positive feedback mechanism for AI to accelerate. Almost like people having the goal lines moved for them, which it looks like AI has already done in spite of its notorious shortcomings.
That little quote doesn't only apply to AI, think about how it was as slide rule engineering faded into obscurity. Don't ask me how I know, that would be an even worse wall of text ;)
At one time, all bridges, vehicles, aircraft, and things like that were designed by people who had prevailed because their mindset was aligned with all the others who excelled at doing almost all the necessary math using only that one common tool, which was common among them because it was a best practice across so many cultures and a leap above what they were using before. It wasn't easy, and it required a certain mindset which made engineering possible with such a primitive tool: two pieces of wood.
The future's come a long way and nobody does this any more, so for the longest time there's been no need for engineers to even learn how to use a slide rule, let alone use one professionally. Things actually did get easier. Slide rules were no longer necessary for engineering, and from that point the type of brain that could do those kinds of projects using only a slide rule was no longer a requirement for who can become an engineer. This didn't make them stupid; engineering is still hard, naturally in many other ways.
But with that mindset that made it possible to accomplish so much with such primitive tools now largely absent, could that be why not much more is being accomplished with incredibly more advanced tools after so many decades?
I think you've hit the nail on the head, it's state-of-the-art.
Whatever the state-of-the-art at the time is.
> A modern fighter jet can fly literal circles around one that was designed with a slide rule.
Yes, but not so easy to outperform the ones designed by those who had adequate talent using a slide rule, once those guys got a hold of mainframes.
Not all of those people are completely gone yet, mostly retired if still living, but they've been with us as senior engineers ever since, just in dwindling numbers.
But there have been no new crops of that type of average engineer since the 1970s.
It might be beginning to show; things like the B-52 seem to have been impossible to replace ever since.
What are the odds of a dramatically different B-52 replacement, designed today, lasting 60 years into the future and still operating routinely? If any could even be made airborne by then.
If ChatGPT generated the text, participants weren't encoding it into memory through the cognitive processes normally involved in writing; they were essentially passive recipients of AI output. Isn't it a trivial finding then, that participants could barely recall a text they didn't write? Also, the study is small (54 participants), not peer-reviewed, and conflates two different issues: the cognitive effort during task completion versus memory retention afterwards.
Doesn't the decline of IQ rather correlate with smartphone ubiquity, particularly after 2010, and the steepest declines appear in 18-22 year-olds—the heaviest smartphone users? Multiple studies link smartphone addiction specifically to reduced cognitive abilities, not technology broadly.
I get your point, but I confess I have sometimes had to pause for some time to decide if I was holding a recyclable, a compostable or landfill — looking at the little pictures in fact, hoping I can find the thing I am holding.
Yeah, but otherwise, the whole MIT Media Lab thing is increasingly tasting a little bitter, not the glamorous, enviable place it seemed like in decades past.
Rather than looking for the next internet-connected wearable, for some reason, increasingly, I keep thinking about Bruce Dern's character in the film Silent Running.
It's much worse in South Korea; I think there were at least 5 different bins with different signs. Most things you bought had a label on them, and you could try to match the letters to what was on the bin. Except it wasn't perfectly matched up, and some bins' signs didn't match whatever was written on the package.
I eventually gave up and only ate to avoid having to deal with it.
This is one place where I think our friends the magic robots might actually be useful (though it's more CV than LLM stuff); people are really _amazingly_ bad at this, and will happily ignore a printed sign, and even quite low accuracy would probably be better than what happens now, which isn't that far off random.
Yes but…
As an example, in some cities the signs specifying whether parking is allowed can be impossible to decipher. It sometimes feels like an AI would be needed to tell you, “Can I park this particular vehicle here right now, and for how long?”
Not that I’d trust an AI to get it right - but people already don’t.
> The fundamental issue, Kosmyna says, is that as soon as a technology becomes available that makes our lives easier, we’re evolutionarily primed to use it.
> “Our brains love shortcuts, it’s in our nature. But your brain needs friction to learn. It needs to have a challenge.”
"Your brain needs friction to learn" is a good way of summarizing the fundamental problem.
Yes, shortcuts can be great, but they also obviously stop you from actually learning. The question then becomes: Is that a bad thing? Or is the net result positive?
My guess is that "it depends" on the task and on what learning is missed. But losing things like critical thinking and the ability to learn and concentrate could be catastrophic for society and bad for individuals. And maybe we're already seeing the problems this creates in society.
This sounds nice, but what I've run into is that the model fails to write changes if the code has changed under it. A better tool, where it takes a snapshot at the start of each non-interactive segment, and then resolves merge conflicts with my manual changes automatically, would make this much easier.
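A minimal sketch of the snapshot idea, assuming a tool that keeps a copy of the file from the start of each non-interactive segment and uses it as a 3-way merge base (all function names here are hypothetical, not any real tool's API):

```python
import difflib

def changed_ranges(base, other):
    # Half-open (start, end) line ranges of `base` that differ from `other`.
    sm = difflib.SequenceMatcher(None, base, other)
    return [(i1, i2) for tag, i1, i2, _, _ in sm.get_opcodes() if tag != "equal"]

def edits_conflict(snapshot, manual, model):
    # The snapshot taken at the start of the segment serves as the merge base.
    # If the manual edits and the model's edits touch disjoint line ranges,
    # both can be applied automatically; otherwise a real merge is needed.
    ours = changed_ranges(snapshot, manual)
    theirs = changed_ranges(snapshot, model)
    return any(a1 < b2 and b1 < a2 for a1, a2 in ours for b1, b2 in theirs)
```

Non-conflicting cases could then be handed to an ordinary 3-way merge (e.g. `git merge-file`), with only the overlapping ranges surfaced to the user.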
The worst part is when you find out your vibe coded stuff didn't actually work properly in production and you introduced a bug while being lazy. It's really easy to do.
That is both a sweeping generalization and plainly wrong. The "much earlier" days of programming had blazing fast compilers, like Turbo Pascal. The "earlier" days had C compilers that were plenty fast. Only languages like C++ had this kind of problem.
Worst offenders like Rust are "today", not "earlier".
Certainly, but that depended on your own choice of technologies. This condition is not time-dependent; you can inflict the exact same condition on yourself by choosing Rust.
Compile times are faster now for C and C++, right? Some of it is due to further compiler optimizations, but mostly it's due to higher CPU power.
Still, you seem to be arguing that the choice should be Pascal instead of Rust. There is a reason why we choose these new languages: language features. Compile time is a lesser consideration.
No, I'm arguing that having more "blocking" time is not a function of time (the early days of programming) but a self-inflicted choice. Yes, choosing C in the '80s meant inflicting "blocking" time on yourself, but there was the choice of Turbo Pascal, or Forth, or whatever. Plenty of great choices that ran really fast on those old machines.
Do I mean that one should choose Pascal today? No, compiling C code today is really fast and has practically no "blocking" time. But you can still inflict "blocking" time on yourself if you want, with languages like Rust.
In C we had to resort to tricks like precompiled headers to get any sort of sensible compilation time, and it still took a minute for a decent-sized library.
C++ was/is even worse, what with the generation of all the templated code and then through-the-roof link times for the linker to sort out all the duplicate template instantiations (OK, Solaris had a different approach, but I guess that's a nitpick).
I have not worked on any large project in Pascal, but friends worked with Delphi and I remember them complaining how slow it was.
I used to work on a project that could take 30 mins+ to compile the entire project.
Nearly every time, your problems were detected _early_ in the process. Because build systems exist, builds don't take 30 minutes on average: they focus on what's changed, and you see problems instantly.
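That "focus on what's changed" logic is mostly timestamp comparison; a toy sketch of the rule make-style tools apply (the function name is hypothetical):

```python
import os

def needs_rebuild(target, sources):
    # Rebuild if the target is missing or any source is newer than it --
    # the core rule behind make-style incremental builds.
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

Applied over a dependency graph, this is why a one-file change recompiles in seconds rather than 30 minutes.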
It's _WAY_ more efficient for human attentional flow than waiting for AI to reason about some change while I tap my fingers.
If you need an attention sink, try chess! Pick a time control if it's over 2 minutes of waiting, and do puzzles if it's under. I find that there's not much of a context switch when I get back to work.
I'm having the same problem. LLMs really take me out of the task mentally. It feels like studying as a kid. I need to really make a concerted effort; the task is no longer engaging on its own.
As someone with focus problems, I find it more productive to have a conversation with ChatGPT (or Claude) about code. And avoid letting it make major changes. And hand code a lot with Copilot.
It depends. For a task I know well the LLM is often much worse. If I'm being asked to do something brand new, the LLM does speed me up quite a bit and let me build something I might have gotten stuck on otherwise. The problem is that although I did "build the thing," it's not clear I really gained any meaningful skills. It feels analogous to watching a documentary vs. reading a book. You learned _something_, but it's honestly pretty superficial.
Because when you simply “read” you’re not necessarily learning. The illusion of knowledge is real, where you nod in agreement that you understood something but when it’s your turn to do it, you have no idea how to. You need to do something yourself to actually learn it, and it involves struggling, frustration, eventually insights etc.
The slowness of AI responses provides some opportunity to work on two tasks at once: investigating a bug, thinking through the implementation of something larger, or editing code that experience tells you would take just as much or more typing to have the LLM do.
It's less cool than having a future robot do it for you while you relax, but if you enjoy programming it brings some of the joy back.
They're not that slow! You want me to believe we've gone from programmers being so fragile that disturbing their 'flow state' will lose them hours of productivity, to programmers being the ultimate multitaskers who can think and code while their LLM takes 10 seconds to respond? /s
Until recently, every technological advancement replaced manual work, as in agriculture, transportation, and industry. Even the tiniest car amenity, like electric windows, hydraulic brakes, or touch-screen entertainment, aims to replace a limb movement. With AI, it is the first time technology directly offloads cognitive tasks, leading inevitably to mental atrophy. The hopeful scenario is to repurpose the brain for new activities rather than rotting, just as replacing physical labor creates the opportunity for sports rather than getting fat.
The Guardian raises the tabloid press to a new level of bad taste, posing a serious-faced question about stupidity and immediately answering it by having its site drop a blocking, full-screen, emotionally loaded request for my money in the sloppiest way possible. I will assume the content it blocked was AI-generated bait and will find a better article to read.
The article is along the lines of students not being as good because they slack off and use ChatGPT, but might people working together with ChatGPT produce smarter results than people did before?
It's like how calculators didn't bring a golden age of bad maths. Instead, people mostly stopped learning long division and used the calculator, but the end result was OK.
The title itself. Without reading the article, I can sense the "we are living in a stupid age" arrogant trope characteristic of the "winning" social classes.
That's an interesting assumption, thank you for clarifying.
I do know a few people who walk through the world with the "everyone else is an idiot" mindset. They're a total pain in the ass, and none of them are particularly successful or particularly happy, irrespective of their (very different) notional social classes.
For my part, I look at a title like that and immediately think of the number of hours I've spent doomscrolling, trying to find value in cryptocurrencies, thinking about what impact AI has been having on education, trying to figure out where my life took its various difficult turns...
And I see it more as a criticism of the systems we've built (primarily big tech, but also the industrial complex in general) to create a world where the answer might be yes.