A call center worker’s battle with A.I. (nytimes.com)
121 points by elsewhen on July 23, 2023 | 151 comments




Every facet of the article is designed to wring sympathy, but it's worth asking the real question: what is this advocating for? What kind of change (or stasis) would the author actually prefer to see?

The existence of call-center work, which is intrinsically dehumanizing and unrelentingly laborious, is considered a failure of our society.

The replacement or automation of that work, which creates displacement and impacts the livelihood of people who cannot adequately retrain, is considered a failure of our society.

What, exactly, would constitute success?


> The existence of call-center work, which is intrinsically dehumanizing and unrelentingly laborious, is considered a failure of our society

The people in the article find it one of the best jobs available and a path to a middle class lifestyle for their area.


I've worked at a call center, and it fucking sucks. Similar to retail, you are the one who has to deal with the anger and frustration of customers caused by poor company policies. What makes it worse is that you're a faceless entity on the other side of a magical voice box rather than face to face, in public, where people are more likely to watch their actions and how the public responds to them. I'd definitely agree that it can be very dehumanizing. But there are plenty of people who enjoy the job too. For those people, generally, I've seen that it's about the money and the fact that they get to sit in an office with air conditioning.

Both things can be true. Call it Stockholm Syndrome to the economy or whatever, but things can be dehumanizing while people simultaneously feel lucky to have said work. It could be worse, after all.

The bigger question is if we are working on parallel paths to give these displaced workers equal or better opportunities to support themselves. Be that new jobs, avenues for them to educate themselves, basic income, or whatever combination of things raises the economic floor.

I do also fear that people will call these people dumb for choosing a career that could "so obviously be automated", but prior to a year ago their biggest fear would have been outsourcing to India or Brazil. I wouldn't expect this type of person to even believe it was a realistic possibility until the last 6 months, once this stuff started to hit the mainstream. I'll make a prediction that we'll see such takes in vogue within the next year or two. I also wouldn't be surprised if we see similar sentiment in the comments.


I've had a call center job and it sucked. It also paid twice as much as any other job I'd had to that point until I got a job in software.

I couldn't do it, but I can definitely see why it would have a high upside for some.


I would not be surprised if insurance companies that get claimants to call in deliberately put people through these voice-activated systems simply to listen in and see if their story is straight. Putting callers on hold while going through the details is another way for the insurance company to listen in. People are lulled into a sense of safety that they can't be heard when on hold.

I've listened in on conversations whilst someone is ringing my number; it's quite interesting what conversations go on at a company before the customer answers the phone!

YouTube reminds me that voice recognition, and thus auto-generated subtitles, is still not that good!


I honestly don't understand why people demean customer service operators. It seems like stressful work, but I have countless times been saved by operators. They are a lifeline. AI tools are about as useful as a Google search, and perhaps just a little more, and a lot of customer service people who just follow a script are not much more helpful. But clearly some operators are a godsend and cannot be replaced by AI at this time.


> which is intrinsically dehumanizing and unrelentingly laborious

These jobs are dehumanising because of how some companies treat people. AI won't change the fact that some enjoy dehumanising others. A change in attitude, forced by law or social change, would fix that. Now that would be a success.


These jobs are not dehumanizing because of management. They are simply dehumanizing. If they were paid more, they would just be worth the dehumanization; getting hazard pay doesn't make a job less hazardous.

> A change in attitude, forced by law or social change

Exactly what would you like to force people to do? What's this "attitude," and by what process does it dehumanize people who work in call centers? Do call center workers have material problems, or is this a question of abstract mindset or aesthetics?


> These jobs are not dehumanizing because of management. They are simply dehumanizing.

Nonsense. Helping people with their problems isn't dehumanising. Following a script that your company has optimized to avoid helping the person is dehumanising.

> Exactly what would you like to force people to do? What's this "attitude," and by what process does it dehumanize people who work in call centers? Do call center workers have material problems, or is this a question of abstract mindset or aesthetics?

You could make them fiduciaries for the people calling in, or something slightly less extreme. Call center workers have problems that are superficially material, but addressing them on a directly material level would just be playing whack-a-mole; the unpleasantness of the position stems directly from the incentives that those who control them are under.


> These jobs are not dehumanizing because of management. They are simply dehumanizing.

There’s obviously lots of different types of call center jobs, but I wouldn’t say they are all categorically dehumanizing. I worked as a support agent for a while in my early twenties, and the core of the work was not dehumanizing. But being poorly paid, clocked to the second and not being given adequate equipment or training to do the work well was. I can imagine a version of that job that I would have stuck with for longer.


I find that rather strange; IMO a first-level support manager has a very dehumanizing job and gets paid less than top tech support staff. The lifers I've known also didn't want developer jobs, as project pressures can feel persistent, while a support engineer couldn't care less about your problems once they go home.


They are dehumanizing because of the relationship to the work. It is of no value to the worker. It is meaningless. And that is dehumanizing.

Digging holes in a random field for 8 hours a day for minimum wage to make someone else richer is rather dehumanizing. When it's digging holes for a fence in your yard, or for a community barn-raising -- the exact same work -- somehow it's less dehumanizing.


It's productive. You did something physical while improving a situation.


> A change in attitude, forced by law or social change, would fix that.

I'm not quite sure I follow. Aren't we talking about displacing these workers? How does that result in re-humanization? Can we assume they can easily obtain a job of better quality?


> A change in attitude, forced by law or social change, would fix that

Presented this way it seems like AI would be the more frictionless option!


> The existence of call-center work, which is intrinsically dehumanizing and unrelentingly laborious, is considered a failure of our society.

If the existence of these jobs is a failure of our society, what do you call taking these jobs away without any sufficient replacements that pay as well? There are plenty of places in the US where a call center job is the best job you can get without special skills.


It's easy to say someone else's job is a failure of society. Contact centers do pretty important work, and not all of them are run as soul-sucking, penny-pinching, toilet-paper-usage-measuring hellscapes. At some point work has to be done in most businesses. In many, the contact center is where the work is done.


We will not run out of jobs because there isn't a fixed number of jobs. There's a fixed number of workers.

We've been automating jobs for centuries and yet workforce participation has stayed in the 60-70% range. New jobs will always spring into existence to occupy the newly available workers.


There's an approximately fixed number of jobs in time periods meaningful to humans. My rent is not going to wait for new fields to blossom in the wake of "progress".


> My rent is not going to wait for new fields to blossom in the wake of "progress".

Don’t worry, you wouldn’t even be qualified for the new jobs anyway, unless ofc they’re low skilled in which case you might barely scrape by on rent.


If we didn't do "progress" we'd both still be farming in the dirt and washing our clothes by hand. It's kind of hypocritical to ask for it to stop as soon as it starts inconveniencing you.


The benefits from past progress do not imply all future progress will be similarly beneficial. Our culture takes it as axiomatic that more efficiency is good. But it's not clear to me that it is. The principal goal of society should be the betterment of the lives of people. Yes, efficiency has historically been a driver of widespread prosperity, but it's not obvious that there isn't a local maximum past which increased efficiency harms the average person. We may already be on the other side of the critical point.

How the benefits from increased efficiency are distributed matters to how much progress benefits average people. Historically, efficiency increases from technology were driven by innovation that brought a decrease in the costs of transactions. This saw an explosion in the space of viable economic activity, and with it new classes of jobs and a widespread growth in prosperity. But crucially, the need for human labor kept pace with the expansion of wealth creation. This largely avoided the creation of a new distribution problem. But this time is in fact different. The expanding impact of AI on our economy will create a serious distribution problem as wealth creation becomes more and more decoupled from human labor. It is extremely narrow-minded to ignore this problem. It is not something that will just work itself out.


> But this time is in fact different.

Said by people throughout history.


If we actually build machines that are as capable as human beings (meaning, real AGI) then it actually will be different. Past automation efforts didn’t obsolete human workers, but that’s not some law of nature you can rely on. It’s simply evidence that past automation efforts were imperfect. Hopefully this new round will be as well.


This is literally an excuse for anything I can label "progress."

Random citations of examples of hard work, capped off with an accusation of hypocritical selfishness.

The word "progress" is not a justification, any more than the word "reform."


> There's an approximately fixed number of jobs in time periods meaningful to humans.

Bullshit. I've got relatives who can remember when call centers weren't a thing.


People wanting to replace those folks’ jobs without a plan for them are precisely the types of people that turned such jobs into dehumanising meat grinders. Petty power games and greed will not be replaced by AI, that's for sure.


The first sentence is literally and nearly tautologically true, because they are the employers of the industry. Characterizing it as "petty power games and greed" is bizarre, though. They want AI for the same reason you want a calculator instead of having to hire someone to do your arithmetic.

Businesses aren't here to fix society, they're here to make money. Giving them command is our mistake, not theirs.


That's fine, and it's why the article exists: to push back on those businesses.


> The existence of call-center work, which is intrinsically dehumanizing and unrelentingly laborious

Is that really the end of the world? When I have several issues (banking, electrical/phone/gas company) I want to speak with a human.


I think this is a good question too.

Personally I want to see AI take over handling 100% of the calls from confused people who could have just used the website to accomplish their task but are too clueless. Then keep enough humans to exclusively deal with all the actually worthwhile calls, like mine. (I never call unless it can’t be done online.)

If people actually want a jobs program (uneconomic work created strictly for the sake of providing income), call it what it is, and at least have people fill potholes, or something else humans are actually better at than machines.


What would you tell the people who are out of their job thanks to AI and now can't pay their bills?


Probably the same thing you tell people who used to work in buggy whip factories. I know it’s trite but, really, there’s no constant but change.


> The existence of call-center work, which is intrinsically dehumanizing and unrelentingly laborious

The only sure way to not have "dehumanizing and unrelentingly laborious" jobs is to not have jobs at all.


This is like complaining that a National Geographic documentary is too sympathetic towards Gazelles. Sometimes the documentary tries to show the plight of the cheetah too. You can totally elicit sympathy from someone while just showing the natural order of things and expecting no actionable change to come out of it.

Moderated capitalism is a beautiful thing. It doesn’t last without good-minded people keeping it in check, but when it exists, it seems to work the best to improve the human condition. Call center jobs may look like shit to some, but to the ones that got out of poverty through them they were heaven-sent. Now their era is coming to a close and we just have to see what the next natural order of things unfolds to be.


Exactly, this topic requires purposeful journalism. Otherwise it’s just doomerism.


The New York Times recently outsourced its sports section to The Athletic, a YC startup which the NYT now owns. 35 sports reporters will be "reassigned". It's partly union-busting - the NYT is unionized, but The Athletic is not.


Didn't they retaliate against Wirecutter staff when they tried to go union, too?

Half a billion dollars for a startup by two techbros, but the Times can't afford the cost of their employees unionizing?

I guess I'm not in the least bit surprised then that every time I happen to click on an article from The Athletic out of boredom, it feels like something a freshman journalism major might write and then later be embarrassed about...and that every article I've read has been attributed to "The Athletic Staff."

Sidenote: it always amazes me that in the US, you supposedly need permission from a complicated, slow federal bureaucracy to be "recognized" and thus protected from retaliation for actions.


"...but the Times can't afford the cost of their employees unionizing?"

It's not cost, it's control...


juuuust you wait for all that disillusionment when they blow the whistle ... on .... ???


Not relevant here.


It absolutely is? They wrote this piece, and context matters because it can shed light on many things, including hypocrisy. In this case, the commenter is pointing out just that.


This is just the reality of the impending job displacement. Trying to prevent technology from advancing is not possible, and even if it was, it would limit everyone’s standard of living, which is undesirable.

We have two big problems to solve. First is figuring out if we’ll need a safety net for the first wave of automated workers, or if more jobs will be quickly created.

Second, which is more difficult, we need to make sure most people still have the purpose they derive from work. No matter how shitty a job might look from the outside, I wager most people still borrow a strong sense of identity from them. And if they lose that, shit will get weird.

The whole concept of paying UBI to everyone so they become artists is complete BS. A lot of people will just be depressed or behave in a way that negatively impacts society. Jobs are a good way to keep social order.

But telling sob stories about workers about to lose their job doesn’t solve any of these problems. It just makes people feel afraid of the future, which is counterproductive.


Nope, most people don't borrow a strong sense of identity from their jobs. They just do them so they can afford to eat. With UBI they could actually choose to do something which gives them a sense of purpose and identity. Doesn't even have to be art. Personally I enjoy gardening and growing food much more than my job. With UBI I could fully concentrate on that, instead of wasting my time on stuff I don't enjoy.

In fact it would be much more healthy for call-center workers to spend their time outdoors growing potatoes, instead of sitting indoors just to afford said potatoes. Lots of underpaid jobs like this are inherently unhealthy for humans.


> most people don't borrow a strong sense of identity from their jobs

Part of middle-class striver religion is that recognition for your labor is the only way to have an identity (or a "worth," both metaphorically and literally.) What you are in that worldview is exactly what the people who can pay think you are.


> Trying to prevent technology from advancing is not possible, and even if it was, it would limit everyone’s standard of living, which is undesirable.

Technology doesn't immediately raise everyone's standard of living. Distribution of wealth from technology increases standards of living. But until we solve the distribution problem, technology will just exacerbate inequality and reduce standards of living for those not useful to the economy.


> First is figuring out if we’ll need a safety net for the first wave of automated workers, or if more jobs will be quickly created.

You say "if more jobs will be quickly created" as though that's what determines whether we "need" a safety net. The reality is that those capable of providing that safety net will only decide it's "needed" if they risk facing any consequences for not providing one.


> First is figuring out if we’ll need a safety net for the first wave of automated workers, or if more jobs will be quickly created.

Part of the problem is that people whose entire professions have ceased to exist, aren't always in a position to pivot to something else. If a skilled profession disappears entirely, those early in their careers can go back to school/training and switch to something else. But even if successful, they're usually on the hook for the cost of that education.

Enter into the equation those within, say, 10-15 years of retirement, and do we honestly expect them to pick up student loans at age 55 that still won't be paid off when they're in their 80s?

Instead, they'll end up in a lower or un-skilled profession, their quality of life will permanently decrease, and our cultural response is to say "well, sucks to be you."

And then we wonder why so many people have resentful, crab in a bucket mentalities.


The social system we live in cannot tolerate mass unemployment, so I have no doubt that those "lost" jobs will be immediately replaced by newly invented jobs, just to keep the whole thing going just a little longer.

I'm quite certain that even the current generation of AI could already make 10-20% of first-world jobs obsolete, but of course those in power don't want that to happen. Not because of the poverty that would create, but because they don't want so many people having so much free time at their hands.


> I'm quite certain that even the current generation of AI could already make 10-20% of first-world jobs obsolete

I often ask people how we could modify an economy to support 10% automation of the workforce. Where 10% of people are not only displaced, but we do not gain an additional 10% of new jobs which could be filled by humans.

The most reasonable answer I've ever gotten was jobs programs. But I don't think this actually solves things, and neither did that person. It's just a tax and prevents people from... being the most human they can be. It also prevents us from reaching post scarcity.

Now I don't actually believe that 10% of jobs in Western countries could be replaced. There are 135m people employed in America and AI can't even replace the 2.4m janitorial staff that we have. AI isn't needed to replace the 3.8m retail staff (#1), 3.4m cashiers (#3), or 3.2m fast-food workers (#4), where the first two have already seen significant disruption and the latter is still unsolved since AI can't "flip burgers" well enough yet (yes, I know there are burger-flipping robots, you're missing the point). But they still can't replace health care aides (#2), nurses (#5), or even movers (#8). I'd really encourage you to check out the most popular jobs[0] and ask yourself if you truly can disrupt them. Because if so, you should probably apply for a Y Combinator seed round. Or AI just isn't as far along as many people think it is. Replacing the one cashier and one person working the drive-through (both multi-task, btw) isn't going to significantly reduce the 8 people working during any given shift at a Taco Bell.

[0] https://www.careeronestop.org/Toolkit/Careers/careers-larges...


This is a common fallacy that I see often on HN.

People are used to technologies that have a certain set of capabilities, where people find new ways to apply this technology over time. For example smartphones basically do the same kind of things they did 10 years ago, but people have found many more industries to apply them in (by writing apps).

AI is fundamentally different because the capabilities of the base technology are growing rapidly, at the same time as people are finding more applications for it.

So when you look into the future, you don’t just have to forecast increased applications of AI, you also have to forecast broader capabilities of the base technology. It is no use to take GPT-4’s capabilities and cast those into the future.


AI is different because it can act intelligently, picking up the nuance that humans can. That is, it can do everything a human can do. But we're nowhere near that yet.[0] Past technology didn't directly replace human workers, but rather changed the way things were done. The difference is a self-checkout vs a janitor robot. The former is a paradigm shift, displacing the worker by shifting how the job is done and who is performing the tasks. The latter is a direct replacement of the human worker, drop-in. Of course current AI will get better, but this idea has been around for decades, and as François says, people have assumed many of these problems would be solved far sooner than they have been. The thread is worth a read. We are talking about the opinions of a high-ranking AI/ML researcher who writes extensively about the nature of intelligence and current limitations. Don't be fooled by fancy demos.

[0] https://twitter.com/fchollet/status/1683200588782379008

[1] https://fchollet.com/


"It is no use to take GPT-4’s capabilities and cast those into the future."

This is true, but we cannot say "ChatGPT is huge now, and in 5 years we will have the tech to replace humans" when we have no real evidence that that is the case either...


> The social system we live in cannot tolerate mass unemployment

It already does if you go by the labor participation rate.

The traditional prescription for unemployment caused by technological innovation is to artificially decrease the employee pool by taking away both ends of the workforce (after sorting them by age). The elderly get retirement, and the young go to compulsory school. This was fine for dealing with the introduction of combustion engines, steam engines and early electricity adoption, but eventually even this was insufficient.

Which is why so many young adults 18-25 are put into higher education, to artificially drag out the time until they enter their chosen careers. They're not necessarily there to obtain a specific set of skills or education; if that were the case, employers would be more willing to hire talented and skilled applicants who lack a diploma or certification (instead, such applicants are automatically tossed out of the running by mostly automated job applicant filters).

The problem is that this requires piles of money to sustain, and our leadership is already trying to claw back retirement by upping the retirement age, even as US life expectancy has been trending down for years (incl. pre-COVID).

Most likely: we'll see people deemed redundant by the marketplace left to "let nature run its course" and die off from poverty (via addiction, avoidable medical problems, malnutrition etc). This is what was already dragging down life expectancy in post-2008-crash America.


It’s not those in power wanting to oppress the masses. Humans with too much free time cause trouble. We don’t want that.


> Humans with too much free time cause trouble

Where is this coming from? Got any data?


I really don't get why this is such a popular belief. When you ask people what they'd do if they didn't have to work you find that they talk about hobbies, many of which have social value. Art, science, education, whatever. You do of course have to ask what they'd do after the initial relief of this burden, and specify long term rather than short term things like catching up on sleep or going on vacations.

In fact, we actually have strong evidence to support this claim that people would do "work." There are plenty of people who have amassed so much wealth that their family would not need to work for several generations while maintaining an extravagant lifestyle. There are 182 people with >= $10bn, which I think we can say is a pretty conservative baseline for indefinite luxury (let's again be conservative and say that's $1m/yr spending per person, and that they can earn a 2.5% return per year). I don't know everyone on Forbes' list, but I'd be shocked if a quarter of them did not work. Gates and Buffett have both "worked" into their old age. I don't see why an average person wouldn't either.
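A quick back-of-the-envelope check of those assumptions (my numbers above, deliberately conservative), as a tiny Python sketch:

    # Rough sanity check, not a financial model: $10bn net worth,
    # $1m/yr spending, 2.5% annual return (all assumed above).
    net_worth = 10_000_000_000
    annual_spend = 1_000_000
    annual_return = 0.025 * net_worth

    print(f"return/yr:   ${annual_return:,.0f}")              # $250,000,000
    print(f"spending/yr: ${annual_spend:,.0f}")               # $1,000,000
    print(f"coverage:    {annual_return / annual_spend:.0f}x")  # 250x
    # Even with zero return, the principal alone funds
    # net_worth / annual_spend = 10,000 years of that spending.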

People just get too fucking bored to not "work." Hell, look at how many open source projects there are. I'd only expect those to grow.


> I don't see why an average person wouldn't either.

Some people are just hardwired to be productive. It's part of why they became billionaires. A better judge would be to look at the children of wealthy people who were born in the lap of luxury. Do their activities generally reach the potential that their status gave them, or do they waste their days on frivolous pursuits? I don't have any stats, but we all know plenty of stories of rich children who grew up to be useless drains on their parents' finances. Given the endless stream of entertainment afforded by modern life, if people's means of living comfortably were taken care of, I bet a large portion of people receiving UBI would do nothing productive.


Another population to look at is retired people. Every one that I've seen actually has difficulties adjusting. They really aren't just sitting around watching TV all day until their bodies wear out.


Riots during the lockdown? Nothing even a fraction as intense would have happened if people weren't stuck at home.


Is the cause of those things too much free time, or are they actual social problems? If the latter, was it the additional time that enabled people to actually attempt to address those issues, or additional stress that pushed them past the requisite threshold? Obviously some combination, but do you really think free time was the dominating *cause* of the riots, or do you think "riots are the voice of the unheard" (not advocacy, but a warning)?


Don't forget income tax.

They have to juice the masses for productivity as well so they can be taxed!


Or they could go to war, to cull unnecessary humans, leaving a higher class with nothing but them and their servants, or I guess in this case, robot slaves.


> The social system we live in cannot tolerate mass unemployment, so I have no doubt that those "lost" jobs will be immediately replaced by newly invented jobs, just to keep the whole thing going just a little longer.

In the US, they’ll be left to rot like they were before in the industrial Midwest, post-NAFTA. Or is happening now in most major US cities, where real estate speculation is favored over everything else, resulting in an affordability crisis and widespread homelessness.


In our line of business, our call center workers spend a significant portion (hours) of their time writing up notes and other information after calls.

Having automated transcription and summarization (approved by the user) will return a ton of time to them and make them more efficient.
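A rough sketch of the shape of that workflow (the function names are placeholders, not our actual tooling): transcribe the call, draft a summary, and only save what the agent approves.

    from typing import Callable, Optional

    # Illustrative only; transcribe/summarize stand in for whatever
    # speech-to-text and LLM summarization services are actually used.
    def draft_call_notes(audio_path: str,
                         transcribe: Callable[[str], str],
                         summarize: Callable[[str], str]) -> str:
        """Turn call audio into draft notes for the agent to review."""
        transcript = transcribe(audio_path)
        return summarize(transcript)

    def file_notes(draft: str,
                   agent_edits: Optional[str],
                   save: Callable[[str], None]) -> None:
        """Only the agent-approved version is ever written to the record."""
        save(agent_edits if agent_edits is not None else draft)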

And they could never be replaced, imo. The human impact in our sector is too important. A machine cannot, at least not yet, sympathize, joke, and persuade in the right balance, especially when the other party is emotionally agitated.


> And they could never be replaced, imo.

> A machine cannot, at least not yet

I think we really need to get away from this idea that people can never be replaced. You've contradicted yourself with "at least not yet" two sentences later - that's not to criticize you, I think you're correct in the latter instance and doing something a lot of people do. They see the reasons that people can't be replaced now and just extrapolate that out to forever, while sort of handwaving away the fact that they realize that AI is rapidly improving.

I think it's very clear that humans in these roles will be replaced in, say, 50 years. The question is how much sooner than that will it actually be - 20 years? 10? 5? 1-2?


> The human impact in our sector is too important. A machine cannot, at least not yet, sympathize, joke, and persuade in the right balance, especially when the other party is emotionally agitated.

I'd also add that even the reasoning for why the human isn't replaceable here is debatable. I've never called a call centre for sympathy or jokes. It's always because I have a query I want an answer to or a problem I want sorted ASAP.

Assuming the benefit of an AI call centre would be that I wouldn't have to wait on hold, I'd rather have the AI centre than the inefficient human one. In scenarios where an AI can't easily deal with the query, then sure, I see why it would be good to have a sympathetic human who can take the call, but 90% of the time an AI would do just fine.

I think this is what a lot of people get wrong about automation. Everyone likes to point out how their job has these one or two really difficult bits, while missing that 90% of the job is easy.

As an example, here in the UK a lot of cashiers have now been replaced with self-checkouts, despite the fact we're still far from fully automating the cashier's job. We still need a couple of people on hand when there are security tags to remove or when someone needs IDing, etc. But that doesn't change the fact there are far fewer of them, because 90% of the job can be automated.


> I've never called a call centre for sympathy or jokes. It's always because I have a query I want an answer to or problem I want sorted ASAP.

Not every problem has a quick solution. Many are thorny and include lots of variables and long time spans.

Imagine trying to choose the right treatment facility for a child with leukemia that will accommodate your insurance and finances. Or what sort of care a senior parent with dementia and diabetes needs.

When the stakes are that high, having a human who can appreciate your situation and emotional state is almost mandatory.


> Not every problem has a quick solution. Many are thorny and include lots of variables and long time spans.

Well-implemented AI ought to eliminate a lot of those long time spans, though. No more escalation or transfers required, no time spent looking things up: you've got a call center agent that isn't merely accessing the relevant internal systems, it's integrated with them (and with every other call center agent).
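A rough sketch of what "integrated with them" could look like in practice, assuming a tool-calling setup (every name and system here is made up for illustration):

    from typing import Any, Callable, Dict

    # Hypothetical internal systems exposed to the assistant as tools.
    TOOLS: Dict[str, Callable[..., Any]] = {
        "lookup_account": lambda account_id: {"plan": "fiber-500", "status": "active"},
        "open_ticket": lambda account_id, issue: {"ticket_id": "T-1234", "issue": issue},
    }

    def handle_tool_call(decision: Dict[str, Any]) -> Any:
        """Dispatch one model-chosen tool call straight to the backing system,
        so there is no human relaying information between screens."""
        return TOOLS[decision["tool"]](**decision["args"])

    # e.g. the model decides it needs account details before answering:
    print(handle_tool_call({"tool": "lookup_account", "args": {"account_id": "42"}}))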


I imagine it will play out like this scene in elysium:

https://www.youtube.com/watch?v=LZb5WyIukOc


Tell your half-brained friends who think they can fire their call support staff. Literally no one in the management world has gotten the memo: it is becoming harder to get a problem solved with a human when a Google search or reading through the docs won't suffice.


If AI can replace abundant jobs in customer service, this will mean that the entry level becomes harder and harder for people to access. Because the work that's left for humans will become more skilled and complex - the stuff the machines can't deal with.


This is what I am wondering. The general entry path for IT (sysadmin) type jobs is to get a front desk support job and then start accumulating certifications. How is AI going to affect that path? Or is AI erasing professions from the bottom up, so there's no need for new workers to enter these professions?


> Or is AI erasing professions from the bottom up

AI will be erasing professions starting from the middle. It's going to be a very interesting, unprecedented development.

Two things are important: first, how much value the AI will create. Second, who will capture that value. Assuming it creates an amount of value commensurate with the number of jobs it replaces, the only important question is whether democracies survive the transition. If they do, people retain political power and use it to retain economic participation. If democracies fail, well...


Do we also bemoan the loss of backbreaking agricultural jobs or tedious and dangerous industrial work that have been automated away?


Yes we do, search the term 'rust belt'. Agricultural jobs turned to automotive jobs that turned to no jobs and wide areas of depressed economic activity.


There is no one “we”. Many groups of people gained from cheaper goods and services. Some groups of people lost from loss of income.


Yeah there's gonna be a whole lot of "comparative advantage" coming down the pipe, but the winners will become fewer and fewer by the day.


And we shoved a million more people into prisons in that time frame, who are now an administrative burden on society. 'We' all pay for that.


Someone who was laid off from such a job and never got another one probably does, in fact, bemoan that loss.


We do, similar to the mining towns of the UK: https://www.bbc.co.uk/news/uk-england-50069336


Not all of them will be super complex.

Everyone talks about how they don't know what new entry level jobs will open up when the AI automates everything.

When the delivery drones come, someone is going to have to go and retrieve them when they break down, or get stuck in trees, or shot at by someone...


Sure, we already have that with scooter collectors. If those companies keep operating after the VC money, the number of transport jobs they created is minuscule compared to the transportation they eliminated.

Extreme poverty is nearing its end, globally, and with it this efficiency axiom will end too. As we get more efficient at delivering a reasonable quality of life with a lot less work, the amount of labor needed decreases, eliminating jobs, which is a natural spiral. No longer is there a kind of global reservoir.


Sounds wonderful. If only we had a society that was explicitly concerned with the welfare of individuals who weren't criminals, selected children, or the victims of natural disasters, everybody's lives would instantly become significantly better.


Leading edge AI can already handle many skilled and complex tasks. Within a few years this will be widely deployed.


Don't worry – those "complex" tasks will be done by machines also, just 3-5 years down the line.

It's all over, folks.


Historically, each new tech wave also created more entry level / low pay jobs. The previous ones were Uber, package delivery workers ...

This new tech wave will also probably create low entry / low paid jobs, just different ones. Maybe there's going to be entry level AI tuning or similar. It's hard to say how it will play out.


Working in a call center is terrible (I worked in one for 2 years), and any call center worker will tell you the ultimate goal should be to identify and solve the problems before the customer needs to call in, and empower the customer to solve the problem themselves (with requisite tools and documentation). Replacing the call center worker with an AI won't solve either of these things; in fact it probably makes the act of interacting with the support 'agent' worse in many ways; just another hurdle to customers getting help.

And in some cases it is antithetical to the business interest to let the customers solve the problem themselves, example: cancelling your ISP plan. Some state(s) (notably California) enforce giving customers the option to cancel online[1]; but most of these companies demand customers call in and be subjected to a wait queue, and a pathetic, dehumanizing customer retention plea before they are 'granted' the cancellation. The tools to simply cancel an internet plan are actively withheld from the customer. Will an AI change anything about that? Probably not.

[1] https://www.cnet.com/tech/services-and-software/companies-mu...


When I call a call center, I’m not looking for suggestions on how to solve the problem myself. This builds up nuclear level rage. If I’m calling it’s because I want to delegate the task of solving my problem to someone who can click buttons and make it happen, fast. And if an AI can do that quickly and happily, then good riddance to those call center workers.

Sometimes interacting with these call center workers is so infuriating that I'd wager this is the most hostile interaction an average person has with another live human on any regular basis. When I think back to the times when I've been really angry in my life, talking to call centers has consistently ranked as the maddest I've ever been, and probably the maddest I'll ever get. Imagine all of that simply going away with a competent AI – a net good for the world.


You get filled with rage because call center tech support is not for you. Call center support is rarely allowed to do anything you can't already do online, and they are rarely able to tell you secret information that can't already be found online, which you have already tried.

Call centers exist to serve the people who can't or won't RTFM online. You do RTFM online, so... rage.

AI-based support is likely to be the same: A chatbot version of the same old information you can already get on the FAQ and Troubleshooting section online. I don't know why we expect it will be different or suddenly unlock some Secret They Dont Want You To Know.


> Call center support rarely is allowed to do anything you can't already do online

This is not my experience at all.

In the past couple weeks, I’ve had to call in to businesses to get the following resolved because it wasn’t something I could do online:

1. Get some bank charges reversed because the online interface explicitly said I wouldn't get charged for the actions that prompted the charges.
2. Get DRIP enabled on one of my investment accounts.
3. Downgrade a credit card product to a different credit card.
4. Get pricing information about a financial product that wasn't provided on a banking website.
5. Follow up with the fibre provider in my area because they've somehow connected every house on my street but tell me that my house isn't eligible. (TBF this one isn't actually resolved yet.)
6. (Twice) call a bank because legitimate transactions got flagged as fraud and denied.
7. Get my address on file updated with a political party I'd donated to so they'd stop sending mail to an old address.


I can think of at least one scenario where you need that call center worker. Let's say internet goes down in your area, but you're not sure where the problem is. Allowing a customer to call for a technician to come check the connection in your house might be a waste if the problem is regional. If too many people do that, the waste of resources compound. And you could argue that AI could diagnose where the problem is and choose what to do, but maybe it can't. You need that human to disambiguate the course of action.


I don't know what call center you worked in, but when I did it, no one cared about any of that. Even if they did, most customers didn't have the technical acumen to understand what they could have done to avoid the call. They don't want to know how to solve it themselves; they see solving problems as your responsibility, since they paid for the product or service and you are its support. Business customers are the only exception.

Matter of fact, attempting to educate customers instead of telling them what needs to be done to fix the problem can backfire because regardless of your approach it could come off as either condescending or blaming them for calling in.

Self included: I went to my bank a while back to do something and the guy told me I could have done it online and how, instead of just doing it and letting me leave ASAP. Yeah, I don't give a shit what I could have done online, I already spent too much time fighting their bullshit and I just needed the problem gone ASAP because it was an urgent situation. At least solve the problem first before you try to educate me.

Call centers have tiers. I do think LLMs can do tier-1 work, because tier-1s basically work off a script/flow and an LLM would just pretend to listen and react according to the script. Tier-2 and above, however, is a terrible fit for LLMs.


The reason why the ultimate goal should be to solve the problems before the customer calls in is because of the amount of time wasted by both parties once things get that far. If an AI agent can reduce the time spent by both parties (no waiting on hold for the customer and no employee time for the company), then the goal is accomplished.

For example, you call out documentation as something that needs to be improved so that customers can solve their own problems—what's wrong with that documentation taking the form of an interactive agent you can ask questions to? A well-trained chatbot (not raw GPT) has the potential to be much more useful for the kinds of queries that would otherwise end up at customer support than does a search engine on a bunch of docs.
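A minimal sketch of that "docs as an interactive agent" idea, with a toy keyword lookup and made-up doc snippets standing in for a real embedding index and LLM call:

    # Toy docs-grounded answering; a real chatbot would retrieve with an
    # embedding index and answer with an LLM constrained to the passages.
    DOCS = {
        "cancel": "To cancel, go to Account > Plan > Cancel. Refunds are prorated.",
        "invoice": "Invoices are issued on the 1st; payment is due within 14 days.",
    }

    def retrieve(question: str) -> list[str]:
        """Naive keyword retrieval over the documentation."""
        return [text for key, text in DOCS.items() if key in question.lower()]

    def answer(question: str) -> str:
        passages = retrieve(question)
        if not passages:
            return "Not covered in the docs; routing you to a human agent."
        # A real system would prompt: "Answer ONLY from these passages: ..."
        return " ".join(passages)

    print(answer("How do I cancel my plan?"))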


> what's wrong with that documentation taking the form of an interactive agent

Among many others, in no particular order:

* (borrowing from above) The interactive agent can force you to sit through however many menus/questions the company wants until you are "granted" cancellation. Good luck finding the right option if they make it difficult to navigate! All the while, you have a condescendingly "friendly" "conversation". At least with a real person, I can convey my message clearly, and even if they must still read certain things, they will know to do so more quickly. Have you ever dealt with Amazon's chatbot, which shows the "typing" graphic for a few seconds before giving what is obviously generated text, and it takes a few back-and-forth exchanges like this before someone real introduces themself in the chat?

* You can't quote it like you can actual documentation, which has knock-on effects such as it being harder to hold the company to account, not being able to give advice to friends & family, not being able to compare competitors before you open an account, ...


None of these problems are inherent to chatbots, nor are any of them unique to them. I'm not even convinced that any of these would be exacerbated by a chatbot—a company that's going to treat its users poorly will do so regardless of the type of agent they employ. And on the flip side, a company that cares about its users could absolutely design a chatbot that solves these problems and legitimately saves everyone time.

I think the attitudes that we have right now towards chatbots are shaped by the companies that don't give a shit—the tech really wasn't ready yet, so only the worst companies tried to force you to use a chatbot. We're on the verge of that changing, and soon a chatbot will be able to provide a better customer support experience than the average overworked call center employee. When that happens, I expect to see companies that care about their customers begin to make the switch, and we'll start interacting with implementations that really work.


Like you, I'd love to live in a world where, "if everyone behaves, things are much better for everyone". It's a great idea.


> and a pathetic, dehumanizing customer retention plea

State your desire to terminate. When a "conversation" begins, simply & robotically repeat your desire. Repeat as necessary.

It works.


We need chatbots on our phone to talk to their chatbots.


I heard that if you encounter one of those "Please tell me what you want help with?" voice prompts on a customer service call, you can just keep on repeating profanity and you will eventually get an operator.

I have never tried it, but I sort of want to...


FedEx hangs up on you. I tried.


Or use a virtual card (such as privacy.com for US). Create one to subscribe and delete it to unsubscribe.


Any time I have to call for assistance it's like pulling teeth.

BofA actually has a nice system. I can hit 'call support' from the app and there is a number added to the telephone number which identifies me. No need for me to search through various documentation.

Oh, and the fake typing noises while the AI / robot is 'searching'... they drive me nuts. Like, who are you trying to fool here?


They're not trying to fool anybody, they're just trying to tell you that they're still there. It's the audio equivalent of a spinning cursor.


But do they really need 5 seconds of fake typing to look me up?!?


If it takes 5 seconds to perform the procedure, then yeah. IDK how these systems work, but I wouldn't be surprised if there was an explicit time delay added (like a second) to combat brute forcing. But come on, 5s isn't going to make or break your day. Chill with the instant gratification.


Funny, I can think of only two companies who I don't mind calling, and they are both financial institutions as well.

When did companies start treating customer interaction as a nuisance, a cost to cut, instead of an opportunity to build a stronger relationship and brand loyalty?

I mean, CfA sells objectively higher-priced food (for fast food), and people will still wait in line for it, because they don't treat people as a nuisance, so there has to be something to be said for the business model.


I think it might have started with Thiel. I read an article about PayPal being the first to actively avoid support.


Well, I don't like to call customer service but prefer that to a chatbot any day of the week.

If I want to return something or let them know about some issues with my purchase, I'd like to talk to someone on the other side of the aisle. I'd hate to "talk" to a chatbot and waste time trying to get to a human representative. Heck, I highly doubt this would work for old people, angry people, etc...

I could see a company selling its products with "100% human touch" as a marketing gimmick.


I think that at least part of my dislike of chatbots is that they force you to navigate an often poorly thought out decision tree in order to try and accomplish something.

When you talk to a human they're often navigating the same decision tree but they're acting as an interpretation layer to translate your natural language into the specific series of commands the machine they're working with will accept.

It seems likely to me that an LLM would be able to provide that functionality.
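Something like the following is what I have in mind: the decision tree stays exactly as it is, and the model's only job is to map free text onto one of its entry points (the tree and the keyword matching below are made-up stand-ins for an actual LLM call):

    # Hypothetical decision tree a call-center system might already have.
    DECISION_TREE = {
        "cancel_service":  ["confirm_identity", "offer_retention", "process_cancellation"],
        "billing_dispute": ["confirm_identity", "pull_invoice", "open_dispute"],
        "outage":          ["check_regional_status", "schedule_technician"],
    }

    def classify_intent(utterance: str) -> str:
        """Stand-in for an LLM call that maps free text to one known intent;
        a real version would prompt the model with the list of intents."""
        text = utterance.lower()
        if "cancel" in text:
            return "cancel_service"
        if "bill" in text or "charge" in text:
            return "billing_dispute"
        return "outage"

    steps = DECISION_TREE[classify_intent("I was charged twice this month")]
    print(steps)  # ['confirm_identity', 'pull_invoice', 'open_dispute']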


Doesn't always matter that you're talking to a human. Like when you call Comcast with internet trouble, and you tell them you've already rebooted the router but they make you do it again anyway because that's the first step in their tree and they aren't allowed to skip it.


Current-gen customer service chatbots are pretty bad.

But I could see something ChatGPT-level working in 95% of cases. Right now this doesn't work because of issues like prompt injection, difficulty of training on company data, etc. But I expect these limitations will be worked out over the next 5-10 years.


But ChatGPT has lied to me in almost every session I’ve had with it. How will that work when it promises hundreds or thousands of customers something it hallucinated?


When companies we deal with don’t offer human phone support I tend to call their enterprise sales departments and have them transfer me to someone who can help. I think the furthest I’ve gotten was once where I pulled the cell phone number of some CEO off their LinkedIn and called them directly because everything else had failed.

As long as you’re extremely politely annoying it works every time in my experience.

I imagine the AI is meant to replace the type of call centers that are as useless as an e-mail form or a chatbot, but I don’t personally believe in something that useless. I mean, what’s the benefit? To make people who aren’t stubborn idiots go away with unresolved issues?


> “Am I training my replacement?”

Yes, and it's not new. I know for a fact Google has been using QA specialist data results to train the AI at least since 2018.

E.g. there used to be armies of people watching YouTube videos in search of policy-breaking content, and essentially all they did with each report was train the AI.

A friend of mine who worked there used to tell me that there's tons of insane content being pushed on YouTube that you never get to see, such as children's cartoons where at minute 24 there are random porn images or Hitler speeches.


Why do articles like this never confront the paradox at play here: these sorts of jobs are notoriously awful and dehumanizing. Their existence is often looked at as a failure of our system, even in the very same publications that are now publishing these sorts of articles.

I don't mean to minimize the problems that this (pardon the term) disruption causes, or the much more concrete and immediate fact of losing one's job. But if we're going to have that discourse don't we need to confront that central question? Should these impossibly rote and dehumanizing kinds of jobs exist, or shouldn't they?


People make more money working a shit job than they make when they're unemployed. People endure dehumanizing shit jobs because they have mouths to feed. Unemployment doesn't feed mouths.

---

> Should these impossibly rote and dehumanizing kinds of jobs exist, or shouldn't they?

It'd be nice if they didn't, but things just have to get done sometimes, rote, dehumanizing, or otherwise.

I don't think bullshit jobs should exist, but people keep paying money to do them, and people who take these jobs do so because they have mouths to feed.

I think callcenter jobs are bullshit jobs. But how else can people feed mouths?

So many people, so few jobs, but so much to consoom, so we invent bullshit jobs for them, lest people stop consooming, god forbid.


It feels like we could cut out the middleman by giving people money for doing nothing, and letting the robots do the dehumanizing jobs. I'm not sure why people think it's morally superior to force people to do useless stuff to get the money.


This. Anyone who does not understand this needs to check their privilege.


If everyone listened to PC, there'd be no janitors because no one would clean shit up if they had a million dollars.


If the callcenter operators thought they were bullshit jobs, they would not be hiring people to work them. Callcenters make money, so on some sense they are not bullshit jobs.

A bullshit job is more of a corporate office job that looks legitimate on the surface but the person does nothing of any value and the job really only exists because some manager didn't want to reduce his number of direct reports.


I'll give a counter-point here.

While I agree with a lot of what you said about the challenges of these roles, my company is actively providing opportunities for our call centre folks to access internal re-training programs (designed in collaboration with professional educational institutions) to bring them into junior development and product roles within the company, as we transition more and more to a self-serve model (and therefore need fewer and fewer call centre agents).

Obviously this is not for everyone (our first cohort was 60 people and we had just over 1,000 applicants, out of which only about 30% met the qualifications).

It's been a very successful first cohort in 2022-23 and one thing these re-trained folks bring to the table as transferable skills is a very clear understanding of the problems and shortcomings of our service offerings, having been at the "failure" end of many processes as you put it.

As a result, many of them are very quickly contributing way more than a junior who is coming in straight from bootcamps or the outside, and are helping improve our products in meaningful ways that might have been hard, or expensive, to suss out by our existing product teams.

So if these jobs didn't exist, we wouldn't have these smart humans coming into our teams with that real-world experience.


And that seems great that your company is doing that, but I'm not positive that is the norm. Also, what happens to the 70% who "don't meet qualifications"? They now no longer have a job and have been told they aren't able to succeed or advance.


This definitely isn't the norm, but hopefully we see it more as time goes on.

Hiring good people is hard, and if you have folks in your organization who have been there a long time (the program is only open to employees who have been with the company for more than 5 years), then why not reward that loyalty with an opportunity to transform yourself at the same time as the business transforms itself?

> what happens to the 70% who "don't meet qualifications?" They now no longer have a job and have told they aren't able to succeed or advance

They stay in their current (unionized) job, and can still access other opportunities for advancement within the company, like they always have. We have a long and proven track record of promoting out of our call centres and into our business and management practices; this is just a specific program to help grow our digitization business.

This isn't a "retrain or layoff" situation, sorry if that was somehow implied. Our company will continue to need call centre humans for many many years to come I suspect. We just need less of them.

Also for anyone who is accepted into the re-training program, their job is secure until they choose to take a new role. At any point during the training program (which lasts about 16 weeks), they can go back to their old job if this isn't what they expected or wanted.

They also continue to get their salary and benefits while they're in the full-time training program.

The only commitment is that if they go through the full program and take on a new role, they need to stay in that role for 2 years or more, otherwise they have to re-pay a prorated amount for the training program (because we pay our training partner for each person who takes the program) based on how soon after the program they left.


> Should these impossibly rote and dehumanizing kinds of jobs exist, or shouldn't they?

The question that precedes this one is “should we care for people’s basic needs like housing, healthcare, food, etc?” In America, the answer to that question is largely “no”, from which all the dehumanizing work then follows. Many conversations about AI on this site seem, perhaps intentionally, to elide this reality.

People aren’t angry at the prospect that these awful jobs will be taken away, but that their baseline survival will be made more difficult so that a handful of already wealthy people can get even more rich.


As a very smart man once said, "It took both time and experience before workers learnt to distinguish between machinery and its employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilises those instruments."

We've forgotten the lesson and we have to relearn it. The problem isn't AI - the problem is the owners of capital, who will be abusing and exploiting workers with or without AI. AI as technology is a wonderful achievement and the product of a great deal of proletarian intellectual labour, built on the intellectual labour of society as a whole - it is ours, and it is fantastic once we restore control to its rightful owners. Unemployment, homelessness, lack of healthcare aren't caused by AI, they're caused by an economic system which demands unemployment, homelessness, and which throws in the lack of healthcare for free. Accordingly, we cannot prevent unemployment by fighting AI - it has never worked in history and it won't work now, like fighting the mailman to "prevent" a court summons. We have to identify and fight the root cause, not the tools that both we and they use or will use.


People like the money they make more than they mind the hassle of doing the job; it's a great thing that people who work in the computers/tech field love what they are doing, but not every job is like that.


Everyone I knew who had a call center job was very happy to leave.

Automation hasn’t led to fewer jobs, it just leads to changed jobs. Unfortunate for those affected, but we shouldn’t be holding up degrading, mechanistic jobs as examples of what we should hold on to.


It’s quite privileged to describe call centre work as degrading and mechanistic. It’s a bit boring, but it’s clean, office-based work where you get to use your brain.

I would prefer to speak to a good call centre worker any day of the week than email back and forth or fill in forms that go into black holes.


Depends a lot on the call center. I worked in technical support for a while, and while dealing with awful customers was awful, I did get to use my brain and creativity to solve problems and that was fun.

I have family and friends who have worked in call centers where they have quite literally been selecting lines from a script to read off. That's a very different kind of experience, and it's the obvious type of job for AI to replace. Those family and friends might be stressed to have to find a new job, but they're not going to miss it for its own sake.


No, it is degrading. Most soulless agonizing job I ever worked, would cry on my commute home on many days.


I would rather work in a call centre over factory work, food production, retail, cleaning etc. None of these jobs are fun, but millions of people have to do them.


I work in a kitchen on weekends for fun.


I once did cleaning and groundskeeping for fun, back when my spine allowed it.


> People like the money they make more than the hassle of doing the job

You're framing this like a choice, when for most people doing such jobs it absolutely isn't.


It is a clear choice between bad alternatives. Unless you realistically think we’re going to make cash transfers a thing, this boils down to “better above ground in the rain than below in the dirt”.


Read the article. It's actually way better than the jobs she had before.

Sure it would be good for people to do jobs that are on the next level up in terms of complexity. However, AI is coming for those jobs also.


Not looking forward to having to jailbreak a chat bot on the phone just to force it to cancel my account.


The next step will be to finally make real the demo Google showed a while ago of a voice assistant calling support. Robots talking to robots to cancel subscriptions. In science fiction that would lead to AGI, but it will not.


Every time someone talks about how great AI is going to be for business, I ask "how?"

The answer always boils down to greed. Every single time.


I'm not as much of an AI enthusiast as many here, but it's hard to deny the impact of ML in applications like Copilot or Photoshop.


Working in a call center is the worst. Pure torture. But that's common knowledge.

It's strange to think that somebody would fight to keep that job.

It illustrates the depths of the ubiquitous mindfuck under which we operate.

I mean, what if the torture was replaced with a more obvious torture? What if you spent 8 hours a day getting your fingernails pulled out? At a good hourly rate and with nice benefits of course.

Would we squeak?


> I mean, what if the torture was replaced with a more obvious torture?

Speaking as someone who worked in a call center for less than a month before quitting because I hated every second of it, having your fingernails pulled out seems a lot worse. I'd take the call center job.

Another thing that's worse is being broke, I'd take the call center job over that too, if those were my only choices. I'm not sure about a ubiquitous mindfuck, but I know we operate under the need for food and shelter.


Ms. Sherrod from the article seems to disagree with you here:

> Within months of working at the casino, Ms. Sherrod felt the toll of the job on her body. Her knees ached, and her back thrummed with pain. She had to clean at least 16 rooms a day, fishing hair out of bathroom drains and rolling up dirty sheets.

> When a friend told her about the jobs at AT&T, the opportunity seemed, to Ms. Sherrod, impossibly good. The call center was air-conditioned. She could sit all day and rest her knees. She took the call center’s application test twice, and on her second time she got an offer, in 2006, starting out making $9.41 an hour, up from around $7.75 at the casino.


In many cases the only options are equally shitty or shittier jobs with worse pay.


They have the option to quit now. If the job is so awful but people still do it, the alternatives must be worse. E.g. becoming homeless or an even worse job. I’m pretty sure we will see a spike in both.


Some people are not as sensitive as me or you. Diversity in action. (The real one.)

For a more extreme example: most of us wouldn't think about applying for the job of a hangman, but whenever there was a vacancy, the authorities were inundated with applications.


What's not to like? Wear a hood. Pull a lever. Get your fat check. It sure beats answering a phone.


Read the article. The other job options for her were worse.


If we want to live in a future utopia where the robots do all the work, then someone has to build the robots and design the AI.

I think we need to explore post scarcity politics and economics to understand how we can support all of the people who lose their jobs. How to organize society when people don’t need to work is a political question, not a technical one.

My favorite answers are Universal Basic Income and Fully Automated Luxury Communism.



