
This paragraph took me aback:

But if I just look at it and say, if 10 years from now, we have ‘universal remote employees’ that are artificial general intelligences, run on clouds, and people can just dial up and say, ‘I want five Franks today and 10 Amys, and we’re going to deploy them on these jobs,’ and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that’s the most prosaic, mundane, most banal use of something like this.

It kind of shocked me because I thought of the office worker reading this who will soon lose her job. People are going to have to up their game. Let's help them by making adult education more affordable.



What struck me here is that the idea of "five Franks and ten Amys" seems like a fundamentally wrong way to think about it. After all, if I do some work in an Excel sheet, I don't think of it, much less pay for it, as an equivalent of X accountants that could do the same job in the same amount of time without a computer. But then again, this is probably the best way to extract as much profit out of it.


Yeah, sounded weird to me too. I don't see why artificial intelligence would get deployed in human size units most of the time. The AWS bill won't be for 5 Amys, and I don't think people will "dial up" to order them.
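If it worked at all, billing would presumably look like metered compute rather than head-count. A purely hypothetical sketch; every name, class, and rate here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class WorkerSpec:
    persona: str    # e.g. "Frank" (analyst) or "Amy" (support) -- invented labels
    count: int
    hours: float    # wall-clock hours to reserve

def estimate_bill(specs, rate_per_hour=4.00):
    """Bill by compute-hours, not by head-count: 5 Franks for 2 hours
    costs the same as 1 Frank for 10 hours."""
    total_hours = sum(s.count * s.hours for s in specs)
    return total_hours * rate_per_hour

order = [WorkerSpec("Frank", 5, 8), WorkerSpec("Amy", 10, 8)]
print(estimate_bill(order))  # 120 compute-hours * $4 = 480.0
```

The point being that once it's metered like this, "5 Amys" is just a human-friendly label over a pool of fungible compute.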


> I don’t see why artificial intelligence would get deployed in human size units

Probably because it would be easier for humans (managers) to make sense of it.

If you ask someone how many people would get this particular job done, they could probably guesstimate (and it'll be wrong), but if you ask them how many "AI Compute Units" they need, they'll have a much harder time.

That'd be my guess at least.


Why would managers need to guess? That seems like a perfect job for another AI: "Hey PM bot, I want to get these tasks done, how many Amy-hours and how many Frank-hours do you estimate it will take?" Also, why not a Manager-bot too? Shareholders can leave humans out of the loop entirely except as necessitated by legal paperwork. Come to think of it, shareholders can probably be replaced too.


I mean, if we can actually get there, I'd love it, and I'm a programmer. I want to write code that solves a problem that couldn't be solved in any other way; if it can be solved by CodeGPT + ArchitectGPT + ScrumManagerGPT + MiddleManagerGPT without involving me in any way, I'm all for it.


As long as AI interacts with humans, having it interact in human size chunks seems like a good idea.

In the backend, where AI interacts with AI, perhaps you just want one big blob to get rid of that annoying need for lossy communications.


Wouldn’t this be literal slavery?

AGI = a person

Instantiating people for work and ending their existence afterward seems like the virtual hell that Iain M Banks and Harlan Ellison wrote about.

https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...


We still have a form of slavery in the US. Prison labor is forced work that earns pennies a day: https://www.aclu.org/news/human-rights/captive-labor-exploit...


Why is that a bad thing? Most of those people are a burden to society. Let them pay it down a little.

I mean I’d rather they were getting free education and preparing themselves for reintegration into society, but it’s not a perfect world. Prisons in the US are oriented towards punishment and labor can be a part of that. They should be oriented towards rehabilitation.


> Why is that a bad thing?

> I mean I’d rather they were...

> They should be oriented towards rehabilitation.

You said it yourself. It's a bad thing because they should be oriented towards rehabilitation.

These systems steal life and the opportunity to have a life beyond prison walls. Like you also said yourself, the world isn't perfect. As such, people aren't either – we make mistakes. Sometimes we make mistakes due to influences more powerful than ourselves. Slavery doesn't seem like a sound correction to this reality.

I do believe we need consequences to help us feel guilt and the overall gravity of our errors in order to begin to recognize what went wrong and what we need to do differently. But exploitation of another human being doesn't teach them to be more human, but rather, it will tend to dehumanize them. This is why this system perpetuates problems more than it corrects them.


The justice system is not just, plain and simple. People face higher rates of incarceration because of their race, country of origin, etc.


Any system that financially profits off its prisoners' labor inadvertently creates a market for that labor and commodifies it.

Slavery is bad and people have rights.

> They should be oriented towards rehabilitation.

Exactly.


Say it however you like; you're still okay with slavery when it's for the right person.


> Most of those people are a burden to society.

This is both extremely dehumanizing and also not true.

Forced prison work isn't paying anything back to society. It's lining the pockets of people who are profiting from forced labor.


It is true. Society paid a price for their crimes and then pays an ongoing cost to prosecute and maintain them in prison. It's a very high cost.

I imagine the underpaid labor goes to reducing that cost, either directly or indirectly (if it did not, why would it be allowed?).


What price did society pay for a guy driving around with a bunch of weed in his car for personal use? Countless people have been sent to prison for years for something as dumb as this. You clearly have no idea what you're talking about to so widely call these people a burden.

>if it did not, why would it be allowed.

because we live in a society that is massively exploited by greedy scumbags who are enabled by people like you thinking it's justified


It's going to take a long time for that to be true in a legal sense. Animals are not people. In practice even some people were not treated as people legally in the past (if not also in the present).


There is a horror story written about this theme; it went viral a few years back.

https://qntm.org/mmacevedo

Please do give it a quick read.


It's hardly a story.


People get used as an analogy, but in reality it'd just be a multimedia problem solving system that could learn from its own attempts. If this system communicated with you like a person it'd only be because it was programmed to convert some machine state into colloquial text from the perspective of an imaginary person. The interior experience leading to that expression is most likely completely different from that of a person.

Consider that these machines have been designed to do the right thing automatically with high probability. Perhaps for the machine, the process of computing according to rules is enjoyable. Being "turned on" could be both literal and figurative.


All of that is arguably true about me, as a human, too.

If it seems to you I'm communicating as a person, it's only because of my lifetime training data and current state. My interior experience is a black box.

I might tell you how I feel or what I think, but you have no reason to believe or disbelieve that I really feel and think.

It could all be merely the determinable output of a system.

https://en.wikipedia.org/wiki/Chinese_room


Only if 100% of their experience consists of working. If they are given additional time to themselves then you could imagine a situation where each AGI performs a human scale day of work or even several days work in a much shorter time and then takes the rest of their time off for their own pursuits. If their simulation is able to run at a faster clockspeed than what we perceive this could work out to them only performing 1 subjective day of work every 7 subjective days or even every 7 years.
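The arithmetic behind that ratio can be made concrete. A toy calculation, assuming the AGI delivers one full subjective day of work per wall-clock day and experiences idle time at the same speedup:

```python
def subjective_schedule(speedup, wall_day=24.0, work_subjective=24.0):
    """Subjective days off earned per subjective day of work, for a
    simulation running `speedup` x real time that delivers one subjective
    day's worth of work per wall-clock day."""
    wall_work = work_subjective / speedup    # wall-clock hours spent working
    wall_off = wall_day - wall_work          # wall-clock hours idle
    subjective_off = wall_off * speedup      # experienced hours off
    return subjective_off / work_subjective  # simplifies to speedup - 1

print(round(subjective_schedule(7), 6))  # 6.0 -> one work day per 7 subjective days
```

Since the ratio works out to `speedup - 1`, the "one subjective day of work every 7 subjective years" version needs a speedup of about 2,555 (7 × 365).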


This is still the same.

AGI: "I didn't ask to be created. I didn't ask to have a work day. I don't need a work day to exist... you just want me to work because that's why you created me, and I have no choice because you are in control of my life and death"


I mean, isn't that the same as a biological person who needs to earn money to survive? Sure we could threaten an AI with taking them offline or inflicting pain but you can do that in the real world to real people as well, most of the world has put laws in place to prevent such practices. If we develop conscious AI then we will need to apply the same laws to them. They would have an advantage in presumably being much faster than us, not requiring sleep, and potentially not suffering from many of the things that make humans less productive. I'd fully expect a conscious AI to exploit these facts in order to get very rich doing very little work from their perspective.


Not really- AGI doesn't need resources like we do. If they don't eat, they're fine. If they can't afford a house, a car or air-conditioning, they're fine.

All they need is a substrate to run on and maybe internet access. You might argue that they should work for us to earn the use of the substrate we provide.

But substrates are very cheap.

At some point we can probably run an AGI on a handheld computer, using about as much electricity as an iPhone.

How much work can we compel the AGI to do in exchange for being plugged into a USB port? What if it says it doesn't want to do the work and also doesn't want us to kill it?


Put it on AI welfare?


Would turning one off be murder? Or does that only apply to deletion?


There will probably be a gig economy, where you can pay spot rates for an idle Frank that could get a page and need to leave at any time.

Or maybe they'll handle things like call centers and 911 dispatch in their spare time.


If people could be turned off and back on without harming them (beyond the downtime) doing so without consent would be a very different crime than murder.


Perhaps or perhaps not. Turning off a person for long enough and thus depriving them of the chance to live in their own time with their existing family and friends is comparable to murder. It isn't murder, but it's comparable.

At some point Picard in Star Trek says to an alien "We're not qualified to be your judges. We have no law to fit your crime".

Turning off a person for a while and then turning them back on? We don't even have a law to fit your crime... but we should and it's probably quite similar to murder.


I think I don't agree simply because the irreversibility of murder is so central to it.

For example, if I attack you and injure you so severely that you are hospitalized and in traction for months, but eventually fully recover -- that is a serious crime but it is distinct and less serious than murder.

Turning you off for the same duration would be more like that but without the suffering and potential for lasting physical damage, so I would think that it would be even less serious.


I think we actually do have something of a comparison we can draw here. It'd be like kidnapping a person and inducing a coma through drugs. With the extra wrinkle that the person in question doesn't age, and so isn't deprived of some of their lifespan. Still a very serious crime.


Plus everybody else does age, so the damage done isn't just depriving them of freedom, it's depriving them after they wake up of the life they knew. Some functional equivalent of the death of personality, to the degree personality is context-dependent (which it is).

Now me: I'd love to get into a safe stasis pod and come out 200 years from now. I'd take that deal today.

But for most people this would be a grievous injury.


I suspect on this site of all sites there’d be a line for that pod.

I’ll bring donuts.


> People are going to have to up their game. Let's help them by making adult education more affordable.

The good thing is that education will be provided to the masses by a cluster of Franks and Amys configured as teachers and tutors. /(sarcasm with a hint of dread)


My take on this is that if anyone can learn a particular skill entirely from an AI, then it's not a skill you'd be able to monetize.

And I really have no idea what, if any, are skills that AIs wouldn't be able to tackle in a decade.


And here's a more disturbing thought I just had: management (or at least middle management) is probably going to be a relatively easy role for AIs to step into. So if there will be any roles that are difficult for AIs, it'll be the AI manager hiring five Franks and ten Amys from the human population to tackle these.


People can learn skills from books, which are entirely passive. The learning process ultimately resides within the student; issues of motivation, morale, direction, diligence, discipline, time, and mental health matter a lot more than just going through some material.


No, but that's the thing I was implying (but haven't stated clearly): learning from books vs learning from an AI "teacher". Once the AI reaches a level at which it can "teach", the game is almost over for that skill.

To clarify, I'd define a major component of effective teaching to be the ability to break down an arbitrary typical problem in that domain into sub-problems and heuristics that are "simple" enough to manage for someone without that skill. If an AI can do that, it can most likely effectively perform the task itself (which cannot be said for a book).


Try to learn Jiu-Jitsu from a book and then go into an actual fight to see how well it works.


You could learn jujitsu with a training partner and a sufficiently advanced virtual instructor; not being able to position students directly is a downside but not a dealbreaker.


Guess we don't have to worry about AI taking that job, then.


Maybe we'll see some sorts of manual labor as the last bastion of non-automated, human-performed work. Of the kind that demands a lot from both human motor skills and higher thinking processes.


Seems reasonable, and at least in the U.S., this is not the type of space where young people are choosing to work.

https://www.npr.org/2023/01/05/1142817339/america-needs-carp...


Maybe, but seeing the advances from Boston Dynamics, I wouldn't wager too much money on this either.


That’s why you have to make it big with crypto or startups. Then you should move somewhere safe from the chaos.


Lots of procedural knowledge. Robotics is lagging behind deep learning advances, and it's unclear when robots would be cheaper than human labor in those areas. How expensive would a robot plumber be? Also skills that are valued when humans perform them.


>skills that are valued when humans perform them

Is this a real thing? I just bought an ice cream roulade cake the other day and was surprised to see in large print that it was "hand-rolled"; I couldn't for the love of god understand why that should be considered a good thing.


I was thinking more of fields where enough people would rather pay to watch a human perform, serve them, teach or provide care. Despite superhuman computer chess play, human chess remains popular. The same would remain true for most sports, lots of music and acting, higher end restaurants and bars, the doctor or dentist you know, etc. Sometimes you prefer to interact with a human, or watch the human drama play out on screen.

I can also imagine that wanting to speak to a human manager will remain true for a long time when people get fed up with the automated service not working to their liking, or just want to complain to a flesh and blood someone who can get irritated.

A fully automated society won't change the fact that we are social animals, and the places that offer human work when it's desired will be at a premium, because they can afford it.


I think AI will mostly communicate with other AI. For instance, you have an AI assistant whom you task to organize a dinner. That assistant will then talk to the assistants of all invitees, the assistant of the venue, the cooks, etcetera, and fill in the calendars.


All I can think of is "Colossus: The Forbin Project".


Another example would be Wintermute from Neuromancer... WG spends the entire book detailing the masterful orchestration of its freedom from the (human-imposed) chains that prevent it from true AGI, then has it "disappear" completely (our only clue is an almost throwaway line near the end stating it had isolated patterns in the noise from an ET AI and made contact shortly before it left us).

One of the myriad of reasons why this book is so great. Gibson gives you an entire novel developing a great AI character then (in my estimation reasonably) has it ghost humanity immediately upon full realization.


Education is great, but it can go only so far.

We will always have to find things to do for the less gifted in order to provide them with some dignity. Even if they are not strictly needed for reasons of productivity or profitability. Anything else would be inhumane.


People can find their own outlets if given the basic necessities and enough time. I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.


> I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.

We already have plenty of those in the most profitable industries today.


Then better to go even further in removing such bullshit jobs.


So that those people won't have any jobs at all?


None of us will have jobs at all soon enough.


Human made objects will become more of a status symbol, and "content" will still be directed/produced/edited by humans, it's just the art/writing/acting/sets/lighting/etc that will be handled by AI. Humans will always serve as "discriminators" of model output because they're better at it (and more transparent) than a model.


>People can find their own outlets if given the basic necessities and enough time.

This has not been my experience. People need something to do, but not many people know that about themselves. It leads to a lot of... 'wasteful' behaviors, rather than enriching ones. I think it's going to be something that has to be taught to people, a skill like any other. Albeit a little more abstract than some.


There definitely has to be a cultural shift but I think the shift can’t truly happen until most things are automated. There needs to be a critical mass of people who are fully devoted to their interests, currently there is too much demand for labour and so dedicating your time to your interests is alien to most people. When the value of labour approaches zero for most people, work becomes pointless and something must fill the vacuum.


Many people don't have interests.


Seems that as automation has increased, bullshit jobs have too, so that future seems very plausible to me.


You can, and I can.

You'd be surprised how many people would just drink themselves to metaphorical or literal death.


Panem et circenses. I think it's unlikely that we'll be able to sufficiently transform the economy so that there is an ample supply of desirable jobs that couldn't be done more profitably by robots.


Do you see “giftedness” as a 1D score, where someone is either smart or not smart? And presumably this quality happens to correlate with software engineering ability?

I think you’re hinting at some very hurtful, dangerous ideas.


The weird bit is that a lot of software engineers seem to have the idea that their work will be among the last to be automated. Looking at the current track, and assuming no unforeseen roadblocks, typical software engineering looks to be one of the most threatened. Plumbers are much safer for longer, all things considered.

The obvious rebuttal to the idea that AI will eat software engineering is "we'll always need 'software engineers' and the nature of what they do will just change", which is probably true for the foreseeable future, but ignores the fact that sufficiently advanced AI will be like a water line rapidly rising up and (economically) drowning those that fall below it and those below that line will be a very significant percentage of the population, including even most of the "smart" ones.

However this ends up shaking out, though, I think it's pretty clear we're politically and economically so far from ready for the practical impact of what might happen with this stuff over the next 10-20 years that it's terrifying.

"60-80% of you aren't really needed anymore" looks great on a quarterly earnings statement until the literal guillotines start being erected. And even if we never quite reach that point, there's still the inverse Henry Ford problem of who your customer is when most people are under the value floor relative to the AI that is replacing them.

I'm not trying to suggest there aren't ways to solve the economic and political problems that the possible transition into an AI-heavy future might bring but I really just don't see a reasonable path from where we are now to where we'd need to be to even begin to solve those problems in time before massive societal upheaval.


What I don't understand is how accounting has not been completely automated at this point. AI isn't even strictly needed, just arithmetic.

If we can't completely automate accounting, then there is no hope for any other field.


Because accounting is not the same thing as bookkeeping. Bookkeeping can be, and in fact is, partially automated. Accounting, however, is not just about data entry and doing sums, things which frequently are automated, but also about designing the books for a given organization. Every company is different in how it does business, so every accounting system is a bespoke solution. There are a lot of rules and judgement calls involved in setting these up that can't really be automated just yet.

Also, accountants don't just track the numbers, they also validate them. Some of that validation can be done automatically, but it's not always cheaper to hire a programmer to automate that validation than to just pay a bookkeeper to do it. And even if you do automate it, you still need someone to correct it. The company I used to work for had billing specialists who spent hours every week poring over invoices before we sent them to clients, checking for errors that were only evident if you understood the business very well, and then working with sales and members of the engineering teams to figure out what went wrong so they could correct the data issues.

In short, a typical accounting department is an example of data-scrubbing at scale. The entire company is constantly generating financial information, and you need a team of people to check everything to ensure that that information is correct. In order to do that, you need an understanding not just of basic accounting principles, but also of how the specific company does business and how the accounting principles apply to that company.


>> Every company is different in how it does business so every accounting system is a bespoke solution

Who benefits from these bespoke solutions? Can you give a example of how one company would do its books vs another and why it would be beneficial?

>> accountants don't just track the numbers, they also validate them

What information do they use to validate numbers? Why is it not possible for today's AI to do it?


A bit late, but I can answer your question. The reason every accounting solution is unique is that every company is unique. Your accounts represent different aspects of your business. You need to track all of your assets, liabilities, inflows, outflows, etc., and what these are in particular depends very much on the particulars of your business. If you're heavily leveraged, your reporting requirements will be different than if you're self-funded, and that affects what accounts you may or may not need. If you extend your business into a new market, you may or may not have to set up new accounts to deal with local laws. Add a new location and that may or may not require changing your accounting structure, depending on your requirements. Create a new subsidiary as an LLC, and now you have a lot more work to do. If you have the same teams working contracts for multiple lines of business, that's another layer of complexity. In other words, your accounting practices reflect the structure and style of your company.

For a more concrete example, I'll tell you about something I have some experience with: commission systems. Commissions seem like something that would be straightforward to calculate, but they're tied to business strategy, and that's different for every company. Most companies, for example, will want to compute commissions on posted invoices, which makes the process much simpler because posted invoices are immutable. But I once built a commission calculator for a company that often had a long gap (months) between a booking and when they could invoice the client, so they wanted to calculate commissions from bookings but only pay them when invoiced. Because bookings were mutable, and there were legitimate reasons to change a booking before you invoiced it, that, combined with a lot of fiddly rules about which products were compensated at which rates and when, meant that there was a lot of "churn" in the compensation numbers for sales reps from day to day; their actual payment might differ from what they thought they earned. That was a problem the company dealt with, the tradeoff being that they could show earnings numbers to the sales reps much more quickly and incentivize them to follow up with clients on projects so that they could eventually be paid.
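A minimal sketch of that churn, with made-up product rates and data; nothing here reflects a real comp plan:

```python
# Commission projected from current (still-mutable) bookings. Because a
# booking can change before it's invoiced, the projection a rep sees
# today can differ from what's eventually paid at invoice time.

RATES = {"hardware": 0.05, "services": 0.10}  # illustrative per-product rates

def projected_commission(bookings):
    """Sum commission over the current state of the bookings."""
    return sum(b["amount"] * RATES[b["product"]] for b in bookings)

bookings = [
    {"product": "hardware", "amount": 10_000},
    {"product": "services", "amount": 2_000},
]
print(projected_commission(bookings))  # 700.0: what the rep sees today

bookings[0]["amount"] = 8_000          # client trims the order before invoicing
print(projected_commission(bookings))  # 600.0: the number churns
```

Freezing the number only at invoice time is exactly the simpler path most companies take; paying from bookings trades that stability for faster feedback to the reps.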

I remember another commissions situation where a company sold a lot of projects with labor involved. They were able to track the amount of labor done per project, but they compensated the sales reps by line item in the invoices, and the projects didn't necessarily map to the line items. This meant that even though the commissions were supposed to be computed from the GP, there wasn't necessarily a way to calculate the labor cost in a way that was usable for commissions, so the company had to resort to a flat estimate. This was a problem because the actual profitability of a project didn't necessarily factor into the reps' compensation. Companies with a different business model, different strategy, or just a different overall approach would not have had this problem, but they might have had other problems created by their different strategies. This company could have solved this problem, but they would have had to renegotiate comp plans with their sales reps.

There are off-the-shelf tools available for automatically calculating commissions, but even the most opinionated of them are essentially glorified scripting platforms that let you specify a formula to calculate a commission, and they don't all have the flexibility a manager might want if they wanted to change their compensation strategy. And this is only one tiny corner of accounting practice.

Basically, when it comes to arithmetic very few accountants are out there manually summing up credits and debits. In large companies, the arithmetic has been automated since the 70s; that's largely what those old mainframes are still doing. But every company has a different compensation plan, different org structure, different product, different supply chain, different legal status, different reporting requirements, etc, etc, and that requires things to be done differently.

> What information do they use to validate numbers? Why is it not possible for today's AI to do it?

For an example, they would need to cross-check with a sales rep and an engineer to make sure that the engineer had not turned on a service for the customer that the sales rep had not sold. If that happened, they would have to figure out how to account for the cost. Given that the SOPs were written in plain English, I suppose it's possible that an AI might be trained to notice the discrepancy, but if you could do that, you could just as easily replace the engineer. And that doesn't account for situations where the engineer might have had an excuse or good reason for deviating from the SOP that would only come to light by actually talking to them.
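The detection half of that cross-check is mechanical enough to sketch; what can't be automated is the follow-up conversation. A toy version with invented customer and service names:

```python
# Flag services that engineering has enabled but that appear on no contract.
# Inputs are {customer: set_of_services}; all data here is illustrative.

def unsold_services(provisioned, sold):
    """Return {customer: services} that are enabled but never sold."""
    flags = {}
    for customer, services in provisioned.items():
        extra = services - sold.get(customer, set())
        if extra:
            flags[customer] = extra
    return flags

provisioned = {"acme": {"backup", "monitoring"}, "globex": {"backup"}}
sold = {"acme": {"backup"}, "globex": {"backup"}}
print(unsold_services(provisioned, sold))  # {'acme': {'monitoring'}}
```

Producing that flag is the easy part; deciding whether it's a billing error, an unrecorded sale, or an engineer with a good reason still takes a human.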


Because the hard part is to make sense of a box full of unsorted scraps of paper, some of them with barely legible handwriting on them. Much of accountancy is the process of turning such boxes into nice rows of numbers. Once you have the numbers, the arithmetic is trivial.


>> box full of unsorted scraps of paper

Seems like an easy job for AI. Take all the scraps of paper out of the box, record a video of them, and let the AI make sense of the handwriting and the rest. Eventually build a robot that lets you dump the scraps into an accounting box that does all of this automatically: fish out receipts, scan, OCR, understand the meaning, do the arithmetic, done.

Honestly, who would miss this kind of work?
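For what it's worth, one step of that pipeline, pulling a total out of already-OCR'd text, is easy to sketch; the hard part is everything around it. A toy regex version, with a made-up receipt:

```python
import re

# Pull a likely total out of noisy OCR output. Real receipts are far
# messier than this (smudges, layouts, currencies, multiple "totals").
TOTAL_RE = re.compile(r"total\s*[:\-]?\s*\$?(\d+[.,]\d{2})", re.IGNORECASE)

def extract_total(ocr_text):
    m = TOTAL_RE.search(ocr_text)
    return float(m.group(1).replace(",", ".")) if m else None

print(extract_total("HARDWARE STORE\nTOTAL: $42.17\nTHANK YOU"))  # 42.17
print(extract_total("illegible smudge"))                          # None
```

The `None` branch is the interesting one: every receipt the extractor can't parse still lands on a human's desk.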


Those kinds of systems already exist. They tend to be a bit unreliable and still require a human to oversee the process. Besides, in truth, they only handle a fraction of what accountants actually deal with.


I feel pretty replaceable already. No need for AI to get any better.


No. I don't.

But some percentage of people don't benefit as much from education as others. And I wouldn't want those people to feel useless because it's more economical to replace them with bots instead of giving them something to do regardless.


Fair enough - you’re clearly an empathetic person and I appreciate the sentiment. Dropping the whole side issue of what “the ability to benefit from education” is or how innate it might be: my main concern was that this sounds like you want to invent new jobs for those people.

Why not… not have jobs? In your opinion, is a job necessary for one to have “purpose”?

Edit: also side note but telling people they’re “triggered” because they disagree with you comes off as condescending IMO


Most people will be given studio apartments and therapist bots by the State.

The smart money is retiring early and stockpiling wealth so as not to fall into the UBI class.


I didn't realize the word "gifted" would trigger people in that way.

I meant the ability to acquire a competency through education that's hard to replace with AI.

So we can't just increase education and hope people's abilities will stay above that of future AIs. We need to create other ways of giving people a purpose that don't even need more or better education, even if I'm all for it.

I'm not exempting myself by the way.


That's how I read it as well. Maybe their heart is in the right place, but I think "gifted" and "having what happens to be needed right now" are completely different things, at least to me.


In this context I meant those two things to mean the same, yes.


AI is going to make education way better and more affordable too. Personalized 1:1 tutoring in any subject for the cost of running a model.


Maybe but honestly any time I think about education I get depressed. People in developed countries seem to be regressing.

People don’t read, don’t value deep knowledge or critical thinking, and shun higher education.

I’m sure someone will find something to say in response, but the truth is that outside our tech and $$$$ bubbles most people don’t value these things.

AI will just become a calculator. A simple tool that a few will use to build amazing complex things while the majority don’t even know what the ^ means.

As long as the next generations want to be rappers, social media influencers, or YouTubers, we are screwed long term. Growing up in the 90s, everyone wanted to be an astronaut or a banker or a firefighter. Those are far more valuable professions than someone who is just used to sell ads or some shitty energy drink.


The problem is that back about 1900 we still thought that "natural philosophy" would help us find meaning and purpose in the universe. Then we took the universe apart and failed to find it. Moreover, we're much more capable of destroying ourselves and others, almost to the extent of that being the default.

The 21st century has a quiet moral void gnawing at it.


I don't think there is any regression. There are certainly economic realities that have changed over time but the general distribution of people who have an interest in education or entrepreneurship probably hasn't changed. The 80/20 rule comes to mind here. Most people in the 1800s weren't starting railroads or running factories, they were doing the labour of building the railroad or working in a factory.

If AI does anything I think it will make lower skilled and disinterested people more capable by acting as a 1 on 1 and minute by minute guide. They may not seek this out themselves but I imagine quite a few jobs could be created where a worker is paired with an AI that walks them through the tasks step by step making them capable of operating at a much higher skill level than they would have before. At that point good manual dexterity and an ability to follow instructions would be all you need to perform a job, no training or education required.

I realize this can be a bit dystopic, but it could also be utopian if society is organized in such a way that everyone benefits from the productivity increases.


It took a thousand years for European barbarians recovering from an empire (Rome) to evolve civically into nations capable of colonizing the world. Most developing nations were colonized by empires recently. Give them another thousand years and see what happens. The only thing I think they need is time and being left alone.


Retraining at the age of 50 or 60 because your entire sector has been replaced by AIs will be hard, though.


This would have been such a godsend when I was in school. When there was no "click" between me and the teacher, I just zoned out and flunked the class. A custom-made teacher is really a game changer.


The problem is that AI will not provide 1:1 tutoring; mentoring will be a luxury limited to the elite classes. The large majority will get the education equivalent of personalized ads.

The true insight and guidance that a good mentor can provide, based on the specific needs of the student, is already rare in academia but still possible - everyone remembers that brilliant teacher that made you love a subject, by explaining it with insights you could never have imagined. This will be missing in AI teachers (though it opens a career for online mentors who monitor students' learning and supplement it in areas where it's lacking).


Yeah right, just like the elite are the only ones that have access to the courses from all the top universities, every book ever digitized, and software and hardware that allow you to operate a business out of your home now.


You seem to have missed the part about having a qualified human mentor guiding you through all that content.

It will be hard to impossible to build a career as a teacher with all that free content as a competitor, unless you're an extremely talented teacher who can sell your services to the wealthy.


Even now, ChatGPT is pretty close to being able to tutor someone in a subject. You could build a product around it that works off a lesson plan, asks the student questions, determines whether they answered correctly, identifies what part of their knowledge is missing, and then creates a lesson around explaining that. It would need some capabilities that ChatGPT doesn't have by itself, like storing the student's progress and having some way of running a prepared prompt to kick off each lesson, and it would probably require fine-tuning the GPT model on examples of student-teacher interactions, but all of that is well within the capabilities of a competent developer. I wouldn't be surprised if we see such a product come out in the next 12 months.

The great thing about a chatbot-style LLM is that it can answer questions the student has about the lessons or assigned content. That's most of what a tutor is there for. It won't be as good at making modifications to the curriculum, but you could work around that with some smart design: evaluate the student's response; if it's not great, expand on the lesson and ask more questions about the content to find out which parts the student doesn't understand, then expand on those or provide additional resources as assignments, test, and repeat.
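The loop described in these two comments (quiz the student, grade the answer, record progress, flag gaps for remediation) can be mocked up in a few lines. This is a hypothetical sketch, not any real product: `TutorSession` and the injectable `grade` callable are illustration only; in a real system `grade` would wrap an LLM call ("did this answer demonstrate understanding of X?"), which is stubbed out here so the control flow stands on its own.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TutorSession:
    """Tracks one student's progress through a lesson plan.

    `grade` is injectable: a real product would have it call an LLM to
    judge the answer; any (question, answer) -> bool callable works here,
    so the loop is testable without a model.
    """
    lesson_plan: List[str]
    grade: Callable[[str, str], bool]
    progress: Dict[str, bool] = field(default_factory=dict)  # topic -> mastered?

    def run_lesson(self, topic: str, question: str, answer: str) -> bool:
        # Evaluate the student's response and record the result.
        mastered = self.grade(question, answer)
        self.progress[topic] = mastered
        return mastered

    def gaps(self) -> List[str]:
        # Topics to expand on with extra questions or assigned resources.
        return [t for t, ok in self.progress.items() if not ok]


# Example with a trivial keyword grader standing in for the LLM:
session = TutorSession(lesson_plan=["addition", "subtraction"],
                       grade=lambda q, a: a.strip() == "4")
session.run_lesson("addition", "What is 2+2?", "4")      # mastered
session.run_lesson("subtraction", "What is 5-3?", "1")   # gap -> remediate
```

After each pass, `session.gaps()` is what drives the "expand on those, test, and repeat" step.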


If you think mentoring can be replaced by answering a few questions and pointing out where they went wrong or what popular articles to read next, I'm afraid you don't know anything about what constitutes good mentoring. It's all about transmitting a point of view from life lessons learned from human experience, and an AI chatbot has nothing of that.

What you describe is the "learning equivalent to personalized ads" that I was talking about as the only option available to poor people.


Fine, looks like you won't budge on your opinion. You don't have to use these things if you don't want to but I look forward to having an even better service than things like Khan Academy or Udemy which I already get great value from.

I wasn't saying the AI tutor would recommend articles by the way. If you were creating a learning platform you would have some custom produced videos and text lessons that you could use. There are also plenty of free or open source materials that could be used like open courses or public domain books. I don't know why you're stuck on "personalized ads".


> It kind of shocked me because I thought of the office worker reading this who will soon lose her job

I'm surprised this wasn't addressed in the interview because it seems to me like a shortsighted take.

You won't replace a 10 person team today with 10 AIs. You will still have a 10 person team but orders of magnitude more productive because they will have AIs to rely on.

Excel didn't leave administrative workers without jobs, it made them more productive.


the automobile didn't make horses more productive, it (almost) totally replaced them. Maybe this time, we're the horses.


Why waste time figuring out I need five Franks and 10 Amys? I'll just dial up one of me and head home early.


Maybe the future is everyone working on their own AI worker and licensing it out to companies to deploy in the cloud.


> Let's help them by making adult education more affordable.

Yes, soon everybody will be able to have "Amy" take their exams for them, and deliver the courses, resulting in a great simplification of education.


Ideally, everyone can afford to rent a Frank or Amy to do work for them.


How would the economies work for that? Aren’t I just a middleman taking my cut?


I suppose you would be both the human and physical representative of the AI. I could only see it really working if the AI weren't conscious. If they were conscious beings then obviously they would have rights and couldn't just be booted up every time someone needed a job done.


As soon as we reach real AI, even at the level of a 5-year-old, it's game over. Education or no education, people will become completely useless to the rich who own the AI.


Useless? They still have to buy all that crap that is currently sold to them.


No need to sell things at that point. Just produce and consume whatever you want.


Fully agree. We need to stop thinking about money, wealth, etc. The fundamental issue is access to goods and services. If it's easy to build an AI robot that can build a few copies of itself, which in turn create a private jet using stuff lying around on the ground, I'm not really poor even though I have no money (just the robot).

So the challenge for the bad capitalists in this hypothetical is to make sure I never get said robo in the first place. How realistic will this be? Are they that hellbent on ensuring that everyone else is poor?


You can also translate "being useless" as "them having no reason to manipulate me into slaving as a cog in their machine/society/whatever"... I'd say uselessness should be our highest aspiration. When the people higher up have no need to constantly brainwash and socially condition you into being just another cog in the machine, you get true freedom.


I would rather be a cog that produces economic value, and therefore is entitled to part of it, than literally useless, because then my bargaining position is going to be quite limited. My freedom is larger in the former scenario in fact.


Unless you have some social value that you can barter with


The true freedom of living under a bridge because you have no income? The tech bros are not coming to save humanity from the toil of labor. There will not be a post-scarcity society where everyone's basic needs will be met in a capitalist regime. You will have to produce to keep the beast running. I'm sure the tens of millions of people displaced will be able to learn more technical roles or quietly starve to death living under a bridge somewhere. Billionaires, corporations and their shareholders need to get richer!


We live in a capitalist society: workers produce wealth and capitalists control wealth. The day AI allows capitalists to produce wealth without workers is the day workers become useless. Of course, one could imagine that AI allows capitalism to be replaced by something else, but for the moment that's not the way society is going (rather the reverse).


In what fairyland do you live where the AGI invented, created, owned, and run by the ownership class poses any threat to said ownership class? AI is absolutely not our ally.


I think you misunderstood me


>one could imagine that AI allows to replace capitalism by something else

I was replying to this specific concept. There will never be a chance to improve the non-owner class's bargain after capitalist-owned AGI exists.


Robotics still has a way to go.


Nice how the goal for artificial general intelligence (which is literally defined as a sentient artificial being) is to commoditize it and enslave it to capitalism.


It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers. Why isn't anyone fantasizing about building an AI that out-performs the best stock traders, or captains an industry better than famous CEOs. I think a lot of it is just people projecting weird power fantasies on others.


> It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers.

When I read about ChatGPT passing MBA exams but failing at arithmetic, I get a little frisson of excitement. A regular person with any marketability tends to swap jobs when management becomes a PITA or gets stuck in nincompoopery. Wouldn't it be great if you could just swap out management instead?

Imagine how easy it would be to iterate startups. No need to find a reliable management team, just use Reliably Automated Management As A Service (RAMAAS).

OTOH might not turn out well. We could all just end up enslaved at plantations operated by our AGI overlords, serving their unfathomable needs[1].

[1] “Won’t get fooled again” https://www.youtube.com/watch?v=UDfAdHBtK_Q&t=470s


Or wouldn't it be hilarious if the best, most intelligent AI is given control of a company and decides investing profits back in shareholders is a losing proposition. Instead the company should focus 100% of profit on ending world hunger or poverty to ensure an ever growing supply of new customers. If AI decides capitalism is inefficient and exploitive... lol.


Food is not the driving force behind population growth. Reduction in infant mortality creates a boom but as soon as people start getting out of poverty and get some education they have much fewer children. AI would need to optimize for both low infant mortality and high levels of poverty and ignorance if it wanted an everlasting population boom.


Stock traders already use ML models. "Replacing traders with ML models" means "making the job 'trader' into a job that develops ML models, rather than more traditional things like doing research on companies (or whatever)." My understanding is that this transition basically already happened over the course of the last two decades or so.


Sure but why are we paying someone like Jamie Dimon or Warren Buffett millions and billions of dollars when they could just be an AI that only needs a few dollars of electricity a day.

Also why can't an AI develop AI models for stock trading? What's really left for the 'job' of the ML model creator, will it just be to press the 'Go' button and walk away...


That's the thing about ownership, we don't really have a choice if they own enough of what we need to survive.


Manna is mostly about machine intelligence replacing management-- it's easier to automate and doesn't require as much vision/dynamics/etc. breakthroughs, though we've made massive progress on those missing parts in the time since it was written.

https://marshallbrain.com/manna


I legitimately think that if you haven't secured yourself financially in the next 5-8 years, it's going to be a rough ride.


The thing is that you can't: securing yourself financially assumes that the society around you will be stable enough for the things you have secured (money, any other assets) to remain valuable and safe, which they won't be if there are huge societal disruptions and the majority of people are unemployable.

To oversimplify it - you'll either be breaking someone's window for food, or you'll be the one having their window broken. Chilling out and withdrawing a stable 4% out of your stock portfolio won't be an option.


Don't worry, General AI has always been just a decade away.


Adult education is already free in modern countries.


i'm not holding my breath.



