I am surprised at the negativity from HN. Their clear goal is to build superintelligence. Listen to any of the interviews with Altman, Demis Hassabis, or Dario Amodei (Anthropic) on the purpose of this. They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.
Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
All of those things would put them out of business if realized and are just a PR smokescreen.
Have we not seen enough of these people to know their character? They're predators who, from all accounts, sacrifice every potentially meaningful personal relationship for money, long after they have more than most people could ever dream of. If we legalized gladiatorial blood sport and it became a billion-dollar business, they'd be doing that. If monkey torture porn were a billion-dollar business, they'd be doing that.
Whatever the promise of actual AI (and not just performative LLM garbage), if created they will lock the IP down so hard that most of the population will not be able to afford it. Rich people get Ozempic, poor people get body positivity.
The crazy thing about this "super AI" business is that at some point no one would buy it, because no one could afford it, because no one would have a job (spare me the UBI magic-money fantasy). I love the body positivity line. But if such a thing came to pass, I think something different would probably happen to the rich.
I continue to be amazed at how motivated some of us are to make such cruel, far-reaching and empty claims with regards to people of some popularity/notoriety.
Oh yes, Larry Ellison and Sam Altman are the real victims here!
I continue to be amazed at how desperate some of us are to live in Disney's Tomorrowland that we worship non-technical guys with lots of money who simply tell us that's what they're building, despite all actions to the contrary, sometimes baldfaced statements to the contrary (although always dressed up with faux-optimistic tones), and the negative anecdotes of pretty much anyone who gets close to them.
A lot of us became engineers because we were inspired by media, NASA, and the pretty pictures in Popular Science. And it sucks to realize that most if not all of that stuff isn't going to happen in our lifetimes, if at all. But you know what guarantees it won't happen? Guys like Sam Altman and Larry Ellison at the helm, and blind faith that just because they have money and speak passionately, they somehow share your interests.
Or are you that guy who asks the car salesman for advice on which car he should buy? I could forgive that a little more, because the car salesman hasn't personally gone on the record about how he plans to use his business to fuck you.
Do you want a superintelligence ruling over all humanity until the stars burn out controlled by these people?
The lesson of everything that has happened in tech over the past 20 years is that what tech can do and what tech will do are miles apart. Yes, AGI could give everyone a free therapist to maximize their human well-being and guide us to the stars. Just like social media could have brought humanity closer together and been an unprecedented tool for communication, understanding, and democracy. How'd that work out?
At some point, optimism becomes willfully blinding yourself to the terrible danger humanity is in right now. Of course founders paint the rosy version of their product's future. That's how PR works. They're lying - maybe to themselves, and definitely to you.
Just my opinion/observation, really, but I believe it's because people are implicitly entertaining the possibility that it is no longer about software. This announcement implicitly states that, long term, talent isn't the main advantage; hardware, compute, and, most importantly, the wealth and connections to access large sums of capital are. AI will give capital and the wealthy elite more of an advantage over human intelligence/ingenuity, which is not typically what most hacker/tech forums are about.
For example, it isn't about what you can do tinkering in your home/garage anymore, or what algorithm you can crack through your own ingenuity to create new use cases and possibilities, but about capital, relationships, hardware, and politics. A recent article that went around argued, as many now believe, that capital and wealth will matter more and make "talent" obsolete in the world of AI; the large figure in this announcement just adds weight to that hypothesis.
All this means the big get bigger. It isn't about startups, grinding, working hard, or being smarter, which means it isn't really meritocratic. This creates an uneven playing field quite different from previous software technology phases, where the gains, and access to them, were more distributed/democratized and mostly accessible to the talented and hard-working (e.g. the risk-taking startup entrepreneur with coding skills and a love of tech).
In some ways it is the opposite of the indie hacker stereotype, who ironically is probably one of the biggest losers in the new AI world. In the new world what matters is wealth/ownership of capital, relationships, politics, land, resources, and other physical/social assets. In the new AI world, scammers, PR people, salespeople, politicians, and the ultra-wealthy thrive, and nepotism/connections are the main advantage. You don't just see this in AI, btw (e.g. recent meme coins seen as a better path to wealth than working, thanks to a weak link to a power figure), but AI, like any tech, amplifies the capability of people with power, especially since, by definition, the powerful don't need to be smart or need other smart people to wield it, unlike tech in the past.
They needed smart people in the past; we may be approaching a world where the smart people make themselves as a whole redundant. I can understand why a place like this doesn't want that to succeed, even if the world's resources are being channeled to that end. Time will tell.
Exactly as you say. AI is imagined to be the wealthy nepotist's escape pod from an equal playing field and democratized access to information. Win-at-all-costs soulless predators who find infinite sacrifice somehow righteous love games like the ones that macro-scale AI creates.
The average person's utility from AI is marginal. But to a psychopath like Elon Musk who is interested in deceiving the internet about Twitter engagement or juicing his crypto scam, it's a necessary tool to create seas of fake personas.
>> Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
I joined in 2012, and have been reading since 2010 or so. The community has definitely changed since then, but the way I look at it is that it actually became more reasoned as the wide-eyed and naive teenagers/twenty-somethings of that era gained experience in life and work, learned how the world actually works, and perhaps even got burned a few times. As a result, today they approach these types of news with far more skepticism than their younger selves would. You might argue that the pendulum has swung too far toward the cynical end of the spectrum, but I think that's subjective.
I think (big assumption) most here are from that same period. Most are in their late 30s or 40s. Kids, busy life, etc. Not the young hacker mindset, but the responsible, maybe a bit stressed, person.
One time I bought a can of what I clearly thought was human food. Turns out it was just well dressed cat food.
> to unlimited energy, curing disease, farming innovations to feed billions,
Aw, they missed their favorite hobby horse: "the children." Then again, you might have to ask why even bother educating children if there are going to be "superintelligent" computers.
Anyway... all this stuff will then be free, right? Is someone going to "own" the superintelligent computer? That's an interesting question that gets entirely left out of our futurism fantasy.
I’m willing to believe. It’s probably the closest we’ve come to actually having a real life god. I’m going to get pushback on this but I’ve used o1 and it’s pretty mind blowing to me. I would say something 10x as intelligent with sensors to perceive the world and some sort of continuously running self optimization algorithm would essentially be a viable artificial intelligence.
> They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.
Look at who is president, or who is in charge of the biggest companies today. It is extremely clear that intelligence is not a part of the reason why they are there. And with all their power and money, these people have essentially zero concern for any of the topics you listed.
There is absolutely no reason to believe that if artificial superintelligence is ever created, all of a sudden the capitalist structure of society will get thrown away. The AIs will be put to work enriching the megalomaniacs, just like many of the most intelligent humans are.
Unlimited energy? No, I don't believe in this. I thought people on HN generally accepted science, not nonsense. A "superintelligence" that would... what? Destroy the middle class, destroy the economy, cause riots and civil wars? If it's even possible. Sounds great.
I mean, I had some faith in these things 15 years ago, when I was young and naive, and my heroes were too. But I've seen nearly all those heroes turn to the dark side. There's only so much faith you can have.
Not envious of multi-billionaire's companies gathering capital, IP, knowledge and infrastructure for huge scale modern day private Stasi apparatuses. Just bitter.
Once AGI is many times smarter than humans, the 'guiding' evaporates as foolish, irrational thinking. There is no way around the fact that when AGI acquires 10, 100, 1000 times human intelligence, we are suddenly completely powerless to change anything anymore.
AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1 times human intelligence.
The liftoff conditions literally have to be near perfect.
So the question is, can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.
> [...] 1000 times human intelligence, we are suddenly completely powerless [...] The liftoff conditions literally have to be near perfect.
I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief is usually based around the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine even further, say an additional 20%, leading to exponential growth. But that ignores diminishing returns; after obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly-better model is capable of.
I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.
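The diminishing-returns objection above can be made concrete with a toy model. This is purely illustrative: the 20% initial gain and the decay factor are assumed numbers, not claims about any real system. The point is just that if each self-improvement round captures only a fraction of the remaining low-hanging fruit, total capability converges to a ceiling instead of exploding.

```python
# Toy model of recursive self-improvement (illustrative only; the
# 20% initial gain and the decay factor are made-up parameters).

def total_capability(initial_gain: float, decay: float, iterations: int) -> float:
    """Capability multiplier after `iterations` rounds of self-improvement,
    where each round's relative gain shrinks by `decay` (decay=1.0 means
    the gain never shrinks)."""
    capability = 1.0
    gain = initial_gain
    for _ in range(iterations):
        capability *= 1.0 + gain
        gain *= decay
    return capability

# Constant 20% gains compound into an explosion (well over 1000x)...
print(total_capability(0.20, 1.0, 50))

# ...but if each round captures only half the remaining low-hanging
# fruit, total capability converges to roughly 1.5x the baseline.
print(total_capability(0.20, 0.5, 50))
```

Formally, the product of (1 + g·dᵏ) over all k converges whenever d < 1, which is the mathematical version of "diminishing returns prevent liftoff": the whole infinite sequence of self-improvements sums to a finite multiplier.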
> can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety
Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are the ones fear-mongering about such a doomsday. The goal seems to be regulatory capture and diverting attention away from more realistic issues, like use for employee surveillance[0].
The concern is the trend. As these systems become more intelligent, and as we hand over more and more capabilities beyond a text i/o, it could actually deactivate the oversight either technically or through social engineering.
The best exception is high quality protein powder. Additional protein consumption is extremely healthy for you, short and long term. But it's technically an ultra-processed food.
It's probably better to eat 4-5 chicken breasts per day instead of protein powder. But as far as I know, there hasn't been a measured difference.
Within some mental model, isolated protein powder is healthy because we generally treat high protein consumption as low-risk for most people and recognize that protein isolates can be very effective for professional and amateur athletes to consume a lot of while building muscle.
In no way does that imply that these protein isolates are "extremely healthy" for the general public or even for anyone in the long term. There's just not any data to say that specifically (it's too niche to perform those kinds of studies), and far too little reason to make that assumption with confidence.
(And it's almost certainly a terrible idea for most people to eat 4-5 chicken breasts per day -- or a comparable amount of protein isolate powder. Please remember that most people are not living a gym bro lifestyle and shouldn't be following gym bro nutritional advice in the first place.)
Protein isn't bad for you and 4-5 chicken breasts is around 120g a day, a healthy amount for an adult. By way of comparison, indigenous people where I live ate hundreds of grams a day in their traditional diets. I've run into this whole "don't eat too much protein, oh man you will die!" nonsense meme before and I wonder where it came from.
Bad math? Per USDA standards, a single boneless skinless chicken breast has ~54 grams of protein; so 4-5 would be ~200-250g of protein.
Because that's grossly outside the norm for the general public, you're not going to find any evidence to support the idea it's a healthy amount for a typical person to consume for a long period of time. And likewise, you'll find little evidence saying what negative consequences it might have, if any.
You're welcome to make whatever assumptions you want to in that case, but there's not a lot of ground for anyone to convince skeptics who disagree with them. It's tenuous assumptions all the way down.
Regardless, in the real world, that also represents 1200-1500 calories of absurdly (mind-numbingly) high-satiety food and quite a lot of slow digestive bulk. Most people simply wouldn't be able to consume that while also eating a varied diet that provides them with adequate long-term nutrition. So it's probably a pretty bad idea for them to dedicate themselves to it, unless, like some athletes and gym bros, they have the further discipline to also stuff themselves with all the other food they need while not eating so much that they become overweight. Do you know many people like that? I'm not sure I've met more than a handful in my lifetime.
Whatever the impact of the very high protein consumption itself in some abstract theoretical kind of way, which we're far from having evidence into understanding, it's just terrible advice for the general public because of the secondary effects we might reasonably expect in practice.
It's possible to die of "protein poisoning" but you really have to be in a survival situation with no sources of fat. Hunters and trappers are known to have died this way when only lean meat was available in far northern winters. Maybe this is where the idea came from.
> Moreover, epidemiological studies show that a high intake of animal protein, particularly red meat, which contains high levels of methionine and BCAAs, may be related to the promotion of age-related diseases. Therefore, a low animal protein diet, particularly a diet low in red meat, may provide health benefits.
Consuming a high amount of animal protein was pointed out by my urologist as one of the things I should avoid. Apparently it contributes to the development of kidney stones. So there's at least one way in which it's bad for you.
A high-protein diet can increase calcium and uric acid levels in the urine, raising the risk of urinary stones. I have experienced this, and got the cystoscopy to prove it. It sucked.
That seems to be an overly reductive view on the value of knowledge.
What practical purpose does studying ancient civilizations have? Why do we send expensive telescopes into space to study faraway galaxies and try to uncover mysteries of the big bang? When can we expect the results from number theory to lower the price of gas at the pump?
Knowing that mitochondria have their own DNA is knowledge. Knowing that they reproduce independently of their home cell is knowledge. Learning whether they evolved from a separate viable organism would be knowledge. Learning whether we can make them viable, or breed them separately, and use them in therapies -- all knowledge.
Whether they are "alive" or not is just the definition of a word.
Much of science is about defining words in ways that match the underlying general structure of the system being studied.
A subset of scientists want to come up with an operational definition of "What is life", which may or may not include things like viruses and mitochondria. As you say, it's mostly definitional, but by defining this, we can potentially make our understanding match up with the latent reality.
Most big tech companies did mass layoffs in the past few years. They are now fully recovered and posting record profits this week all with the same leadership. You are saying they are incompetent and someone else could have navigated this better?
I ask this myself. What is the point of both doing layoffs, and then also firing the CEO? That next person will probably be worse, won't have the learnings of the layoffs, and would probably increase chances of layoffs happening again.
Unpopular opinion, but I don't understand this type of comment. It sounds like people want executives to get some kind of punishment or pain as a consequence of laying people off.
But what would that even look like? A fine? That would probably make no practical difference, and would discourage them from making changes that need to be made. Fire them? Then you would probably get a worse decision maker in the driver seat going forward, who also didn't learn from experience of going through layoffs.
Layoffs are awful. They affect lives and families deeply. But all businesses don't go up and to the right forever. Reductions are a necessary part of running competitive companies.
> Iwata ran the Kyoto, Japan-based video game company [Nintendo] from 2002 until his death in 2015. To avoid layoffs, Iwata took a 50% pay cut to help pay for employee salaries, saying a fully-staffed Nintendo would have a better chance of rebounding. [0]
By taking that pay cut, how many employees could Nintendo keep that would otherwise have been fired? 5? 20? Certainly not in the hundreds or thousands. This just seems like virtue signalling.
Edit: to expand on my point re virtue signalling: the article states the CEO took a salary cut, not total compensation (and it doesn’t elaborate on the value of the cut). Salary is a small fraction of CEO total compensation - the bulk of which is stock based, and even in the event that stock grants were also cut, the CEO surely already had significant stock. Cutting a relatively small component of compensation in order to boost the stock price which disproportionately adds to the CEO’s personal wealth seems like virtue signalling to me. If the CEO said “shareholders be damned, morale and culture are all that matters in the long run, no layoffs etc etc” that would seem more meaningful.
Optics matter. When a CEO approves mass layoffs, and ALSO approves a 60% salary increase, I can only conclude that the CEO is nothing but self-interested. Rewarding yourself for firing a statistically significant percentage of your company SHOULD indicate failure, not success.
I feel like this would be the few times where virtue signaling is probably a benefit. It can make all the people in the company feel like the leader is on their side, and maybe make the employees less resentful and be more productive.
In 2021, the average S&P 500 CEO made 324 times what their company's average worker made [0]. The widest gap was at Amazon, with Bezos making over six thousand times as much as the average worker at his company. The report is from a few years ago, since then the gap has increased further. So yes, many companies could trivially retain hundreds if not thousands of workers simply by cutting the CEO's pay.
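The claim follows from simple arithmetic; here is a back-of-envelope sketch. The 324x multiple is the figure from the cited report, while the $100k average worker salary is a hypothetical round number chosen only to make the ratio concrete.

```python
# Back-of-envelope check (illustrative only: the 324x ratio is from the
# report cited above; the $100k worker salary is a made-up round number).

avg_worker_pay = 100_000
ceo_pay = 324 * avg_worker_pay  # average S&P 500 CEO multiple, 2021

# Cutting the CEO's pay down to the average frees 323 worker-salaries.
workers_retained = (ceo_pay - avg_worker_pay) // avg_worker_pay
print(workers_retained)  # 323
```

At the Amazon-style ~6000x extreme, the same calculation frees thousands of salaries, which is where the "hundreds if not thousands" figure comes from.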
Taking a massive pay cut isn't "virtue", it's a direct and measurable sacrifice. Just because you don't agree with the position or outcome doesn't make it "virtue", I'd wish people stop using that word for things they disagree with.
5 or 20 workers (and their families) who don't get jerked around for stuff way outside their control is an absolute win.
Right now, there is no actual downside for executives. Just less upside. Did they earn XX Million this year or XXX Million. Some tangible downside would be nice.
I mean, heck, why aren't they fired? And really, it's more the middle management where that'd make a huge difference. If bad performance led to actual shakeups in the entrenched middle management, we might actually see business practices change rather than continue on through the established fiefdoms and petty corporate politics.
If the CEO taking a pay cut materially reduces the size of what would otherwise be a large layoff on the basis of reduced cost savings, the CEO was way overpaid to start with. (And it probably doesn't change the number of people the firm can usefully employ under changed conditions, even if it provides cost savings, because the latter is based on whether employing them makes more money than it costs, to which external cost savings are mostly irrelevant.)
It should be added that layoffs are not commonplace in Japan. What you'll see instead are bonus cuts and salary reductions (at most 20 percent). To do US style layoffs you would need to show that the affected department was directly involved in a line of business the company has abandoned. If this isn't the case the employees stand a good chance of successfully suing the company in court. And then there's the reputational damage which would be considerable in a country where lifetime employment is valued.
In other words this kind of behavior wouldn't be viewed as all that surprising locally.
Mass layoffs -- unless the company is actually tanking and on its last leg, which isn't the case with any of the tech companies who have been doing this recently -- cause the share price to rise. CEO pay, or at least bonuses, are often tied to the share price.
To put salt in the wound, my understanding is the research on elective layoffs (ones that aren't forced by circumstances) indicates the outcomes are mixed at best, leaning negative.
So all this crap is just cargo-culting our current management paradigm, and/or execs cooperating to suppress wages and weaken labor, which had gotten a bit too uppity after Covid (that's one thing waves of layoffs like this do accomplish).
Probably: just leave out the full-of-shit sentence...
There's an Indonesian joke based on the word "responsibility" which is "tanggung jawab". "Tanggung" in this context means to carry the consequences, and "jawab" is to answer. One can say to a friend "We have to share this responsibility. I'll do the answering, and you'll do the carrying of the consequences."
I think a start would be to stop saying the phrase “I take full responsibility for this decision” if you aren’t also publicly taking a pay cut.
There’s no need for a CEO to bring themselves and their feelings into the conversation. It’s this weird attempt at empathy that fails, because the CEO isn’t making any sacrifices.
Just say that there are layoffs. Most rational people who have been in any business for more than a few years recognize them as an unfortunate part of the business cycle.
Agreed. But just say that. No need to pretend taking responsibility, which is defined as facing consequences when things go bad.
“As CEO, I’m truly sorry to those impacted. But I strongly believe that this change is what is needed now to make sure Dropbox can thrive in the future.”
Simple: they sure love to talk a big game about responsibilities and taking responsibilities. Until it's time to actually do it. For the good of the company of course (if the company is in such dire straits, as the most highly paid employee - and probably not the hardest working one - why don't you take a big pay cut? For the good of the company of course).
> It sounds like people want executives to get some kind of punishment or pain as a consequence of laying people off.
No - as a consequence of poorly planning cost controls. It's not that the people don't need to be laid off for the health of the company, but that the executives who made the bad decisions don't get the boot along with them, in favor of more cautious or frugal leaders.
Yeah I'm not sure people with compensation well into the millions get to go "haha, sorry, still figuring this out!"
Cool, can we pay you intern wages then?
[EDIT] To make this a bit more substantive: I think this is a sign both that we need to stop with this whole professional-managerial-class horse-shit, promote people who know how to do the actual work of the business, and reduce exec wages because the job's simply not all that damn special and the comp shouldn't be so high that only godlike-perfect performance could possibly justify it, because in truth nobody's that good at it.
Step one of this would be reducing M&A activity (hellooooo antitrust enforcement) and reining in the power of finance, since letting Wall Street suits put their HBS frat brothers in charge of everything is at the heart of why this stuff's how it is.
Preach! The job is straightforward - lot of these decisions they make are very clearly articulated by LLMs. We need more businesses, more competition, not bigger businesses.
I think it's an outlet for general frustration with the justification for high executive compensation (and returns on capital, for that matter) often being "they have much more responsibility and risk" when the actual downside is typically nonexistent, and even if there are consequences, the outcome is something like "LOL still richer than any ten of you combined will ever be", i.e. the "risk" is all fake.
Perhaps you've misinterpreted that statement? It is the board that takes on greater risk with executives as compared to other employees. The executives are given the keys to the kingdom, which means only an exceedingly small group of trusted individuals can be considered for the job. By the transitive properties of supply and demand, when supply is limited, price goes up.
The thing is, top executives have become a self-perpetuating class entrenched in corporate boards; that is the reason CEO compensation keeps skyrocketing, not supply and demand. And this entrenchment obviously also stifles accountability.
How does this self-perpetuating entrenchment not become a factor in what establishes the supply and demand, instead managing to exist as something off to the side?
Why do people understand this phrase "taking responsibility" as some kind of admitting of fault? Maybe I'm missing something, but it seems neutral and could even be referring to positive future outcomes for the company.
Because the context is that 20% of their employees are now unemployed. The tone of the letter is also "this is a hard but necessary choice" and not "this is great news for dropbox!"
> As CEO, I take full responsibility for this decision and the circumstances that led to it, and I’m truly sorry to those impacted by this change.
I think Houston being a cofounder means the financial picture looks different. Last year he made $1.5M in total comp. Which is a lot in absolute terms, but cutting his pay to zero would only allow keeping ~6 of those laid off. OTOH, he owns 25% of a $9B company. His salary is a rounding error compared to the performance of the stock.
Not to be an apologist, but I bet Drew really does feel responsible. He’s not professional management, and he always acted like Dropbox was his kid, at least from what I saw working there. I’m sure this feels shitty to him, though it’s obviously worse for the people laid off.
True - it means he could transfer, say, a billion dollars worth of stock to the affected employees and still be a billionaire. If he actually felt all that bad about it, of course.
Why can't the executive taking responsibility also take a pay cut and tighten their belt the way they expect the company to and the people who they've fired do?
Taking a hit on their giant salary seems completely reasonable to me. Having skin in the game makes people perform better in all sorts of cases, and I don't see why a CEO would be different.
Do you also object to sales reps or athletes making less money after a long period of performing poorly?
It's the grey goo of manager-speak. It rides both sides, but never truly picks one.
The other two options: blame employees(someone not you), or take some form of punishment as an individual.
I too do it sometimes, and I feel bad each time. I at least tell people what it is and that it's just the reality of the situation. I'm not gonna commit career suicide and jeopardize my family's livelihood but I also won't blame them. So I follow the meaningless middle road where the status quo mostly stays and we all at least learn from it.
Ah, you're only going to endanger the livelihood of your employees' families, most of whom have substantially less wealth than you. I bet the rest of your employees don't really care that you "feel bad."
I'm not responding in the context of layoffs nor have I fired people because of my mistakes luckily. That's a different story, and I was trying to explain to OP my take on it.
Yeah that opinion is unpopular because it's deeply stupid.
“you would probably get a worse decision maker in the driver seat going forward, who also didn't learn from experience of going through layoffs” or you would maybe get a better decision maker who didn't have to layoff — or hire unnecessary — workers in the first place? Ridiculous speculation.
Responsibility without consequences just means failing upward. That's why we have a gilded executive class of people who are barely qualified to run a local Taco Bell franchise.
> But what would that even look like? A fine? That would probably make no practical difference, and would discourage them from making changes that need to be made. Fire them? Then you would probably get a worse decision maker in the driver seat going forward, who also didn't learn from experience of going through layoffs.
Well they're the one asking to "take responsibility" here, the fact that they claim responsibility yet nothing happens is exactly why people don't like this phrasing.
Also who the fuck else can be responsible anyways ? The cook ? The guy who mops the fucking floor ?
People come in two flavors: conflict theorists and mistake theorists.
Conflict theorists think that every event is the result of a power struggle. So if someone gets hurt, someone must be punished for it.
Mistake theorists think that the world is complex and sometimes bad stuff happens even though most people operate with good intentions most of the time. Often, that means no punishment needs to be meted out.
To mistake theorists, conflict theorists look like ideological blood thirsty savages. To conflict theorists, mistake theorists look like enemy troops.
This is a gross oversimplification, but it always shocks me to see how many more conflict theorists there are on HN now than before. So many comments here blame the CEO or capitalism, most of them going off extremely scant information.
Yeah, taking a personal financial hit would go some way toward at least appearing to have tried to prevent it. Not sure about this particular CEO's salary, but I wouldn't be surprised if you could finance a few engineers' salaries by cutting the CEO's, without it even hurting very much.
Not saying it's enough, not saying that's the only way, but I find it peculiar that this seems to be unthinkable.
> It sounds like people want executives to get some kind of punishment or pain as a consequence of laying people off
Um... yeah. Yeah, that's pretty much exactly what I want.
I used to work at a Dairy Queen. One dude there had been working there for a couple years. Unfortunately, one shift his drawer came up a dollar short. Our cutoff was 5 cents - a nickel - over or under. He was immediately terminated, of course.
He cost the company one dollar. A Dairy Queen cashier making minimum wage is held to a higher standard of accountability.
Wild to think we are at the final moments in human history when most art, writing, video, entertainment, speeches were created by humans. Soon the majority will be AI-created, and it will stay that way.
I somewhat doubt that. In many cases, the humanity is the entire point. Racing is a great example. Both horses and cars are faster than humans and have been for a long time. We have horse races and car races. We still also have human races. They didn't get replaced. Fandom just split, though I'm sure a lot of people are entertained by all of them. People still go to see cover bands when the Chuck E. Cheese band playing over a karaoke track has been just as "good" for decades if all you care about is sounding like the real thing. We still watch human combat sports even though lions are better fighters and robots probably are, too.
People give the same example in chess that we still watch people playing chess.
However, that is beside the point. In both racing and chess we want to know and reward the best human. People don't listen to podcasts to reward anyone or to find the best podcaster. So listening to a random podcast that appeals to you is much easier than watching Stockfish vs AlphaZero. The things you are describing are activities undertaken by very few for a living and are very binary. Most things are not like that and are much more likely to be disrupted by LLM spam.
No more wild than people flying, television, computers, etc. For all of human history except the last century, things like art were time-consuming, difficult to spread, available to a select few, and altogether a different world. I would say that generative AI is in the same category as photography and digital art (e.g. Photoshop).