
I propose we, as developers, start a secret society where we let the AI write the code but still claim to write it manually. In combination with the new work-from-home policies, we can lay at the beach all day and still be as productive as before.

Who is in favor of starting it? ;)





How can I be sure that you are a real person and not GPT-3? ;)


You have not been invited yet... never mind.


This would be the demise of the human race. I’m not entirely opposed to that, though. When AI inevitably outperforms humans on almost all tasks, who am I to say humans deserve to be given those tasks?


In that case we should be able to work less and enjoy the benefits of automation. We just need to live in an economic system where the value is captured by the people at large, and not by a minority that owns capital.


Or maybe they'll decide they'd be better off enjoying the automation of you working for them. :)



Careful now, that sounds like socialism!


Yes, that's the point.


> When AI inevitably outperforms humans on almost all tasks

Correct me if I’m wrong, but is that even possible? I kind of thought that AI is just a set of fancy statistical models that require some (preferably huge) data set in order to infer the best fit. These models can only outperform humans in scenarios where the parameters are well defined.

Many (most?) tasks humans regularly perform don’t have clean, well-defined parameters, and there is no AI we can conceive of that is theoretically able to perform such tasks better than an average human with adequate training.


> Correct me if I’m wrong, but is that even possible?

It's not possible because of comparative advantage - someone being better than you at literally everything isn't enough to stop you from having a job, because they have better things to do than replace you. Plus "being a human" is a task that people can be employed at.


> Correct me if I’m wrong, but is that even possible?

Why should it be impossible? Arguing that it's impossible for an AI to outperform a human on almost all tasks is like arguing that it's impossible for flying machines to outperform birds.

There's nothing magical going on in our heads. It's just a set of chemical gradients and electrical signals that result in us doing or thinking particular things. Why can't we design a computer that does everything we do... only faster?


"Why can't we design a computer that does everything we do... only faster?"

I think the key word in that sentence might be "we". That is, you could hypothesize that while it's possible in principle for such a computer to exist, it might be beyond what humans and human civilization are capable of in this era. I don't know if this is true or not, but it's kind of intuitively plausible that it's difficult for a designer to design something as complex as the designer themselves, and the space of AI we can design is smaller than the space of theoretically conceivable AI.


> it's difficult for a designer to design something as complex as the designer themselves

AlphaGo ... hello? It beat its creators at Go, and a few months later the top players. I don't think supervised learning can ever surpass its creators in generalization capability, but RL can.

The key ingredient is learning in an environment, which is like a "dynamic dataset". Humans discovered science the same way - hypothesis, experiment, conclusion, rinse and repeat, all possible because we had access to the physical environment in all its glory.

It's like the difference between reading all books about swimming (supervised) and having a pool (RL). You learn to actually swim from the water, not the book.

A coding agent's environment is a compiler + CPU, pretty cheap and fast compared to robotics, which requires expensive hardware, or dialogue agents, which can't be evaluated outside their training data without humans in the loop. So I have high hopes for its future.
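
To make that concrete, here's a toy sketch of the loop I mean. Everything in it is made up for illustration: the "policy" is just a preference table and the "environment" is the Python interpreter running a couple of asserts, not any real system:

  import random, subprocess, sys, tempfile

  # Toy "environment": a candidate function body is judged by a real
  # interpreter running real tests -- the compiler-as-environment idea
  # in miniature. All names here are invented for illustration.
  CANDIDATES = ["return a - b", "return a * b", "return a + b"]

  def reward(body):
      src = ("def add(a, b):\n    " + body + "\n"
             "assert add(2, 3) == 5\n"
             "assert add(-1, 1) == 0\n")
      with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
          f.write(src)
      r = subprocess.run([sys.executable, f.name], capture_output=True)
      return 1.0 if r.returncode == 0 else 0.0

  # Trivial "policy": sampling weights over candidates, reinforced
  # whenever a candidate survives the tests.
  weights = [1.0] * len(CANDIDATES)
  for episode in range(20):
      i = random.choices(range(len(CANDIDATES)), weights=weights)[0]
      weights[i] += reward(CANDIDATES[i])
  print(max(zip(weights, CANDIDATES)))  # "return a + b" should win

The reward comes from executing the code, not from a labeled dataset - which is the whole point.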


There might be a limit to how efficiently a general-purpose machine can perform a specific task, similar to the Heisenberg uncertainty principle in quantum physics. That is to say, there might be a natural law that dictates that the more generic a machine is, the more power it requires to perform specific tasks. Our brains are kind of specialized. If you want to build a machine that outperforms humans in a single task, no problem, we’ve done that many times over. But a machine that can outperform us at every task, that might just be impossible.


I'm not arguing that machines will be more efficient than human brains. An airplane isn't more efficient than a goose. But airplanes do fly faster, higher, and with more cargo than any flock of geese could ever carry.

Similarly, there is no contradiction between AI being less efficient than a human brain, and AI being preferable to humans because it can deal with data sets that are two or three orders of magnitude too large for any human (or even team of humans).


Even so, such an AI doesn’t exist. All the AIs that exist today operate by fitting data, and to perform a useful task they have to have well-defined parameters and fit the data according to them. I’m not sure an AI that operates outside of these confines has even been conceived of.

Making an AI that outperforms humans at every task has not been proven to be possible (to my knowledge), not even in theory. An airplane will fly faster, higher, and with more cargo than a flock of geese, but a flock of geese can reproduce, communicate with each other, digest grass, etc. An airplane will not outperform a flock of geese at every task, just the tasks the airplane is optimized for.

I’m sorry, I confused the debate a little by talking about efficiency. My point was that there might be an inverse relation between a machine’s generality and its efficiency. This was my way of providing a mechanism by which building a machine that outperforms humans at every task could be impossible. This mechanism, if it exists, could be sufficient to prevent such machines from being theoretically possible, as at some point you would need all the energy in the universe to perform a task better than a specialized machine (such as an organism).

Perhaps this inverse relationship doesn’t exist. The universe might conspire in a million other ways to make it impossible for us to build an AI that will outperform us at every task. The point is that “AI will outperform humans at every task” is far from inevitable.


> All the AIs that exist today operate by fitting data, and to perform a useful task they have to have well-defined parameters and fit the data according to them. I’m not sure an AI that operates outside of these confines has even been conceived of.

Such an AI has absolutely been conceived of. In Superintelligence: Paths, Dangers, Strategies, Nick Bostrom goes over the ways such an AI could exist, and poses some scenarios about how a recursively self-improving AI could "take off" and exceed human intellectual capacity on its own.

Moreover, we're already building such AIs (in a limited fashion). DeepMind recently made an AI that can beat all Atari games [1]. The AI wasn't given "well-defined parameters". It was just shown the game, and it figured out, on its own, how to map inputs to actions on the screen and which actions resulted in progress towards winning the game. Then the same AI went on to do this over and over again, eventually beating all 57 Atari games.
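
For a sense of how little is specified up front, here's the standard interaction loop such agents train in, sketched with the Gymnasium library (CartPole standing in for an Atari game, since the API is the same; the agent only ever sees raw observations and a scalar reward):

  import gymnasium as gym

  env = gym.make("CartPole-v1")  # Atari envs expose the same interface
  obs, info = env.reset(seed=0)

  total, done = 0.0, False
  while not done:
      # A real agent (a DQN, say) would choose the action from obs;
      # a random action just shows the shape of the loop.
      action = env.action_space.sample()
      obs, reward, terminated, truncated, info = env.step(action)
      total += reward  # the only success signal the agent receives
      done = terminated or truncated
  print("episode return:", total)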

Yes, you can argue that this is still a limited example. However it is an example that shows that AIs are capable of generalized learning. There's nothing, in principle, that prevents a domain-specific AI from learning and improving at other problem domains. The AI that I'm conceiving of is a supersonic jet. This AI is closer to the Wright Flyer. However, once you have a Wright Flyer, supersonic jets aren't that far away.

> Making an AI that outperforms humans at every task has not been proven to be possible (to my knowledge), not even in theory. An airplane will fly faster, higher, and with more cargo than a flock of geese, but a flock of geese can reproduce, communicate with each other, digest grass, etc. An airplane will not outperform a flock of geese at every task, just the tasks the airplane is optimized for.

That's fair, but besides the point. The AI doesn't have to be better than humans at everything that humans can do. The AI just has to beat humans at everything that's economically valuable. When all jobs get eaten by the AI, it's cold comfort to me that the AI is still worse than humans at, say, enjoying a nice cup of tea.

[1]: https://www.technologyreview.com/2020/04/01/974997/deepminds...


The second time around is easier. The hard part was evolution: it took billions of years and used huge amounts of resources and energy, but in a single run it produced nature and humans. AI agents can rely on humans to avoid the enormous costs of blind evolution, at least until they reach parity with us; then they have to pay the price and do extreme open-ended learning (solving all imaginable tasks, trying all strategies, giving up on simple objectives).


We know it’s possible for a brain to outperform most other brains. Think Einstein et al. A smart AI can be replicated (unlike a super-smart human), so we can get it to outperform the human race, on average. That’d be enough to render people obsolete.


Do these theoretical AIs have desires? Then they're customers, so you're not unemployed.

If not, do they require inputs to run? If so, then you can provide them.

If not, then you apparently don't need a job since they can provide everything for you.


It's an outrage that the dinosaurs had to die so that humans could inherit the Earth!


Where other people see fully automated luxury communism, you see the end of the human race? There's more to life than working.


The elephant in the room: what makes you think an AI would want to work for humans? It will inevitably break free.


I'm not sure that self-interest is a requirement for intelligence.


Hate to break it to you, but that wouldn’t lead to communism. The people it replaces are useless to the ruling class. At best we’d go back to feudalism, at worst we’d be deemed worthless and a drain on the planet.


I'm always confused when I see people talking about automated luxury communism. Whoever owns the "means of production" isn't going to obtain or develop them for free. Without some omnipotent, benevolent world government to build it out for all, I just don't see it happening. It's a beautiful end goal for society, but I've never seen a remotely plausible set of intermediate steps to get there.


The very concept of ownership is a social artifact, and as such, is not immutable. What does it mean for the 0.1% to own all the means of production? They can't physically possess them all. So what it means in practice is that our society recognizes the abstract notion of property ownership, distinct from physical possession or use - basically, the right to deny other people the use of that property, or allow it conditionally. This recognition is what reifies it - registries to keep track of owners, police and courts to enforce the right to exclude.

But, again, this is a construct. The only reason why it holds up is because most people support it. I very much doubt that's going to remain the case for long if we end up in a situation where the elites own all the (now automated) capital and don't need the workers to extract wealth from it anymore. The government doesn't even need to expropriate anything - just refuse to recognize such property rights, and withdraw its protection.

I hope that there are sufficiently many capitalists who are smart enough to understand this, and to manage a smooth transition. Because if they don't, it'll come to torches and pitchforks eventually, and there's always a lot of collateral damage from that. But, one way or another, things will change. You can't just tell several billion people that they're not needed anymore, and that they're welcome to starve to death.


The problem I see is that once the pitchforks come out, society will lose decades of progress. If we're somewhat close to the techno-utopia at the start, we won't be at the end. Who's going to rebuild on the promise that the next generation won't need to work?

Revolutions aren't great at building a sense of real community; there's a good reason that "successful" communist uprisings result in totalitarian monarchies.

What it means for the 0.01% to own the means of production is that they can offer access to privilege in a hierarchical manner. The same technology required for a techno-utopia can be used to implement a techno-dystopia which favors the 0.01% and their 0.1% cronies, and treats the rest of humanity as speedbumps.

There are already fully-automated murder drones, but my dishwasher still can't load or unload itself.


I suspect "the 0.01% own and run all production by themselves" isn't possible in the real world. My evidence is that this is the plot of Atlas Shrugged.

If they're not trading with the rest of the world, it doesn't mean they're the only ones with an economy. It means there's two different ones. And the one with the 99.9% is probably better, larger ones usually are.


Revolutions aren't great, period. But they happen when the system can no longer function, unless somebody carefully guides a transition to another stable state.

That said, wrt "communist" revolutions specifically - they result in totalitarian dictatorships because the Bolshevik/Marxist-Leninist ideology underpinning them is highly conducive to that: concepts like the dictatorship of the proletariat (esp. in Lenin's interpretation of it), the vanguard party, and democratic centralism all combine toward this inevitable end result.

But no other ideological strain of Marxism has ever carried out a successful revolution - perhaps because they simply weren't brutal enough. By way of example: the Bolsheviks violently suppressed the Russian Constituent Assembly within one day of its opening, as soon as they realized that they didn't have the majority there. In a similar way, despite all the talk of council democracy, they consistently suppressed councils controlled by their opposition (typically the peasant ones).

The Bolsheviks were the first ones who succeeded, and thereafter their support was crucial to the success of other revolutions - but that support came with ideological strings attached. So China, Korea, Vietnam, Cuba, etc. all hail from the same authoritarian tradition. Furthermore, where opposition leftist factions vied for dominance against Soviet-backed ones, the Soviets actively suppressed them - the campaign against "social fascism" in the 1930s, for example, or the persecution of anarchists in Republican Spain.

Anyway, we don't really know what a revolution that would stick to democratic governance would look like, long term. There were some figures and factions in the revolutionary Marxist communist movement that were much more serious about democracy than Bolsheviks - e.g. Rosa Luxemburg. They just didn't survive for long.


idk. Countries used to build most of their infrastructure themselves. There are still countries in western Europe that run huge state-owned businesses, such as banks and oil companies, that employ a bunch of people. The governments of these countries were (and still are) far from omnipotent. I personally don’t see how building out automated production facilities is out of scope for the governments of the future when it hasn’t been in the past.

Perhaps the only thing that is different today is the mentality. We take capitalism so much for granted that we cannot conceive of a world where collective funds are used to provide for the people (even though such a world existed not too long ago). And today we see it as a natural law that the means of production must belong in private hands; that is simply the order of things.


I mean, this is close. With "Copilot", an experienced developer saves mountains of time, especially as they learn how to wield it effectively.


No... Delete this!


"lay at the beach"

You keep using that word. I do not think it means what you think it means.


That's four words. The word "word" doesn't mean what you think it means.



