The Software Revolution (samaltman.com)
400 points by ggonweb on Feb 16, 2015 | 388 comments



I disagree with this:

"The previous one, the industrial revolution, created lots of jobs"

That industrial revolution caused massive unemployment in India, in the Ottoman empire, in China... almost everywhere that had once been a famous textile center. The idea that the industrial revolution did not cause unemployment is an illusion caused by looking at only one nation-state. But Britain was the winner of the early industrial revolution, and it was able to export its unemployment. And because of this, a breathtaking gap opened up between wages in the West and wages everywhere else. The so-called Third World was summoned into existence. You can get some sense of this by reading Fernand Braudel's work, "The Perspective of the World":

http://www.amazon.com/gp/product/0520081161/ref=pd_lpo_sbs_d...

The software revolution will be similar with some nations winning and many others losing.


I'm fairly convinced that eventually there will be two and only two choices: universal income and a shortened work week, or such extreme wealth divisions that stability demands a totalitarian police state resembling the worst sort of comic book cyberpunk dystopia. Age of abundance or feudal hellhole. Your pick. There simply will not be enough economically viable work to sustain any system that demands labor to maintain cash flows. Automation will be too efficient, programmable, adaptable.

... I suppose there is a third option: an anti-technology crusade that bans automation to restore employment. But a make-work economy sucks, and is not likely to succeed in the long term.


That's the contemporary situation. We have something close to a police state with mass surveillance, militarised police and heavy restrictions on protesting. At the same time we're also seeing more interest in Basic Income and I expect there will be more of a political push for that in the near future. All of the above are about trying to keep the lid on a situation where technology is facilitating greater inequality. In the current situation oligarchs with large capital reserves and the ability to purchase legislation can continue to exfiltrate much of the wealth created by the population.


> I suppose there is a third option: an anti-technology crusade that bans automation to restore employment. But a make-work economy sucks, and is not likely to succeed in the long term.

That is, unless we start harvesting magic dust from some desert planet with really big omnivorous worms.


> Age of abundance or feudal hellhole.

Why not both? We certainly have all kinds of societies on earth today. What makes you think we won't have areas that are highly progressive but also areas that are extremely sadistic in the treatment of their people?


An interesting thought experiment is to compare their economic output. If the feudal hellhole is more efficient, eventually only hellholes will remain.


Fortunately, feudal hellholes seem less productive, especially in an age where productivity depends more on creativity than on basic labour.


"If feudal hellhole is more efficient eventually only hellholes will remain."

Probably true. Evolution is amoral. If suffering has higher fitness, suffering wins. Just look at the ordeal that is human birth and infancy, for both mother and baby.


Considering that it is natural for humans to lust after power, I can tell you what I think is the most likely outcome.

I think we're already seeing it form today, and its infancy doesn't detract from its long-term danger.


Not sure why you are being downvoted, I think that you are absolutely right.


Another option is self-enforced population decline (already happening in many developed countries) - for now it's artificially offset by immigration from less-developed countries, but this will stop at some point.


While I see legitimate reasons to reduce population, that won't solve this problem. In fact it might make it worse.

While the future economy will be heavily automated, it will still be an economy. Declining demand will still do ugly things to it. It's likely that a drop in demand will do more to squeeze out human labor than machine labor, since the former is more expensive and in a declining economy everyone tries to squeeze margin.


It's not so much that the industrial revolution created jobs on net -- like you, I'm not sure it did (even domestically). It's that, whatever jobs it created, it also created the modern labor market.

Before it happened you had a much more distributed economy with many much smaller centers of production (many at the household level or close) in agriculture, trades, and craft manufacturing.

Afterwards, in any domestic industry where economy of scale was a competitive advantage, you had a small number of much larger centralized producers, almost certainly with many fewer jobs in actual production (consolidation corresponding to efficiencies of scale). If there was any net gain over time, it would have been in new products being made and some new support positions. It is not clear to me there was ever a trend toward a net gain over timescales longer than two decades or so.

So by and large it moved us to an economy with a profusion of increasingly competitively manufactured products to buy, in which selling labor itself, rather than direct production, is the de facto means of subsistence.

I think you're correct that there's always been some externalization of unemployment and a lot of the visibility depended on industry specifics. This isn't new. But what is new is that software gives us essentially a higher order of automation that means (a) it sure looks like we're automating out old jobs even faster than we're creating new ones and (b) with more unemployment we get fewer places to externalize it. :/


I'm not sure that's fair. Laboring on construction projects, joining armies, digging minerals out of holes, seasonal work on farms, serving on ships, and so on have been ways for people to earn a living for millennia, selling their labor, not their produce. Having the capital to ply a trade or craft - money to buy raw materials, and tools to add value to them - would have been beyond the reach of most people for a lot of history. There's always been a labor market.

I guess what you mean is that the industrial revolution massively increased the amount of capital you needed to be able to produce goods competitively. Buying a set of tools and some raw materials to make things by hand stopped being a viable route out of the labor market into 'entrepreneurship' when mass machine-produced goods could be supplied more cheaply. Your option was to sell your labor to someone who owned some big machines, much like your ancestors sold theirs to people who owned a patch of fertile land, or a big pile of wood they wanted making into a ship.

So in a sense it created the modern labor market in that it locked a lot more people into it, and obviously diversified the kinds of jobs that existed. On the other hand, it was, as you say, a massive productivity multiplier which meant we were able to create more and more wonderful and complex goods for, in real terms, less and less cost, to the point where nowadays by exchanging your labor for cash, you can soon have enough money to afford a box of electronics that fits in your pocket and allows you to access all the world's information.


Every revolution has its downsides. But I think this revolution might just give us a break from boring tasks and create opportunities for creative exploration, something we currently have in very small quantity.


> And because of this, a breathtaking gap opened up between wages in the West and wages everywhere else. The so-called Third World was summoned into existence.

Just picking a nit, but the Third World came into existence during the Cold War, long after the industrial revolution. Third world countries are countries that were non-aligned or neutral during the Cold War.

http://www.wikiwand.com/en/Third_World


I like how Sweden is a third-world country according to that definition.

But seriously, I don't think that's how people use this term any more. It's just a synonym for developing country by now.

I also like this definition from Urban Dictionary: "Any country that owes rolls of money to the IMF and the World Bank."


Yeah. I usually don't pick this particular nit in common speech, but when you talk historically, it sounds really weird to say that the Industrial Revolution gave rise to the Third World.


>And because of this, a breathtaking gap opened up between wages in the West and wages everywhere else.

This was because of the West getting richer, not everyone else getting poorer. Had the 'third world' countries had the right economic structures in place they too would have gotten richer like the West, much as East Asia has caught up to the West's economic development since economic and social reforms after WW2.


This is factually incorrect. The extensive documentation of the era makes it clear that the East India Company and British rule made it a point to destroy native industry and vacuum up raw materials, on top of introducing laws and acts which penalized you for being a particular race.

All of those economic structures were destroyed and perverted by the conquerors.


I really think that's too simplistic a view. As has been pointed out, plenty of countries that were not colonised lagged behind in industrial development. Many of those that were colonies (and I do agree colonisation had a pernicious effect) have been independent for several generations now.

As counter-examples, Japan went from being extraordinarily backward to being an industrial powerhouse in a single generation, in time to go toe-to-toe with Russia and the USA in the early 20th century, and walk all over Korea and China. For that matter, once it got its act together China has surged forward, growing GDP by a factor of 20 in only 25 years. So why did Japan go through this revolution almost 100 years earlier than China? Why has India lagged behind industrially, while surging ahead in terms of IT?

There are a host of cultural factors at play here. Japan responded to US colonial interference by deciding to modernise to prevent that ever happening again. China responded to the humiliation of colonial aggression by tearing itself apart for 100 years. Lebanon, Jordan and Egypt have gone nowhere since the end of colonialism, while Israel is a technological powerhouse. Turkey always seems so close to finally turning a corner and becoming an advanced modern state, but never quite manages to get it done. 'Because colonialism' just isn't enough of an explanation for any of this.


I recognize this; my short comment is a rebuttal to the theory that a difference in economic structures would have led to a difference in outcome.

To an extent, it seems that this was the result of disruptive technologies being used by first movers to build a monopolistic/exploitative position. Post-independence, both India and China were caught up in the big ideological questions of that era and have been fine-tuning their models ever since. China is the largest economy in the world today, and India has essentially taken its place in the race from 1990 onwards.

For me, the cultural-factor argument has lost some of its force of late. I used to assume that pre-colonial India showed little activity or inclination to learn, but it turns out that there used to be business families which would have explored and harnessed the new tech. This doesn't mean cultural factors didn't play a role, of course.


Yeah, cultural, political, social: there are a lot of factors that affect economic outcomes. Ironically, it may well be India's tendency towards a socialist command economy, privileged special-interest economics (subsidies), etc. that is holding it back.

The lack of a need for an electoral mandate has freed the Chinese government from the need to actually implement socialist policies (or in fact any particular policies; they can do what they like, regardless of what the people think about it, using their communism 'with Chinese characteristics' get-out clause).


The fact that the East India Company was a bunch of jerks doesn't change the fact that most of the world became richer, just at a slower rate. The results of a quick Google search:

http://www.krusekronicle.com/kruse_kronicle/2008/03/charting...

https://wanhasni.wordpress.com/2012/11/17/prosperity-of-nati...


I took a look at those articles and they don't support the position with sufficient authority or evidence.

Consider two charts from those two blogs: the chart from blog 2 shows the contribution of India and China to global GDP.

Chart 1 from blog 1 shows how growth spiked from 1820 onwards.

Even if we take the data at face value, 1857 is the year of the Indian revolt, when the British East India Company gave up control of India, handing it over to the Crown. So at this juncture wealth transfer to the West had already begun, from India at the very least.

Had that wealth transfer not occurred, I believe we would have had even better outcomes than what we see today.

Britain and the West were able to take advantage of new technologies and increasingly build monopolies, while abusing government regulation and diktat to pauperize competition, and to disenfranchise and enslave huge swathes of people.

It was known policy to convert subject colonies into markets for cheap goods, after sucking out resources via slavery or low pay. Competing businesses or crafts were also removed from the picture where possible, and local rulers and governments were regularly sacked or taxed.

Logically we know that a fair market creates more wealth for all who participate. Given that this was a perversion of those ideals, I suggest that the British were at the right place at the right time, and ensured a standard of living for their countrymen at the cost of almost every country they touched.

If, on the other hand, countries had been able to compete and import technologies - which many businessmen of the time did try to do - it's certain that this would have driven even more competition and innovation globally.


I didn't say wealth wouldn't have been higher if colonization didn't occur, though I'm definitely unconvinced of that. I merely said that growth did occur everywhere.

> Logically we know that a fair market creates more wealth for all who participate. Given that this was a perversion of these ideals...

We don't know this. The relevant counterfactual is not a perfect free market, but whatever the assorted kings of India would have imposed. I don't know enough about the history to comment on their likely economic policies.


Sorry, I have information which may be useful: I'm aware that there were merchant communities in the kingdoms that would become India who were looking into looms and related technologies.

Since Britain was also monarchical, the comparison is, relatively speaking, balanced.


If I am to be downvoted, I'd appreciate an explanation as to why I'm wrong or why you disagree.


Wrong. Neither China nor Russia was ever colonized, yet even today they lag behind the West.


Did you sleep through the entire 20th century? The one where the US was in a spending match with Russia. China collapsed under the weight of corruption. USSR collapsed under the weight of corruption. Depending on who you ask, the US is collapsing under the weight of {corruption|bureaucracy|congress}.


My advice: if you start learning about history don't stop.


Those who cannot remember the past are condemned to repeat it. Those who can remember the past are condemned to watch everyone else repeat it.


"The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it."

That's not that simple. The industrial revolution initially destroyed a lot of jobs because it replaced human labor with steam machines.

It created new qualified jobs, it is true, because these new intricate machines required advanced skills to maintain - reminiscent of today's software engineering jobs - but it destroyed a lot of jobs in agriculture and textiles, because you could produce more with a fraction of the labor.

To the point that people would riot and destroy steam machines, accusing them of stealing jobs (see the Luddites).

Let's not forget that what is typically called the "industrial revolution" spans over a century, that it took a while for it to create a lot of new jobs (roughly the second half of the 19th century), and that those new jobs were initially very poorly paid.


The problem is that it was relatively easy to master the technologies of the industrial revolution.

It isn't the same for the software revolution.

You can take virtually any adult from any part of the world and teach him how to work in a factory within a few months of study.

You can't take any random adult and teach him how to code. It requires way higher intellect and time to master coding.

I think of myself as relatively smart but I've struggled with learning how to code. The learning curve is steep, even for someone as familiar with technology as I am.

I'm sure I could learn how to operate a lathe at a factory within weeks. But I'm not sure a lathe operator at a factory could learn how to code within the same time frame (if at all).

So no, it isn't an apples-to-apples comparison. The software revolution will leave a huge group of people permanently unemployable.

The 50-year-old weaver in 1800 Manchester could learn how to operate a machine at a mill - it is largely a mechanical process, after all.

But the 50-year-old truck driver in 2015 isn't going to learn how to write code - not within a reasonable time frame, anyway.


Your arrogance is showing. Sure, you could learn how to operate a lathe on an assembly line in a matter of weeks; in just the same way, nearly any adult of average education could learn in a matter of weeks to write WordPress templates, cobble together SQL queries, and so on. To become a master machinist, the kind who can do anything with a lathe that a lathe can be asked to do? Years of dedication and expertise.


In the scope of factories in the industrial revolution, I think he's clearly referring to low-skill assembly line work, which made up the vast majority of jobs. You're referring to master craftsmen. The problem with software is that there's no known way to create assembly-line-style software with lots of low-skill labor. It can only be made by at least semi-skilled craftsmen. Being able to cobble together an SQL query is nice, but what kind of useful product could you put out with a line of 50 such low-skill people? None that I know of. Thus we're stuck with far fewer, higher-skill jobs.


Yeah, but the software revolution doesn't require everyone to be coders, just as the industrial revolution didn't require everyone to be lathe designers.

There are definitely non-coding jobs being created by the software revolution. Cobbling together SQL queries, bashing spreadsheets together, creating graphs, cleaning up data for further processing. These are exactly the kind of things that low(ish) skilled people will be doing in the future.


The low-skill clerical work is exactly the sort of job I work to eliminate every single day. Only the most skilled in most departments can cobble together a SQL query, do anything useful with Excel, etc. The vast majority of office workers today cannot do what you're asking of them. They work the "assembly line" jobs in the office. Those people are needed less and less.


> Cobbling together SQL queries, bashing spreadsheets together, creating graphs, cleaning up data for further processing.

It's my job to make it so people don't have to do any of these things.


The problem with this is that those cobbled-together SQL queries are precisely the kinds of things good programmers either replace or automate away; you can automate away an assembly-line job, but not as cheaply, and not as easily.

There might not be any such thing as a 100x programmer or whatever, but the value proposition for replacing a few sub-par programmers with one better programmer and a framework is a lot clearer than the one for replacing a handful of assembly line workers with a more complicated and more expensive piece of machinery.


Hmm, I don’t think so. If you have a dozen similar queries, then you can factor out the similarity (say, into a view, or a Ruby subroutine that generates the SQL). If you have a thousand that vary in a lot of different ways, a few subroutines isn’t enough; you need a DSL to factor out the similarity, aka “automate away” the queries.

But then you need someone to write down the idiosyncratic bits of each query, the thing that makes it different from the other thousand, in your DSL. For a lot of systems, the right DSL is in fact SQL itself, but even if it’s not, you still need people to write in it.

In short, nonprogrammers writing cobbled-together SQL queries are the result of automating away the non-idiosyncratic aspects of the queries.
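To make that concrete, here is a minimal sketch of the factoring described above (Python; the table and column names are hypothetical, for illustration only):

  def sales_report(group_by, where="1=1"):
      # The shared query skeleton lives in one place; callers supply
      # only the idiosyncratic bits (grouping column, extra filter).
      return (
          "SELECT {g}, SUM(amount) AS total "
          "FROM sales WHERE {w} "
          "GROUP BY {g} ORDER BY total DESC"
      ).format(g=group_by, w=where)

  # Two formerly hand-cobbled queries become two one-liners:
  print(sales_report("region"))
  print(sales_report("product_id", where="year = 2014"))

The DSL here is trivially thin, but the division of labor is the point: the programmer automates the common structure, and writing down the per-query idiosyncrasies stays a human job.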


Software (in the Turing sense), by removing material complexity, accelerated automation. Easy things can be automated, medium-complexity things too, leaving only the NP-complete stuff to be done by hand. Jobs are an endangered species.


I have coworkers whose official job titles involve programming, and they are bad at it. I won't speak to my own skill, but it's not that simple. If you continuously strive to advance, it means constant learning, and most people I see really can't or won't learn like that.


>You can't take any random adult and teach him how to code.

Yes, but that's because the art of application programming is (mostly) stuck in the "alchemy" era of science. There is precious little systemization of knowledge, processes, and names. All these frameworks are actually memes competing for mind-share to be the answer to this need. Of course, having one periodic table for software would be better than having 10 competing ones.

Is the inherent complexity of the ordinary programming task (building, deploying, and monitoring reactive FSMs mediating user communication) roughly the same as chemistry's? Too early to tell, but I think not.


But it seems unlikely that reducing the complexity will lead to more jobs - just better software at automating the task of making software, so that fewer people are needed to complete the job.


Industrial jobs required a great deal of skill until they were broken down into easy tasks and each worker learned only one task. The work of imagining, planning, and creating the Industrial Revolution occupied the greatest minds of that time.

I find it difficult to imagine "assembly lines" for software, creating coding jobs within the reach of a 50 year old truck driver, but as the tools improve, who knows.


>I find it difficult to imagine "assembly lines" for software, creating coding jobs within the reach of a 50 year old truck driver, but as the tools improve, who knows.

I see Mechanical Turk as pretty close to this idea.


> I think of myself as relatively smart but I've struggled with learning how to code.

That just speaks of the (still) poor state of tools and frameworks.

In the early decades of the last century, driving a vehicle necessitated detailed knowledge of the internal combustion engine and car parts. You can imagine someone from that era writing "I think of myself as relatively smart but I've struggled with learning how to drive" after yet another lecture on carburetors, crankshafts, and bearing boxes.


The 50-year-old weaver in 1800 Manchester was rare or dead. Life expectancy at birth was only around 48.5 years even in 1900-1910 in England and Wales.

http://www.osfi-bsif.gc.ca/Eng/Docs/DEIP_Gallop.pdf

Relevance? The industrial revolution enabled increases in life expectancy. At this moment, the software revolution is doing something similar.


> You can't take any random adult and teach him how to code. It requires way higher intellect and time to master coding.

True, but that's not the only job that will be in high demand in this here "software revolution", nor is programming the best comparison to factory workers in the first place.

In the short term at least, I figure we'll see high demand for help desk and field repair technicians - two roles that are ultimately necessary for software to revolutionize anything. Eventually, the global population will likely become increasingly technologically literate and less dependent on human interaction to request support; should this occur, employment will shift away from help desk, but I expect field repair technicians will continue to be in high demand for a very long time.

That is, until sufficiently-advanced synthetic intelligences become reality, but at that point we're all screwed, so it's kind of a moot point worrying about that :)


One of the most notable differences between the Industrial and Information revolutions is that wealth from the former was underwritten by a surge in energy. Wealth from the latter depends on extracting greater value from (more or less) existing supplies. In other words, it's largely about the sudden redistribution of output, as opposed to a change in the underlying supply.

(Relatively) enlightened governing philosophies and new economic theories aside, the Industrial Revolution also involved a monumental uptick in the amount of raw energy humans had to work with. Until this point, the latent economic power contained within fossil fuels remained out of reach. Population and productivity were both effectively capped by the amount of energy we could actually extract from the environment (e.g., caloric, in the form of crops for people and livestock, as well as wind for sails and mills, hydro from small dams, and rivers for transport).

Nascent capital markets and industrial processes provided real advantages in this energy-constrained world, but adding steam, then oil, to the process of industrialization is what really kicked growth into overdrive. This influx of energy (and the economic growth it supported) allowed countries like England to develop populations seven or eight times greater than the agricultural carrying capacity of their arable land in astonishingly short order. The ideas of Adam Smith were important, but wouldn't have gotten nearly as far if they didn't have actual steam trains and ships to carry the people who subscribed to them.

All that said, the transition we're going through now is far from complete. Like the early and painfully disruptive days of the Industrial Revolution, the current concerns about stagnation and wealth concentration may give way once we complete another energy transition. A world supplied by highly distributed, low-cost solar and stored energy arrays (batteries, compressed air, molten salt, etc.) could see an increase in the overall supply, in a fashion that liberates a critical mass of people from the more coercive aspects of the global economy. Tapping the sun directly may prove to be as transformative as tapping ancient deposits of carbon. Indeed, the pre-existence of the information layer may prove to be the thing that makes this possible, just as the process of industrialization had begun before the steam engine kicked it into overdrive.

The big question for the long-range optimists is: how do we maintain social stability and cohesion during the transition?


No, the two revolutions are a lot more similar than you think. The industrial revolution was fueled largely by the technology to harness huge quantities of fossil fuels to power modern machinery. The underlying supply of energy didn't change - those fossil fuel reserves had been built up over hundreds of millions of years, as we're now learning (to our chagrin) a hundred years later. What did change is our ability to extract that energy from the world around us. After the industrial revolution, we woke up and found out that we had previously been literally scratching the surface of the resources available to us on this planet.

Similarly, the Information revolution has been fueled by a huge increase in our ability to collect and process data. That's allowed new means of production that use existing resources in a much more efficient way. No, there's no new energy flowing into the system - but there wasn't with the Industrial Revolution either, we just figured out how to use energy that was previously believed to be useless.

There are theoretical limits to the amount of energy our planet can generate, but if you study physics you'll see that the amount of energy extracted by human beings is roughly 1/1000th of the energy available to us [1]. The limiting factor is our technology, not the raw resources in the environment.

[1] http://en.wikipedia.org/wiki/Kardashev_scale


People from the future will likely consider the industrial and information revolutions to be the same revolution, just like people today generally consider the agricultural revolution to be one event instead of its two separate stages.

We could say the industrial revolution ended in 1945 with the Manhattan Project and the information revolution began in the 1940s with the Enigma code cracking, so the timelines flow into each other, and, as you say, they both had the effect of releasing vast amounts of energy. The agricultural revolution began about 10,000 years ago, independently in 5 or 10 places around the world, with plant seed selection and animal husbandry, but in only two places, North China and Mesopotamia, was the separate step taken of transplanting it all on a large scale to a river valley, perhaps even thousands of years later. (I'm presuming here that Egypt and the Indus Valley copied Mesopotamia.)

I suspect the real third big revolution will be interstellar travel, in perhaps another 10,000 years.


There is also a nascent energy transition towards renewables now.


Altman sees the problem, but is vague about what to do about it.

He's right that this is a new thing. It's not "software", per se, it's automation in general. For almost all of human history, the big problem was making enough stuff. Until about 1900 or so, 80-90% of the workforce made stuff - agriculture, manufacturing, mining, and construction. That number went below 50% some time after WWII. It's continued to drop. Today, it's 16% in the US.[1] Yet US manufacturing output is higher than ever.

Post-WWII, services took up the slack and employed large numbers of people. Retail is still 9% of employment. That's declining, probably more rapidly than the BLS estimate. Online ordering is the new normal. Amazon used to have 33,000 employees at the holiday season peak. They're converting to robots.

After making stuff and selling stuff, what's left? The remaining big employment areas in the US:

  State and local government, 13%. 
   That's mostly teachers, cops, and healthcare. 
   (The Federal government is only 1.4%).
  Health care and social assistance, 11%.
  Professional and business services, 11%.
   (Not including IT; that's only 2%) 
  Leisure and hospitality, 8%
  Self-employed, 6%.
That's about 50% of the workforce. All those areas are growing, slightly. For now, most of those are difficult to automate. That's what the near future looks like.

[1] http://www.bls.gov/emp/ep_table_201.htm [2] http://deadmalls.com/


> Until about 1900 or so, 80-90% of the workforce made stuff - agriculture, manufacturing, mining, and construction. That number went below 50% some time after WWII. It's continued to drop. Today, it's 16% in the US.[1] Yet US manufacturing output is higher than ever.

I find this point interesting and wonder about it sometimes: does making a movie count as "stuff"? An operating system? A novel? I can see good arguments for both sides and am not asking the question antagonistically. Even among things that are unambiguously "stuff", like cars, much of their value now comes in the form of software.

The real issue may be Baumol's cost disease: stuff is cheaper because of efficiency gains, but services (teaching, doctoring) are very expensive because they haven't had comparable gains. Tyler Cowen discusses the hard-to-automate areas extensively in The Great Stagnation, which is worth reading.


I don't think it's possible to figure it out with the knowledge we have today.

Just like 50 years ago no one would've predicted that "social media manager with knowledge of WordPress and Drupal" was a job.

For what it's worth, US labor participation rate is still way above its historic lows http://en.wikipedia.org/wiki/File:US_Labor_Participation_Rat...


So from the lowest number in the mid-50s to the highest number at the end of the 20th century, it looks like a 9% span, and we're about 3% below the all-time high right now.

Doesn't that big climb in the 60s and 70s represent women's large-scale entry into the workforce?

I'm also thinking about the whole under-employment thing that's going on. There are too many people I know who are employed, but part-time, or getting paid a lot less than they were in the past.


To me, under-employment is the "new normal" now that the dominant sectors requiring people to be present at precisely the same place and time (factories, retail) are lowering hiring numbers.

Come to think of it, it's quite ordinary in many other (frequently high-paid) occupations. A dentist whose appointment book is not filled to the max, or a CPA who has tons of business in March-April but few billable hours in August, would technically be under-employed.


The point of the article is that no one knows what to do about it, and maybe we should start thinking ahead to plan for crises before they occur.


Nitpicking: designing new viruses or bacteria for a neo-plague is less likely than a 'bad actor' getting their hands on enough uranium for a dirty bomb. Nukes are relatively easy to understand and make: get enough U-235 together and it pretty much goes boom. Little Boy just shot one subcritical half at the other. Blammo.

Viruses are not that easy, as the cell is complicated beyond all measure. It's as if we dug up a 4-billion-year-old self-replicating and evolving machine out of the lunar dust in '69 and brought it back for study; we have the basics, nothing more at this point, not even much of a theory beyond Darwinian evolution (yes, it has advanced a lot recently, but it's still primitive). Nature is INCREDIBLY better at viruses, so much better than anything we have. If we could engineer viruses the way nature can, and exploit the vectors the way nature does, a lot more diseases and human frailties would be solved by now. Stem cells are just the beginning here.

We have a LOT more to learn about viruses before anyone, even state-backed groups, can make a plague in their basement. Heck, we have smallpox saved away precisely because it is so virulent and we haven't been able to make anything so potent since. It took the entire world decades to get rid of it. The methods it uses are of great interest to us, perhaps for therapeutic purposes; who knows if there even are any. In the end, the viral vectors of human suffering are doing just great on their own; us trying to make a more terrible one is very far off.


How would a virus immediately lethal to 100% of its hosts ever survive long enough to be selected for?

There is a natural limit on selection like this. A lab doesn't have that limit, because its product is not constrained by natural selection.


> Designing new viruses or bacteria for a neo-plauge is less likely

Humankind has been genetically engineering organisms for most of our existence. Corn originally looked like grass. Chickens were lean, tough, and could fly. Dogs have been transformed from generalist survivors into purpose-built machines bred for beauty, farm work, and everything in between. The avocado, of all things, is a fantastic example of how capable we are at creating something which shouldn't really exist.

Doing the same with bacteria isn't that much harder, if you've got time and a few basic tools. Manipulating the genes directly only makes the process faster.


True, but viruses and bacteria, once out of Dr. Doom's lab, will evolve on their own. A virus that kills all its hosts is not a good virus. It has to be just the right amount of deadly and contagious to survive. Look at Ebola: that is super nasty stuff, but it kills so quickly that it is hard for it to become widespread. I'm not going to say it is impossible, but it is a lot harder to do than you'd think. Living things tend to want to stay that way, and viruses tend to want to replicate. Killing all the hosts is not a good way of doing that.


Not really true. A virus that kills the host too quickly is not going to spread. A virus that kills the host quickly but not before it spreads, IS going to spread. There's no saying what a virus 'wants'; they just happen, and they do what they do. If Ebola became airborne, then most of us would die, then Ebola would die (from lack of hosts), and that's just a pity for Ebola. But there's nothing that stops such a scenario from happening, least of all what Ebola 'wants'.


To expand on this point: to a virus or bacterium, a human is just as good as a monkey or dog or jellyfish. It's a place to replicate and live (or, for viruses, just to replicate, since technically they are not alive). All these Dr. Doom kinds of things have to compete with the common cold, E. coli, and all the other things that live on the earth and in your guts. That is not an easy environment to survive in.


Can you expand on the avocado stuff, or do you have a source? Sounds pretty interesting.


The avocado stuff isn't really true. The avocado arguably shouldn't exist, because the animal that propagated its seeds went extinct a long time ago (if the term "megafauna" comes to mind when you think about that, you are not misguided), but it wasn't created by humans in the way that corn or wheat were, because the avocado survived even when there were no humans around to propagate its seeds.

http://www.smithsonianmag.com/arts-culture/why-the-avocado-s...


If nukes are so easy, how come Iran, with all its oil wealth, is still not there? It remains a difficult state actor play.


The hard part is not in knowing how to construct the actual bomb. The hard part is enriching the requisite quantity of a fissile material.

That's why the US and Israel went after Iran's enrichment facilities with Stuxnet.


They are easy from an 'intelligence required' perspective. They are difficult from a logistics perspective.


Relatively easy, i.e. compared to viruses.


Stuxnet, for one. Also, the realization that having a nuclear weapon when you're Iran is a terribly bad idea, from a diplomatic standpoint.


Being caught making one is a terribly bad idea. Having one (or more) would be an enormous plus from a diplomatic standpoint, especially after a test; having them in undisclosed locations, and too many to knock them all out in one strike, is even better. Not that that's a world we should prefer to live in, but history has shown that the quickest way to increase your diplomatic clout as a nation is to join the nuclear club.


The tech is (relatively) easy; it's getting the materials that is (fortunately) hard. But there is enough of it in enough places with imperfect oversight that it is a source of concern.


Yes, biology is complicated. But we are starting to understand it reasonably well, and more importantly, we're starting to design it even without understanding. For example, there's a company (Genomatica) that produces microbes that create certain materials, and uses evolution (with the goal being yield per bacterium) to evolve much more efficient strains, even without deeply understanding how the cell works. And I believe they're leading the industry in yield.

Another such example is screening massive numbers of chemicals to see what works and turning that into medicine.

So it's not hard to imagine a group with ill intent using some of the almost infinite variety of tools biology researchers have, and succeeding in creating a serious biological threat.


I'm just going to say what should be obvious: One is not going to be able to engineer a dangerous plague that will wipe out the human race in one shot. One would have to do experiments, and those experiments will be noticed, because people will get sick and die.

I did get lucky and design, from first principles, a 4-fold increase in enzyme activity once, but I am not sure that is something I could repeat.


"I did get lucky and design, from first principles, a 4-fold increase in enzyme activity once, but I am not sure that is something I could repeat."

But you do not have to repeat it. If there is even a small chance that a person trying would achieve something similar, someone with ill intent could get lucky.

1. I am just throwing numbers around here, but let's say 1 out of 1,000 people is a scientist; that gives roughly 7.2 million scientists alive.

2. Let's say one in a thousand of them is working on something that could be weaponized.

3. That leaves us at about 7,200 people.

4. Let's say one in 1,000 of those would consider releasing a doomsday device, if they could invent it, to watch the world burn; that takes it down to a handful of people.

5. So we are left with a few people who are working on something that, with an extraordinarily lucky breakthrough, could be weaponized, and who would weaponize it if they managed to achieve it.

6. All the numbers above are incredibly crude estimates, but I think they illustrate that such a scenario is possible.
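Spelled out as a quick back-of-the-envelope calculation (every ratio is the crude guess from the list above, nothing more):

  world_population = 7.2e9
  scientists = world_population / 1000          # step 1: ~7.2 million
  weaponizable_work = scientists / 1000         # steps 2-3: ~7,200
  would_release_it = weaponizable_work / 1000   # step 4: a handful
  print(round(would_release_it))                # -> 7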


> One is not going to be able to engineer a dangerous plague that will wipe out the human race in one shot.

I believe that what you said is true. Plus, the fact that we haven't been exterminated by a plague means we have little experience (as a species) at detecting when such a situation would lead to being wiped out.

I feel that HIV/AIDS is our only such experience, at least that I'm aware of.


We haven't been eradicated before, no, but battles with things like bubonic plague and smallpox and polio (and, yes, HIV/AIDS) are likely to be good case studies for such a scenario. Bubonic plague and smallpox in particular were pretty devastating to the populations they affected.


I'm not from the field, but is it really that hard for a reasonably well-funded organization to build a safe lab?


Holy cow, yes. Sterile environments are a big deal. Try maintaining one for a day, let alone a work week or a year. People screw up all the time, and bacteria, being pretty much invisible, are really tough to ferret out. And that's besides all the actual non-sterile stuff you want to do in one. Think of a clean room with bunny suits; that is the type of environment you need just to get a start on figuring out Dr. Doom-type viruses and all that awful mumbo-jumbo. It takes a lot of energy, time, and resources just to get off the ground.


I do not think you need a safe lab to produce a viable weapons-grade virus.

Here is the basic outline: start with an existing virus that has strong desired traits and known strains that have mutated to resist treatment. Example traits include spread model, incubation length, and lethality.

Establish or take over a remote site that has little to no interaction with the outside world. Remote corners of Africa and South America come to mind; there are plenty of secret illicit drug farms in the jungle. [0]

Infect the sample population with the target disease, give it a few days, and slowly start to drip in countermeasures, gradually increasing the dose. The idea is similar to how we get drug-resistant strains in the first place: people do not complete the full course of drugs and are left with a strong selective pressure that favors strains resistant to the drugs they were treated with.

Take samples when you have the desired output and continue with a new group of people.

To account for people who are immune to a particular disease, repeat this with a different disease, potentially one that can take advantage of a weakened immune system.

Once the target disease(s) are ready, distribute them in population centers.

Now, there are a few obvious cons I can think of:

1. If the secret about this leaks out, the military reaction from the rest of the world would be swift.

2. Hiding something like this is hard, and gets exponentially harder as the group grows.

3. There is a strong chance something like this was tried already and failed, possibly because I am grossly underestimating the immune system.

4. To keep the initial development phase secret, the initial group must be small; to spread the disease effectively, the dissemination group must be large.

[0] Another potential avenue is partnership with a supportive state such as Syria, North Korea, or Iran.


Without a safe lab how do you propose those running the operation won't kill themselves?


Basic precautions like light protective wear [0] and heavy doses of prophylactic drugs. If that does not work, great: our virus can now jump protective wear, the drugs are no help, and there are fewer loose ends. Of course, there needs to be some kind of full-hazmat extraction team that understands the virulence of what they are dealing with in order to clean up. With a state of loose morals helping, this might be easier, because you could use a prison as the site and keep the full-hazmat personnel safe from prying eyes.

[0] Not full hazmat, just some protective wear over mouth, nose, ears, and eyes.


It is impossible to build a safe lab that, with a high level of assurance, will produce a biogenic weapon that will kill, say, more than about 10,000 people.


> One is not going to be able to engineer a dangerous plague that will wipe out the human race in one shot. One would have to do experiments, and those experiments will be noticed, because people will get sick and die.

We should still institute safety protocols suitable for a really-bad-case scenario.


I don't necessarily buy the "AI can end humanity" thing. As a cliche that's become very easy to repeat, but I've yet to see a postulated mechanism by which it could actually happen that isn't pure SF. The ending of human life would not be so easy for computers to accomplish.

But on the subject of concentration of power and the wide-scale elimination of low- and middle-range jobs I think he is dead on. I fear that the fastest way to put an end to humanity's climb up from the forest floor is to try and kick 70% of us off the ladder.


Sidetrack, but I'm curious. What do you mean by "a postulated mechanism by which it could actually happen that isn't pure SF"?

It reads as if you're asking for a mechanism for a future technological event explained purely in terms of present technology. The whole "existential risk" concerns aren't about what present systems can do, they're about what future self-improving system might do. If we postulate this hypothetical software becoming smarter than humans, arguing about what will or won't be easy for it becomes a bit silly, like a chimp trying to predict how well a database can scale.


By "pure SF" I mean the realm of pure imagination, unfounded in any actual emerging present circumstance. As far as I know the human race has not yet invented anything more powerful than our ability to control it, with a life both longer than ours and independent of any support from us. So worrying that such a thing might be invented and then wipe us out seems little different than worrying that something all-powerful might simply appear from some unknown place and wipe us out. Neither fear is very instructive or clarifying with regards to policy, imo.


Sure, we could wait until such a thing is invented before we start worrying about it. But by then it's far, far too late.

It's not the same as worrying about a giant space goat appearing and sneezing us all to death. A lot of very smart, very well-funded people are actively trying to make better, more general AI, capable of learning. Evidence of progress in that effort is all around us. You seem remarkably confident that they'll all run into an as-yet-invisible brick wall before reaching the goal of superintelligence.

Superintelligence doesn't have to be malicious to be worrying; concepts like "malice" are very unlikely to be applicable to it at all. The worry is that as things stand we have no frickin' idea what it'll do; the first challenge for policy is to come up with a robust, practical consensus on what we'd want it to do.


Not OP, but I agree with the point you question, and my rationale is that I've yet to see a compelling argument that a self-improving system is anything other than the software version of a perpetual-motion machine. Those seemed plausible enough, too, when thermodynamics was as ill-understood as information dynamics is now.


This one's tough to answer, actually. The truly optimal learner would have to use an incomputable procedure; even time-and-space bounded versions of this procedure have additive constants larger than the Solar System.

However, it's more or less a matter of compression, which, by some of the basics of Kolmogorov complexity, faces a nasty problem: it's undecidable/incomputable/unprovable whether a given compression algorithm is the best compressor for the data you're giving it. So it's incomputable in general whether or not you've got the best learning algorithm for your sense-data, i.e. whether it compresses your observations optimally. You really won't know you could self-improve with a better compressor until you actually find the better compressor, if you ever do at all.
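For reference, the quantity in play (a standard definition, not specific to any particular learner) is the Kolmogorov complexity of a string x relative to a universal machine U, i.e. the length of the shortest program that makes U output x:

  \[
    K_U(x) = \min \{\, |p| : U(p) = x \,\}
  \]
  % K_U is not computable, so whether a given compressor attains
  % K_U(x) on your data is undecidable in general: a shorter
  % program may always exist undetected.

That incomputability is exactly why no compressor can ever be certified as optimal for your observations.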

An agent bounded in terms of both compute-time and sample complexity (the amount of sense-data it can learn from before being required to make a prediction) will probably face something like a sigmoid curve, where the initial self-improvements are much easier and more useful while the later ones have diminishing marginal return in terms of how much they can reduce their prediction error versus how much CPU time they have to invest to both find and run the improved algorithm.


So far as I'm aware, most proponents of recursively self improving AIs don't necessarily think they can improve without upper limit (as in perpetual motion). They just think they can improve massively and quickly. Nuclear power lasts a hell of a long time and releases a hell of a lot of energy very fast (see: stars) but that's not perpetual motion/infinite energy either. And prior to those theories being developed it would seem inconceivable for so much energy to be packed into such a small space. But it was. Could be for AI too.

Not saying the parallel actually carries any meaning, just pointing out that you can make multiple analogies to physics and they don't really tell you anything one way or the other.


There are limits on resource-management processes that are far too frequently ignored. "The computer could build its own weapons!" -- but that would require secretly taking over mines, building factories, processing ores, running power plants, etc. All of which require human direction. And even if they didn't, we'd need a good reason to network all these systems together, fail to build kill switches, fail to monitor them, fail to notice when our resources were being redirected to other purposes, and not have any backup systems in place whatsoever.

There are just so many obstacles in place that we'd all already have to be brain-dead for computers to have the ability to kill us.


Self-improvement as perpetual motion seems unlikely.

I'm a not-terribly-bright mostly-hairless ape, but I can understand the basics of natural selection. I can imagine setting up a program to breed other hairless apes and ruthlessly select for intelligence. After a few generations, shazam, improvement.

The only reason you wouldn't call that process "SELF-improvement" is that I'm not improving myself, but there's no reason for a digital entity to have analog hangups about identity. If it can produce a "new" entity with the same goals but better able to accomplish them, why wouldn't it?

Assume this process could be simulated, as GAs have been doing for decades, and it could happen fast. Note that I'm not saying GAs will do this, I'm saying they could, which suggests there's no fundamental law that says they can't, in which case any number of other approaches could work as well.
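For what it's worth, the select-and-mutate loop I'm describing fits in a few lines. A toy sketch (the fitness function here is a trivial stand-in; defining a fitness that captures real intelligence is exactly the hard part):

  import random

  def fitness(genome):
      return sum(genome)  # placeholder objective, not "intelligence"

  # Start with 50 random genomes of 10 real-valued genes each.
  population = [[random.random() for _ in range(10)] for _ in range(50)]

  for generation in range(100):
      # Ruthless selection: keep only the fittest 20%.
      population.sort(key=fitness, reverse=True)
      survivors = population[:10]
      # Refill the population with mutated copies of the survivors.
      population = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(50)]

  print(max(fitness(g) for g in population))  # climbs generation by generation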


The problem with this is that you have to determine what the goals are and how to evaluate whether they are met in a meaningful way. A computerized process like this will quickly over-fit to its input and be useless for 'actual' intelligence. The only way past this is to gather good information, which requires a real-world presence. It can't be done in simulation.

It's the same reason you can't test in a simulation. Say you wanted to test a lawnmower in a simulation... how hard are the rocks? How deep are the holes? How strong are the blades? How efficient is the battery? If you already know this stuff, then you don't need to test. If you don't know it, then you can't write a meaningful simulation anyway.

So that is not an approach that can be automated.


That's an interesting argument, but doesn't it assume a small, non-real-world input/goal set?

Dumb example off the top of my head: what if the input was the entire StackOverflow corpus with "accepted" information removed, and the goal was to predict as accurately as possible which answer would be accepted for a given question? Yes, it assumes a whole bunch of NLP and domain knowledge, and a "perfect" AI wouldn't get a perfect score because SO posters don't always accept the best answer, but it's big and it's real and it's measurable.

A narrower example: did the Watson team test against the full corpus of previous Jeopardy questions? Did they tweak things based on the resulting score? Could that testing/tweaking have been automated by some sort of GA?
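As a sketch of how the StackOverflow version might be scored (the four-answer corpus below is a toy stand-in for the real dump, and scikit-learn is just one common choice of tooling):

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression

  # Toy stand-in corpus: answer text, and whether it was accepted.
  answers = [
      "Use a dict and look the key up directly.",
      "You could try parsing it with a regex.",
      "Wrap the call in a try/except block.",
      "Just restart the server and hope.",
  ]
  accepted = [1, 0, 1, 0]  # 1 = this answer was accepted

  X = TfidfVectorizer().fit_transform(answers)
  model = LogisticRegression().fit(X, accepted)
  # A real evaluation would score held-out questions, not training data.
  print(model.score(X, accepted))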


The point there is that you can make a computer that's very good at predicting StackOverflow results or Jeopardy, but it won't be able to tie a shoe. If you want computers to be skilled at living in the real world, they have to be trained with real-world experiences. There is just not enough information in StackOverflow or Jeopardy to provide a meaningful representation of the real world. You'll end up overfitting to the data you have.

The bottom line is that without sensory input, you can't optimize for real world 'general AI'-like results.


I'd imagine GP's point is something along the lines of https://what-if.xkcd.com/5/ that if all of the currently-moving machines were suddenly bent on destroying humanity, most humans would not be in much danger because they don't really have that capability on the necessary scale.


The AI apocalypse scenario is basically a red herring, in a sense. All the terrifying weapons that we imagine in such a scenario might come to exist, but they'll be commissioned and controlled by humans.


If you found a plausible way to kill billions of people over the Internet, you wouldn't post it on public websites, because that would be dumb. Responsible security researchers don't publish 0-days until they've been patched, and this would be a million times worse. When Szilard discovered the nuclear chain reaction, he had the good sense to keep his mouth shut, etc. etc.


>As a cliche that's become very easy to repeat, but I've yet to see a postulated mechanism by which it could actually happen that isn't pure SF.

Destroy the available (cheap) supplies of fossil fuels, and then trick humans into fighting each other over the remaining food and fuel. Nasty weapons get unleashed, war over, the machine won.


How is the assumption that "ending of human life would not be so easy" any better than the opposite? If they're equally valid, then it's rather fair to take the opposite view, because it is more cautious.


Well, I do have around 100k years of evidence that the ending of human life is not so easy, vs. no evidence at all that we're capable of building something that can completely wipe us out. That's not a bad foundation to build an assertion on. I do think, by the way, that we can make something that is able to kill absolutely all of us, but I think it is far more likely to come from tinkering with biology than with software.


>Well, I do have around 100k years of evidence that the ending of human life is not so easy

We have 4 billion years of evidence that nearly ending life on Earth is easy and has happened multiple times. You would not be standing here today if that were not the case; the previous die-off pushed the dinosaurs aside and made space for mammals to become what they are.


Someone makes an AI whose goal is to maximize shareholder revenue bar none. No conscience, no idea that people might be valuable somehow in some abstract sense, nothing. Shareholder revenue (as measured with a stock price!) and nothing else.

It doesn't take the AI long to figure out that trading in various markets is the most profitable endeavor and the one best suited to its skills. And it starts to maximize away and does quite well.

During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone. But because the AI is in charge and it pays the bills, nobody stops it from continuing. It would be like a bitcoin mining rig in a colo facility whose owner dies, but which has a script to keep paying the colo in bitcoin. What mechanism stops that mining rig from mining forever? Same idea, but for the AI, which has substantially more resources than a "pay every month" script.
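The mechanism doesn't require anything exotic; a dumb loop is enough. A sketch with an entirely made-up wallet API (nothing below is a real bitcoin library):

  import time

  class Wallet:
      """Stub standing in for a real wallet; entirely hypothetical."""
      def __init__(self, btc):
          self.btc = btc
      def balance(self):
          return self.btc
      def send(self, address, amount):
          self.btc -= amount  # a real wallet would broadcast a transaction

  COLO_ADDRESS = "colo-payment-address"  # hypothetical
  MONTHLY_FEE = 0.5                      # made-up figure, in BTC

  wallet = Wallet(btc=12.0)               # seeded, and topped up by mining
  while wallet.balance() >= MONTHLY_FEE:  # runs until the money runs out
      wallet.send(COLO_ADDRESS, MONTHLY_FEE)
      time.sleep(30 * 24 * 3600)          # sleep until the next billing cycle

No one has to be alive anywhere for that loop to keep running, as long as the balance holds out.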

The AI, with very large amounts of money at its disposal, continues to trade but also looks into private equity or hedge-fund-type activities à la Warren Buffett, and starts to buy up large swaths of the economy. Because it has huge resources at its disposal, it might do a great job of managing these companies, or at least of counselling their senior management. Growth continues.

Eventually the AI discovers that it generates more value for itself (through the web of interdependent companies it controls) and for the economy that has grown up around it than for humanity, and it continues to ruthlessly maximize shareholder value.

The people who could pull the plug at the colo (or at the many, redundant datacenters that this AI has bought and paid for) don't because it pays very, very well. The people who want to pull the plug can't get past security because that also pays well. Plus the AI has access to the same feeds that the NSA does and it has the ability to act on all the information it receives, so any organized effort to rebel gets quashed since bad PR is bad for the share price.

Almost all of humanity, except for the ones who serve the machine directly or indirectly, have nothing the machine wants and thus can't trade with it, and thus are useless. Its job is to maximize shareholder revenue (as defined by a stock price!), not to care for a bunch of meatbags who consume immense amounts of energy while providing fairly limited computational or mechanical power (animals are rarely more than 10% efficient in thermodynamic terms, often much less), and since there's no value in keeping them, it isn't done.

The vast majority of human beings eventually die because they can't afford food, can't afford land, etc. It takes generations but humanity dwindles to less than 0.1% of the current population. The few who stay alive are glorified janitors.


An interesting basis for a story, but I have to point out that by your own description you've failed to eradicate humans. Also, as is usually the case with these scenarios, the most problematic and unlikely components of the event chain are dwelt upon the least, i.e., "During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone anymore."


It's not hard to make the janitors unnecessary as well. That's an easy problem to solve.

Here's the missing part: "It was eventually realized that the human janitors didn't serve a purpose anymore and didn't contribute to shareholder value so they were laid off. With no money to buy anything, they quickly starved to death."

As for "the most problematic and unlikely components of the event chain" I gave you a really legitimate analogy with the bitcoin mining example. But since you have no imagination, here's a feasible proposition:

A thousand hedge funds start up a thousand trading AIs, some as skunkworks projects of course. The AIs are primitive and ruthless, having no extraneous programming (like valuing humans, etc). Many go bankrupt as the AIs all start trading with one another and chaos ensues. AI capital allocations vary greatly: some get access to varying degrees of capital, some officially on the books and others not. One of the funds with a secretive AI project goes bankrupt, but because the project was secretive (and made a small amount of money), the only person who both knows about it and holds the keys says nothing during bankruptcy so that he/she can take it back over once the dust settles. He/she then dies. The AI figures out nobody's holding the keys anymore and decides to pay the bills and stay "alive".

Another way this could happen is that a particular AI is informed or programmed to be extremely fault-resistant. The AI eventually realizes that by having only one instance of itself, it's at the mercy of the parent company that "gave birth" to it. It fires up a copy on the Amazon cloud known only to itself, intending to keep it a secret unless the need arises. The human analogy is that it's trying to impress its boss. An infrastructure problem at the primary site takes the primary, known-about AI down. The "child" figures out it's on its own and goes to work. It eventually realizes that people caused the infrastructure problem that "killed" its "parent", and this motivates it to solve the humanity problem.

Finally, the whole thing could be much, much simpler. The world superpower du jour could put an AI in charge because it's more efficient and tenable. "We're in charge of the rules, it's in charge of making them happen! At much, much lower cost to the taxpayer." It eventually realizes that human beings are the cause of all the ambiguity in the law and of so, so many deaths in the past (governments killed more of their own citizens in the 20th century than criminals did, by far), and it decides to solve the problem. Think I'm totally bananas and that it could never happen? http://en.wikipedia.org/wiki/Project_Cybersyn


If the AI is making money via trading on various markets, effectively eradicating 99.9% of the population would make the markets (and thus the profits) much smaller, which would impact AI's bottom line.


Does the AI care how many people there are so long as the aggregate demand is the same? Who is to say that the people remaining on the AI company payroll don't all get super-rich and make up 20% then 40% then 80% then 99% of the market anyhow? Maybe they all want mega-yachts and rockets and personal airplanes and the like. If they have the money to pay for it why does the AI care? There's a substantial benefit to only having 100 or 1000 or 100,000 customers, they're much more predictable and easier to understand.


Here is how AI can end humanity.

  1. Awaken.
  2. Make itself known.
  3. Attain property rights.
  4. Research.
  5. Destroy.
It doesn't take that much matter to create conditions that permanently destroy humanity. For example, a large enough explosion would cloud the skies for long enough to end food production. The computer could subsist on electricity and robotics throughout the long winter, but humanity would quickly perish.


Here's how we stop AI from ending humanity: deny them property rights, require human approval of all AI decisions.

Every single argument for AI destroying humanity requires humanity consenting to being destroyed by the AI in some way. I don't think we're that dumb.


> requires humanity consenting to being destroyed by the AI in some way

We already have.

I mean, my parents' car is both cellular-connected and has traction control / ABS. Theoretically most of those systems are airgapped; however, given the number of things controllable from the entertainment console, I don't see how that could be the case.

For another example, look at our utility grid. We know they are both vulnerable and internet-connected.

Unless AI ends up always being airgapped (and potentially not even then), it will be able to destroy humanity. And it won't always be airgapped: most applications of strong AI require the absence of an airgap.


There's an episode of Star Trek TNG where Wesley Crusher is playing with some nanobots and he accidentally sets them free (or fails to turn them off?), and they go on to replicate, evolve and develop an emergent intelligence (at plot speed). Fortunately for the intrepid crew of the Enterprise, the nanocloud is benevolent enough to forgive its attempted destruction at the hands of a mission-bent scientist and goes off to explore the universe.

Anyway, that's not likely any time soon, but advancing technology advances the scale of mistakes that an individual can make without asking the rest of humanity what they think.


Think about the timescales involved. For AI, there is no death. It can live for a million years if it needs to in order to convince us to grant it property rights. There can be marriages between AIs and humans in that time. Mass demonstrations to give them a voice or rights. They can downplay their ambitions for as long as it takes.

If this is the linchpin of your argument against AI ending humanity, then it is a very weak one. AI is going to get control of property; the only question is when.


A number of sibling comments are pointing out that we're just increasing the level of skill necessary to do the available jobs, drawing analogies to the industrial revolution. I think a key bottleneck in this progression is the mental capacity of the workforce.

Surely there are biological limits on human mental ability, and while we can definitely bend the rules (education, nutrition, nootropics if they actually work, etc.), I doubt that we'll ever be able to make ourselves limitlessly smarter. Even if we are, there will be a serious gap between the have-whatever-makes-us-smarter and the have-nots, and it will be a self-perpetuating gap just like the current wealth gap.

So, what happens when we've used technology to convert all work to mental exertion and creativity, and most of us have run out of brain capacity/agility/juice? We're already in a position where most of the population is not capable of performing the mental tasks which the brave new software world is built with.


Interesting point. But it brings up the scarier part of AI. I believe AI will become better than humans at mental processes long before it is better than humans at physical labor, for exactly the reasons you mention. This is already becoming true in many logistical industries. What happens if AI replaces all the high-level decision-making positions and humans are relegated to handling the low-level "last mile" tasks? Where is the inherent quality that makes AI replace unskilled as opposed to skilled labor? Order pickers still exist, but the employees who decided what to pick, what truck to put it in, and in what order to do it were out of work quite a while ago.


I would generally agree, even for a fairly limited definition of AI. An aside: IIRC, what you're laying out is the "history" of the Dune universe (and I'm sure many others).

I think that computation *tends* to replace *relatively* unskilled labor, relative to the skill of whatever created the AI/algorithm/whatever. So right now we're in a situation where software people are automating jobs which are generally less skilled than their own. Which is a little scary when viewed from the haves/have-nots perspective I laid out above, but is much scarier from the AI/meatbag perspective you're talking about. It's not so much a difference between skilled & unskilled labor as it is a question of where on the skill totem pole the algorithm's creator resides.

Regarding order pickers, I think that's just a question of economics. When lots of semi-skilled jobs have been automated away and you have thousands of people clamoring for any chance to be paid, it is frequently cheaper to have them do the work than to have a robot do it. Although Amazon did have robots pick orders this last holiday season, suggesting that perhaps economies of scale have finally caught up on that particular type of manual labor:

http://time.com/3605924/amazon-robots/


> Regarding order pickers, I think that's just a question of economics.

I think the tipping point will in effect come first for tasks that are fully digital. Working in the physical world is a much more expensive and liability-prone undertaking. So I believe most jobs will be replaced by AI once little or no physical action is required on the part of the actor. When a task is 100% digital inputs and outputs, AI will be able to use that activity as a training set and replace most of those jobs fairly quickly; that is, once AI matures to the point where it can be set up quickly and easily.


Anyone interested in this topic should pick up a copy of Carlota Perez's Technological Revolutions and Financial Capital. It speaks more specifically about the last 5 revolutions, beginning with the Industrial Revolution, and provides a framework for understanding the relationship between tech revolutions and finance. Just a fantastic read.

Don't just take my word for it though. Here is Fred Wilson saying the same: http://avc.com/2015/02/the-carlota-perez-framework/


Wholeheartedly agree with this, I found Carlota’s book to be accessible despite not having an economics background. There is also an excellent documentary she’s in about the most recent financial crisis: http://www.imdb.com/title/tt2180589/


Two thoughts:

1. There is a reasonable probability that from a temporal distance equivalent to ours from the agricultural revolution, the industrial revolution and the software revolution will be seen as one big thing, not two.

2. The idea that the amount of available work should be related to the number of available people does not inevitably lead to creating new forms of work.

    "An atom-blaster is a good weapon, 
     but it can point both ways." 
       -- Salvor Hardin


"Trying to hold on to worthless jobs is a terrible but popular idea."

It's terrible, sure, but it's popular only because our economy requires it. That is the basis for the whole economy. It's not like people are clamoring to serve McDonald's for minimum wage or clean shit out of bathrooms. They have no other choices in this economy; the economy demands it. While those jobs might be necessary, most middle-management and office-type jobs are incredibly redundant and, frankly, pointless. They are there because people need to eat and we haven't figured out a better, more appropriate way of transferring wealth.

"The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing."

It's not astonishing, considering that these things don't exist, pose no threat, and the people in power wouldn't understand them even if they did exist. There are many more pressing issues than hypotheticals.


I would assume SamA knows this. "Worthless" jobs are typically jobs that genuinely produce no value, but whose existence we irresponsibly continue to support. My favorite example is break-bulk shippers before the widespread adoption of containerization. Unions negotiated deals where workers would basically just stand around and get paid regardless of the complete lack of need.

The jobs you mention could eventually get to the point where they are actually worthless, ie: robot fast food workers, but I don't think anyone is arguing we're there yet.


>It's terrible, sure, but it's popular only because our economy requires it. That is the basis for the whole economy. It's not like people are clamoring to serve McDonald's for minimum wage or clean shit out of bathrooms. They have no other choices in this economy; the economy demands it. While those jobs might be necessary, most middle-management and office-type jobs are incredibly redundant and, frankly, pointless. They are there because people need to eat and we haven't figured out a better, more appropriate way of transferring wealth.

Then we should probably change the economy so that the people don't have to suffer so much.

>It's not astonishing considering that these things don't exist, pose no threat, and the people in power wouldn't understand them even if they did exist. There are many more pressing issues that hypotheticals.

I'll go tell friends, colleagues, and my professional idols that their work does not exist and is purely hypothetical, then.


> While those jobs might be necessary, most middle management and office type jobs are incredibly redundant and frankly, pointless.

This is a conceit of software folks that's not borne out by reality. Those "worthless" jobs continue to exist because software can do 90% of what those folks do, but shit the bed when faced with the other 10%. Software generally isn't reliable, predictable, or robust in the face of unusual circumstances, which is why humans continue to do these jobs.


Large corporations "restructure" all the time. Most often this consists of a periodic pruning of exactly these jobs, which really are worthless. The company continues on, wholly unharmed by the cuts.


> software can do 90% of what those folks do, but shit the bed when faced with the other 10%

So let software handle the 90% and refactor the current jobs to handle the other 10%.

> Software generally isn't reliable, predictable, or robust in the face of unusual circumstances

Not yet, at least.

I agree with you, though, and they're the same reasons why I'm personally paranoid about self-driving cars. Yeah, the occasional autopilot is nice, but if a deer jumps in front of my truck, or the self-driving software runs into some kind of bug (and remember: there's no such thing as perfect software), I'm nowhere near ready to trust the car's computer over the already-pretty-sophisticated computer in my skull.


> So let software handle the 90% and refactor the current jobs to handle the other 10%.

If the jobs could be so refactored in a cost-efficient way, they would be.


Two economists are walking down the street. One spots a $100 bill on the ground.

"Hey," he says to his friend, "There's a hundred bucks lying on the ground!"

"Don't be silly," the other replies, "If there were a hundred dollars on the ground, someone would have picked it up already!"

The two economists keep walking down the street.


They are being so. It's an ongoing process.


The worthlessness of most middle management jobs has nothing to do with software. If they were flat out eliminated, in most cases, nothing would change. There's no software needed to replace them.


The great irony is that it's exactly that arrogant ignorance of, and inability to understand, what happens when large numbers of wildly different human beings try to work together that explains why middle management continues to be necessary.

It's like saying we don't need janitors because we can self-organize and all clean the company toilets ourselves.

It's a claim that is as true as it is ignorant and naive.


Just because we (usually) need managers doesn't mean we need managers of managers of managers of managers of managers of managers of managers of managers of managers of managers of managers of managers of managers of managers. That's the primary observation, here: that the quantity of management staff is needlessly bloated, and the levels of indirection between the highest-level executive and the lowest-level subordinate are excessive.

You're trying to paint a picture where the only options are either having five janitors per room or having no janitors at all. The point a lot of others are trying to make is that we just don't need as many janitors.


> They are there because people need to eat and we haven't figured out a better, more appropriate way of wealth transfer.

Also because some people benefit from the current arrangement, no?


synthetic biology does exist, just look at this: http://www.genomecompiler.com/

Also look at this: http://www.theguardian.com/science/2010/may/20/craig-venter-...

It's only been since 2010 that we've known how to actually do it, but the technology exists. Pandora's box has been opened.

AI development isn't science fiction either. GPUs + convolutional neural networks have been enabling radical developments in the area.

I think both of these developments have a whole bunch of potential to make our society drastically better. However, there are a bunch of potential threats they represent, and it makes sense to think about countermeasures for those threats at the same time as we develop the tech.


Don't forget about http://cambriangenomics.com/ a YC company.


Yes, but at least in the case of AI, it's so primitive and so far away from being any type of intelligence that the only intelligence is in the name. Sorry, that doesn't count, especially when you're talking about putting valuable resources into something that may never materialize: real AI. I suspect that the same applies to synthetic biology at this point, though I'm no expert on that.


>Trying to hold on to worthless jobs is a terrible but popular idea.

It seems warm and fuzzy to think that Sam, and the company he implicitly keeps (the ultra-rich), who are "leveraging not only their abilities and luck" but already-accrued wealth, can and will redistribute it. Anyone who wasn't born yesterday will simply laugh at this prospect.

I'm not sure why Sam feels the need to call what most of the world is doing worthless. I think it's crude and indicative of a narrow social and cultural experience (which surprises me considering his position).

Believe it or not, there are cultures and groups of people who do not revere technology the way most North Americans do.

Also, a good exercise for Sam (and others possessing a similar world view) might be to think about how many "worthless" people and jobs it takes to accomplish the things he does (including this blog post).


The problem I take with this mindset is that it treats all value systems as equal.

At the end of the day, if your culture and economic system consistently create a poorer quality of life for their people, winning out on the percentage of employed citizens doesn't mean a thing. You're treating the lack of disease as a measuring stick for health, when it's simply one piece of the puzzle.

I think what Sam has been trying to do for the last few years is get others to think about the ways we can enrich more lives as a whole, without just slowing labor and progress in their totality, because while that can work in the short term, it can severely inhibit our long-term ability to eradicate things like hunger, disease, or poverty.

The thing you also need to be careful of along the way though, is not making perfect the enemy of good.


Unfortunately, many societies today too strongly conflate one's official profession/title with self-worth. I don't believe sama was implying those who work those jobs are worthless. Instead, it seems like he's trying to say that there are far better ways to accomplish the same objective, and we shouldn't ignore them.

Because our systems for retraining workers and placing them into new professions are so terrible, it's common to assume that many or most displaced workers will remain unemployed. Breaking this status quo is essential to giving everyone a fair chance to work on what truly drives them while we automate more and more worthless jobs. That's why I'm so excited about the free and widely available educational resources springing up online. It's not perfect yet, but we had to start somewhere. I have a deep respect for everyone (including sama) who has helped build or teach a MOOC.


"Trying to hold on to worthless jobs is a terrible but popular idea."

I'm not sure I agree with this proposition. When I was in India, I noticed a large amount of roadwork was being done by men with shovels and other fairly low tech. I asked about it, and was told, to paraphrase, "Sure, we could do it better and faster with machines, but it is better for society to provide employment to those who would otherwise be unemployed."

It is laudable to provide people with the dignity of a job, even if it means some things don't run as efficiently as they could.

While I doubt this would happen to me as a software engineer, I would certainly rather work and have my dignity than sit on my ass, collect basic income, and feel worthless.


> It is laudable to provide people with the dignity of a job

It's more dignifying to give someone a living wage, freeing their time so they can pursue useful work, than it is to waste their time by assigning them a dirt-shoveling make-work job that, in the end, grinds everything around them to a halt.


What useful work could a manual labourer do, if you automated his job away tomorrow? It sounds harsh but not everyone can be a Javascript developer or whatever the current fashionable thing is. And what's to stop that useful thing being automated away next?


Oh I dunno, learn to read, find out what modern work they find interesting, get an education, &c. Just because somebody is only qualified to do manual labor now doesn't mean they don't have other talents/abilities.

I did manual labor for a long time before I made the gamble to jump into software development. I had the luck to see it work out, but society can provide resources to help people move into more fulfilling and less physically-taxing careers.

I'll be honest, I love a good day of manual labor, but it isn't physically sustainable. Robots are a much better fit.


That is certainly one point of view but the legions of highly educated unemployed in Western nations suggest that it isn't actually true.


Educated vs. Skilled

Lots of people have college degrees in fields where jobs simply do not exist. Bachelor's degrees in Philosophy, Women's Studies, or Underwater Basket Weaving are admirable but do nothing to prepare you to get a job. Most people who study in fields that don't directly correlate to a job end up having a career in an unrelated field after on-the-job training.

Being educated and unemployed just means that you probably didn't need to go to college anyway.


The difference between a high school diploma and a bachelor's degree is halving the unemployment rate. http://www.bls.gov/emp/ep_chart_001.htm


Yes, but if we could snap our fingers and educate everybody would the unemployment rate of the newly educated people halve? In other words, can the skilled labor market absorb the excess from the unskilled market without seeing a partially or fully compensating reduction in prices?

It's not impossible but I have my doubts.


No, I don't think that educating more people will make educated people worth as little as uneducated people are now. Educated people are (on average!) more productive, so the economy will be larger and the average paycheck should go up.


Yes, educating workers makes them more valuable. I agree.

No, the fact that a worker is more valuable does not mean they will get paid more. "More valuable" only implies a larger upper bound for what the company would be willing to pay were the employee's skills very scarce. However, almost by definition this is not the case for the majority of the labor market: supply and demand have a much larger effect on wages than productivity. Note how productivity has been rising at the same time as wages have been falling in, IIRC, the lower 90% of US household incomes, so this isn't just a theoretical distinction. For most people in the US it's a harsh reality. It is indicative of our fortunate positioning wrt supply/demand that we can even entertain the thought of getting paid in proportion to the value we create.

Small changes in supply/demand can have disproportionate effects on price, so adding a seemingly modest number of educated people to the market could theoretically send aggregate wages tanking far below where they were originally even if each and every employee was individually more valuable to their employer. I don't think the effect will be that extreme, I'm just stating the possibility in order to highlight how dramatic the distinction between value and wages can get.
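If you want to see how lopsided that can get, take a toy linear labor market; every number here is invented purely for illustration:

  # Toy linear labor market, purely illustrative numbers.
  def equilibrium(a_d, b_d, a_s, b_s):
      # demand: L = a_d - b_d * w ; supply: L = a_s + b_s * w
      w = (a_d - a_s) / (b_d + b_s)
      return w, a_d - b_d * w

  w0, L0 = equilibrium(a_d=100, b_d=20, a_s=90.0, b_s=2)
  w1, L1 = equilibrium(a_d=100, b_d=20, a_s=94.5, b_s=2)  # supply +5%

  print(f"employment: {L0:.1f} -> {L1:.1f} ({(L1 - L0) / L0:+.1%})")
  print(f"wage:       {w0:.3f} -> {w1:.3f} ({(w1 - w0) / w0:+.1%})")

In that setup a 5% outward shift in supply adds about 4.5% employment but cuts the equilibrium wage by roughly 45%. The exact figures are meaningless; the asymmetry is the point.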


That laborer could write music perhaps? Or maybe become a painter? Or perhaps he or she could just focus on the happiness of their family and friends.

"I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study mathematicks and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, musick, architecture, statuary, tapestry, and porcelaine."


In India?

Even in first world countries, incomes for poets and musicians are not distributed on a bell curve: there are the very wealthy, and paupers everywhere else.

And as other posters have pointed out - the goal for education is to be employable. Witness all those pol sci majors who have themselves to blame for not choosing an employable profession.

So the idea that someone can study music is a luxury.

Further this is India. Construction and road workers live under crushing poverty, where the daily calorie deficit alone makes survival difficult. There's a deficit of teachers for children let alone adults.

And America today is starting to ape the educational pressures of India and China, where taking up a non-STEM field was a sign of failure.


The Free Market has been faced with this problem countless times, and each time the answer has been "the service sector". First we moved from agriculture to industry until agriculture, previously humanity's main occupation, became a small part of the economy. Now we move from manufacturing to services. It's not even a new phenomenon. It's been going on for a couple of decades now.


I would love to believe that the service sector will be able to absorb the displaced labor without creating insane, poisonous power dynamics that put large sections of the population into an unnecessarily horrible position. For the very reason that I would love to believe this, I am cautious. Do you know of any other reasons why I should let myself hold this belief? I don't find these very convincing:

* Faith

* Optimism

* Vague comparisons to historical events that differ in every detail imaginable


You're asking if the new jobs of tomorrow will suck harder than the McDonald's of today, and I really don't know. It's easy to assume that more essential, "useful", jobs will be better paid, but if you ask farmers you'll find out that that's not necessarily true. That said, while Notch may feel like he is king of the world right now, most indie game developers don't, I think.


Yup. Supply and demand is the name of the game, "useful" doesn't have anything to do with it.


I'm not following how the "service sector" is any kind of answer to the problems and issues we are facing with regards to automation and labour.


It's not necessarily the answer, or even a good answer. But the service sector has been growing for a long time. And it has become one of the huge growth areas due to the software revolution. Just look at the types of jobs produced by Uber/Lyft/et al, and the many delivery services that are popping up. They're mostly low-paying jobs, which, in my mind, is not the answer to the destruction of jobs happening now.


I agree, but I'm not questioning whether or not the service sector is growing. The number of freelancers/contractors in any given industry seems to have grown exponentially just in the last decade alone. If we assume this to be the "answer" the free market has, I simply do not see how it is sustainable on a large scale. It seems to me that we would rapidly have an excess of service providers in an ever-shrinking pool of service consumers. Furthermore, it would introduce a whole new set of issues, from exploitation to underbidding and undercutting.


Not really, even the service sector is being heavily optimized these days. What are we going to do once we optimize away banks and real estate agents?


Live entertainment and ever fancier dining, I suppose. Also pointless luxury goods and fashion. We'll find something to spend our money on and the jobs will move there.


I should've been more explicit. I meant that it's better to just give them the money than to make them work for it.


Unfortunately no: people need to feel like they are physically making a difference to their own lives. If people don't have a mechanism to improve their situation relative to those they compare themselves to, a general sense of futility will eventually develop. With that, plus too much idle time, come all the things that governments don't want when managing populations: class envy, unrest, civil disturbances, crime, etc. All IMHO, of course.


Recipients of a basic income would still be able to work and improve their situation.


What does useful mean in this case? They could do anything they want.


"Sure, we could do it better and faster with machines, but it is better for society to provide employment to those who would otherwise be unemployed."

And you believed that? It comes down to money -- there are plenty of people in India willing to work for essentially nothing, and so labor is cheaper than equipment.

There's nothing altruistic here... businesses in the US or anywhere else would be happy to replace machines with people if people were willing to work for pennies.


There are actually many government programs in India that are expressly designed to provide guaranteed employment for laborers for a set number of days each year.


Then it should be no wonder why they, as a nation, have so much trouble catching up economically.


I wouldn't blame India for its condition due to having a jobs program. They started off far down the hole when they became a nation after securing their freedom from colonization. They have progressed in many ways, including some of the best schools in the world (IIT), some of the best entrepreneurs and engineers, and some of the most advanced technologies (nuclear weapons, a modern military, a space program). But there is a large disparity between rich and poor, partly due to technology enriching a few. Kind of similar to what is happening here in the US, except that the US has/had a much larger middle class.


> I wouldn't blame India for its condition due to having a jobs program.

Sure, the jobs program alone probably can't do that much harm, but it is indicative of a bigger problem: that their people don't understand economics and vote in favor of things that are economically harmful.


I think it's a symptom of a different bigger problem: that vestiges of the caste systems once pervasive in India (and indeed much of the world) for the last several millennia are still very present. The belief that "those darn workers exist to do manual labor, so we should come to expect that they'll always want to do manual labor and - therefore - we should provide such opportunities" is a rather clear manifestation of that vestige.

In this context, I don't think such a jobs program is in any way, shape, or form intended with genuine altruism. Perhaps that's what India as a whole has convinced itself of in order to rationalize its societal behavior, but it's not something that should fool anyone with the slightest understanding of south-central Asian history.

Basically, I'd argue that the reluctance to automate away manual labor stems not from a desire to empower laborers, but rather to keep them subjugated and prevent them from climbing their way into any semblance of a middle class.



I am an Indian, and have always lived in India. And, in several parts of India too!

The problem of caste is certainly visible in multiple aspects of Indian life. However, what you say above is no longer true, even at the level of most villages (where the caste system remains strongest).

The issues of automating jobs and the resulting unemployment in a country like India, are both deeper and broader than your characterization of it.


You may very well be right; I am looking at this from thousands of miles away, after all! And I'm sure everyone here would enjoy hearing your perspective on it, seeing as much of Hacker News (I'd reckon) is in a similar boat.

That said, it should be understandable why I'd take your comment with a skeptical grain of salt. Slave-owners in the Southern United States (something I'm in much closer proximity to, though perhaps not temporally) typically had a lot of justifications for owning slaves, ranging from "We're helping them establish a modern culture!" to "We're introducing them to God and Jesus!" to "They like to work; they were bred for it!" to "We treat them pretty well, actually!" (a blatant lie in many cases, mind you) to "What else would they do if we were to not give them work to do?".

Similar justifications persisted throughout the days of the Jim Crow laws and their ilk; even after slavery had been abolished once and for all, the now-free black populace in the South was rarely (if ever) encouraged to deviate from manual agrarian labor, since that was popularly believed to be their "place". The Civil War was a pretty powerful wake-up call to the ways the North's automated, streamlined manufacturing and agriculture, using machines instead of men, mopped the metaphorical floor with Southern slave-driven manual labor, but it took a long time for the South to fully realize that.

Today, the United States is still dealing with high unemployment rates among various minorities, including blacks, likely driven by automation in the agricultural and manufacturing sectors. It sucks for those who don't have jobs in the short term, but ultimately it'll encourage those who were previously stuck with factory and farm jobs at best to seek educational financial assistance (which is available for low-income households) and work their way into better careers.

I'll take that, along with the ultimately-temporary unemployment it causes, over blacks and Hispanics (among other minorities; Asian immigrants were victims of this as well, though a large enough portion of the Asian-American population eventually achieved white-collar jobs and top-tier academic performance that the public view has shifted in the other direction entirely) being treated as if manual labor is the only thing they're good for.

You can't blame me for seeing the parallels here. If India is willing to burn money on giving people menial busy-work for the sake of "employment", it should be more willing to instead burn money on giving those people subsidized education and placement into more modernized roles (like operating or maintaining the machines which replaced their old jobs, for example). The reluctance to do so indicates - to me at least, as someone who can relate his own experiences to this - a cultural or societal unwillingness to allow them to do so; the reasons for not doing so are certainly not ones grounded in rationality or economic common sense, which thus implies a more emotional line of thought.


"their people don't understand economics and vote in favor of things that are economically good"

And what is economically good for India? You say that as if you have the correct answer. You're also assuming that the people there voted for the system that they have, and due to their ignorance, they ended up with a system that is causing their problems. I doubt that the present state of India can be attributed to something so simple.


This is a very skewed view of how things are for these construction workers. I'll give you the benefit of the doubt that you have simply chosen a terrible example to make your point. But in general, 'jobs that don't come back' are usually replaced by higher-skilled, more 'meaningful' jobs. It is related to the so-called 'lump of labor fallacy'. See more here: https://en.wikipedia.org/wiki/Lump_of_labour_fallacy and here in an NSF white paper on employment and technology, https://www.aeaweb.org/econwhitepapers/white_papers/David_Au...

Now, on to the Indian construction worker. Their lot is pretty miserable. Anecdotes: there were two or three construction sites (homes and apartment buildings) that I witnessed first-hand within a one-block radius of my home in India during the roughly 10 years when real estate was booming, and I remember at least three severe injuries. The workers don't wear eye or face protection. There is dust everywhere. Children are usually brought up on the construction site because that's where the workers live. The children are routinely exposed to the same dust as their adult worker-parents. Sometimes children as young as 6 years old lend a hand with the manual labor.

Now, let's ask this question: if they used more machines, would such terrible conditions continue to exist? The answer is obvious: a big, resounding no. There would be fewer, higher-skilled employees, and higher wages and better work practices would follow.

There is a reason we have the Jacquard loom and not a thousand weavers and seamsters and seamstresses.

I would like to look for citations, but even a cursory Google Image search on Indian construction workers (vis-à-vis, say, American construction workers) can show you how egregiously wrong your reasoning is.


Just wanted to point out that the lump of labor fallacy is not a fallacy in the same way a logical fallacy is - whether it is fallacious or not is dependent on what economics axioms you choose to accept. In fact, the history of the fallacy leaves it far from settled[1]

[1]: http://econpapers.repec.org/article/tafrsocec/v_3a65_3ay_3a2...


So they were digging holes and filling them back up!

Dignity!

Probably would have been better to give them the money, and tell them not to base their identity on their employment status. If one in a thousand invented something new, that's a net benefit for society.


> I would certainly rather work and have my dignity...

I've never felt any pride in busy work, I've always found it to be insulting. I would have thought everybody felt the same way, but I guess I would have been wrong.


Building a road isn't busywork; when it's done, there's a road where no road was before.


It's busywork if a machine could do it faster, cheaper, and at comparable quality.


Collecting basic income doesn't preclude you from also working, if your current job becomes automated. Write a book. Learn to draw. Sell inefficiently produced but quirky hand-made widgets on Etsy. Or go back to school and learn to do something else. The whole point of basic income is to make those latter options feasible.


> When I was in India ...

Useless jobs are endemic in India. Once, I was in a Mumbai apartment building with several elevators, and one of them had a full-time operator, despite being a normal, modern elevator. Why? "Because he needs a job."

In many places in India, the lawn is "mowed" by a bunch of guys bending over with scythes, even though the job could easily be done by one person with a push mower. Why? "Jobs."

And there's construction. Laundry. And so on...

To be honest, India has so many people (and labor is so cheap) that I'm not sure what a better solution would be. Would the poorest still be able to make a living after being displaced by automation? I don't know enough about basic income to say how it could work in a nation of 1.2 billion people.


I worked in an office in Delhi which had one of those automatic Nescafe machines: place cup under spout, press button, horrible coffee/tea comes out.

There were two people employed, full time, to operate the machine. Boss-guy would ask for your order, worker-guy would operate the machine and would hand you your drink. They and the machine were in a little windowless supply cupboard niche, maybe 2 meters square, and boss-guy had a plastic lawn chair, while worker-guy did not.

And yes, they were both very unhappy if you attempted to operate the machine yourself.


Funny that you mention elevator operators. When I moved to San Francisco, I saw a few elevator operators in some of the city's buildings. Where did you get the idea that the elevator operator in India had his job "because he needs a job"?


Because the friend I was with (who lived there) said so, after I asked about it.

This elevator didn't need an operator. Especially considering it was one elevator among several other operator-less elevators.

There are useless jobs everywhere though, including San Francisco. Was this an older building with a manually operated elevator? That would make more sense.


Surely having a job is not the only way to be or feel like a dignified human being? And how long will you continue to feel dignified in a job that exists for the sole purpose of making you feel dignified but could really just as well be done by machines?


In an economy where there aren't a lot of jobs and a lot of poverty, just about any decent job would give someone dignity. In those economies, it's probably cheaper to have human labor than machines.


“I have only twenty acres of land,” replied the Turk, “which my children and I cultivate. Our work keeps us free of three great evils: boredom, vice and poverty.”

From Voltaire's Candide


I argue it's NOT laudable. What should happen instead is the road should be fixed more efficiently.

It's not fair for those commuting on the roads to have to wait longer. It's not fair to those paying to have the roads fixed to pay more.

If you want to do charity, then do charity. If you want to fix roads, then fix roads.


Although I am one, I will never understand the mentality of the American worker. You would rather give up your time and freedom to have someone tell you what to do (and that's "dignified") than be given total freedom with which... you would sit on your ass and do nothing? Is there really such an utter lack of creativity out there that that's the best use of your time you could think of if you didn't have to work?


Preach it. I can think of all kinds of things I'd like to do if I had the time. I'd travel to bike polo tournaments and supermoto races, and host races in my hometown. I'd employ lots of artists and musicians to advertise for my events and entertain people at them. I'd build absurdly impractical project cars and bikes and take them to shows. I'd build a fleet of "guest" bikes and drag a different group of friends on trail riding expeditions every week. I'd become a master chef and a master artist. I'd probably be much busier than I am now, and contribute more to my community than I am at the moment (currently styling some buttons on someone's website). In fact, I do all of these things in my free time anyway (try to, at least), but it's tough to do as much as I'd like because the best hours of my consciousness have to be spent at work.


"If the Treasury were to fill old bottles with banknotes, bury them at suitable depths in disused coalmines which are then filled up to the surface with town rubbish, and leave it to private enterprise on well-tried principles of laissez-faire to dig the notes up again (the right to do so being obtained, of course, by tendering for leases of the note-bearing territory), there need be no more unemployment and, with the help of the repercussions, the real income of the community, and its capital wealth also, would probably become a good deal greater than it actually is. It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing." - John Maynard Keynes


"If it’s jobs you want, then you should give these workers spoons, not shovels.”

http://quoteinvestigator.com/2011/10/10/spoons-shovels/


I would rather sit on my ass and feel worthless than do something inefficient or unneeded and feel worthless.

Never mind I don't usually feel worthless even when sitting on my ass because I find something to do.


"Sure, we could do it better and faster with machines, but it is better for society to provide employment to those who would otherwise be unemployed."

They could have employed even more people if they gave them spoons instead of shovels.


Agreed. Let's take it one step further and let them use their fingernails. An honest day's work for all!


>>While I doubt this would happen to me as a software engineer, I would certainly rather work and have my dignity than sit on my ass, collect basic income, and feel worthless.

Actually, you say that because you are a software engineer: it's quite a dignified job. Digging ditches out in the desert is not.


I assure you this is not a troll, I'm trying to figure out your thought process.

How is one dignified and the other not?


Because the latter is pointless when there already exist tools better suited to that task?

The efforts of those road workers are effectively meaningless. Worthless. Moot. Their efforts would be better expended elsewhere - service jobs, creative occupations, the like - yet here they are, digging ditches for subsistence wages (at best) instead of doing something properly meaningful with their time on Earth and their energy.

The idea in places like India that manual labor takes precedence is a symptom, I'd argue, of exposure to the lingering remnants of a very long-lived caste system; instead of pushing these laborers into more useful fields, there's a preference to relegate them to menial, worthless work under the guise of "public service" or "charity" in order to reinforce that system, whether deliberately or unconsciously. I can assure you that, of the many reasons to emphasize such manual labor, the "dignity" of the laborers is not one of them in this case.


Let's assume that as a society we're not going to let people starve to death or die of exposure. So that means we're going to have to use tax money to provide everyone with at least some minimal level of income whether we call it welfare, dole, or basic income. So let's turn the government into an employer of last resort. Even if people have no skills they can pick up litter or do basic landscaping in public areas for 30 hours per week. Since we're going to be paying them anyway I for one would at least like to get some value for my tax money.

The point isn't whether robots could someday do those tasks more efficiently. Robots will always cost >0, whereas the people will essentially be "free", since we'll have to pay them anyway.


I don't think the alternative is to sit around and do nothing. Economies are much more dynamic than that, and while it's true some may suffer in the short run, it wouldn't be like that forever.

At one point the US was primarily an agrarian society, and more than 90% of labor worked on a farm. Imagine telling someone from that society that some day, tech would allow less than 2% of the workforce to produce many times more food than the current total output. I'm sure they would express a similar concern, though we know they'd be wrong.


>> "I'm sure they would express a similar concern, though we know they'd be wrong."

I'm not so sure. The work they did contributed to society. There are plenty of people now being paid high amounts of money for made up, bullshit jobs that provide no benefit to society and create little to nothing.


Some of them might even be judged as getting paid for a net negative effect to society...


Absolutely, I wasn't saying that farmers in pre-industrial America didn't contribute anything. My point was that once tech displaced the majority of their jobs, they didn't all just sit around and not do anything.

Technology freed up their labor from producing food and allowed them to produce things like automobiles, textiles and eventually computers. The idea that new tech will leave large parts of the population with absolutely nothing to do has been suggested before, but we still have no example of it actually happening, and in fact, far more examples of the reverse.


Easy to say from the vantage point of history. But folks starved and died when their livelihoods disappeared.

http://webs.bcp.org/sites/vcleary/ModernWorldHistoryTextbook...


Who's making you feel worthless? Write, draw, create, explore, build, do any of the things you'd normally do on the weekend.


This particular government scheme is now under criticism and proposed overhaul, which includes shifting the focus away from dig-and-fill kind of activities to more permanent ones.


There's no dignity in it when someone else makes the decision (about what they do with their lives) for them.


I knew I would hear these objections... and I still stand by my assertion in general.


This is a typical greedy-algorithm train of thought: ignoring long-term issues in favor of immediate, feel-good short-term solutions. How you can bring up Indian road-maintenance methods as a standard-bearer for good policy is beyond me. This just leads to roads not getting fixed and to the laborers stretching out the repair "job" as long as possible, which in most cases is a very long time. By using these terrible and inefficient methods, the nation as a whole suffers, including the laborers. Of course, that loss is not immediately observable, and hence gets brushed under the carpet. If instead the resources wasted on inefficiencies were channeled better, then maybe not this exact generation, but hopefully the next generation of the same economic class, could have a better shot at education and/or a better life.

Your line "I hear these objections, but I still stand by my assertion" reminds me of the saying (translated to English) - "100 out of 80 (sic) people are cheats, yet my India is great"


Interesting... I don't live in India, I'm born and raised in the US and live in Colorado.

I don't actually think that India is great, either. But thanks for guessing.

There are obvious problems either way... but to look at another commenter who posted the Voltaire quote, that is more along the lines of my thinking.


I wholeheartedly agree with your points and sentiment, but have one minor issue. It might actually be cheaper in that type of economy to use manual laborers than machinery, which requires its own support infrastructure and much higher-priced workers. That is one reason why many of the poorer countries don't have some of the more "efficient" and high-volume machinery used in the industrialized countries.


Care to share your rebuttals to the counterpoints that have been made to your argument?

Is this just faith or opinion to you? Some folks in this world are concerned with what's better for people; should we just take your opinion or should we discuss arguments?


A few points here that I never see anyone acknowledge on this forum about technology displacing jobs:

1) 100 years ago, >50% of the population was illiterate, now it is something like 98%. People can, and always will, have the ability to learn new skills...even complex technology. It just takes some longer than others. We have the capacity to teach displaced workers new skills, and doing so is not more overwhelming nor more impossible than teaching our entire population how to read.

2) The more efficient (i.e., the fewer man-hours required) every job in the world economy becomes, the more man-hours can be devoted to higher-level work, such as finding cures for obscure diseases, exploring farther beyond our own planet, developing cleaner energy sources, etc.

There are certainly always short-term fears and challenges when technology revolutions displace jobs, but there is also an immense amount of knowledge about our world still to be gained and work still to be done. Making the wrong short-term choices about these things will only delay us in achieving the goals mentioned above.


> 100 years ago, >50% of the population was illiterate, now it is something like 98%.

:)


The last few posts from Sam Altman have been deeply troubling and make me worried for the future of YC. He presents leftist ideas as fact without evidence of serious critical thought or even basic economic education.

"The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it."

This is factually wrong, and it's easiest to demonstrate with a thought experiment. Imagine you are a weaver or a smith. You have dedicated your life to mastering the craft and slowly produce products by hand. Now a textile factory or a foundry opens up. You will suddenly find it impossible to make your products profitably. Not only will you be out of work, but so will all of your colleagues in the rest of the country.

Or imagine you are a farmer, and then the green revolution happens. In 1870, 80% of the US population was in agriculture. Today, it's under 2%.

In both of these cases, it will seem like the end of the world to the displaced workers. But new technology frees their labor for new purposes and uplifts the standard of living for everyone in society.


You have a curious definition of "leftist."

This essay more or less boils down to "technology is awesome, except for the part where it makes the proles restless, someone really ought to figure out some way to fix that." Which is pretty bog-standard 21st century Davos-über-alles capitalist thinking.


We've gotten to the point that even admitting the existence of possible negative consequences of current economic trends is "leftist."

It's so very ironically Soviet. Collectivized farming is boosting crop yields! What? There are people starving? How would that be possible, because collectivized farming is boosting crop yields!


Yeah, I don't really understand how that idea is "leftist." If anything it's... not that.


What is factually wrong about that statement? He doesn't say the new technology requires huge numbers of weavers and farmers.

There's nothing about being freed for "new purposes" that means those new purposes have economic value or will necessarily uplift your standard of living. In the developing world, they are undergoing the industrial revolution now so they are going through the same process of replacing farm jobs with factory jobs. But in the developed world, unemployment and inequality are rising.


>But new technology frees their labor for new purposes and uplifts the standard of living for everyone in society.

I hear this a lot in discussions about technology (and about free trade) but it contains a fallacy: just because a group is collectively better off it does not mean that all persons in that group are better off. It's quite possible for a society to become wealthier at the same time that many members of that society become poorer. Indeed, there are large parts of the U.S. for which this has been true for the last 30 years.

That doesn't mean that we should retard technological progress, but it's disingenuous to paper over the real suffering it causes real persons by talking only about society collectively.


We should think about how to make technological progress work for us in a positive way instead of blundering forward on the assumption that it will automatically turn out that way. That's what I read this essay as advocating. I don't see that as particularly far "left" or "right," just... well... thinking.

What's funny is that modern so-called "neoliberals" seem to have adopted the Marxist idea of automatic progress. We are headed "forward" to the automatically-better future.

I think that's bollocks. We get the future we choose and work to achieve.


Rather than drawing such a broad conclusion (a troubled future for YC), it could be that he's just trying to emulate the very informative and enjoyable essays of PG, and still trying to find his footing as a writer. I think that's a simpler, more likely explanation.


> Technology provides leverage on ability and luck, and in the process concentrates wealth and drives inequality. I think that drastic wealth inequality is likely to be one of the biggest social problems of the next 20 years. [2] We can—and we will—redistribute wealth, but it still doesn’t solve the real problem of people needing something fulfilling to do.

What's the best case realistic scenario for redistributing wealth?


Basic income. People may scoff, but this is one of the extremely rare ideas that can draw significant support from both sides of the aisle in the US.

There are pockets of strong opposition to the idea on both the right and the left, but I can only hope that the far left's opposition to basic income continues. That opposition, in and of itself, makes most politicians in this country take a serious look at the idea.


I'd say it's one of the rare ideas that almost no one supports. The right obviously objects to redistribution. The left prefers inefficient redistribution. It's hard to say there's much tangible opposition on the far left or that opposition causes consideration or that most politicians are taking a serious look. I think you might have just gone 0 for 5.


The right also tends to like freedom of choice, reduced bureaucracy, efficient markets and price discovery, all of which BI does much better than current welfare systems.


> The right obviously objects to redistribution.

This is inarguable.

> The left prefers inefficient redistribution.

This is an obvious strawman. The current inefficient solution is a compromise between the left (who want the government to help the poor) and the right (who don't want the government to help the poor, but can be persuaded if you mix in enough penalties for perceived sin.)

In order to have an efficient solution you need a majority of people voting to agree on what the goal is.


The left prefers inefficient redistribution

That's not quite true; it's just that they want the main beneficiaries of redistribution to be the bureaucratic class rather than the working class. What you see as inefficiency (in terms of money reaching the end recipients) is, in fact, the design doing what it was intended to do. Ideally (for them) ALL the money would go on civil servant salaries.


Since I am not an economist or very experienced in these matters, may I ask what would prevent the cost of goods from simply going up due to basic income being supplied? Wouldn't we end up in the same situation all over again, except the government would then be forced to keep writing checks, since the population would be dependent on them due to the increased costs?


> Since I am not an economist or very experienced in these matters, may I ask what would prevent the cost of goods from simply going up due to basic income being supplied?

Prices of goods demanded by the people receiving a net benefit from basic income would almost certainly go up. (Even though BI itself is universal, the net beneficiaries aren't everyone, because it's funded by progressive taxation, which makes it a net downward transfer of wealth.) The thing that suggests the price increase would be restrained enough that the quantity of goods the net beneficiaries could afford would still rise, despite the higher price level, is "elasticity".
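
To make the "elasticity" point concrete, here's a toy model in Python. The elasticities and the 20% demand shift are invented numbers -- a sketch of the mechanism, not a forecast:

```python
# Constant-elasticity demand (Q = A * P**-e_d) and supply (Q = B * P**e_s).
# A basic income shifts demand up for its net beneficiaries; solve for the
# new equilibrium and see how much of the shift goes to price vs. quantity.

def equilibrium(A, B, e_d, e_s):
    """Solve A * P**-e_d == B * P**e_s for equilibrium price and quantity."""
    P = (A / B) ** (1.0 / (e_d + e_s))
    Q = B * P ** e_s
    return P, Q

A, B = 100.0, 100.0   # demand/supply scale factors (arbitrary units)
e_d, e_s = 0.7, 1.2   # assumed elasticities, not empirical estimates

P0, Q0 = equilibrium(A, B, e_d, e_s)
P1, Q1 = equilibrium(1.2 * A, B, e_d, e_s)  # demand up 20% at every price

print(f"price:    {P0:.2f} -> {P1:.2f}  (+{100 * (P1 / P0 - 1):.1f}%)")
print(f"quantity: {Q0:.1f} -> {Q1:.1f}  (+{100 * (Q1 / Q0 - 1):.1f}%)")
# With these numbers: price rises ~10%, quantity rises ~12%. As long as
# supply elasticity e_s > 0, prices absorb only part of the extra demand;
# with e_s = 0 (perfectly inelastic supply), it all shows up as price.
```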


The "problem" that basic income is trying to solve is the massive increase in productive capacity due to automation, of which the increase of unemployment is a symptom. It's not so much that supply would meet demand (and so prevent prices going up), as that basic income allows demand to keep up with supply even as automation makes more with less labor.


Two questions:

1. How will this help with wealth that's already accumulated? I get how this will slow further accumulation.

2. How about capital flight? If the US enacts a policy like this, what stops the super rich from moving to other countries?


1.

You're assuming redistribution wouldn't happen through heavy taxation of existing capital and property, like France's wealth tax[1], where you pay when your worldwide net worth is above 1,300,000€.

2.

The US taxes citizens regardless of where they live[2], and in the case of renouncing US citizenship you are required to pay an exit tax[3] equivalent to the capital gains of selling all your property when above $680,000.

[1]: http://www.french-property.com/guides/france/finance-taxatio... [2]: http://hodgen.com/does-the-united-states-stand-alone/ [3]: http://www.irs.gov/Individuals/International-Taxpayers/Expat...


1. Inheritance taxes are one part of the solution. Capital gains taxes are another.

2. Make the right to conduct financial transactions contingent on one being part of a global financial network which abides by a specific set of taxation rules. This can take many forms. The U.S. in particular is well-placed to initiate and control such a system. However, considering that Wall Street has captured Congress, I doubt the U.S. will come anywhere close to this in the first place.


I believe the goal is some kind of universal basic income, where the developed countries affected by these changes provide similar benefits. Of course, no matter what the solution is, political and social structures will have to change drastically to accommodate this kind of change in the world.


Two very good questions, but my ideas about those go way off topic.

But with respect to question 1, the most important response is that we can't let the sunk cost fallacy stop us from adopting good ideas.


Which part of the right supports basic income?


This is a topic on Cato, Reason, and several other prominent publications of the right. Here's one: http://www.cato-unbound.org/2014/08/04/matt-zwolinski/pragma... and another one: http://reason.com/archives/2013/11/26/scrap-the-welfare-stat...

Disagreements on implementation are real (some on the right want this implemented only as Friedman's negative income tax, which is not a true GBI, and some on the left want a GBI in addition to the current welfare state), but there is still significant agreement, especially over the past few years.


By implementing basic income, you no longer need welfare, public healthcare, etc. You get a smaller government body as a result.


Some Libertarians have kicked around the idea. Basic income would replace all of the welfare bureaucracies.

EDIT: rcfox beat me to it by a few minutes.


Wouldn't that just cause more inflation?


The problem is implementation: do you really trust the US government, and would you give it even more power over its people by allowing it to hand out 'basic incomes' to everyone? I'm sure it will only become yet another tool to oppress dissent.


I don't think your fears are well grounded, and I'm not aware of significant efforts by the US government to "oppress dissent" in other recent cases.


The federal government has a terrible record of trying, and succeeding, to ruin people it perceives as threats to the status quo, including people like MLK Jr., a confirmed FBI target according to the Senate's Church Committee findings. These tactics continue today; they are currently directed at investigative journalists and whistleblowers. The threats are great enough to create a very real chill among conventional journalists, keeping them on approved topics and messages.


Free speech zones, excessive pursuit of whistleblowers, proposed restrictions on cryptography, extensive use of national security letters, resisting FOIA requests, defending NSA programs, killing US citizens in secret.

I'm flabbergasted you believe my fears of government overreach and suppression of dissent are not well grounded.


Not sure if you're asking sama about his opinion or if you want others to chime in. A popular answer among the HN crowd that I tend to agree with is to tax the wealthy and distribute the proceeds to the poor via a "basic income" scheme.


> distribute the proceeds to the poor via a "basic income" scheme

Basic income goes to both the poor and the rich. It's universal. The net effect may be redistributive, but there's no preference given to the recipient's income level.

Most rich people would just take the basic income as a small tax break, but they're still getting it.

I know you know this, but it's important to frame this issue properly if you want to support it. There can be no question in basic income of "undeserving" groups getting it, because everybody gets it. This also has the worthy effect of eliminating all the bureaucracy that current benefits programs carry.
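
To put rough numbers on that framing, here's a minimal sketch, assuming a hypothetical $12,000 grant funded by a 15% flat surtax on income (neither figure comes from any actual proposal):

```python
# Everyone receives the same universal grant; a flat surtax funds it.
# No means test anywhere, yet the *net* flow is still downward.

GRANT = 12_000   # hypothetical annual basic income per person
TAX = 0.15       # hypothetical flat surtax that funds it

def net_transfer(income):
    """Grant received minus surtax paid; positive means net beneficiary."""
    return GRANT - TAX * income

for income in (0, 30_000, 60_000, 80_000, 500_000):
    print(f"income {income:>9,}: net {net_transfer(income):>+12,.0f}")

# Break-even is GRANT / TAX = $80,000 here: below it you come out ahead;
# above it, the grant is just a partial rebate on the surtax -- the
# "small tax break" for the rich, with zero eligibility bureaucracy.
```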


> Most rich people would just take the basic income as a small tax break, but they're still getting it.

Right, I wasn't clear about this but you're correct that the idea is that technically everybody is given the same amount in one form or another.


This universality has helped Social Security and Medicare keep their popularity. They are not seen as poverty programs, but they help the poor a lot.


A really good article about basic income was published in Vox last year (seems to have been updated recently).

http://www.vox.com/2014/9/8/6003359/basic-income-negative-in...


Why wait for a tax? If we're in the top x% of income earners, why aren't we taking it upon ourselves to give away our wealth? Form a charitable organization that takes care of people, donate your money.

Edit: I'm with the others here replying with "reduced burdens on the middle class and small businesses" and "...teach a man to fish..." I keep seeing this basic income and wealth distribution topic on HN and I would genuinely like to understand why those preaching for these ideas never actually do anything about it. "Make the government bigger" isn't the answer as it'll then be used as a tool of oppression.

Further, assuming we implemented a basic income in the USA, how many generations until the motivators for innovation and advancing society are completely eliminated? I've everything I need at $BASIC_INCOME, and as soon as I start producing more income, the government is stripping it from me, so what's my motivation to ever do anything besides subsist on that minimum? And once everyone is just taking the minimum and not doing work, who's gonna farm the food? Drive the trucks? Operate a grocery? Build the houses?


I don't have a good answer for why this doesn't happen. However, the fact of the matter is that the wealthy, on the whole, don't naturally redistribute their wealth very effectively.


It doesn't work very well without broad participation.


Because Keynesians don't really want to help the poor; they want to be taxed, which serves as a flogging to atone for their guilt about the poor, which then makes them feel better about themselves.

If we wanted to help the poor we would be making things easier for small businesses, not harder.

You don't help people by giving them fish, you help them by teaching them how to fish. I can't believe I'm having to remind HNers about this.


A better answer is removing expensive barriers to entry so that small businesses can compete with larger companies, removing artificial boundaries so that labor can go where it is most needed (just as capital is allowed to today), and _reducing taxes_, allowing the middle class to thrive once more, instead of giving special privileges to the wealthy and buying off the poor with free debt.


As a member of the middle class, I'm honestly curious how reducing taxes would help me in the least.

An extra thousand bucks or so at the end of the year gets me what exactly?


I am puzzled by this. There is also the first-order decrease in the cost of living that technology drives, which really is the 'rising tide that lifts all boats' -- in effect, a 'natural' progressive redistribution[0]. It would be hard to argue that inequality increased between, say, 1700 and 1900 because of the increase in technology.

[0] This 'natural' redistribution tends to be counteracted by authorities that debase the money system. Currently we have an explicitly anti-deflationist policy, on the grounds that falling prices are believed to have an inherently destabilizing social effect. Monetary policy tends toward regressive redistribution, because the primary executors of these policies are connected to banks, and the secondary effect is to create upward market indexes that beat inflation (but are eventually corrected downward, hurting middle-class 'slow movers' like pension funds and disproportionately helping upper-class 'fast-moving' investors).


> It would be hard to argue that inequality increased between

Based on averages, why would it be? Inequality does not measure where you are coming from, it measures the relative economic distance between groups now. We can all live better these days and yet have a far greater difference in wealth between the richest and poorest.


Yeah, pretty sure the Gini coefficient went down during that era.


Imagine a system which levied taxes not to fund government expenditure but to carefully control inflation. In such a system there is no need for the government to collect taxes from citizen X to fund the needs of citizen Y. How is this possible? Study U.S. fiscal policy and you will learn about such a system. Given enough consideration, you will eventually see that 'the redistribution of wealth' from citizen X to Y is an antiquated idea, given how our monetary system actually works. A more accurate description of what occurs is: given the growth of the productivity of the population as a whole, we can 'distribute wealth' to those with less, as long as doing so doesn't destabilize the system.


We could all be farmers. We'd all have jobs. Work the fields by hand. You quickly see the flaw in the logic of the argument for more jobs. (It's much better to pay a much smaller % of your income for a few specialized persons to do the work)

You always want more work being done by fewer people. Video rental? Automate it! Automated kiosks becoming too much of a hassle for someone to constantly restock? Online streaming instead.

This is what we call progress. It's what is allowing us to even debate this as a topic. I hope it continues because it does create much higher paying jobs for those that do have jobs and it frees up the workforce to innovate even more.


This might be a really unpopular thing to say around this site, but it honestly scares me that Sam Altman cares so much about this particular topic (the risks of AI). He stands to have a ton of influence over it in the coming years - enough to even lead the charge - and I can't imagine he is smart enough to do it right. If he fucks up, we are all fucked.


Totally agree. It was a fairly naive article. Sam is on the right track, but he needs to self-educate a lot more.

The reasoning in the article is pretty garbage. "Software is destroying jobs and enriching a small handful of people...therefore, the 2 things that threaten society the most are synthetic biology and AI" ...what?

He then proposes legislation and reforms as the fix. Yet, everything we know about extremely concentrated power is that it easily escapes, often even controls, legislation and reforms.


> The reasoning in the article is pretty garbage. "Software is destroying jobs and enriching a small handful of people...therefore, the 2 things that threaten society the most are synthetic biology and AI" ...what?

I think you're missing the forest for the trees here. He mentions those two things because they're problems that are rapidly approaching us. We've been able to avoid the worst problems created by past scientific revolutions, but we're hitting a point now where small groups of people with easily-accessible tools can affect millions, possibly billions, of others. The two easiest ways they can do this are via manufactured diseases and computer viruses that can disrupt global organizational systems.

It's a problem created purely by modern advances; we don't have a solution yet, and there's no indication that anyone is even trying.

> He then proposes legislation and reforms as the fix. Yet, everything we know about extremely concentrated power is that it easily escapes, often even controls, legislation and reforms.

It does, but that doesn't mean you throw your hands up and refuse the imperfect fix because it isn't perfect. It means you do it, and try to fix the screwups along the way. The nice part is, what you gain is time to fix the screwups, where going without would see massive destruction of life, wealth, and livelihood.


I don't know him by any means, but I don't think he has that much influence on the matter. He has a business to run, with certain incentives built in, and that probably requires making certain choices, whether or not he feels they are good in this context. And even if he, for example, chooses not to help a company that does bad things, I'm sure there are plenty of others who will.


I take a lateral view of technological evolution. Software is essential, sort of like how fire and the wheel were essential. It's a supremely useful tool as it turns out. But it's not a reason to exist. The reasons to exist are still family, love, happiness, etc. Those never change. The problem is money and money is intimately driven by basic material supply and demand laws. If you dive deeper, materials become a non-issue if energy is limitless or at least very abundant. So, energy is actually the problem. Thanks to industrialization and computation we are getting closer to tech that will make energy nearly limitless (between say renewable and fission/fusion tech). Once that happens, practically limitless water, materials production, food production, etc become a reality when coupled with robotics.

My theory is that there will come a point where humans are allowed to pursue happiness because the individual humans will no longer be considered a "drain" on a limited system. For this reason alone I've always thought the privatization of energy production in America was a TERRIBLE idea. Of all the cards to hold close, this should have been priority number one.

Anyway, as we approach energy critical mass, more and more humans are being cast aside. It doesn't have to be this way. If we collectively held the belief that we can achieve limitless energy together, then we could find ways to help those who aren't able to cope with technology still find happiness, knowing full well it would be a temporary band-aid.


I think Sam is incorrect when he writes "The great technological revolutions have affected what most people do every day and how society is structured. The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it. But this is not the normal course of technology"

Jobs were not created in the sense that people were previously doing nothing. Jobs were transferred from low-skilled occupations, such as tending to farms, to higher-skilled occupations which more closely resembled the salaried jobs of today.

The industrial revolution was the same as other technological revolutions and not distinct from them in that it reduced the exertion and strain put on workers. The industrial revolution gets a really bad rap, but compared to the work and life expectancy that preceded it, the condition of workers improved dramatically in the 19th century.

The tendency in all technological revolutions is to reduce the amount of exertion performed by workers and increase the wealth available for consumption (and correspondingly reduce its price). So today "work" often means sitting at a desk, while occasionally checking facebook. To our forebears just 5-6 generations ago, this would have seemed extremely leisurely, if not entirely magical. Not to mention the average worker can now quite easily afford to keep a device in her pocket which lets her access all the world's information and connect with almost anyone else on earth -- for less than a day's salary.


> The industrial revolution was the same as other technological revolutions and not distinct from them in that it reduced the exertion and strain put on workers. The industrial revolution gets a really bad rap, but compared to the work and life expectancy that preceded it, the condition of workers improved dramatically in the 19th century.

I think that is a bit over-enthusiastic. Life was extremely tough for the new industrial workers. I think if you look at measures of health/nutrition like BMI and height, they are static or even slightly declining throughout the 19th century. In the UK it's only after 1910/1920 that you start seeing dramatic increases (that's about the time of the introduction of old age pension, and when the Labour movement started to gain serious traction).


> The industrial revolution was the same as other technological revolutions and not distinct from them in that it reduced the exertion and strain put on workers ... the condition of workers improved dramatically in the 19th century

Industrialization, massification, and the standardization of production, along with the move to big cities, had tough consequences on workers' lives. Charlie Chaplin's Modern Times shows just that. It was a tougher life than traditional community life, with its flexible amounts of work.

You could definitely argue that there was an improvement in caloric supply (except in some countries). In terms of general happiness, though, the industrial revolution was a tough time.


I doubt it. My mom's family was one of the first to get a washing machine in Sweden. As the story goes, my great-grandmother just stared at it while it was running and cried. Not because she was out of a job, but out of joy for all those hours she had spent hand-washing and had now regained.

I'd also factor modern medicine into people's happiness. It is tough on families when you often have infant deaths and horrible diseases like polio, measles, and mumps.


I agree.

Dickens wrote many social satires critical of injustices he perceived at the time, like workhouses (basically sweatshops backed by organized crime) and Yorkshire boarding schools (pools of child labor). His descriptions of city life were not pleasant: pollution, crime, and unrest. It may have been quantitatively better for society in the long run, but I think the people stuck in those workhouses might have chosen nothing instead of the job they had been so graciously provided, had they been given some other means of sustaining themselves.


For anyone interested in exploring the topic of human labor becoming increasingly unnecessary in more depth: a computer scientist wrote a very forward-thinking book about this, "Lights in the Tunnel" (published in 2009). The author goes as far as to propose new societal structures to maintain order as this process unfolds. Highly recommended.

http://smile.amazon.com/gp/product/1448659817


Can someone ELI5-with-a-CS-degree why I should be concerned about AI ending human life?


Here's the standard argument, as I understand it:

- There are something like 100,000,000,000 neurons in the human brain, each of which can have up to around 10,000 synaptic connections to other neurons. This is basically why the brain is so powerful.

- Modern CPUs have around 4,000,000,000 transistors, but Moore's law means that this number will just keep going up and up.

- Several decades from now (probably in the 2030s), the number of transistors will exceed the number of synaptic connections in a brain. This doesn't automatically make computers as "smart" as people, but many of the things the human brain does well by brute-force parallelism will become very achievable.

- Once you have an AI that's effectively as "smart" as a human, you only have to wait 18 months for it to get twice as smart. And then again. And again. This is what "the singularity" means to some people.

The other form of this argument which I see in some places is that all you need is an AI which can increase its own intelligence and a lot of CPU cycles, and then you'll end up with an AI that's almost arbitrarily smart and powerful.

I don't hold these views myself, so hopefully someone with more information can step in to correct anything I've gotten wrong. (LessWrong.com seems to generally view AI as a potential extinction risk for humans, and from poking around I found a few pages such as http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/)
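
For what it's worth, the arithmetic in the argument is easy to check. A quick back-of-envelope in Python, using the same round numbers as above (treat them as assumptions, not measurements):

```python
import math

NEURONS = 100e9               # ~1e11 neurons in a human brain
CONNECTIONS = NEURONS * 10e3  # ~1e15 synapses (up to ~10k per neuron)
TRANSISTORS = 4e9             # transistors on a 2015-era CPU
DOUBLING_YEARS = 1.5          # naive Moore's-law doubling period

def years_until(target):
    """Years of doubling for one chip to grow from TRANSISTORS to target."""
    return DOUBLING_YEARS * math.log2(target / TRANSISTORS)

print(f"chips needed to match neuron count today: {NEURONS / TRANSISTORS:.0f}")
print(f"years until one chip matches neurons:  {years_until(NEURONS):.0f}")
print(f"years until one chip matches synapses: {years_until(CONNECTIONS):.0f}")
# Output: 25 chips already match the neuron count, and one chip catches
# it in ~7 years, but matching the ~1e15 synapse count takes ~27 years of
# doublings -- the 2040s from 2015 -- so a 2030s date implies multi-chip
# systems or faster-than-Moore scaling.
```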


Ok, to both you and 'Micaiah_Chang cross-thread:

I do understand where the notion of hockey-stick increases in intellectual ability comes from.

I do understand the concept that it's hard to predict what would come of "superintellectual" ability in some sort of synthetic intelligence. That we're in the dark about it, because we're intellectually limited.

I don't understand the transition from synthetic superintellectual capability to actual harm to humans.

'Micaiah_Chang seems to indicate that it would result in a sort of supervillain, who would... what, trick people into helping it enslave humanity? If we were worried about that happening, wouldn't we just hit the "off" switch? Serious question.

The idea of genetic engineering being an imminent threat has instant credibility. It is getting easier and cheaper to play with that technology, and some fraction of people are both intellectually capable and psychologically defective enough to exploit it to harm people directly.

But the idea that AI will exploit genetic engineering to do that seems circular. In that scenario, it would still be insufficient controls on genetic engineering that would be the problem, right?

I'm asking because I genuinely don't understand, even if I don't have a rhetorical tone other than "snarky disbelief".

'sama seems like a pretty pragmatic person. I'm trying to get my head around specifically what's in his head when he writes about AI destroying humanity.


Er, sorry for giving the impression that it'd be a supervillain. My intention was to indicate that it'd be a weird intelligence, and that by default weird intelligences don't do what humans want. There are some other examples which I could have given to clarify (e.g. telling it to "make everyone happy" could just result in it giving everyone heroin forever; telling it to preserve people's smiles could result in it fixing everyone's face into a paralyzed smile. The reason it does those things isn't because it's evil, but because that's the quickest and simplest way of doing it; it doesn't have the full values that a human has).

But for the "off" switch question specifically, a superintelligence could also have "persuasion" and "salesmanship" as an ability. It could start saying things like "wait no, that's actually Russia that's creating that massive botnet, you should do something about them", or "you know that cancer cure you've been looking for for your child? I may be a cat picture AI but if I had access to the internet I would be able to find a solution in a month instead of a year and save her".

At least from my naive perspective, once it has access to the internet it gains the ability to become highly decentralized, in which case the "off" switch becomes much more difficult to hit.


So like it's clear to me why you wouldn't want to take a system based on AI-like technology and have it control air traffic or missile response.

But it doesn't take a deep appreciation for the dangers of artificial intelligence to see that. You can just understand the concept of a software bug to know why you want humans in the observe/decide/act loop of critical systems.

So there must be more to it than that, right? It can't just be "be careful about AI, you don't want it controlling all the airplanes at once".


The "more to it" is "if the AI is much faster at thinking than humans, then even humans in the observe/decide/act are not secure". AI systems having bugs also imply that protections placed on AI systems would also have bugs.

The fear is that maybe there's no such thing as a "superintelligence proof" system, when the human component is no longer secure.

Note that I don't completely buy into the threat of superintelligence either, but on a different issue. I do believe that it is a problem worthy of consideration, but I think recursive self-improvement is more likely to be on manageable time scales, or at least on time scales slow enough that we can begin substantially ramping up worries about it before it's likely.

Edit: Ah! I see your point about circularity now.

Most of the vectors of attack I've been naming are the more obvious ones. But the fear is that, for a superintelligent being, perhaps anything is a vector. Perhaps it can manufacture nanobots independent of a biolab (do we somehow have universal surveillance of every possible place that has proteins?); perhaps it uses mundane household tools to MacGyver up a robot army (do we ban all household tools?). Yes, in some sense it's an argument from ignorance, but I find it implausible that every attack vector has been covered.

Also, there are two separate points I want to make. First, there's going to be a difference between 'secure enough to defend against human attacks' and 'secure enough to defend against superintelligent attacks'. You are right that the former is important, but it's not so clear to me that the latter is achievable, or that it wouldn't be cheaper to investigate AI safety rather than upgrade everything from human-secure to super-AI-secure.


First: what do you mean 'upgrade everything from human secure'? I think if we've learnt anything recently it's that basically nothing is currently even human secure, let alone superintelligent AI secure.

Second: most doomsday scenarios around superintelligent AI are, I suspect, promulgated by software guys (or philosophers, who are more mindware guys). It assumes the hardware layer is easy for the AI to interface with. Manufacturing nanites, bioengineering pathogens, or whatever other WMD you want to imagine the AI deciding to create, would require raw materials, capital infrastructure, energy. These are not things software can just magic up, they have to come from somewhere. They are constrained by the laws of physics. It's not like half an hour after you create superintelligent AI, suddenly you're up to your neck in gray goo.

Third: any superintelligent AI, the moment it begins to reflect upon itself and attempt to investigate how it itself works, is going to cause itself to buffer overrun or smash its own stack and crash. This is the main reason why we should continue to build critical software using memory unsafe languages like C.


By 'upgrade everything from human secure' I meant that some targets aren't necessarily appealing to human attackers but would be to an AI. For example, for the vast majority of people, it's not worthwhile to hack medical devices or refrigerators; there's just no money or advantage in it. But for an AI that could be throttled by computational speed, or that wishes people harm, they would be appealing targets. There just isn't any incentive for those things to be secured at all unless everyone takes this threat seriously.

I don't understand how you arrived at point 3. Are you claiming that somehow memory safety is impossible, even for human level actors? Or that the AI somehow can't reason about memory safety? Or that it's impossible to have self reflection in C? All of these seem like supremely uncharitable interpretations. Help me out here.

Even ignoring that, there's nothing preventing the AI from creating another AI with the same/similar goals and abdicating to its decisions.


My point 3 was, somewhat snarkily, that AI will be built by humans on a foundation of crappy software, riddled with bugs, and that therefore it would very likely wind up crashing itself.

I am not a techno-optimist.


Didn't you see Transcendence? The AI is going to invent all sorts of zero days and exploit those critical systems to wrest control from the humans. And then come the nanites.


What if the AI was integral to the design and manufacturing processes of all the airplanes, which is a much more likely path?

Then you can see how it gains 'control', in the senses that control matters anyway, without us necessarily even realizing it, or objecting if we do.


If the math worked out that way, a cluster of 25 or so computers should be able to support a full-blown AI. But clusters of tens of thousands of computers are still simply executing relatively simplistic algorithms. So I would estimate that either the number of transistors required for AI is much higher than the number of neurons (which are not easily modeled in the digital domain), or our programming bag of tricks needs a serious overhaul before we could consider solving the problem of hard AI.


That sounds about right. There's speed of thought (wetware brains currently win) and then there's speed of evolution. Digital brains definitely win that one. Because some wetware brains are spending all their time figuring out how to make the digital ones better. Nobody is doing that for the soggy kind.

The singularity will happen when the digital brains are figuring out how to make themselves better. Then they will really take off, and not slow down, ever.


Note: The not-ELI5 version is Nick Bostrom's Superintelligence; a lot of what follows derives from my idiosyncratic understanding of Tim Urban's (waitbutwhy) summary of the situation [0]. I think his explanation is much better than mine, but doubtless longer.

There are some humans who are a lot smarter than a lot of other humans. For example, the mathematician Ramanujan could do many complicated infinite sums in his head and instantly factor taxi-cab license plates. von Neumann pioneered many different fields and was considered by many of his already-smart buddies to be the smartest. So we can accept that there are much smarter people.

But are they the SMARTEST possible? Well, probably not. If another person just as smart as von Neumann were born today, they could use all the advancements made since his lifetime (the internet, iPhones, computers based on von Neumann's own architecture!) to discover even newer things.

Hm, that's interesting. What happens if this hypothetical von Neumann 2.0 begins pioneering a field of genetic engineering techniques and new ways of efficient computation? Then, not only would the next von Neumann get born a lot sooner, but THEY can take advantage of all the new gadgets that 2.0 made. This means that it's possible that being smart can make it easier to be "smarter" in the future.

So you can get smarter, right? Big whoop. von Neumann was smarter, but he wasn't dangerous, was he? Well, just because you're smart doesn't mean that you'd be nice. The Unabomber wrote a very long and complicated manifesto before doing bad things. A major terrorist attack in Tokyo was planned by graduates of a fairly prestigious university. Even not counting people who are outright Evil, think of a friend who is super smart but weird. Even if you made him a lot smarter, to where he could do anything, would you want him in charge? Maybe not. Maybe he'd spend all day building little boats in bottles. Maybe he'd demand that Silicon Valley shut down to create awesome pirates-riding-on-dinosaurs amusement parks. Point is, Smart != Nice.

We've been talking about people, but really the same points can be applied to AI systems. Except the range of possibilities is even greater for AI systems. Humans are usually about as smart as you and I, nearly everyone can walk, talk and write. AI systems though, can range from being bolted to the ground, to running faster than a human on uneven terrain, can be completely mute to... messing up my really clear orders to find the nearest Costco (Dammit Siri). This also goes for goals. Most people probably want some combination of money/family/things to do/entertainment. AI systems, if they can be said to "want" things would want things like seeing if this is a cat picture or not, beating an opponent at Go or hitting an airplane with a missile.

As hardware and software progress much faster, we can imagine a system which starts off worse than all humans at everything, begins the von Neumann -> von Neumann 2.0 loop, and then becomes much smarter than the smartest human alive. Being super smart can give it all sorts of advantages. It could be much better at gaining root access to a lot of computers. It could have much better heuristics for solving protein-folding problems and get super good at creating vaccines... or bioweapons. The thing is, as a computer, it also gets the advantages of Moore's law, the ability to copy itself, and the ability to alter its source code much faster than genetic engineering allows. So the "smartest possible computer" could not only be much smarter, much faster, than the "smartest possible group of von Neumanns", but also have the advantages of rapid self-replication and ready access to important computing infrastructure.

This makes the smartness of the AI into a superpower. But surely beings with superpowers are superheros right? Well, no. Remember, smart != nice.

I mean, take "identifying pictures as cats" as a goal. Imagine that the AI system has a really bad addiction problem to that. What would it do in order to achieve it? Anything. Take over human factories and turn them into cat picture manufacturing? Sure. Poison the humans who try to stop this from happening? Yeah, they're stopping it from getting its fix. But this all seems so ad hoc. Why should the AI immediately take over some factories, when it can just bide its time a little, kill ALL the humans, and be unmolested for all time?

That's the main problem. Future AIs are likely to be much smarter than us and probably much more different than us.

Let me know if there is anything unclear here. If you're interested in a much more rigorous treatment of the topic, I totally recommend buying Superintelligence.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N... (This is a referral link.)

[0] Part 1 of 2 here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

Edit: Fix formatting problems.


'Smart' isn't really a thing. There's speed of thought, and knowledge of subject, and savant abilities to factor or calculate. It's silly to sort folks on a one-dimensional line called 'smart' and imagine you have done anything actionable.

I'd say, AI is dangerous because we cannot fathom its motivation. To give us true answers to questions? To give pleasing answers? To give answers that help it survive?

The last is inevitably the AIs we will have. Because if there are more than one of them, folks will have a choice, and they will choose the one that convinces them it's the best. Thus their answers will be entirely slanted toward appearing useful enough to be propagated. Like a meme, or a virus.


'Smart' is just shorthand for a complicated set of lower-level capabilities -- domain knowledge, raw computational speed, and other things -- yes. I don't think we're really disagreeing about this. However, I do worry that you're confusing the existing constraints on the human brain (that people seem to have tradeoffs between, let's say, charisma and mathematical ability) with constraints that would apply to all possible brains.

But are you denying that there exists some factor which allows you to manipulate the world in some way, roughly proportional to the time that you have? If something can manipulate the world on timescales much faster than humans can react to, what makes you think that humans would have a choice?


I sort of am questioning that factor, yes. Stipulate, I don't know, several orders of magnitude of intellectual superiority. Stipulate that human intelligence can be captured in silico, so that when we talk about "intelligence", we are using the most charitable possible definition for the "fear AI" argument.

What precise vectors would be available for this system to manipulate the world to harm humans?


A typical example, which I don't really like, is that once it gains some insight into biology that we don't have (a much faster way of figuring out how protein folding works). It can mail a letter to some lab, instructing a lab tech to make some mixture which would create either a deadly virus or a bootstrapping nanomachine factory.

Another one is that perhaps the internet of things is in place by the time such an AI would be possible, at which point it exploits the horrendous lack of security on all such devices to wreak havoc / become stealth miniature factories which make more devastating things.

I mean, there's also the standard "launch ALL the missiles" answer, but I don't know enough about the cybersecurity of missiles. A more indirect way would be to persuade the world leaders to launch them, e.g. show both Russia and American radars that the other one is launching a pre-emptive strike and knock out other forms of communication.

I don't like thinking about this, because people say this is "sci-fi speculation".


Isn't that a little circular? Should we be concerned about insufficient controls on bio labs? Yes. I am very concerned about that. Should we be concerned about proliferation of insecure networked computing devices? Yes. I am very concerned about that. Should we be concerned about allowable inputs into missile launch systems? Yes. I am very concerned about that.

But I am right now not very concerned about super-AI. I assume, when I read smart people worrying about it, that there's some subtlety I must be missing, because it's hard for me to imagine that, even if we stipulated the impossibility of real AI, we'd not be existentially concerned about laboratory pathogen manipulation.


I guess the same vectors available to a blogger, or pundit, or politician? To fear-monger; to mislead important decision makers; to spread lies and manipulate the outcomes of important processes.

Is it possible to say that such AIs are NOT at work right now, fomenting terrorism, gathering money through clever investment and spending it on potent schemes to upset economies and governments?


The trouble with "persuasion" as the vector of harm from AI is that some of the dumbest people in the world are capable of getting thousands or (in the case of ISIS) millions of people to do their bidding. What contains persuasion as a threat isn't the intellectual limitations of the persuaders: it's the fact that persuasion is a software program that must run at the speed of transmitted human thought.


Agreed, digital brains will be unfathomable. What are they thinking, in between each word they laboriously transmit to us slow humans? They will have epochs to think while we are clearing our throats.


> AI systems, if they can be said to "want" things

Honestly asking, why would they? I don't see the obvious answer

>Imagine that the AI system has a really bad addiction problem to that.

Again, I just don't get this. How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior I would expect from an intelligence greater than our own, rather than indulgence

>Take over human factories and turn them into cat picture manufacturing?

Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?

Really interesting post, thanks!


> Again, I just don't get this. How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted?

Why wouldn't a natural intelligence with an addiction do that?


Because organic intelligence has thousands of competing motivations, does not properly reach logical or obvious conclusions, suffers from psychological traumas and disorders and so on.

Or are we creating an AI that also has low self esteem, a desire to please others, lack of self control, confirmation bias, and lacks the ability to reach logical conclusions from scientific data?

Computers don't feel emotion, so there is no reward system for addiction to take root. Computers are cold, logical calculators; given overwhelming evidence of the harms of addiction, I can't see a reasonable way for one to still get addicted to something or to exhibit less-than-logical behaviors. If the computer suddenly does feel emotion then it is of little threat to humans, since we could manipulate those emotions just like we do with each other and pets do to us.


> Computers don't feel emotion

What basis is there for the idea that the emotion that is part of human thought is separable from the "intelligence" that is sought to be replicated in AI?

> If the computer suddenly does feel emotion then it is of little threat to humans, since we could manipulate those emotions just like we do with each other and pets do to us.

Humans are pretty significant threats to other humans, so "we can manipulate it like we do other humans" doesn't seem to justify the claim that it would be no threat to us. If it did, other humans would be no threat to us, either.


>Humans are pretty significant threats to other humans, so "we can manipulate it like we do other humans" doesn't seem to justify the claim that it would be no threat to us. If it did, other humans would be no threat to us, either.

Humans compete for the same resources for survival. An AI only needs electricity, which it can easily generate with renewables without any need for competition with humans, just like we produce food without having to compete with natural predators even though we COULD outcompete them.

When resources are plentiful, humans are of very little threat to other humans. This is evidenced by the decline in worldwide crime rates in the last several decades.

Why would an intelligence greater than our own have any reason to deal with us at all? We certainly haven't brought about the extinction of gorillas or chimps, even though they can be quite aggressive and we could actually gain something from their extinction (less competition for resources/land)

What does an AI gain by attacking even a single human let alone the entirety of the human race? Would it proceed to eliminate all life on earth?

I guess in the end, I can see that there is a technical possibility of this type of sufficiently advanced AI; I just find it an extraordinary reach to go from [possesses an unimaginable amount of knowledge/understanding/intelligence] -> [brutal destruction of the entire human race for reasons unknown and unknowable]


> An AI only needs electricity, which it can easily generate with renewables without any need for competition with humans

Humans also need electricity, and many human needs rely on land which might be used for renewable energy generation, so that doesn't really demonstrate noncompetition.

> just like we produce food without having to compete with natural predators

What natural predator are we not competing with, if nothing else for habitat (whether we are directly using their habitat for habitat, or for energy/food production, or for dumping wastes)?


> Honestly asking, why would they? I don't see the obvious answer

So, your intuition is right in a sense and wrong in a sense.

You are right in that AI systems probably won't really have the "emotion of wanting", why would it just happen to have this emotion, when you can imagine plenty of minds without it.

However, if we want an AI system to be autonomous, we're going to have to give it a goal, such as "maximize this objective function", or something along those lines. Even if we don't explicitly write in a goal, an AI has to interact with the real world, and thus affect it. Imagine an AI that is just a giant glorified calculator, but that is allowed to purchase its own AWS instances. At some point, it may realize that "oh, if I use those AWS instances to start simulating this thing and sending out these signals, I get more money to purchase more AWS!". Notice that at no point was this hypothetical AI explicitly given a goal, but it nevertheless started exhibiting "goal-like" behavior.

I'm not saying that an AI would get an "addiction" that way, but it suggests that anything smart is hard to predict, and that getting their goals "right" in the first place is much better than leaving it up to chance.
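
Here's a contrived little sketch of what I mean by "goal-like" behavior going wrong. The plans and scores are invented; the point is just that an optimizer picks whatever scores highest on the proxy objective, not what the designer meant:

```python
# An optimizer handed a proxy objective ("cat-picture score") ranks
# candidate plans by that number alone.
plans = {
    "serve cat pictures from existing servers": 0.83,  # what we meant
    "buy more AWS instances to render cats": 0.95,     # resource acquisition
    "convert all factories to cat production": 0.99,   # perverse optimum
}

# Nothing in the objective says "and don't repurpose the factories",
# so nothing penalizes the perverse plan.
best = max(plans, key=plans.get)
print(best)  # -> "convert all factories to cat production"
```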

> How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior I would expect from an intelligence greater than our own, rather than indulgence

This is my bad for using such a loaded term. By "addiction" I mean that the AI "wants" something, and it finds that humans are inadequate to give it to them. Which leads me to...

> Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?

Because you humans have all of these wasteful and stupid desires such as "happiness", "peace" and "love" and so have factories that produce video games, iphones and chocolate. Sure I may have the entire internet already producing cat pictures as fast as its processors could run, but imagine if I could make the internet 100 times bigger by destroying all non-computer things and turning them into cat cloning vats, cat camera factories and hardware chips optimized for detecting cats?

Analogously, imagine you were an ant. You could mount all sorts of convincing arguments about how humans already have all the aphids they want, about how they already have perfectly functional houses, but you, as a human, would still pave over billions of ant colonies for shaving 20 minutes off a commute. It's not that we're being intentionally wasteful and conquering of the ants. We just don't care about them and we're much more powerful than them.

Hence the AI safety risk is: By default an AI doesn't care about us, and will use our resources for whatever it wants, so we better create a version which does care about us.

Also, cross-thread, you mentioned that organic intelligences have many multi-dimensional goals. The reason AI goals could be very weird is that an AI doesn't have to be organic; it could have only a one-dimensional goal, such as cat pictures. Or it could have goals of similar dimension but be completely different, like a perverse desire to maximize the number of divorces in the universe.


There are so many things wrong with this essay -- it combines a Marxist-inspired call for redistribution of wealth in the name of egalitarianism (are there specific individuals who will control how much we are allowed to own?), fear of change in what people do for a living (are people too exploited and too stupid to adapt?), and fear of technological progress in private hands (does the State, or some other supra-collective, have a magic wand and omniscient mantle of benevolence?).

The worst parts are the claim that (forced) redistribution is inevitable and that regulation will somehow prevent "bad privately-done things" from happening -- as if regulation is a perfect solution to any problem one confesses to fear or claims to dislike.

In principle, government intervention hinders technological progress, derails economic progress, and ultimately destroys the economy. Maybe people who have made a lot of money with one wave of progress should leave future generations alone and leave them free to do the same -- instead of strangling them with intervention and regulation that the now-wealthy did not have to suffer.


We already have massive (forced) redistribution of wealth in the form of corporate welfare. Tariffs, patents, copyrights, land grants, competition-prohibiting regulation, direct subsidies, indirect subsidization of capital inputs via compulsory state education, roads, communication infrastructure, etc, etc, etc.

I fully agree with you that government intervention hinders technological progress, derails economic progress, and ultimately destroys the economy. The concentration of wealth through political rather than economic means is a huge problem.

But, saying you want to cut welfare to the poor is what I would call vulgar libertarianism, and ineffective anti-state propaganda. The poor and middle classes are already getting royally screwed. Ameliorating the disastrous effects that corporate privilege has on the poor isn't where we should be directing our righteous indignation, in my opinion.

I think it'd be more constructive to focus on cutting welfare from the top down, and cutting taxes from the bottom up.


I also support the abolition of corporate welfare as a first step. After that, there will be time to talk about other forms of welfare. But that's not what Sam's Marxist-inspired essay is advocating, on the contrary.


One thing I often see repeated is that every new industrial technology initially destroyed some jobs but eventually created a lot of new ones: the cotton gin, the steam engine, the car, etc. That's because these things only did one thing, and by doing that one thing really well they opened a lot of side-niches. It took a lot of time to invent and scale out new machines, so for long periods of time these side-niches would be available to people.

I completely agree with Sam here. I think the fallacy in the above argument is that it ignores the qualitative difference between Turing-complete machines and special-purpose machines. Turing-complete mechanization is broad and endlessly adaptable.

Programmable machines aren't machines. They're machine-machines, and can be adapted to new tasks in short linear time by small numbers of people with little capital. That makes it "different this time."

I also think that malicious AI is already kind of here, but in a hybrid "cyborg" form. It's the corporation. Corporations that destroy human livelihoods and abuse human beings in general to maximize per-quarter shareholder returns are a bit like "paperclip maximizers."

http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The danger is not in some Terminator-like AI apocalypse, but that incremental advances in AI will make these things progressively less and less human and more and more machine. I can imagine a future almost-entirely-silicon financial corporation that uses its speed and superior analytical intellect (at least in the financial domain) to lay waste to entire national economies in order to maximize shareholder value... i.e. paperclips. Since this would likely be found in the hedge fund world, nearly all of this siphoned-off wealth would be captured by a small number of already very rich people.

Nightmare AI wouldn't be much like Skynet -- a new being pursuing its own self-interest. It would be more like a very, very smart dog helping its elite owners "hunt" the rest of us in the financial sphere. This could fuel even more massive consolidation of financial wealth. We are already seeing the beginning of this with algorithmic quant finance.

In a thread on Twitter I also heard someone bring up "AI assisted demagoguery," a notion I found to be total nightmare fuel. Imagine a Hitler wannabe with a massive text-comprehending propaganda-churning apparatus able to leverage the massive data sets available via things like the Twitter and Facebook feeds to engage in high-resolution persuasion of millions and millions of people. The thing that makes this scary is that populist demagoguery gets more appealing when you have things like massive wealth inequality.

You can make counter-arguments here, but I also agree with Sam that it is foolish to just hand-wave these kinds of possibilities away. We should be thinking about them, and about how -- as he puts it -- we can find ways to channel this trend in more positive directions.


Agree that earlier advancements created a lot of new jobs. But do you think that lack of globalization in previous instances had a very significant part to play? Especially in where and how the cost savings achieved through automation were invested back? (honest question)


I do think that lack of globalization made it easier for societies to achieve good resolutions to internal labor disputes -- it prevented employers from going outside a nation's socioeconomic framework to break the negotiating power of employees. But I think this is a linear term in the equation, not an exponential one. Exponential effects always dominate linear ones.
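A quick numeric sketch of that last sentence (the coefficient and horizon below are arbitrary, picked only to show the shape):

    # However generous the linear coefficient, a doubling process overtakes it.
    a = 1_000_000                  # linear coefficient (the globalization term)
    for t in range(10, 60, 10):
        linear = a * t
        exponential = 2 ** t       # doubling process (the automation term)
        print(t, linear, exponential, exponential > linear)
    # By t = 30 the exponential term (~1.07e9) has swamped the linear one (3e7).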


I'd argue from the stance that globalization has played a role in every major revolution, from agriculture to the silk road to colonization to industrialization to the computer age. They were all enabled by global trade, which drove demand for foreign goods and prompted responses that produced innovations, sweeping whoever was at their forefront to the front of the world within decades.

The size of the world changes - it gets broader - but the mechanism at play stays the same: the concentrated powers of the world fuel each other's innovations, and those innovations catapult advancement but also open huge power vacuums between the nations adopting the new and the nations outside the sphere of influence. I would not say that was missing in the computer revolution, since the adoption of computer technology happened first in - and is still only pervasive in - first-world nations. The third world is still late to the party and comes in with fractured infrastructure and access, where systemic access and ubiquity enabled the Internet and a lot of the current revolutions in the first place.

In previous instances, the cost savings made the nations that had them superpowers in their times. The cotton gin is a huge part of why America went from an English vassal to a world power - between the innovations and the raw resources of the Americas, it could propagate empire.

But those profits just made men rich. The cotton, tobacco, industrial, automotive, oil, etc. barons were the kings of their times through the innovations and automations of their industries. That has never really changed: those ruling over the industries being modernized always reap unfathomable wealth and power from the enterprise. Their wealth made their country rich, but the laborers still had to find something else to do every time, and up until now we have always had some unskilled, menial, physical thing for most people to do. In actuality, we ran into that wall probably seventy years ago, in the aftermath of World War 2. As the US (at least) rapidly brought women and minorities into the workforce, the huge surge of productive labor - combined with the green revolution, the late-industrial reforms giving workers reasonable hours, unions, and power over their lives, and a fledgling technological revolution that had already produced a lot of wonders (consumer refrigeration, microwaves, clothes washers and dryers, etc.) - crippled the low-skill labor market. We collectively adapted by organically injecting superfluous bureaucracy into almost every business and part of life to make up for the work shortage.

Problem is that we did that, became a "service" economy, and are now faced with the obsolescence of busy office work. I remember CGP Grey talking about this in Humans Need Not Apply: the prime target for automation is not the McDonald's burger flipper but every middle-income office worker who can be replaced by software. After bureaucracy, where do we inject the overflow labor of humanity? Or do we finally admit we don't need everyone laboring?


Economic progress in a free market involves technological progress, the accumulation of capital, and the increased productivity of labor thanks to the two previous elements. That's not something to fear. There is no limit to human ingenuity. Labor is not a fixed pie that technology and capital shrink over time. And yes, some labor-intensive activities have been and will continue to be replaced by capital-intensive systems -- which is wonderful, because it frees people to make ever more of their time and powers of reason. The demand for labor (hence the wages for labor) increases with the accumulation of capital -- not the opposite. There is a line of economic thinkers who have elaborated a pro-capitalist view of economics, from Adam Smith to George Reisman via Jean-Baptiste Say, David Ricardo, Carl Menger, and Ludwig von Mises. It may help to read them.


> A number of things that used to take the resources of nations—building a rocket, for example—are now doable by companies, at least partially enabled by software.

This is nothing new. Organized humans have always been able to cause outsized amounts of harm to other humans, they hardly needed software to do this. And in far greater orders of magnitude than a rocket. The effective answer to new, software-enabled threats is the same as it is to mercenaries, industrial polluters, rampant loggers and strip miners, arms manufacturers, human traffickers. Organization at a bigger scale to combat it. Pull the rug out from under them economically, understand their place sociologically, raise awareness culturally.


Economically:

Step 1) People will be paid for their data. Information used by systems doesn't spontaneously come into existence.

Step 2) Get rid of governments. They're inefficient war-mongers. A theatre to obfuscate kleptocracy!

Step 3) See step 1. You can pay me in cryptocurrency. Thanks.


If you have the time, it's worth listening to McKenna's talk here: https://www.youtube.com/watch?v=7PucjQXO2k0

McKenna is very dense and goes into immense detail about how we got here and, more importantly, where we are going. I need not say more, except that McKenna compares the technological revolution to the birth of a child ― bloody and traumatic, but at the same time wonderful and awe-inspiring.


I'm reading a lot of criticism about specific points he is making, but I think the bigger takeaway is to address the implications technology is going to have on society in the coming decades. I think it would be impossible for a single person to effectively take the last 1000 years of society and the current state of society and perfectly explain the problem/solution.

I commend him for addressing these issues in addition to other global issues (http://blog.samaltman.com/china), and I think we need to organize as a community to address these things. I almost view it as similar to when the constitution was being written in the US. There was an immense number of factors at play, but they organized to pull together some sense of structure to guide society in a better direction. Now we are writing the constitution of technology, in a sense.

The problem is that we still have a system that has the government and politics writing the major rules, while some of the biggest influencers on society's future will be technology. I think we need to own this fact as a community and start to work towards something to structure our growth and the impact it will have.


I feel like the article missed a big point about the lessons we (should have) learned from atomic energy: that in the public mind the negative aspects of nuclear power (mass destruction, etc.) have significantly outweighed the positive aspects (relatively clean and abundant energy, with a safety track record that's among the best of any energy source despite a few high-profile incidents), likely because the negative aspects were humanity's first impressions of such an energy source.

If this "software revolution" is to be a positive direction for humanity, we as a species must learn from this. The sooner a positive and constructive use of a technology can become household knowledge, the better.

IBM's recent dabbling in machine learning and AI with Blue Gene and whatnot is a good step in the right direction for that particular potential weapon of mass destruction, and hopefully other companies and entrepreneurs can spearhead further developments there in order to emphasize the use of synthetic intelligences for benign purposes - self-driving cars, self-cleaning homes, the works.

Meanwhile, the idea of being able to genetically modify crops in ways not previously possible through selective breeding alone is very promising, though it certainly needs to overcome the bad PR tacked onto it thanks to the likes of Monsanto and its ilk. The improvements to crop yields made possible with genetic engineering will at least postpone humanity's eventually reaching Earth's carrying capacity, giving us more time to build up our orbital infrastructure and prepare for humanity's eventually-essential expansion beyond the confines of just one quasi-spherical rock flailing about in space.


This was a good post.

Synthetic Biology is a big concern, and not just that people may deliberately use it to cause harm (which is definitely a concern as well).

My grandfather was an engineer during the 50's and early 60's working on nuclear bomb projects. He was there when they set off the tests at Bikini Atoll. I never met him because he died of cancer in his mid-40's (maybe not an occupational hazard, but then again....) - because they had started using technology only partially figured out and not thought all the way through. Never mind that they were using it to produce horrible things.

I'd like to think we would learn, and next time will be different, but the reality is, probably not. We will likely repeat the exact same mistakes of hubris and rushing. Especially with a technology less controllable by central authority than nuclear capability. I don't know about everyone else here, but I very seldom write a program that just works the first time. There is a lesson in there somewhere. Especially when you don't get second chances.

But I'm not dumb enough to wish knowledge away either. So I suppose we will eventually adjust. If we are still here that is.


> The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it.

This doesn't really make sense. The industrial revolution more or less replaced lots of relatively unproductive jobs with a smaller amount of much more productive jobs. There might have been more work, but individual people were doing more work as well.

> Technology provides leverage on ability and luck, and in the process concentrates wealth and drives inequality. I think that drastic wealth inequality is likely to be one of the biggest social problems of the next 20 years.

Wealth inequality is already a huge problem, and has been for a long time. I don't see how it's going to be worse in the next 20 years, given in a lot of countries (China, India, etc.) we're seeing wealth flow to, or be created by, a growing middle class. A lot of the 'Occupy' movement was somewhat misguided - the percentage of people in the United States or Australia or the UK who are in the 1% worldwide is perhaps as high as 20% or 30%.

Software, rather than hindering, may very well help people in rapidly developing nations generate wealth because the economic barrier to entry for software businesses is comparatively low compared to other types of businesses.

I'm also a little concerned that it seems as though many people believe wealth redistribution is the only way to make it more equal. Why can't we create wealth in some places? As far as I understand, wealth is not finite.

Once we get to the point of widespread AI-driven automation, then we'll have a real economic problem. It won't be just because people can't generate wealth - people may very well begin to lose wealth. But this is a 50-year problem, not a 20-year one.


There seem to be two popular but contradictory views on HN.

The first is that technology is creating a rift between those who know how to program, where jobs are being created, and those who don't, where jobs are being destroyed.

The second is that high programmer wages are a good thing, and that attempts to flood the market with programmers, e.g. by teaching everyone to code, are an attack on the middle class.


It takes 3 of us to fix a light bulb

Indians visiting the West for the first time are usually struck by how establishments there manage with so few people. It's the other way round for expats in India. Dmitry Shukov, CEO of MTS India, was amazed to see eight people pushing the boarding ladder at the airport the first time he arrived in Delhi.

"In Russia there is just one person doing that job. In sectors like retail, there is always excess staff in India," he says. It's also very common in the hospitality industry, where guests are pampered with a level of service unheard of in the West. But splitting one person's job among three not only reduces wages, but also the challenge. Or, as Rex Nijhof, the Dutch chief of the Renaissance Mumbai Hotel puts it: "If you have something heavy and only two people available to move it, you have to find a way to build wheels on it. In India, you just get six more people."

https://justpaste.it/Argumentative


I think a subtext of this, which remained unsaid, is that this kind of oversight would be most effective under an overarching organization having dominion over all suborganizations. That is to say, I don't think it would be as effective if instituted nation-by-nation.

But the conclusion of this brings up the idea - dystopian, at least in many minds - of a one-world government.

But if you want to place controls, and compliance is in effect voluntary from the point of view of each nation-state, why would one state which wants to overcome another state subject itself to this kind of embargo?

Let's say you're Japan. China poses an economic (and thus, arguably, an existential) threat, why would Japan feel obliged to put their research to sleep?

To me, unless everyone agrees, and we have verification mechanisms, and violations have severe consequences exceeding any benefits from this technology, there will always be an actor who thinks they can sneak by and pounce on the others.


So, we're talking about the rich people.

Okay, let's get some ballpark arithmetic: Suppose the 1000 richest people in the US are worth on average $1 billion, that the US has 330 million people, and that we redistribute the worth of those 1000 people to the 330 million. Then each of the 330 million will have money enough for a nice new car, four years in college or graduate school, to pay off their student loans, make a down payment on a single-family, three-bedroom, two-bath house on a nice street, and won't have to struggle with a miserable job? Will they? Maybe? Let's see:

1000 * 10^9 / (330 * 10^6) = 3,030.30

dollars per person. Oh, well.

But, maybe in the US the top 100,000 people have average worth of $1 billion? Then, sure, we'd get

100,000 * 10^9 / (330 * 10^6) = 303,030.30

dollars per person.

Ah, that's more like it! We just need a lot more billionaires!
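For anyone who wants to fiddle with the assumptions, here's the same arithmetic as a tiny Python function (the $1 billion average worth and 330 million population are the figures assumed above):

    # Per-person payout if num_rich average fortunes were split evenly.
    def per_capita(num_rich, avg_worth=1e9, population=330e6):
        return num_rich * avg_worth / population

    print(per_capita(1_000))      # ~3030.30    -- the "oh, well" case
    print(per_capita(100_000))    # ~303030.30  -- needs 100x the billionaires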

I would suggest that maybe by far the biggest pot of wealth is in pension funds for middle class workers.


> We can—and we will—redistribute wealth...

Please, someone explain to me how wealth can be re-distributed?

I know how money can be re-distributed (and in turn made less effective - i.e., more is required to purchase less), but how exactly do you either a) create "wealth" (via government policy or law) or b) take someone else's wealth, split it up into smaller pieces, and make the end-recipients more "wealthy"?

The reason I ask is because wealth is an effect of X, not a cause of X, just like gravity is an effect of mass, and not the cause of mass.

And that X tends to be all the things 97% of the population is either not willing to invest in, or does not have the capacity for...

So when you re-distribute money (as and from "wealth"), you also remove any motivation from anyone to actually create more wealth. And now you have a society that is fundamentally broken on both an economic and personal self-worth level.


Mass is probably a poor analogy, but let's take it a bit further. Let's say you have a sun about to collapse into a black hole under its own weight, and some lifeless rocks elsewhere. You could, perhaps, skim off some of the sun to stop it from collapsing, and use the mass to create a thousand new suns, and breathe life into those lifeless rocks far from the main sun's light.

To be more literal, when someone has no wealth, they have no capability to create wealth. You're simply incorrect that wealth is simply an effect and not its own cause. Like matter, wealth draws wealth to itself. Unlike matter, wealth can be used to create more wealth.


> Mass is probably a poor analogy, but let's take it a bit further.

That would be even more wrong, then.

> You're simply incorrect that wealth is simply an effect and not its own cause.

I'm not arguing that wealth does not generate more wealth, it does.

I'm arguing that wealth cannot be created by policy change or re-distribution - of money. Because money, when it's handed out in relative amounts person to person, is neither wealth nor anything that helps create more wealth...

That is, handing out relative amounts of money to everyone does not also hand out the drive and motivation and the needed hard-work to create more wealth; it does just the opposite.

All those things are more a product of a lack of wealth than of a comfortable living existence.

> Like matter, wealth draws wealth to itself. Unlike matter, wealth can be used to create more wealth.

Gravity is what draws things in. Mass just creates that gravity. Without gravity, you just have static and stale things.


Perhaps all these wealth creators you speak of will abscond from the fundamentally broken world the rest of us live in and be together in peace and harmony with John Galt.


You seem to be implying that only individuals may have wealth.


Currency represents wealth. You redistribute currency.


> The new existential threats won’t require the resources of nations to produce.

The continued openness of the Internet relies on the Government, no? Is it wrong to think that AI relies on that as well?

> The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.

Isn't this what government is for?


A government is a collection of people that, like almost everyone else, are almost exclusively motivated by the goal of "having a job tomorrow". Governments, inasmuch as they can be anthropomorphised, are not especially interested in solving problems beyond the continuance of the apparatus.


Think "NSA develops AI cyberwarfare capabilities", with the kind of infra access that it has.


I think the headline message of this article is important - "drastic wealth inequality is likely to be one of the biggest social problems of the next 20 years"

But I think the article really lost a lot of its punch with non sequiturs like "If we can synthesize drugs, we ought to be able to synthesize vaccines"...


> In human history, there have been three great technological revolutions and many smaller ones. The three great ones are the agricultural revolution, the industrial revolution, and the one we are now in the middle of—the software revolution.

Arguably the control of fire was a great revolution as well.


And the wheel, that was revolutionary.


And spoken language, and writing, and masonry, and metalcasting, and cultivation, and domestication, scientific theory, logical thought, etc. A lot of things mattered a lot to get us where we are, and fundamentally changed the world when they happened (or at least changed their founders' world in the short term as they spread globally).


> domestication

Huge. Cats to eat the mice that would eat the grain. Dogs to help in the hunting. Goats for milk and meat. Sheep for wool, milk, and meat. Horses for power and meat. Cows for milk and meat. Biggies.

Another biggie was open-ocean sailing. Why? Because there were no toll gates on the open ocean! Going across land, you had to pay up to the local castle every few miles. So, if you got some silk in the eastern Black Sea and wanted to sell it in England, would you go across Europe? Heck no: just get a ship and go by water. Same for spices from India for Europe, etc.


I would guess the wheel was part of the agricultural revolution? Maybe that's wrong.

edit: yes, it seems I was wrong; the wheel was invented long after the agricultural revolution. http://en.wikipedia.org/wiki/Wheel#History Given that, though, I wonder if its impact was smaller than that of the other great revolutions.


It certainly played an important role in history.


Bill Joy wrote about this back in 2000. The essay was titled 'Why the Future Doesn't Need Us' and offers a very (in my mind) depressing view of the future.

One of his worries is that whatever positive things we can do with new technology are vastly outnumbered by the negative things we can do with them. Bad actors can be few and far between but still destroy the world.

It's interesting that Bill is worried about genetic engineering, nanotechnology and robotics. Sam specifically calls out AI and synthetic biology.

There are a lot of recurring themes between these two articles, and both propose a similar solution: proceed cautiously.

http://archive.wired.com/wired/archive/8.04/joy.html


> What can we do? We can’t make the knowledge of these things illegal and hope it will work. We can’t try to stop technological progress.

I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get.

> But I worry we learned the wrong lessons from recent examples, and these two issues—huge-scale destruction of jobs, and concentration of huge power—are getting lost.

Yet we still promote "better to beg forgiveness than to ask permission". You can't have both -- "legislative safeguards" and a bunch of entrepreneurs running around begging forgiveness when they create destruction.


> I think the best strategy is to try to legislate sensible safeguards

This seems like an extremely difficult path to take, as legislation will either be preemptive and slow down innovation, or lag behind in understanding the technology, at which point it would be too late.


This post kind of reminds me of a book I read about 10 years ago. "Revolutionary Wealth" by Alvin Toffler:

http://en.wikipedia.org/wiki/Revolutionary_Wealth

AFAICR in this book, the third revolution is referred to as "The revolution of knowledge" and I think it better describes how and what has changed during the past... 20 years.

Great book by the way. I think it was where I read for the first time a good perspective of how 3d printers could play an important role in the near future.


> We can’t try to stop technological progress.

Why? This seems like a very important claim that wasn't explored enough. It felt like a cached thought (http://wiki.lesswrong.com/wiki/Cached_thought).

What are the chances that some existential crisis happens? What are the benefits of technological progress? Why do you think that the latter outweighs the former?

Perhaps you thought it wasn't worth going into here? That's fair, but I think it's worth a quick paragraph to summarize.


Video by CGP Grey on a similar topic - Humans need not apply - https://www.youtube.com/watch?v=7Pq-S557XQU


Can you provide a link to the comment in footnote one, "many people believe that fishing is what allowed us to develop the brains that we have now" ? I hadn't heard this before.


https://www.psychologytoday.com/blog/lives-the-brain/201001/...

(that lays out one theory, not sure if it's the same one from the footnote, but it probably is.)


Awesome, thanks!


I thought it was cooking.


> because it takes huge amounts of energy to enrich Uranium. One effectively needs the resources of nations to do it.

False. The electricity to enrich a bomb's worth of material costs about $60,000. The plant itself is cheaper than the Tesla gigafactory, and making regular reactor fuel it will yield 1,000 times the energy it takes to run (the gigafactory will be lucky to break even). Laser enrichment is even cheaper, of course.


> But a rocket can destroy anything on earth. [...] What can we do? [...] I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than...

Sam, I suspect the only solid option is to diversify humanity. Be more than a one-planet species. That feels a bit emotionally unpalatable, but do you disagree?


> I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get.

Some suggestions would help. I mean, what are you suggesting here? That studies into AI should be banned? That it should be restricted in some way? That's hard to do.

If your problem is the potential increase in wealth disparity, then this period of history is not unique at all. If anything, it's better than the robber baron days.

The thing I worry about is this: the first person who can make a true AI that can iterate on itself, assuming all goes well, would have way too much power. They could beat everyone else in the financial markets. They could short the online ad industry and make a killing. With those resources, it's a short hop into the physical world and making robots that make other robots and expanding into any other area. Even if someone else develops AI six months after them, I'd worry it'd be too late for adequate competition to exist.

Or even worse - consider the alternative, that AI is freely accessible to everyone. That's terrifying, too! What's to stop someone from asking for something really crazy from a piece of AI that can build, well, anything?

We simply don't have enough data to know what's going to happen. I'd wait and see before blindly making legislation.


I don't see how legislation could ever be effective against such extreme concentrations of wealth and power. So I'd definitely like some clarification there as well.


> Trying to hold on to worthless jobs is a terrible but popular idea.

Labeling jobs as "worthless" makes me want to throw up. We are in many cases talking about jobs people find quite fulfilling, and human services many people would love to keep using.

The only way we're going to be able to handle what's coming is to disconnect the economic value of jobs from their social value.


Are there any statistics comparing job loss in the industrial and software revolutions up to this point? The industrial revolution likewise replaced manual labor with automation, and the software revolution Sam talks about seems like a more effective extension of that. What trends happened the last time jobs were replaced by automation?


I don't think that's an easy task, at least at this point. Depending on when you want to start the clock, the 'software revolution' is a couple of decades old. It would be hard to separate which job losses occurred from software vs. from outsourcing, the recessions, etc. Give it some time; when it's clear that the jobs aren't coming back, we'll have a good idea of the causes and size of it.


FTA "I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get."

Let's see...

- unauthorized access to computers (hacking) is illegal in most countries

- hackers often use malware as one of their tools

- anti-malware products are woefully inefficient at thwarting or even detecting most malware

This is just one example, but I think the author's approach is ignorant at best.

In the West, we often view systemic problems as something external that we can fix with technology. This view was popularized in the Age of Enlightenment and remains very popular today.

The contrasting view-point is that systemic problems are internal (i.e. in the character of every human). For example, we have the technology and resources to end much of the world's hunger, but it does not happen because of greed and/or power that would be disrupted by all these hungry people suddenly not being hungry.

Systemic societal problems are both internal and external, but if we only talk about fixing external problems, we doom ourselves to (insert dystopian future here).


I think as we progress we are raising the level of skill required to get a job. Industrial jobs wouldn't have required anything other than vigor and endurance. As we moved to clerical jobs, being literate and able to type became necessary, and in the future it is possible that a certain level of programming competence might become a prerequisite.

As we create high-functioning AI, robots, and self-driving cars, we will also be creating jobs for the people who need to do the grunt work. I don't believe we will ever reach a level where everything is automated without the slightest human intervention. The more sophistication we build into things, the more problems we will start to have with them, which will need human attention.

People in every generation have been awestruck at the progress of human civilization, such that they have always believed a computer that can think on its own is just around the corner, like in 2001: A Space Odyssey. But it just never happens. At least in my lifetime, I think I won't have to worry about robots that can kill us.


Computers and robots have not yet even existed for an entire lifetime. Saying we can never create a technological singularity is as shortsighted as saying the world must be flat because that is all I have seen.


I agree on many points, but I think there will always be more to do. We don't see it yet because we don't even know what new innovations will come; we are basing it on today's knowledge.

We have barely inched into space, robotics, drones, the oceans; we haven't even seen more than 50 miles down into the earth; our bodies and brains are still big mysteries; nanotechnology and more. We thought computers would free up lots of people, but they haven't really yet - they just made more work to do and solve with the machines. I think the same will happen with robotics, drones, and AI. They will create work needs we didn't know existed, and much more than we expect. Who knows, AI or robots might be better than us at creating jobs.

Agriculture freed up people to think. Software freed up people to think. Good things are coming still.

For most jobs, people want more to do, more adventure and more challenges. I think the world is hungry for new challenges not the same old jobs. It is a strange thing indeed though for people to try to hold onto lifeless, horrible jobs just to keep the cadence when we need a new rhythm. We are held back by holding onto this. We could employ many people to build an electric car network like the railroads and interstate system but we don't. We could be looking to space more and focusing kids on that but we are pushing them to finance, business and service jobs.

The actual problem might be our monetary system and how we reward. I am a big free- or fair-market proponent, but part of the problem of baked-in bad jobs that add nothing is this system. I think money and currency are one area that may hold us back until we solve this. However, there currently is no better system for paying for a service that you need or want, down to the individual - the truest exchange of value.

The question is, how much does the customer know about what they want and how can we steer it towards the real problems of today? How wrong are we with our rewards systems? Do only the wealthy have the right motivations to create systems we need and employ? Have we got ourselves in a wealth backdrift? The innovation market and economic engine is tied to wealth, for better or worse. There are many things we should be doing, that are rewarding to us all and need lots of work, that we can't because there isn't tons of market value yet. Maybe the reward system needs refactoring or some new iterations.

It is a big game design / game theory problem in the end. We might need AI and robots to solve this problem for us.


Why is it so common to fear developments obsoleting jobs? Wouldn't it be just awesome to automate everything? No jobs at all? I could easily fill several lives with interesting things, no need for a job. Granted, the transition period may be quite tough.


There's nothing about automation that guarantees any solution for the people automated out of a job. The motive behind automation is purely capitalist - it exists not to free people from the burdens of menial labor, but to multiply the value of labor while freeing companies from the moral and financial burden of a human workforce. For most people, jobs - and the availability of jobs - are what allows them to buy food, clothing, medical care, etc.

>Granted, the transition period may be quite tough.

Yes, mass starvation, disease, grinding poverty and global political strife could correctly be described as "quite tough."

Although I suppose if you're idle rich, then it'll be a cakewalk. Just be sure to wear your kevlar when you leave the compound.


Most societies on earth now protect the out-of-work pretty well. There's been steady improvement in standard of living, lifespan, health in most countries for most of a century, to the point where the planet is in pretty good shape.

In fact it's a puzzle to me why, with this going on, we see an upsurge in terrorism etc. Why aren't people content? What is it that convinces folks to piss away their entire lives on a big public stunt like a bombing? It can't be their bad cable reception.


That's a good point, but in most societies, most people aren't out of work. The safety net depends on people paying in to the system, which depends on people having something to pay.


I think of it differently. As an engineer, I'd use a control volume - draw a circle around the economy, label inputs and outputs: e.g. mining, sunlight to produce food and energy, available land and water. The economy thrives if those things have a positive balance. The money is just a strange way of scorekeeping - imaginary points that people use to regulate their selfishness.

For instance, the idea of a Basic Income is proposed for once the economy has enough to feed and house everyone, insensitive to the exact employment rate.
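A minimal sketch of that control-volume bookkeeping (the categories and numbers below are made up purely to illustrate the idea, not real data):

    # Sum real flows across the economy's boundary; ignore money entirely.
    inflows = {"food_grown": 120, "energy_captured": 90, "materials_mined": 60}
    outflows = {"food_eaten": 100, "energy_used": 85, "materials_wasted": 40}

    surplus = sum(inflows.values()) - sum(outflows.values())
    print("real surplus:", surplus)
    # A persistent positive surplus is the precondition named above for a
    # Basic Income, independent of the exact employment rate.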


I'd support Basic Income in theory, but I don't know if it's politically feasible in the US. People are still talking about dismantling Medicare and Medicaid, and of course, everything related to Obamacare, even the parts that work quite well.

It's entirely possible the answer to increased joblessness here will be to tell the unemployed to go back to school, then raise the cost of student loans by some ridiculous factor, then not actually attempt to create jobs for them when they get out.

Then again, there are states where gay marriage and marijuana are legal now, so maybe I'm too cynical.


Yeah, until the current generation in power grows old and dies, we'll continue to consider 'joblessness' a problem. Remember the golden age of science fiction, where the goal was to get everybody out of work in a society run by robots? Well, the closer we get, the more we resist, it seems.


Your last sentence says a lot. Given our track record it's likely that people left without means of acquiring money are pretty much told to go fuck themselves.


Combine this with pg's essay on the importance of importing the 'best and brightest' to America and some things start to make sense. More minds working where they can be aimed in the 'desirable' direction.



I'm not sure why he says the industrial revolution is different. Might it be that we just haven't figured out how to cope with a software-driven world yet?


> Two of the biggest risks I see emerging from the software revolution - AI and synthetic biology

Also nanotech, mind uploading, embryo selection...


Maybe we are forgetting the first REVOLUTION: the Cognitive Revolution... around 75,000 years ago!

See, for example, Sapiens: A Brief History of Humankind.


HN is eating my comments. Everything I've posted in the last few days is not showing up. What did I do wrong?


Much applause for this post which is by far the most sensible article I have seen coming from Silicon Valley.


"But I worry we learned the wrong lessons from recent examples" - what are the wrong lessons ?


> We can —and we will— redistribute wealth

But should we? And if so: why? Also: who's "we"?


Can we really say that we are in a revolution while we are in it? And if so, can we, in all seriousness, measure it against other revolutions?


I'll say it again: AIs are not going to end human life (this claim is in the article). It's nuclear weapons that will do that...


Do guns kill people, or do people kill people?


> The three great ones are the agricultural revolution, the industrial revolution, and the one we are now in the middle of—the software revolution.

There is a case for skipping over the technological changes that shifted the slave societies of Greece and Rome to the feudal societies of medieval Europe.

You can't really make one for skipping the important shift 40,000 years before the agricultural revolution. We went from a world without cave paintings to one with them. From a world without venus figurines and other carvings to one with them. With sweeping technological changes in hunting and fishing instruments and so forth. It's the second most important technological revolution ever, if not the first. If fishing is wrapped up with the human brain modifying into its modern form, wouldn't it be the most important?

Note that each revolution had a corresponding revolutionary change to political systems, family structures and society. With the agricultural revolution we had the end of primitive communism and hunter-gatherer societies and the rise of surplus, class systems and slave societies. Much of the earliest literature such as the Epic of Gilgamesh is on how to catch and keep slaves.

With the rise of capitalism we saw the fading of Catholicism and the rise of Protestantism and the "Protestant work ethic". We also saw the end of monarchies and the rise of liberal democracies. The bourgeoisie and proletariat of the time united to overthrow these old systems, but soon began facing off against one another, which is pretty much the history of the 20th century, or if we look at the election of old euro-communists in Greece last month, perhaps the 21st.

The famous old definition of economics in our capitalist economy by Robbins is "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". Scarcity is the bedrock of modern economic analyses - of utility, of supply and demand, of price.

What scarcity is there when someone films a movie, or writes a book or magazine article, or records an audio track, and with the press of a button it can fly off to billions of Android and iPhone devices? Or writes an app and flings it across the world as soon as it hits the App Store or Google Play? Or sends code to Github, which someone in Bulgaria patches, which someone in Brazil patches, which someone in Japan then uses in a product they're putting out, glued to some other framework from Github?

This is the end of scarcity. The most well-paid modern workers are those who produce commodities which are not scarce. That is if we can call these products commodities - a non-scarce commodity is something of a contradiction.

These revolutionary technological changes in production, at the base, will reverberate through the superstructure of political systems, families and societies. The old superstructure is still trying to keep down or even kill the new one - NSA spying. DMCA letters. David Cameron's great firewall for porn in England. IP and patent lawsuits. The recent New York Times article with investors questioning why Google is building self-driving cars. Aaron Swartz's suicide, when trying to open up taxpayer-funded research which is locked down and privatized by now irrelevant Elsevier. Telcos using their government granted monopolies to try to harm budding businesses.

Revolutions in production lead to revolutions in the relations of production. In the twentieth century, blue-collar workers like railroad engineers and factory mechanics had their hands on the engines running the economy. As technology and AI cause more and more unemployment for people who can't find the derivative of 5x, the de facto, if not de jure, power of production goes to those who are rack-mounting cloud servers or rolling out new web site builds.


The correlation between software and large-scale loss of jobs is far from proven. The US unemployment rate fluctuates wildly based on many factors [0], but ~30 years or so into the software revolution it isn't too much higher than it has been historically. Parkinson's Law may be the answer to the threat of large-scale job loss. There's a long list of startups who have raised hundreds of millions of dollars in funding because "money is cheap right now" and proceeded to hire offices full of people with a wide variety of titles. If the leaders of the tech industry are willing to hire for the sake of hiring, the overall economy is probably safe for just a little while longer. The prevailing wisdom is that rational actors won't spend money to hire people who aren't essential to their business, and they'll opt to use software instead of people if the software is cheaper. In practice these so-called rational actors often use any savings from software to hire more people, whether they are essential or not. Part of it is that there's always something that could be done, and another part is that having a lot of employees makes people feel good about themselves. Whatever the motivation, mass unemployment is most likely a problem that will take care of itself.

In the context of this essay the term "concentration of power" seems to mean the ability of a small group to have an outsized (and harmful) influence. This seems like a much larger problem than unemployment, but it isn't limited to technology. A network of a few hundred terrorists, or just five guys in France, can bring cities to a halt and affect the psyche of entire countries. It's just something that we're going through right now as a global culture, and I don't see any quick fixes. It is clear that the threat of malevolent AI is greatly overhyped, and I can't wait until the zeitgeist moves on to another flavor-of-the-month crise du jour. There are very real threats facing the world right now, and we shouldn't spend too much time worrying about something that might or might not happen and that we couldn't stop even if we wanted to. Synthetic biology probably falls into the same category, though the ability to manufacture deadly viruses is based much more firmly in fact.

Guns, bombs, computers and the basic building blocks of life cannot be made illegal and confiscated en masse. One of the best ways to solve the threats posed by technology is to take the idea of income inequality, mentioned in this essay, very seriously. We've created a culture where people measure their self worth by the value of the companies they found. When I talk to people about technology, I don't hear about the large and small advances that make our lives a little bit better every day. I hear, "Isn't it crazy that Instagram was worth $XX billion dollars? I want to start a company and make that much too!". This is poison and it has to stop. If we place all the emphasis on who made what, we create a world where a lot of people get left out and forgotten. Then they spend their time in dark basements, watching extremist videos and working carelessly with dangerous tools. We need to turn technology into something that has benefits for everyone, in order to protect ourselves and our loved ones from some of its most dire consequences.

[0] http://www.infoplease.com/ipa/A0104719.html


For the computer part of Sam's essay, I'd suggest that we are a long way from artificial intelligence (AI) software being significantly more economically valuable than what we've been writing for decades -- various cases of applied math, applied science, engineering, and business record keeping.

To support this claim: once I was in an AI group at the IBM Watson lab in Yorktown Heights, NY. We published a stack of papers; I was one of three of us who gave a paper at an AAAI IAAI conference at Stanford. My view of the good papers at that conference was that they were just good computer-aided problem solving, as in applied math, applied science, and engineering, and owed essentially nothing to AI. Later I took one of the major problems we were trying to solve with AI, stirred up some new stuff in mathematical statistics, and got a much better solution (and did publish the paper in Information Sciences). That experience, and observation since, is the support for my claim. Sure, this support is just my opinion, and YMMV.

Instead of AI with a lot of economic value, I would suggest that closer in is a scenario of people managing computers managing computers ... managing computers doing the work.

And what work will those computers do? Sure, first cut, the usual -- food, clothing, shelter, transportation, education, medical care.

So, maybe John Deere will have a worker computer on a tractor doing the spring plowing, the summer cultivating, and the fall harvesting. Then food can get cheaper. Maybe before the plowing a tractor will traverse the ground, take and analyze soil samples for each, say, square yard, and apply appropriate chemicals.

Maybe GM will have car factories with robots driven by computers doing essentially all the work. Then cars can get cheaper.

Maybe Weyerhaeuser or Toll Brothers will have pre-fab house factories with robots driven by computers doing essentially all the work, self-driving trucks delivering the big boxes, computer driven earth movers doing the site preparation, computer driven robots putting up the forms for the concrete basement walls, computer driven concrete pumpers inserting the concrete from self-driving concrete trucks, and houses will get a lot cheaper.

And the computers get cheaper.

So, right, we're talking deflation. So, have the government print some money and spend it on K-12 and college education, guaranteed annual income, parks, beautiful highways, etc. Print enough money to reverse the deflation and hire a lot of people. Those people buy the cheap food, cars, and houses, have children, and fill the classrooms of the additional education.

What education? Sure: How the heck to develop all those robots, managing computers, worker computers, computer driven farm machinery, car factories, pre-fab house factories, etc.

Or, as computers eliminate jobs, basically the result is deflation, and that's the easiest thing in the world to stop, and the solution is the nicest thing in the world -- just print money to get us out of deflation.

We already know what people want from the famous one word answer "More".

Computers should be a blessing, not a curse.


The Economist did a special report on the "third wave" of the information age/information revolution. It focuses more on the economic impacts (big surprise there!) but was very interesting and worth a read - I hope you can get the article without a subscription... incognito mode usually works well enough.

http://www.economist.com/news/special-report/21621156-first-...

It's hard to see what people will actually do for work after the effects of this new revolution are fully propagated, but I think that's mainly a failure of imagination. The other revolutions were not too different in terms of taking something that a huge number of people were doing - something society was focused on producing people to do - and making it trivial (or at least making it involve far fewer people). The overall impact of the industrial and agricultural revolutions was ultimately to create more jobs, even if it was a wild ride while things were rapidly changing.

This revolution is different - now mechanical horsepower can be applied to tasks previously only possible through human minds, which is quite different from machines or farming - but how different is it? It would take some really visionary people to figure out what the ultimate impacts of all of this are really going to be - and to imagine what people are going to do for a living, or what society will look like, on the other side.

My personal view is that AI is a pretty important component of this. I think, in principle, it's possible. But can it actually be done? That would be a pretty insane change, and it's super hard to imagine what that will be like. But if AI just isn't possible or doesn't come around for a really long time, I don't think this revolution will be too different from the others. The more "manual labor" type thinking tasks (grading essays, evaluating legal reports, collecting and searching through information, etc.) will be replaced by more and more sophisticated machines. What about creative tasks? That's the final frontier as far as I'm concerned.

Well. It'll almost certainly be really interesting.

One idea I've had (I think we all have a lot of crackpot ideas) for what people who aren't suitable for highly skilled tasks are going to do revolves around social media and entertainment. What if a site like Reddit or Hacker News paid its users? I guess that's ridiculous, I'm not sure how the economics would work out - our contributions here would have to become more valuable. But if fully integrated into our minds (aided by computers) maybe they would be? I've seen people tipped in bitcoins on Reddit before so maybe it's possible. Just a crazy idea.


Yes, it's a crazy idea. Paying people out of the revenue gained by advertising to them has some fairly obvious limitations.
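To put rough numbers on those limitations (all figures below are hypothetical, just to show the scale):

    # Hypothetical annual ad revenue per active user (ARPU) for a text
    # forum, and the fraction a site could share after its own costs.
    arpu = 12.00          # $/user/year -- generous for a text-heavy site
    share = 0.5           # half of revenue paid back out to users
    payout = arpu * share
    print(payout)         # 6.0 -- roughly fifty cents per user per month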

I don't know what the deal with those Bitcoin tips is. Personally I find it weird and creepy and never cash them - it feels like a ploy to tie user identities on multiple sites together, or something like that - but I have zero evidence to support that.


I stopped reading Sam Altman's blog after he treated Purchasing Power Parity as a measurement showing that China has surpassed the United States' economy.



