Just A New Fractal Detail In The Big Picture (2015) (edge.org)
363 points by cdcarter on July 3, 2016 | 175 comments



For different versions of this argument see: https://en.wikipedia.org/wiki/AI_effect.

Bostrom said: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore".

Finally, the problem with all these articles is the lack of precise definition for the terminology involved (it reminds me of debates about "consciousness").

Recipe: make a slight twist on the interpretation of the already-vague definition of the term and a new essay emerges.

In this case, the author is equating "AI" to "Abstraction". All the things he mentions not understanding, he could understand by looking them up and talking to the experts; but he doesn't have to, because the appropriate simple interfaces are in place. That's intelligence in its general form; where's the artificial part?


A lot of the arguments/heated debates on HN stem from a failure to define key terms. Less so in threads on compsci-related topics, but almost always on threads with econ-related topics. Especially universal basic income. Once you identify this common issue it becomes easy to avoid these frustrating (and frustrated) conversations altogether.

Eno has taken a pretty liberal view of "AI" but he makes a good point about specialization. I would define the concept he's getting at as "collective social memory," but I enjoyed his musings nonetheless.


>A lot of the arguments/heated debates on HN stem from a failure to define key terms. ... Especially universal basic income.

Can you recommend some definitions that are a good starting place for basic income discussions that we could helpfully point to when discussing that issue?


There are actually two slightly different ideas that are called UBI.

The libertarian variant is a relatively small guaranteed income (usually roughly the current minimum wage), is usually paid for with a VAT or sales tax, and replaces most or all of the current welfare programs. The main objectives are to provide a safety net for the poorest citizens in a fairer and less bureaucratic manner and to somewhat equalize the bargaining power between low-wage workers and their employers, usually as an alternative to unionization.

The socialist variant is significantly larger (living wage), is usually paid for with graduated income or capital gains taxes, and supplements rather than replaces the existing welfare programs. Its main goals are to provide a direct mechanism for income redistribution and to facilitate non-profitable professions (artist, full-time volunteer, professional student, stay at home caretaker, etc.) for those that want or need them.

There's obviously a lot of overlap between the two, and most proponents support both sets of objectives to some degree. That said, most of the contentious arguments seem to derive from critics either not understanding the difference between the two proposals or strawmanning their interlocutors into one extreme or the other.


It's generally accepted that any intelligence that isn't from a living being is artificial. Global Civilization isn't living, per se.


Could you offer an abstract definition of "life" and "intelligence" (or maybe even 'thinking' or 'consciousness') ?

For example, if you're trying to figure out whether something is 'moving', you would define the term in such a way that it could apply to planets or animals or other things/systems. If you define 'motion' = 'humans putting one leg in front of another', then of course non-human animals or anything else couldn't move. But, that definition is too restrictive. If you're trying to say that animals can't move, then provide a more abstract definition of 'motion' (e.g. 'change in location') and then you will see animals and many other things start moving.

Here is a video with a good explanation of how a lack of an abstract definition results in confusion:

  https://youtu.be/_Wvv-bt9SzI?t=11s


No, I couldn't. It's terribly difficult to define, and the more you try, the harder it becomes. I think it's a bit easier if you try to generally define it in terms of what we think it isn't: the average layperson would consider any intelligence created by humans to be artificial, such as "artificial cells" or other "AI" like Watson. That's what I meant by "generally accepted" but I realize that my comment reflected very little of the thoughts I had when I made it.

Using that definition, Global Civilization, regardless of how complex it has become, is nearly entirely created by humans. Ignoring the "human" part of Oxford's definition for civilization[1], you could say that certain groups of animals do have a form of civilization: some dolphins and killer whales, for example, teach their children very specialized hunting and recreational methods that others within their species do not know. But no other animal engages in anything at the scale of Global Civilization. In that sense, Global Civilization is an artificial construct (intelligence, if you like).

I'm not arguing that it isn't intelligence. It's almost like a virus, really, in that it can't do anything by itself (it requires humans), but it certainly has life-like characteristics when powered by a "host."

[1] http://www.oxforddictionaries.com/us/definition/american_eng...


I'd argue that Global Civilization is nothing but living!


I read a comment on HN a while back arguing that today's companies are essentially human-powered AIs. It made me think quite a lot -- large companies especially take a lifeform of their own. They tend to mostly care for themselves. Actions are taken "for the company" without any one person being actively aware of why. Higher-ups have some form of backdoor access to the direction the company takes; a symbiotic relationship with what is, in a way, a parasite.


You can argue that that's true of all large enough or complex enough organisations. The emergent behaviour deviates from any one individual.

Reminds me of my favourite law: https://en.wikipedia.org/wiki/Iron_law_of_oligarchy


I was thinking about this fascinating topic recently in the light of Brexit.

The EU has traditionally been more supportive of the rights of individuals as opposed to those of companies and other organisations, in the belief that companies emerge from human behaviour and, even though they operate in their own (i.e. commercial) environment, they exist to serve the population.

The US has by contrast given companies more freedom to operate, in the belief that by doing so they provide an environment for individuals to fulfil themselves. I fully believe a post-Brexit UK government will go the same way.

I think the key term here is "motivation". If the motivations of the company and individual align, all's well. If not, who's going to cover your back?


There was a talk at the 30C3 where the speaker argued that we do not need to fear AIs. A human-level AI is an information processing system (what we have already, and which is not harmful) plus a motivation system. The latter is the one that's to be worried about, but it already exists in the form of corporations (which fittingly also work against our collective interest in many ways, e.g. environmental protection).


There was an interesting TED talk a while ago about this idea applied to cities, how they 'live' and communicate and grow but how we have never really seen one die. Less about AI I suppose and more about new life.


Many larger systems are more than the sum of their parts. Enough that you could consider the grouping its own entity. Ant hills are the canonical example because individual ants are recognizable as 'animals', but the function of an anthill clearly relies on a complex interaction between more simple ants.

Just as we are made of tiny organisms, bigger systems could be considered to be made of us.


Hofstadter wrote a nice piece on how an ant colony could be thought to be an entity in itself. My favourite quote pasted from an online source:

Anteater: "I reject holism. I challenge you to tell me, for instance, how a holistic description of an ant colony sheds any more light on it than is shed by a description of the ants inside it, and their roles, and their interrelationships. Any holistic explanation of an ant colony will inevitably fall far short of explaining where the consciousness experienced by an ant colony arises from."


Unfortunately, this argument proves too much. Replace "ant colony" with "human" and "ant" with "cell in a human body" - can you explain where the consciousness experienced by a human cell colony arises from?

(To avoid debating materialism vs idealism use another creature, say "chimp", instead of "human" - assuming that you agree that chimps have a consciousness.)

I am not fond of the holistic / emergent idea but I don't think "we can't figure out the exact place where the sum becomes larger than its parts" is a good argument against it.


Hofstadter's anteater is a character in a dialog. You shouldn't assume it presents the author's conclusions directly. Indeed the point of the anteater is that with respect to ants, it's hardly an impartial source.


A simpler type of holism might simply state that information generated by a system is distinct from the individuals that compose that system. Bits form a computer, but only in groups.

It's more complicated with businesses because we are aware of the greater system and attempt to influence it.


Holism is inescapable. An ant is a holistic thought. When you talk about the parts of an ant hill you don't talk about the atoms in the ants. You talk about the ants. In the same way, when you want to talk about the ecosystems that contain ant hills you don't talk about individual ants. An anteater relates to the whole ant hill. The relationships a thing has are just as significant to what it is as the things it's composed of. Reductionism can only look inside things.


Well, if you follow the point of view of materialists, you have to accept that the United States of America is a living, conscious entity, of which you (maybe) are a part.

http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconsci...


Kind of ironic to think that something like 7 billion AIs exist, but the judgment about them is mostly

meh.


The AI effect is a real thing, but this is taking it to extreme. I think it's worth distinguishing "artificial general intelligence", AI that is general and can do everything a human can, from "weak" AI. "Weak AI" may or may not be progress towards AGI, but we certainly haven't had AGI for thousands of years.


As a curious software engineer, I have a basic familiarity with where heating oil comes from (I remember my first sci-pop book with a section on the basics of oil processing), where nuts are grown, the basics of food industry supply chains, and every other example covered. My knowledge might be rudimentary and without much detail, but I try to cover all bases, and when I discover some area of human knowledge I genuinely know nothing about, I jump to Wikipedia in excitement. (I read Wikipedia a lot.)

I thought that being uncomfortable with one's own ignorance is a fundamental part of human nature. But apparently it isn't.


You should try building a toaster from scratch

https://www.ted.com/talks/thomas_thwaites_how_i_built_a_toas...


Unfortunately, I am not that good at manually building hardware. Learning how it works is another thing, though.


Be careful assuming you understand how something works without having built at least a prototype.


He said his knowledge "might be rudimentary and without much detail" - it is perfectly possible to understand how something works (in a general sense), without building a prototype yourself.


This is correct. Got burned by this quite a few times.


Interesting thought. Don't say you know or understand something until you experience it. I think there's a thin line between these two categories: the things we can understand without experience, and the rest. Is there a research paper about this aspect?


The main issue is simply that most descriptions of a subject are nowhere near detailed enough to be "complete", and even if they are, the reader of that description almost never will manage to pick up every nuance of what is described.

Try reproducing results from some research papers, and you see this demonstrated very easily, both in frustration of having to figure out what you misunderstood, and in frustration of trying to figure out details that were left out.


Yeah, I thought that was all kind of funny too. I'm a voracious reader, and have been curious about just about all parts of science and technology at some point. So I too knew at least the basics for everything he mentioned. Magically plop me in a stone-age agrarian society, and we'd be back to at least an 18th-century level of technology in a couple decades.


There were some science fiction stories (I forgot their names and authors) in which time travelers actually encounter a lot of practical difficulties in past societies, because the people they encounter don't believe them or don't readily see the benefits of their suggestions. So maybe you need to add "a stone-age agrarian society that's eager to take advantage of my knowledge", since that part can't necessarily be taken for granted. :-)


Yup. And you need a common language, so that may take a while.

Some simple stuff like crop rotation, composting, irrigation (if needed in that climate) should be some easy wins though.

And then it would be on to making iron, glass and other basic building materials.

It is more likely that I'd just die of something stupid though, like bears.


This. I just saw a documentary about famous inventions yesterday. They talked about how when Alexander Graham Bell conceptualized the telephone, most investors said "Why should we ever need this? We can already transmit speech with telegraphs."

I guess the same effect amplifies the more absurd or outlandish the proposed technology appears.


> Magically plop me in a stone-age agrarian society, and we'd be back to at least an 18th-century level of technology in a couple decades.

Execution is key! Knowing how it works does not mean you can successfully build it!


This is a road to madness. If every time you didn't understand something you just plunged into the material and got familiarized, you would do nothing else for the rest of your life. I think Eno means managing ignorance in the sense of deciding when to stay ignorant (whilst being aware of your ignorance) and when to learn and hence "fix" this ignorance.


> This is a road to madness

How so?

> If every time you didn't understand something you just plunged into the material and got familiarized, you would do nothing else for the rest of your life

That's how I've been spending a significant part of my out-of-work time, for most of my adult life. Is there anything wrong with being a curiosity-driven person?


Not at all, some might call that a 'hobby'.


There's no way you could actually do this. In a single hour of a normal day you'd encounter a lifetime of material to learn, so I agree, yes, it is madness. You can't learn thousands of years of engineering, math, science, etc.


Many lifetimes. It is of course impossible to absorb the sum of all humanity's knowledge. But it is possible to have a 30,000-foot view of it (or perhaps a shitty satellite view obscured by clouds). And I would feel very uncomfortable if I didn't have at least a basic knowledge of how my environment works.


Of course you can't learn everything, but you can keep your curiosity alive and keep questioning reality, and search for explanations. To me, it's not about learning the whole corpus of human knowledge.


Keyword is "basic", I think.


Curiosity in general is probably a fundamental part of human nature.

But you have to make a call at what point you stop trying to understand everything about everything. Thomas Young, the early 19th-century scientist, has famously been described as "The Last Man Who Knew Everything" - the sum of human knowledge has moved on so vastly that it is simply physically impossible to have an in-depth knowledge of every possible subject.

I don't think he's saying that he knows absolutely nothing about these subjects (I'm sure he's probably got a rough idea that the oil probably came in a big oil tanker, for example).

But there are over 5 million articles on Wikipedia, and at some point you've got to decide that you haven't got time to understand how the Four Hu (to take the first random topic I found) came about, because you're spending time thinking about how AI works or whatever.


Of course, but there is a (loosely defined) hierarchy of knowledge.

I haven't read (until now ;)) the article about the Four Hu (a specific topic), but I did read about the 214 Kangxi radicals (a more general/beginner-friendly topic), and, of course, about Mandarin and Cantonese in general (I can speak neither).

This is how it works. Curiosity is insatiable.


There is such a thing as "rational ignorance". It's all about the cost benefit analysis.


Tim Urban points out in his AI article that it might be dangerous to expect that the AI that we know now would still be the same AI we will have in the very near future http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

"So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:"

Compare the images:

http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-c...

http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-c...


I'm not afraid of AI because intelligence has diminishing returns. A 1000-times-smarter entity might be only 2 or 3 times better at predicting the future than a human is. So a 1000-times-smarter superhuman AI could maybe tell the weather two weeks ahead instead of one. And the physical world is inherently like weather, chaotic in the mathematical sense.
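
To make the "chaotic in the mathematical sense" point concrete, here is a minimal Python sketch (using the logistic map, a standard textbook toy system rather than anything from the article): two trajectories that start almost identically diverge roughly exponentially, so each extra step of forecast horizon costs exponentially more measurement precision, however smart the forecaster is.

  # Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 3.9).
  # Two starting states that differ by 1e-9 become uncorrelated within a few
  # dozen steps.
  def logistic_trajectory(x0, r=3.9, steps=60):
      xs = [x0]
      for _ in range(steps):
          xs.append(r * xs[-1] * (1 - xs[-1]))
      return xs

  a = logistic_trajectory(0.400000000)
  b = logistic_trajectory(0.400000001)   # "knowing" the initial state 1e-9 better
  for n in (0, 10, 20, 30, 40, 50):
      print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.6f}")
  # The gap grows roughly exponentially, which is why a vastly smarter
  # forecaster buys only a little extra horizon in a chaotic system.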

Our upper hand against even very dumb animals doesn't come from our intelligence. It comes from thousands of years of infrastructure that we built to amplify ourselves. AI would have to build its own, which we won't let it do (at least initially), or steal ours, where being able to see just a bit farther into the future doesn't help all that much.

It won't get much ahead of us with science because science is limited by technology. A 1000-times-smarter physicist won't figure out new laws of the universe without getting its hands on data from a new accelerator, larger than any built before.

Those are some of the reasons why I think runaway AI won't be dangerous to us unless we let it build or take over our physical world, which won't happen given our current caution towards the subject.


> Our upper hand against even very dumb animals doesn't come from our intelligence. It comes from thousands of years of infrastructure that we built to amplify ourselves.

Their dumbness is what prevents them from using our infrastructure.

> Those are some of the reasons why I think runaway AI won't be dangerous to us unless we let it build or take over our physical world, which won't happen given our current caution towards the subject.

Given how infrastructure is constantly being taken over by people who shouldn't be able to, I fail to see where you see any caution.

Also, the primary threat from AI IMO is not some "increased intelligence" in and of itself, but the lack of fatigue, the much greater bandwidth (you can only know so many people well, a machine isn't really limited in the number of people once it has the ability), and that machines might have a lower drive to resist being used against people than people have (which already is too low).


I don't think that is true, just from observing the difference between humans. Really intelligent mathematicians can do work that a weaker mathematician would never accomplish if they spent their whole life on it. And the average person could probably not even understand it given years.

There seem to be vast differences in ability between humans, despite all of us having 99.99% the same DNA and same brain structure. Who knows how far beyond humans a superintelligent AI could be.

>AI would have to build its own, which we won't let it do (at least initially), or steal ours, where being able to see just a bit farther into the future doesn't help all that much.

Who says we need to let it? It just needs to be better than the best human hackers, and it will be able to compromise a huge amount of our infrastructure in a short time. Possibly without us noticing. Even intelligent groups of humans have been able to achieve terrifying things like stuxnet.

Then if all else fails, it can just persuade people. It's smarter than the best human sociopaths and politicians after all. And it could bribe people by earning vast amounts of money on the stock market, or other areas where it has a comparative advantage over humans. That would buy it a huge amount of power to start building independent infrastructure.

But the scariest outcome is that it doesn't bother with robot armies at all. If it's so intelligent, it could design working nanotech. The only reason we don't have it is because it's so hard to engineer. So much complexity and moving parts, is very hard for humans to wrap their brains around.

Human brains weren't evolved to be engineers after all, it's a lucky coincidence we are capable of it at all. An AI brain optimized for this task should far exceed humans. The same way chess computers far exceed human chessmasters.


I agree that AI can do marvels in math, but math quickly departs from reality (like string theory), and being able to prove things about some beautiful intricate mathematical structure most of the time doesn't help you much with the world of matter.

I recommend Culture novels written by Iain M. Banks. There are vastly powerful AIs but they spend as much time as they can afford in "Infinite Fun Space" which is basically math taken to insane depths.

I know there's a saying that all math is eventually applied, but I think this saying is popular because it is sort of paradoxical that some of the most bizarre math eventually found its application. What I think is that for math just a bit deeper than what humans can grasp, "eventually" quickly becomes longer than the age of the universe.

About stealing... Hackers are successful not because they are ungodly smart. They are successful because they have the will to look for and exploit vulnerabilities. Intelligence, again, doesn't help all that much. I don't think the people who wrote Stuxnet were able to do this because they were of superior intelligence. They were just ordinarily intelligent people (which means barely more intelligent than the average human) who were sufficiently motivated by a huge budget and interesting problems.

Super-intelligence won't help you with the stock market because it's a purely random game. You can see it in the results of active management funds. HFT gives the impression of algorithms beating people at trading, but the thing they use to extract the value is not intelligence, it's speed. If you can trade faster you can beat the slower guys because you are playing a slightly different random game than they do. If you compare one HFT with another HFT then you are back to random results. So AI won't have the upper hand on the stock market unless we allow it to trade faster than our non-conscious software.

In theory I could imagine an AI persuading people to give it what it wants, because people are stupid and have flaws that are recurrent and easily exploitable. But again, I don't think intelligence would make such a huge difference. A 1000-times-more-intelligent conman might be just twice as effective, because of the chaotic nature of how human flaws interact.

> If it's so intelligent, it could design working nanotech. The only reason we don't have it is because it's so hard to engineer. So much complexity and moving parts,

It's hard to engineer because it's so damn small, not because it's complex. To build nanotech you'll need to build tools to build tools to build tools to build nanotech. You could do all that, but even if you are 1000 times smarter, reality has a speed limit. You can't take a shovel of sand from the beach and build a CPU, no matter how intelligent you are. You have to build a fab first.

> Human brains weren't evolved to be engineers after all, it's a lucky coincidence we are capable of it at all. An AI brain optimized for this task should far exceed humans. The same way chess computers far exceed human chessmasters.

Yes. But it can exceed humans at consciousness, charity and compassion even faster, because those are purely intellectual things, like chess, while engineering is limited by moving atoms and energy around.


>being able to prove things about some beautiful intricate mathematical structure most of the time doesn't help you much with the world of matter.

Mathematical ability is just an example. The same abilities that apply to math also apply to engineering, programming, etc. A superintelligent AI would be able to do unbelievable things to "the world of matter". Because the main requirement for manipulating matter is intelligence, discovering better designs and technologies.

> Hackers are successful not because they are ungodly smart. They are successful because they have the will to look for and exploit vulnerabilities. Intelligence, again, doesn't help all that much. I don't think the people who wrote Stuxnet were able to do this because they were of superior intelligence.

I find this assertion unbelievable. Even average hackers have significantly above-average IQs. I can't find the statistics right at the moment, but many STEM fields have above-average IQs. The highest is physicists, who had an average IQ of 130. An average person can barely figure out how to operate their email client, they won't be building stuxnet anytime soon.

>Super-intelligence won't help you with the stock market because it's a purely random game.

It is not. People make fortunes with just slightly better statistical models, or slightly better information. Traders spend millions to fly drones and helicopters over oil tanks and parking lots, to get a slight edge over others.

>It's hard to engineer because it's so damn small, not because it's complex. To build nanotech you'll need to build tools to build tools to build tools to build nanotech. You could do all that, but even if you are 1000 times smarter, reality has a speed limit. You can't take a shovel of sand from the beach and build a CPU, no matter how intelligent you are. You have to build a fab first.

But we could potentially bootstrap nanotechnology really quickly from existing biology. There are already labs that will make proteins on demand from DNA. The problem is it's just so complicated.


> main requirement for manipulating matter is intelligence

Rather, it's knowledge. And to get more knowledge you need to manipulate matter. Pure intelligence is useful up to a point, but then you need to go out and get more data. That's the limiting factor I'm thinking of. The only thing you can do with just intelligence is math (that can have some use later on) and philosophy (that's totally useless).

> The highest is physicists, who had an average IQ of 130.

An incredible number of people have an IQ over 130. If that were the key factor in writing Stuxnet, there'd be a new one each day.

> An average person can barely figure out how to operate their email client, they won't be building stuxnet anytime soon.

IMHO that's mostly because they lack knowledge and any reason to care. You really don't need an IQ above 130 to do technical things, and having an IQ of 150 or even 200 doesn't help you all that much, it seems, with pushing the boundaries of human capacity. Progress is made mostly by people of fairly common intelligence talking to each other.

> It is not. People make fortunes with just slightly better statistical models, or slightly better information.

And they lose fortunes with significantly better statistical models, and better information (up to but excluding insider trading). When you sum up everything it's as random as it gets. Take a look at Warren Buffett's bet against hedge funds.

In all fairness additional information could help, but data is not information and where good models are unavailable and processes are chaotic you are almost just as likely to infer correct information from data as incorrect.

> But we could potentially bootstrap nanotechnology really quickly from existing biology. There are already labs that will make proteins on demand from DNA. The problem is it's just so complicated.

For me it looks more like building a multi-core CPU in the times of Blaise Pascal. The foundational theory is here, even some tech, but we have no idea how many technicalities lie ahead of us to figure out before we get to our dreams.


>The only thing you can do with just intelligence is math (that can have some use later on) and philosophy (that's totally useless).

We already have vast amounts of knowledge on the internet. I can download all of wikipedia in an hour, and fit it on a flash drive. Most of the world's scientific papers and books are digitized and available.

The limiting factor is no longer knowledge. It's the ability to absorb knowledge. To be able to instantly find an obscure paper from 1930 that's relevant to your current thought, or know some random fact from some article you read years ago, etc. That's something AIs would have a huge advantage over humans at.

>An incredible number of people have an IQ over 130. If that were the key factor in writing Stuxnet, there'd be a new one each day.

Stuxnet wasn't written by one person. It was probably a huge team of intelligent people, who worked possibly for years.

But who says an AI can't be equivalent to a group of humans? If it has enough computing power, it could make copies of itself. And unlike humans it can communicate thoughts and plans instantly to its other "selves".

And who says it has to work at the same speed humans work? Human brains run at maybe 100 Hz. AIs built out of silicon could work thousands of times faster. Doing the same work, just in much shorter time.

>And they lose fortunes with significantly better statistical models, and better information (up to but excluding insider trading). When you sum up everything it's as random as it gets. Take a look at Warren Buffett's bet against hedge funds.

Look it's simple math. If you can predict prices 1% more accurately than anyone else, then you can make a huge amount of money in the long run. Stock prices aren't random. They are determined by real events, mainly how much profit the company makes.
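
As a back-of-the-envelope illustration of how a small edge compounds, here is a short Python sketch; the win rate, bet size and bet count below are made-up numbers for illustration, not a trading model.

  import math

  # A 51%-accurate directional call vs. a 50% coin flip, betting 1% of the
  # bankroll on each of 10,000 independent bets (all numbers invented purely
  # to illustrate the compounding of a tiny edge).
  p_win, bet_fraction, n_bets = 0.51, 0.01, 10_000

  # Expected log-growth per bet, Kelly-style.
  g = p_win * math.log(1 + bet_fraction) + (1 - p_win) * math.log(1 - bet_fraction)
  print(f"expected growth after {n_bets} bets: {math.exp(g * n_bets):.2f}x")
  # ~4.5x for the 51% predictor; the same formula with p_win = 0.5 gives ~0.61x,
  # i.e. the coin-flipper slowly loses to variance drag.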

>but we have no idea how many technicalities lie ahead of us to figure out before we get to our dreams.

That's the point though. A superintelligent AI could figure out exactly what those technicalities are, and find the shortest route to that tech. If you went back in time to Blaise Pascal, if you took the right books and plans from the future, you could get them to CPUs in mere years. We could advance technologically much faster than we are, the limiting factor is the speed of invention, which is slow.


> The limiting factor is no longer knowledge. It's the ability to absorb knowledge.

I get your point that from all the experiments we performed so far and bothered to write down and digitize there might be a few tricks left to squeeze out, but the vast majority of our current progress comes from new experiments.

I agree that AI could very well write Stuxnet or mess with our security, so I'm hoping we get a bit tighter in that department before we manage to build AI. I'd definitely prefer the first artificial consciousness to be built before robots are as popular as cars or smartphones.

We will eventually develop AI and it'll definitely be challenging to orchestrate it running most of our civilization and not killing us in the process.

What I'm not afraid of is AI taking what we know so far and secretly turning itself into a miracle-making god in a matter of weeks. We will have a few years or decades of existence before AI becomes vastly more powerful than the rest of us, and till then we will have the ability to align our priorities.

> Human brains run at maybe 100 Hz. AIs built out of silicon could work thousands of times faster. Doing the same work, just in much shorter time.

It's not work. It's just thinking. AI can create philosophical theories or build new optimal JavaScript frameworks at blazing speed and still not make any progress in physics that would make it more powerful than us. It needs new data. It needs to run experiments to get anywhere beyond what we have achieved so far. If we maintain transparency and caution in what AI is given and allowed to do, then we may safely transition from the current world to an AI world. Besides, if at some point our civilization encounters others, it's better to bring our own AI to the table. An alien one might attach much less sentimental value to us.

> If you went back in time to Blaise Pascal, if you took the right books and plans from the future, you could get them to CPUs in mere years.

No, you couldn't. Do you know how long it takes to build a fab with current infrastructure available? You'd have to bring half of their industry into the 20th century before you could have a CPU.

> If you can predict prices 1% more accurately than anyone else, then you can make a huge amount of money in the long run.

Yes. The thing with the stock market is that you can't. It's not because you are not smarter than other traders. It's because the interaction between all the traders creates a pretty much perfectly chaotic random process that no one can predict.

> Stock prices aren't random. They are determined by real events, mainly how much profit the company makes.

The same way a random generator driven by an unknown algorithm is predicted by its seed.


>I get your point that from all the experiments we performed so far and bothered to write down and digitize there might be a few tricks left to squeeze out, but the vast majority of our current progress comes from new experiments.

I don't think so. We already know the laws of physics to a great degree. An AI or even human could design all sorts of amazing things without ever doing a single experiment.

Of course, there's no reason it can't do experiments, also. Once it's free on the internet, it need only contact some random Joe and bribe/threaten/persuade them to do its bidding.

>No, you couldn't. Do you know how long it takes to build a fab with current infrastructure available? You'd have to bring half of their industry into the 20th century before you could have a CPU.

Perhaps. This doesn't seem to be true of most technologies though. You could go to 1800 London and show them how to build a modern car or airplane in a year or so. You could introduce everything from antibiotics to radios, centuries before they were actually invented. Ancient Romans could have built crude steam engines, and bootstrapped industry in a century, if they had known how.

Building a CPU might require first building multiple other industries to support it, but that can be done, if the AI lays out in painstaking detail every step necessary to construct every tool, every machine. And yes, it would take a lot of labor, but they would be able to do it.

It seems impossible to us, because we can't imagine that kind of complexity. Humans are terrible at managing complex systems. We overlook or forget details, we don't account for possible mistakes, etc. No single human even knows every step necessary to build a pencil, because we specialize so much. An AI could be aware of every detail, of every step in the process, and manage it at terrifying efficiency.


> being able to prove things about some beautiful intricate mathematical structure most of the time doesn't help you much with the world of matter.

How about discovering a flaw in human crypto systems?

> Super-intelligence won't help you with the stock market because it's a purely random game.

Humans can't reliably exploit the stock market, because we're competing against other humans. It's not that there are no patterns, it's that as soon as someone discovers a pattern, people rush to exploit it and then it's tapped out. Imagine you were the first human stock trader to think of "buy stocks in companies when their product announcements get featured in newspapers"; and imagine that however much you tried to explain your strategy to the other stock traders, they just didn't get it. You wouldn't always pick a winner, but you'd make a lot of money.

How confident are you that AI won't be able to find something like that?


> How about discovering a flaw in human crypto systems?

Very good point.

As for the stock market, I'm fairly confident that there won't be strategies that an AI can figure out and that people won't catch on to as fast as to any other strategy. I think that's because the stock market has a very narrow API, and more complex strategies would involve piling up operations, doing multiple steps. Every problem that involves any probabilistic element (or uncertainty) quickly becomes unpredictable as the number of steps grows. I don't believe AI can be better at the stock market than us because it's a simple game of chance.


The stock market isn't just a purely random game though - it's based on the expected earnings of the companies, so if you can come up with a better estimate than everyone else by combining information better, you can make money. The reason it's hard for humans to make money this way is because everyone else is doing the same thing, so any advantage is short-lived.


Pseudo-random generators are not purely random - they are based on a seed and an algorithm. For the stock market, nobody can have a clue what the algorithm is and what meaning the seed has in it.
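
To make the analogy concrete, here is a minimal Python sketch of a pseudo-random generator (a textbook linear congruential generator, chosen only for illustration): the stream is fully determined by the algorithm and the seed, but looks like noise to anyone who knows neither.

  # A textbook linear congruential generator (LCG). Completely deterministic,
  # yet the output looks random unless you know both the algorithm (a, c, m)
  # and the seed it started from.
  def lcg(seed, a=1664525, c=1013904223, m=2**32):
      state = seed
      while True:
          state = (a * state + c) % m
          yield state / m   # normalize to [0, 1)

  gen = lcg(seed=42)
  print([round(next(gen), 3) for _ in range(5)])
  # Knowing (a, c, m) and the seed lets you reproduce every value exactly;
  # without them you just see noise, which is the stock-market analogy above.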

The reason nobody can predict the next value is that at some point something unpredictable happens that can wipe out all your previous gains and some more.

The market can remain insane (meaning not behaving according to a model) longer than you can remain solvent.


Intelligence isn't a linear scale and different forms of intelligence aren't comparable in every regard.

Computers regularly surpass humans in all kinds of stuff that isn't considered AI, but just as much could be (e.g. calculating).

Computers can also surpass humans in things that are currently considered AI, like image recognition ( http://www.eetimes.com/document.asp?doc_id=1325712)

In the end it's just one other thing that computers can do better than humans. And the results might not be applicable in other sections of artificial intelligence.


He's addressing one aspect of AI that can be disturbing to some people, specifically becoming reliant on the expertise of others in your day-to-day life and thus becoming less independent. If your biggest issue with the concept of AI assistants is that they might have insights about you that you yourself don't have, I could see this article making someone feel better about it.

For me at least, that's not the primary fear. I don't fear people becoming dependent on AI. Rather, I fear people misusing the information that AI reveals about others. I don't feel uncomfortable about (for example) AI observing my media habits and using that information to make recommendations for other media.

However, I would feel uncomfortable if someone with access to that AI then fed those insights into some other AI that I wasn't aware of for much more nefarious purposes, such as profiling me to try to quantify my loyalty to the government.

That's just one example. Even if I had absolute faith in my government, bad actors could misuse that information to do other things, like determine which people would be most likely to take on debt on unfavorable terms (so they could be sent a 20% APR preapproved credit card, naturally!), or which people would be most likely to want to help a troubled Nigerian prince who just needs a place to temporarily park a bunch of money.

I'm not trying to fearmonger and I'm actually very excited for the future of AI. I just don't think this piece is addressing the deeper fears people have about the technology.


The problem with habit reinforcement is that you've simply created a feedback loop.

"You like [music type]? Here are more bands/artists of [music type]."

It looks like an innocent service. But it's devastating to real exploration, because it makes it much less likely you'll ever discover Band/Artist Z whom you'd never normally listen to but love anyway.

This is why thoughtless customer profiling is a dumb idea, and certainly not the sure fire insta-profit marketing panacea it's sometimes supposed to be.

It has some uses, but you're reducing customers to stimulus/response robots with a limited behavioural repertoire, and that's an excellent way to miss a lot of opportunities.


> The problem with habit reinforcement is that you've simply created a feedback loop.

> "You like [music type]? Here are more bands/artists of [music type]."

So far I've found the opposite. Pandora has exposed me to music that I never would have sought out on my own. In fact I can think of several artists that I would have judged by their cover, so to speak, and never given a chance even if I had stumbled on them on my own, instead of coming to awareness mid-song that "I don't know what this is, but I kind of like it."

My wife teases me about this a little bit when she finds me listening to music that she says isn't 'me' or doesn't seem like something I'd like. She's right, but I guess Pandora knows my tastes better than she does (and better than I do, to be fair.)


Contrast that with Spotify, which when given a song or artist that I like goes out of its way to recommend music that is superficially similar but that I hate. Or how it can recommend 80 songs to me a week based on things I have listened to and end up recommending me nothing that I like.


Huh. I've actually found the "Discover Weekly" feature to be remarkably good at picking music that I like. I'd say I affirmatively like about half to three-quarters of the songs per week, with only a couple that I find myself skipping.

I wonder if it's better at certain genres than others.


> I wonder if it's better at certain genres than others.

The Discover Weekly algorithm is, as I understand it, based on what songs have been added to other user playlists. So if you're primarily listening to a genre that has a lot of intense people making carefully curated playlists, you're gonna get a better Discovery Weekly.


And more generally, I would say it just depends on the quality of the AI. There is no fundamental reason why such a system would have to recommend only similar music. It could try contrasting music, and build a picture of what aspects of the music you like. It could draw conclusions to recommend seemingly different genres which nonetheless share some commonality that might make them appeal to you. It could also occasionally try out completely different things, to broaden your horizons (and give you more of that if you react favourably). And that's just scratching the surface. You could do all that without AI. An AI could potentially do all that, plus other interesting things that we wouldn't even think to try.


The algorithms used to make musical recommendations do statistical analysis on numerical ratings given by different people to different songs. They typically know nothing about music, as musical knowledge is understood not to improve their results. I wouldn't call them AI - they're neither knowledge-based systems nor neural networks.
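
For illustration, a minimal Python sketch of that kind of ratings-only approach (user-based collaborative filtering with cosine similarity; the tiny rating matrix is invented, and treating unrated items as 0 is a simplification):

  import numpy as np

  # Rows = users, columns = songs, 0 = not rated. Nothing here knows anything
  # about the music itself, only who rated what.
  ratings = np.array([
      [5, 4, 0, 1],
      [4, 5, 2, 2],
      [1, 0, 5, 4],
  ], dtype=float)

  def predict(user, item, R):
      # Cosine similarity between the target user and every other user.
      sims = R @ R[user] / (np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-9)
      sims[user] = 0.0                # ignore the user's own row
      rated = R[:, item] > 0          # neighbours who actually rated this item
      return float(sims[rated] @ R[rated, item] / (sims[rated].sum() + 1e-9))

  print(predict(user=0, item=2, R=ratings))
  # ~2.6: dominated by the very similar user's rating of 2 rather than the
  # dissimilar user's rating of 5.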


I don't think that that's true for Pandora: https://en.m.wikipedia.org/wiki/Music_Genome_Project


> I wouldn't call them AI - they're neither knowledge-based systems nor neural networks.

Out of curiosity, why are you drawing the AI line at neural networks? Neural networks are just function approximators; there's nothing magical about them except they work really well on problems we're interested in.

Would you really not consider an SVM or a Naive Bayes classifier "AI" since they don't use a very rough, very loose analogy to the brain?


I discussed this problem decades ago when I developed MORSE (http://web.onetel.com/~hibou/morse/MORSE.html).

People like certain things for no obvious reason, and are unlikely to change. It does them no favours to recommend them music they don't like, or anything else which depends on taste. But if there's music similar to the music they're known to like, which they haven't heard - perhaps because the band is obscure, from a different decade or part of the world, or new, for example - they might want to hear it. Also, there might be one or two songs they'd like by artists they don't normally listen to. Collaborative filtering is good for this.

Your point is valid about ideas, news, and a number of other things which have nothing to do with taste. People would benefit from sometimes being exposed to viewpoints different from their own.


If you do, however, succeed in reducing humans to stimulus/response bots, it will likely be good business. Which is the scariest aspect, because that drives lots of decision-making today...


Hence random mutations in genetic algorithms. And other similar functionalities.
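
A minimal Python sketch of that idea (a toy bit-string genetic algorithm step; every name and number is invented for illustration): mutation is the part that keeps injecting options that pure "more of what already scores well" selection would never propose.

  import random

  def mutate(genome, rate=0.05):
      # With small probability flip each bit: the source of novelty that keeps
      # the search exploring instead of collapsing onto its current favourites.
      return [1 - g if random.random() < rate else g for g in genome]

  def crossover(a, b):
      cut = random.randrange(1, len(a))
      return a[:cut] + b[cut:]

  # One toy generation: score = number of 1s, keep the best half, then
  # recombine and mutate to form the next population.
  population = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
  population.sort(key=sum, reverse=True)
  parents = population[: len(population) // 2]
  next_gen = [mutate(crossover(random.choice(parents), random.choice(parents)))
              for _ in range(len(population))]
  print(next_gen[0])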


I still think your examples are more of the "powerful tools" kind of AI. And I think the general argument about "hard AI problems, when solved, cease to be AI" is also along those lines.

I think we've proven by example that what we thought of as things that require "real" intelligence, like chess, go, jeopardy - really don't. They're not a good yardstick for intelligence. Or intelligence is not as closely tied to (self) awareness and consciousness as we once thought.

I do worry about the "autonomous system" problem. The link between meta-data and drone strikes has brought us uncomfortably close to a real Skynet - without any need for an emergent consciousness per se.

But I always read discussions on AI (the kind that "Wake up") to really be more about AC: artificial consciousness.

> I don't fear people becoming dependent on AI.

Indeed. I fear AI that stops being dependent on people. Such systems are still far off, I believe. But I don't see why we couldn't intentionally or by accident create a new form of life that could out-compete us, and in the end make us extinct.


> Indeed. I fear AI that stops being dependent on people. Such systems are still far off, I believe. But I don't see why we couldn't intentionally or by accident create a new form of life that could out-compete us, and in the end make us extinct.

Since the line between AI realized in a machine and systems in general is kind of thin and blurry, one could argue that the economy is about to become such a system. If automation can fill all the links in the feedback loop chains of the market, we could one day wake up to see humanity being completely excluded from the economy, which would continue onwards by itself. A Disneyland with no children[0].

[0] - http://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Absolutely. Just as we are (in my opinion) a result of emergent systems (evolution) - the systems we build should be expected to have emergent properties. And there are other systems we are beholden to (and impact) such as the biosphere and the climate.

Criticism of capitalism is often misunderstood, or misconstrued as "conspiracy theory" - while what it often is, is just pointing out the most likely and apparent emergent properties of capitalism - and likely follow-on effects of that. Examples are concentration of capital and power, ignoring "external costs" like environmental damage etc.


Personally I think "emergent" != "emergent". Most of the systems we build are pretty dumb - except our socioeconomic ones, which gain complexity by being formed by aggregating over billions of sentient beings.

Now I'm having some trouble sorting out the categories in my head. On the one hand, the "emergence" you described in your comment is garden variety feedback loops. Something very simple in principle - and being a part of a feedback system literally means 'being beholden and impacting it'.

On the other hand, it really feels like the search for intelligence is in large part just tweaking one's abstractions. You can see the market as "just" a complex set of feedback loops between individuals and groups, or hop one abstraction level above and realize that we're all components of a feedback system that's optimizing for something we as individuals have close to zero control over. A system that's bossing us all around. We try to understand it, so that we can say that e.g. prices of product X dropped because people in company Y invented some innovative process. But if one digs deeper one can sometimes notice that if it wasn't company Y then it would be a company Z; if it wasn't an innovative process then it would be an innovative business model (or a scam), or a substitute product X1. All because the "market pressures" made it happen. If you step another abstraction layer up, you can just treat the market as a being and it's not that big of a stretch.

So at what point should one stop playing with abstractions? Or, maybe, what do we mean by "intelligence"? If we're looking for a different-but-mostly-similar being to talk to, then searching for it by doing this abstraction dance is not a good way. But if by "intelligence" one means a sufficiently smart optimizer, one that can make decisions to reach its own goals, and that in principle could coerce us to do its bidding - if one means that, then one may realize this has already happened.


Some of the things attributed to capitalism have nearly nothing to do with capitalism itself. I'm starting, for one, to think that there's probably less concentration of wealth and power than we think. Bill Gates is a heck of a lot wealthier than was JP Morgan, but Morgan had a lot more concentrated power and wealth than Gates.

Not to quibble, I just think our analysis is deeply flawed.


> I read once that human brains began shrinking about 10 thousand years ago and are now as much as 15% smaller than they were then.

Timeline's a bit off, but the science is still a shocker:

> Over the past 20,000 years, the average volume of the human male brain has decreased from 1,500 cubic centimeters to 1,350 cc, losing a chunk the size of a tennis ball. The female brain has shrunk by about the same proportion.

http://discovermagazine.com/2010/sep/25-modern-humans-smart-...


There has been incredibly strong selection pressure for babies being able to get down the birth canal - the death rate in child birth pre-modern times was something like 10%. The human brain is about as big as you can make it and still reproduce. What we would expect when you have strong selection for two competing forces (smaller head size and greater intelligence) is for the brain to shrink in size and become more asymmetrical - basically you get rid of system redundancy by having each side of the brain specialise in one thing. This is exactly what we have seen in humans.

One of the more interesting aspects of recent human evolution is how much has occurred in the last 10,000 years. Evolutionary speed is basically proportional to the population size, and the massive expansion that agriculture allowed has caused the rate of evolution in humans to increase around 1000-fold over the neolithic rate.

One other factor looking at eurasian brain size is gene competition resulting from the neanderthal / ancient african hybridisation event that occurred around 50,000 years ago. When you have a hybridisation event the genes from the two populations often don't get along too well and it takes a while for selection to remove the incompatibilities. I would not be surprised if the genes controlling brain size in neanderthals and ancient africans were not very compatible (there is no reason that they should be, given selection for brain size increase took place in parallel in both species) and that hybridisation resulted in a brain size excessively large given the birth constraints. It would be really great to know the change in childbirth death rate over the last 50,000 years.


When we look at the baby-in-a-birth-canal problem, you can obviously go two ways: smaller baby head (and torso) or wider birth canal. Apparently Neanderthals had wider pelvis _and_ larger heads/brains. So they kind of picked the #2 route.


The problem with pelvis expansion is that it is hard to select for widening of the pelvis without upsetting walking and running. Development is hierarchical and bones are lower down in the hierarchy than brain size.

As regards Neanderthals, I would not be surprised if it was a wider pelvis that allowed them to have larger-brained babies. This is a fascinating area.


A recent study came out showing birds are more intelligent than expected possibly because of neuron density. [1] I find it easier to believe, without evidence of course, that the brain became more compact rather than losing processing power.

A neuroscientist on brain size and intelligence: https://neuroscience.stanford.edu/news/ask-neuroscientist-do...

[1] http://www.pnas.org/content/113/26/7255.full


Size may be smaller, but I wonder about the surface area through folds. That's where the strength of our intelligence lies.


Plus there is a gene of unknown function called DUF1220 which seems to have something to do with brain size. https://en.m.wikipedia.org/wiki/DUF1220 There are more copies of it as brain size increases in mammals and apes. The Neanderthal genome has more of it than Sapiens.


I find that scary. A huge collection of specialists makes for a pretty fragile species.


Specializing enabled huge advances; everyone having to do everything is actually quite fragile because it can only support a small population.


I think a parallel can be drawn with centralized vs. decentralized networks. Which one is better? And I think the stronger one for the species is a combination of the two.


There are different types of decentralized networks. Not every such network has totally equal nodes, they may still serve different roles. The network is still decentralized as long as sufficiently many nodes are servicing each role.


Networks of specialists are more robust than disorganized bands of generalists. You don't have to go too far back in human history to find food insecurity and disease dominating all other problems.

You get rid of those through specialization.


Only if there isn't enough redundancy.

Though I agree, some of the systems (food/energy supply, communications) we have built are potentially quite fragile and disruptions could cause a lot of mess short term.

Never hurts to brush up on your survival skills Bear Grylls style :)


I don't see the correlation. The author is talking more about systems. The difference with AI is that it will have all of the knowledge within a system and thus be able to make incredible connections between disparate fields of knowledge.

On a separate note, I had an interesting thought about AI as I was looking at a cool architectural photo. I would love to design a building, but I don't know a thing about it. It would take me years just to learn all the details. Then, I thought, one use of AI may be to connect with our brains and make learning easier. Imagine if we could instantly learn all the fundamentals of architecture, then use our creativity to create our own designs. It's a mix of AI and human creativity, neither one replaces the other.


Or - you could design the building you want, any way you want - and the AI takes over when you're done to make sure the design is possible. Boring / difficult / specialist things like load-bearing calculations, material strength, plumbing routing etc. are done for you. A bit like Ripley's exosuit, but for your mind!


It's feeding you information that you'll accept as fact and you think both sides of this arrangement are equal?

It's already how certain institutions have ingrained themselves into society, but they don't have the efficiency of being able to instantly impart their knowledge or the holistic view of its goals and progress that AI could have.


> The difference with AI is that it will have all of the knowledge within a system

If you asked me to come up with 5 different definitions of AI, the concept of "all of the knowledge" wouldn't really be there. I agree on the "incredible connections" part, though.


"all of the knowledge" is a huge part of AI. The bigger the datasets, the better results you have from machine learning, and it's a trivial thing for AI to absorb huge amounts of information. What humans find difficult, computers can do easily and it's usually vice-versa for stuff like pattern recognition, creativity.


> The bigger the datasets, the better results you have

Wouldn't you agree that the ability to achieve relatively good results with relatively small datasets is one of the main reasons for the current ML boom?

> it's a trivial thing for AI to absorb huge amounts of information

Of course, I agree on that. But I also note that "traditional" (non-ML) algorithms can digest big data too, so I don't see that ability as the defining characteristic of the current wave of ML-related technologies.
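As a quick empirical sketch of how results scale with data (using scikit-learn's small digits dataset purely for illustration; exact numbers will vary), you can train the same model on growing slices of the training set and watch accuracy climb with diminishing returns:

    # Learning-curve sketch: same model, progressively larger training sets.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    for n in (50, 200, 800, len(X_train)):
        clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        print(n, round(clf.score(X_test, y_test), 3))

More data helps, but the curve flattens, which is consistent with both points above: big datasets matter, and yet decent results are often reachable with surprisingly little.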


Why does AI have to have access to all information? Seems a bit wasteful, resource wise, if you ask me.


TCP's not that resource-intensive.


Echoes of 'I, Pencil' in there:

http://www.econlib.org/library/Essays/rdPncl1.html

Some more background about the brain shrinkage referred to in the article:

http://www.scientificamerican.com/article/why-have-our-brain...


Yep, "I Pencil" came to my mind as well. But opaqueness is the only thing that AI shares with global market (or catallaxy as Austrians would call it).

What Brian Eno is missing is that global markets are created by billions of human minds through a constant process of trial and error (to the extent governments allow them to try and fail). It's not just error tolerant. It's constantly making profit by improving and correcting errors. And it's been around for thousands of years.

Whereas computers have been around for less than a century, are programmed by a few people, and AI may not come with "common sense" or error tolerance. To deliver yourself to that seems unacceptable to me.


And it's not as if we didn't have pencils before economies of scale kicked in either; the invention of the pencil (1564) pre-dates many of the mechanisms described in 'I, Pencil', so at one point it was definitely possible to know the entirety of pencil-making.

Even so it serves as a very graphic reminder of how interconnected all of humanity and industry is at this point.

http://www.enchantedlearning.com/inventors/page/p/pencil.sht...


This is similar to a thought I've had for some time now: that artificial intelligence is most likely to emerge - rather than be explicitly designed - from our increasingly complex, interconnected, self-regulating systems and institutions. We will be no more aware of it, or able to converse with it, than the bacteria in our guts are of us. Also, artificial intelligence is the wrong term. We have artificial intelligence. What we are talking about is a new, higher form of real intelligence.


In the paper coining the term 'Singularity'[1], Vernor Vinge suggests that one way superintelligence could emerge is for large computer networks to "wake up" as a "superhumanly intelligent entity".

I personally find any argument that relies on our lack of understanding of complex systems and emergence to be on shaky grounds at best and a fallacy at worst.

[1]: https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.ht...


An important thing about intelligence is that it doesn't have to be like the one we have. An intelligence that "wakes up" won't say "hello" to us. Assuming we survive its birth, we may just recognize it as things slipping more and more out of anyone's control, ignoring our inputs and doing whatever.

One could wonder if it didn't already happen with the economy.

In a way, talking about these kinds of "meta-intelligences" is kind of a point-of-view thing. It depends on the abstraction level you use. You can look at an ant and think about it as an individual insect, or realize that it's just a tiny, specialized cog in the intricate machinery of a colony - a colony that kind of behaves like an individual animal. Similarly, at what point do our day-to-day interactions with other people, aggregated over the entire population of the planet, start to resemble a planet-sized organism doing weird self-regulation stuff? What's the difference between saying that "the market decided" and saying that a low-level decision was made in a part of the planet's brain?

Abstractions are weird this way.


It's not difficult to conceptualize globalized capitalism in this way, and indeed some have -- Nick Land springs to mind.


Yeah, but what does Nick Land actually know about the likely structure of AI software or interconnected control systems? Or even economics, for that matter? He's a renegade philosopher, which conveniently requires no qualifications.

"accelerationism" otoh is a clear and interesting concept.


>> What we are talking about is a new, higher form of real intelligence.

Which if the author is right and our brains have already been shrinking means that within a few hundred years, there won't be any humans left - just "higher intelligence".

I've never understood people's naivety in thinking we're always going to be at the top of the evolutionary ladder. At some point we will be replaced, and humans in the form they take now will cease to exist - possibly in the very near future.


> Which if the author is right and our brains have already been shrinking means that within a few hundred years, there won't be any humans left - just "higher intelligence".

No, it really doesn't. Even if the extrapolation were correct - which is dubious at best - losing 15% of brain volume over 10,000 years certainly wouldn't mean "no humans within a few hundred years".


I think termites and fungi are above us on the evolutionary ladder - they outnumber us both by count and by mass.

If we cease, we had a good run.


This is (AFAIK) the basic idea of Integrated Information Theory: http://www.scottaaronson.com/blog/?p=1799 .


Brian Eno is a skilled musician. He's a lousy economics, systems, and AI theorist.

The Edge is revealing itself far more to be a forum in which people of provenance hold forth on that which they've no particular qualifications or grounds to discuss (something which never happens elsewhere on the Internet, of course </s>).

Complex, highly interdependent systems exhibit fragility, nonlinear transitions, and multiple optima, some not reachable, some local and highly persistent but undesirable.

Tossing up one's hands and declaring that all shall be as God / Allah / The Great Spirit / FSM wills it abandons all sense of agency.

The prospect of a global collapse of systems concerns a great many people, and for much the same reason AI would: the mechanisms, logic, interactions, limits, and consequences aren't clear. See David Korowicz's "Trade-Off".

http://www.feasta.org/2012/06/17/trade-off-financial-system-...

As for technological unemployment, that's been a consideration for over 200 years. You'll find strong treatments from J.S. Mill, and commentary from Abraham Lincoln.

Some modern sources, referencing those:

Russell W. Rumberger (1984), "High technology and job loss", Technology in Society, Vol. 6(4), pp. 263–284, doi:10.1016/0160-791X(84)90022-8 http://31.184.194.81/10.1016/0160-791X(84)90022-8

Robert Struble Jr. (1993), "Towards a Structural Solution to Unemployment", International Journal of Social Economics, Vol. 20, Iss. 11, pp. 15–26 http://31.184.194.81/http://dx.doi.org/10.1108/0306829931004...


This is one of the most elitist comments I've read on HN.

What are your special qualifications that allow you to say who is and who is not qualified to comment on a topical issue?

I'm guessing by your comment stating that he is a skilled musician that you aren't actually familiar with Brian Eno at all. He's an unskilled musician, by his own admission. He's not technically proficient on any instrument. He's quite the technologist, though.


Space alien cats don't need qualifications. Nor do they claim them.

As others have noted, Eno's piece is a poor rewrite of I Pencil, itself a poorly reasoned propaganda piece.

Crooked Timber has an excellent deconstruction of that: http://crookedtimber.org/2011/04/16/i-pencil-a-product-of-th...

The one at Freakonomics is weaker sauce but also pretty biting:

http://freakonomics.com/podcast/i-pencil/

I've pointed to several sources discussing technological unemployment, and more critically the long history of discussion of that topic from 1800 forward in mainstream and heterodox economics, as well as political and other literature. None of which Eno's careless handwave points at.

If informed, sourced, intelligent, and specifically refutable comment is elitist, I'll take it.

You've also focused on the irrelevant element of my argument, though if anything, you're also undercutting your own criticism of me. Eno's skill is evident in his body of work. Which I have listened to, own some of, and rather like. His self-description is at best inaccurate.

Your assumptions as to my familiarity or otherwise with Eno's works place you on the rather precarious precipice of a domain in which I am and insist privileged expertise of obvious nature.

Cheers.


"Your assumptions as to my familiarity or otherwise with Eno's works place you on the rather precarious precipice of a domain in which I am and insist privileged expertise of obvious nature."

That is one poorly formed run-on sentence. What does that even mean? That sounds as though it came out of a babble generator. Is that an AI joke?

"You've also focused on the irrelevant element of my argument, though if anything, you're also undercutting your own criticism of me."

Really? So your inaccuracy is irrelevant? How convenient for you. And no I've not focused on that, I focused on elitism and now some strange sense of entitlement you seem to reserve for yourself.

Sorry I don't give a toss about your brand of pop-economics? You should get over yourself. I am not alone in this view either:

"We and others have noted a discouraging tendency in the Freakonomics body of work to present speculative or even erroneous claims with an air of certainty."

Source: http://www.americanscientist.org/issues/pub/freakonomics-wha...

The fact that you're all bent out of shape over content on the Edge and yet you support your infallibility by citing a pop entertainment show on NPR is kind of laughable.


It means I know my life, my experiences, my tastes, and my thoughts, far better than you.

Don't even try to claim primacy of such knowledge. Not of me, not of anyone.

(Now, if someone's saying one thing and doing another, point that out. But a person owns and has privileged access to what rattles within their own skull.)

The economics I cited and referenced is most decidedly not pop. I've got my own thoughts on some matters, those aren't what I'm presenting here.

You're also now going all ad-hom on Freakonomics. I didn't say that Freakonomics is right. I'm presenting it as a valid argument, in place of constructing a similar one from whole cloth for your entertainment. The point isn't that either source is an authority, but that I've read and agree with the reasoning.

And just hang onto that cloth you're about to hand me, I've little need of it.


This is a cute but ultimately wrong argument. It's like saying we live with a black hole because there's one in the center of our galaxy.

Yes you could stretch the term of AI to say that a large system made up of people and simple machines is some form of AI, but that's stretching it to the point of becoming meaningless.

When people talk about real Artificial Intelligence they usually mean general AI, or at least a specific AI that is capable of human-like decision making. Not Computer Chess and not an auto-stop in a filling pump.

Stretching AI to the point of ludicrousness like this seems to serve only the purpose of trying to shut down discussion around human-like AI. Which is not a noble goal.

Discussing human-like AI is important, especially before we figure it out. It would have been nice if people in the early 1900s had spent time thinking about the consequences of putting so much carbon into the atmosphere before they did it. Let's not be another generation of people who could have had a lot more forethought than they did.


This is closely related to the economic concept of the "Invisible Hand"[0] and also explains why planned economies never seem to work, no matter how well-intentioned they are (see the current state of Venezuela - although in their case, as in almost every case of centrally planned economies, greed and corruption were the prevailing forces of the ruling agency).

[0] https://en.wikipedia.org/wiki/Invisible_hand


"Invisible Hand" turns out to be just another word for a feedback loop. Something we've learned quite a lot about in the past century. Feedback control is not magic. It does some things right, and it dumbly fails in some other cases. It's more resilient and tamper-proof - it's close to impossible for a single individual to control this system. That's why people like it. But it's also much less efficient than central-planned economies could be. In a way, running economy by self-cancelling feedback loops is outsourcing computation to the physical world. Instead of figuring out - using brains or computers - how much widgets a factory should make, you just let people buy and sell and get rich and go bankrupt, until the system spits out a stable state. Very resilient, but very wasteful.

I think central planning is much too fragile for today's world. But who knows, it may be ok for tomorrow's. I believe that prerequisites for successful central planning are having a benevolent planner backed up by huge computing power, so that the planner can respond to changes everywhere promptly enough. A friendly AI, if you like.
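A minimal sketch of that "outsourcing computation" idea (the demand and cost curves here are made up for the example): nobody solves for the equilibrium; producers just chase the profit signal, and the loop settles there on its own.

    # Toy price-feedback loop: output rises when selling is profitable,
    # falls when it isn't. No one computes the equilibrium directly.
    def demand_price(q):        # price buyers will pay at quantity q
        return 100.0 - 0.5 * q

    def marginal_cost(q):       # cost of producing at quantity q
        return 20.0 + 0.3 * q

    q = 10.0
    for _ in range(50):
        profit_signal = demand_price(q) - marginal_cost(q)
        q += 0.5 * profit_signal   # feedback step: overshooting is allowed

    print(round(q, 1))  # settles near 100, where demand price equals marginal cost

A central planner would just solve 100 - 0.5q = 20 + 0.3q once and get q = 100 directly; the feedback version burns iterations (and, in a real economy, real resources) to arrive at the same answer. That's the resilient-but-wasteful trade-off described above.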


The "invisible hand" metaphor used by Adam Smith is not an explanatory mechanism, and if anything is an admission that the specific mechanism isn't understood. In use at the time and earlier, it had the sense of "the invisible hand of Providence" (or God). Though Smith, as Hume, was almost certainly what we'd now call an athiest.

He used the term three times, in three different books: The Theory of Moral Sentiments, then An Inquiry Into the Nature and Causes of the Wealth of Nations, and finally in a book on the history of astronomy. It's clear from context that Smith wasn't imbuing markets especially with invisible-handedness, but using a common phrase of the age.

The modern invention of this metaphor dates to the 1930s and 1940s, being first used in its modern sense by Paul Samuelson, and latched onto like a desperate child by the budding organs of the Mont Pelerin Society, better known as the von Mises / Hayek / Friedman / Rothbardian variant of Libertarian theology. Its popular significance grew after the publication of Adam Smith's Invisible Hand, a compilation of modern economic fallacies miscast as truths, by Regnery Press, a Libertarian propaganda mill, in 1963. You can trace the evolution of the term via Google's Ngram viewer.

One of the more notable "quotations" from Smith's Wealth of Nations

Economic historian Gavin Kennedy has traced this history in depth, published multiple papers on it, and writes a blog, "Adam Smith's Lost Legacy", which I highly recommend.

(You'll also find some discussion of the false myth that's developed over the term in the very Wikipedia article you've linked.)

https://econjwatch.org/articles/adam-smith-and-the-invisible...

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781536

https://adamsmithslostlegacy.blogspot.com/

My own recommendation is that people actually read Adam Smith to see what he wrote and meant: https://www.reddit.com/r/dredmorbius/comments/4cyroa/adam_sm...

More on the Mont Pelerin Society: https://en.m.wikipedia.org/wiki/Mont_Pelerin_Society


> "the invisible hand of Provenance"

_Providence_, perchance?


Thank you.

Poor provenance on my part ;-)


Don't worry; there is a providence that will lead you right.


Also, Adam Smith's example of the invisible hand was as a mechanism to prevent so-called free trade, and raise protectionist tariffs. The people you mentioned have twisted Smith's words into the exact opposite meaning - they say Smith's invisible hand sweeps away protectionist tariffs and allows international free trade. He said the complete opposite of what they say he said.


Thanks, I was going to mention the context of use but needed to go back to confirm what the usage was.

I am aware that his one use of "free market" was in a passage describing protectionist trade practices favouring the woolens manufacture industry in England: keeping raw wool import costs low and preventing import of finished goods, thereby maximising the revenue-cost differential, which is to say, profits.


Here's the beginning of that section, which generally inveighs against protectionist retraints on trade, most especially the "Corn Laws" -- limitations on grain imports to England:

By restraining, either by high duties or by absolute prohibitions, the importation of such goods from foreign countries as can be produced at home, the monopoly of the home market is more or less secured to the domestic industry employed in producing them. Thus the prohibition of importing either live cattle or salt provisions from foreign countries secures to the graziers of Great Britain the monopoly of the home market for butcher's meat. The high duties upon the importation of corn, which in times of moderate plenty amount to a prohibition, give a like advantage to the growers of that commodity. The prohibition of the importation of foreign woollens is equally favourable to the woollen manufacturers. The silk manufacture, though altogether employed upon foreign materials, has lately obtained the same advantage. The linen manufacture has not yet obtained it, but is making great strides towards it. Many other sorts of manufacturers have, in the same manner, obtained in Great Britain, either altogether or very nearly, a monopoly against their countrymen. The variety of goods of which the importation into Great Britain is prohibited, either absolutely, or under certain circumstances, greatly exceeds what can easily be suspected by those who are not well acquainted with the laws of the customs.

That this monopoly of the home market frequently gives great encouragement to that particular species of industry which enjoys it, and frequently turns towards that employment a greater share of both the labour and stock of the society than would otherwise have gone to it, cannot be doubted. But whether it tends either to increase the general industry of the society, or to give it the most advantageous direction, is not, perhaps, altogether so evident.

The restraint of government intervention (here as so often elsewhere noted in Wealth) is by government, yes, but quite clearly on behalf of specific powerful commercial interests. It's *that* power Smith is hoping to curb -- directly then a restraint on excessive power accumulation by means of commerce, trade, and manufacture.

https://en.m.wikisource.org/wiki/The_Wealth_of_Nations/Book_...


I think you may have a diode in backwards :)

Smith compared free(er) trade to Mercantilism - extremely un-free, Royal-patent-oriented rent seeking.


Right. Let's again substitute the meaning of a word or a concept for another one and start up another dictionary debate.


I think you're missing the point.

The idea is that artificial intelligence will prevent us from having to understand a lot of things in order to accomplish them, but that effect is no different than any other technological innovation we have made.

The definition of AI has also changed plenty in the past sixty-something years we've been developing it.

A couple of good Wiki links on the subject: https://en.wikipedia.org/wiki/History_of_artificial_intellig... https://en.wikipedia.org/wiki/AI_winter


Some concepts have an implied prefix "digital computer-aided" to distinguish from related concepts. We all know we are intensely interested in the computer version.

Another example is "virtual reality" which existed since cavemen made up campfire stories and drew wall pictures.


What people are worried about is artificial systems that are of higher capability than all or the vast majority of humans. If the principal agent problem is solved by those that create them and they can be cheaply reproduced, then the value of human wages goes below subsistence. This likely isn't a big problem as economic growth rates would be so unimaginably high in this scenario (with doubling times on the order of months) that even significantly less wealth distribution than what occurs today could easily cover a basic income. This is why I'm not nearly as worried about technological unemployment. And as most people have at least some capital, even without redistribution many people would be fine.

If we can't solve the principal agent problem, we will have introduced self-replicating entities with much higher intelligence into our environment. As most possible utility functions require resources to pursue, we will be competing for resources with more intelligent entities. A competition we will lose.

So solving the principal agent problem is a big issue. Comparing human-level AI to markets is like Megafauna saying "We've been competing with other mammals for millions of years, man is only a difference in degree rather than kind. We'll be fine."


>And as most people have at least some capital

What? Most people in most Western countries are net-debtors. They literally own negative capital.


What specific statistic are you basing that on?

Even in the US, where unsecured personal debt is practically a badge of honor, more than 50% of households still have a positive net worth.



Right, and the fuller explanation is one click away:

http://www.aarp.org/money/credit-loans-debt/info-07-2009/bul...

> Seventy percent of respondents report having some form of debt or loan obligation

Having a balance on your credit card or having a mortgage is very different than being a net debtor.


For your first scenario, economic growth must be fueled by a proportional growth in energy consumption.

Fossil and nuclear fuels exist in finite quantities, and the rate at which sunlight hits Earth is more or less constant.

Most people would still starve as fields would be repurposed to produce fuel.


Not necessarily. Or rather, the growth of wealth could double every few months.

And this wealth effect could be due to massive reductions in labor costs from employing AI, leading to increases in standard of living as the price of goods plummets.

I haven't looked at charts for the economy as a whole, but I bet certain kinds of technology provide nonlinear benefits for linear growth in energy consumption. Think about one computer replacing a roomful of workers doing hand calculations. (We don't moan about losing those jobs, by the way.) Yes, the computer was the results of decades of progress and investment, but looking at the marginal energy consumption of the finished good compared with that of a roomful of humans, there is no comparison.

Also, as long as we're talking sci-fi, why stop at harvesting energy from the earth? Until we capture 100% of the energy emanating from the sun, we have a long way to go as a race as far as energy consumption limitations go.


The general price of goods plummeting would be seen by most modern economists as something of a disaster -- so occupied are they with engineering inflation. I don't think any mainstream economists have models or theories of economic development or social organization that would take us to a Kardashev level 2 civilization.


Maybe it would only last a few years. But once we can convert capital directly into labour in a manner that scales, we will get insane amounts of economic growth.


I understand what you're saying, but I think it's bad to use the term 'economic' growth. There would be potential productivity growth but not necessarily economic growth, since the economy is a human endeavour. AIs don't need money and won't contribute to money, and the machines would only be 'allowed' to produce what the customers would be able to buy (so essentially the current economy), unless they produced lots of extra stuff just for the hell of it.

To have all that growth, I think you'd need to decouple the machines from the economy, but then distributing natural resources would be a problem, and then you come back to central planning, and that's a whole other can of worms.


Indeed, but it remains to be seen if such growth would benefit the idling meat bags left unemployed.


This is similar to Kevin Kelly's technium theory: http://kk.org/thetechnium/

Technological innovation is an extension of evolution. Whether that's the invention of the alphabet or a computer, we are part of this system of continuing evolution.


While taking a neuroscience course it occurred to me that one could make an argument - depending on what one wants to show, the usefulness of a model is always limited by the intended use - to see a similarity between neurons and humans: the system outcome is not the sum of what each neuron "knows", and each neuron is really "ignorant" and "stupid".

For example, why are people bothered that there are people who are into conspiracy theories? Or some who see dangers everywhere, while for others everything (everyone) is good? Maybe that's their role in the "humanity computer"! Some neurons' (humans') task is to be extra-paranoid so that the majority don't have to, and others are the opposite: nothing bothers them. What seems "crazy" is, when looked at from a higher plane, quite possibly a very reasonable organisation. Maybe "humanity" does not - should not - make sense on an individual level (i.e. every single human being "sensible", "reasonable"), but on the level of "humanity". Why this obsession that everybody has to agree, and why are people who don't vilified? If all neurons in the brain were to agree, you'd have a very dysfunctional brain. What it needs are complex connections and (feedback) loops that enhance and suppress output depending on the overall input. "Overall" is important - not "what an individual (human or neuron) sees", but the sum of all inputs into the system. It does not have to make sense on an individual level.

I like how Sherlock Holmes says it (short, 30 seconds): https://www.youtube.com/watch?v=HuIMmwJbnco

The attempt to understand "humanity" and what's going on on this planet at the individual human level is doomed to fail. The most you can get is a "feeling" that you get it - but if you do, it's wrong, and it's really bad. If you also happen to have some "power" (individuals having too much power is a bad construct), the outcome can be disastrous.

The things the linked article talked about, I used to mention in the context of "magic". You know, what the fantasy books and movies are all about. They have "magic items" - whose main property is that nobody knows what they actually are, how they work, or where they come from. Sound familiar? I don't even have to look at an iPhone.

This wonderful story sums it up very well I think (and please ignore the object that it uses, here "Coke", it's not about Coke, so no need to discuss the merits of overpriced unhealthy sugar-water): https://medium.com/@kevin_ashton/what-coke-contains-221d4499...

Quote:

> The number of individuals who know how to make a can of Coke is zero. The number of individual nations that could produce a can of Coke is zero. This famously American product is not American at all. Invention and creation is something we are all in together. Modern tool chains are so long and complex that they bind us into one people and one planet. They are not only chains of tools, they are also chains of minds: local and foreign, ancient and modern, living and dead — the result of disparate invention and intelligence distributed over time and space.

And look, I don't even have to explain my thoughts myself! Which, if I had been born in the forest away from thousands of years of human experience and exchange with other humans in time and space, I would probably never have developed in the first place. Instead I can go and use a few words of "glue" to link to pieces written by others - that they themselves owe to others.

Think about that in the next discussion about whether high-earning people "earn it"! Do they? Back to the "born alone in the forest" example. If someone develops a Facebook or a Tesla or a Dell computer from such roots, then I agree, they deserve billions. For clarification: I'm not talking about the 1%, I'm talking about the 0.01% (The Economist: http://www.economist.com/news/finance-and-economics/21631129... Some charts in a short video: https://www.youtube.com/watch?v=QPKKQnijnsM)


The problem I usually have with conspiracy theories is that typically they're not pitched as "what-if" scenarios for people to keep in the backs of their minds as remote possibilities. Typically they're pitched as things that people have absolute faith in because it allows them to give in to their worst fears and biases in the absence of evidence.

For a lot of people, conspiracy theories are the thing that lets them not just believe in their own personal boogeymen, but (in their view) can give them a moral obligation to go around making other people believe in their boogeymen as well. For example, 9-11 truthers generally don't just present a handful of odd facts and say "that's odd, I wonder if there's something more going on here". No, they will present a handful (at best) of odd facts and tell you that they are 100% certain about what really happened. They're not saying that the handful of odd facts supporting their argument is proof, rather they're saying that unless you can disprove their (unknowable and likely undebunkable) handful of odd facts, that they must be 100% correct about everything. There are a lot of parallels with religion, and IMO some of the same cognitive processes are at work.

Those sorts of conspiracy theorists aren't offering offbeat counterarguments to prevailing views- those sorts of conspiracy theorists are willfully injecting noise and superstition into the "humanity computer" (or the zeitgeist, or the collective unconscious, or civilization's ongoing internal monologue, or however we choose to think of it). That's my main issue with conspiracy theories (and a lot of religion, and a lot of political discourse...).


> For example, why are people bothered that there are people who are into conspiracy theories?

I upvoted, because you bring up a good general idea if we consider all humanity as holonic items in a larger system. However, in the case of conspiracy, most individuals find their time wasted.


I refer to Nassim Nicholas Taleb and Black Swan events. Or fire insurance. Yes, it probably is wasted. However, that is the point: employ a few (few!) neurons on looking out for the "crazy", the unlikely, the improbable. Hope that it's all wasted.

But just like with insurance, when it is wasted you say "Thank god" (I'm an atheist but I can't think of an atheist phrase :-) ), you don't say "what a waste" (that I paid for insurance). You can be sure you have (quite) a number of neurons that, if you knew what they do, you would consider "waste".

Also, the point is for you, as someone not into that stuff, to ignore them. Not like some of my Facebook friends (former colleagues) who seem to spend most of their day hunting for what they think are examples of the most stupid things humanity can create. I'm not so sure that it isn't they who really are stupid. If they just ignored the "crazy", nobody would even know such people exist. Instead, more people seem intent on bringing the most obscure ideas and idiocies some human somewhere developed into a light they would otherwise never even have had.


Or maybe the cynical and paranoid strain of thought that goes along with conspiracy theories has been historically important, but obsoleted by the sheer scale of our modern civilization and the complexity of the problems we face.

Being irrationally paranoid that the next tribe over is going to attack you might actually pay dividends.

Being irrationally paranoid that Obama is spraying us all with mind control chemtrails... not so much.


> However, in the case of conspiracy, most individuals find their time wasted.

I like this idea too; at the very least it makes me a bit sympathetic to people I tend to dismiss. And I suppose the key word is "most".


I read something similar positing a positive role for aberrant behavior due to mental illness - if everyone eats the strange mushroom, the entire village might die, but if it's just Crazy Darrell then the rest of us can benefit by seeing what happens to him.


I love Eno. Such an influential composer, and yet he doesn't consider himself a musician in the least. He's used his own ignorance of music to spectacular effect. This is a piece the Telegraph did on him a while ago. It's worth reading:

http://www.telegraph.co.uk/music/artists/how-brian-eno-creat...


Also the author of The Microsoft Sound, the WAV that played on Windows startup in 1995.

At the time, and for a number of years afterwards, it was probably the tune played most often per day.


For those interested in what people who actually know whereof they speak have to say on the topic of technological unemployment, and the history of economic discussion of the topic, a solid outline of discussions from Ricardo, Mill, McCulloch, and Neisser is included here:

https://econospeak.blogspot.com/2014/04/the-technology-trap-...


> "For those interested in what people who actually know whereof they speak have to say on the topic"

https://en.wikipedia.org/wiki/Argument_from_authority

I've no problem with learning from others, but pointing to a select handful of people who 'actually know whereof they speak' is not going to help explore the field fully.


Pointing to expertise is not argument from authority.


It depends. In this case, it was the framing that made it the argument from authority, namely "in what people who actually know whereof they speak". The implication is that the sources of truth are limited to a select few people.


Argument from authority is "X is true because Y says it is".

That's not what I claimed.

This is tedious.


Thoughtful article. Eno refers to hidden processes in systems as AI, hinting at their similarity.

I am assuming there are already digital systems in place that monitor the "hidden" processes that make a chicken sandwich, for example (https://www.youtube.com/watch?v=URvWSsAgtJE).

Such systems could monitor data points and possibly forecast the price of a chicken sandwich (or a great many other things).


That AI is not separate from humanity. He accepts that we ask, we get. It serves us. People are worried about the non-we. Global civilisation cannot destroy humanity without destroying itself - unless it creates another human-autonomous AI. The non-we AI.


To me that's a stretch. AI is not a carpet under which you sweep everything you don't know about how the world works.


True AI would learn and teach itself new things. Something a central heating system, a burner, a car, or a wifi router cannot do.


I would call that a natural intelligence.


Seems like an issue of semantics - he's taking something most of us would just call "human culture" and naming it "AI" - which is an interesting idea, but true artificial intelligence is something that involves some amount of self-awareness on the part of the system.


To be further semantic, one could say you're referring to AI automata, not just AI.


It's interesting how we conflate self-awareness with true intelligence. It implies that something cannot be truly intelligent until the point it creates a personality for itself to identify with. I'd argue that a better indication of intelligence is self-directed learning, even if the seed of that desire to learn is programmed in from an outside source. I don't think it's necessary for an AI to form a narrative of self.


I think the article is missing the point of what AI will do by eliminating jobs like transportation and fast food preparation. I think once we get to that point we will have to implement basic income.


AI and basic income in the same comment. I almost forgot I was on HN for a second there.


To be fair, the urgency of figuring out whether basic income will work is largely driven by the concern that the number of available jobs will be reduced due to AI (or automation in general). Reduced job availability is about the only legitimate near-term concern (sub 30 years) of present-day AI advances.

It makes sense that they are discussed in the same comment on an article about whether, through efficiency and specialization, we have already been dealing with the equivalent of AI for thousands of years.


A Haskell mention would've completed the HN trifecta.


Show HN: A Haskell.js webapp to compute basic income levels under different hypothetical AI rapture scenarios


And an honourable mention for Georgist land taxation.


I think Phoenix/Elixir has dethroned Haskell.


Or Go or Rust.

But Phoenix/Elixir does seem to be the new thing. Not that I mind, always happy for the Erlang platform to get more attention.



Classic HN: Trying to capture the depth of topics like Basic Income and AI by sprinkling them over two sentences.

Kidding :)




