The interesting idea I read here is that systems originally intended to measure the real world, like a traffic map or a social network, become so influential that they have a major effect on the very thing they measure. This could result in an interesting new steady state, like side roads filled to capacity with people following Waze directions, or fractured networks of filter bubbles. Or, instead of a steady state, the system might be unstable, like the massive volatility caused by algorithmic trading. The “analog” metaphor here is that the user isn’t the consumer of a carefully designed system; they’re one “electron” in a circuit that carries many users, and the overall behavior of the system depends on what they and millions of other users do as they interact with each other. In that way he suggests the outcome is not planned and is probably unpredictable.
Agreed, it takes a while to get there, but the core idea comes toward the end:
The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.
Yes, I don't buy most of what the essay said, but that was bang on, generally. I disagree heavily with the last sentence, but the bit about how we've generally shifted from models->actuality is very, very true. It's acutely obvious when you look for knowledge that isn't in the consensual standard places online, or really try to use Google as a search engine as opposed to a handy-dandy bookmark replacement thing.
Not a specific thing, no. It tends to be trade knowledge or bits of history & beliefs that were printed, or orally known. I have a certain store of oral history regarding a few 20thC movements that was passed down, for instance. But there are no electronic editions of that information - if you find it on Wikipedia, it'll be filtered, somewhat mangled, and altogether minimal in description.
These days when I want to find obscure knowledge, I purchase books written on the topic by scholars; they concentrate the information very well with good sourcing.
I don't think China's being left behind. They have a successful system of censorship. In the West we are slowly being led down a path where we've moved our knowledge and understanding of the world into the hands of private companies who deplatform and delete whole universes of information on a whim. Banning a large subreddit, for example, deletes tens of thousands of man-hours of work from existence.
EC2 and Google Cloud concern me a lot more than Facebook and Reddit. A lot of those censor-able platforms have only been successful because the users believed they were using something different than it actually was (surprise, Tumblr users.) We still have Wikipedia, Archive.org, Bittorrent, Tor, Bitcoin, and so on.
To define China's system of censorship as successful seems premature. We have had similar systems in the past, and while they maintained some form of stability for the status quo, they ultimately hindered those societies' progress.
"It is true that if you have a tyranny of ideas, so that you know exactly what has to be true, you act very decisively, and it looks good – for a while. But soon the ship is heading in the wrong direction, and no one can modify the direction any more." -Richard Feynman
If the search engine is now human knowledge that would explain why everything seems to have gotten dumber in the last few years. We are all google now.
How what we create ends up shaping us. For example, what we think of as "reality" today is the fiction we created in sitcoms and television shows, which shaped our view of the world as we grew up. We dress like "cool" people, we dance like "cool" people, we imitate one another... so in the end we become the fiction.
But to be honest I don't know which way to read the article. Besides the main "emergent / unplanned" idea I'm not sure what his point is. edit: I guess he meant to say that the "digital revolution" promised more control over our lives, and instead it is becoming something completely unplanned.
> like side roads filled to capacity with people following Waze directions
I had a feeling related to this a couple of months ago when, after I had given a friend a ride, she told me she was really amazed at how I was driving around without an always-on GPS map. I realized that for people my age (20s and 30s), driving a car now involves being dependent on GPS maps.
I find that, at least for walking extended distances over unfamiliar territory (I can't drive), having the map is very useful. Not directions, not routing, but at least a map so that I know where point A and point B are, and I can see the possible routes between them. It also helps when mass transit stops are on the map. Digital mapping is a great boon to my travel, because it's so much easier to look at and carry a 9 inch tablet than to have to haul paper maps. If I didn't have clear digital mapping available on my devices, I probably wouldn't go anywhere new because it'd make me a target for either crime or scams. Now, as far as routing... I like looking up directions before I go so that I have a rough idea of where I should go. But I don't use GPS-backed directions en route most of the time.
GPS is particularly remarkable in the way it completely impedes the development of any ability to navigate routes independently. The only technology I've seen with such incredible brain-atrophying power is the calculator.
It is great for being alerted to congestion problems, but many times if you are familiar with the route, you can find optimizations the mapping software can’t.
Yes! Yesterday, driving home from a grocery store five miles away on my usual [fastest] route via the interstate, I told Google maps "home." I guess the AI was having a bad day because it spent the entire ten-minute trip frantically rerouting and giving me new instructions every ten seconds as it tried to route me via city streets alongside and across the interstate's path. I watched the proposed spaghetti-like route on my phone's screen as my car icon cruised along right up the middle. Very amusing. Reminded me of one of those sci-fi films where the robot short-circuits and sparks and speaks word salad.
That might just have been a failure with the GPS -- I've had the same issue before, but the car icon was offset a bit from the highway, so it looked like I was constantly driving on the grass next to it (that must have come as a surprise to the little imps inside my phone doing the navigating).
I don't know if it's used elsewhere, but I think in the '70s George Soros put forward a model of the economy that he called "reflexivity." The central idea is that actors act upon things based on their perception of those things; those things change based on how they are acted upon, which changes how they are perceived by the actors, which in turn changes how the actors act upon them, in a reflexive loop.
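For what it's worth, the loop is easy to caricature in a few lines of Python. This is only a toy sketch of the idea, not Soros's actual model; the update rules and coefficients are made up purely for illustration.

    import random

    # Toy reflexive loop: actors trade on their *perception* of value, their
    # aggregate action moves the price, and the new price feeds back into
    # their perception. Coefficients are arbitrary, purely illustrative.
    price = 100.0
    perceived_value = 100.0

    for step in range(20):
        # Actors buy when they perceive the asset as undervalued, sell otherwise.
        demand = perceived_value - price + random.gauss(0, 1)
        price += 0.5 * demand                                 # action changes the thing itself
        perceived_value += 0.3 * (price - perceived_value)    # the thing changes perception
        print(f"step {step:2d}  price={price:7.2f}  perceived={perceived_value:7.2f}")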
I think this article has an incredibly important point that most people are missing because it is terribly written.
Part of this is that analog vs digital isn’t the right term. Vacuum tubes are completely irrelevant to the point he is trying to make, and confuse the reader.
The point he's trying to make (which unfortunately I'm unable to explain right now better than he can) is about the emergent properties of systems made up of independent entities in a rigid form, and how this is a type of computing that differs from what we usually think of as computing. The example he gives about DNA vs. the brain sort of illustrates this. DNA is coded similarly to computers, and in many ways cells are like computers/programs (that happen to self-propagate according to their software). On the other hand, brains are this other type of "system" computer, made up of a combination of independent actors (neuron cells), a rigid but adaptable structure (the physical structure of the brain), inputs, and information sent between the individual actors. This is very similar to a million computer-regulated human systems where we react to information we receive through the platform, we relay our reaction to the system, and that reaction affects how the system interacts with other individual actors.
> about the emergent properties of systems made up of independent entities in a rigid form, and how this is a type of computing that differs from what we usually think of as computing
A way of viewing it is as an inversion of control between emergent behavior and programmed behavior. When the number of interactions between discrete units of programmed behavior exceeds a certain threshold, the programmed logic starts responding to the emergent interactions instead of dictating them. The system as a whole starts obeying its own logic instead of the one programmed into it.
I suppose you need to see such systems more like ecologies to be studied with techniques not far off from those used to study natural ecologies.
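A toy way to see that inversion, tying back to the Waze example upthread: each driver runs the same simple programmed rule (take whichever road the shared map says is faster), but the map has no model of its own; it is just the aggregate of the drivers' last choices fed back to them. All the numbers below are made up; this is a sketch of the feedback structure, nothing more.

    import random

    # Toy model: N drivers choose between a highway and a side road each "day",
    # based on yesterday's reported travel times. The "map" has no central model;
    # it simply reflects what the drivers themselves just did.
    N = 1000
    highway_base, side_base = 10.0, 15.0          # free-flow travel times (minutes)
    reported = {"highway": highway_base, "side": side_base}

    for day in range(10):
        choices = []
        for _ in range(N):
            # Each driver trusts the map, plus a little personal judgment (noise).
            est_h = reported["highway"] + random.gauss(0, 2)
            est_s = reported["side"] + random.gauss(0, 2)
            choices.append("highway" if est_h <= est_s else "side")
        n_highway = choices.count("highway")
        # Congestion: travel time grows with the load each road actually carries.
        reported = {
            "highway": highway_base + 0.02 * n_highway,
            "side": side_base + 0.05 * (N - n_highway),
        }
        print(f"day {day}: {n_highway:4d} on highway, "
              f"times {reported['highway']:.1f} / {reported['side']:.1f} min")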
Daniel Dennett would phrase it as "people don't design boats, the ocean designs boats". Our technology doesn't just serve our requirements; it is also built to satisfy the inherent requirements of our physical world. However, as our world becomes more filled with technology, the requirements start being altered by other technologies we would like to keep around. In this sense technology evolves and is selected in ways very similar to life itself.
ah, almost like when we have robots building so many products, but the robots are limited in what they can fashion, so we end up mostly using products which are reflections of machine-capabilities?
The buyers in the market choose to purchase products which reflect machine capabilities. No one is stopping people from using products that machines can't make, but people seem to prioritize other qualities, such as lower cost.
None of these ideas are new if you've studied evolution, ecology, economics, complexity, game theory, etc. All those fields are inter-related and study emergent behaviors like these.
Unfortunately few CS people or other engineers have touched these areas.
The more disciplines you dive into, the more it blows you away how hard it is to have an original thought, because as you go through the material they constantly put a name to things you've thought about before but didn't realize were actually a thing.
Getting across a multitude of disciplines is highly underrated.
> Digital computers deal with integers, binary sequences, deterministic logic, algorithms, and time that is idealized into discrete increments. Analog computers deal with real numbers, non-deterministic logic, and continuous functions, including time as it exists as a continuum in the real world.
As a computer scientist/computational neuroscientist, I don't really buy the analogue vs. digital distinction. Basic information theory tells us that any noisy continuous system is equivalent to a discrete system of a certain resolution/bit-depth. As the author goes on to write:
> [...] analog computers embrace noise; a real-world neural network needing a certain level of noise to work.
A few bits are sufficient to describe the output of a real-world neuron. Action potential timing has jitter in the sub-millisecond range, which means that relatively coarse discrete time steps are sufficient for a simulation. Yes, our brains are extremely noisy systems, yet this is exactly the reason why they (in theory at least) can be simulated on a discrete/digital computer.
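To put a rough number on the "few bits" claim: if spike-timing jitter is around half a millisecond and a spike can land anywhere in a window of a couple of tens of milliseconds (both figures are hypothetical round numbers, just for illustration), then timing can carry at most about log2(window/jitter) bits per spike.

    import math

    # Back-of-the-envelope estimate; the numbers are illustrative, not measurements.
    jitter_ms = 0.5     # assumed spike-timing jitter
    window_ms = 20.0    # assumed inter-spike window
    bits_per_spike = math.log2(window_ms / jitter_ms)
    print(f"~{bits_per_spike:.1f} bits per spike")   # about 5 bits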
In practice there is little to be gained from analogue computation, except for a (potentially) reduced energy consumption compared to a digital implementation of the system in question. But on a theoretical level, nothing changes.
Although your overall point is correct, I disagree with this statement. It is in fact only theory that distinguishes between continuous systems and very high resolution discrete ones, because the time-evolution of an ideal computer is nowhere differentiable while the time-evolution of a physical system is everywhere differentiable.
Indeed, it is not only "discrete" and "continuous" that are indistinguishable below a certain threshold. For any data, there are an infinite number of continuous theories that are all indistinguishable from each other, and also are indistinguishable from an infinite number of discrete theories, which are yet also indistinguishable from each other - all that it takes for this to happen is for us to agree that the data doesn't specify anything below a certain scale. Then, every theory that agrees on the large scale will match the data, leaving room for anything you can imagine at the bottom. So discrete vs. continuous isn't the core point.
Yes, thank you for the clarification, my statement is ambiguous.
I meant to say that the theory required to describe a computation on a practical analogue computer (that is noisy) is no different than the theory required to describe the same computation on a digital computer, because they essentially are both discrete systems.
However, as you point out (at least as I understand your first paragraph), when we analyse/build systems on an abstract level we assume (often as a simplification) that they are ideal continuous systems.
> Basic information theory tells us that any noisy continuous system is equivalent to a discrete system of a certain resolution/bit-depth.
Not only that, but the https://en.wikipedia.org/wiki/Bekenstein_bound also tells us that in a finite region of space with finite energy, there's a fixed bound on its information content. So if the brain exists in a finite region of space with finite energy, then it can be described without loss of accuracy by a discrete system.
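For a sense of scale, the bound is I <= 2*pi*R*E / (hbar * c * ln 2), and plugging in very rough brain-like numbers (a sphere of radius ~6.7 cm, mass ~1.5 kg, E = mc^2; these are ballpark assumptions, not a neuroscience claim) lands in the commonly quoted region of ~10^42 bits:

    import math

    # Bekenstein bound I <= 2*pi*R*E / (hbar * c * ln 2), with rough brain-like numbers.
    hbar = 1.054571817e-34   # J*s
    c = 2.99792458e8         # m/s
    R = 0.067                # m, radius of a ~1.3 L sphere (assumed)
    m = 1.5                  # kg (assumed)
    E = m * c**2             # rest-mass energy, J

    bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
    print(f"~{bits:.1e} bits")   # on the order of 10^42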
> Basic information theory tells us that any noisy continuous system is equivalent to a discrete system of a certain resolution/bit-depth.
Technically you are right - but practically you are wrong. 'Technically', any logical or even emotional reasoning can be modeled with if-else structures (maybe with some randomness added). But why are we not yet able to actually create human-like reasoning? Because that approach is not 'practically' useful. That's why the most powerful ML solutions at the moment aren't realized with Prolog, but with neural networks.
> A few bits are sufficient to describe the output of a real-world neuron.
Possibly. But no computer is so far able to even remotely simulate or emulate what is actually going on within a real-world neuron. And that is required to fundamentally understand the output.
> except for a (potentially) reduced energy consumption
And that is quite a big deal, because in a massively parallelized system the energy savings multiply across every unit.
Well, analog computation being noisy could be the reason we experience things like emotion.
It's easy to say emotion has no value, until you see it in action bringing some sense of control to say a family that has gone through trauma or a country through war.
It doesn't look like digital computation (not digital encoding) can produce such outcomes.
We constantly see it: the NSA, Zuck, Wall St, China, etc., have access to ridiculous amounts of digital computational power, yet are surprised on a daily basis by the realization that they aren't in control.
> Well, analog computation being noisy could be the reason we experience things like emotion.
Hm, I don't really see why this should be the case. Emotions are pretty well studied in both animals and humans and are, to put it very handwavingly, merely a global change of brain state/equilibria, for example modulated by brain regions such as the amygdala and/or the release of neuromodulators [1]. From my understanding, there is nothing about emotions that cannot be computed by a digital computer, and there is little about emotions that is related to noise.
I'll let philosophers think about the experience part of your statement.
But all that's needed is a handful of rules to provide for a system that emotes. You'd probably dismiss it as an inauthentic toy, but emotions actually aren't the core aspect of agency.
Anyway, the rules just need to assemble a goal, a threshold for equilibrium, and reactions for deviation from that equilibrium.
Bonus points if you account for radiant measurements of equilibrium. What I mean by that is anticipation of adjacent conditions that signal a probable loss of equilibrium, such that the system doesn’t just react to an unbalanced circumstance, but also things that could lead to an undesired imbalance.
Examples:
A. If the cup is disturbed so that the milk spills, then a negative experience ensues.
B. If a balloon, inflated with ordinary compressed air, sinks onto the grass and pops, a negative experience ensues.
C. Ambulate through an environment obstructed by complex obstacles, and negotiate each obstacle without falling onto the ground. Falling onto the ground will result in a negative experience.
Each of these three goals represents a targeted state of equilibrium: don’t spill the milk, keep the balloon safe, don’t fall down go boom.
Now, layer an array of reactions on top of the branched set of possible outcomes. You can also build up variations on top of each branch.
Positive branches are indicated in moments of success at achieving the goal. Negative branches are indicated upon equilibrium being defeated.
So the computer or robot can externalize its inner state with a happy face or a sad face, but we’re missing some of the emotional range. When would anger display? When the machine can assign blame and consider revenge, of course.
So if an entity (preferably a rival robot, since we wouldn't want the robot to exact revenge on a person) knocks over the milk, pops the balloon, or tackles the robot, the obvious motive is to make sure that never happens again, and the root cause is the rival entity. Stand back up, destroy the entity, acquire more milk and another balloon, and try to achieve equilibrium, and thus happiness, again.
Prior to reacquiring its happy state, the machine can externalize an angry face if it can assign blame to a detected responsible entity; in all other cases, it would simply be sad until it can stand back up, inflate another balloon, and pour itself another glass of milk to protect. If it cannot set things back in order as desired, then it is simply permanently sad (no balloon, no milk, unable to stand or walk), forever.
See how that works? It’s actually not much more complicated than that.
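In case it helps, here is a minimal sketch of that rule set in Python: a goal, a threshold for equilibrium, and a reaction when equilibrium is defeated, with "anger" being sadness plus an assignable blame target. It's deliberately a toy, in line with the point above that you'd probably dismiss it as one.

    # Minimal toy of the rules described above; not a claim about real emotion.
    class ToyAgent:
        def __init__(self, goal_value, threshold):
            self.goal_value = goal_value   # e.g. how full the milk cup should be
            self.threshold = threshold     # allowed deviation before reacting

        def react(self, observed_value, blamed_entity=None):
            deviation = abs(observed_value - self.goal_value)
            if deviation <= self.threshold:
                return "happy face"
            if blamed_entity is not None:
                return f"angry face (blames {blamed_entity})"
            return "sad face"

    robot = ToyAgent(goal_value=1.0, threshold=0.1)        # goal: keep the cup full
    print(robot.react(0.98))                               # happy face
    print(robot.react(0.2))                                # sad face: milk spilled, no culprit
    print(robot.react(0.2, blamed_entity="rival robot"))   # angry face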
In a multi-agent environment an agent has to model its peers as well and learn to communicate to solve goals together. Dealing with other agents is one step up from dealing with objects. Emotion would naturally be linked to the actions of other agents as they affect the completion of one's goals.
Part of your statement is true, and part of it is false.
An agent would need to model the behavior of peers, yes.
But communicate? No. Solve goals together? No.
To coalesce civilization or society? Maybe, maybe not. Socialization among peers is not a prerequisite for agency. Not by a mile.
And certainly not amid a state of nature. Not at all would communication or collaboration become a necessity.
Emotion might become an aspect of investment in hypothetical experiments performed by an agent. Hope that equilibrium might be achieved with less work through communication and collaboration.
But put it this way. A caveman grunts at a wild boar standing on top of a hill. The caveman wishes to discern whether the silhouette atop the hill is potential food, by provoking movement, or an inert object offering the illusory shape of a backlit animal. The boar notices and experiences fear. The boar freezes, hoping the grunt was not directed toward it intentionally.
Is neither an agent? Does the conflict of interests preclude emotion?
The boar models the adversary, and experiences emotion to preserve the equilibrium of staying alive.
The caveman experiences hunger as a loss of equilibrium, which provokes a mixture of anxiety, and unhappiness which may cascade into a malaise or depression as weakness progresses with starvation. The aggression of the hunt is not anger, although anger may arrive incidentally.
Is the grunt communication? Perhaps as much as any tactic might be. Deceptive communication (bird calls, imitating a female in heat to draw male prey) might still be communication, after all.
But to model nature, there must have been a period where some agents seemingly existed without peers. But those agents likely experienced emotion before cognizant sentience and a rich awareness of the potential for sentience within peers, which most likely precedes a capacity to communicate.
That’s just you projecting your impression of agency onto a puppet, based on prior observation of actual animals.
But make no mistake. It is a puppet. It’s a multicore processing circuit with stack pointers, instruction pointers and little else going for it.
It’s your laptop strapped to some motors. It’s not sentient, and has no agency. It’s a guided missile at best. A step above cruise control.
It lacks authority to define where it goes or form a need for continuing to stand. Thus it lacks true agency.
We can ascribe happy/sad to stand/fall, as crude, fundamental binary “emotions” but robots like BigDog are less complicated than amoeboid life found in pond scum.
Consider whether traffic lights are happy or sad, based on whether traffic obeys their signalling. Now consider traffic cameras. Now consider whether an automated ticket for running a red light on camera is an expression of emotion.
You have options, that you prefer to avoid, because you choose not to cope with the inevitability of your own mortality, for emotional reasons. It's simply easier to don the mantle of the soulless worker drone.
But you do have options, and the free will to exercise them. You could rob banks, sell drugs, stay in bed, go on a hunger strike, bootleg intellectual property for fun and profit.
Think of emotion as predicting future positive or negative rewards. The value of a state (or action) is related to the anticipation of reward. The fundamental role of emotion is to select actions. Reward types are basically hard-wired by evolution into our brains and are related to safety, food, companionship, learning, creativity, curiosity, and a few more. They simply identify specific situations and send reward signals to the part of the brain that learns to predict future rewards.
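That framing maps fairly directly onto temporal-difference learning. Here's a minimal TD(0) sketch, with made-up states and rewards, where the prediction error delta plays the role this comment assigns to emotion (positive surprise vs. disappointment):

    # Minimal TD(0) sketch: the value table learns to anticipate reward, and the
    # prediction error delta is the "emotion-like" signal. States/rewards are made up.
    states = ["far_from_food", "near_food", "eating"]
    value = {s: 0.0 for s in states}
    alpha, gamma = 0.1, 0.9

    # A fixed little episode, repeated: approach food, then eat (reward 1.0).
    episode = [("far_from_food", "near_food", 0.0),
               ("near_food", "eating", 1.0)]

    for _ in range(50):
        for s, s_next, reward in episode:
            delta = reward + gamma * value[s_next] - value[s]   # surprise / disappointment
            value[s] += alpha * delta

    print(value)   # anticipation of reward has propagated back to the earlier state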
> It doesn't look like digital computation (not digital encoding) can produce such outcomes.
We have built such systems and they work with enough training, but their training must be as agents in an environment, not as a model training on a static dataset - think AlphaGo. I think AlphaGo has learned emotion related to the world of Go (in this case emotion is related to good-move, bad move, safety and danger), and its human opponents had a lot to say about how it felt to play against it.
Emotion is not something beyond AI agents. It's just how they plan their actions. They might not be human emotions but they are emotions related to their own activity and goals.
I think this is an interesting article, though not from a CS point of view. Basically my takeaway can be boiled down to the brilliantly put “Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around.”
I work with digitisation and automation in a Danish municipality, and I'm actually one of the author's "hidden" hands-on architects. Because I'm a techie in a non-tech world, and because a large part of Enterprise Architecture is the business end, I see the impact on the non-tech-savvy world daily. Digitisation has absolutely changed the way organisations work, and not always in the way we intended. There is a reason AI/ML is so hyped now: we've trained our organisations to utilise Business Intelligence in every aspect of their daily lives. It's more than that though; if you give people a system, they'll use it, and not always in a manner that makes sense. From managers trying to make informed decisions to employees simply trying to do their best.
An excellent example of the dangers popped up last summer. We were reviewing a process scheduled for Robotic Process Automation, to see how suitable it would be and to sum up our benefit realisation prospects. Only it turned out that the process wasn’t really sound at all. In short we had a team of employees who spent a lot of time distributing tasks in Outlook. They even had a colouring system in place, one colour per employee, to make it easy to spot your individual tasks once distributed. Except they had more employees than there are standard colours, and because they didn’t know how to make more colours some people had to share. They wanted it automated, and we could have done that.
Only, the entire thing was just silly. It was even more silly than I've just outlined, because after they had colour-distributed the tasks, each employee would archive their individual tasks in our ESDH system (electronic journaling). During this process, everyone would fill out a standard ESDH form, where you also need to select a responsible employee. So basically they were distributing tasks twice.
No one had questioned this for almost a decade, and these are intelligent workers, mind you; to them it was just how the systems worked. We swooped in, looked at it for about 15 minutes, and then forwarded them to a LEAN consultant who saved their department around 600 yearly hours of needless bureaucracy.
A variant of this used to be called Time and Motion, also known as Taylorism, and is based on the uncomfortable assumption that left to their own devices humans are rather stupid and need to be told what to do. In detail.
There are two elements to this. One is the idea that there's an executive class which is born (or at least educated) to rule, and the other is the not-quite-identical observation that many people lack genuine independent agency, either by nature or because they lack the political/managerial leverage needed to make productive changes.
The personal part of the "digital revolution" was supposed to be a way for people to explore independence, agency, and creativity. It actually turned into yet another scheme by which those who believe they're born to rule can use algorithmic machinery to farm and control the economic and political activity of everyone else (input and output), without the stickiness and inertia of traditional long-term employment. Which was itself another form of farming and control, but with hard-fought humane benefits.
This is a political problem, not a technological problem. It can only be solved with technology where the political situation allows it.
For now it seems to be true that most humans are rule-takers and mimics, not creative innovators or strategic thinkers. I have no idea if that's a genetic limitation or an educational one. It would have been interesting to see what would have happened if personal computing had gone in the direction it was originally supposed to, and education had followed.
I know it seems bleak, but one thing to remember is that no one planned this. There's no omnipotent overlord whose scheme to screen-addict/rent-enslave the human race is finally coming to fruition. Things like Dynamicland, the early retirement people, and yes, HN, are all bright lights in the dark. It's not as immediately profitable to uplift as it is to exploit, but there's no one stopping you from teaching some neighborhood kids how to write a browser extension, or personal finance. More importantly, there's no one stopping them from learning if they want to.
Things aren't perfect, but there are opportunities to spare for those looking.
On the other hand, as an employee who's been on the bottom in that kind of situation, I'm not sure I'd be happy about what you did. Management would be, of course; but I'm talking about the lower-level people. What you really did was take away an excuse for utterly normal bad days, for the little foibles of life, etc. After all, in the old system, if someone on that team comes in with, say, a migraine, and is next to useless for a day while doing what little they can, they can say of a task that missed its deadline that day, "Oh, I share my color with so-and-so, I thought that task belonged to them."
It also prevents the ability to take a short mental break and go from focused to seeing the bigger picture. Outlook is easy to use, and it's usually different in color and UI from most other work-related software. It's not fully relaxing, sure, but those 600 hours a year of calendar management that you cut out are now potentially 600 hours more stress for the team members where they're being pushed to accomplish more, no matter whether they were at their stress limit or not.
Interesting scenarios you line up; it's not how I see it though. The public sector of Denmark has been downsizing everything by 1-2% a year for almost two decades, so it is actually the low-level employees who needed this, because they are terribly overworked. Obviously management is happy to save that many administrative hours, but they actually already reaped the financial benefits years ago, by law. A cynic might see a bigger cause-and-effect relationship in there, but I assure you that both our political system and our bureaucracy are far too inept to come up with such an elaborate mastermind-type scheme.
Also, if you have a migraine so bad that you couldn't work, you'd call in sick. In fact, a good manager would send you home sick if they spotted you. We have paid sick leave in Scandinavia. If you have recurrent migraines, we even have national programs that will compensate your workplace for much of your sick leave.
1) Algorithmic tech products replacing basic services are growing beyond even their creators' control
2) Analog computing will replace these algorithmic products and fix all the problems
I disagree. Not with the "growing beyond control" bit, I think that's clear, but with the analog computing bit.
First, Dyson doesn't explain what he means by analog computing, other than a basic "operates on real numbers and continuous functions." He also doesn't give any clues about what hardware or software for these systems will look like, or specifically how they will outcompete what we have now (Google, FB, etc).
Second, I think he's wrong. The only things that determine the broad direction of society are who has the power and money. This has been true as far back as you go, whether kings or merchant guilds or the citizenry, when enough of them decide to work together for a revolution. And there's one fact Dyson ignores:
No matter how much or little control the owners of these algorithmic tech products have over the direction and consequences of their use, they still get the money.
As long as Google, Facebook, Amazon, and the others are making the money, the unintended consequences of their algorithms dictating the shape of our lives will continue to be irrelevant.
> First, Dyson doesn't explain what he means by analog computing, other than a basic "operates on real numbers and continuous functions."
I think he means that neural networks are not binary, but use analog communication between the neurons. We simulate that digitally, but maybe a system that uses analog by nature is better for artificial neural networks.
No idea if this is actually the case, or if this is what he meant ;).
He is not saying analog computation will "fix all problems". He is saying the computation that is happening that props up a Trump resembles an analog system where noise plays a role.
That fits well with neural nets, where noise plays a crucial role, from the random order of selecting examples into batches, to randomly dropping out synapses, directly injecting noise, or even starting prediction from pure noise (GANs). In RL the epsilon-greedy technique prescribes random actions from time to time, in order to better explore the environment. Noise, when used properly, makes neural nets better and more resilient.
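For anyone unfamiliar with the techniques named here, two of them are small enough to show inline. These are toy illustrations of dropout and epsilon-greedy exploration, nothing tied to the article itself:

    import random

    def dropout(activations, p=0.5):
        # Randomly zero units during training; scale survivors to keep the expected value.
        return [0.0 if random.random() < p else a / (1 - p) for a in activations]

    def epsilon_greedy(q_values, epsilon=0.1):
        # With probability epsilon take a random action, otherwise the current best one.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda i: q_values[i])

    print(dropout([0.2, 0.9, 0.4, 0.7]))
    print(epsilon_greedy([1.0, 3.5, 2.2]))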
> Childhood’s End was Arthur C. Clarke’s masterpiece, published in 1953, chronicling the arrival of benevolent Overlords who bring many of the same conveniences now delivered by the Keepers of the Internet to Earth. It does not end well.
Uhhhh, it didn’t end well because (in the book) humanity was doomed to evolutionarily tear itself apart, and the Overlords knew this. Seems pretty disingenuous to use that fictional scenario not ending well in this context.
> Uhhhh, it didn’t end well because (in the book) humanity was doomed to evolutionarily tear itself apart
Yes and no. Humanity ends as something else takes its place. Quoting from the book (haha of course I just pulled this up on my Kindle):
But there is one analogy which is–well, suggestive and helpful. It occurs over and over again in your literature. Imagine that every man's mind is an island, surrounded by ocean. Each seems isolated, yet in reality all are linked by the bedrock from which they spring. If the ocean were to vanish, that would be the end of the islands. They would all be part of one continent, but their individuality would have gone.
Telepathy, as you have called it, is something like this.
Swap "telepathy" with "telecommunications" and the analogy is–well, suggestive and helpful.
The book is one of my favorites. Lo and Behold is also worth a watch.
Yeah, I don't get the link to Childhood's End at all. He should have gone with Terminator but I guess he was looking for a more sophisticated reference.
As an aside, to anybody reading this that enjoys science fiction: read Childhood's End. It's my favourite book.
Actually, it would be a fantastic comparison if the author had explained it. It's almost as if he heard it somewhere else and repeated it verbatim (which is exactly the case, he's quoting someone else's essay without fully explaining or possibly comprehending it).
The internet presents us with a similar existential threat to human identity as in the book. Like the masks hiding a higher intelligence at the end of the Difference Engine, the apotheosis of the internet threatens to rob us of our individuality but offers the chance of the development of a higher form of consciousness, like the Overmind in Clarke's book.
Mh, the current state of things is not due to technical evolution but to commercial evolution. We have had machines that work under the control of their users, notably Lisp Machines, Alto workstations, and others, and there is no reason but commerce and ignorance that we lost them.
They pushed knowledge, they helped draw a personal evolution path, they solved problems. Today we have something similar with Free Software.
Those ideas are not much accepted by "the big and the powerful," because they mean a truly free market in which only knowledge and work have value, not marketing, and certainly not secret commercial agreements.
So yes, the digital revolution has turned from a free, academic, intelligent project and dream into a new way to subjugate whole populations. This started happening years ago, and it is becoming so evident now that even people totally ignorant of IT are starting to feel it.
A few actors have succeeded in displacing public knowledge into private companies, and in displacing the free market toward something like a Soviet-style planned economy, drawn up not by a dictatorial government but by a few equally dictatorial companies and funds. The next step is to abolish public (paper/coin) money entirely, replacing it with digital payments only; another is to declare "politics" a bad thing to be abolished, replacing it with a corporate council, as the "Continuum" TV series foresaw, and as George Orwell foresaw even earlier with "1984".
Beware one thing: in the past, dictators needed manpower, so "force" was not all in a few hands. In a not-so-far future, manpower will not be needed anymore, so the chance to revolt and gain freedom gets lower and lower.
Imagine a future with autonomous robotic armies, cars, and "smart" devices everywhere. Think about how we could revolt, and against whom. Also think about how we could communicate, tied to proprietary devices and platforms, without pens and paper anymore, without the knowledge of how to write with pen and paper, without knowledge of "past" things like the postal system and newspapers, which we would know about only the way today's people know about the ancient mimeograph and polygraph...
What we may very ironically be witnessing with the recent international populist uprising that seems in part aligned with certain trans-national oligarchical interests is the emergence of the beginnings of an actual global government. Unfortunately it seems to be a global government run by gangsters.
The fact that US Democrats (and a few Republicans) are hung up on "Russia" shows that they don't get it. Russia is as irrelevant as the USA or China to these people. I mean sure some of them are Russian and may have ties to the Russian government, but Russia and Putin are just tools to them as are all other national and governmental powers. They're trans-national and represent an emerging power that has no national loyalty.
We're heading for a world run by gangsters and corporations. William Gibson remains the most prophetic of all sci-fi writers.
I'm surprised no one mentioned cybernetics yet. Author's premise seems to be that digital, centrally controlled systems are becoming pieces of systems made up of self-regulating sets of feedback loops.
The idea certainly isn't new, having been written about in the '50s in Stanisław Lem's "Dialogi". The observation about digital systems is certainly fresh though.
Interesting take on emergent properties of large numbers of simpler things no longer being ‘knowable’ or ‘understandable.’ Even though his analysis of how search engines end up being how we see the world seemed like a tangent to his big idea, it made me think of how knowledge graphs at Google and Facebook grow out of many sources of information and end up being the ground truth for organizing information about the world. The difference is that you can track the provenance of data going into huge knowledge graphs, but you cannot understand the emergent properties of a complex system any more than you can understand how a billion parameters in a deep learning model function.
Terribly written article, but very, very good points (ignoring the whole analog metaphor). Emergent systems are already showing damaging side effects, such as accidentally built-in racism.
The perception that programmers are in control is dangerously inaccurate but I'm not sure how we can go about educating the public without destroying the trust we have left.
I regret it if folks found this comment irritating. I only meant to convey my sense that the article began with rather broad but vague statements, which dissuaded me from finishing it.
Well, true in some contexts: the future could be dangerous, with these self-controlled machines becoming more destructive, or being programmed by a few insane rebels to do drastic harm to humanity.
> It’s a nice train of thought, but it’s not a great essay. Good enough for a bar conversation but I don’t think it is saying much as currently written.
Oh, you must be new to Edge.org, which for as long as I've heard of it has been home to some of the most egregiously pretentious intellectual wankery about computers and technology and shit -- the kind of stuff that The Guy I Almost Was makes fun of. When some of your most level-headed essays come from Eric S. Raymond, it's time to acknowledge that you haven't just misplaced the plot.