I really liked this article because it was a concise demonstration of the 'known' solution problem. A while ago[1] there was a discussion of what folks from the 19th century thought the world would be like in the 20th century. The common theme is that the futurists could never anticipate a change to a technology they didn't have any experience with. The rise of communications as a means of linking people together was completely missed, even by the Dick Tracy types who had wristwatch phones. Why? Because the new medium not only communicated like everything everyone had experienced (telegraph, telephone), it also persisted, through bulletin boards, MUDs, Facebook, and Twitter.
That is why I always try to ask a question my Dad suggested I ask when looking at future predictions: "What is so commonplace that they can't see it changing?" That is so difficult to do. Things like "What would the future be like if there literally was no disease and any physical injury could be patched up quickly?" Or "Energy is suddenly free?" or "Lie detection becomes infallible?" etc. Very tough to do.
I don't disagree, my point wasn't that nobody gets it right, it was that in order to get it right you have to think about the things which you take for granted being different. And that is not a natural act :-). Some of the best science fiction that anticipated the Internet has been stuff that assumed telepathy and looked at issues associated with that. We hit some of those problems with voice activation and bluetooth headsets :-).
One is the classic hard vs. soft sci-fi divide. A classic example in book form: Stranger in a Strange Land reads like a story about 60s boomer hippies from India who meditate and get high and die in an allegory of Jesus, because it is. There's a very thin wrap of search-and-replace science fiction over that hippie story, most famously a giant plot hole where the author wrote in video telephone conferencing without following up on the hole in the plot generated by being able to see the other party. In comparison, hard sci-fi takes into account and weaves a story about the human effects of technology...
The other problem is in the "The authors polled a range of experts". Asking my plumber about theology or my lawyer about physics is obviously dumb, and in pop culture we accept that asking a 50s dude about the 70s is pretty much a waste of time. However, in all fields of human activity, if you want an intelligent commentary on contemporary computer programming you have to either hatch or grow a contemporary computer programmer. The opinions of a dude in another field from decades ago are only accidentally going to be correct. Regardless of who was an expert in automation in the 60s, I'm pretty sure they weren't being asked: they almost certainly didn't have cool, authoritative credentials because the topic wasn't cool enough at the time, so by definition the pollsters were talking to the wrong people. The history of many present-day trends and concepts shows they were not very cool before they became cool, therefore asking contemporary experts on the topic of cool will mostly not work.
>In comparison, hard sci-fi takes into account and weaves a story about the human effects of technology...
I find two problems with the above analysis.
Oftentimes soft sci-fi (PK Dick, for example, or even someone like Fredric Brown) identifies the "human effects of technology" far better than someone from the "hard" camp, such as Clarke or Asimov.
Second, the article specifically covers bad predictions made by hard sci-fi and concerning use and impact of technology itself.
>The other problem is in the "The authors polled a range of experts". Asking my plumber about theology or my lawyer about physics is obviously dumb
Not that obvious to me. That's a great way to overcome systematic bias and domain blindness and get some outsider insight. You talk about being silly to talk to a "plumber about theology", but I've heard that a carpenter is supposed to have made some great contributions to the field, as did some fishermen.
Oh there's some overlap as in all things very little is binary.
Nonetheless, PKD doesn't really tell "human vs technology" stories. Take for example "A Scanner Darkly", which is an interesting psychological thriller and story about chemical addiction. To quote Wikipedia: "A Scanner Darkly is a fictionalized account of real events, based on Dick's experiences in the 1970s drug culture. Dick said in an interview, "Everything in A Scanner Darkly I actually saw.""
"The Man in the High Castle" is alt-history action. It's not really a story about technology.
"Blade Runner" is a film noir action flick with a dose of moralizing about racism and dystopia.
I'd say the thing that defines PKD is technology as escapism. Yeah, OK, "A Scanner Darkly" is about speed freaks, but he wanted to tell a psychological thriller about addiction without people casually tossing it away as "the speed freaks movie", so he invented a whole Substance D mythology to prevent people from taking the easy way out in their analysis. Ditto Blade Runner, where the technology was a cloak to hide aspects of modernity he didn't want to talk about, not a central part of the story.
Sort of like how some fantasy pencil whips away aspects of the modern world the author doesn't like in order to pare down to just what the author wants us to focus on.
Technology as a zoom lens, not what's in the zoom lens. PKD is a cool author, but in many ways he doesn't even write sci-fi.
How about another definition: soft sci-fi can be turned into any other scenery with mere cut and paste, and tada, "A Scanner Darkly" is about elves smoking too much pipe-weed in Tolkien's world. But hard sci-fi requires a major and substantial rewrite to change the scenery, not just cut and paste. KSR's Mars trilogy isn't going to Venus or ancient South America any time soon.
The problem with the carpenter you mention is that out of millions of pro and maybe a billion amateur/ancient carpenters, only that one had much insight. It's a bad odds game.
Those are his more psychedelic works, sure. But take something from Dick like "Second Variety". Or "The Electric Ant". Or "We Can Remember It for You Wholesale".
Or here's his version of the future of IoT, home automation, and subscription payments, which, as someone trapped in an increasing number of "forced down my throat" subscription schemes (Adobe, etc.), I find surprisingly prescient:
The hero in Ubik tries to get out of his own house:
"The door refused to open. It said, "Five cents, please."
He searched his pockets. No more coins; nothing. "I'll pay you tomorrow," he told the door. Again it remained locked tight. "What I pay you," he informed it, "is in the nature of a gratuity; I don't have to pay you." "I think otherwise," the door said. "Look in the purchase contract you signed when you bought this conapt." ...he found the contract. Sure enough; payment to his door for opening and shutting constituted a mandatory fee. Not a tip. "You discover I'm right," the door said. It sounded smug."
>The problem with the carpenter you mention is that out of millions of pro and maybe a billion amateur/ancient carpenters, only that one had much insight. It's a bad odds game.
I find the opposite. All the important contributions to theology and fields such as philosophy have been made by outsiders. Once degrees and tenures started rolling in, the substance starts rolling out. At best those tenured bores can do meta-analyses.
"Regardless who was an expert in automation in the 60s, I'm pretty sure they weren't being asked".
Very much so. In those days if you wanted to know about the future of telecoms you asked telephone engineers. Those guys thought in terms of voice circuits, circuit switching, and paying for fixed bandwidth (e.g. 64kBit/sec) by the second whether you were using it or not. Even when packet switching came along they were still trying to shoehorn it into their existing technical and business model. There were people thinking about packet switched services, but they weren't "experts".
Take a look at Vannevar Bush's paper "As We May Think". It anticipates concepts of hypertext, but the whole thing is shot through with solutioneering about microfilm.
I think now, in some fields, even experts can't predict. Look at what happened with the game of Go, and probably Poker. No expert, except the few who were actually doing the work, thought that a computer Go program could beat a professional human player. At the same time, even with these surprises, we don't know if the singularity is near or very far.
It was an enormous surprise that it happened so quickly, but I do think the Go community thought it would happen in a decade or more. Monte Carlo methods had made a lot of progress, and researchers were having good results with neural nets in 2015. http://senseis.xmp.net/?ComputerGo (Note, I read various things from Go programmers, but don't work on that myself).
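For anyone unfamiliar with what "Monte Carlo methods" meant in computer Go before the neural-net era, the core trick is simple: score each candidate move by playing lots of random games from the resulting position and pick the move that wins those random games most often. Here's a toy sketch of that idea in Python (my own illustration, not anything from the linked page), with tic-tac-toe standing in for Go purely to keep the board logic short:

    import random

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def random_playout(board, to_move):
        """Play random legal moves until the game ends; return the winner or None."""
        board = board[:]
        while True:
            w = winner(board)
            moves = [i for i, v in enumerate(board) if v is None]
            if w is not None or not moves:
                return w
            board[random.choice(moves)] = to_move
            to_move = 'O' if to_move == 'X' else 'X'

    def monte_carlo_move(board, player, n_playouts=500):
        """Pick the legal move whose random playouts win most often for `player`."""
        opponent = 'O' if player == 'X' else 'X'
        best_move, best_rate = None, -1.0
        for move in [i for i, v in enumerate(board) if v is None]:
            trial = board[:]
            trial[move] = player
            wins = sum(random_playout(trial, opponent) == player
                       for _ in range(n_playouts))
            if wins / n_playouts > best_rate:
                best_move, best_rate = move, wins / n_playouts
        return best_move

    print(monte_carlo_move([None] * 9, 'X'))  # usually the centre square (index 4)

Real engines wrap this in tree search (MCTS) and, after 2015, used neural networks to decide which moves and playouts are worth sampling, but the evaluate-by-random-playout core is the same.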
This is the premise of 'disrupt', isn't it? That the existing business models or technological solutions are so thoroughly shaken by the appearance of something new that doesn't pass for a mere iteration of the same. Electricity was not directly comparable to steam, despite having some of the same uses at first, cars were quite different from horses, and the packet-switched internet behaves differently from circuit-switched telephone lines.
Part of this phenomenon of even experts being unable to predict the new is that our current technology is often capped by our societal understanding of physics. Before the Wrights and other early pioneers, we knew that powered heavier-than-air flight was possible for light things like birds, but weren't sure whether the same applied to large, heavy frames, or how we'd be able to deliver enough power to the machine while keeping the weight down. But after a couple of early successes, a vibrant field of haphazard experimentation and serious scientific research opened up, and we learned a lot about aerodynamic lift.
Before the transistor, it was difficult to seriously posit that electricity-powered, non-mechanical computers would be ubiquitous not just to the point of every government and large enterprise having them, but also every single person possessing multiple miniaturized versions on their person. The invention of integrated circuits was a transformational achievement: it enabled us to go from having very scarce computing capacity to having a slight excess, and so new programs emerged to harness that ability to do calculation. VR and AR in turn aren't that transformative yet: they're simply video games where a portion of the input comes from the 'real world'.
Today, we're at a UI/UX/AI/ML crisis, where we have immense computing resources at our disposal and still lack an effective ability to communicate our intent with the machine. We dip down to the ancient metaphor of manipulating a pre-set UI with a pointer like a mouse or our fingers or our eyes, or we have to speak audibly to those within earshot so a microphone can capture our command, ascertain some meaning, then map it to one of many predefined actions. These seem like the dark ages. It's not hard to guess that something revolutionary is going to happen in this arena, and will require actual scientific discoveries to make it possible.
Interesting aside: one of RAND's predictions (No. 24) is for "International agreements which guarantee certain economic minima to the world's population as a result of high production from automation", which sounds pretty much like Basic Income, along with some of the same arguments we've heard recently about why it will be increasingly needed. (Not saying I agree or disagree.)
The RAND study is from 1964. There were several experiments with basic income systems in the US and Canada in the 1960s and 1970s, and I would assume that the general idea is even older.
It is. The idea was floated by the US founding fathers as a form of compensation for having all the land parceled up and owned. Further back you have the Levellers, who were essentially proto-communists.
One way to detect suspect predictions is to look at which ones suggest improvements to things we have today without changing them significantly.
For example, people envision self-driving cars that are exactly like the cars we have today, only automatic. They predict that they will still be big, expensive, complicated, multi-person vehicles. I'm sure that category will continue to exist, but why will that remain the norm? The size and complexity of cars are major drawbacks (e.g. storage, traffic, cost). With bicycles, we can see that small individual vehicles have significant advantages due to their size, cost, and simplicity. With self-driving cars, I think there will be room to borrow some of those advantages. Today, we have to build cars with safety as a major concern, but when self-driving vehicles are common, we will not have to. If we are free to relax during a car journey, the ability to travel at high speed may not be such a major concern, either. The typical self-driving car could easily be something inexpensive, small, and simple like the PodRide instead: https://www.youtube.com/watch?v=4lKq1fGtXFM
>If we are free to relax during a car journey, the ability to travel at high speed may not be such a major concern, either.
Most of the people I've spoken to firmly believe self driving cars will be faster, not slower. They'll be safer and more consistent at speed so it's reasonable to trust them to do so safely.
When the train takes 4 hours and I can drive it in 2h30 - I take the car. Just the fact that I get to "relax" is not sufficient.
Most car trips are much shorter than that, although that could change when driving is no longer necessary.
For an occasional long-distance trip, it would make sense to buy a ride on a purpose-built high-speed self-driving vehicle. Kind of like a personal train cabin.
For day-to-day commuting and errands, people could be able to spend much less for a smaller, slower, simpler vehicle that is potentially cheap enough for an average person to own.
When I see things like the RAND 'long-range forecasting' study, I always look to see if they specify ubiquity -- it's very different to assert that we'll have ultrasonic implants for the blind in a laboratory setting vs. being commonplace and woven into the fabric of society.
As William Gibson is fond of saying, "the future is already here — it's just not very evenly distributed." Predicting what will become integral to our daily life is much more interesting/difficult than determining how long it would take to develop a specific technology's proof-of-concept.
Thanks for this piece, it made me recall the joy of spending my afternoons in the library and the vivid dreams I got from science fiction books - I remember reading Verne's "20,000 Leagues under the Sea" as a kid in the 70's and picturing the Nautilus as a nuclear submarine in Victorian clothes.
Back then the cold war was rampant and Brazil was under military dictatorship. Now I'm living in the future, and it is half familiar and half surprising.
Applied AI will be the next "nobody knew how fundamentally things would change."
I don't believe in the singularity (at least as currently envisioned) but I do believe we need to figure out what a society must look like when most human beings are, almost literally, not useful for commercial employment.
Like, the whole tenets of a market economy and capitalist ownership will smash into the tenets of human worth and the function of society, with violence and consequences much greater than we've seen until now.
We can see it coming. It's obviously clear. And nobody in charge is talking about it, much less doing anything about it. A totally avoidable tragedy, walked into on purpose.
If technology makes the cost of supporting a person's life nearly zero, then the wages they'll demand can also approach zero. So a human can undercut a robot worker on cost.
Currently, that doesn't happen because humans are so expensive to maintain - they'd sooner starve to death than work for less money than they need to keep themselves alive. But it's more than before. We do have a huge tertiary sector of the economy which is basically humans being employed to do low-value or unnecessary work because they're so plentiful and cheap.
There's the fabled 'Singularity' (with the capital letter) or the usual meaning: something we cannot predict beyond. That's always a problem in prediction, because even if we can imagine a singularity, we don't know what follows.
This is a bit banal. Nobody can predict second and third order effects reliably because they don't know how to weight one predicted effect vs another to know how they'll intersect and interact.
Extrapolation for any one technology usually works well for a short distance into the future (a decade or so), but the chances of a technology being blindsided by something else entirely rapidly rise over time.
It's not second or third order effects, it's the way that technologies facilitate social changes that trips people up.
The astounding thing about the Internet isn't just that it's possible to do all kinds of science fiction things with it, but that it has no single inventor.
In fact a public internet for shopping, dating, and photo sharing was never even considered as a goal when the core technologies were being RFC'd.
The real value of technology only appears when it intersects with user land. The broader the take-up, the bigger the potential for bottom-up social and political effects.
Telegraphy, radio, heavier than air flight, internal combustion ground cars, TV, and even printing were all defined by the political and social changes they created far more than by the physics that made them work.
So traditionally when someone prognosticates about a new shiny thing they assume their culture won't change and the technology will somehow fit into it.
Far more often, the most interesting thing about new technology is the way it affects belief systems, economic and social activity, and political power relationships.
I'll relate my own observation about the Harry Potter books. The Harry Potter universe has time machines, which we know are scientifically impossible, while in the same universe Hermione has to physically go to the forbidden library to read books. JK Rowling couldn't think of Wikipedia or the internet, which both exist in the real world, but she could think of time travel, which doesn't.
It is a central feature to the Harry Potter universe that technology is overlooked, because magic. That is why everything looks rather antiquated on the surface, because all the advances are made in magic and nobody bothers with other stuff that is perceived to be less effective or important.
Almost every building is rather precariously built and even absurd in its details, because there is no value put on structural engineering; you just magic it all together.
Why ever would they use the Internet? They all perceive their own culture to be superior, and all their advances are bound up in magic.
I think the best one of those I encountered was an SF story where the navigation for FTL was done with slide rules.
But demanding that fiction predict the future is unreasonable and will cause upset for you and the authors. Almost all speculative fiction is based on "what if": imagine if there was something different about the world or our technological capabilities, what stories could we tell about them?
(That's not even counting the SFF that makes no claim to be about the future or reality, but simply brings in elements for dramatic effect. That's why you can hear things in film space battles, which tend to be organised like WW2 carrier/air battles. That's why Harry Potter books make no attempt at consistency.)
Compare and contrast with Diane Duane's _Young Wizards_ books (which, if you liked Harry Potter, you should totally read), where in the later ones the characters are using their Wizard's Manuals as instant messaging/social networking clients. Turns out that books which magically contain the information you need to know have many applications.
Yes, they even go bing when a new message arrives.
> Hermione has to physically go to the forbidden library to read books.
It's a 'forbidden library' because the books in there aren't meant to be available to the general public. You'd expect it to be locked down against scrying and other forms of remote and unauthorized access, in exactly the same way that a military data center would be locked down.
I think, to expand on your point there, you can make the general observation that there is no 'WizardNet' or usage of the internet by any wizard (as far as we know). Further still°, there is a complete lack of computer technology anywhere in the Wizarding World™. Was this done intentionally? Were they supposed to be shunning it? Did they need it? Were they secretly Amish?
That said, I think it would've been dramatically less pleasing for Harry to whip out his Nokia for the camera flashlight, instead of simply uttering 'lumos'.
° Please correct me if I am wrong for my HP knowledge is rather rusty indeed.
I think the in-universe explanation is that magic messes up electric devices, and they are thus not usable in areas with a lot of magical presence. It's not really answered if that could be fixed or not, generally wizards just don't have any clue about how muggle tech works.
Which of course implies that magic is detectable using ordinary electronic equipment (even inadvertently - would the King's Cross Starbucks notice their WiFi dropping out twice a year?)
It also raises the question of how electric something has to be to not work. Flash photography works, so capacitors are seemingly okay. Even a simple one-wire telegraph system would be an improvement on owls.
The owls aren't that magical - they are established to be obstructed by such mundanities as thunderstorms and glass windows. The communication latency is atrocious. To top it off they're sentient beings, presumably in finite supply, who can die in the line of duty. Interception and denial-of-service are both established to be not uncommon problems with owl communication.
Come to think of it, we come across actual magical radio broadcasts at several points. Even magical password-protection of same. Why the hell do they still use owls?
We have strong cryptography and near-instant communication protocols. Why the hell is there a postal service in every country, and why is the quality of the postal service an indication of sophistication rather than lack of development in a country?
> Why the hell is there a postal service in every country
Because delivery of packages to rural areas is terribly unprofitable.
Certain things need to be socialized in order to keep a country connected and enable development.
Without things like the post office, rural electrification, and the Eisenhower Interstate system, vast swaths of the US would have zero economic output.
I'm not sure if this is meant to be rhetorical? Either way - ideally you have postal service AND electronic communication, and the quality of both reflect on the country.
It's a fair question though - anecdotally, the two main uses for the postal service nowadays (in countries with internet access) are package deliveries and unsolicited marketing (possibly print subscriptions factor in).
To my mind, the unique advantage of the Muggle postal service for personal communications, apart from the globally federated physical distribution network - the one that would make me very hesitant to scrap it - is the difficulty of conducting mass surveillance on it.
> Flash photography works, so capacitors are seemingly okay.
It does? But yes, these things are glossed over quite a bit, which is also odd since there are enough wizards who grew up as muggles, should have a fairly good idea of what technology can do, and would miss the tech they'd have to leave behind (EDIT: in a modern retelling, no smartphones and no internet would be quite a jump for kids used to them).
Any suggestions for fantasy that does it better? I can think of Charlie Stross' The Laundry Files (where technology is actively used to harness the power of the paranormal), the Harry Dresden series and the Night Watch series (which shows older wizards struggling with technology, younger ones actively using it, and at one point an army of humans with magical artifacts posing quite a challenge)
Nitpick - Harry Potter is set in the 90s, pre smartphone.
As for story suggestions - well, I don't read a lot of fantasy, but the "fanfiction" Harry Potter and the Methods of Rationality (by LessWrong's Eliezer Yudkowsky) is a deeply amusing take on what havoc might have been wreaked in the Harry Potter universe by someone who, finding themselves in it, applies the scientific method. (If that's not enough to pique your interest, at one point in the story Harry explores the possibility of using a time-turner to deterministically solve NP-hard problems through the forced creation of stable time loops.)
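For anyone who hasn't read it, the time-turner trick is essentially "consistency as computation": the only stable loop is one carrying an answer that passes verification, so a cheap verifier ends up doing the work of an expensive search. A minimal sketch of that idea for subset-sum (the function names and numbers are my own hypothetical illustration, nothing from the story), where a brute-force loop stands in for what the fictional time loop would get for free:

    from itertools import chain, combinations

    def verifies(numbers, target, subset):
        """Polynomial-time check: does this subset of indices sum to the target?"""
        return sum(numbers[i] for i in subset) == target

    def stable_loops(numbers, target):
        """Brute-force stand-in for the time loop: enumerate every message that
        could arrive 'from the future' and keep only the self-consistent ones,
        i.e. those that pass verification and so would be sent back unchanged."""
        indices = range(len(numbers))
        all_subsets = chain.from_iterable(
            combinations(indices, r) for r in range(len(numbers) + 1))
        for candidate in all_subsets:
            if verifies(numbers, target, candidate):
                yield candidate

    # First self-consistent "message": a subset of indices summing to 15.
    print(next(stable_loops([3, 9, 8, 4, 5, 7], 15), None))  # -> (2, 5), i.e. 8 + 7

The fictional version skips the exponential enumeration entirely, because the universe only permits the consistent outcome; the sketch above just makes it visible what "consistent" means.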
Another good story is Sam Hughes' "Ra", where magic is literally a field of scientific research and engineering. It scratches the same itch as Methods of Rationality (scientific deconstruction of magic), but in a universe built for that purpose. It's awesome.
> It also raises the question of how electric something has to be to not work.
In the Dresden Files books, it's a sliding scale and the titular Harry Dresden drives an old VW Beetle because it's primitive enough to still (mostly) function in the presence of magic.
Strong encryption is another thing that sci-fi authors seem to miss. For example 1998's Moonwar has people sneaking communications on laser beams instead of radio waves because spies won't be able to eavesdrop on the straight-line laser without physically going to its location. That, despite the internet already using encryption for financial transactions in real life.
Yeah but how do you do key distribution and management? The physical security and "forward secrecy" (in the loose sense) afforded by an ephemeral transmission that can only be received in a fixed location is non-negligible [1].
It's not about encryption, and real-world concerns are echoed there. Radio (or other broadcast) gives up lots of signal intelligence: times, location, correlation with other events, etc. A tightly collimated/focused medium (like a laser) reduces those risks while having other benefits, like a receiver detecting interception via attenuation.
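To separate the two concerns being mixed here, a small sketch (assuming the third-party Python `cryptography` package is installed; the message and variable names are mine) of how encryption hides the content of a broadcast but not the traffic metadata the parent is calling signal intelligence:

    from datetime import datetime, timezone
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # shared ahead of time (the key-distribution problem)
    channel = Fernet(key)

    plaintext = b"launch window is 0400"
    ciphertext = channel.encrypt(plaintext)

    # What a radio eavesdropper without the key still observes:
    print("intercepted bytes:", len(ciphertext))
    print("intercepted at:   ", datetime.now(timezone.utc).isoformat())

    # Only the intended receiver recovers the content:
    print(channel.decrypt(ciphertext).decode())

Which is why a point-to-point laser still buys something even in a world with strong crypto: an eavesdropper who can't see the beam doesn't even get the "a message of roughly this size went out at this time" row.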
There's a tiny little bit of a difference between 'the villains kill various members of team good' and whatever this stuff is:
“Hey, Draco, you know what I bet is even better for becoming friends than exchanging secrets? Committing murder.” “I have a tutor who says that,” Draco allowed. He reached inside his robes and scratched himself with an easy, natural motion. “Who’ve you got in mind?” Harry slammed The Quibbler down hard on the picnic table. “The guy who came up with this headline.” Draco groaned. “Not a guy. A girl. A ten-year-old girl, can you believe it? She went nuts after her mother died and her father, who owns this newspaper, is convinced that she’s a seer, so when he doesn’t know he asks Luna Lovegood and believes anything she says.” … Draco snarled. “She has some sort of perverse obsession about the Malfoys, too, and her father is politically opposed to us so he prints every word. As soon as I’m old enough I’m going to rape her.”
Also it's been many many years since I've read the books, but I don't remember a whole lot of killing on the parts of the heroes; a quick check of the Harry Potter wiki confirms that Neville killed Scabior, Molly Weasley killed Bellatrix, and various monsters bit it; I'm also going to choose not to count the obvious other example on grounds of intent.
That was a very clumsily executed attempt at shocking the reader by showing how messed up the Malfoy family is, by indirectly informing the reader that Draco believes he is above the law and has no compassion. I don't think there's much more you can read into it
What do you mean by 'rapey' then? There's no way the book was on Draco's side when he made that comment. And I'm not sure what you mean by 'the whole Bellatrix stuff', but wasn't she a victim of the evil antagonist? If you think those bits were awkward, inappropriate, badly written, ineptly conceived, or whatever, I'm sure you'll find plenty of agreement. But calling the book 'rapey' seems to imply that it displays some kind of pro-rape message or attitude. That's a really harsh claim to make without strong evidence.
edit: maybe you just mean that it contains gratuitous references to rape? I guess that's a fair opinion to hold, but I think it's clear that the Draco thing wasn't included for the fun of it, but was intended to show how messed up his upbringing/society was in a way that would genuinely shock the reader. And I can't remember exactly what happened to Bellatrix, but I also don't remember getting the sense that the book was revelling in the grisly details or anything like that.
The HP books are not always internally consistent with regards to time, but a careful reading suggests that Harry was born in 1980 and so the books take place in the early-mid 90s.
The philosopher Alasdair MacIntyre writes about why the future is inherently unpredictable in his book After Virtue. It's in the context of social sciences but it applies just as well here. There's a decent summary here[0]
> MacIntyre argues that "there are four sources of systematic unpredictability in human affairs" which preclude social science from being like natural science (93). The first is radical conceptual innovation, which can be explained in retrospect, but inherently can only be predicted when the innovation has already occurred. MacIntyre notes that this also means that the future of scientific innovation cannot be predicted, invoking the Church-Turing thesis as further proof. The second source is the fact that "the unpredictability of certain of his own future actions by each agent individually" implies the unpredictability of that agent by any other agent, and hence an aggregate unpredictability to the social world (95). The third source "arises from the game-theoretic character of social life" (97). Social life in fact embodies multiple games, players, and transactions and thus cannot be studied as a single instance, reducing the predictive power of game theory. The fourth source is "pure contingency", the way in which "trivial contingencies can powerfully influence the outcome of great events", such as the length of Cleopatra's nose, or Napoleon's cold at Waterloo (99-100).
I have to say I find this piece pretty disappointing. Slightly smug throughout, which is easy in retrospect, with approximately zero to add to the conversation. The thesis seems to be: The future is hard to predict, and even if you can, it's hard to get the timeline right. No shit. Is this what VCs do with their spare time? Remember granddad and contemplate uncertainty?
I think the point the author was making was that people predicted this would take much longer to become feasible than other things like "radar implants for the blind", which obviously was not the case.
Yeah. Long-haul fiber that never interconnects directly with the PSTN is not part of the PSTN. If I order metro Ethernet over fiber to an ISP with a network that peers at a few public IXs and privately with Comcast, Google, and Facebook, there's no voice framing or call switching underlying most of my traffic.
One of my favorite examples of SF writers underestimating the pace of certain changes is a scene early in Isaac Asimov's Second Foundation -- which is set 1000s of years in our future -- showing the life of a teenage girl before she starts adventuring.
Sitting in the bedroom of her suburban house, she uses the new technology of a voice-activated typewriter to write a paper for history class. Looking over this homework assignment, her father reprimands her for one of her writing choices, and she sullenly revises the paper accordingly.
(The choice, as you will likely recall if you read the Foundation Trilogy, was to brag about her grandmother, the heroine of the previous book.)
It's a little gloomy to see how early most of the predictions are; they were overly optimistic by 10-20 years in most cases.
Has anyone ever tried aggregating popular predictions over time? It would be interesting to see a history of predictions for self-driving cars up to now.
My main take-away from this article was that people often use the underlying paradigms of today (perhaps unknowingly) to predict surface level details of tomorrow. "Asking the wrong questions", as the article byline states.
Instead, we should try to predict how the fundamental paradigms will change, and then deduce visible, commonplace details from that.
I just started reading "The Structure of Scientific Revolutions" by Kuhn, which is all about paradigms (of a specific sort). It's good stuff.
I'm not sure. But I think knowing that the default mode is usually to ask the wrong questions can help us. It provides a new lens for analyzing any ideas that come up: "What assumptions am I subconsciously making here? What current paradigms does the idea depend upon?" etc.
I enjoyed the photo, the family history, and the premise of "asking the right/wrong question". I think that all technologists - or perhaps everyone - can and should take a step back from their daily grind and ask some questions.
I think the wisdom of the crowd can be leveraged in asking questions about the future. There are sites like longbets.org where you can weigh in.
It is one of the great gifts to the human species to be able to contemplate a future beyond tomorrow.
I am going to choose to believe that the wrap-up paragraph, asking "who will have the right kind of driving data for autonomy?" is a subtle wink, because the current conventional wisdom about autonomous cars -- that they are coming soon, and that machine learning on massive datasets is the key to their success -- is very likely going to end up being one of those things we talk about jokingly in 50 years. Ha! Remember when they thought we'd have a colony on the moon in 1990, that machines would have genius IQs by 2000, and that cars could drive themselves by 2025? Oh, the people of the past were so cute.
I am not surprised that telco-centric forecasts would be locked into a circuit-switched mindset.
Even in 96/97, interviewing with a British Telecom board, the three of them (all mainframe guys) just did not get the internet at all - ironically, the board was at 207 Old Street.
[1] https://news.ycombinator.com/item?id=13101643