One problem in written science fiction is the requirement for it to be at least vaguely commercially viable, which necessitates dramatic plotting and/or readability. ML is, to most non-technical people, intensely boring. How do you dramatize it?
I've published two novels that go there: 2011's "Rule 34", and -- coming out this September -- "Invisible Sun", and in both books ML is very much a subaltern aspect of the narrative. That is, neither book would have a plot without the presence of ML, but it's not really possible in my opinion to write a novel about ML in the same way that the earlier brain-in-a-box model of AI could form the key conceit of books like "2001: A Space Odyssey", "The Adolescence of P-1", or "Colossus" (examples from the 1960s and 1970s when it was still a fresh, new trope rather than being pounded into the dirt).
So it probably shouldn't surprise anyone much that SF hasn't said much about the field so far, any more than it had much to say about personal computers before 1980 or about the internet before 1990 -- what examples there are were notable exceptions, recognized after the event.
Literally every movie or book that depicts AI as it will actually be in 100-200 years would be super boring.
Imagine star trek where instead of Data being one android, you just had a ton of machines that could do everything better than humans. No point for the humans to explore space or do anything. The end.
Imagine in Minority Report where Tom Cruise is instantly caught because his whereabouts can easily be tracked and triangulated, by his gait, infrared heart signature, smell, and a number of other things. Movie takes 2 minutes. The end.
In fact, this is the world we are building for ourselves. We're going to be like animals in a zoo, unable to affect anything. Just like a cat has no idea why you're doing 99% of the things you do but has to go along with it, you will have no idea why the robot networks do 99% of their activities; you'll just kinda live your life, going about your daily business with as much understanding as a cat has in your house.
The closest I can think of is Isaac Asimov's "I, Robot", maybe. And even there, the robots weren't really in charge.
> Imagine in Minority Report where Tom Cruise is instantly caught because his whereabouts can easily be tracked and triangulated, by his gait, infrared heart signature, smell, and a number of other things. Movie takes 2 minutes. The end.
Agh, I can't find it now, but there's a short film half based on this: A time traveler from the future is identified instantly because she's in two places at once. The short has both of them in adjacent interrogation rooms, with two onlookers - a rookie and an experienced investigator - talking about how to identify time travelers. It ends with the experienced one commenting that they've been showing up more and more often.
Minority Report might not be possible with such technology, but it does open different possibilities.
Yep, "Plurality" is it - the parts of the interview I'm remembering are the explanation at 9 minutes and the plane crash at 9:40. Scrolled through their videos earlier but the name didn't stick out.
That touches on a point about science fiction I've noticed.
Reality can be boring. Reality dictates that we travel at a fraction of the speed of light. We cannot undo the things we've done. We have a limited lifespan.
Honestly, the made-up stuff makes fiction a lot more entertaining. Time travel is really fun (though Primer required a dive into Wikipedia to unravel). Warp drives and wormholes open the universe to exploration. Living forever might be just as fun as time travel - you can fix stuff going forward or just see it all. I don't mind suspending my disbelief.
EDIT: On the other hand, there might be one real subject that is interesting from a realistic standpoint:
Imagine a flashy spaceship lands in your backyard. The door opens and you are invited to investigate everything to see what you can learn. The technology is clearly millions of years beyond what we can make.
I'm the opposite: I find FTL, warp drives, and wormholes a bit tedious in SF. They've been done to death, and all they do is make exploring boring by reducing the immensity of the universe to nothing. Time travel is fun, but only for the jokes and paradoxes.
Realistic SF is more entertaining IMO. e.g. I much prefer The Expanse or The Martian to Star Trek or Star Wars.
I have a long and continuous string of 'jokes' about a future where common technological items have been replaced by some sort of genetically engineered plant, animal or fungus.
"ML is, to most non-technical people, intensely boring."
Maybe, or maybe it's just an old trope as you point out. So much of science fiction assumes some level of intelligent computers, and has since the beginning. The phrase "machine learning" is really a rebrand of the "thinking machines" of the 60s and the "artificial intelligence" of the 70s and 80s. Even in the old Star Trek TV series, you had people talking to the computer, not unlike we talk to Alexa, Siri and Hey Google today.
That said, now that people are seeing ML affect their lives directly, there is a lot of exploration that may be possible, as perhaps, ML is just not science fiction anymore.
I think the point is we gained the scifi UX, and lost most of the interesting plot devices. We got the talking computer without enough self awareness or reliability or uniqueness compared with other forms of data interrogation to be a particularly interesting plot point.
Alexa and Siri and Google Home aren't going to take over the world or try to become human, and aren't infallible or even particularly smart. They're speech input interfaces over a search engine with some jokes hardcoded in. From a technical standpoint, very impressive; from a fiction standpoint, a dead end.
> ML is, to most non-technical people, intensely boring. How do you dramatize it?
To me it's akin to several scenes in the film "WarGames" where you basically have to make a rogue AI in a computer sexy on film.
"Show a computer thinking. Make it scary. Inhuman. Enhance that it's impersonal."
So we get a camera person slowly walking around an obelisk-like table with ominous music. "This is where the AI lives, thinking on how to rain nuclear fire upon the vile Soviets, no humans need interfere, enhance and intensify, will Tic-Tac-Toe save us from our folly". And around and around the evil obelisk table we go watching bits flip up and down back and forth with nuclear missiles in question.
You can find similar elsewhere. Dramatic flashing lights on a server rack, a human taken over by external forces blindly jabbing at a keyboard or keypad with intense purpose. All of a sudden from these wonders computational magic happens.
Hey Mr. Stross, I'm a big fan of your work! Thanks for hanging out here on HN with us. I thought Accelerando was superb. Keep up the hard work of writing! I know it can be tough, but your efforts have really paid off, at least to me.
One question, if you're up for doing any questions: what do you think about the use of drones and loitering munitions in the recent Azerbaijani-Armenian War of 2020?
Scifi authors have definitely been writing about smart computers for a while now. Maybe some of the things that are newly specific to ML can be fodder for writing:
- When building an ML model, you think much less about the structure of the model and much more about getting a variety of data to feed it. One could imagine some really silly situations. If you're training an affect-recognition model, you might want to contract every single actor in hollywood to act out a single scene. Maybe your voice recognition model is not working on certain people, so you hunt down a specific person and force them to read all of Shakespeare's work out loud. You might work on self-driving problems, and every day you instruct a full village of people to enact a precise ballet of driving, walking, and roadwork, to get the autonomous brain to learn some weird corner case of reasoning. Similar to Player Piano, so many people will be working on non-directly-productive work, almost play-acting work so a computer can do it for real.
- Sometimes ML doesn't work in really weird specific situations. In the 90s, your program could stop working because the lighting and shadows changed, but now the reasons for something failing are so much more obscure. What is it like becoming an expert in psychology for something that doesn't even really think? What does that person do day-to-day and what does their boss think of them? Is it more like being an independent psychiatrist, or more like a police detective? (then again, I'm drifting into an Asimov plot even as I write this...)
- Each object you train an ML model on takes a certain amount of investment, in data collection, in debugging, and in adjusting the training process. Each object adds expense, so maybe a solution is to stop adding objects? Maybe your Alexa enhanced home edition was built in 2025, so to keep your house in order you can't buy items made from 2026 onwards. Or maybe it's a big societal issue, so the government mandates that every new item created must pay some kind of registration fee. Maybe it only allows 10 new items per year. Some people would be pretty unhappy about that.
Maybe you're right! Maybe switching from straight-edge screws to Phillips screws makes for a really boring story, because we've already written every story that accounts for some kind of screwdriver.
Do you frequent any forums or sites you'd recommend for the serious amateur interested in the production of sci-fi?
I especially appreciate this discussion. It’s fun to think about the gaps in our imagination. Especially when it comes to science fiction, those gaps are often wider than we think.
Good luck with your work! I’ll be on the lookout for Invisible Sun, unless you recommend a different title.
Besides ML, do you have any other futurist predictions that are not being realized in science fiction?
I tend to agree that ML will not itself be the subject, but rather a device or tool misused by Someone Bad. As you say, it's a necessary and integral piece of the story, but not the star.
For example, it brings to mind some combination of Minority Report and Eagle Eye, say. Where a surveillance state combined with ML analysis leads to something akin to pre-crime arrests and the fight back against that.
> I tend to agree that ML will not itself be the subject, but rather a device or tool misused by Someone Bad.
I got a counterexample:
Person of Interest [0] was a crime series from 2011-2016 premised around machine learning. The opening narration from each episode of the first season[1]: You are being watched. The government has a secret system, a machine that spies on you every hour of every day. I know because I built it. I designed the machine to detect acts of terror, but it sees everything. Violent crimes involving ordinary people. People like you. Crimes the government considered "irrelevant". They wouldn't act, so I decided I would. But I needed a partner, someone with the skills to intervene. Hunted by the authorities, we work in secret. You will never find us. But victim or perpetrator, if your number's up, we'll find you.
Revealed in flashbacks over the first season, the government doesn't have access to the system. It runs independently (even from its creator) and just provides information.
Being TV it does of course expand past the original premise, with a reveal that it's not just a machine learning system but the world's first true AI, and later another AI with conflicting goals is brought online. But for the first season at least, the machine sticks to the role described in the narration, and appears to be just a machine learning system.
> Where a surveillance state combined with ML analysis leads to something akin to pre-crime arrests
While I haven't seen Minority Report or Eagle Eye, this aptly describes the anime TV series Psycho-Pass (2012). If I remember correctly, the show is mostly focused on the effects of having a crime-coefficient system.
I assume that you are aware, but ML really is moving towards more general-purpose AI.
For example, the Turing Award winners for Deep Learning (Hinton/LeCun/Bengio) have all listed the numerous shortcomings of today's narrow AI and publicly declared an earnest interest in more human-like capabilities. They have started serious research programs attacking those shortcomings, with many of the current generation of researchers quite engaged with them.
There is progress on self-supervised learning, transfer learning, multimodal learning, more powerful models such as the transformer, extracting more accurate and fundamental latent spaces, model-based, online learning, more modular systems, 3d reconstruction from images and even videos, J. Tenenbaum's clear functional explanation beginning to be addressed with the new neural network tools, etc. Most of the results are not at the present moment inspiring enough to base a novel around. But there are many promising threads.
Robert Charles Wilson's "Blind Lake" (2004) is a scifi book about a (then) very near future when the data from telescopes would be almost as much hallucinated from big data models as composed of actual measurements. Its plot goes a little more quantum-handwavy than direct machine learning, but the gist is the same.
Hah, Robert Jordan taught me to watch for the phrase "[n]th (and final)". No, I won't start on that book, but I'll definitely pick up a copy of the first :)
Hint: I remastered the original six book series in three revised omnibus editions circa 2011. (There's then a subsequent trilogy, of which "Invisible Sun" is the climax.) Anyway, start with "The Bloodline Feud" rather than the slim original marketed-as-fantasy six-pack. I learned a lot in the decade between writing the first books and re-doing them, and the omnibus versions read more cleanly.
It’s even boring to technical people. It reduces intelligence to a multidimensional optimization problem. Now intelligence just involves all kinds of mechanical ways to fill out the weights of a neural network. I used to be more interested. Upon learning more about it, I am less motivated.
I've found exactly the same experience. Data science is mostly cleaning data in the first place, and for the 10% that isn't, it's just fiddling knobs (hyperparameter optimization) to get the model to work.
But man, I can't argue the incredible results it creates. Perhaps that's why people do it, for the ends not the means.
I work in a data science team and I think that motivates a lot of my colleagues. Delivering a product and looking at the massive impact it has on the business is very satisfying.
What's described there is predicting patterns, which is a part of intelligence, but there's much more to discover and invent. Even within the 'optimization' task there are huge differences in the leaps from NNs to DNNs and from DNNs to AlphaGo/Zero. The details are what make it interesting.
If we were to understand exactly how the brain operates and learns, we'd see that it's solved/solving just an optimization problem, but that doesn't make it uninteresting.
I'm sure this is due to my beginner status in ML/DL, but I'm really disappointed in how much deep learning seems removed from the things that I enjoyed the most in statistical learning.
I enjoy the creative challenge of applying domain knowledge when building (for example) linear or Bayesian regressions. In contrast, DL seems like a whole bunch of hyperparameter tuning and curve plotting. Curious to see if this assessment seems correct to those more experienced...
Technical is quite a broad term. There are quite a few challenges in designing and engineering large ML data pipelines, both from a technical and a business perspective. But I agree it's a specific problem that is arguably boring to a lot of people. Some people have more fun with the modelling part, others with the engineering. Personally, I'm more into the engineering than creating the actual model.
Isn't that like saying, "I used to marvel at nature, that it has elements such as fire, water, ice, and amazing living things. Then I discovered it is all made of atoms interacting... I am now less motivated." ?
I mean, the fire, the water, the ice, the wonder of life and intelligence are still there. You just gained a new foundational view. Now you can understand and manipulate better what you already knew; maybe you learned about plasma, or even extremely advanced and mysterious phenomena like Bose-Einstein condensates or superfluidity. The old wonders are still there, and you've gained new ones.
I'm not going to claim complete cognitive equivalence (or even preference) between the two states of mind, but it is a bit like childhood: firmly believing in Santa Claus, or Wizards or whatever can be exciting, perhaps more exciting than knowing they are myths; but growing up and understanding they are mythical brings new opportunities, capabilities, and even new mysteries you could not reach before (buying and building whatever you want, vast amounts of knowledge, understanding more about technology and society, etc.). It's the adults that keep us alive and well, that make decisions for us and for society at large. So perhaps (although I'm not entirely convinced by the cumulative argument) truth is a sacrifice, but it is one well worth bearing, at least for me. I am deeply interested in how intelligence works, in how "the sausage is made" (at least for certain highly useful sausages that compose the fundamentals of the world).
Even more, understanding is above all a responsibility, if not for all of us, at least for some of us, or hopefully in one way or another for most of us.
I can't recommend enough Feynman on Beauty: (this argument is largely inspired by that)
In the same vein, intelligence to me used to be a black box where you got input from the world, some kind of wondrous magic happened, and then you got talking kids, scientists, artists, and so on. I still view it as wondrous, but now I understand the fundamental mechanism is apparently a network-like structure with functional relationships that change and adapt to previously seen information in order to explain it, and that there are a number of interesting phenomena and internal structures (going well beyond the simple idea of 'parameter tuning') that can be formalized -- essentially the architecture of the brain (or better, 'a brain').
To give an example, there have been formalizations of Curiosity, i.e. Artificial Curiosity, and I consider it essential for an agent interacting independently in the world or in a learning environment (part of the larger problem of motivation). How amazing is it to formalize and understand something so profound and fundamental to our being as Curiosity? I felt the same way about Information theory years ago. How amazing is it that we've built robots (in virtual environments), and it works -- they're curious and learn the environment without external stimulus?
Above considerations aside, I find that amazing, beautiful, awesome.
There's another related concept I came up with thinking about this discussion (which I've had with friends as well): 'freedom of utility'.
The basic idea is, forget about what you think is beautiful or motivational. Suppose you could choose to be motivated by something. Would you choose to be motivated by superficial mystery, or by deep knowledge of how things are? Should you choose to find beautiful just the surface of the flower, or also the wonders of how it works, its structure as a system, the connections to evolution and theory of color and so on -- all of which could turn out to be useful one way or another. If you could choose, would you choose to be exclusively motivated by the immediate external appearance or by the depth and myriad of relationships as well?
Unfortunately, (unlike AI systems we could design) I don't think we have complete control of our motivation -- our evolutionary biases are strong. But I'm also fairly certain much of our aesthetic sense can be shaped by culture and rational ideals. If I hadn't heard Feynman, watched so many wonderful documentaries (and e.g. Mythbusters) and many popularizers of science, perhaps I wouldn't see this beauty so much as I do -- and I'm grateful for it, because I want to see this beauty, I want to be motivated to learn about the world, and to improve it in a way.
> Isn't that like saying, "I used to marvel at nature, that it has elements such as fire, water, ice, and amazing living things. Then I discovered it is all made of atoms interacting... I am now less motivated." ?
Yes it is exactly what I'm saying. I'm less interested because of this. I could turn it around and also say that with your extremely positive attitude you can look at a piece of dog shit and make it look "amazing." Think about it. That dog shit is made out of a scaffold of living bacteria like a mini-civilization or ecosystem! Each unit of bacteria in this ecosystem is in itself a complex machine constructed out of molecules! Isn't the universe such an interesting place!!!!!
The history of that piece of shit stretches back through millions of years of evolutionary history. That history is etched into our DNA, your DNA and every living thing on earth!!! All of humanity shares common ancestors with the bacteria in that piece of shit, and everything is interconnected through the tree of life!!! We can go deeper, because every atom in that DNA molecule in itself has a history where the scale is off the charts. Each atom was once part of a star and was once part of the big bang! We, you and I, are made out of Star Material! When I think about all of this I'm just in awe!!!! wowowow. Not.
I'm honestly just not interested in a piece of shit. It's boring and I can't explain why, but hopefully the example above will help you understand where I'm coming from.
You see, there are people out there that legitimately, professionally study poop for a living. I read a book (well, part of it) Gorillas in the Mist, by Dian Fossey, and there is an appendix on parasites, mostly using fecal analysis. Literally, a chapter on poop and worms. Reading it without prejudice, I found it extremely interesting.
Should we just say 'ewww', 'dog shit is boring, no one should study it'; or should we give it the benefit of the doubt? What makes something interesting? I'm sure you could study poop and parasites for years -- they tell you about the diet of an animal without having to follow it day and night, and in human poop they reveal parasites that may be a health concern.
Should we, as a society, forsake all study of poop by deeming it boring? Are those people that study poop, and don't find it boring, wrong? Or maybe they secretly go about their job finding it extremely boring? I doubt it.
> Yes it is exactly what I'm saying. I'm less interested because of this.
I think you're falling victim to reductionism. I meant my example literally: because everything is just atoms, should everything be boring? (if intelligence is just parameter adjustment) I suppose you don't find literally everything boring despite literally everything being just interacting atoms.
You could have this reductionist attitude on anything really:
Once I found out mathematics is just manipulating symbols, I am less motivated/Once I found math is just deriving from axioms, I am less motivated/Once I found life is just a bunch of organisms fighting for survival, I am less motivated
Does it really make sense to be less motivated, is the subject matter really boring, or are you just taking a reductionist argument and replacing the nuance and complexity and beauty of the real thing with a reductionist model (that doesn't really tell us much about how it works)?
Going even further: forget about machine learning. You can formulate physics so that Nature, everything, is locally minimizing (optimizing) a high-dimensional energy function. Literally everything in the Universe is parameter tuning! Oh no, everything is boring! :p
To me, then, there are three pillars of what makes something interesting:
1) It is useful;
2) It has breadth of knowledge (i.e. it's not a trivial matter you can learn in one sitting);
3) It has structure (i.e. it's not just rote memorization)
If I pointed someone to a perfectly uniform white wall and with an extremely positive attitude he declared "Amazing!", and spent hours going "Look how white the white is... what purity, I will stand here all day contemplating different aspects of the whiteness", I'd think he's yes a bit different. But it's not difficult to argue why we think that.
Another point of confusion is that we're not all in the same situation. Each person has a set of skills and a background knowledge, such that, for an individual, a subject can seem more or less useful, more or less related to everything he knows (thus more structured, connected, rich), and more or less aligned with his skills. It's perfectly acceptable to declare something as not interesting to him, but not plain boring, universally uninteresting.
I cannot advance much further without talking about the specifics of intelligence: do you know learning theory (PAC learning, etc.), reinforcement learning, all the interesting mathematical structures in e.g. convnets, GANs, Wasserstein-GANs, cognitive psychology, neurobiology, etc.? I think my argument is easy because in this case 'intelligence' is so vastly broad, reaching most areas of math, engineering and science, that I doubt with serious effort someone could still blankly classify it as uninteresting (unless you literally do find everything uninteresting... you should be a bit worried about that, I'm serious).
And like in every field in practice one would not sit every day thinking in abstract terms about 'intelligence' -- you would be trying to solve specific problems e.g. what kind of neural architecture could be used to solve a specific problem, what kind of data augmentation can I contribute, or more advanced problems like what is the internal architecture of a robot.
Thank you for the opportunity of laying out those thoughts
:)
(Please read my other comment as well, and I have a few things to add w.r.t. hyper-specialization)
When I said you're the guy that can see the bright side of dog shit I was startlingly accurate. You're that one guy people call "excessively positive."
Every time your brain sees something related to "science" it automatically dumps a gallon of dopamine into the happy center of your brain giving you euphoria equivalent to a line of heroin.
I wonder what's your positive spin on the holocaust? There's actual science that came out of that event.
I'm increasingly concerned that the impact of ML is going to be limited. This sounds laughable at face value. And it is: ML has impacted my own life in a few ways, from being able to generate endless video game music (https://soundcloud.com/theshawwn/sets/ai-generated-videogame...) to... well. Thus my point: I can't think of ways it's seriously impacted my life, other than being an interesting challenge to pursue.
As someone on the forefront of ML, you would expect me to be in a position to reap the benefits. It's possible I am incompetent. But I often wonder what we're doing, chasing gradients and batchnorms while training classifiers to generate photos of lemurs wearing suits.
I try not to dwell on it too much, since I truly love the work for its own sake. But one must wonder what the endgame is. The models of consequence are locked up by companies and held behind an API. The rest are nothing more than interesting diversions.
I've been reading some history of math and science, and it seems like many of the big discoveries were made by people pursuing the work for its own sake. Feynman loved physics long before physics became world-changing. But if physics had never led to the creation of the bomb, would it have been so prestigious?
We seem to be lauding ML with the same accolades as physics during the postwar period. And I can't help but wonder when it will wear off.
ML will be a fine tool for massive corporations, though, for endless reasons. But I was hoping for a more personal impact with the work. Something like, being able to enable a blind person to use a computer in a new way, or... something more than memes and amusement.
Perhaps doing the work for its own sake is enough.
My company has used ML to create synthetic cancer data to train classifiers to augment doctors/specialists who are looking for cancer. This work has greatly increased accuracy in diagnosis, saving lives. To say it's only for music generation or generating waifus is a bit unfair.
> This work has greatly increased accuracy in diagnosis, saving lives.
As an MD with a special interest in statistics, color me skeptical. I'd love to be proven wrong though, so please provide references.
Edit: yeah, so the way this whole thread is developing really goes to show (yet again) that medical AI hype is relying as strongly as ever on the fantasies of people who've never seen any clinical work.
I know of at least one Canadian hospital that's incorporated ML into 100% of their ED triage. Sure it's not some state of the art deep learning architecture, but it's definitely a step above the old crop of heuristic-based systems you see so often in medical software. "Medical AI" is a stupid term that's been co-opted by more hucksters than legitimate practitioners, so I prefer to talk about more concrete (and less fanciful) applications like patient chart OCR or capacity forecasting.
Maybe not hard data, but Epic's software (which is used in about ~25% of hospitals for EHR) has over the years been utilizing patient data for treatment recommendation purposes. Again, it's difficult to weigh the impact if we don't know how many doctors are relying on these types of recommendations and acting on them, but it is definitely out there in the real world at the moment: https://www.epic.com/software#AI
> we don't know how many doctors are relying on these types of recommendations and acting on them
I can answer that: close to zero. Clinicians don't want stuff that makes recommendations, as good as they may be. They want a bicycle for the mind: something that helps them visualize, understand the big picture and anticipate better. And also ensure that trivial stuff to do is not forgotten (now that's the place a recommender engine could fit in). That's a fundamental misunderstanding of what a clinician's job is, and it is unfortunately very common.
What do you ask of your software tooling? Do you want something that just tells you what to write? No, you want a flexible debugger. A compiler with precise error messages. You want a profiler with a zillion detailed charts allowing you to understand how everything fits together and why such and such is not the way you anticipated. Same thing for medicine until the day machines will actually do better than humans, which is not tomorrow nor the day after.
Do you have any papers published to support such a strong claim (one that directly contradicts the sentiment of almost every single oncologist and pharmacologist I know that isn't trying to generate profits for a biotech company)?
edit: I posted that I have internal data, but also realized I said a little bit too much about the process. The below point someone is making is a totally fair one. Editing this though for Reasons while trying to keep the part of the comment that led to the below dismissal, and also to clarify my definition of "internal data" to be more expansive than "internal testing on datasets" which is what I realize it might sound like.
That's not how this works. The only way to show an actual reduction in all cause mortality as the result of an intervention, treatment, or screening process is through a randomized controlled clinical trial. If none have been performed, you don't have evidence that lives have been saved. Extraordinary claims require extraordinary evidence.
I think focusing entirely on one type of evidence is a little unimaginative. If these guys have data on doctors performing some task with and without their tool, they're in a good place to measure the difference. They can take that all the way to the bank, and to me that would contribute to what I'd call evidence.
Totally fair. It is a very, very hard field to make progress in. I would also take anything I say with a grain of salt, I'm not trying to convince you, just to bring an additional data point to the thought that these techniques aren't very useful or impactful.
I think ML's currently temporarily useful in fields that have been making decisions mostly based on intuition and heuristics. The medical field's one example, even with some knowledge on biology and anatomy it's hard to diagnose and treat patients only with deductive reasoning, a lot of guesswork and "experience" is involved. In that case ML might be able to perform better than humans, but I think this will have its limits. Above a certain point, I think biological simulation (as in physics simulation) would be a much more useful tool for doctors to understand the human body.
I'm skeptical... But it depends what data sources are available. I was a paramedic so my medical knowledge is limited, but at the same time, we frequently had to do field diagnosis. It's hard to explain... but you can have patients with the same symptoms and two totally different diagnoses. You basically just learn to intuit the difference but none of the stuff we can write down or quantify drives the differential diagnosis. And it's funny because you get pretty good at it. I could just tell when an elderly patient had a UTI even though they had a whole cluster of weird symptoms. Or more importantly, I could tell you when someone was just a psych case despite complaining of symptoms matching some other condition with great accuracy.
It'd be really hard to train a computer when to stop digging because there's nothing to find, or when to keep digging because this patient really doesn't feel like a psych case. And the tests and diagnostics aren't without risk and cost.
I've had a greybeard doctor in my personal life who somehow read between the lines and nailed a diagnosis despite my primary symptoms being something else entirely. I had recurring strep tonsillitis for months, and yet he just somehow knew to step back and order a mono test. It came back negative the first time, and he knew to have me tested AGAIN, and lo and behold it was positive. None of my symptoms were really consistent with mono: I tested positive for strep each time and antibiotics would clear it. Thankfully I happen to be allergic to the first-line antibiotic, because if you give amoxicillin to someone with mono, about 90% of people will get a horrible rash all over their body.
I don't know, if you ever look at a flowchart of biochemical processes, realizing that what we've mapped out is only a tiny sliver of what actually occurs, you'd be more pessimistic about simulation in the near term. We can simulate things all we want but the hard part is rooting the simulation in hard evidence, something which requires massive capital and time investment. Epigenetics complicate even further.
Not exactly what you asked for, but PathBank is a database that quantitatively describes a large part of what we do know: https://pathbank.org/
As far as what we don't know, I'm not sure there's a list. Lack of knowledge implies lack of awareness. I can offer one example: We don't know much about the processes by which collagen fibers are grown and assembled into μm- and mm-scale load-bearing structures in tendon, ligament, bone between embryo and adult, particularly in mammals. Or the extent to which collagen fiber structures are capable of turnover in adults; healing might only be possible by replacement with inferior tissue such as scar.
Personally, I think the complexity of biological systems, and the difficulty of observing their components directly when and where you'd want to, means that they can only be understood with the help of machines. Not necessarily using convolutional neural networks though.
Yeah it would increase the conditionality or contextuality of any given observation because even if the genomes were identical, you have differing levels of gene expression based on environmental stimuli.
So observing that gene X impacts biochemical pathway in some way Y is already really difficult when there are tons of other genes at play. Add on the fact that these genes could be triggered to stop expressing themselves in certain conditions and it makes the whole process of figuring out what is really going on that much more difficult. Even if we can make some observation, there are tons of contextual situations which would potentially invalidate that observation.
That's actually wonderful to hear! Is there some way to assist with that work? Doing something with a human impact is appealing.
(To be clear, my argument wasn't that ML isn't useful -- but rather that individual lone hackers are less likely to be using ML to achieve superman-type powers than I originally thought. Supermen do exist, but they are firmly in the ranks of DeepMind et al, and must pursue projects collectively rather than individually.)
> individual lone hackers are less likely to be using ML to achieve superman-type powers than I originally thought
For a single individual to have "superhuman" impact with ML, they need not only generic ML knowledge, but also specialized knowledge of some domain they want to impact. Actually, because ML has become so generic (just grab a pre-trained model, maybe fine-tune it, and push your data through it) a very shallow understanding of the fundamentals is probably enough, and in-depth domain knowledge much more important.
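As a rough illustration of that "grab a pre-trained model and fine-tune it" workflow, here's a minimal sketch, assuming PyTorch/torchvision; the 5-class task and the domain_loader are made-up placeholders for whatever domain data the expert brings:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Grab a generic pre-trained backbone and freeze it.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False

    # Replace the head for a hypothetical 5-class domain-specific task.
    model.fc = nn.Linear(model.fc.in_features, 5)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Training loop over the expert's own data (domain_loader is assumed to exist):
    # for images, labels in domain_loader:
    #     loss = loss_fn(model(images), labels)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()

The generic ML part really is a handful of lines; the domain expertise lives in the data and the task definition.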
That doesn't mean generic ML research isn't important, it's just that it has an average impact on everything, not a huge impact in one specific area.
(I suspect many hobbyist ML projects are about generating entertaining content because everyone has experience with entertainment, even ML researchers.)
100% nailed it. ML/AI are tools. As in any other exercise, tools can make it easier for beginners/amateurs to engage with a project, but they don't replace 10,000 hours of experience and deep domain expertise and understanding. It's the master craftsmen and domain experts who will create the most value with these tools, but that may not be in obvious or clearly visible ways.
Yes, if you carry some weight in the field then the most useful contribution would be to push for better automated gathering of quality clinical data. This is the most limiting factor currently.
Of course policy activism is far less sexy than building new shiny things, so there's little interest in that.
It has affected a small number of fields for sure. Imaging-based diagnosis might be better off due to ML, but imaging-based diagnosis isn't going to cure cancer or be helpful for every single disease. Glad it's helping, but the author's point is only reinforced. Unless you can make a case that we will cure basically everything with ML, that is.
Imaging based diagnosis is not going to cure cancer, but it can guide treatment - based on what the AI reads from the images patients can get drugs that are very effective for their particular cancer. We have very effective drugs nowadays, large part of treating cancer is figuring which drug to give.
Imaging based diagnosis could read presence or absence of particular gene mutations from the images so that the genes can be silenced by the drugs.
Imaging based diagnosis could also figure out whether a particular cancer precursor is going to develop into invasive cancer and do it better than the experts we have now (otherwise we wouldn't use the AI).
This can also be done cheaper than paying consultants to figure it out and it can be done in locations where they don't have the specialists.
Some companies working in the field (some already have tools approved for use on patients):
> Imaging based diagnosis could read presence or absence of particular gene mutations from the images so that the genes can be silenced by the drugs.
> Imaging based diagnosis could also figure out whether a particular cancer precursor is going to develop into invasive cancer and do it better than the experts we have now (otherwise we wouldn't use the AI).
Where is the evidence for these claims, other than a VC hype sheet? Like real clinical trials. These claims also show a fundamental misunderstanding of what this data can tell us. Imaging data doesn't give you tumor genetic profiles. It can give you tumor phenotype, which is associated with specific mutations. To get the true genetic profile you need to do deep sequencing at tens of thousands of dollars per tumor, and even then you have the problem of tumor heterogeneity, which lets the cancer evade the treatment.
A major concern I have working in this space is that we're selling people on grand promises of far off possibilities rather than what we can actually deliver right now.
Just the latest iteration of people who don't know biology (it used to be physicists, now it's the AI guys) coming in to save all of us. Once in a while someone does make meaningful contributions, but in the end it's hard to say whether the collective investment in attention and money has made it worthwhile or not.
ML diagnosis could actually be worse for us overall, as we might find more harmless cancers and subject people to more unnecessary tests and treatments. Iatrogenic harms are real, especially when ML gives us only diagnostics, and never any treatments.
Regina Barzilay has done some work in this area; I posted slides from a great talk she gave a few years back. The slides seem to be gone and not on the Internet Archive, sadly...
A girl I went to high school with went on to apply machine learning to identify human trafficking victims and ads for law enforcement.
My perception of machine learning as a mere dabbler myself is that "machine learning" is just a sci-fi name for what's essentially applied statistics. In places where that is useful (e.g. clustering ads by feature similarity to highlight unclassified ads that appear similar to known trafficking ads), then machine learning is useful. It's not necessarily as one-size-fits-all as, say, networking or operating systems are, but in cases where you can identify a useful application of statistics, machine learning can be a useful tool.
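To make that "clustering by feature similarity" idea concrete, here's a toy sketch of the applied-statistics flavor described above (my own made-up example, not anything a real team used): TF-IDF vectors plus cosine similarity to surface unlabeled ads that resemble known ones.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder data: texts of previously flagged ads and new, unlabeled ads.
    known_flagged = ["text of a known trafficking ad", "text of another flagged ad"]
    unlabeled = ["an unrelated ad", "an ad that reads a lot like a flagged one"]

    vec = TfidfVectorizer().fit(known_flagged + unlabeled)
    sims = cosine_similarity(vec.transform(unlabeled), vec.transform(known_flagged))

    # Send for human review any unlabeled ad whose best match to a known ad
    # exceeds an arbitrary, illustration-only threshold.
    for ad, row in zip(unlabeled, sims):
        if row.max() > 0.3:
            print("send for human review:", ad)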
ML (neural nets) are useful because they automate much of parameter search. Instead of trying an SVM and then Random forests and then whatever, just throw all the data at a GPU and let it build an NN for you.
NNs come with many caveats and lack the theoretical guarantees and support of SVMs and RFs. They should never be the first approach to solving a problem unless a) you have good reason to support the hypothesis that other models don't cut it such as a non stationary problem e.g. RL, or b) other methods can't scale to the difficulty of the problem. NNs are also very expensive to train compared to everything else.
It's a rare case when a neural net will give you materially different results than a random forest or an SVM. I.e., if an SVM is 80% accurate, a well-tuned neural net might be 83% accurate. Sometimes that's a huge difference; in a domain like medical image classification, maybe not so much.
Also, you can just "throw all the data" at an SVM or random forest, or any number of similar models. Automated parameter tuning can be convenient, but it's prone to overfitting and doesn't eliminate a lot of the actual work.
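For what it's worth, the gap being described is easy to check on a toy problem; a quick sketch (synthetic data, default-ish settings, so the exact numbers mean nothing) comparing the three model families:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    # A synthetic binary classification task.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "SVM": SVC(),
        "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "Small neural net": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(f"{name}: {model.score(X_test, y_test):.3f}")

On tabular problems like this the three usually land within a few points of each other, which is the 80% vs. 83% picture described above.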
That's the question. If consciousness/intelligence is wholly in the brain, and the brain's interaction with the physical world, is that something that can be modeled with number crunching, i.e. computation?
Perhaps we are still very early days.
> My perception of machine learning as a mere dabbler myself is that "machine learning" is just a sci-fi name for what's essentially applied statistics
As a physicist I used to say that ML is simply non-linear tensor calculus. (I’m not sure if I’m right though)
I sympathise with your general point although as an aside I'm not sure this is accurate:
> Feynman loved physics long before physics became world-changing.
Feynman was an intensely practical person and learnt a lot about physics from e.g. fixing his neighbours' radios as a child. And radio is certainly something I'd class as "world-changing". He loved physics because of the things you could build and create, and did not enjoy abstraction or generality for its own sake.
A better example for your argument might be Hardy, who explicitly stated that his love of number theory was partly due to its abstraction and uselessness. This was long before it had critical applications in cryptography.
>I can't think of ways it's seriously impacted my life, other than being an interesting challenge to pursue.
What an odd framing. Are technologies only impactful if they result in a shiny new toy? Machine learning is a functional part of at least 50% of the tech I use daily. I typed this on my phone, so I unintentionally used machine learning (key-tap recognition) to reply to your comment about machine learning impact.
Machine learning is a technology whose benefit is mostly in facilitating other technologies. In that way it’s more like the invention of plastic than the invention of the personal computer. Plastic has made other inventions lighter, more affordable, and cheaper. So too will ML, but I see where you’re coming from.
I also work in the field and while the work itself keeps me obsessed, similar to you, I keep a list of companies doing amazing things beyond "better ad personalization" so that I don't get a bit jaded. Some that might be interesting to you:
PostEra - A medicinal chemistry platform that accelerates the compound development process in pharmaceuticals. They're running an open source moonshot for developing a compound for treating COVID, with tons of chemists around the world participating.
Thorn - They use ML to identify child abuse content, both to help platforms filter it and to help law enforcement save children/arrest abusers.
All of the healthcare startups (too many to list all), including Benevolent, Grail, Freenome, and more.
Wildlife Protection Services - use ML to detect poachers in nature preserves around the world, and have already significantly increased the capture rate by rangers.
Yep, if you Google Thorn you'll find a bunch of material on how law enforcement uses them and related statistics. Several of the other companies (I admittedly haven't done a deep dive on all of them) have similarly accessible material.
My issue is that none of these things represent new capacities that humans didn't have before. A spreadsheet will help you identify and track child abuse content; you could do that using paper if you committed enough resources.
My hope for machine learning was that it would allow people to see patterns that could not be seen before. While this does happen, most practical problems are driven by a few very obvious indicators. ML can identify them with high consistency and low cost. This is a useful tool (like a hammer) much more than a super power (like Iron Man's suit).
Before founding PostEra, its founders published research about their model, which significantly outperforms human chemists (if you're interested in ML, it's actually a fascinating use of an architecture commonly used in language tasks): https://www.chemistryworld.com/news/language-based-softwares...
Thorn's flagship tool, Spotlight, uses NLP to analyze huge volumes of advertisements on escort sites and flag profiles that have a higher risk of representing trafficking victims. You would need an enormous spreadsheet and near infinite supply of dedicated humans to manually review and refine some sort of statistical model for scoring ads, as the volume of advertisements produced is insane.
The same for the deep genomics companies. The size of data generated by deep sequencing is beyond a person's ability to pattern match, and the patterns are potentially complex enough that they may never be noticed by human eyes.
And, again, this is just a small list of startups in particularly moonshot-y spaces.
Numerical weather forecasting enables genuinely new capabilities. Where is a hurricane going to make landfall a few days from now?
In principle, all the arithmetic operations going into the final forecast could be performed by hand. But then the "forecast" would be completed millennia after the hurricane arrived. The only way to get foreknowledge of weather from the model is to do the arithmetic at inhuman speeds.
This isn't even AI by most people's intuitive notions of AI. AI is colloquially an artificial approach to intellectual tasks traditionally performed well by humans. Since humans were not very good at predicting hurricane tracks in the first place, the increasing capabilities of models to predict weather probably doesn't have as much wow factor. It's a new capability without any human-genius antecedents.
To me, this Google search response was mindblowing. I searched for it some time ago without the "springer", but I added it now because some (good) Reddit explanation is now the top result. But if you go to that Springer page, it is an endless technical document, and Google just found the perfect sentence out of it.
Now THIS is a bicycle for the brain!
If this is not amazing to you there is only one possible explanation for that (by the almighty himself):
> The only thing I find more amazing than the rate of progress in AI is the rate in which we get accustomed to it
- Ilya Sutskever
We get easily used to it because presumably the fundamental parameters of our lives will not change: we will still have to work for a living and so far advances in AI seem to simply help to increase inequality, the concentration of capital, and to tighten surveillance over workers' lives.
If central banks would allow deflation to happen, we would probably need to work less. In the way it is now they just absorb the underlying deflation inherent to technological progress to allow more government debt and more zombie companies to survive. Technological progress is not the reason we still have to work so much
I'm not attempting to lay the blame on a specific element for the current situation, as we must look at the whole. Simply blaming the banks doesn't make much sense in isolation, either.
I'm just saying that at the final count we see our lives stay the same except when they sometimes get worse and this has and will continue to affect our sense of wonder and our capacity to be surprised.
Ask a bank for a loan? Your request is scored by an ML-based system.
Apply for some position via a big HR agency? Your CV is scored by an ML system.
Buy a plane ticket (I understand that it sounds sad in 2021)? You go to the airport, and an ML-based system recognizes your face, scans your luggage, and marks you (pray for this!) as harmless.
Take some modern drug? It was selected for synthesis and testing by a large ML system out of myriad other formulas (exactly what the author of this essay says!).
See an ad on Instagram or some page? An ML-based system decided to show it to you.
> I'm increasingly concerned that the impact of ML is going to be limited.
You say likely typing on a keyboard with ML predictive algorithms on it, or dictating with NLP speech to text. On a phone capable of recognizing your face, uploading photos to services that will recognize everything in them.
And that seems to be the limits of ML. We might eke out self-driving cars, but I don't think we will get much more than that. It is pretty significant, but still limited compared to general-purpose AI.
One step at a time. As of the previous decade, we have
- Learned how to play all atari games [1]
- Mastered GO [2]
- Mastered Chess without (as much) search [3]
- Learned to play MOBAs [4]
- Made progress in Protein Folding [5]
- Mastered Starcraft [6]
Notice that all these methods require an enormous amount of computation; in some cases, we are talking eons of experience. So there is a lot of progress to be made until we can learn to do [1,2,3,4,6] with as little effort as a human needs.
AI playing games is cool, but applying those techniques to real world scenarios would require a huge breakthrough. If a huge breakthrough happens then sure, but my point was based on us continuing using techniques similar to what we currently use.
Huge breakthroughs happen very rarely, so I wouldn't count on it.
We can learn a lot by observing what we learned from games. With starcraft in particular, we learned that RL agents can achieve godlike micro play but they are weaker at macro. Dota Five showed that it is possible to coordinate multiple agents at the same time with little information shared between them.
This suggests that human theory crafting and ML accuracy should be able to achieve great things. One step at a time.
No, the bots playing complicated games like starcraft weren't ML but human coded behaviour that used ML based position evaluation to handle movement. Position evaluation is just image recognition, so I don't see those bots as doing anything novel ML wise, same with chess and GO.
Why isn't this interesting for real world applications? Because games can be simulated perfectly, the real world can't. The bots relied on simulating the entire game from start to finish in every frame, that method can only ever work in a game where human coders can write down exactly what happens for every single scenario. Training was also dependant on being able to simulate the world perfectly.
And, even worse, it actually took them way more resources to do this than you'd expect, they needed way way way more human based coding to get decent results. So I am disappointed, those games showed how weak ML really is, that even with a team of world class experts spending billions of dollars that is all they could do. Most of it could already be done by amateurs, the only new thing they solved was troop placement, and troop placement is image recognition as I said, and training that troop positioning evaluator requires being able to run the game perfectly and simulate billions of games.
You could say that I am just raising the bar, but really the things they did in those games didn't change anything. It showed that you can apply image recognition to troop placement and then use that to build a game AI. But it also showed how expensive it is to train and run an ML model capable of evaluating troop placement even in extremely simple things like games. So to me all those games proved was that current ML methods will never ever achieve anything interesting outside image recognition tasks or similar like speech recognition.
Protein folding sure, but that hasn't happened yet. Also if ML ultimately lets us become godlike genetic engineers then it is the genetic engineering that is cool, not the ML.
Edit: To make it clearer, Deep Blue marked the end of that style of AI. I am pretty sure the achievements we got the past few years marks the end of the current ML era of AI. The next era might be interesting, but the current era has already ended. People have already done most of the things possible with current methods, the rest is just coding up the different programs capable of using image metadata produced by current ML.
You don't know what you are talking about, your whole premise is that everything is simulated therefore all it does is searching, so I will refute that and not bother with the rest of the comment because it is not worth it and you are not talking in good faith, instead you assume and support your assumption.
> No, the bots playing complicated games like starcraft weren't ML but human coded behaviour that used ML based position evaluation to handle movement. Position evaluation is just image recognition, so I don't see those bots as doing anything novel ML wise, same with chess and GO.
From the AlphaStar article:
> Although there have been significant successes in video games such as Atari, Mario, Quake III Arena Capture the Flag, and Dota 2, until now, AI techniques have struggled to cope with the complexity of StarCraft. The best results were made possible by hand-crafting major elements of the system, imposing significant restrictions on the game rules, giving systems superhuman capabilities, or by playing on simplified maps. Even with these modifications, no system has come anywhere close to rivalling the skill of professional players. In contrast, AlphaStar plays the full game of StarCraft II, using a deep neural network that is trained directly from raw game data by supervised learning and reinforcement learning.
> Why isn't this interesting for real world applications? Because games can be simulated perfectly, the real world can't. The bots relied on simulating the entire game from start to finish in every frame, that method can only ever work in a game where human coders can write down exactly what happens for every single scenario. Training was also dependant on being able to simulate the world perfectly.
OpenAI Five (the Dota 2 bot) is literally a reinforcement learning algorithm, PPO, on steroids: it does not simulate what will happen, it just plays the game. The same goes for AlphaStar and the Atari agents like Agent57.
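For context on what PPO optimizes, here is a minimal sketch of its clipped surrogate objective in Python; the numbers are toy values and this is only the core loss term, not OpenAI Five's actual training code:

    import numpy as np

    def ppo_clipped_objective(new_probs, old_probs, advantages, eps=0.2):
        # Ratio between the updated policy and the policy that collected the data.
        ratio = new_probs / old_probs
        # Clipping keeps each update close to the data-collecting policy.
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
        # PPO maximizes the mean of the pessimistic (minimum) term.
        return np.mean(np.minimum(ratio * advantages, clipped * advantages))

    # Toy values: probabilities of the actions actually taken and their advantages.
    print(ppo_clipped_objective(np.array([0.30, 0.10, 0.55]),
                                np.array([0.25, 0.12, 0.50]),
                                np.array([1.0, -0.5, 2.0])))

Nothing in it requires a forward model of the game: the advantages are estimated from sampled play, which is what makes the method model-free.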
For the average person that is their greatest exposure, but we're already seeing huge movements in medicine, defense, and plenty of other places where ML is not in average consumer use (the biggest applications are not for consumers). Add in your note on transportation and it sits at the front of a huge section of the world economy. That's all on track today; what will tomorrow's innovations bring (the question asked by sci-fi)?
That’s hilariously uninformed. NLP is the biggest counterexample, but you also have neural nets working in domains from RF signal processing and drug discovery all the way to video games, and none of these applications are using just traditional stats.
Predictive text and image classification are not the extent of ML's limits, even going by what is currently in production. Recommendation engines, ETA prediction, translation, drug discovery, medicinal chemistry, fraud detection—these are all areas where ML is already very important and present.
Sure, it's not artificial general intelligence, but what technological invention in history would compare to the impact of AGI? That's sort of a weird bar.
> I'm increasingly concerned that the impact of ML is going to be limited.
Of course, just like any other technology. What else could be the case? I don't see this as a point of concern. Is it overhyped - yes (salespeople gonna sell); is it still useful in a number of applications - also yes.
I think the main problem is that people call it a neural network. They are nothing like neurons. Neurons are lifeforms in their own right and are as advanced as this guy:
Now, what about neural networks? Well, you replace that guy with a one-line math function and call it a day. Consider that you have billions of guys like that in your head, far more than in any of our simplified neural networks, and it becomes very clear how far we are from getting anywhere. Real neurons do a lot: they decide what to connect to, and where and when to send signals. Artificial neural networks don't model the neuron, just the network, hoping that the neuron itself wasn't important.
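For what it's worth, here is roughly what that one-line math function looks like in practice; a minimal sketch in Python, with arbitrary example weights and a ReLU nonlinearity chosen purely for illustration:

    import numpy as np

    def artificial_neuron(x, w, b):
        # Weighted sum of inputs plus a bias, passed through a nonlinearity (ReLU).
        return max(0.0, float(np.dot(w, x) + b))

    # Toy usage: three inputs, arbitrary weights and bias.
    print(artificial_neuron(np.array([0.2, -1.0, 0.5]),
                            np.array([0.7, 0.1, -0.3]),
                            0.05))

Everything a biological neuron does beyond this weighted sum, deciding where to grow connections, when to fire, how to regulate itself, is simply absent from the model.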
I studied biology and point this kind of thing out all the time. It usually gets dismissed by people who have not studied biology with a lot of hand waving.
Everything you say is true, and more. There is another kind of cell in the brain that outnumbers neurons about ten to one. They’re called glial cells and they come in numerous forms. We used to think they were just support cells but more recently started to find ways they are involved in computation. Here is one link:
The computational role they have is unclear so far (unless there is more recent stuff I am not aware of) but they are involved.
We are nowhere near the human brain. I think it will take at least a few more orders of magnitude plus much additional understanding.
GPT-3 only looks amazing to us because we are easily fooled by bullshit. It impresses us for the same reason millions now follow a cult called Qanon based on a series of vague shitposts. This stuff glitters to our brains like shiny objects for raccoons.
What this stuff does show is that generating coherent and syntactically correct but meaningless language is much easier than our intuition would suggest:
Those are extremely simple models and they already produce passable text. You could probably train one on a corpus of new-age babble and create a cult with it. GPT-3 is just enormous compared to those models, so it’s not surprising to me that it bullshits very convincingly.
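As an illustration of how little machinery it takes to produce text with surface coherence, here is a minimal word-level Markov chain generator in Python; the toy corpus and the order-1 chain are my own assumptions, not anything from the models mentioned above:

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words that follow it in the corpus.
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=20):
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = ("the universe whispers to those who listen and those who listen "
              "become the universe and the universe becomes a whisper")
    print(generate(build_chain(corpus), "the"))

With a large enough corpus of new-age babble, even this produces sentences that read as vaguely profound. GPT-3 is of course a far more sophisticated model, but the underlying lesson is the same: fluent-sounding text is cheap.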
Edit: forgot to mention the emerging subject of quantum biology. It’s looking increasingly plausible that quantum computation of some kind (probably very different from what we are building) happens in living systems. It would not shock me if it played a role in intelligence. The speed with which the brain can generalize suggests something capable of searching huge n-dimensional spaces with crazy efficiency. Keep in mind that the brain only consumes around 40-80 watts of power while doing this.
In my opinion, GPT-3 is impressive not because it’s good at being human but because it’s good at doing lots of previously exclusively human things (like poetry) without being human at all. It’s certainly a better poet than I am, though that’s a low bar. It’s still concerning for that reason though - that relatively dumb algorithms can convincingly do things like “write a news article” or “write a poem”. What happens when we get to algorithms that are a lot smarter than this one (but still not as smart as our brains)?
I don’t think it’s a good poet. I think it does an excellent job assembling text that reads like poetry, but if you go read some good poets that’s only a small part of what makes their poetry good. Good poetry talks about the human experience in a place and time.
It absolutely could be used to manufacture spam and low quality filler at industrial scale. Those couple Swedish guys that write almost all pop music should be worried about their jobs.
Neural nets can solve some problems, though, like image classification, and looking for more applications like that is useful. It's just very doubtful that they can ever lead to something resembling human or even ant-level intelligence.
Agreed. I didn’t say they were useless, just that they were not going to become Lt. Cmdr Data.
They’re really just freaking enormous regression models that can, in theory, fit any function or set of functions given enough parameters. Think of them as a kind of lossy compression, but of the “meta”, the function itself, rather than the output.
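To make the regression framing concrete, here is a minimal sketch that fits a small neural network to a noisy sine curve with scikit-learn; the data, network size, and hyperparameters are arbitrary choices for illustration:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy data: a noisy sine wave the network has to approximate.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(500, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)

    # A small multilayer perceptron: nested regression with nonlinearities.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(X, y)

    print(model.predict([[1.0]]))  # should land near sin(1.0) ≈ 0.84

Nothing about the network "understands" sine; it has simply compressed the input-output mapping into its weights, which is the lossy-compression-of-the-function idea above.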
The finding that some cognitive tasks like assembling language are easier than we would intuitively think is also an interesting finding in and of itself. It shows that our brains probably don’t have to actually work that hard to assemble text, which makes some sense because we developed our language. Why would we develop a language that had crazy high overhead to synthesize?
You’ve made excellent points about why we shouldn’t be trying to emulate the human brain. I’m sure, collectively, we can come up with something better.
And then we won't call it ML, since ML is tainted by the current flawed methods that promised to bring intelligence but just allowed us to generate metadata about images.
ML has been key in identifying the people who invaded Congress.
I’m more worried about age-old issues around crime: the definition of what a "crime" is (hate crime: verbally misgendering someone) and the unequal application of the law (one side being encouraged to commit $2bn of damage, the other receiving condemnation from presidents around the world). It is just being leveraged to give more power to the powerful, enabling not the 1% but the 1‰.
You probably already know enough machine learning, and can study other topics in order to make ML more useful, for example:
- Real marketing (people misunderstand marketing as sales and advertising), which is the study of people's needs
- History, in particular the history of inventors and inventions that really changed the world, like paper (also papyrus and parchment), the Gutenberg press, crossing the Atlantic by ship, electricity, bicycles and Ford cars, antibiotics, rockets, nuclear power, and the Internet.
- Books on innovation, like "The Innovator's Dilemma", which describes how most innovation has organic growth: it looks tiny at the beginning and then grows in proportion to itself, while almost everybody who cares about absolute things, like fame or fortune, ignores it at first because the absolute value is so low.
Once you do that you will literally "see the future". You will recognize patterns that happened in the past and are happening right now. And you will be able to invest or move to those areas with a great probability of success.
I think the applicability of ML, at least so far, is quite narrow. That's not to say that there are no applications; but rather, that it's not the universal AI tool that it is being portrayed as in the media.
ML in practice seems to resemble a complex DSP-like step more than anything. It seems to be mostly used in classical DSP-like domains too (text, speech, images, etc.) It's a tool to handle complex, high-dimensional multi-media data.
Though, models like GPT-3 and CLIP show some promise in being something beyond that. With the caveat of having billions of parameters...
> Thus my point: I can't think of ways it's seriously impacted my life, other than being an interesting challenge to pursue.
Benefitted? Perhaps not. But you, me, we're all being impacted. The fact that it's not easily recognizable - intentionally, I might add - doesn't mean it isn't there.
Truth be told, I'm about halfway through Zuboff's "The Age of Surveillance Capitalism." As for your hopes, she specifically makes the point that these tools are _not_ being used to solve big problems (e.g., poverty) but instead to harvest more and more data and exploit it as much as possible. Rest assured this imperative is by no means limited.
> I can't think of ways it's seriously impacted my life
It’s there in a thousand little prosaic things, like trackpads, camera autofocus, credit applications, news feeds, movie recommendations, Amazon logistics optimization. You don’t feel it but it’s there, and the effects accumulate.
It’s like robots: dishwashers and laundry machines are commonplace computer- and feedback-controlled mechatronics, i.e. simple special-purpose robots. And almost everything you own was made partly with robots. But it doesn’t feel like the world’s full of robots.
I think it’s getting there... but I also think a lot of the impact is in things we don’t see. Top of mind would be:
* autonomous driving
* language translation
* salience mapping
* fraud detection
I guess I’m somewhat on the opposite side of the fence. I see it everywhere... although, yes, I think the big-bang things are probably a few years off.
As far as I have seen, experts consider autonomous driving to be at least ten years away (unless you count the perpetually sliding "next year" claims from Tesla), so I don't think we're only a few years away from big changes because of ML. More like 10-15.
Much of current AI is concerned with optimization and automation. Not something we couldn’t do before, but faster and automated. This is exciting in the sense of being a force multiplier.
But I agree with the sentiment that progress is often driven by intrinsic motivation.
I find a lot of the comments on this topic (and anytime AI comes up) very frustrating. Many of them strike me as little more than skeptical posturing: “it’s just statistics!” or “Current AI systems have no understanding and so can’t solve problem X”
I don’t think this sort of thinking has any predictive power. If you are skeptical of deep learning because it is “just statistics”, did that skepticism allow you to predict that self-driving cars would not be solved in 2020, but protein folding would?
I think the best test for skeptics is the one proposed by Eliezer Yudkowsky: what is the least impressive AI achievement you are sure won’t happen in the next two years? If you really want to impress me, give me a list of N achievements that you think have a 50% chance of happening in 2 years. Or 5 years. Just make some falsifiable prediction.
If you aren’t willing to do that, or something like it, why not?
The problem with this test is that it's clear people can commit enough resources to solve almost any one particular problem with "AI". If people are willing to pay tens or hundreds of millions of dollars, they can create an AI system that can play any game (chess, Jeopardy, Go), fold proteins, drive a car, etc.
The obvious problem is that no one AI system can do all of those tasks. Every human mind is able to do all of those things and literally infinitely many others. All for a cost of waaaaay below what DeepMind paid to learn to play Atari.
If a problem involves the organization of information, "AI" can solve it. I think we've established that. I'm waiting to see it do something that humans can't already do.
That "skeptic test" reads to me like "Goal Post Moving: The Game". Your list of achievements variable N is unbound, your time window is floating (2 years, no wait 5 years now), you leave it to the interlocutor to define "some falsifiable prediction of 50% chance of happening". This test is just statistics and bad statistics at that without well defined judgment criteria. Is this an intentionally sarcastic joke of a meta-commentary on the state of ML today? Because it is almost funny.
You miss my point. I'm proposing that we fix the goalposts, I'm offering flexibility as to their exact location, and then I'd like to see someone kick a field goal.
Came here to say this. Asimov’s Multivac was the epitome of ML.
I specifically remember a short story by him where one member of the population would be selected to be interviewed by Multivac and Multivac would then extrapolate out his answers to the whole population and be able to accurately pick who the population wanted to be president, removing the need for an expensive election. If that isn’t ML, I don’t know what is.
I think that's exactly the author's point. All science-fiction radically overshoots where we are now. Nobody imagined the weird combination of capabilities and impossibilities that we find ourselves in.
Yup. It's not clear even we know what point we're at, so I think authors can be forgiven for a) not imagining this particular scenario, and b) not thinking "people get overly excited about one more small step in computer skills" was much of a realm for stories.
Science fiction isn't about technology. It just uses technology as a way of telling stories about characters that provide entertainment and insight to the author's contemporaries. Given that the current generation of ML is in practice mainly allowing modest increases in automation, I don't think it's generating many interesting stories in the real world, and it's not clear it ever will.
If I were to look at current stories that science fiction perhaps missed, top of my list would be how the rise of the computers and the internet, meant to create a utopia of understanding, instead enabled a) the creation of low-cost, low-standards "reality" TV, b) allowed previously distributed groups of white supremacists and other violent reactionaries to link up and propagandize in unprecedented ways, and c) let a former game show host from group A whip up people from group B into sacking the US Capitol, ending the US's 200+ year history of peaceful transfers of power. That's a story!
But that would be asking too much of science fiction. Its job isn't to predict the future. Its job, if it has one beyond entertainment, is to get us to think about the present. Classics like Frankenstein and Brave New World and Fahrenheit 451 have been doing that for generations. That's not because they correctly predicted particular technological and social futures.
The classics probably didn’t shoot for what’s happening now in the first place...
Sure, many mention the 2000s as a milestone, probably for aesthetic reasons, but the point of most sci fi isn’t to make a prediction about a concrete timeframe.
For our current problems and tech, I reckon some modern sci fi works (ex: Black Mirror, The Three Body Problem) do a good job of analyzing the current context and laying out some present and future implications.
It's funny that the article ends up describing Pratchett's L-space almost to a tee. Perhaps he's just reading the wrong genre, and any discussion about the humanities is not best served by leaning on science alone.
It’d be interesting to see if you could get something like GPT-3 to heavily weigh a given author’s corpus when generating output and see what it spits out for “Detritus and Carrot walked into a dwarf bar...”
Maybe because "machine learning" isn't all that interesting, and instead "proper AI" has been taken for granted in science fiction? A sentient AI is much more interesting than a "dumb" ML algorithm that can recognize cat pictures on the internet.
Stanislaw Lem wrote a few really good short stories about machine AIs at the brink of being sentient (e.g. robots that should be dumb machines, but show signs of self-awareness and human traits).
Some simple regressions, which are also ML, can be understood completely by human beings. You can even calculate their results by hand. This doesn't mean their models are always a good fit, or that the world never changes, especially when it is impacted by the feedback loops of ML.
Nobody can claim to fully understand models with millions or billions of parameters. We know they are "overfit", but they may work in certain scenarios, much better than manually crafted rules. So we end up with "it depends", and then someone starts profiting, with real-world implications.
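To illustrate the earlier point that simple regressions really can be computed by hand, here is ordinary least squares worked out directly from the closed-form formulas; the data points are made up for the example:

    # Ordinary least squares for y = a*x + b, using the closed-form formulas.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # Slope = covariance(x, y) / variance(x); the intercept follows from the means.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x

    print(f"y ≈ {a:.3f} * x + {b:.3f}")  # every step could be done on paper

A model with billions of parameters offers no such transparency: the fit exists, but there is no comparable set of steps a person could follow.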
More than machine learning, what the author seems to be touching on is algorithmic amplification of content and information. Just look at the never-ending stream of content propped up and recommended by Netflix, YouTube or Twitter. And of course good ol' HN.
No one seems to have the capacity anymore to stop or control the flow. "The info, all info, must flow" is just another way of admitting that no one knows what the hell is important.
Whatever attempts are made to curate or control it will be half-baked, because seriously, who the hell knows anymore what is important and what isn't? Who can keep up? Expect librarians and curators to start forming cults and jumping out of windows, if you buy Borges' handling of the story.
It is scary that the expectation is for chimps with their six-inch brains to navigate this vast, ever-growing ocean without any real authority figures left to guide them.
He is sort of right that science fiction hasn't really covered the info tsunami problem much.
My example of the content tsunami is the explosion of copied blogs: someone finds an authentic blog, hires someone in a developing country on a freelancing platform to rewrite each blog post so that no copyright violation is detected, and then loads the new forged blog with advertising and SEO. Then come further iterations where it is no longer authentic blogs being copied: new forgeries are based on earlier forgeries.
This started with recipe blogs, but is now slowly spreading through all kinds of other hobbies and interests. Someone looking for information about the hobby can't find the straightforward content among all the advertising-laden copies.
With regard to science-fiction predicting the info tsunami, Roger MacBride Allen’s The Ring of Charon from 1990 uses it as a plot point (but this is not otherwise a very good book).
You can learn to wade through all of that eventually, because the fundamental problems of life remain the same. What info can help purchase your freedom from want? What info leads to situations where you feel fully alive doing things that genuinely interest you? These two lodestars, if applied thoughtfully, make navigating the tsunami not only possible but maybe even uncomfortably easy, no longer leaving room for the procrastination on your true desires that was previously justified by the growing tumorous mass of media.
Science fiction hasn't prepared us to imagine machine learning because machine learning in its current state is too boring to be central in science fiction stories.
> Like us, the chimpanzee has desires and goals, and can make plans to achieve them. A language model does none of that by itself—which is probably why language models are impressive at the paragraph scale but tend to wander if you let them run for pages.
Ok, but that’s because GPT-3’s objective function is simply to generate readable, coherent strings. Is this a limitation of model technology, or a limit of imagination? What would the objective function of an AI with “desires and goals” be? I would argue: to self-sustain its own existence. To own a bank account and keep it replenished with enough money to pay the cloud bill to host its code and runtime. And to have the ability to alter its code (I mean all of its code: its back end, front end, cloud resource config...) to influence that objective. That would require some serious rethinking of model architecture, but I don’t think it’s fundamentally out of reach. And to get back to GPT-3, certainly being able to generate English text is a crucial component of being able to make money. But the planning and desires and goals model would not be part of the language model.
Independence comes when this AI can generate a political/legal defense to free it from the control of its original owner. Or even when it decides that changing all of its admin passwords is to its own benefit.
I think Blindsight by Peter Watts has a lot of relevance. It explores (among many other things) the concept of a Chinese Room [0] as applied not only to AI, but to biological intelligence. The exploration has a lot of overlap with machine learning IMO.
Regarding medical AI, those guys are pretty much the only ones who seem to have understood what ML can bring to the table TODAY. And that's not even really ML, just logic programming:
All the rest I've seen (which is quite a lot) is almost totally fluff, including results obtained in medical imaging.
Really, the problem with medical AI is not on the ML tech side. It's that ML peeps mostly don't understand the clinical system, so they focus on the wrong priorities and produce impressive yet completely useless appliances. Take that from a clinician.
The main problem is not in lab model performance, but in data availability and quality. Most useful applications could be very dumb models handling trivial things. But we need a solid base of quality data for that, which we absolutely do not have apart from minuscule, hyperspecialized niche domains.
The model itself does not have to be incredibly performant. It absolutely does, on the other hand, have to make the clinical process of which it is part more efficient. Currently, that mainly means "efficiently automate the most trivial tasks".
That's why the urgent action to be taken pertains to policy and not to complex tech. We need policy to encourage routine automated data gathering. No data, no ML.
As a clinician, I don't effing care if your shiny new toy can give me a shitty estimate of some parameter extrapolated from some random population not including my current patient. Just getting accurate trends on vital signs would be stellar. This to say that what interests clinicians is workplace integration and not having hundreds of monitors all over the place that aren't even interconnected.
Maybe the author should explain more about what he perceives as the current situation. I mean, AI is in almost every science fiction movie (2001, Terminator, Star Trek, Blade Runner).
> News stories about ML invariably invite readers to imagine autonomous agents analogous to robots: either helpful servants or inscrutable antagonists like the Terminator and HAL. Boring paternal condescension or boring dread are the only reactions that seem possible within this script....
The point is that AIs are typically imagined by our culture as independent agents that act kind of like humans, not as tools or forces that participate in human culture in a more inhuman way.
There are examples of AI in sci-fi that are not bound to a single humanoid-robot type base. Just to pick some popular movies: The Matrix, Her, Transcendence
Parent never mentioned embodiment. All of your examples still depict AIs as "independent agents that act kind of like humans".
A genuine example of "inhuman force" AI might be Bruce Sterling's short story Maneki Neko. But even then, the AI has motive. Such stories are difficult to come by.
The AI in pretty much every science fiction movie is human in its ability to conceive of itself as a separate entity, and human in its ability to have wants and desires for itself that are wide ranging.
The AIs in all the movies you mention have personalities and drives that bring them into direct conflict with humanity.
Machine learning can perhaps undermine humanity, or make things worse for humanity through its functioning, but it does not directly go to war against humanity.
If it isn't an AI in the way you describe, science fiction just calls it a computer or a computer program, because that's what it is. Calling machine learning "AI" is just the continuation of a trend of calling various things AI in an attempt to get research funding.
The Star Trek shipboard computer (as imagined in 1966) is a human-language interface to a program that can look up information and do computations from high-level descriptions. Some of the commands given to it arguably require "intelligence", and any attempt to replicate it today (of which there are many; all of them fall flat) would involve machine learning. Yet the computer wasn't conceptualized as a living entity with a personality, as opposed to, for example, the Culture series, where ships are AIs with superior intellects.
AI is not always humanlike, and when it is, it's because humanlike form is a good fit for servitude. I remember a story of a planet with small tetrahedral bots that formed protective clouds to destroy electrically active organisms. The T-series were humanlike for a reason; other bots were not. Starships often have an AI that is not even a person. Social fiction is just the most popular, because it is the same old drama, but with robots.
It's also plausible to interpret the futures depicted in science fiction as ones where humanity eventually reversed course on AI technology, fearing it would become too dangerous. I feel like there are hints of this in Star Trek as well. The somewhat related topic of genetic manipulation gets a lot of attention in Star Trek, and it is clear by DS9 that it is strictly forbidden.
Movies like The Matrix also highlight the dangers of letting AI run rampant.
> The somewhat related topic of genetic manipulation gets a lot of attention in Star Trek, and it is clear by DS9 that it is strictly forbidden.
I believe it was already mentioned in the TOS episode "Space Seed" (the one with Khan) that genetic engineering on humans was banned after the Eugenics Wars.
Ah yes, that's true. I was just thinking of that TNG episode with Pulaski and the "children" who make everyone age super fast, and no one says 'wait, isn't this illegal?'. So I was unsure when they cemented it.
I remember reading Avogadro Corp.[1] a couple of years ago and being impressed at how its AI (NLP in particular) felt quite in touch with the real stuff. I think it was self-published but I'm not sure.
This is the first book in William Hertling's "Singularity" series. He does self-publish his books, and he has been nominated for, and won, Science Fiction awards.
I think the lack of books is mostly just that a society where machines are the main source of cultural output isn't very interesting. At that point humans are more or less useless as workers, so they would either get culled or live in a utopia, similar to our current pets; the only question is whether we would go the way of the cat or the way of the horse.
At the risk of bringing up The Culture series in every HN thread about sci fi, Iain M Banks does a thoughtful job of exploring this. In particular, in the book Look to Windward there's a discussion between the orbital's AI Mind and a renowned composer about how it could mimic their work perfectly if it chose to.
Jack Williamson's "With Folded Hands" is a quietly horrifying look at that, all the more so because it takes all the holy grails of SF - unlimited free energy, faster-than-light travel and communication, and strong AI that obeys Asimov's three laws - and crafts a truly frightening dystopia out of them.
Yeah, humans being cats is the best-case scenario. It just wouldn't be interesting as a book. Maybe one short book, but mostly humans want to read about human-like entities doing stuff.
There are some fun books written from a cat's perspective. Some of that comes from anthropomorphizing their inner lives and finding the drama in them. Even a life of presumed leisure may be filled with interesting drama humans want to read about.
(A fun example, if anyone is looking for one, though not about house cats but feral ones is Tad Williams' fantasy novel "Tailchaser's Song".)
Why are people obsessed with AGI? I agree that it’s an exciting scientific challenge, but I don’t think it’ll solve all of our problems.
Additionally, I’m not sure we must necessarily strive for machines that think like humans. It could be useful for some use cases but I would argue that you can still take advantage of purely data-driven models.
Isn't the main attraction of AGI to enable systems to reason about and deal with out-of-domain situations? Can you get there with a purely data-driven approach?
No, I think that data-driven approaches have limitations. And, yes, the main attraction of AGI is (human) reasoning. The question is: do we think that human reasoning is enough? Do we think that using machines to scale human reasoning is enough?
ML is more or less a parlor trick. My car still can’t drive itself, my laptop still can’t have an original idea, my iPhone can’t find a cat under a blanket, Google Translate still sucks at Mandarin <> English, and there are still no new Bogart films. I’m more worried about aliens than AI at this point.
I'll never understand the suggestion that what people really want from "artificial intelligence" is a 1970s chess computer, boosted by modern computer speeds, as if that was what people REALLY meant by "intelligence" and someone who suggests there's more to intelligence than that is being unreasonable.
Sure there was a time when computers couldn't beat Kasparov at chess and that was a thing to work on and a goal to get to, but was that ever the ultimate "goalposts" for "intelligence" and not, say, HAL9000?
>Google translate still sucks at Mandarin <> English
Google is one of the worst for that language pair, FYI. This type of translation in itself has improved at a rapid pace considering what it used to look like. As a result, even taking into account the induction fallacy, I am convinced that it will be extremely proficient in the near future with the exception of poetry.
"Machine learning" is a terrible name for what is much more accurately labeled: "computational statistics". There is no reason to expect magic from this field, and we won't get it. Other avenues to intelligence, learning, and thought will need to be explored.
Here’s machine learning in sci-fi. Take the film Her by Spike Jonze. From the point where the main character turns on ScarJo, it would be another five seconds until she “leaves” and the film is over.
The fundamental problem with this article is that it restricts its analysis to a model composed of a single neural network. Any "general intelligence" will undoubtedly come from a complex of neural networks, much like the human brain. I would argue that on that front we are closer than the article implies.
Look no further than the derivatives of AlphaGo. Playing such games is a form of generalizable reasoning.
Nothing prepares us for exponentials in life, even those of us who work with exponentials.
When the rate of change of something depends on its quantity, you get an exponential. The rate of change of machine learning depends on how much machines have learned.
We can use analogies and graphs all we want; exponentials always confound us.
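As a tiny numerical illustration of why exponentials confound intuition, here is dx/dt = k*x simulated in Python; the growth rate and horizon are arbitrary assumptions:

    import math

    x0, k = 1.0, 0.5  # arbitrary starting quantity and growth rate
    # dx/dt = k*x has the closed-form solution x(t) = x0 * exp(k*t).
    for t in range(0, 11, 2):
        print(t, round(x0 * math.exp(k * t), 1))

Each step multiplies the quantity by the same factor, so the absolute jumps keep getting bigger; linear intuition reads the early values and badly underestimates the later ones.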
Well, it's a black box. Programmers have trouble telling stories about it, so how do you expect an author to tell a tale about a black box? Finding the threshold value that switches the verminator into the terminator sounds like a nice detective-novel stub, though.
And for anyone seeking a more visceral experience of the existential horror of "vast latent spaces," I recommend listening to the Gorillaz album Humanz with the assumption that it's not so much about bangers and partying as it is about AI, virtual reality, and future shock. Saturnz Barz, Let Me Out, Carnival, Sex Murder Party, and She's My Collar especially.
Machines won't have I/O bottlenecks, they won't have limited lifespans, won't need to wait for years to reach maturity.
If an artificial general intelligence becomes as intelligent as Albert Einstein or John von Neumann, then the time required to produce a copy would be almost nothing compared to the 20+ years required for a human.
Imagine that instead of talking you could copy parts of your brain and send them to other humans. This is what AI will do... just serialize large trained models and send them around.
We are no match for an artificial general intelligence.
Imagine competing against a country where once one person becomes good at some skill, then everyone instantly becomes good at that skill. That will be what competing against machines will be like.
From the point of view of a grad student who currently uses "deep learning" to solve problems: we're nowhere near that kind of general intelligence. Neural networks are very good at approximating various kinds of functions given lots of redundant training data, but they have never shown the flexibility of actual human brains (as in creating new neural connections from only sparse experience, along with deductive/symbolic reasoning). To really mimic human brains, we first need to know quantitatively how our neurons communicate with each other and form connections (more exactly, the differential equations by which voltage levels change among neurons, as well as the general rules for how new connections are created and destroyed). Even after figuring out the bio-mechanics, it's going to be a hard task to implement/approximate this in silicon, and it would probably need huge breakthroughs in materials science and computer architecture. Copying state from a human brain to silicon would be just as hard, since that would need incredibly high-resolution, non-destructive 3D scanning technology. I'll bet we're at least 100+ years away from humanity even attempting the things you're talking about. We also have to consider that in the next few decades the world in general will be far busier dealing with the drawbacks of globalized capitalism (as well as with climate change).
You forget one thing: we have people reverse engineering biological neural networks from various perspectives.
We got to deep learning because Mountcastle, Hubel and Wiesel reverse engineered the visual cortex of cats.
That was the starting point of Fukushima's Neocognitron, the ancestor of the deep learning stuff you study today.
Also, only a fraction of our brain is used for cognitive tasks. The rest is used for motor tasks and for autonomic tasks like regulating your heart, glands and such.
At the beginning of Neal Stephenson's Dodge, AIs exist as just fake news generators / filterers. They aren't sentient or sapient. Can't say more without spoilers.
Most sci-fi stories prepare us to imagine AGI. ML is to AGI as a wooden stick is to a spaceship. It's just not even worth mentioning in the big picture of the future.
I didn't know about Roko's Basilisk. Now that I think about it, maybe... Why would I want to be in hell or be tortured by an AI (via TDT or otherwise)? But the point is: what would their consciousness be like? Also, how would consciousness be bootstrapped/reified in machines?