I find his thoughts about the potential limits of humanity provocative and really clever. These three passages made a great impression on me:
> And in fact, you could argue that the reason why we’ve generated computational devices is consciously or unconsciously, we’ve come to recognize that our endogenous, organic computing power is not up to the task and we have to recruit machines to represent culture, because we cannot. I think there’s good evidence for that.
> The atom bomb, for example, forced a crisis. We had an extraordinary power and we didn’t really have the moral probity or sophistication to deal with it. We still do not. And that’s not making a judgment about whether our actions were right or wrong; it’s just that I think thinking reasonably about how to deploy power on that scale is beyond us.
> Human beings are hardware that’s about 100,000 years old, but we run string theory, Lie algebra. We’re running 21st-century software! How is it possible that old, antiquated hardware can continue to run ever newer and more complex cultural software?
Interestingly enough, the "mental software" used to handle the crisis of the atom bomb came from mathematics (von Neumann's game theory, in particular the idea of mutually assured destruction) rather than from moral philosophy or anything else we'd label "humanities". I'd probably argue that what von Neumann was doing with MAD was "humanities", but I don't have a good definition of the word.
I think the point is that when AI takes over control of the world, it does not have to be through a technological singularity, with Roko's Basilisk and all the drama. It could just as well be through generations of people progressively yielding their free will to what he calls "apps". In this scenario, our choices, over the course of a century or so, become ultimately non-existent.
> In this scenario, our choices, over the course of a century or so, become ultimately non-existent.
You need to explain how choosing to let apps make certain types of decisions removes all choice whatsoever. As it stands, you make it sound like a carpenter who is using a nail gun instead of a hammer has stopped driving nails.
Suppose the carpenter pulls out an autonomous house-building robot and tells it to build him a house. Is he still driving nails?
But to your main point, while we may be offloading only trivial decisions to apps today, the better they become at making these decisions, the more natural it will be to trust them for more significant ones. As the original article mentions, it's not much of a stretch to imagine an app that looks at your demographics and preferences and tells you who to vote for. And from there, why not apps that choose where to live, what career to pursue, or who to marry? Some day, it may even seem foolish not to defer to apps for important decisions. After all, how can one fallible, emotional person ever hope to make a better decision than a datacenter full of machines that can coolly consider all of the parameters and potential outcomes?
At that point, floating through a blissfully optimized life, one might say that yes, the apps are deciding everything for me, but they're doing so only in accordance with my preferences and values. I'm still in charge; I'm still exercising free will. But in the absence of making decisions oneself, where exactly did those preferences and values come from?
Yes, you are right: it is much simpler when you have just a single number to "optimize" across a population. And the correlations are indeed strong. But if you want to explain more of the variance, you need to reach for better tools.
We are all familiar with people's various simplified models of the world. They tend to grate on hackers, because they work well enough not to be automatically rejected by their users, yet hackers know, and sometimes can even prove, that the models are ultimately wrong. The same thing happens with IQ. We all know it is mostly bullshit, but the truth is that it does work as a rough predictor of performance. It explains some of the variance, not all of it.
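To put a rough number on "some of the variance" (my own arithmetic, not a figure from the thread): for a linear predictor, the share of variance explained is the square of the correlation, so even a fairly strong correlation leaves most of the variance unexplained:

$$ R^2 = r^2, \qquad r = 0.5 \;\Rightarrow\; R^2 = 0.25 $$

That is, a predictor correlating at 0.5 accounts for only a quarter of the variance; the remaining three quarters are exactly where the "better tools" come in.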
They used to be somewhat legitimately used for spacing and sizing in table-based HTML layouts (through the mid-2000s or so). That's rarely done today -- layout lives in CSS, and gifs are now generally used for tracking.
I'm from Eastern Europe, so the problem as described couldn't really apply to me; nobody here flashes their money around like a clown. But I remember the feeling of being excluded as a student who didn't come from a top-notch high school. Most of the other people had nice groups of friends from day zero. But guess what: I graduated with a really good diploma, easily landed a job at a major tech company, and most of those people are now my friends or colleagues.
What matters in the end is your results, and your results only.
To be precise, Alice quits programming because modern languages lack a proper concurrency specification, and thus lack actual "Math" in this area. I can sympathize with that.
The concurrency behavior of Java/C++/etc. is specified; it's just that the specification doesn't match the model given in the article. It follows a different mathematical model (happens-before rather than sequential consistency), which doesn't mean it "lacks actual math".
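To make that concrete, here's a minimal sketch (my own illustration, not from the article; class and variable names are made up) of the classic store-buffering litmus test, where the Java Memory Model deliberately permits an outcome that sequential consistency forbids:

```java
// StoreBuffering.java -- the classic "store buffering" litmus test.
public class StoreBuffering {
    static int x = 0, y = 0;   // plain (non-volatile) shared fields
    static int r1, r2;         // results observed by each thread

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Under a sequentially consistent model, at least one write must be
        // visible to the other thread, so (r1, r2) = (0, 0) is impossible.
        // The Java Memory Model (JLS ch. 17) instead defines legal outcomes
        // via the happens-before relation, and (0, 0) is explicitly allowed
        // here. Declaring x and y volatile rules it out again.
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}
```

The outcome set is weaker than sequential consistency, but it is precisely defined; that's still math, just a different model.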
Fabrice Bellard has also done research on computing the digits of pi. As a result, in 2009 he held the record for the longest expansion of pi ever calculated: roughly 2.7 trillion digits, notably computed on a single desktop machine.
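His best-known contribution here is a 1997 BBP-type formula (reproduced from memory; his website has the authoritative statement), which computes binary digits of pi roughly 40% faster than the original Bailey–Borwein–Plouffe formula:

$$ \pi = \frac{1}{2^6} \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{10n}} \left( -\frac{2^5}{4n+1} - \frac{1}{4n+3} + \frac{2^8}{10n+1} - \frac{2^6}{10n+3} - \frac{2^2}{10n+5} - \frac{2^2}{10n+7} + \frac{1}{10n+9} \right) $$

(The 2009 record itself, if I recall correctly, was computed with the Chudnovsky series; BBP-type formulas are for extracting individual binary digits without computing all the preceding ones.)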
Ebola is a very "clever" virus, with its interferon-blocking properties. But still, out of the infections we can't reliably vaccinate against, influenza has historically been the single most deadly, and sexually transmitted diseases are the real issue in the developed world. Why don't we encourage people to get flu shots and use condoms? Well, we do. But that's boring.
So I guess Ebola is big news because it is something new on the panic scene: sweating blood sounds crazy dangerous, and the fears have been amplified by the outbreak happening in a place that "has been forgotten by God", devastated by wars fought by children. As to why there is nothing to be afraid of, take a look at the CDC's data.