> In this scenario, our choices, over the course of a century or so, become ultimately non-existent.
You need to explain how choosing to let apps make certain types of decisions removes all choices whatsoever. As it stands, you make it sound like a carpenter who uses a nail gun instead of a hammer has stopped driving nails.
Suppose the carpenter pulls out an autonomous house-building robot and tells it to build him a house. Is he still driving nails?
But to your main point, while we may be offloading only trivial decisions to apps today, the better they become at making these decisions, the more natural it will be to trust them for more significant ones. As the original article mentions, it's not much of a stretch to imagine an app that looks at your demographics and preferences and tells you who to vote for. And from there, why not apps that choose where to live, what career to pursue, or who to marry? Some day, it may even seem foolish not to defer to apps for important decisions. After all, how can one fallible, emotional person ever hope to make a better decision than a datacenter full of machines that can coolly consider all of the parameters and potential outcomes?
At that point, floating through a blissfully optimized life, one might say: yes, the apps are deciding everything for me, but they're doing so only in accordance with my preferences and values. I'm still in charge; I'm still exercising free will. But if one never makes decisions oneself, where exactly did those preferences and values come from?