I'm going to do a bad job of explaining what I think the value of this quote is, but here's what I thought upon reading it: he's not asking about the technical difference between consulting a phone and having the phone direct you. In a deeper sense (a false depth, perhaps, according to some), he's asking where the line is crossed: on one side of it, computers are simply tools used by humans; beyond it, computers, and technology in general, must be considered a separate entity or life-form of some kind.
I think the "giving directions" example was sort of a poor choice, because people will think "yeah, a GPS 'commands' you, but it isn't actually 'commanding', it's simply information delivered in a command-like format so that you don't have to interact with it while driving". I think this was simply supposed to be an example of a more general philosophical question of at what point do we program-in such layers of thought, abstraction, problem-solving, etc. into technology that the resulting intelligence cannot be considered anything but another form of life (you could argue that as it exists right now, it is a very primitive version of that).
Even so, the GPS example isn't a poor choice. What if there are places you decide to visit or not visit based on the availability of GPS? Take exploring a new city in your car: if I couldn't use my iPhone to search for interesting locations (where the definition of "interesting" is not up to me) and get directions to them immediately, in many cases I probably wouldn't go through the trouble of exploring at all. So even with this example, it's already not clear who is "commanding" whom.
This touches on an ethical problem that is also briefly addressed in the book "Programming by Demonstration," which I think states the problem best:
"In any case, I think the most important issues regarding end-user programming and its subbranch of programming by example are pedagogical and ethical. There is no question that a human with a goal wants to have the sub-goals ready made and at hand. One shouldn't have to learn about Carnot cycles of internal combustion engines--or even just hand cranking it--in order to drive an automobile. And agents that can be told goals and can go off and solve them have been valuable and sought after for as long as humanity has endured.
"On the other hand, it takes a very special value system for children and adults to be able to exist as learning creatures--indeed as humans at all--in the presence of an environment that does all for them. 20th century humans that don't understand the hows and whys of their technologies are not in a position to make judgments and shape futures. At some point it is necessary to understand something about thermodynamics, and waiting until then to try to learn it doesn't work. Nature's rule is "use it or lose it"--most social systems that have incorporated intelligent slaves or amanuenses have "lost it". In fact most never gained it to lose. In a technopoly in which we can make just about anything we desire, and almost everything we do can be replaced with vicarious experience, we have to decide to do the activities that make us into actualized humans. We have to decide to exercise, to not eat too much fat and sugar, to learn, to read, to explore, to experiment, to make, to love, to think. In short, to exist.
"Difficulties are annoying and we like to remove them. But we have to be careful to only remove the gratuitous ones. As for the others--those whose surmounting makes us grow stronger in mind and body--we have to decide to leave those in and face them."