I agree that there are important differences between human and computer languages and that we should be mindful of that gap, especially when we try to automate cognitive tasks like interpreting contracts (the DAO!), assessing beauty, deciding who lives and who dies, etc. However, I don't think the differences between human and computer languages justify many claims about AI, how AI will develop, or what AI will be capable of. After all, the programming language we use to build an intelligence might have very little to do with its inner workings or its internal representations of concepts.
I very strongly disagree with the following quotation: "If we had good enough theories of human semantics we could program such theories into the computer and the computer would then understand like a brain after all. But we don't, in most areas, so we mostly can't program computers to emulate humans." I don't think we should try to automate tasks like understanding language by writing computer programs that work the way we imagine ourselves working. I think a more fruitful approach is to design general /learning/ algorithms that allow the computer to figure out what language is and how to use it by observing how it is used in the wild (i.e., by emulating other language users, maybe with some trial and error).
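To make the contrast concrete, here is a minimal sketch of the "learn from usage" idea, in the spirit of the distributional hypothesis ("you shall know a word by the company it keeps"). It is purely illustrative, not anyone's proposed system: a toy corpus, co-occurrence counts, and cosine similarity stand in for a real learning algorithm. Nothing about the meanings of the words is programmed in; similarity of usage is all the program has to go on.

```python
# Toy sketch: learn which words are used similarly by counting the words
# that appear near them, rather than hand-coding a theory of their meaning.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog barked at the mailman",
    "the cat meowed at the dog",
    "the mouse hid from the cat",
]

WINDOW = 2  # how many neighboring words count as "context"
cooccur = defaultdict(Counter)

for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if j != i:
                cooccur[w][words[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse context-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out more similar than "cat" and "mailman" because they
# occur in similar contexts -- no semantics were written into the program.
print(cosine(cooccur["cat"], cooccur["dog"]))      # higher: shared usage contexts
print(cosine(cooccur["cat"], cooccur["mailman"]))  # lower: few shared contexts
```

Obviously a real system would use far more data and a far richer learning algorithm, but the division of labor is the point: the programmer supplies a general procedure for learning from observed usage, and the "theory of semantics" is whatever the machine ends up with.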