I spent a lot of time thinking about this during my PhD. If you think about it, analogies are at the heart of kernels and other similarity-based methods.
You can totally difference embedding vectors to represent relations; the canonical example from word2vec is basically solving an analogy. The big problem you run into when applying this more broadly is context: how much context relating the subject and object do you want, and which features encode it? So the problem of abstraction she is talking about maps onto a regularization / feature selection problem.
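To make the differencing idea concrete, here's a minimal sketch with hand-crafted toy vectors (real word2vec embeddings are learned and hundreds of dimensions wide); the `analogy` helper and the specific vectors are illustrative assumptions, not from any library:

```python
import numpy as np

# Toy embeddings, hand-crafted so a "gender" direction is roughly
# consistent across word pairs. Real word2vec vectors are learned
# from co-occurrence statistics, not written down like this.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve a : b :: c : ? by differencing embedding vectors.

    The relation is represented as the difference (b - a); adding it
    to c and taking the nearest remaining word by cosine similarity
    is the classic word2vec analogy trick.
    """
    target = emb[b] - emb[a] + emb[c]
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "king", "woman"))  # → queen
```

The context problem shows up here too: with these toy vectors every dimension contributes equally to the similarity, but in practice only some features encode the relation you care about, which is where the regularization / feature selection framing comes in.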