It makes me wonder why we haven't seen (or at least heard about the development of) an operating system with a purely voice-based UI built on an RNN... especially after Her came out.
I understand it's hard, but it also sounds like a fun project for people with the relevant know-how.
Andrew Ng gave a keynote at GTC in which he talked about bringing Baidu's Deep Speech technology to phones (for accessibility). You betcha they're working on it!