>Maybe there should be a mode you can turn on where Siri will watch what you do and when there are things you could do more quickly with Siri, it could tell you.
You say that as if it's a completely non-trivial thing to implement.
I think you mean 'as if it's a completely trivial thing to implement', since non-trivial means hard and complicated.
But, assuming that's what you meant, how does that compare to, say, speech recognition and understanding? Apple, Google, and Amazon are up to their eyebrows in machine learning and AI. They're working to solve very, very hard problems. But the next step of that needs to be to make it so their hard work is discoverable by users organically.
Sorry, you're right. I did mean "trivial" rather than "non-trivial" but barfed out the second anyway. :-P
I think that problem is much harder than plain machine learning, because you have to gauge what the user is doing, or what tasks they're performing, well enough for the AI to even tell where it can insert itself. I don't know of any product or service today that can "watch" a user's behavior and suggest places where an AI could step in, and I suspect that's because doing so is very, very difficult.
It's one thing to detect that you leave work at 5pm every day and usually drive home. It's another to say, "I noticed that you set a cooking timer every day around 6pm; do you want me to set that for you automatically?" You're not setting a timer just to set a timer; you're setting it because the recipe you're using requires it, and different recipes need different timers. The AI doesn't know the intent behind the action, just the action itself.
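To make that point concrete, here's a minimal sketch (in Python, with an entirely hypothetical action log) of the naive frequency-based detector being described. It can spot *that* a timer gets set around 6pm every day, but because the durations disagree across recipes, it has no idea *what* to automate:

```python
from collections import defaultdict

def recurring_actions(log, min_count=3):
    """Group logged actions by (name, hour-of-day) and flag any
    combination that recurs at least min_count times."""
    buckets = defaultdict(list)
    for action, hour, params in log:
        buckets[(action, hour)].append(params)
    return {key: params for key, params in buckets.items()
            if len(params) >= min_count}

# Hypothetical log: the user sets a cooking timer around 18:00 each day,
# but the duration depends on that day's recipe.
log = [
    ("set_timer", 18, {"minutes": 25}),
    ("set_timer", 18, {"minutes": 40}),
    ("set_timer", 18, {"minutes": 12}),
]

patterns = recurring_actions(log)

# The detector can see THAT a timer is set daily around 18:00...
assert ("set_timer", 18) in patterns
# ...but the parameters disagree, so it cannot infer WHAT to automate.
durations = {p["minutes"] for p in patterns[("set_timer", 18)]}
assert len(durations) > 1
```

The pattern-mining half is easy; recovering the intent behind the varying parameters is the hard part.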
That might be the case, but it's definitely not watching what you're doing, as you suggested. There may be prompts triggered by repeated actions (think of hitting the Shift key 5 times in Windows and getting the Sticky Keys prompt), but nothing in there is watching you and suggesting where you could make improvements.
Again, what you're asking for isn't a trivial implementation.
What I'm asking for is pretty much exactly what JetBrains have done in IntelliJ: when the system sees me do something manually that could be done with voice, it tells me. For example, if I enter an address into Maps and then start navigation, the phone could suggest that next time I use Siri with the phrase "navigate to 123 Main Street".
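The IntelliJ-style version is much more tractable, because the mapping is authored by hand rather than inferred. A rough sketch of the idea, with a hypothetical table of in-app action sequences and the voice phrases that would have accomplished the same thing:

```python
# Hypothetical mapping from observed in-app action sequences to the
# Siri phrase that would have accomplished the same thing.
VOICE_EQUIVALENTS = {
    ("maps.enter_address", "maps.start_navigation"):
        "navigate to {address}",
    ("clock.open", "clock.set_timer"):
        "set a timer for {minutes} minutes",
}

def suggest_voice_command(recent_actions, context):
    """If the user's last few manual actions match a known pattern,
    return the voice phrase they could have used instead."""
    for pattern, template in VOICE_EQUIVALENTS.items():
        n = len(pattern)
        if tuple(name for name, _ in recent_actions[-n:]) == pattern:
            return "Next time, try: Hey Siri, " + template.format(**context)
    return None

actions = [("maps.enter_address", {}), ("maps.start_navigation", {})]
tip = suggest_voice_command(actions, {"address": "123 Main Street"})
assert tip == "Next time, try: Hey Siri, navigate to 123 Main Street"
```

This is the IntelliJ approach in miniature: no intent inference at all, just a curated lookup from "thing the user did manually" to "faster way to do it," which is exactly why it's shippable.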
At the very least, it would be nice if I could say "Hey Siri, what voice commands can I use with app X?"
You say that as if it's a completely non-trivial thing to implement.