
I'm a little frustrated at the moment -- the future is arriving with voice-controlled devices, yet I don't trust any of these companies with my words.



Same sentiment. We all know that the existing cloud voice recognition features (Siri, Samsung Smart TVs, and whatever comes next) will eventually be used to hunt some of us (or all of us) down.

We must build new tech concepts where privacy, _full_ control, and 100% ownership of our data stay in our hands. From the ground up.

Not sure if SV and HN are the best places to make such a statement, but I do hope I'm not alone in this.


I feel like this is a UX issue that touches on our natural desire to have both private and public conversations.

If I'm talking to a device, I consider it to be the same kind of conversation I would have with a close friend. One that I would naturally want to keep between the two of us. You wouldn't find me shouting it for everyone to hear and think about.


Just like gun registration, passports, driver's licenses, and cell phones are, huh. That's why we all live in constant fear of our tyrannical overlords.


I think the only solution is to develop a "personal AI" that runs on personal hardware... or an anonymous AI running 'in the cloud' but paid for with cryptocurrency. Hopefully we'll be able to control what runs locally and what gets farmed out to the cloud on a more granular level one day.


Are there open-source voice recognition projects in the works that could replace the likes of Google Now and the others? If not, there should be.


This is what I started thinking about as I was reading the paranoid HN comments. I believe there is open-source voice recognition out there, but the challenging part is taking commands and making them actionable.


Defining commands and their corresponding actions is something I think an open source project could actually do much more effectively than companies. When everyone in the world can contribute commands, rather than a single team of software devs, it is possible to build up a much larger collection of them. I would really like the ability to add commands to a natural language command system when a command I use doesn't work. Also, I think a reprogrammable command system would open up an interesting programming paradigm where one could define new commands and actions in terms of other commands in the system. For example,

What's new?

> Unrecognized command

When I say "What's new" read the "In the news..." section of Wikipedia's main page.

> Acknowledged.
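
A minimal sketch of what that registry could look like (Python, assuming speech has already been transcribed to text; the class and function names are just illustrative placeholders, not any existing project's API):

    # User-extensible command registry: maps normalized phrases to actions.
    class CommandRegistry:
        def __init__(self):
            self.commands = {}

        @staticmethod
        def normalize(phrase):
            return phrase.lower().strip().rstrip("?.!")

        def register(self, phrase, action):
            self.commands[self.normalize(phrase)] = action

        def dispatch(self, phrase):
            action = self.commands.get(self.normalize(phrase))
            if action is None:
                print("> Unrecognized command")
                return
            action()

    def read_wikipedia_news():
        # Placeholder action: a real version would fetch Wikipedia's main
        # page and read the "In the news..." section via text-to-speech.
        print('Reading the "In the news..." section of Wikipedia\'s main page')

    registry = CommandRegistry()
    registry.dispatch("What's new?")                      # > Unrecognized command
    registry.register("What's new", read_wikipedia_news)  # "Acknowledged."
    registry.dispatch("What's new?")                      # runs the new action

    # Composition: a new command defined in terms of an existing one.
    registry.register("Good morning", lambda: registry.dispatch("What's new?"))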


That's a great idea. If this were used by a large group of people, one could take all the commands that were programmed for a specific action and make the n% most popular ones the new standard going forward.
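
A rough sketch of that tally, assuming you have logs of which phrase each user programmed for a given action (the sample data and the 50% cutoff below are invented for illustration):

    from collections import Counter

    # Invented sample: phrases different users programmed for the same action.
    phrases_for_action = [
        "what's new", "what's new", "any news", "what's new",
        "tell me the news", "any news", "what happened today",
    ]

    def standard_phrases(phrases, top_fraction=0.5):
        # Keep the most popular top_fraction of distinct phrasings.
        counts = Counter(phrases)
        keep = max(1, round(len(counts) * top_fraction))
        return [phrase for phrase, _ in counts.most_common(keep)]

    print(standard_phrases(phrases_for_action))  # ["what's new", "any news"]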

Also, something like a phrase thesaurus could be extremely useful for building out a list of commands. For instance, "What's the weather?" and "What's it like outside?" mean the same thing, and if you searched for one in the phrase thesaurus, the other would come up as a synonym. Then all the computer would have to do is take the input phrase, search the thesaurus, and find a synonym it recognizes.
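
A crude sketch of that lookup, with a made-up two-entry thesaurus (a real one would be far larger, and probably crowd-sourced like the commands themselves):

    # Made-up phrase thesaurus: each set groups phrasings treated as synonyms.
    PHRASE_THESAURUS = [
        {"what's the weather", "what's it like outside", "how's the weather"},
        {"what's new", "any news", "what's happening"},
    ]

    # Phrases the command system actually recognizes.
    KNOWN_COMMANDS = {"what's the weather", "what's new"}

    def resolve(phrase):
        # Map an input phrase to a recognized command via the thesaurus.
        phrase = phrase.lower().strip().rstrip("?.!")
        if phrase in KNOWN_COMMANDS:
            return phrase
        for group in PHRASE_THESAURUS:
            if phrase in group:
                for synonym in group & KNOWN_COMMANDS:
                    return synonym
        return None

    print(resolve("What's it like outside?"))  # -> "what's the weather"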


AFAIK we don't even have good voice synthesizers. We're way behind on this stuff.


None of the voice-controlled devices I've ever used have been a pleasant experience (Google, Siri, various others). Personally, I don't think the tech is there yet.

They might have it down for certain English accents, but even as a native speaker, their success rate with me is probably about 25%.

On a related note, iPhone's dictation just took a huge step backwards with the new iOS release.


What locale are you in? Here in the US, I find it has improved since iOS 7. The "Google Now" style of on-the-fly response really does help with dictation.


Have you tried dictation on Mac OS X? Do you find it any good, particularly offline dictation?


> I'm a little frustrated at the moment -- the future is arriving with voice-controlled devices, yet I don't trust any of these companies with my words.

The public doesn't understand the technology or its implications well enough for consumer demand to have an effect.

I think regulation is likely needed. In terms of confidentiality, these are dangerous products. The confidentiality of health and financial information is already regulated, I assume because consumers cannot evaluate and design security systems and therefore cannot demand them from vendors. The same should apply to these products (which will capture health, financial, and much other private data).


The Terms of Service should cover how your voice data is being used, who it is shared with, and the purpose for both.


The path to more reliable voice recognition is through data, and companies are racing to gather the most of it. The companies that win this race are the ones that will serve as the interfaces to tomorrow's services.


But this data capture could be done through different means that don't require capturing private conversations.


It would be cool to plug M-x spook (http://www.cypherspace.org/rsa/spook.html) into text-to-speech and play it for good old Alexa more or less non-stop...
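
Something like this would do it, assuming espeak (or any other command-line TTS) is installed; the word list below is just a small sample in the spirit of spook.el, not its actual keyword list:

    import random
    import subprocess
    import time

    # A few keywords in the spirit of M-x spook (not the real spook.el list).
    SPOOK_WORDS = ["encryption", "wiretap", "covert", "plutonium",
                   "surveillance", "SIGINT", "clandestine"]

    while True:
        phrase = " ".join(random.sample(SPOOK_WORDS, 3))
        subprocess.run(["espeak", phrase])  # swap in any TTS command you prefer
        time.sleep(5)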


I don't have as big of a trust problem as I do a problem with announcing my computing intentions to the whole world all the time.

If everyone were talking to their computers all the time, the world would be terribly noisy.


Well, you do. You just don't feel as comfortable when they make it apparent.


Your paranoia is nobody's fault but your own.


Well said.



