
That's still missing the point: from the fact that your model's derivation is intelligible, it doesn't follow that websites will know you're using the model, or that they will be able to produce consistent, desirable behavior both for you and for everyone else whose browser behaves differently.

Sites will break without warning for users until some engineer discovers that some ML model told the browser to set cookies only for a particular set of sites determined by IP, domain, "trustworthiness", and a few other factors engineers don't want to think about when designing the app.
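
To make the failure mode concrete, here is a minimal sketch of the kind of opaque policy being described. Everything here is hypothetical (SiteFeatures, shouldSetCookie, the 0.5 threshold are illustrative names, not any real browser API); the point is only that the decision depends on inputs and weights the site never sees:

    // Hypothetical: a browser consults a learned scoring model
    // before persisting a cookie.
    interface SiteFeatures {
      ip: string;
      domain: string;
      trustScore: number; // output of some upstream "trustworthiness" model
    }

    function shouldSetCookie(
      features: SiteFeatures,
      model: (f: SiteFeatures) => number,
    ): boolean {
      // The threshold and the model's weighting of ip/domain/trust are
      // invisible to the site: two users with different model versions
      // get different cookie behavior on the same page.
      return model(features) > 0.5;
    }

From the site engineer's side, a login that silently fails for some fraction of users is indistinguishable from a bug in their own code.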

If you're saying that you can publish the model openly with the expectation that sites will know you're using it, that's fine, and a valid approach. But it has nothing to do with ML per se: it's just another attempt at rearchitecting the browser/server interface, with all of that approach's associated issues, and it continues the same arms race with sites that want to de-anonymize you.

In short: there's a tradeoff between "hiding information about yourself" and "providing a stable set of expectations for websites to build on". ML can favor one of those objectives, but it can't eliminate the tradeoff and somehow get the best of both worlds. The only way to get the best of both worlds is to rearchitect the API by which browser clients talk to webservers, so that it's easy to separate out what you do want/need to tell the server from what you don't. A sketch of what that separation could look like follows below.
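
As a rough illustration of that rearchitected interface, here is one hypothetical shape it could take (ClientDisclosure, buildRequest, and the header names are invented for this sketch, not a proposal from the thread): the client declares exactly which fields it discloses, so the server builds against an explicit contract instead of inferring state from whatever the browser happens to leak:

    // Hypothetical: the client's disclosure policy is an explicit,
    // typed contract rather than emergent model behavior.
    interface ClientDisclosure {
      shared: { locale: string; viewport: [number, number] }; // opted in
      withheld: Array<"ip" | "fingerprint" | "history">;      // explicitly off the table
    }

    function buildRequest(d: ClientDisclosure): Record<string, string> {
      // Only fields listed in `shared` ever reach the wire: the server
      // can rely on their presence, and the user can rely on everything
      // in `withheld` being absent.
      return {
        "x-locale": d.shared.locale,
        "x-viewport": d.shared.viewport.join("x"),
      };
    }

Under a contract like this, both sides get stable expectations: the server knows what it will receive, and the user knows what it won't.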




That's definitely a valid consideration. I was focusing more on refuting OP's implication that machine learning is a black box because only a select few people can understand it, given its complexity. I should have made that clearer in my original comment to avoid confusion.



