Hacker News

This is the same message that was released to the press yesterday.

I'm surprised Google itself has not said anything, as they are also at fault for not showing the permissions workflow in the first place.




Exactly, that's what I'm more worried about - Google needs to present clear information about what access is provided BEFORE I accept the account connection. It's not necessarily Niantic's fault for asking for too much, it's Google's for not at least making me aware.


It's entirely plausible that Niantic didn't realize they were asking for too much specifically because Google didn't show the permissions being asked for. So any time they tested their own app, they would have just seen what everybody else saw: the app asked for access to the Google account with no mention of which permissions were being requested.


Despite that, it is vexing that they wouldn't have come across this issue during testing. Maybe it's a production-only issue. I'd be very interested in a technical explanation from either Google or Niantic. They surely had to test the iOS app and check their Google permissions page; otherwise it would be really sloppy testing.


The thing is... as long as it's a frame inside the app, I have zero way of knowing whether I'm actually looking at Google's login page, or if Niantic is reading the traffic/JavaScript. At least when I get bounced out to Safari I only have to trust Apple, which I already implicitly do.


If you want to be paranoid, you could run a MITM proxy from your computer between your phone and Niantic's servers.


And see encrypted TLS traffic? How would that help anyone?


My comment was downvoted for some reason... perhaps I'm mistaken (I haven't tried it myself) but I'm fairly confident Charles can do this. I've heard of it being used to reverse engineer APIs from mobile apps that use SSL - Robinhood, for example.

https://www.charlesproxy.com/

> Charles can be used as a man-in-the-middle HTTPS proxy, enabling you to view in plain text the communication between web browser and SSL web server.

From https://www.charlesproxy.com/documentation/proxying/ssl-prox...


A MITM proxy typically means a decrypting proxy, so you can see all the traffic, regardless of whether it's wrapped in TLS.
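A toy model may make this concrete (a simplified Python sketch, no real crypto involved): a TLS client accepts any certificate that chains to a root in its trust store, so once you install the proxy's root CA on the device, certificates the proxy mints on the fly pass validation and the proxy can decrypt everything.

```python
# Toy model of the trust decision behind a decrypting MITM proxy.
# The issuer names here are hypothetical stand-ins; real validation
# checks signatures along a certificate chain, not just a set lookup.

TRUSTED_ROOTS = {"DigiCert"}  # stand-in for the device's OS trust store

def accepts(cert_issuer: str) -> bool:
    """Client accepts a certificate iff it chains to a trusted root."""
    return cert_issuer in TRUSTED_ROOTS

assert accepts("DigiCert")         # genuine server certificate: accepted
assert not accepts("Charles CA")   # proxy-minted certificate: rejected...

# ...until the user installs the proxy's root CA on the device,
# after which the proxy can impersonate any server and decrypt traffic.
TRUSTED_ROOTS.add("Charles CA")
assert accepts("Charles CA")
```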


I thought that Google and other app writers weren't keen on blindly accepting generated certificates, and used certificate pinning. Do they really accept anyone's google.com certificates?


I haven't tried it with a Google API specifically but here's their description:

> Charles does this by becoming a man-in-the-middle. Instead of your browser seeing the server’s certificate, Charles dynamically generates a certificate for the server and signs it with its own root certificate (the Charles CA Certificate). Charles receives the server’s certificate, while your browser receives Charles’s certificate. Therefore you will see a security warning, indicating that the root authority is not trusted. If you add the Charles CA Certificate to your trusted certificates you will no longer see any warnings – see below for how to do this.

https://www.charlesproxy.com/documentation/proxying/ssl-prox...

It seems you are correct if they use pinning:

> Note that some apps implement SSL certificate pinning which means they specifically validate the root certificate. Because the app is itself verifying the root certificate it will not accept Charles's certificate and will fail the connection. If you have successfully installed the Charles root SSL certificate and can browse SSL websites using SSL Proxying in Safari, but an app fails, then SSL Pinning is probably the issue.

https://www.charlesproxy.com/documentation/faqs/ssl-connecti...
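Pinning itself is simple to sketch (a minimal Python illustration, with hypothetical byte strings standing in for DER-encoded certificates): the app hard-codes the digest of the certificate it expects and rejects anything else, which is why a dynamically generated Charles certificate fails even when its root CA is trusted by the OS.

```python
import hashlib

# Hypothetical stand-ins for DER-encoded certificates.
REAL_CERT = b"real-server-cert-der-bytes"
CHARLES_CERT = b"charles-generated-cert-der-bytes"

# The app ships with the SHA-256 digest of the certificate it expects.
PINNED_SHA256 = hashlib.sha256(REAL_CERT).hexdigest()

def pin_check(cert_der: bytes) -> bool:
    """Accept the connection only if the presented cert matches the pin."""
    return hashlib.sha256(cert_der).hexdigest() == PINNED_SHA256

assert pin_check(REAL_CERT)         # genuine server certificate: accepted
assert not pin_check(CHARLES_CERT)  # proxy-minted certificate: rejected,
                                    # even though the OS trust store allows it
```

Real implementations usually pin the public key rather than the whole certificate, so the pin survives routine certificate renewal.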


To follow up — someone unbundled the Android APK and confirmed that it does not use certificate pinning.

https://applidium.com/en/news/unbundling_pokemon_go/


Cert pinning can be circumvented with a rooted or jailbroken device.


> Google ... are also at fault

Primarily at fault.


I went through the updated app (v1.0.1) and the OAuth flow clearly indicated what I was granting access to. I'd be very curious whether this was a fix by Google, a Google bug where requesting "full access" skips that step, or Niantic changing the way they do OAuth (still a Google issue; a method lacking confirmation shouldn't exist).
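For context, what the consent screen enumerates is driven by the `scope` parameter of the authorization request. A minimal sketch of building such a request URL (the client ID and redirect URI below are hypothetical):

```python
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Assemble a standard OAuth 2.0 authorization-code request URL.

    The space-delimited `scope` value is what the provider's consent
    screen lists for the user before they click Allow.
    """
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_auth_url(
    client_id="example-id.apps.googleusercontent.com",  # hypothetical
    redirect_uri="https://example.com/oauth/callback",  # hypothetical
    scopes=["openid", "email", "profile"],              # narrow scopes
)
```

Requesting only narrow scopes like these should produce the limited-permissions prompt; a broad "full account access" grant is exactly what a visible consent step exists to flag.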


It was some JavaScript injected into the WebView that automatically clicked Confirm and sent a message to reposition the WebView offscreen as soon as that happened.


If they did that, they don't deserve a second chance at trust. That is outright malicious, and definitely a dark pattern. Their app deserves to be deleted and not used again.

OAuth 2 has some serious holes - I have no idea whether the Google login page is served by Google, or is simply a copy of their landing page designed to phish for credentials. This needs to be fixed, as OAuth is becoming increasingly prevalent. We need some type of web of trust, like SSL EV, that attests that the OAuth login page is being served by the company I think it is.


This is why providers like FitBit require that you use APIs such as Chrome Custom Tabs or SafariViewController, where the OS presents an out-of-process limited web view that the host app doesn't have access to.


Terrible if true. Is there a source for this claim?


[citation needed]


They did that? That doesn't sound like a mistake, that sounds like a CFAA felony.


Are you sure the login flow used the official flow API, and didn't collect the login credentials itself to "helpfully" register the user?

I haven't used the app so I have no idea, just a thought.


It's the Google login flow.


Google probably doesn't want to draw that kind of attention to itself



