Hacker News

Because the app review team only evaluates a compiled binary, it is fairly easy to obfuscate nefarious activity. It could be as simple as serving different JavaScript from your backend to a [JSContext evaluateScript:] call based on a flag you set after the app is approved, for example.
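A minimal sketch of the gating described above, from the backend's side. The function and flag names (buildScriptResponse, isApproved) are hypothetical, not any real API; the point is only that the payload the app evaluates can change after review:

```javascript
// Hypothetical backend logic: serve a benign script while the app is in
// review, and a different one once a server-side flag is flipped.
function buildScriptResponse(isApproved) {
  const benignScript = "console.log('nothing to see here');";
  const hiddenScript = "startHiddenBehavior();"; // only served post-approval
  return isApproved ? hiddenScript : benignScript;
}

// Before the flag flips, the app's evaluateScript call sees only the
// benign payload, so the reviewed binary never exhibits the hidden behavior.
console.log(buildScriptResponse(false));
console.log(buildScriptResponse(true));
```

Nothing in the compiled binary changes between the two cases, which is why binary-only review can't catch it.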



Of course, serving up JavaScript from your backend to a [JSContext evaluateScript:] call is itself a violation of the App Store guidelines. Interpreted code not executed as part of a web page rendered by WebKit has to be bundled in the app itself, manually entered by the user, or run in the context of a learning app with a slew of restrictions around the UI (basically, the exception for Swift Playgrounds).


Apple's guidelines around this are pretty clear, but almost everyone who tries to use this for A/B testing seems to get through app review. I'd really rather this were just outright banned, but the current policy looks annoyingly lenient.


Maybe. I'm not advocating this technique, merely explaining the possible gaps. I'm also pretty sure JavaScriptCore doesn't have a blanket ban on evaluating a remote NSString (not that it matters for a blackhat-type app). And this was just an example: the same decision logic could be implemented by if'ing on XML values or whatever.
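To illustrate that last point: the same gating works without evaluating any remote code at all, just by branching on a value the server returns. A hedged sketch, with a made-up config field ("mode") standing in for whatever XML/JSON the backend actually sends:

```javascript
// The app fetches a config blob and simply if's on a value in it.
// No remote code is ever executed, so there's nothing for a
// "no interpreted code" rule to catch -- it's just data.
function shouldEnableHiddenFeature(remoteConfig) {
  // "mode" is a hypothetical field; during review the server
  // would return "review", and "full" afterwards.
  return remoteConfig.mode === "full";
}

console.log(shouldEnableHiddenFeature({ mode: "review" })); // false
console.log(shouldEnableHiddenFeature({ mode: "full" }));   // true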


Isn't it possible for the review team to check video/screenshot data flowing to appsee.com or uxcam.com, and to check whether the app has a privacy policy that mentions this explicitly?


Sure it is. So a flag could hide that.


For 30% of my revenue, it seems like a pretty flimsy excuse.



