Their new slate of AI products will have screen-reading capabilities. There are legitimate concerns for user privacy here. Their agreement with OpenAI is likely very strict.
Given that all three of those things are documented and present threats to Windows and macOS (as well as iOS and Android), I guess so. Those users seem to "survive" fine.
I've never heard of ransomware, botnets, or keyloggers on iPhones. I know less about Android, and how much that depends on whether you only install apps from within the Play Store.
But no, I wouldn't call it "surviving" if you get hit by ransomware and have to pay hundreds or thousands of dollars to restore your data, or lose your data outright.
OK great, but getting hacked by ransomware still isn't what I'd call "surviving".
It's a matter of tradeoffs. In exchange for making more security decisions for yourself, you wind up with a much higher risk of losing your data and privacy. So it's important to acknowledge what you're giving up as well.
And Apple isn't exactly a "random tech company". It's currently literally the third largest company in the world [1]. That's about as opposite of being a "random" company as a company can possibly be.
In contrast, it's precisely the software that people do install from seemingly "random tech companies" that can conceal secret keyloggers, etc.
They dealt with this for third-party keyboards by barring them from connecting to the internet; they could do something similar for replacement on-device models. Even if that meant the "ask ChatGPT" option wouldn't be available in the EU.
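Roughly the mechanism today, as a sketch (RequestsOpenAccess and hasFullAccess are the real keyboard-extension pieces; the comments are just my reading of how it behaves):

    // Swift, iOS custom keyboard extension.
    // With RequestsOpenAccess = NO in the extension's Info.plist, the keyboard
    // runs sandboxed with no network access; the user has to flip
    // "Allow Full Access" in Settings before anything can leave the device.
    import UIKit

    class KeyboardViewController: UIInputViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            if hasFullAccess {
                // Only in this case could the keyboard even attempt network requests.
            } else {
                // Default: fully offline, keystrokes stay on-device.
            }
        }
    }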
I think they're more motivated by trying to scaremonger about the DMA, in much the same way they and others did about GDPR.
There are legitimate concerns for user privacy with any third-party integration, including OpenAI, which is why Apple asks you before sending a request to ChatGPT. They solved this problem already, and I guess they want to try to hold EU users over a barrel by claiming interoperability is impossible or something.
Seriously, what kind of fearmongering can Apple push that doesn't make you side-eye the OpenAI integration even more?
> There are legitimate concerns for user privacy with any third-party integration, including OpenAI, which is why Apple asks you before sending a request to ChatGPT
Right - which is why the ChatGPT integration is such a small part of the total AI/LLM capabilities they announced, and they've loudly and publicly announced that it will support other 3rd parties in the future.
Buuuuut - what happens if/when the EU decides the DMA requires Apple to open up their Private Cloud and on-device models to 3rd party replacement?
Some of them might be worse for privacy, but if they offer better (or cheaper, which is a quality in its own right) features, that's up to people.
If this were "the olden days", it would be a legitimate concern. But mobile OSs have long forced apps to request permission to use various features explicitly and to fall back gracefully when some aren't granted. So outside of security concerns that exist either way, they really can only claim to care about misinformed users.
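The usual pattern, as a quick sketch (AVCaptureDevice.requestAccess is the real call; the feature names and fallback behavior are just illustrative):

    import AVFoundation

    // Ask for the microphone only when the feature actually needs it,
    // and degrade gracefully if the user says no.
    func enableVoiceFeature(onReady: @escaping (Bool) -> Void) {
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            // granted == false: hide or disable the voice feature,
            // keep the rest of the app working exactly as before.
            onReady(granted)
        }
    }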
It's literally a spyware API in the hands of a 3rd party - the options here for misinformed users are critically dangerous.
Recall was seen as first-party spyware; Apple has more privacy trust, and their implementation is very secure. Giving Bob's AI screen reader the same total file/device access is very scary.
Then maybe don't build a spyware API? It's the same problem as screen recording; you can design it to respect user privacy and prompt the user before it turns on. You don't have to make it a matter of implicit trust if you design it well, and we should expect these sorts of private-data APIs to be designed well and to respect our agency.
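That consent-first design already exists on macOS for screen recording; a sketch (CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess are the real CoreGraphics calls, the wrapper is mine):

    import CoreGraphics

    // Check consent before ever capturing the screen; if it hasn't been
    // granted, the request call shows the system prompt instead of
    // silently recording.
    func screenAccessAllowed() -> Bool {
        if CGPreflightScreenCaptureAccess() {
            return true
        }
        // Returns the current status and asks the OS to prompt the user.
        return CGRequestScreenCaptureAccess()
    }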
If you don't look at Recall and Apple Intelligence with the same suspicion, you're stupid. Currently they are literally the same product, rebranded and marketed to different users. Two data pipes leading to one company's pool. But Apple's pipe is safer because... they have more "privacy trust" and Twitter says their whitepaper looks secure. You're being robbed in broad daylight and defending the robber; the implications of both features are disastrous.
> and they've loudly and publicly announced that it will support other 3rd parties in the future.
Then I don't see why people are preemptively defending the opposite statement.
> Buuuuut - what happens if/when the EU decides the DMA requires Apple to open up their Private Cloud and on-device models to 3rd party replacement?
Literally nothing bad? If Apple implements all of these third-party models the same way as their first-party ones, I think that would be awesome. People could replace Siri with Mistral and start getting useful results back instead of the same canned responses.
They have both. In the demos they were sending select complex tasks out to ChatGPT, with a large UI alert confirming that the data was leaving Apple's walled garden.