tmdg's comments

I feel like this gives a false sense of security. Even before this change, they could easily access and scan photos on your device. If they do any post-processing of images on device, they already do scan them.


It’s not a false sense of security, it’s a clear delimitation between what's theirs and what's mine; Debian package maintainers could also slip a scanner onto your machine, but that would mean crossing a big line on purpose and without notifying the user.


But with a Debian package you can choose not to accept the upgrade, and you can spot any funny business in the released source code.


That is technically true, but in a very practical sense everyone here using OSS absolutely is trusting a third party, because nobody is auditing every bit of code they run. For less technical people there is effectively zero difference between open and closed software.

It’s really disingenuous to suggest that open source isn’t dependent on trust; you just change who you are trusting. Even if someone else is auditing that code, you’re trusting that person instead of the repository owners.

I’ll concede that at least the possibility to audit exists, but personally I still have to trust, to a certain extent, that third parties aren’t trying to fuck me over.


> Even if the case is someone else is auditing that code, you’re trusting that person instead of the repository owners.

Suppose Debian's dev process happened at monthly in-person meetings where minutes were taken and a new snapshot of the OS (without any specific attribution) was released.

If that were the case, I'd rankly speculate that Debian devs would have misrepresented what happened in the openssl debacle. A claim would have been made that some openssl dev was present and signed off on the change. That dev would have then made a counterclaim that regular procedure wasn't followed, to which another dev would claim it was the openssl representative's responsibility to call for a review of relevant changes in the breakout session of day three before the second vote for the fourth day's schedule of changes to be finalized.

Instead, there is a very public history of events that led up to the debacle that anyone can consult. That distinction is important-- it means that once trust is in question, anyone-- including me-- can pile on and view a museum of the debacle to determine exactly how awful Debian's policy was wrt security-related changes.

There is no such museum for proprietary software, and that is a big deal.


That's certainly true, and it is a strong 'selling point,' so to speak, for open software. But openness is just one of many features people weigh when deciding what software to run, and frankly, for an average consumer it probably ranks extremely low, because in either case the software is effectively a black box: having access to the source doesn't actually make them more informed, nor do they necessarily care to be informed.

Most people don't care to follow the controversies of tech unless it becomes a tremendously big issue, but even then, as we've seen here, there are plenty of people that simply don't have the technical acumen to really do any meaningful analysis of what's being presented to them and are depending on others to form their opinion, whether that be a friend/family member or some tech pundit writing an article on a major news organization's website.

Trusting Apple presents a risk to consumers, but I'd argue that for many consumers this has been a reasonable risk to take to date. This recent announcement changes that risk factor significantly, though in the end it may still be a worthwhile one for a lot of people. Open source isn't the be-all and end-all solution to this, as great as that would be.


Thinking about this... I guess my trust is that someone smarter than I am will notice it, cause a fuss, and the community will raise pitchforks and... git forks. My trust is in the community; I hope it can stay healthy and diverse for all time.


Trusting a group of people like you to cover areas you might miss is the benefit of open source and a healthy community.

With Apple you have to trust them, and trust that they don't get a national security order.

I trust that if everyone who had the ability to audit OSS got a national security order, it would leak; and such orders would be impossible to serve on the many auditors who live in other nations.


Maybe if you drink from the NPM/PyPI firehose without checking (as too many unfortunately do).

For a regular Linux distribution there are maintainers updating packages from upstream source who can spot malicious changes slipped in upstream. And if the maintainers in one distro don't notice, it is likely someone in another distro will.

And there are LTS/enterprise distros where upstream changes take much longer to get in and the distro does not change much after release, making it even less likely that a sudden malicious change will get in unnoticed.


Somewhere along the line someone is producing and signing the binaries that find their way onto my computer; they could produce those binaries from different source code and I would be none the wiser.

Debian tries to be reproducible, so to avoid being caught they might need to control the mirror so that they could send the malicious build to only me. I.e., if I'm lucky, it would take a total of two people to put malicious binaries on my computer (one with a signing key, one with access to the mirror I download from).
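
As a rough sketch of the cross-check that reproducible builds make possible (the mirror URLs and package path below are illustrative placeholders, not a recommended tool):

    # fetch the same package from several mirrors and compare digests;
    # a targeted attack on a single mirror shows up as a mismatch
    # (mirror URLs and package path are example placeholders)
    import hashlib
    from urllib.request import urlopen

    MIRRORS = [
        "http://deb.debian.org/debian",
        "http://ftp.us.debian.org/debian",
    ]
    PKG = "/pool/main/h/hello/hello_2.10-2_amd64.deb"

    digests = {hashlib.sha256(urlopen(m + PKG).read()).hexdigest()
               for m in MIRRORS}
    print("consistent" if len(digests) == 1 else "MISMATCH -- investigate")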


The only way what you said is not true, for any networked device, is to go down to the river, throw it in, and never use a digital device again. It's not a false sense of security; it's a calculated position on security and what you will accept, and moving the spying from the server to the phone was the last straw for a lot of people.


Apple already scans your photos for faces and syncs found faces through iCloud. I’d imagine updating that machine learning model is at least as straightforward as this one.


They're searching for different things though. To my knowledge, before now iOS has never scanned for fingerprints of specific photographs. It would be so darn easy to replace the CSAM database with fingerprints of known Tiananmen Square photos...


That is a distinction without a difference. I’m sure you could put together quite a good tank man classifier (proof: Google Reverse Image Search works quite well), and it’d catch variations which a perceptual hash wouldn’t.

The only difference is intent. The technical risk has not changed at all.
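
To make the "fingerprint" part concrete: a perceptual hash matches near-duplicates of one specific image rather than a category. Here is a minimal average-hash (aHash) sketch; it is illustrative only, since Apple's NeuralHash is a learned neural embedding, not this simple algorithm:

    # minimal average-hash: one bit per pixel of an 8x8 grayscale thumbnail
    # (illustrative sketch only -- NOT Apple's NeuralHash)
    from PIL import Image  # pip install Pillow

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming_distance(a, b):
        # small distance: likely the same photo, resized or recompressed;
        # large distance: a different photo
        return bin(a ^ b).count("1")

The design difference is that a hash like this only flags near-copies of images already in the database, while a classifier generalizes to unseen photos of the same subject.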


That is to say, face scanning is just as insidious as the new feature?


The technical risk to user privacy - if your threat model is a coerced Apple building surveillance features for nation state actors - is exactly the same between CSAM detection and Photos intelligence which sync results through iCloud. In fact, the latter is more generalizable, has no threshold protections, and so is likely worse.


It's the legal risk that is the biggest problem here. Now that every politician out there knows that this can be done for child porn, there'll be plenty demanding the same for other stuff. And this puts Apple in a rather difficult position, since, with every such demand, they have to either accede, or explain why it's not "important enough" - which then is easily weaponized to bash them.

And not just Apple. Once technical feasibility is proven, I can easily see governments mandating this scheme for all devices sold. At that point, it can get even more ugly, since e.g. custom ROMs and such could be seen as a loophole, and cracked down upon.


This hypothetical lacks an explanation for why every politician has not already demanded that Apple (or, say, Google) commit this scope creep for photos stored in the cloud, where the technical feasibility and legal precedent have already been established by existing CSAM scanning solutions deployed at scale.


I have to note that one of those solutions deployed at scale is Google's. But the big difference is that when those were originally rolled out, they didn't make quite that big of a splash, especially outside of tech circles.

I will also note that, while it may still be a hypothetical in this particular instance, the EU already went from passing a law that allows companies to do something similar voluntarily (previously, they'd be running afoul of privacy regulations) to a proposed bill making it mandatory, in less than a year's time. I don't see why the US would be any different in that regard.


Ok, but now you’ve said that the precedent established by Google and others already moved legislation requiring terrible invasions of privacy far along. You started by saying Apple’s technology (and, in particular, its framing of the technology) has brought new legal risk. What I’m instead hearing is that the risk would be present in a counterfactual world where nothing was announced last week.

At this point of the discussion, people usually pivot to scope creep: the on-device scanning could scan all your device data, instead of just the data you put on the cloud. This claim assumes that legislators are too dumb to connect the fact that if their phone can search for dogs with “on-device processing,” then it could also search for contraband. I doubt it. And even if they are, the national security apparatus will surely discover this argument for them, aided by the Andurils and NSOs of the world.

As I have repeatedly said: the reaction to this announcement sounds more like a collective reckoning of where we are as humans and not any particular new risk introduced by Apple. In the Apple vs. FBI letter, Tim urged us to have a discussion about encryption, when we want it, why we want it, and to what extent we should protect it. Instead, we elected Trump.


The precedent established by Google et al is that it's okay to scan things that are physically in their data centers. It's far from ideal, but at least it's somewhat common sense in that if you give your data to strangers, they can do unsavory things with it.

The precedent now established by Apple is that it's okay to scan things that are physically in possession of the user. Furthermore, they claim that they can do it without actually violating privacy (which is false, given that there's a manual verification step).


The precedent established by Apple, narrowly read, is it’s ok to scan data that the user is choosing to store in your data center. As you pointed out, this is at least partly a legal matter, and I’m sure their lawyers - the same ones who wrote their response in Apple vs. FBI I’d imagine - enumerated the scope or lack thereof.

Apple’s claim, further, is that this approach is more privacy-preserving than one which requires your cloud provider to run undisclosed algorithms on your plaintext photo library. They don’t say this is not “violating privacy,” nor would that be a well-defined claim without a lot of additional nuance.


Nonsense. Building an entire system, as opposed to adding a single image to a database, is a substantially different level of effort. In the US, at least, this was used successfully as a defense: the US cannot coerce companies to build new things on their behalf, because that would effectively create "forced speech," which is forbidden by the US Constitution. However, they can be coerced when the effort is minimal, like adding a single hash to a database.


Photos intelligence already exists, and if people are really going to cite the legal arguments in Apple vs. FBI, then it’s important to remember the “forced speech” Apple argued it could not be compelled to make was changing a rate limit constant on passcode retries.


Exactly this. The whole thing is a red herring. If Apple wanted to go evil, they can easily do so, and this very complex CSAM mechanism is the last thing that will help them.


I’ve read your comments, and they are a glass of cold water in the hell of this discourse. This announcement should force people to think about how they are governed - to the extent they can influence it - and double down on Free Software alternatives to the vendor locked reality we live in.

Instead, a forum of presumably technically savvy people are reduced to hysterics over implausible futures and a letter to ask Apple to roll back a change that is barely different from, and arguably better than, the status quo.


Thanks. I couldn’t agree more.

We need both - develop free software alternatives (which means to stop pretending the alternatives are good enough), and to get real about supporting legal and governance principles that would protect against abuses.

If people want to do something about this, these are the only protections.


So you have User A: they upload a pic containing User A and Persons B, C, D, E... Z.

iCloud scans for those faces,

finds those faces and ties them to other Apple ID accounts via face recognition, then via fingerprint recognition to a device, and to a location based on IMEI, etc.

Apple's platform is literally the foundation for the most dystopian digital tool-set in history...

Once the government is able to crack the Apple war chest, everything is fucked.


A false positive in matching faces results in a click to fix it or a wrongly categorized photo. A false positive in this new thing may land you in jail or have your life destroyed. Even an allegation of something so heinous is enough to ruin a life.

The "one in trillion" chance of false positives is Apple's invention. They haven't scanned trillions of photos and it's a guess. And you need multiple false positives, yet no one says how many, so it could be a low number. Either way, even with how small the chance of it being wrong is, the consequences for the individual are catastrophic. No one sane should accept that kind of risk/reward ratio.

"Oh, and one more thing, and we think you'll love it. You can back up your entire camera roll for just $10 a month and a really infinitesimally minuscule chance that you and your family will be completely disgraced in the public eye, and you'll get raped and murdered in prison for nothing."


Ok.

So iCloud Photos circa 2020 [and Google Photos and Facebook and Dropbox and OneDrive] aren’t a risk you should be willing to take.

This feature doesn’t change anything in that regard; the scanning was already happening.


I literally do not take that risk in 2021. I do, currently, make the reasoned assumption that the computational overhead of pushing changes down to my phone, and the general international security community, keep me approximately abreast of whether my private device is actively spying on me (short answer: it definitely is; longer answer: but to what specific intent?).

Apple's new policy is: "of course your phone is scanning and flagging your private files to our server - that's normal behavior! Don't worry about it".


This is clearly stated in the article:

"You can see that these were indeed at-risk patients – the overall mortality rate in the general population for coronavirus infections is nowhere near those rates, and it’s a damn good thing it isn’t. The mean age of the patients was 59 years, 64% male, with 50% of them having hypertension, 45% of them obese, and 30% with Type 2 diabetes. So even though they were characterized as mild-to-moderate on enrollment, this was just the sort of group that you’d worry about as a physician."

It is still a valuable study even if it is not a study done on the general population.


Because this wasn't the case before.


Sounds like Apollo can connect to both MySQL and Postgres (as well as other data stores): http://docs.apollostack.com/apollo-server/guide.html#Connect...


I think you have the wrong idea about what a derivative is. The article does a pretty good job of explaining what a financial derivative is and gives some examples of common ones. Financial derivatives have no real relationship with mathematical derivatives.


> Financial derivatives have no real relationship with mathematical derivatives.

Indirectly they do. Option prices are derived from the underlying price as well as the implied volatility, time, and interest rates.

So when certain parameters change, the price changes, and that is a relationship with a mathematical derivative. The change in option price versus the change in the underlying (delta) is the first derivative, and then you've got gamma as the second derivative. And so on.
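
In Black-Scholes terms (the standard textbook model, used here just for illustration), delta and gamma literally are the first and second derivatives of the call price with respect to the underlying:

    # Black-Scholes call delta (dPrice/dS) and gamma (d^2Price/dS^2)
    # S = spot, K = strike, T = years to expiry, r = rate, sigma = implied vol
    from math import log, sqrt, exp, pi, erf

    def norm_cdf(x):
        return 0.5 * (1 + erf(x / sqrt(2)))

    def norm_pdf(x):
        return exp(-x * x / 2) / sqrt(2 * pi)

    def call_delta_gamma(S, K, T, r, sigma):
        d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
        delta = norm_cdf(d1)                          # first derivative
        gamma = norm_pdf(d1) / (S * sigma * sqrt(T))  # second derivative
        return delta, gamma

    print(call_delta_gamma(S=100, K=100, T=0.5, r=0.01, sigma=0.2))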


damn. do you think I could get people to pay me for a seminar on it anyway?


probably


I'll give you some advice you won't listen to: don't transfer to another college just for a girl. It's really not worth it. If you were married or had a kid together, that would be a different story. But if she's not even your girlfriend anymore, then the answer is definitely no. There are plenty of fish in the sea.

While grad schools do put some emphasis on the school you graduated from, the most important thing (I'm assuming you're talking about a PhD) is showing you can do good research and getting good recommendations from your professors. Getting decent grades probably wouldn't hurt either.

As far as getting a job is concerned, it really depends on what you want to do. If you want to go work in consulting or at an investment bank, then yeah, the school is very important and going to a good school will open a lot of doors. If you want to work at a startup, being able to get things done is much more important.


Agree with tmdg. If you are planning to go to a better school (more precisely, a school KNOWN TO BE better) then it's fine; otherwise don't do it. If you have problems convincing yourself that love is not a very good thing, I hope this blog post of mine will help:

http://chanux.wordpress.com/2008/02/15/the-post-valentine-da...

You'd also better read the following article, or at least its "School" section:

http://www.randsinrepose.com/archives/2007/02/25/a_glimpse_a...


Yep - seconded. I did that and have regretted it for 20 years now.


I think that's the main issue. Many of these models are based on math that few people understand. I don't think it's necessarily an issue of integrity, but more an issue of traders and asset managers using the results of these mathematical models without fully understanding the implications and risks. They are, after all, probabilistic models, and it seems like they weren't treated as such.


Exactly. People were just saying "well, the model says it will work" and didn't realize that the models they were using had huge assumptions hard-coded into them.
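
As a toy illustration of that kind of hard-coded assumption (every number here is made up), compare the 1-in-1000 worst-day loss a normal model predicts with what simulated fat-tailed returns actually produce:

    # the model assumes returns ~ Normal(0, 1%); the "real" returns below
    # occasionally have 10x-sized shocks -- all figures are invented
    import random

    random.seed(0)
    n = 100_000
    returns = [random.gauss(0, 0.01) * (10 if random.random() < 0.005 else 1)
               for _ in range(n)]

    model_var = -3.09 * 0.01               # normal 99.9% quantile (z = -3.09)
    data_var = sorted(returns)[n // 1000]  # empirical 99.9% worst day

    print(f"model's 1-in-1000 day: {model_var:.3f}")
    print(f"data's 1-in-1000 day:  {data_var:.3f}")

The hard-coded normality assumption understates the tail loss by roughly a factor of three in this toy setup.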


There is definitely a demand for those "fluff" skills that many engineers like myself cannot fulfill. There are many great careers out there that don't require math beyond the high school level. Don't look down upon them just because they have passions in other areas.


Though I don't completely disagree that many Facebook apps are useless, when did fun become useless?


when, because of diminishing returns, one reaches the margin where wasting time is no longer providing significant benefits (such as feeling better).


Fun became "useless" when we started analyzing it, poking and prodding it.

At the same time, apps became useless when people started making "apps" like Vampires, Mob Wars and called them "applications" instead of 'games'.

What a beautiful game of wordplay we play.


I would hardly say Facebook apps are useless in terms of revenue generation:

http://www.onlineepiphany.com/2008/02/08/how-i-would-get-sta...


I don't see how Facebook growing quickly can be a bad thing. Would it be better to grow at a slower rate?


I'd rather have slow, sustainable growth than hype-filled growth. Facebook's growth is hot air... which at some point will all come pouring out.

