This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Apple has yet again betrayed the fact that "it's not technically feasible" is bullshit when it comes to government access.
They said it concerning device access, and that was a lie. They said it concerning private cloud compute, and that was a lie. Hell, they said it concerning even the conceivability of secure backdoors, and that was probably a lie, too (not even getting into the fact that they definitely already have magic numbers that can overtly tell your phone to execute arbitrary code).
Apple has now introduced Enhanced Visual Search, technomagic which whizzes your photos (even the ones not in iCloud) off to Apple servers, using all sorts of mathematical goodies like homomorphic encryption to keep them private while answering questions like, "What is this landmark in this photo?" and tagging the photos accordingly. It would be strictly easier to design a system that looks for stuff like child porn and alerts them.
Yes yes, you will rapidly object. False positives, false negatives. I grok filtering theory. Who will have the authority to do what with the information? Sure sure. Who will have to maintain the databases or filters that look for it? Yup, I hear ya. Those are not technical objections. Those are process objections.
I have never supported having the government involved in any of these things. I understand the significant nature of the tradeoffs at the societal level. But I have routinely said that the cry of, "This is just technologically/mathematically impossible," is total bullshit. It's just not true. Several other influential voices in the tech privacy world are slowly coming around to this. They're seeing the deployment of these systems and saying how they're taken aback. How, well damn, if you can do that, then you probably can do this other stuff. But of course, doing the other stuff seems socially problematic, so they don't know how to feel.
It's actually easy to know how to feel. Just drop the lie that these sorts of things are technologically impossible. They are possible. But there are process tradeoffs and there are potentially huge liberty concerns that you can focus on. The more you continue to try to push the Noble Lie, the more likely you're just going to harm your own credibility in the long term. Better to fight on the grounds of true facts with, "We don't want it," win or lose, than to prepare the grounds for a complete credibility crisis, such that when the time comes, no one is able to responsibly push back.
Privacy and security are not the same thing. Just because an opt-in feature supposedly protects your privacy with cryptography, that is not the same thing as securing information with cryptography against untrusted parties. I usually detest analogies around these things, but: the lock on your bathroom door is for privacy and the lock on your front door is for security, and the requirements for the two lock designs are different. Kerckhoffs's principle still applies to any kind of key escrow, in that your information security will only be as strong as the security guarantee of the escrow provider.
Also, directly from Apple's Machine Learning Research page about the homomorphic encryption: "Identifying the database shard relevant to the query could reveal sensitive information about the query itself, so we use differential privacy (DP) with an OHTTP relay — operated by a third party — as an anonymization network which hides the device's source IP address before the request ever reaches the Apple server infrastructure. With DP, the client issues fake queries alongside its real ones, so the server cannot tell which are genuine. The queries are also routed through the anonymization network to ensure the server can't link multiple requests to the same client. For running PNNS for Enhanced Visual Search, our system ensures strong privacy parameters for each user's photo library, i.e. (ε, δ)-DP with ε = 0.8 and δ = 10^-6." In other words, it requires a trusted third party not to leak information to Apple about your IP as a side channel.
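For reference, since Apple quotes those (ε, δ) parameters without explaining them: the standard textbook definition (nothing Apple-specific) is that a randomized mechanism $M$ is (ε, δ)-differentially private if, for all neighboring inputs $D, D'$ and every set $S$ of possible outputs,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta.$$

With ε = 0.8 we have $e^{0.8} \approx 2.23$, so, roughly speaking, a single user's real queries can shift the server's odds about any hypothesis concerning that user by at most a factor of about 2.23, except with probability δ = 10^-6.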
Further, I would like to see independent cryptographic review of the homomorphic encryption, both the algorithm and the implementation, before trusting it with my privacy. There are so many ways of creatively undermining cryptographic algorithms and implementations to leak information.
I've never understood people who believe that backing up your images in any form was about anything other than Apple trying to become another entry for the Eye of Sauron. It's a huge cost to create software that whisks your photos off to a server, labels them, etc., and at least according to the theory, the idea is totally not to get the feds involved, and to maintain privacy while doing so. Especially given that the "bad" solution they used to have was "what's on your phone stays private on your phone, because we don't download it, look at it, or label it." It's just crazy, because doing nothing would have continued to give users the privacy that Apple claims to be all about, while protecting Apple from liability for not finding crime-think images. Now, because of the downloads, I could theoretically sue Apple for not catching images of my rape, or of my ex holding a gun he then tried to kill me with — they have those images, they're tagging them, and if those images potentially evidence a crime, not reporting it would risk making them an accessory to what was happening in those images.
Long story short, anything that doesn't remain completely off internet servers is public at this point, and the only way to guarantee that is to use analog cameras and write on paper.
The question of whether or not it's possible is leaps and bounds above my pay grade, but I suppose if the cryptobros are able to come up with things like a fully anonymous immutable ledger of transactions, using zero knowledge proofs and other voodoo, it shouldn't be surprising if someone manages to come up with a way to flag child porn without knowing what's exactly on the picture.
That said, even though you wanted to focus on the technical side, my mind always floats towards the question of "what even is privacy?". Europe has a strong core of "muh privacy" protectors, but as far as I can tell, they only care about individual privacy - as long as I'm not able to connect any particular data to any particular person, that's all fine by them. For my part I find "anonymous surveillance" even scarier, because all the creepy stuff happens on an aggregate level. I might as well face it, I'm not that unique, and if you've seen one traditionalist conspiracist schizo, you've seen them all. So governments and megacorps being able to collect all the data on trad-schizos they want, """"anonymously"""", doesn't really bring me much comfort. As time goes by they'll probably know me better than I know myself, and if there's any way to nope out of that, I will.
Now, does that justify lying about whether or not it's technically possible to flag child porn? I'll go with a "no". It has the same energy as those folks coming up with a new species to stop the construction of a dam.
Indeed. Something like "scanning photos using an algorithm for a set of well-known landmarks in a way that never exposes anything to a human" doesn't feel like it violates much. That's subjective, tho.
The weird thing is that Facebook did CSAM and other moderation using humans, and apparently looking at this stuff nonstop has serious psychological consequences. Life imitates South Park, I guess.
Anonymous this. Anonymous that. With sufficient effort you can be deanonymized. What cell phone regularly visits your work, home, and relatives? Yours. Somewhere in the aggregated """anonymized""" data is a perfectly obvious unique user number that is you.
I think the anonymous thing is kind of a joke. You don't even need that many "anonymous" datapoints taken together to figure out who someone is. The time and location where you post, the people you follow, the exact configuration of your browser: put enough details together and anyone with access can find you pretty easily.
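A back-of-the-envelope way to see this (the bit counts below are invented illustrative values, not measurements): singling out one person among roughly eight billion only takes about 33 bits of information, and a handful of individually "anonymous" signals adds up fast.

```python
# Rough sketch with made-up example values: how quickly weak,
# individually "anonymous" signals add up to a unique identification.
import math

signal_bits = {
    "timezone": 3.0,
    "browser + version": 5.0,
    "screen resolution": 4.0,
    "installed fonts": 7.0,
    "posting-hour pattern": 5.0,
    "coarse home location": 12.0,   # e.g. narrowed to a town of ~4,000 people
}

collected = sum(signal_bits.values())
needed = math.log2(8e9)             # ~33 bits to single out one of 8 billion people

print(f"collected ~{collected:.0f} bits, ~{needed:.0f} bits needed to be unique")
```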
Well, there are some caveats there - if they are actually using homomorphic encryption to run the classifier, that means that Apple's servers do not at any point learn what the landmark is. If the goal is to report it to the FBI if a picture being sent has been labelled "child porn", accordingly, the phone would have to be wired up to report/send the image if the data it received, once decrypted, indicates that it was classified as such. How do you stop people from blocking this reporting functionality on their end? Adding additional user-unmuteable snitching logic to end-user devices comes with all sorts of legal, technological and security risks (and quickly puts you in a league with North Korean computing equipment that comes with daemons watermarking every document you touch, which they make it illegal to disable).
That being said, I see your argument at least insofar as the case for "it can't be done" is overstated and oversold, but I am not enough of an idealist to agree with this "just be truthful to the ruling classes and try to defend what you want on principle, the truth always wins in the end" thinking. I'm pessimistic about the prospect of a principled stand - we'll get the mandatory surveillance rectangle reporting on wrongthink eventually, because the powers-that-be really want it, and the majority of our fellow citizens probably already want it as well, or else the ruling classes will have all the opportunity on their side to manufacture the conditions that will make them want to, be it by propaganda, dissolving the cohesion of their opposition (note how effectively they split the tech anarchist scene into those who still want to keep the government out and those who think that the Nazis who want to keep the government out are the real danger) or creating real problems to which they are the solution (people want less government spying -> import scary foreigners into what to them is a scary foreign land -> old natives want more government spying to keep them safe from scary newcomers, newcomers want more government spying to keep them safe from racist natives). As far as I am concerned, the better choice at this point is just to lie and obstruct all the way. This buys time for some technical or societal deus ex machina solution to emerge, or else at least lets us spend a bigger fraction of our remaining time on this mortal coil out of bondage.
Are they currently claiming they do this today? My understanding of homomorphic encryption (admittedly a bit outside my wheelhouse) is that it's nowhere near as well-trod a space as, say, RSA. When I last looked, it was possible -- with a bunch of caveats -- to do simple things with a whole bunch of overhead, and certainly no equivalent of a NIST standard (if you trust those: say "Dual EC" with me) for it.
I didn't think the technology to do this well was ready for prime time today, but maybe I'm just a bit out of date. Do they have a white paper, or, better, a bunch of academic papers?
My general take is that the main use of homomorphic encryption is to make cryptocurrencies look respectable by comparison.
Its main purpose seems to be to get people to move their data into the cloud, while pretending that you can do useful computations on the data without learning anything useful about it, because it is all encrypted.
Yup. They even open sourced a library. They're citing the BFV papers from 2012.
Thanks. I suppose publishing a white paper at least opens them up to more serious scrutiny (which I've seen in serious-ish forums like HN), but my initial response is still rather skeptical. The encryption methods seem like they should be far more expensive than Apple is letting on [1], and they say they're using this for querying remote databases with encrypted queries. They're less clear on how these databases (for photographic landmarks, URLs, and such) are encrypted in a manner that actually hides the query from Apple. Are the encryption keys different per device? If so, how do they avoid needing a separate database per device? And if not, it seems there's a lot of trust that they would be unable to figure out which rows matched.
I know Google's approach to similar issues has been focused more on device side ML models. Pixel phones support offline song recognition (I've noticed it's fairly limited to popular songs), and Google Translate can work (in a limited fashion) offline. Why does Apple need to do cloud-based POI recognition in photos? The whitepaper only shows 6 very well known landmarks, but it seems like it'd be easy enough (and secure!) to do this on-device. Given the known computational costs of FHE, it might even be better for battery life.
Oh, I would read it differently. I think they're letting on that it's pretty expensive, which is why they're doing all the mess with sharding and DP.
My understanding of BFV is that when the device does its keygen (unique for each query, I assume), it produces and then passes an "evaluation key" as part of the public key. One has to design the scheme so that there is sufficient expressivity in the set of evaluation functions that can take in the evaluation key and perform the desired operations on the encrypted message. I don't believe this involves completely encrypting the entire database from scratch using the public key every time; it just requires running the same operations on the same underlying database, but with the evaluation key in the operations. They call out that it's important that the BFV scheme has many of the operations they want, expressible in, shall we say "evaluation key parameterized form".
I think a lot of the work in library building is essentially building up a set of these "evaluation key parameterized" operations. You have to start with extremely simple operations and then build your way up to more usable tools that are composed of those simple operations.
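BFV itself is a lattice scheme and well beyond a forum comment, but the basic homomorphic pattern (the server computes on ciphertexts it cannot read, and only the keyholder can decrypt the result) is easy to show with a much simpler, additively homomorphic scheme. Here's a toy textbook Paillier sketch, with tiny demo primes; this is emphatically not BFV and not production crypto, just an illustration of the property:

```python
# Toy textbook Paillier: additively homomorphic, NOT BFV, tiny demo primes.
# Illustrates only the pattern: a server adds encrypted numbers it cannot read.
import math, random

def keygen(p=293, q=433):                               # demo-sized primes
    n = p * q
    g = n + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

def server_add(pk, c1, c2):
    # Runs on the server: multiplying ciphertexts adds the hidden plaintexts.
    n, _ = pk
    return (c1 * c2) % (n * n)

pk, sk = keygen()
c_sum = server_add(pk, encrypt(pk, 17), encrypt(pk, 25))   # server never sees 17 or 25
assert decrypt(pk, sk, c_sum) == 42
```

The BFV scheme Apple cites plays the same game but also supports (bounded) multiplications on the encrypted data, which, roughly, is what makes an encrypted nearest-neighbor lookup expressible at all.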
If they were designing a system to enable this, they might not include all of the features they currently have. It would probably be easier to just make one that does it than to start from what they have and add it in. For example, they wouldn't go to all the trouble to try to hide the correspondence between a device ID and the encrypted bag of photos, either (or they'd have to similarly specially puncture it).
Also, most of the concerns about "can't people just turn it off" forget how strong Apple's control is over their walled garden. Ultimately, this usually descends into arguments over Apple's update process, which is where they are the strongest. I'm mentally skipping some steps, but I could imagine designing a system where, when the device requests the latest update, their request must come with certain proofs, and one could imagine building in that those proofs show that they're doing things that Apple wants (it can check by, say, leaving a blob on their server, one that still doesn't reveal the contents of the photos, but can be used to verify that they're being truthful about them), and denying the update if they can't prove that they're doing what they're told. (Probably still easier to just do it the first time, though.)
That discussion usually descends into questions of society, not tech. Whether people will still use Apple devices. Whether people will eschew updating their devices with Apple software and try to go their own way, etc. All of those discussions mostly amount to, "Yes, you can leave Apple's walled garden and go your own way," but that's true of any piece of electronics if you try hard enough. For any devices that stay in Apple's walled garden, Apple is nearly omnipotent (absent possibly nation state level efforts).
Fair enough on the strategy piece. One clarification, though. I did not say:
Oftentimes, the truth doesn't win. Many of those times, it's because the people who would have had the credibility to fight have publicly burned it until the truth doesn't matter anymore.
That article does not say it was a lie. It says that there is an "Apple-owned private key", but that's a matter of interpretation. Apple destroys its copy of this key before the phone is sold, so the key only exists on the device itself. They say Apple COULD make a law-enforcement accessible version, not that the version they actually make is law-enforcement accessible.
I'm not sure what you think Enhanced Visual Search proves. Certainly it has always been technically feasible for Apple to search your phone for child porn. No one said it wasn't, as far as I know; Apple announced they were doing it in 2021 and backed off. Enhanced Visual Search is controlled destruction of privacy.
False positives and false negatives aren't the main issue. The main issue is it's not a proper role of the government (whether itself or through agents) to go rifling through my data looking for crimes.
Precisely as I claimed.
Oh my. I'm not sure what rock you've been living under.
Right. Anyone paying attention back then should have knocked it off with the "it's technically infeasible" song and dance, but sooooo many people haven't.
It sounds like you're in about the same camp I am; we just have different judgments regarding the extent to which people have been claiming technical infeasibility all the time. That's one of those 'sense of the discourse' things that is basically impossible to prove, because one would have to set weird standards about, "Oh, it needs to be person of this much prominence," etc. Probably the easiest way to see how pervasive the meme still is, even if some voices have started to shy away from it, is to peruse /r/technology.
But not the opposite of the claim that Apple has made. If I encrypt some data and burn the key, it is technically infeasible for me to decrypt the data. I could have just not burned the key, that's obvious, but I did burn the key.
Right. That's this part:
Apple's claim is fine. It just also demonstrates that other schemes are not, in fact, technologically infeasible. Actually, there is a slight quibble: the quoted bit isn't entirely true. Apple doesn't destroy the only copy of its key; it destroys the mechanism by which it can access the key (they put the cards in a blender). There remains a copy of the key (or, to be strict, the key ("Apple's key") that unlocks the key ("the user's key")) in the HSM that they keep possession of. They just believe that there is no longer any way for them to access it (short of theoretical attacks like decapping the HSM).
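A hypothetical sketch of that key-wrapping arrangement (the names and structure here are mine, for illustration only, not Apple's actual design; it uses the Python `cryptography` package's Fernet recipe as a stand-in for whatever the HSM really does):

```python
# Hypothetical sketch of key wrapping: "Apple's key" unlocks the "user's key".
from cryptography.fernet import Fernet

kek = Fernet.generate_key()          # "Apple's key": in the real design it lives only in the HSM
user_key = Fernet.generate_key()     # per-user key that actually protects the data

wrapped_user_key = Fernet(kek).encrypt(user_key)    # what exists outside the HSM
ciphertext = Fernet(user_key).encrypt(b"photo bytes")

# Reading the data later requires unwrapping the user key, which requires the
# KEK. If every access path to the KEK is destroyed (cards in a blender),
# both the wrapped key and the ciphertext are just noise.
recovered_user_key = Fernet(kek).decrypt(wrapped_user_key)
assert Fernet(recovered_user_key).decrypt(ciphertext) == b"photo bytes"
```

That's the quibble in code: the wrapped key still exists; what gets destroyed is every practical path to the KEK, which is why "we can't decrypt it" is true of the system as built while saying nothing about systems that could have been built instead.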
The disagreement is: having made a phone that works this way, it is now technically infeasible to search it. It was not technically infeasible to build the phone another way, but Apple also never claimed that. After all, they did this deliberately, as a sales pitch.
That is a claim, but it's in a different debate. One is, "The government wants Apple to get into this existing phone right here, right now." The other is, "The government is considering passing a law that would require Apple to build future devices which would allow them to perform searches." I don't think anything that anyone has said so far in this thread is really relevant for the first debate; I certainly haven't said anything about that. It's the latter debate that is the context for so many of the poor applications of, "...but that's technologically infeasible!"
The only thing I ever saw Apple commenting on is the first one. Did they say anywhere that it's technically infeasible to build a surveillable phone? 'Cause that's dumb.
I doubt Apple ever officially used those words, because it is dumb. But all sorts of tech industry/press folks say variations on the theme. The link I gave to the CKV/"AKV" debate was a back-and-forth with people who are still very prominent, who were absolutely actually claiming that it was technologically infeasible (to get there, you need to add certain qualifiers as to what "it" is). Reading that whole back and forth is probably the most useful for understanding where people try to draw the battle lines.
I am fairly sure that the heavy hitters in the US intelligence community have a way to compel Apple to extract data from a device they physically possess. I doubt that they have an officially sanctioned zero-day backdoor, though.
But this is the world consumers wanted. They wanted always-online devices, they wanted walled gardens.
Sure, the allegation is that they had their own ‘emergency’ backdoor that even the NSA and GCHQ weren’t supposed to know about, but which they kept in their back pocket in case they had to - themselves - use it on behalf of intelligence for a justified reason.
But what is also obvious is that if you’re the NSA and believe this to be the case, you just wiretap the fuck out of Apple’s internal Slack, email system, surveil senior security engineers directly and so on until you either have the exploit, know where to look, or know exactly who to ‘persuade’ into telling you.
Then, you use it sparingly enough that nobody at Apple can deduce that you have it, and it lasts 4 iPhone generations before Kaspersky find it.
This part never actually works. Some idiot blasts a bunch of them at a journalist who has a clue, the exploit gets captured, and then it's trivial to patch something in the exploit chain.
That is not how you do it. No backdoor is needed. You create a legitimate update for the phone and sign it with the bootloader key, and you probably also update the secure enclave to allow unlimited tries. Once you have that, the rest is just brute-forcing the PIN to get that part of the composite key (if the master key really is composite; it may just be stored in the SE, no one really knows).
This is not possible. The encryption keys that are used to decrypt user data are mixed with the signature of the OS that's booted. That's why in the normal update flow, you have to enter your passcode to kick off the update -- your phone is forward-encrypting the parent keys to the new software before you boot off the old one.
It is really great that Apple doesn't have the hashes of its own OSes, right? You can totally boot a custom kernel and then just fake whatever signature you need to blast, along with the PIN, at the secure enclave. Or unmix whatever mixing you do with the PIN. Device ID, OS hash, the number of lux hitting the phone in the factory - Apple already has all of this.
Mixing here would be a one way irreversible operation, of which there are plenty in modern cryptography.
Which is absolutely irrelevant if you know the part that you have mixed with. A salt doesn't make brute-forcing harder; it makes hash lookup harder. Completely different approaches.
It's not irrelevant -- if you take a standard one-way function F, then even knowing the identities of both X and Y, you can't load software version Y and back out what the key would have been had the diversification been done with X.
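A minimal sketch of what that kind of "mixing" could look like (purely illustrative; the function, inputs, and salt names are my invention, not Apple's actual key hierarchy):

```python
# Hypothetical sketch: deriving a data key by one-way "mixing" of the user's
# passcode with a measurement of the booted OS. Not Apple's actual scheme.
import hashlib, hmac

def derive_key(passcode: bytes, os_measurement: bytes, device_salt: bytes) -> bytes:
    # HMAC-SHA256 as a one-way KDF: the output reveals nothing about the
    # inputs, and there is no way to "unmix" the OS measurement afterwards.
    return hmac.new(device_salt, passcode + os_measurement, hashlib.sha256).digest()

key_under_x = derive_key(b"1234", b"os-build-X-measurement", b"per-device-salt")
key_under_y = derive_key(b"1234", b"os-build-Y-measurement", b"per-device-salt")

# Knowing both measurements (and even the salt) does not let software measured
# as Y recover the key derived under X; only re-running the derivation with
# the correct passcode and the X measurement does.
assert key_under_x != key_under_y
```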
The secure enclave is supposedly not updatable.
There are tiers of government access. The NSA and CIA guard their tech closely and certainly don't hand it out to every law enforcement agency that wants to catch petty drug dealers and degenerates with stuff saved on a switched-off iPhone. The fact that the FBI had to physically grab Ulbricht's unlocked laptop, that they still try to grab unlocked phones, and that intelligence sources regularly discuss ways of working around Apple's encryption suggests that, if usable hardware backdoors into this kind of hardware exist, their use is limited to the absolute pinnacle of intelligence or criminal investigation operations against political enemies of the state, foreign governments and alleged terrorists. All of those should know that they're under constant surveillance and that every thought they utter is being recorded somewhere in Langley. High-level backdoors are hoarded by the relevant agencies, and - if you're an enemy of the state to the extent you need to be worried about them - you're probably already fucked, and the powers that be can likely already manufacture the relevant pretext to ruin/arrest/kill you.
Well, parallel construction is a thing.
Of course, a spook will not crack phones of random suspects in police custody (because this would confirm to the police that they have this ability), nor will they risk their precious 0-days to infect phones of suspects in ordinary criminal cases, but if drug dealers who have incriminating messages on the Apple cloud had an above-baseline rate of being 'randomly' checked when crossing the border, I would not be surprised.