Private Cloud Compute Security Guide
(security.apple.com)
398 points by djoldman 6 days ago | 222 comments
0xEF 6 days ago | root | parent | next |
You raise the question I ask every time I spin up a VPS on one of the providers I use: do I trust this company? There are still no clear rules that say Apple, Akamai, AWS, etc. have to protect me or my data if the right people start asking questions. I'm less worried about hackers these days, since my dumb little projects aren't likely worth their time, and more worried about increasingly surveillance-oriented governments with an active interest in categorizing people into boxes. If I want to run a private alternative to Discord, for example, where I can safely express my dissent among friends and family who also have access, I can't really do it without going to the trouble of setting up my own hardware first.
Companies like Apple do try to protect their users, and I applaud them for that, but what happens when (not if) they flip?
roca 6 days ago | root | parent | prev | next |
> For other services to leak their data, all it takes is for one employee to do something they shouldn't.
This is not true for Google, at least. I know because I work at Google.
So I wonder how accurate your knowledge of Meta, Microsoft etc is.
abalone 5 days ago | root | parent | next |
PCC is a whole different level. For example, you still have to trust that Google is doing what it says to control access. PCC makes it auditable and verifiable by clients when connecting to a node.
You can also audit that the binaries don’t leak any data in, say, debug logs, the kind of leak that is definitely possible on GCP/Borg. PCC nodes are “cryptographically airtight.”
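A toy sketch (in Python) of what "auditable and verifiable by clients" means in spirit: the client keeps the set of published release measurements and refuses to send data to a node whose attested measurement isn't in that set. All names and values here are hypothetical, not Apple's actual protocol.

    import hashlib

    # Hypothetical set of measurements for releases published for audit.
    # In the real system these come from a transparency log, not a hard-coded set.
    PUBLISHED_MEASUREMENTS = {
        "3f8a" + "0" * 60,  # placeholder digest for a hypothetical release
    }

    def measurement(release_binary: bytes) -> str:
        # Digest of a release binary; a real system measures the whole boot chain.
        return hashlib.sha256(release_binary).hexdigest()

    def client_accepts(attested_measurement: str) -> bool:
        # Only talk to a node that attests to running a published binary.
        return attested_measurement in PUBLISHED_MEASUREMENTS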
roca 5 days ago | root | parent |
I'm only here to correct the parent's false claims.
andruby 6 days ago | root | parent | prev | next |
Without revealing things under NDA, what could you share about what more it would take?
mattlondon 6 days ago | root | parent |
https://archive.is/HFlx1 is one example of the lengths they are going to - third parties running entire Google cloud software stack (i.e. the GCP stuff we all know, and the underlying infrastructure too) in their own data center that is entirely air-gapped from Google itself. This is a huge undertaking.
qmarchi 5 days ago | root | parent |
There's a huge difference in what GDC-Airgapped runs on and what is actually running in GCP.
Airgapped is based on top of Kubernetes, and it's using mostly off the shelf components for networking and compute.
GCP is based on top of Borg, using custom networking and compute hardware (though manufactured by a partner). As a note, not only is access without a support token alertable (it goes to your skip level), there's a distinct level of "I'm not even going to build the tool to enable this". Which makes being in support a b**.
If you want access to something, it's significantly easier to just ask the customer to do it themselves.
Disc: Former TSE for Advanced Support.
jshen 5 days ago | root | parent | prev |
How is it not true?
roca 4 days ago | root | parent |
I don't know what details I would be allowed to share, so I'd better not share any. You can try looking it up on the Internet.
But Google does a lot of work to protect against insider threats, because everyone understands that in an organization of this size there will always be bad apples, spies, etc. Google's systems are designed to protect customer data from malicious employees using technical measures; it's much more than just "if we catch you, you're fired" as was asserted upthread.
deanCommie 6 days ago | root | parent | prev | next |
zelon88 5 days ago | root | parent | prev | next |
> how much one can trust Apple.
> Apple's security posture is qualitatively better than Google, Meta or Microsoft.
So Apple is telling me that it's ok to trust Apple because Apple trusts Apple? Gotcha.
zaroth 5 days ago | root | parent |
That’s not, in fact, what Apple is saying.
mattlondon 6 days ago | root | parent | prev | next |
Citation needed.
I know that there is a lot of work being done at the Big Cos to meet various regulations and conformance things to make sure that data is encrypted at rest and in transit with customer supplied keys, no unilateral access, end-to-end audit logging etc.
You don't win the big big big hundreds-of-millions government/military/finance/healthcare contracts without these sorts of things. The Big Cos are not going to ignore those sorts of opportunities, and are obviously putting in the work, with hundreds/thousands of engineers implementing the provably-secure nature of their products, from supply chain to hardware to software to customer support access.
abalone 5 days ago | root | parent | next |
PCC is fundamentally more secure than merely encrypting at rest and auditing access. That still has a variety of attack vectors such as a software bug that leaks data.
Apple is unable to access the data even if subpoenaed, for example, and this is provable via binary audits and client verification that they are communicating with an auditable node.
mattlondon 5 days ago | root | parent |
How is that any different in either direction? Bugs exist in any and all code. Encrypted data can't be decrypted if you don't have the keys.
I don't see that Apple software is any different in that regard (just try using macOS for any length of time, even on Apple silicon, and you run out of fingers to count obvious UI bugs pretty quickly just in day-to-day usage). And obviously AWS won't be able to decrypt your data without your keys either.
The people running these huge multi-multi-billion clouds are not idiots making fundamental errors in security. This is why they all pay mega salaries for highly skilled people and offer five-figure bug bounties etc - they take this seriously. Would some random VPS or whatever be more likely to make errors like this? Sure - but they are not (and not expected to be) in the same league.
mattlondon 4 days ago | root | parent |
And just to confirm, less than 24 hours later there's a post on HN about a whole new batch of critical security bugs in macOS: https://jhftss.github.io/A-New-Era-of-macOS-Sandbox-Escapes/
I would not trust Apple any more or less than any other big-time cloud provider.
johnklos 5 days ago | root | parent | prev |
You forget that big companies will gladly offer shit to their customers, but then will offer something better to those who are willing to pay more (id est, governments). Why make things more secure for everyone if not doing that can make more money?
harry8 6 days ago | root | parent | prev |
> Apple has a compelling value proposition.
No. Apple has a proposition that /may/ be better than the current alternatives?
lukev 6 days ago | root | parent |
If Apple is doing what they say they are, it is in fact better. No maybe about it.
If they’re not, that means they are acting and intentionally deceiving the public security community which they are inviting to audit it.
Is that something you actually think is happening? I think we need to be clear here.
Your threat model may or may not be covered by the guarantees they are able to document, but just saying “well maybe they’re still doing some unspecified nefarious thing” is not contributing to the discussion.
Especially when none of the alternatives are even trying.
harry8 6 days ago | root | parent | next |
I don't make predictions about what different companies will do 10 years hence given they will be a collection of people, most of whom don't work there currently, doing business with regulations that don't presently exist.
"May" is just correct usage. How are you sure here? How could you convince a skeptic that it is possible to be sure?
talldayo 6 days ago | root | parent | prev |
Honestly I think this is a disingenuous defense. It's not insane to look at a closed-source project that is being partially-audited by cherrypicked organizations and say "that's not a very secure or trustworthy process". There is no reasonable accountability being offered to the community. It's like Ford selecting private safety inspectors to tell customers how great their safety is while conveniently leaving out any of the results from their federally-mandated crash tests. Is this really helping customers, or is it just blatant and masturbatory marketing?
Apple has worked to deceive the public before, in both small and large ways. They lied about backdooring notifications for the US government when they were asked to[0], so it's not too hard to imagine it happening anywhere else in their systems. They're not taking a traditional approach to software transparency which is suspicious, and their "threat model" has professedly not protected against motivated requests for identifying information[1].
When the Mechanical Turk attempted to fool commoners watching it work, it was imperative to hide every trace of the human inside. The candle used to see inside the machine was masked by smoke from candles placed around the room, the cabinet was locked to avoid accidental opening, and people were told not to touch it because it was apparently 'expensive and fragile'. Looks like Apple is the ringleader this time around.
> but just saying “well maybe they’re still doing some unspecified nefarious thing” is not contributing to the discussion.
But Apple is saying the opposite, "well, maybe we're doing the detailed secure thing, ask these people we hired", and you're praising them for it. If calling out objective and obvious logical fallacies isn't contribution, then how are we supposed to argue inside the Reality Distortion Field? Do we make-believe and assume that Apple's provided preconditions are true, or can we express concerns for the outstanding issues? I don't understand how these real-world flaws are somehow unjustified in conversation. You're allowed to hold Apple to adversarial levels of scrutiny if you take security seriously.
> Especially when none of the alternatives are even trying.
Apple is the largest company in the world and by many metrics (and points of comparison) isn't even doing the bare minimum in managing public trust. Whenever you are shown a whitepaper without the means to validate the contents yourself, you are being fed what is called "marketing" in the tech circles. You don't have to feel bad about being tricked though, it's the same thing that fools investors and overly-faithful engineers. Whitepapers are whitepapers, handpicked security audits are handpicked security audits, and code is code. There is no blurring of the lines.
[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
avianlyric 6 days ago | root | parent | next |
> They lied about backdooring notifications for the US government when they were asked to[0]
That’s a bit much. They were compelled by the U.S. government to deny handing over data. Sure it’s technically a lie, in the same way that a General stating they “neither confirm nor deny X” is also likely a lie.
But it’s entirely unreasonable to judge Apple for following the legal, and mandated instructions issued by the democratically elected government of the nation they operate within. Are you honestly suggesting that companies like Apple should be expected to simply ignore the law when you think it’s convenient?
talldayo 5 days ago | root | parent |
> Are you honestly suggesting that companies like Apple should be expected to simply ignore the law when you think it’s convenient?
No, I am suggesting that none of you know what you're talking about when defending Apple's brand of privacy. We know that they can be compelled to lie to us about their compute architecture, so why accept half-measures in your security? Because Apple is a good company and deserves the respect???
alsetmusic 5 days ago | root | parent |
> We know that they can be compelled to lie to us about their compute architecture, so why accept half-measures in your security?
There's a difference between a court order preventing disclosure and a court order compelling speech. The First Amendment prevents compelled speech. They can be forced not to reveal. They can't be forced to make false claims.
talldayo 5 days ago | root | parent |
Distinction without a difference, here. Apple's marketing already promised things they cannot guarantee, and instead of dropping the privacy shtick altogether they deliberately misconstrued their image to promote sales of Apple devices. The NSA didn't write the lines for them, but they also knew Apple wouldn't stop marketing privacy even if the CCP owned iCloud servers. Lying for marketing purposes is part of Apple's core identity.
Therein lies the problem. If you distort reality to cast a positive light on a service of dubious value, you're only going to drive out the knowledgeable users. This is how Apple killed FCP, Logic, Aperture, XServe, Metal and it's how they've driven out security experts too. Everyone serious about security got out of dodge years ago - the only people left are the sycophants who argue on the merit of whitepapers that cannot be validated. With Apple suing security researchers and neglecting their bug bounty program, it's no wonder we ended up in this situation. Companies like Cellebrite and Greykey can stock up on exploits because Apple doesn't take their security researchers seriously.
abalone 5 days ago | root | parent | prev | next |
Your concern is the hardware auditors are not trustworthy because Apple hired them?
I mean, that’s fair, but I don’t think the goal here is to offer that level of guarantee. For example their ceremony involves people from 3 other Apple organizational units, plus the auditor. It’s mostly Apple doing the certification. They’re not trying to guard too heavily against the “I don’t trust Apple; Apple is trying to fool me” concern.
What this does protect you from is stuff like a rogue internal actor, software vulnerability, or government subpoena. The PCC nodes are “airtight” and provably do not retain or share data. This is auditable by the whole security community and clients can verify they are communicating with an auditable binary. It’s not just a white paper.
That’s an enormous step up from the status quo.
lukev 6 days ago | root | parent | prev |
I mean they’re making the code public and inviting external auditors. There’s literally nothing else they can do as a private company. What evidence of the integrity could possibly satisfy you?
And again, the benchmark isn’t “theoretically perfect”, because I agree this isn’t. The benchmark is “other cloud providers” and they are either lying through their teeth or categorically better.
talldayo 6 days ago | root | parent |
They're making the promises public. The code is being shown to selected individuals deemed suitable and then they're telling me by way of proxy. That's nonsense, show us the code if it's that easy to give to others. Anything else is suspiciously furtive.
> The benchmark is “other cloud providers” and they are either lying through their teeth or categorically better.
Other cloud providers aren't part of PRISM and generally don't receive the same level of concern from the world governments. They can afford to resist legal demands from countries they don't respect because they have nothing to lose from denying them access. Apple has not demonstrated that they have the willingness to resist this unlawful coercion, even recently. If they lied about the Push Notification security spec, what's to stop them from lying about this one too?
johnklos 6 days ago | root | parent | next |
This has a ring of the same arguments made by flat Earthers. You could offer to take one to near space and show them things, but then every other one will stop believing that one person, so you're expected that unless you can take all of them to near space, you can't "prove" what you're trying to prove.
Your argument isn't far off from saying that Apple will collude with lots of security researchers, and because you're not invited to the party, nobody can prove that you're wrong. Oversimplification, yes, but basically true.
talldayo 5 days ago | root | parent |
> that Apple will collude with lots of security researchers, and because you're not invited to the party, nobody can prove that you're wrong.
Or conversely, because you and I aren't invited, we have been deliberately deprived of a legitimate opportunity and excuse to inspect the code. Even Apple's researchers aren't going to be put in a position where they can say with absolute certainty the server isn't backdoored.
This is puppet-show levels of security theater.
rwiggins 6 days ago | root | parent | prev | next |
> show us the code if it's that easy to give to others
See https://security.apple.com/documentation/private-cloud-compu...
disclosure: work at Apple, opinions absolutely my own.
saagarjha 6 days ago | root | parent | next |
The fancy security properties they’re talking about rely on a whole lot of closed source code not included there. Though an Apple intern did “donate” some of it to the public years ago.
talldayo 5 days ago | root | parent | prev |
Why the ever-living fuck do you think I should take Apple's documentation seriously when they lied through their teeth about the Push Notification architecture?
This isn't source code, it's marketing.
rwiggins 5 days ago | root | parent |
It's a link to source code on GitHub. It could be marketing too, but it's absolutely, incontrovertibly source code.
avianlyric 6 days ago | root | parent | prev | next |
> They can afford to resist legal demands from countries they don't respect because they have nothing to lose from denying them access.
> Apple has not demonstrated that they have the willingness to resist this unlawful coercion, even recently.
These two statements are rather contradictory. A state is perfectly allowed to write and enforce any laws it deems fit, and if companies want to operate within those states, they need to follow those laws.
You also make it sound like companies like Apple are part of PRISM because they want to be, rather than being forced/coerced into participating. Do you honestly believe that Apple has more geopolitical power than the US state? Entire nations have been humbled by the U.S. state; do you honestly believe that a private enterprise is capable of withstanding that kind of pressure, while also remaining within the law?
PRISM as a whole may have eventually been determined to be unlawful, but that was only after the fact, and only because U.S. state secrets were leaked. How is an organisation like Apple supposed to prove that the requests they received were unlawful, when an entirely legal apparatus was used to essentially make it illegal to challenge the orders themselves? It’s a perfect catch-22. To resolve this issue you’re demanding that Apple knowingly breaks the law sometimes, and that individuals within Apple should risk their own freedom and liberties for your convenience.
talldayo 5 days ago | root | parent |
> A state is perfectly allowed to write and enforce any laws it deems fit, and if companies want to operate within those states, they need to follow those laws.
Or you can just leave. Google did it when China demanded unfair censorship and surveillance measures; Apple could too if they wanted to market themselves as a security defender with an actual backbone. Right now Apple's whole security shtick feels like the theater you get out of BitLocker or McAfee.
> do you honestly believe that a private enterprise is capable to withstanding that kind of pressure, while also remaining within the law?
No, I believe that a private enterprise claiming to respect privacy as, and I quote, a "human right" would be willing to stand up for the rights they believe in. Whether that means disclosing when things are backdoored, apologizing and preventing further backdoors, or outright open-sourcing your code, is up to Apple. They have communicated none of those things clearly or quickly which leads most people to (correctly) assume their obligation to the state supersedes their obligation to individual privacy.
> How is an organisation like Apple supposed to prove that the requests they received were unlawful [...] when entirely legal apparatus was used to essentially make it illegal to challenge the orders themselves.
By not automating the process? Let's break it down here - assuming Apple's PRISM compliance is real, we can assume the status quo is Apple and the NSA both wanting to keep the surveillance quiet. Being sneaky with their backdoors is mutually beneficial and allows both of them to maintain plausible deniability when a national news story starts breaking.
The NSA has basically no leverage over Apple. The federal government could punish them for refusing to disclose information in the name of national security, but unless they have dirt on Tim Cook the NSA is mostly relying on cooperation to get what they want. Apple, on the other hand, has everything to gain from proving their dedication to security and identifying illegal misconduct within their own services. When they don't identify these things, and admit they were compelled to stay silent about blatant dragnet surveillance, it undermines the faith they advertise to those of us in the security community.
abalone 5 days ago | root | parent | prev | next |
> The code is being shown to selected individuals deemed suitable and then they're telling me by way of proxy.
That is incorrect! The binaries are public and inspectable by anyone. The tools for doing it are bundled right into macOS -- like literally on every consumer's machine.[1]
Furthermore the protocol for connecting to a PCC node involves cryptographic proof that it's running a published binary.
[1] https://security.apple.com/documentation/private-cloud-compu...
talldayo 5 days ago | root | parent |
Binaries != code. A security professional cannot evaluate a remote service by inspecting the binary that (supposedly) runs on a remote system. Even under ideal conditions it's a move that proves you still have something to hide by not just showing people the code your architecture is running on. It's as if Apple will do anything to prove their innocence except removing all doubt.
tharant 6 days ago | root | parent | prev |
> Other cloud providers aren't part of PRISM and generally don't receive the same level of concern from the world governments.
Um, Microsoft, Google, Meta, Yahoo, YouTube, Skype, and AOL are/were PRISM Service Providers and I’d argue that they all receive(d) equal (+/- 5%) concern and scrutiny from those world governments.
> They can afford to resist legal demands from countries they don't respect because they have nothing to lose from denying them access.
Are you talking about the cloud providers I listed above? From my perspective, those guys all tend to honor the demands of any state that offers a statistically significant percentage of current/potential consumers, regardless of the demand. Perhaps they have some bright spots where they “did the right thing” (like refusing to unlock a device, or refusing to provide access to private data) but by and large they all—including Apple—are subject to the rules of the states within which they operate.
> Apple has not demonstrated that they have the willingness to resist this unlawful coercion, even recently.
Ten years ago, Apple refused demands by the FBI to unlock the iPhones of various suspects. Four years ago they did the same during the Pensacola Naval Base shooting investigation. I would guess there’s plenty of other examples but I’ve not been watching that stuff much over the past couple years. Were those instances just cherry-picked for marketing purposes? Maybe, but until someone shows me compelling evidence that Apple is /not/ acting in good faith towards both their consumers and the governments under which they operate, I see no reason to believe that they’re “lying about this one too”.
I do keep a salt-encrusted spoon nearby when reading about these things but that doesn’t mean I refuse to trust someone who has demonstrated what appears to me a good-faith effort to keep my privacy intact. Maybe what Apple is doing with PCC is just security theater; I doubt it but I also recognize that marketing and technology are often in conflict so we must always be cautious. But the important thing, both to me and GP, is that none of the other cloud providers have offered (whether it be sane and intelligent privacy controls or just snake oil-like scams) any solution beyond “encrypt your data before you upload it to the cloud”.
talldayo 5 days ago | root | parent |
> Um, Microsoft, Google, Meta, Yahoo, YouTube, Skype, and AOL are/were PRISM Service Providers
Correct. I didn't say all cloud providers aren't part of PRISM, just that many (most?) aren't scrutinized like Apple is.
> From my perspective, those guys all tend to honor the demands of any state that offers a statistically significant percentage of current/potential consumers
I know. It's awful, we don't have to defend it just because "the other guy" does it. Microsoft and Google left markets over this sort of disagreement, but curiously Apple doesn't.
> until someone shows me compelling evidence that Apple is /not/ acting in good faith towards both their consumers and the governments under which they operate, I see no reason to believe that they’re “lying about this one too”.
...do I have to link you the notification thing again, or is that evidence that the government is acting in bad faith and Apple is entirely scot-free for deliberately lying about their security/privacy marketing while being coerced to pretend nothing bad happened?
See, part of the problem isn't just comparing Apple to their competitors, but to their own advertisements. Apple knew their security was compromised but continued to promote their own security and even fabricate entirely misleading documentation for their own supposed system. This is why I will never be satisfied unless Apple nuts up and shows everyone all of the code. They have proven beyond a shadow of a doubt that they will exploit anything we take for granted or are told to accept as written.
solarkraft 6 days ago | prev | next |
Sibling comments point out (and I believe, corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
What they are doing by this is of course to make any kind of subversion a hell of a lot harder and I welcome that. It serves as a strong signal that they want to protect my data and I welcome that. To me this definitely makes them the most trusted AI vendor at the moment by far.
tw04 6 days ago | root | parent | next |
As soon as you start going down the rabbit hole of state sponsored supply chain alteration, you might as well just stop the conversation. There's literally NOTHING you can do to stop that specific attack vector.
History has shown, at least to date, Apple has been a good steward. They're as good a vendor to trust as anyone. Given a huge portion of their brand has been built on "we don't spy on you" - the second they do they lose all credibility, so they have a financial incentive to keep protecting your data.
ferbivore 6 days ago | root | parent | next |
Apple have name/address/credit-card/IMEI/IMSI tuples stored for every single Apple device. iMessage and FaceTime leak numbers, so they know who you talk to. They have real-time location data. They get constant pings when you do anything on your device. Their applications bypass firewalls and VPNs. If you don't opt out, they have full unencrypted device backups, chat logs, photos and files. They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network. Opting out of all tracking doesn't really do that. And even if you trust them despite all of this, they've repeatedly failed to protect users even from external threats. The endless parade of iMessage zero-click exploits was ridiculous and preventable, CKV only shipped this year and isn't even on by default, and so on.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
commandersaki 6 days ago | root | parent | next |
> If you don't opt out, they have full unencrypted device backups, chat logs, photos and files.
Also full disk encryption is opt-in for macOS. But the answer isn't that Apple wants you to be insecure, they just probably want to make it easier for their users to recover data if they forget a login password or backup password they set years ago.
> real-time location data
Locations are end to end encrypted.
dwaite 6 days ago | root | parent |
> Also full disk encryption is opt-in for macOS. But the answer isn't that Apple wants you to be insecure, they just probably want to make it easier for their users to recover data if they forget a login password or backup password they set years ago.
"If you have a Mac with Apple silicon or an Apple T2 Security Chip, your data is encrypted automatically."
The non-removable storage is, I believe, encrypted using a key specific to the Secure Enclave which is cleared on factory reset. APFS does allow for other levels of protection though (such as protecting a significant portion of the system with a key derived from the initial password/passcode, which is only enabled while the screen is unlocked).
commandersaki 6 days ago | root | parent |
Yeah, it's a bit nuanced. You're correct that encryption is automatic, but the key is unprotected unless you enable FileVault, which is the opt-in bit I was talking about.
So by default it is easy to recover data on a Mac.
lurking_swe 6 days ago | root | parent | prev | next |
> They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network.
What kind of targeting advertising am i getting from apple as a user of their products? Genuinely curious. I’ll wait.
The rest of your comment may be factually accurate but it isn’t relevant for “normal” users, only those hyper aware of their privacy. Don’t get me wrong, i appreciate knowing this detail but you need to also realize that there are degrees to privacy.
talldayo 6 days ago | root | parent |
> What kind of targeting advertising am i getting from apple as a user of their products?
https://support.apple.com/guide/iphone/control-how-apple-del...
In the App Store and Apple News, your search and download history may be used to serve you relevant search ads. In Apple News and Stocks, ads are served based partly on what you read or follow. This includes publishers you’ve enabled notifications for and the type of publishing subscription you have.
avianlyric 6 days ago | root | parent | next |
Not sure about anyone else, but
“we may use data from you choosing to use our apps, to target ads at you in our apps”
just doesn’t quite hit the same as
“we’ll track your web browsing habits via a rat's nest of social media buttons and tracking pixels embedded in every major website, to sell highly targeted adverts on every major website, and give indiscriminate filtering capabilities, capable of filtering down to population sizes as small as 4 or 5 individuals, to anyone that pays us a few cents”.
talldayo 5 days ago | root | parent |
It's up to you. Both companies are selling ads, and both companies know personalized ads sell better. Both are mediating their ad business with total control and arguing that it's unsafe to give competitors this info despite the fact that it's totally safe for them to collect it as a first-party. And the cherry on top, both Apple and Google are unfairly abusing their dominant position as platform-holders to collect data on their users solely for a marketing advantage.
Feel whichever way you want about it, but I cut Apple out of my life years ago when they made it clear service revenue was their goal. Windows became insufferable after Microsoft tried the same thing, and my time using iOS/MacOS intermittently makes me very glad I didn't invest in an equally-degraded experience.
lurking_swe 6 days ago | root | parent | prev | next |
appreciate you engaging in the discussion in good faith. This seems pretty “boring”, but I agree it’s definitely targeted advertising.
luqtas 6 days ago | root | parent | prev |
so just stocks, apps and news? their hardware quality ensnares me with its performance per watt every time i need to flee from civilization and program Java in Nepalese caves for _THREE_ days without plugging into a power outlet. i accept the compromise and their word on my data!
/s
Tagbert 6 days ago | root | parent | prev | next |
They have not been punished because they have not abused their access to that data.
lmm 6 days ago | root | parent | next |
There's no appreciable difference between how much abuse is being done by Apple, Google and Facebook. But somehow people fall over themselves to make excuses for them when it's Apple.
sunnybeetroot 6 days ago | root | parent | prev |
Some might call this abuse: https://news.ycombinator.com/item?id=42069588
threeseed 6 days ago | root | parent | prev |
It's disingenuous to compare Apple's advertising to Facebook and Google.
Apple does first party advertising for two relatively minuscule apps.
Facebook and Google power the majority of the world's online advertising, have multiple data sharing agreements, widely deployed tracking pixels, allow for browser fingerprinting and are deeply integrated into almost all ecommerce platforms and sites.
sunnybeetroot 6 days ago | root | parent | prev | next |
Didn’t Edward reveal Apple provides direct access to the NSA for mass surveillance?
> allows officials to collect material including search history, the content of emails, file transfers and live chats
> The program facilitates extensive, in-depth surveillance on live communications and stored information. The law allows for the targeting of any customers of participating firms who live outside the US, or those Americans whose communications include people outside the US.
> It was followed by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and finally Apple, which joined the program in 2012. The program is continuing to expand, with other providers due to come online.
https://www.theguardian.com/world/2013/jun/06/us-tech-giants...
astrange 6 days ago | root | parent | next |
That seemed to be puffery about a database used to store subpoena requests. You have "direct access" to a service if it has a webpage you can submit subpoenas to.
theturtletalks 6 days ago | root | parent | prev |
Didn’t Apple famously refuse the FBI’s request to unlock the San Bernardino attacker’s iPhone? The FBI ended up hiring an Australian company which used a Mozilla bug that allowed unlimited password guesses without the phone wiping.
If the NSA had that info, why go through the trouble?
alsetmusic 5 days ago | root | parent | next |
As I recall (because I was paying very close attention at the time), the FBI dropped the case on the morning before the first court session. They'd either cracked it last minute or had the info earlier and hoped to set a precedent but veered at the last minute because their case was weak enough not to take a chance. I was extremely angry at the FBI at the time and ranting to anyone who would listen about the privacy implications, so I remember this well.
tkz1312 6 days ago | root | parent | prev | next |
The FBI already had full access to the unencrypted iCloud backup from a few days prior.
quyleanh 6 days ago | root | parent |
Sorry, could you please give me a source about that?
talldayo 6 days ago | root | parent | prev |
> If the NSA had that info, why go through the trouble?
To defend the optics of a backdoor that they actively rely on?
If Apple and the NSA are in cahoots, it's not hard to imagine them anticipating this kind of event and leveraging it for plausible deniability. I'm not saying this is necessarily what happened, but we'd need more evidence than just the first-party admission of two parties that stand to gain from privacy theater.
afh1 6 days ago | root | parent | prev | next |
> There's literally NOTHING you can do to stop that specific attack vector.
E2E. Might not be applicable for remote execution of AI payloads, but it is applicable for most everything else, from messaging to storage.
Even if the client hardware and/or software is also an actor in your threat model, that can be eliminated or at least mitigated with at least one verifiably trusted piece of equipment. Open hardware is an alternative, and some states build their entire hardware stack to eliminate such threats. If you have at least one trusted equipment mitigations are possible (e.g. external network filter).
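As a minimal sketch of what E2E buys you in the storage/messaging case, using the third-party Python cryptography package (assumed available): the provider only ever handles ciphertext, and the key never leaves the endpoints.

    from cryptography.fernet import Fernet

    # The key exists only on the endpoints; the provider never sees it.
    key = Fernet.generate_key()
    endpoint = Fernet(key)

    ciphertext = endpoint.encrypt(b"meet at 6")          # all the provider stores/relays
    assert endpoint.decrypt(ciphertext) == b"meet at 6"  # only a key holder can read it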
godelski 6 days ago | root | parent | next |
> E2E(E)
I assume you wanted the second E. But no, this might not be enough. If you have full control over the device, especially at the hardware level, then you've got nothing. The data has to go from “not encrypted” to “encrypted” at some point. Yes, this is best if it’s on device. Even better if it is encrypted on disk and in memory. But can you read it? If so, there’s access. You don’t need to break encryption if you can get the data before encryption or after decryption, or can just read the screen.
Security is not a binary thing. It’s not ever perfect. It is “make it really fucking hard to break or get around.” But if you can get it down to needing to infect the supply chain then you’re doing really good. Because at that point the governments are looking for it and are not going to allow those devices at least for themselves (yes, it’s the governments putting the stuff in but you know… there’s more than one government and devices don’t stay in one border…)
halfcat 5 days ago | root | parent | prev | next |
> ”that can be eliminated or at least mitigated with at least one verifiably trusted piece of equipment”
What’s an example of how this would work? Are there people building their own network cards, motherboards, RAM, and CPUs from scratch, or how would one verify even a single component in the stack?
warkdarrior 6 days ago | root | parent | prev |
E2E does not protect metadata, at least not without significant overheads and system redesigns. And metadata is as important as data in messaging and storage.
afh1 6 days ago | root | parent |
> And metadata is as important as data in messaging and storage.
Is it? I guess this really depends. For E2E storage (e.g. as offered by Proton with openpgpjs), what metadata would be of concern? File size? File type cannot be inferred, and file names could be encrypted if that's a threat in your model.
mbauman 6 days ago | root | parent | next |
The most valuable "metadata" in this context is typically with whom you're communicating/collaborating and when and from where. It's so valuable it should just be called data.
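To make that concrete, here is roughly what a provider can still log about an E2E message even though the payload itself is opaque (a sketch; the field names are made up):

    from datetime import datetime, timezone

    def metadata_visible_to_provider(sender: str, recipient: str, ciphertext: bytes) -> dict:
        # None of this requires decrypting anything.
        return {
            "sender": sender,
            "recipient": recipient,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "size_bytes": len(ciphertext),
        }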
fsflover 6 days ago | root | parent |
How is this relevant to the private cloud storage?
Jerrrrrrry 6 days ago | root | parent |
No point in storing data if it is never shared with anyone else.
Whom it is shared with can infer the intent of the data.
fsflover 6 days ago | root | parent |
Backups?
Jerrrrrrry 6 days ago | root | parent |
yes, got me there.
but i feel in the context (communication/meta-data inference) that is missing the trees for the forest
fsflover 6 days ago | root | parent |
On the other hand, useful features should not be ignored, because other, almost unrelated things are hard.
vlovich123 6 days ago | root | parent | prev | next |
Strictly speaking there's homomorphic encryption. It's still horribly slow and expensive but it literally lets you run compute on untrusted hardware in a way that's mathematically provable.
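As a toy illustration of the idea only: unpadded textbook RSA happens to be multiplicatively homomorphic, so a server can combine ciphertexts without ever seeing a plaintext. This is completely insecure, and real FHE schemes are far richer (and far more expensive), but it shows the shape of the trick.

    # Toy multiplicative homomorphism with unpadded textbook RSA. Illustration only.
    p, q = 61, 53
    n = p * q                           # public modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    def enc(m: int) -> int: return pow(m, e, n)
    def dec(c: int) -> int: return pow(c, d, n)

    c1, c2 = enc(6), enc(7)
    c_prod = (c1 * c2) % n              # computed by the server on ciphertexts only
    assert dec(c_prod) == 6 * 7         # decrypts to the product of the plaintexts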
romac 6 days ago | root | parent | next |
And they are pushing in that direction: https://machinelearning.apple.com/research/homomorphic-encry...
commandersaki 6 days ago | root | parent | prev |
Yeah the impetus for PCC was that homomorphic encryption wasn't feasible and this was the best realistic alternative.
abalone 5 days ago | root | parent | prev | next |
> As soon as you start going down the rabbit hole of state sponsored supply chain alteration, you might as well just stop the conversation. There's literally NOTHING you can do to stop that specific attack vector.
Just want to point out that Apple has designed in a certain degree of protection against this attack, and they talk about it![1]
In a nutshell they do two things: supply chain hardening and target diffusion. Supply chain hardening involves multiple verification checkpoints. And target diffusion greatly limits the utility of a small-scale compromise of a few nodes, because users are not partitioned by node. Together these mean the entire system would have to be compromised from manufacturing to data center and across all or most nodes. Which is certainly possible! But it's a significant raising of the bar above your "run of the mill" state-sponsored shipment interdiction or data center compromise.
[1] https://security.apple.com/documentation/private-cloud-compu...
natch 6 days ago | root | parent | prev | next |
As to the trust loss, we seem to be already past that. It seems to me they are now in the stage of faking it.
talldayo 6 days ago | root | parent | prev | next |
...in certain places: https://support.apple.com/en-us/111754
Just make absolutely sure you trust your government when using an iDevice.
spondyl 6 days ago | root | parent | next |
When it comes to China, it's not entirely fair to single out Apple here given that non-Chinese companies are not allowed to run their own compute in China directly.
It always has to be operated by a sponsor in the state who holds the encryption keys and does the actual deployments, etc.
The same applies to Azure/AWS/Google Cloud's China regions and any other compute services you might think of.
talldayo 6 days ago | root | parent |
It's entirely fair. Apple had the choice to stop pursuing business in China if they felt it conflicted with values they prioritized. Evidently it doesn't, which should tell you a lot about how accepting Apple is of this behavior worldwide.
astrange 6 days ago | root | parent | next |
iCloud E2E encryption (advanced data protection) works in China.
There are other, less nefarious reasons for in-country storage laws like this. One is to stop other countries from subpoenaing it.
But it's also so China gets the technical skills from helping you run it.
musictubes 6 days ago | root | parent | prev |
You don’t have to use iCloud. Customers in China can still make encrypted backups on their computers. I also believe, but please correct me if I’m wrong, that you can still do encrypted backups in China if you want.
All the pearl clutching about Apple doing business in China is ridiculous. Who would be better off if Apple withdrew from China? Sure, talldayo would sleep better knowing that Apple had passed their purity test, I guess that’s worth a lot right? God knows consumers in China would be much better off without the option to use iPhones or any other Apple devices. Their privacy and security are better protected by domestic phones I’m sure.
Seriously, what exactly is the problem?
jayrot 6 days ago | root | parent | prev |
>Just make absolutely sure you trust your government
This sentence stings right now. :-(
hulitu 6 days ago | root | parent | prev |
> History has shown, at least to date, Apple has been a good steward.
*cough* HW backdoor in iPhone *cough*
evgen 6 days ago | root | parent |
cough bullshit cough
Don't try to be subtle. If you are going to lie, go for a big lie.
stavros 6 days ago | root | parent | prev | next |
> that all that theater is still no protection against Apple themselves
There is such a thing as threat modeling. The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
hulitu 6 days ago | root | parent |
> The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
Well, to be honest, theater is a pretentious word in this context. A better word would be shitshow.
(i never heard of a firewall that claims it filters _some_ packets, or an antivirus that claims that it protects against _some_ viruses)
consteval a day ago | root | parent | next |
> i never heard of a firewall that claims it filters _some_ packets, or an antivirus that claims that it protects against _some_ viruses
What? That's all of them. Every single one. If they say otherwise, then you're being lied to.
stavros 6 days ago | root | parent | prev |
Really? Please show me an antivirus that claims that it protects against all viruses. A firewall that filters all packets is a pair of scissors.
derefr 6 days ago | root | parent | prev | next |
The "we've given this code to a third party to host and run" part can be a 100% effective stop to any Apple-internal shenanigans. It depends entirely on what the third party is legally obligated to do for them. (Or more specifically, what they're legally obligated to not do for them.)
A simple example of the sort of legal agreement I'm talking about, is a trust. A trust isn't just a legal entity that takes custody of some assets and doles them out to you on a set schedule; it's more specifically a legal entity established by legal contract, and executed by some particular law firm acting as its custodian, that obligates that law firm as executor to provide only a certain "API" for the contract's subjects/beneficiaries to interact with/manage those assets — a more restrictive one than they would have otherwise had a legal right to.
With trusts, this is done because that restrictive API (the "you can't withdraw the assets all at once" part especially) is what makes the trust a trust, legally; and therefore what makes the legal (mostly tax-related) benefits of trusts apply, instead of the trust just being a regular holding company.
But you don't need any particular legal impetus in order to create this kind of "hold onto it and don't listen to me if I ask for it back" contract. You can just... write a contract that has terms like that; and then ask a law firm to execute that contract for you.
Insofar as Apple have engaged with some law firm to in turn engage with a hosting company; where the hosting company has obligations to the law firm to provide a secure environment for the law firm to deploy software images, and to report accurate trusted-compute metrics to the law firm; and where the law firm is legally obligated to get any image-updates Apple hands over to them independently audited, and only accept "justifiable" changes (per some predefined contractual definition of "justifiable") — then I would say that this is a trustworthy arrangement. Just like a trust is a trust-worthy arrangement.
neongreen 6 days ago | root | parent |
This actually sounds like a very neat idea. Do you know any services / software companies that operate like that?
commandersaki 6 days ago | root | parent | prev | next |
> They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
But any such software must be publicly verifiable, otherwise it cannot be deemed secure. That's why they publish each version in a transparency log, which is verified by the client and, more hand-wavily, by the public brains trust.
This is also just a tired take. The same thing could be said about passcodes on their mobile products or full disk encryption keys for the Mac line. There'd be massive loss of goodwill and legal liability if they subverted these technologies that they claim make their devices secure.
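A minimal sketch of the transparency-log part, assuming a Merkle-tree log in the style of Certificate Transparency (generic, not Apple's actual log format): the client checks an inclusion proof that the release it is about to trust really is in the append-only log.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
        # proof is a list of (side, sibling_hash) pairs from the leaf up to the root.
        node = h(leaf)
        for side, sibling in proof:
            node = h(sibling + node) if side == "left" else h(node + sibling)
        return node == root

    # Tiny example: a two-leaf log containing releases A and B.
    leaf_a, leaf_b = b"release-A", b"release-B"
    root = h(h(leaf_a) + h(leaf_b))
    assert verify_inclusion(leaf_a, [("right", h(leaf_b))], root)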
chadsix 6 days ago | root | parent | prev | next |
Exactly. You can only trust yourself [1] and should self host.
9dev 6 days ago | root | parent | next |
That is an answer for an incredibly tiny fraction of the population. I'm not so much concerned about myself as about society in general, and self-hosting just is not a viable solution to the problem at hand.
chadsix 6 days ago | root | parent | next |
To be fair, it's much easier than one can imagine (try ollama on macOS for example). In the end, Apple wrote a lot of longwinded text, but the summary is "you have to trust us."
I don't trust Apple - in fact, even the people we trust the most have told us soft lies here and there. Trust is a concept like an integral - you can only get to "almost" and almost is 0.
So you can only trust yourself. Period.
killjoywashere 6 days ago | root | parent | next |
There are multiple threat models where you can't trust yourself.
Your future self definitely can't trust your past self. And vice versa. If your future self has a stroke tomorrow, did your past self remember to write a living will? And renew it regularly? Will your future self remember that password? What if the kid pukes on the carpet before your past self writes it down?
Your current self is not statistically reliable. Andrej Karpathy administered an ImageNet challenge to himself, his brain as the machine: he got about 95%.
I'm sure there are other classes of self-failure.
martinsnow 6 days ago | root | parent |
Given the code quality of projects like Nextcloud, suggestions like this make the head and table transmogrify into magnets.
lukev 6 days ago | root | parent | prev | next |
The odds that I make a mistake in my security configuration are much higher than the odds that Apple is maliciously backdooring themselves.
The PCC model doesn't guarantee they can't backdoor themselves, but it does make it more difficult for them.
astrange 6 days ago | root | parent |
You also don't have a security team and Apple does have one.
saagarjha 6 days ago | root | parent |
Speak for yourself
commandersaki 6 days ago | root | parent | prev | next |
> "you have to trust us."
You have fundamentally misunderstood PCC.
dotancohen 6 days ago | root | parent | prev |
I don't even trust myself, I know that I'm going to mess up at some point or another.
talldayo 6 days ago | root | parent | prev |
Nobody promised you that real solutions would work for everyone. Performing CPR to save a life is something "an incredibly tiny fraction of the population" is trained on, but it does work when circumstances call for it.
It sucks, but what are you going to do for society? Tell them all to sell their iPhones, punk out the NSA like you're Snowden incarnate? Sometimes saving yourself is the only option, unfortunately.
remram 6 days ago | root | parent | prev |
Can you trust the hardware?
killjoywashere 6 days ago | root | parent | next |
There's a niche industry that works on that problem: looking for evidence of tampering down to the semiconductor level.
blitzar 6 days ago | root | parent | prev |
If you make your own silicon, can you trust that the sand hasn't been tampered with to breach your security?
patmorgan23 6 days ago | root | parent | prev | next |
Yep. If you don't trust Apple with your data, don't buy a device that runs Apple's operating system.
yndoendo 6 days ago | root | parent | next |
That is good in theory. In reality, anyone you engage with who uses an Apple device has leaked your content / information to Apple. High confidence that Apple could easily build profiles on people who do not use their devices via this indirect action of having to communicate with Apple device owners.
That statement above also applies to Google. There is no way to prevent indirect data sharing with Apple or Google.
hnaccount_rng 6 days ago | root | parent | next |
Yes, if your threat model includes the provider of your operating system, then you cannot win. It's really that simple. You fundamentally need to trust your operating system because it can just lie to you.
fsflover 6 days ago | root | parent | next |
This is false. With FLOSS and reproducible builds, you can rely on the community for verification.
hnaccount_rng 6 days ago | root | parent | next |
You really cannot, both from a practical point of view and a theoretical one. Practically: does the thing really do what _you_ want it to do? A typical OS is much too complicated to verify this (and no, theorem provers just move the problem).
But also from a theoretical point of view: I grant you that the source does what you want it to do (again: unrealistic). Then you still need to verify that the software deployed is the software that builds reproducibly from the source. At the end of the day you do that by getting some string of bits from some safe place and comparing it to a string of bits that your software hands you. That "your software" thing can just lie!
And yes, you can make that more complicated (using crypto to sign things etc.), but that just increases the complexity of the believable lie. But if your threat model is "I do not trust my phone manufacturer" then this is enough. In practice that's never the threat model though.
fsflover 6 days ago | root | parent |
> does what you want it to do
What are you even talking about? We're talking about security, not 100% correctness, which is indeed not achievable. Security as in the software doesn't contain backdoors. This is much easier to verify, and even the very openness of the code will prevent many attempts at that.
Also, trust must not be 100%, as Apple is trying to train their gullible users. Openness is definitely not a silver bullet, but it makes backdoors less likely, thus increasing your security.
> you do [verification of reprodicible builds] by getting some string of bits from some safe place and compare it to a string of bits that your software hands you.
Exactly, and here's an example of how to do it reasonably (not perfectly!) well: https://www.qubes-os.org/security/verifying-signatures/
Also, please stop with the security nihilism: https://news.ycombinator.com/item?id=27897975
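A minimal sketch of the verification step this enables, assuming a reproducible build: rebuild the artifact from the audited source yourself and compare its digest to the one the project published (the path and digest below are placeholders).

    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    published_digest = "aa11" + "0" * 60           # digest published by the project (placeholder)
    local_digest = sha256_of("dist/os-image.img")  # artifact you rebuilt from the audited source

    print("match" if local_digest == published_digest else "MISMATCH: do not trust this build")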
hnaccount_rng 4 days ago | root | parent |
The XZ backdoor was completely in the open. It only got found because an engineer at Microsoft was far too good at controlling his environment and had too much free time to track down a 1% performance degradation. So... no, you really cannot verify that there is no backdoor. Not against a well resourced, patient adversary.
I'm not sure what your links are supposed to be proving. I'm neither of the opinion that PCC is useless, nor am I under the misconception that a signature would provide a guarantee of non-maliciousness. All I'm saying is that, if you include Apple as an adversary in your threat model, you should not trust PCC. But not because it's closed source (or whatever), but simply because you fundamentally cannot trust the hardware and software stack that Apple completely controls all interfaces to.
Personally I don't consider this a useful threat model. But people's situations do vary.
fsflover 4 days ago | root | parent |
> signature would provide a guarantee of non-maliciousness
Nobody said that. A signature guarantees integrity and authorship.
> no, you really cannot verify that there is no backdoor
Again, nobody said that. I was talking about a lower probability to hide a backdoor and higher probability to find it in FLOSS.
> simply because you fundamentally cannot trust the hardware and software stack
Trust doesn't have to be binary (1 or 0). You can trust but verify.
philjohn 6 days ago | root | parent | prev |
Not unless your entire stack down to the bare silicon is also FLOSS, and the community is able to verify.
There is a lot of navel gazing in these comments about "the perfect solution", but we all know (or should know) that perfect is the enemy of good enough.
fsflover 6 days ago | root | parent | next |
> Not unless your entire stack down to the bare silicon is also FLOSS,
threeseed 6 days ago | root | parent | prev |
We've seen countless examples of relatively minor libraries being exploited which then cause havoc because of a spider web of transitive dependencies.
fsflover 6 days ago | root | parent |
On Qubes OS (my daily driver), which runs everything in VMs with strong hardware virtualization, you can use minimal operating systems with a very low number of installed libraries for security-critical actions: https://www.qubes-os.org/doc/templates/minimal/
hulitu 6 days ago | root | parent | prev |
> You fundamentally need to trust your operating system because it can just lie to you
Trust us, we are liars. /s
afh1 6 days ago | root | parent | prev | next |
Depending on your social circle such exposure is not so hard to avoid. Maybe you cannot avoid it entirely but it may be low enough that it doesn't matter. I have older relatives with basically zero online presence.
dialup_sounds 6 days ago | root | parent | prev |
Define "content / information".
isodev 6 days ago | root | parent | prev | next |
That really is not a valid argument, since Apple have grown to be "the phone".
Also, many are unaware of, or unable to determine, who or what will own their data before purchasing a device. One only accepts the privacy policy after one taps sign in... and is it really practical to expect people to work this out by themselves when buying a phone? That's why regulation needs to step in and ensure the right defaults are in place.
threeseed 6 days ago | root | parent | prev | next |
And if you don't trust Apple with your data you shouldn't use a phone or internet at all.
Because as someone who has worked at a few telcos, I can assure you that your phone and triangulated location data is stored, analysed and provided to intelligence agencies. And the same applies to ISPs.
mossTechnician 6 days ago | root | parent | prev |
But if you don't trust Google with your data, you can buy a device that runs Google's operating system, from Google, and flash somebody else's operating system onto it.
Or, if you prefer, you can just look at Google's code and verify that the operating system you put on your phone is made with the code you looked at.
johnklos 6 days ago | root | parent | prev | next |
Add to this that the only true deterrent to any company these days is the loss of money, and we have a pretty airtight case here.
If Apple were seen to be subverting privacy at all, they'd lose literally tens of billions of dollars of revenue because they would no longer be a "premium" brand with offerings that truly differentiate themselves from every Android provider.
I trust that no company of that size would risk the most important thing that differentiates themselves from the whole rest of the world for the purpose of spying, making profit from tracking and advertisements, et cetera.
abalone 5 days ago | root | parent | prev | next |
> There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
I would not say "plenty." The protocol that clients use to connect to a PCC node leverages code signing to verify the node is running an authentic, published binary. That code signing is backed by the secure element in Apple's custom hardware (and is part of the reason PCC can only run on this custom hardware, never third party clouds). So to attack this you'd really have to attack the hardware root of trust. Apple details the measures they take here.[1]
Having said that, it would be a mistake to assume Apple is trying to cryptographically prove that Apple is not a fundamentally malicious actor that has designed a system to trick you. That's not the goal here.
What they are providing a high level of guarantee for is that your data is safe from things like a rogue internal actor, a critical software vulnerability, an inadvertent debug log data leak, or a government subpoena. That's a huge step forward and nowhere near what other architectures can guarantee in an independently verifiable way.
[1] https://security.apple.com/documentation/private-cloud-compu...
1vuio0pswjnm7 6 days ago | root | parent | prev | next |
"Sibling comments point out (and I believe, corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They're still fully in control."
It stands to reason that that control is a prerequisite for "security".
Apple does not delegate its own "security" to someone else, a "steward". Hmmm.
Yet it expects computer users to delegate control to Apple.
Apple is not alone in this regard. It's common for "Big Tech", "security researchers" and HN commenters to advocate for the computer user to delegate control to someone else.
halJordan 6 days ago | root | parent | prev | next |
It's not that they couldn't, it's that they couldn't without a watcher knowing. And frankly this tradeoff is not new, nor is it unacceptable in anything other than "Muh Apple".
netdevnet 4 days ago | root | parent | prev | next |
The vendor will always be in control. Worth knowing, but it feels like a bit of an empty statement (like "water is wet").
isodev 6 days ago | root | parent | prev | next |
Indeed, the attestation process, as described by the article, is more geared towards preventing unauthorized exfiltration of information or injection of malicious code. However, "authorized" activities are fully supported, where "authorized" means signed by Apple. So, ultimately, users need to trust that Apple is doing the right thing, just like with any other company. And yes, it means they can be forced (by law) not to do the right thing.
natch 6 days ago | root | parent | prev |
You're getting taken in by a misdirection.
>for them to run different software than they say they do.
They don't even need to do that. They don't need to do anything different than they say.
They already are saying only that the data is kept private from <insert very limited subset of relevant people here>.
That opens the door wide for them to share the data with anyone outside of that very limited subset. You just have to read what they say, and also read between the lines. They aren't going to say who they share with, apparently, but they are going to carefully craft what they say so that some people get misdirected.
astrange 6 days ago | root | parent |
They're not doing that because it's obviously illegal. GDPR forbids sharing data with unknown other people.
natch 6 days ago | root | parent |
Unknown to whom?
astrange 6 days ago | root | parent |
To you.
lxgr 6 days ago | prev | next |
This is probably the best way to do cloud computation offloading, if one chooses to do it at all.
What's desperately missing on the client side is a switch to turn this off. At the moment it's really opaque which Apple Intelligence requests are processed locally and which are sent to the cloud.
The only sure way to know/prevent it a priori is to... enter flight mode, as far as I can tell?
Retroactively, there's a request log in the privacy section of System Preferences, but that's really convoluted to read (due to all of the cryptographic proofs that I have absolutely no tools to verify at the moment, and honestly have no interest in).
saagarjha 6 days ago | root | parent |
Yep, this is the real issue. It is very unclear what gets sent to this thing. And Apple has repeatedly made that distinction very hard to discern.
jagrsw 6 days ago | prev | next |
If Apple controls the root of trust, like the private keys in the CPU or security processor used to check the enclave (similar to how AMD and Intel do it with SEV-SNP and TDX), then technically it's a "trust us" situation, since they likely use their own ARM silicon for that?
Harder to attack, sure, but no outside validation. Apple's not saying "we can't access your data," just "we're making it way harder for bad guys (and rogue employees) to get at it."
skylerwiernik 6 days ago | root | parent | next |
I don't think they do. Your phone cryptographically verifies that the software running on the servers is what it says it is, and you can't pull the keys out of the Secure Enclave. They also had independent auditors go over the whole thing and publish a report. If the chip is disconnected from the system, it destroys its keys and essentially erases all data.
hnaccount_rng 6 days ago | root | parent | next |
But since they also control the phone's operating system they can just make it lie to you!
That doesn't make PCC useless, by the way. It clearly establishes that Apple misled customers if there is any intentionality in a breach, or that Apple was negligent if they do not immediately provide remedies on notification of a breach. But that's much more a "raising the cost" kind of thing than a technical exclusion. If Apple, as an organisation, wants to get at your data, and you use an iPhone, they absolutely can.
plagiarist 6 days ago | root | parent | prev | next |
I don't understand how publishing cryptographic signatures of the software is a guarantee. How do they prove a server isn't just keeping a copy of the published code to produce signatures from, while actually running a malicious binary?
dialup_sounds 6 days ago | root | parent |
The client will only talk to servers that can prove they're running the same software as the published signatures.
https://security.apple.com/documentation/private-cloud-compu...
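As a rough illustration of that idea (this is not Apple's actual protocol; the function and data names below are made up): the client only proceeds if the measurement the node attests to appears in the set of published release measurements.

    import hashlib
    import hmac

    def measurement_is_published(attested: bytes, published: set[bytes]) -> bool:
        # Accept a node only if its attested software measurement matches one of
        # the publicly released builds.
        return any(hmac.compare_digest(attested, m) for m in published)

    # Illustration only: in reality the measurement comes from a hardware-backed
    # attestation and the published set from the transparency log.
    published = {hashlib.sha256(b"pcc-release-1.0").digest(),
                 hashlib.sha256(b"pcc-release-1.1").digest()}
    node_claims = hashlib.sha256(b"pcc-release-1.1").digest()

    if measurement_is_published(node_claims, published):
        print("OK to send the request to this node")
    else:
        print("Refuse: node is not running a published build")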
warkdarrior 6 days ago | root | parent |
And the servers prove that by relying on a key stored in secure hardware. And that secure hardware is designed by Apple, who has a specific interest in convincing users of that attestation/proof. Do you see the conflict of interest now?
HeatrayEnjoyer 6 days ago | root | parent | prev | next |
How do you know the root enclave key isn't retained somewhere before it is written? You're still trusting Apple.
Key extraction is difficult but not impossible.
abalone 5 days ago | root | parent | next |
According to Apple,
"A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers."[1]
But yes of course, you have to trust the manufacturer is not lying to you. PCC is about building on top of that fundamental trust to guard against a whole variety of other attacks.
[1] https://support.apple.com/guide/security/secure-enclave-sec5...
jsheard 6 days ago | root | parent | prev | next |
> Key extraction is difficult but not impossible.
Refer to the never-ending clown show that is Intel's SGX enclave for examples of this.
https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
yalogin 6 days ago | root | parent | prev |
Can you clarify what you mean by retained and written?
lmm 6 days ago | root | parent | prev |
> you can't pull the keys out of the secure enclave.
You or I can't, but that doesn't mean Apple can't. They made that enclave, and put the keys in it in the first place.
SheinhardtWigCo 6 days ago | root | parent | prev | next |
It was always "trust us". They make the silicon, and you have no hope of meaningfully reverse engineering it. Plus, iOS and macOS have silent software update mechanisms, and no update transparency.
ant_li0n 6 days ago | root | parent | prev | next |
Hey can you help me understand what you mean? There's an entry about "Hardware Root of Trust" in that document, but I don't see how that means Apple is avoiding stating, "we can't access your data" - the doc says it's not exportable.
"Explain it like I'm a lowly web dev"
abalone 5 days ago | root | parent | prev | next |
> Harder to attack, sure, but no outside validation.
There is actually a third party auditor involved in certifying hardware integrity prior to deployment.[1]
But yes, the goal is to protect against rogue agents and hackers (and software bugs!), not to prove that Apple as an organization has fundamentally designed backdoors into the secure element of their silicon.
[1] https://security.apple.com/documentation/private-cloud-compu...
wutwutwat 6 days ago | root | parent | prev | next |
every entity you hand data to other than yourself is a "trust us" situation
fsflover 6 days ago | root | parent |
Unless it's encrypted.
wutwutwat 6 days ago | root | parent |
you trust more than I do
fsflover 5 days ago | root | parent |
How so?
ozgune 6 days ago | root | parent | prev |
+1 on your comment.
I think having a description of Apple's threat model would help.
I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.
dagmx 6 days ago | root | parent | next |
Their threat model is described in their white papers.
But essentially it is trying to get to the end result of “if someone commandeers the building with the servers, they still can’t compromise the data chain even with physical access”
bootsmann 6 days ago | root | parent | prev |
They define their threat model in "Anticipating Attacks"
h1fra 6 days ago | prev | next |
Love this, but as an engineer I would hate to get a bug report in that prod environment: 100% "doesn't work on my machine" and 0% reproducibility.
pjmlp 6 days ago | root | parent | next |
Usually quite common when doing contract work, where externals have no access to anything besides a sandbox to play around with their contribution to the whole enterprise software jigsaw.
slashdave 6 days ago | root | parent | prev |
That's a strange point of view. Clearly one shouldn't use private information for testing in any production environment.
ericlewis 6 days ago | root | parent |
As a person who works on this kinda stuff I know what they mean. It’s very hard to debug things totally blind.
sourcepluck 5 days ago | prev | next |
Does anyone have any links to serious security researchers discussing this adversarially or critically? Or, if it's too soon as it's such a recent release, links to the types of serious security researchers who publish that sort of thing.
The discussion here would seem to suggest there's definitely a need for such a thing. Bruce Schneier comes to mind, and doing a search of:
"cloud" site:https://www.schneier.com/
did have a few results. Would be interested in more trustworthy figures to have a read of.
6 days ago | prev | next |
majestik 6 days ago | prev | next |
PCC is a highly secure transport system for routing user queries to Siri, which then fails over to ChatGPT over the public internet.
m3kw9 6 days ago | prev | next |
I will just use it; it's Apple, and all I need is to see the verifiable privacy thing and let the researchers tell me about any red flags. You go on Copilot, it says your code is private? Good luck.
danparsonson 6 days ago | root | parent | next |
I've got a fully private LLM that's pretty good at coding built right into my head - I'll stick with that, thanks.
reassess_blind 6 days ago | root | parent |
Same, but it’s vulnerable to prompt injection among other things.
z3ncyberpunk 6 days ago | root | parent | prev | next |
Apple has been handing your data over to PRISM since 2012.
talldayo 6 days ago | root | parent | prev |
> it’s Apple and all I need is to see the verifiable privacy thing and I let the researchers let me know red flags.
Oh I've heard of Apple, they're the company that sued Corellium for letting researchers study iPhone security too well.
No source code, no accountability.
6 days ago | prev | next |
curt15 6 days ago | prev | next |
For the experts out there, how does this compare with AWS Nitro?
bobbiechen 6 days ago | root | parent |
AWS Nitro (and Nitro Enclaves) are general computing platforms, so it's different. You'd need to write a PCC-like system/application on top of AWS Nitro Enclaves to make a direct comparison. A breakdown of those 5 core requirements from Apple:
1. Stateless computation on personal user data - a property of the application
2. Enforceable guarantees - a property of the application; Nitro Enclaves attestation helps here
3. No privileged runtime access - maps directly to the no administrative API access in the AWS Nitro System platform
4. Non-targetability - a property of the application
5. Verifiable transparency - a mix of the application and the platform; Nitro Enclaves attestation helps here
To be a little more concrete: (1 stateless) You could write an app that statelessly processes user data, and build it into a Nitro Enclave. This has a particular software measurement (PCR0) and can be code-signed (PCR8) and verified at runtime (2 enforceable) using Nitro Enclave Attestation. This also provides integrity protection. You get (3 no access) for "free" by running it in Nitro to begin with (from AWS - you also need to ensure there is no application-level admin access). You would need to design (4 non-targetable) as part of your application. For (5 transparency), you could provide your code to researchers as Apple is doing.
(I work with AWS Nitro Enclaves for various security/privacy use cases at Anjuna. Some of these resemble PCC and I hope we can share more details about the customer use cases eventually.)
Some sources:
- NCC Group Audit on the Nitro System https://www.nccgroup.com/us/research-blog/public-report-aws-...
- Nitro Enclaves attestation process: https://github.com/aws/aws-nitro-enclaves-nsm-api/blob/main/...
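As a hypothetical sketch of the attestation comparison in (2) above (this is not AWS's or Anjuna's code): once the attestation document's signature has been verified and its PCR map extracted, the relying party compares PCR0 (the enclave image measurement) and PCR8 (the signing certificate measurement) against the values it expects. The PCR values below are placeholders.

    import hmac

    # Placeholder values: PCR0 comes from building your enclave image (EIF),
    # PCR8 from the certificate used to sign it. Nitro PCRs are SHA-384, 48 bytes.
    EXPECTED_PCRS = {
        0: bytes.fromhex("aa" * 48),
        8: bytes.fromhex("bb" * 48),
    }

    def pcrs_match(attested_pcrs: dict[int, bytes]) -> bool:
        return all(index in attested_pcrs and
                   hmac.compare_digest(attested_pcrs[index], expected)
                   for index, expected in EXPECTED_PCRS.items())

    # Only release secrets or user data to the enclave when this returns True.
    print(pcrs_match({0: bytes.fromhex("aa" * 48), 8: bytes.fromhex("bb" * 48)}))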
computerfriend 6 days ago | root | parent |
I have used some of your employer's software.
I do not disagree with your points, but the NCC audit is not compelling. They only interviewed engineers and didn't audit the code or how it is deployed.
flybarrel 5 days ago | root | parent |
You are likely right that the code was not audited and the deployment was not verified. This is a design-level review.
You are inaccurate on "they only interviewed engineers", though. There were document reviews as well, covering design and architecture.
The report's methodology section shares this information.
gigel82 6 days ago | prev | next |
I'm glad that more and more people are starting to see through the thick Apple BS (in these comments). I don't expect them to back down from this, but I hope there is enough pushback that they'll be forced to add a big opt-out for all cloud compute, however "private" they make it out to be.
commandersaki 6 days ago | root | parent |
You can disable Apple Intelligence thereby opting out of private cloud compute.
vtodekl 6 days ago | prev | next |
[dead]
max_ 6 days ago | prev | next |
[flagged]
jasongill 6 days ago | root | parent |
The core of this article, if I understand it correctly, is that macOS pings Apple to make sure that apps you open are safe before opening them. This check contains some sort of unique string about the app being opened, and then there is a big leap to "this could be used by the government"
Is this the ideal situation? No, probably not. Should Apple do a better job of communicating that this is happening to users? Yes, probably so.
Does Apple already go overboard to explain their privacy settings during setup of a new device (the pages with the blue "handshake" icon)? Yes. Does Apple do a far better job of this than Google or Microsoft (in my opinion)? Yes.
I don't think anyone here is claiming that Apple is the best thing to ever happen to privacy, but when viewed via the lens of "the world we live in today", it's hard to see how Apple's privacy stance is a "scam". It seems to me to be one of the best or most reasonable stances for privacy among all large-cap businesses in the world.
astrange 6 days ago | root | parent | next |
> This check contains some sort of unique string about the app being opened,
It's not unique to the app, the article is just wrong. It's unique to the /developer/, which is much less specific.
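The check in question is an OCSP query for the app's Developer ID signing certificate, and an OCSP request identifies the issuing CA and the certificate serial number, i.e. the developer's signing identity rather than the app binary. A rough sketch using the Python cryptography package (the certificate file names are hypothetical placeholders, and this is illustrative, not Apple's implementation):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    # Hypothetical file names: the app's Developer ID leaf certificate and the
    # Apple intermediate that issued it.
    with open("developer_id_leaf.pem", "rb") as f:
        leaf = x509.load_pem_x509_certificate(f.read())
    with open("developer_id_issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # An OCSP request names the issuer (by hash) and the certificate serial
    # number, i.e. the developer's signing identity, not the app being launched.
    request = (ocsp.OCSPRequestBuilder()
               .add_certificate(leaf, issuer, hashes.SHA1())
               .build())

    print(request.public_bytes(serialization.Encoding.DER).hex())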
saagarjha 6 days ago | root | parent |
Yeah, no. This is a stupid argument. If you’re opening an app signed by Mozilla Corporation it’s probably Firefox. If you’re opening an app from [porn app publisher] guess what, it’s a porn app. Nobody cares which one.
astrange 6 days ago | root | parent |
The difference is that it happens much less often because it's cached.
saagarjha 6 days ago | root | parent |
Again, how does this help the "I opened app from porn developer now my computer broadcasts that I did that" case?
astrange 6 days ago | root | parent |
I just checked my Steam library and none of those games use code signing, so I guess that solves that. Video playing apps do, though, so it depends on how much plausible deniability you need.
It does seem like this could be fixed using the private relay system. It certainly doesn't need to be unencrypted.
saagarjha 6 days ago | root | parent |
Who would run the intermediate hop, though? Other Macs?
astrange 5 days ago | root | parent |
Randomly selected CDN companies.
https://security.apple.com/documentation/private-cloud-compu...
saagarjha 4 days ago | root | parent |
I guess it would be nice if they added that then :)
max_ 6 days ago | root | parent | prev |
Have you read the linked article?
jasongill 6 days ago | root | parent | next |
Yes, that's why I commented: the article's core complaint is that the OS's Gatekeeper feature does an OCSP certificate validation whenever an app is launched, that there's no way to disable it, and that this supposed phoning home could leak data about your computer use over the wire.
However, it also has a LOT of speculation, with statements like "It seems this is part of Apple’s anti-malware (and perhaps anti-piracy)" and "allowing anyone on the network (which includes the US military intelligence community) to see what apps you’re launching" and "Your computer now serves a remote master, who has decided that they are entitled to spy on you."
However, without this feature (which seems pretty benign to me), wouldn't the average macOS user be actually exposed to more potential harm by being able to run untrusted or modified binaries without any warnings?
pertymcpert 6 days ago | root | parent | prev |
Did you?
jgalt212 6 days ago | prev | next |
[flagged]
wutwutwat 6 days ago | root | parent | next |
Comments like this are extremely common on any apple post related to photos and honestly it's pretty sus that you and many others will start complaining about a thing nobody even mentioned, because of the thing you're complaining/concerned/pissed about. That's pretty telling imo and nobody ever calls it out. I'm going to start calling it out.
saagarjha 6 days ago | root | parent | next |
They’re extremely common because that’s how everyone on Hacker News posts. No need to insinuate someone is a pedophile.
MaKey 6 days ago | root | parent | prev |
What exactly are you calling out?
dialup_sounds 6 days ago | root | parent | next |
They're saying the above user may be a pedophile on the basis that they brought up CSAM on an article that has nothing to do with it.
jgalt212 6 days ago | root | parent | prev |
Indeed. What exactly is being called out? That someone is expressing concern at the impossibility of a vendor's claims?
wutwutwat 6 days ago | root | parent |
Calling out you complaining about child porn scanning when nobody is talking about that and it isn't what the posted link is about. Why bring up and express your dislike of a thing that 1. was never implemented and 2. was conceived to prevent the abuse of children?
Posting things like you did, unprovoked, when nobody is talking about it and it has nothing to do with the post itself, is fucking weird, and I'm tired of seeing it happen with nobody calling out how fucking weird it is. It happens a lot on posts about iCloud or Apple Photos or AI image generation. Why are you posting about child porn scanning and expressing a negative view of it for no reason? Why is that what you're trying to talk about? Why is it on your mind at all? Why do you feel it's ok to post about shit like that as if you're not being a fucking creep by doing so? Why do you feel emboldened enough to think you can say or imply shit and not catch any shit for it?
luuurker 6 days ago | root | parent |
The feature ended up not being introduced because of public backlash. It had problems and could be tricked, which at the very least should make you stop accepting everything Apple says about security and privacy at face value. On top of this, while "conceived to prevent the abuse of children", it could easily be used to target someone such as yourself for sharing a meme making fun of the president of your country (or something like that[0])... there's also the fact Apple has bent over backwards just to be present in some markets (eg: China and Apple banning VPNs[1]). It doesn't take much to understand why these comments pop up on posts about images + Apple's security/privacy.
Since we're calling people out, allow me to call you out:
Wanting your devices to be private and secure or asking questions about Apple after their f-up doesn't make you a pedo or a pedo sympathiser. Comments that suggest otherwise can also be a bit "sus" (to use your expression), especially in a place like HN where users are expected to know a thing or two about tech and should be aware that the "think of the children" excuse - while good - is sometimes used to introduce technology that is then misused (eg: the internet firewall in the UK that was supposed to protect the children and now blocks sexual education stuff, torrents, etc).
I'll assume your intentions are good, but it isn't right to assume or imply that people complaining about this stuff are pedos.
[0] https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-w...
[1] https://www.reuters.com/article/technology/apple-says-it-is-...
wutwutwat 6 days ago | root | parent |
This person specifically mentioned CSAM. They brought it up to complain about it being intrusive. You're defending someone who is bringing up and complaining about child porn detection when nobody was talking about it. You're defending a person who shouldn't be defended, because what they are upset with is companies trying to combat CSAM.
good luck with that
luuurker 6 days ago | root | parent |
Apple specifically mentioned CSAM when announcing the system. I don't understand why you find it weird that people refer to it as the system that detected CSAM when that's essentially what Apple was calling it.
The scanning Apple wanted to do was intrusive, had flaws, and could be abused. That's why you had security researchers, the EFF, etc, speaking out against it. Not long after the announcement, people were sharing "collisions" on Github ( https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue... ) showing that false positives would be a problem (any alarm bells?)... which then forced Apple to say that there would be a v2 that would fix this (even though they had said that v1 was safe and secure).
On top of ignoring these issues, you seem to be under the impression that the system was only for CSAM detection. It wasn't. The system looked for content... and Apple was going to feed it a CSAM "database" to find that type of content. The problem is that Apple has to follow local rules and many governments have their own database of bad content to block (and report to the authorities)... and Apple usually complies instead of leaving the market. For example, in China, the state has access to encrypted data because Apple gave them the encryption keys per local law. They also ban censorship avoidance apps. For some reason this would be different?
If you want to insist that it was just for CSAM and that people criticising Apple are pedos or are against companies combating CSAM, then do it, but do it with the knowledge that the system wasn't just for CSAM, that it could be tricked (and possibly ruin people's lives), and that it would likely been abused by governments.
nerdjon 6 days ago | root | parent | prev | next |
That was never actually released so there is no "still".
Also worth mentioning that if that had shipped, it would only have taken effect if you uploaded images to iCloud.
niek_pas 6 days ago | root | parent | prev |
They never actually went through with that, did they?
ZekeSulastin 6 days ago | root | parent |
They indeed shelved the plan {1}, and have also introduced iCloud Advanced Data Protection (their branding for end-to-end encryption) {2}.
There is still the opt-in Communication Safety {3} that tries to interdict sending or receiving media containing nudity if enabled, but Apple doesn’t get notified of any hits (and assuming I’m reading it right the parent doesn’t even get a notification unless the child sends one!).
1: https://archive.ph/x6z0K (WIRED article)
2: https://support.apple.com/en-us/102651 (Adv Data Protection)
3: https://support.apple.com/en-us/105069 (Comm Safety)
natch 6 days ago | prev | next |
>No privileged runtime access: PCC must not contain privileged interfaces that might enable Apple site reliability staff to bypass PCC privacy guarantees.
What about other staff and partners and other entities? Why do they always insert qualifiers?
Edit: Yeah, we know why. But my point is they should spell it out, not use wording that is on its face misleading or outright deceptive.
pertymcpert 6 days ago | root | parent | next |
Apple are running the data centers... this seems like an extreme nitpick of language.
natch 6 days ago | root | parent |
I thought contractors employed mostly by overseas affiliated staffing firms partnering with IS&T run the data centers, but what do I know. I actually don’t. But the wording is odd.
Super weird to go to the effort of making the wording so needlessly specific, so as to exclude, for example, government liaison personnel and contractors. Very nitpicky of them, to borrow your words. It's almost as if there's a reason.
astrange 6 days ago | root | parent | prev |
There aren't any other staff or partners.
natch 6 days ago | root | parent |
Then say that. In the document.
_boffin_ 6 days ago | prev |
I really don't care at all about this, as the only interactions I'd have would be speech-to-text, which sends all transcripts to Apple without the ability to opt out.
lukev 6 days ago | root | parent | next |
Settings > Privacy and Security > Analytics and Improvements
_boffin_ 5 days ago | root | parent |
Are you sure about that?
astrange 6 days ago | root | parent | prev |
IIRC that uses servers on HomePods but not anything else.
lukev 6 days ago | next |
There's something missing from this discussion.
What really matters isn't how secure this is on an absolute scale, or how much one can trust Apple.
Rather, we should weigh this against what other cloud providers offer.
The status quo for every other provider is: "this data is just lying around on our servers. The only thing preventing an employee from accessing it is that it would be a violation of policy (and might be caught in an internal audit)." Most providers also carve out several cases where they can look at your data, for support, debugging, or analytics purposes.
So even though the punchline of "you still need to trust Apple" is technically true, this is qualitatively different because what would need to occur for Apple to break their promises here is so much more drastic. For other services to leak their data, all it takes is for one employee to do something they shouldn't. For Apple, it would require a deliberate compromise of the entire stack at the hardware level.
This is much harder to pull off and more difficult to hide, and therefore Apple's security posture is qualitatively better than Google's, Meta's, or Microsoft's.
If you want to keep your data local and trust no-one, sure, fine, then you don't need to trust anyone else at all. But presuming you (a) are going to use cloud services and (b) you care about privacy, Apple has a compelling value proposition.