I wonder. After all, this is a demonstration of how the British could have identified and stopped key players in a rebellion, yes?
And you think people will be upset about the concept?
This is a poor example to use.
edit: I see lots of downvoting, but people... be realistic. The average person will only see that a rebellion could be stopped. This isn't an example which explains negative outcomes to them well.
People will give it all up due to fear. Hell, people will give up all privacy to Google for beads and trinkets!
And this example shows a government being able to thwart an actual, violent rebellion! The average Sally and Joe will think "great"!
> edit: I see lots of downvoting, but people... be realistic. The average person will only see that a rebellion could be stopped. This isn't an example which explains negative outcomes to them well.
I think you underestimate the American investment in our founding myths/stories. This is probably a bad example to convince an English person, but I imagine it would have considerable visceral appeal to an American.
"Metadata is information about information. A library’s card catalogue is an example of metadata: it contains details about the library’s books, which themselves are treasure troves of information. Metadata about a phone call can include: who called whom, what date and time the call was placed, each party’s location, and the call’s duration. ...
"Had such metadata been available to the Third Reich, it is easy to imagine that Kelly would have been flagged by encoding 3 (homosexual) on a punched card, arrested, and his death recorded by encoding 4 (execution), having been found guilty by association. There are other explanations for Kelly’s actions that, knowing the conversations’ contents, would lead to completely different interpretations."
What about a "20 person town" that doesn't even exist, with a police department that has a totally legit looking website and a police chief who emails AT&T requesting emergency access to prevent a suicide?
If my experience working with AT&T in the private sector is any indication, they would be cross referencing this sort of list when establishing new access, and I doubt this is something that can be done quickly.
> ...police chief who emails AT&T requesting emergency access to prevent a suicide?
This isn't a plausible hypothetical. If it was an emergency they would be calling through established channels and the fastest way to get access to that sort of information would likely be an emergency warrant, which is easily granted in the situation you described.
It's a bit more than a plausible hypothetical. Granted it didn't involve fake police departments, but there are documented cases [0] of hackers emailing companies, including Meta, from compromised police accounts with "emergency data requests" - which are, notably, warrantless.
Many (real) police departments don't actually have such an address, so that's not a reliable test.
Also, maybe not relevant for this discussion of AT&T, but what about "law enforcement" outside the US? My point is more broadly about the problem of verifying a request really and legally came from "law enforcement."
Yes, some small-town cop might not have any oversight, and if he starts requesting phone records of "famous" people it can quickly become a huge issue. Not that warrantless access isn't already an issue for everyone else.
One of the big Snowden revelations is that using NSA spying to do just that was so commonplace that co-workers were suspicious of anyone who didn’t illegally spy on exes or random crushes.
Not even the NSA has a database of famous person phone numbers. Most of the rich and famous don't have phone numbers, or even mailing addresses. They have companies that in turn manage their devices and properties, usually run through a "family office" of attorneys and accountants. So the NSA will have a list of numbers associated with their production company or family law firm, which would require further human investigation to determine which was actually in a person's pocket day to day.
Politicians have started adopting such structures too. House Speaker Mike Johnson famously has no bank account. He does, just not under his name. It would be managed by his family office.
I worked for telcos, and the rich & famous & royalty typically had their accounts restricted from access by regular CSRs. I know there were restrictions built into the internal systems, though it didn't apply to my role.
Now the elite use Telegram groups, Signal, and rotate SIM cards. Everybody should be doing life this way, so the telcos become dumb data pipes.
I don't think this is close to universal among the rich and famous, although no doubt you are correct about some. For one thing, there is a principal-agent problem in giving this kind of personal service company control over all your comms. Big impersonal companies can be more relied on to treat everyone the same. Many famous people will just sign up under a pseudonym.
Family offices are also an artifact of generational wealth. Plenty of famous people aren't rich enough for something that heavily staffed. Also newly wealthy will often want to be able to deal with existing friends and family on the same basis as before.
Indirectly, I've been able to observe some of the habits of someone with stratospheric levels of wealth. They had a phone number, but changed it frequently.
In some respects, the uber-wealthy and street/cyber-savvy criminals act the same: use intermediaries, use cash, use trusts incorporated in states/places where it is hard to trace beneficiaries, use burner phones bought with cash by intermediaries, change phones frequently, leave no trace of their names on data brokers, hire consultants who help get rid of any trace in data brokers, etc.
You seem to be under the mistaken belief that cops are held accountable. They’re not. Accessing Taylor Swift’s text messages is least dangerous thing cops of all levels regularly engage in.
Until someone complains to the press, then it’s one of the more common ways LEO get fired.
Agencies will throw officers under the bus at the slightest provocation to save their own asses. The easiest way is if they get caught violating policy. Officers will often work hard to find out how to not get caught/the cracks in the system, vague areas of policies, etc.
It’s always been a cat and mouse game. Same in the military, except the military doesn’t have to deal with civilian courts and shit rolls down hill more explicitly.
Cops are almost never fired. Placed on paid administrative leave pending the outcome of an investigation? While the press is in a tizzy? Sure. Fired? No. Almost all cops found in violation of policies win on internal appeal. Even in the extremely rare case when they are fired, they’re rehired in a neighboring jurisdiction post haste.
The courts? PUH-LEASE! Prosecutors never prosecute. Even if they go to a grand jury, they end up instructing the grand jury to pass a no-bill (i.e. not enough evidence of a crime), thus letting them wash their hands and say, “Gosh! The jury didn’t indict. Can’t do anything,” when in actuality grand juries indict everyone. (When I was on a federal grand jury, the AUSA told us that if we were going to pass a no-bill, to tell him so he could bring more evidence (a legitimate ask) and told us that a no-bill would result in a call to him, and his boss, from the Attorney General himself.) Then of course there’s qualified immunity and case law that is overly deferential to police actions, even when they violate procedure.
It’s bullshit. Cops are completely unaccountable. They’re gangs.
What does this have to do with East Coast elites? There needs to be some oversight, including a chain of command, and not every tiny police department should be able to access ALL phone data without any logging, oversight, or intermediary, and at the very least not without a warrant. Why do you twist this into a culture war narrative?
I'm not American, but from what I understand the FBI is more politically exposed than small-town cops. If the Bureau screws up, it's national news and someone loses a job; if Sheriff McBigot screws up, as long as he can count on the support of a few local villagers he'll be just fine.
Federal law enforcement and our intelligence agencies are politically powerful organizations that pretty much do as they please and never face consequences. Even our elected officials fear them. As Senator Chuck Schumer put it, “Let me tell you, you take on the intelligence community, they have six ways from Sunday at getting back at you.”
None of this contradicts what the GP said. Both are unaccountable in different ways; a local small-town sheriff is much more likely to get away with things that the FBI can’t, and vice versa.
> Neither group should have the power but I am far more likely to be in danger from a nearby guy on a power trip than from the FBI.
I think that that's a less obvious conclusion than one might think, even though I share your more acute distrust of local than of federal law enforcement. It might seem extremely unlikely for someone inconsequential like, let's be honest, many of us here to come to the attention of a national body; but history shows that all it takes is being affiliated with a group or movement that makes the federal government nervous to become the subject of FBI scrutiny, and this likelihood is heightened the easier surveillance becomes.
> Few have made a dent: it appears that the Feds are doing everything they can to keep Hemisphere secret.
This to me is the most concerning part. The government has a legal way to get that information, but this belligerence toward the rights of its citizens? That's particularly concerning, like they've forgotten who they serve and work for.
I don’t think under current Constitutional precedent a warrant is required to retrieve information exchanged with a third party voluntarily from that third party.
Smith v. Maryland is probably the most relevant case:
Ha, that's a good one! Nobody. Here's an example of why that won't happen, from another post a while back:
> The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic.
> An internal NSA audit from May 2012 identified 2776 incidents i.e. violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S. in the period from April 2011 through March 2012, while U.S. officials stressed that any mistakes are not intentional.
> The FISA Court that is supposed to provide critical oversight of the U.S. government's vast spying programs has limited ability to do so, and it must trust the government to report when it improperly spies on Americans.
> A legal opinion declassified on August 21, 2013, revealed that the NSA intercepted for three years as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before the FISA court that oversees surveillance found the operation unconstitutional in 2011.
> it must trust the government to report when it improperly spies on Americans
The courts told them to watch themselves and to self incriminate if they do crimez. An honor system. Which naturally, they did not snitch on themselves to the courts, because why would they.
This is precisely the logic behind qualified immunity: "you can only jail them if they knew it was illegal; otherwise they get a slap on the wrist and are politely asked to stop". It sounds nice in theory, but the inevitable result in practice has been total immunity for all lawbreaking, because of a compliant court system which sets the bar for "should have known it was illegal" impossibly high. Maybe at one point, before we knew this, it was possible to argue for QI in good faith, but now we empirically know it's an unworkable standard and it should be scrapped.
Besides, civilians get the "ignorantia juris non excusat" [0] principle, and the rule of law means that everybody, including government officials and law enforcement, should be held to the same standard.
Because if I do what my boss told me to do, what has been done for ages without consequence, and you’re going to pass a law putting me in jail for it, I will expend every effort to thwart it. If, on the other hand, you’re saying “stop it,” my give-a-shit factor is lower.
The broader public is indifferent about the Fourth Amendment. Polling, elections, and listening to phone calls from constituents all show this. Activating a concentrated mass of political energy against yourself, in this context, is not a bright idea.
Also, it’s wild how cavalierly we’re willing to contemplate taking away someone’s freedoms in a conversation about other freedoms.
Because we have explicit laws set up that already say this isn't an excuse. If your boss tells you to rob his neighbor, that won't save you. If your boss tells you to drive drunk, you'll still get arrested for drunk driving. The agency is yours. The question is whether a reasonable person would know this is a violation of the Fourth Amendment, and I'm pretty confident the answer is definitively yes. And we're talking about cops, who are trained professionals. Who are supposed to be trained in the law. They should know more than a reasonable person.
> The broader public is indifferent about the Fourth Amendment.
You're confused. The broader public is jaded. They've given up hope after decades of abuse. Losing hope is not the same as being indifferent. Even ambivalent would be a better word but I'd still say jaded. We're tired. We're exhausted. Depressed. Worn out. But not indifferent.
This was supposed to be an important freedom; a pivotal one in fact. It deserves more protection than "oopsie! Ok we promise to stop violating the rights of untold thousands".
I think it shows considerable restraint, to ask for the incarceration of a handful of offenders, compared to the flagrant abuse of the rest of everybody.
> Also, it’s wild how cavalierly we’re willing to contemplate taking away someone’s freedoms in a conversation about other freedoms.
When someone abuses power and privilege in order to take away freedoms from the powerless, the powerless wanting to take away the freedoms of the powerful is to be expected.
Even if you take this one capability away from the privileged and powerful, they will remain privileged and powerful. They've demonstrated a willingness to abuse their position to hurt others.
In contrast with removing only one capability from their arsenal, it would be better if the abusers would also be stripped of their power and privilege for having abused one of said powers and privileges.
That doesn't necessarily require incarceration, logically speaking. Sure, the wounded may feel better if it did, but the goal of policy in this situation isn't to please. But at the very least, the individuals who violated constitutional rights have shown they are no longer trustworthy.
> Even if you take this one capability away from the privileged and powerful, they will remain privileged and powerful
I’m all for firing the people and banning them from public service. I just don’t see the benefit of putting them in jail. From their perspective, what they’re doing is legal. Moreover, it’s deeply precedented.
And again, it’s a false economy. Threaten to put people in jail around a freedom that ranks low in voters’ minds and you’ll wind up with nothing.
I think our criminal justice system is overly punitive, but I see violating a fundamental right as much much worse than e.g. dealing drugs which I'd possibly spend years in jail for. Is spending years in jail for that reasonable? No. It's a mind bogglingly disproportionate punishment. But violation of fundamental rights is a heinous crime and I'd expect the punishment for it to be greater than the punishment for drug crime. Hence a call for jail time.
That is what government procurement means. Any competitive process comes before the award of the contract. After that point everyone is generally locked into a one-customer one-seller situation.
Just because it’s how it’s done doesn’t mean it’s not anti-competitive. Government procurement should be sourcing from multiple vendors and requiring interoperability from all of them. Otherwise it’s way too easy for megacorps to lobby their way into a contract and offer products at a loss before cranking up replacement and repair costs by a 10,000% markup since the govt has no choice but to pay it.
This here. Consumers should expect that their data will be sold to another company as soon as it is available.
Facebook, Google, etc proved that user data is lucrative. Why is it surprising when other companies want to jump on the same money-making stream?
Users don’t read terms and service agreements anymore. As long as companies include data collection and selling the data in the ToS, and customers continue to agree to it, what is the problem?
Customers can either vote with their wallet and move to a provider who respects their data, or they can push their elected politicians to make the sale of data like this illegal.
It depends. Phone carriers can still get metadata about iMessage. What phone numbers you are connecting with, when you send them, etc. Even if the actual content is encrypted, the metadata could be useful.
> "We defer to the Justice Department, to whom Senator Wyden's letter is addressed, for comment," the AT&T spokesperson said. "Like all companies, we are required by law to comply with subpoenas, warrants and court orders from government and law enforcement agencies."
Okay, sure. But, are they compelled by law to keep 40 years’ worth of call records, and not only keep them but in an easily indexed manner?
If they aren’t, then clearly there is something going on. Those records probably aren’t that cheap to keep around, compared to not keeping them around. They’re a public company with shareholders; if it were less costly for them to get rid of the records they presumably would have already gotten rid of them, so.. what’s going on?
Feds would typically pay yearly through classified budget to have this access. They simply procure it like everything else, then hide it in reams of black budget.
Even with recent FCC changes, I still get as many spam phone calls as legitimate phone calls. There should be a nearly unlimited number of these spam call records; it should be easy to investigate the sources and stop them.
I'm not a cop, but how can I get access to this Hemisphere system? Do I pretend to be a cop and walk into a police station, open the start menu and click on Hemisphere, and then "search all"? Seems doable.
There are jurisdictions in the US that elect local judges and justices of the court who receive zero votes at election time[1]. You could always just move to the middle of nowhere, write yourself in, and hope for the best. And in some places like New York, you need no experience to become one.[2]
Learning the fact that you need no credentials to be a family judge, for instance, permanently damaged my trust and respect in the system. If Joe Bumpkin down the street with no education can decide the fate of others' lives, does our society truly understand and practice justice?
Personally I'm using it as an opportunity to tell my story and call out authorities by name, when my case is over.
I would like to see a country where judges don't play God or choose which families win and lose in this country. They owe every citizen respect, not just the ones with uteruses.
I'm guessing that Joe Bumpkin's blatantly legally wrong decisions would be overturned on appeal by a higher court. But I live in a country with a non-stupid way to select judges, so I don't know for sure.
I find it naive for someone to trust a process that, at just about any junction, can be corrupted and force you to lose resources.
Our rights are but toys to them. This will not change until we start dismantling the structures of power that allow them to steal autonomy from others.
I didn't say non-stupid judge selection is perfect, just that it's strictly better than electing them with no required qualifications.
Also, a firm no thanks from me on dismantling the structures of power until we settle unambiguously on what replaces them. Again, maybe my Canadian bias, our "structures of power" are not great but typically those who want to dismantle them are advocating sheer wingnut lunacy in their stead.
Yeah, governments never get it wrong or anything...
There are some things, like families, that governments are simply not equipped or qualified to rule on. It doesn't concern them; they have no standing and no stake in the outcome. They are reticent to take real action and instead want people to bend to their will without having proven superiority of character, word, or deed.
I'll give you that it is indeed one judge actually being held accountable, to some mild degree.
Still not enough to convince me that this country is good or is capable of respecting its own documents and laws. It does what is convenient to it, no more, no less.
I will return the favor when the US inevitably pisses off another country and they invade.
My aim in sharing that article was "how we get to a point where judges even think this is acceptable behavior/know they can usually do this kind of thing with impunity".
No, you and the author are assuming the antecedent, which we have to do to make your question relevant. If we do assume that what is done in public, which you've willfully involved a third party in, is somehow 4th Amendment protected: it isn't and never has been.
It's necessary that government and large corporations work in tandem to prevent terrorists and child abuse. The point of government is to keep us safe. Without government, licensing, laws, fines, taxes, etc, etc we would keep 40% more income... But no one would want to build roads sadly, and the terrorists and child abusers would be rampant, unlike today. Thank you big G!
Curious that the annual funding has dropped to extremely low levels. It looks like they have no more than two or three people supporting this program. I wonder if AT&T has alternate revenue streams for this data.
We need lifetime jail sentences for people who violate the Bill of Rights. The entire department and any cop who uses this should be rotting behind bars in solitary confinement for eternity.
Extended solitary confinement is torture that causes permanent physiological damage to the mind; we should not be torturing criminals, regardless of their crimes.
The claimed features (see pg 5 of https://www.wyden.senate.gov/imo/media/doc/wyden_hemisphere_...) operate at a cell service level, so as long as your phone is still connecting to base stations the answer would be "no." Take, for example, "Linking multiple devices/phone numbers to an identified target" -- even if you use Signal on two phones and never do regular text/call, if the two devices travel together (e.g. connect to the same carrier base stations) one can make a guess that they are related. This has much wider applicability than the drug investigation mandate: e.g. you could use such capability to identify who is meeting with which investigative journalist.
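As a rough illustration of that linking idea, here is a toy sketch of co-travel analysis: devices that repeatedly attach to the same base stations in the same time windows can be flagged as likely carried together. The data, device names, and similarity score are all hypothetical, not anything from the Hemisphere document.

```python
from collections import defaultdict

# Hypothetical (device_id, hour_bucket, tower_id) observations.
sightings = [
    ("phone_A", "2023-11-01T09", "TWR-12"),
    ("phone_B", "2023-11-01T09", "TWR-12"),
    ("phone_A", "2023-11-01T18", "TWR-44"),
    ("phone_B", "2023-11-01T18", "TWR-44"),
    ("phone_C", "2023-11-01T18", "TWR-44"),
]

# Collect the set of (time window, tower) sightings per device.
towers_seen = defaultdict(set)
for device, hour, tower in sightings:
    towers_seen[device].add((hour, tower))

def overlap(a, b):
    """Jaccard similarity of (time window, tower) observations for two devices."""
    return len(towers_seen[a] & towers_seen[b]) / len(towers_seen[a] | towers_seen[b])

print(overlap("phone_A", "phone_B"))  # 1.0 -> strong hint the two devices travel together
print(overlap("phone_A", "phone_C"))  # 0.5 -> weaker association
```

Nothing here requires knowing a single message or call content; co-location over time is enough to tie a "clean" burner to an identified target.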
I think most privacy-conscious people have a pretty good idea about how to maintain proper private key hygiene (key should never leave the device, use FDE + a passphrase or a hardware token, etc). But we've been leaking metadata (such as mail headers) left and right ever since PGP was a hot new thing, something which should've been our primary concern no later than since Snowden/2013.
It would be good to have a proper field guide written down, that's a little more in-depth than "leave your phone at home", weighing risks vs convenience, going into detail on what kinds of metadata you might be leaking, etc. Most of us have some rough idea but it isn't at all obvious the way we know "MD5 is broken".
AT&T would know where you are from connections to the towers. They wouldn't normally know who was on the other side of the Jabber connection, particularly if one or more of the involved servers was not where AT&T could see any traffic.
If you want generic advice, it's "Faraday bag or pull the battery". "Off" isn't "Everything is off and nothing will communicate via any radio" on every handset, and some will connect to towers without a SIM in case you need to make an emergency call.
I use Signal, but most of my contacts do not. I've opted for porting my phone number to JMP.chat, which does offer E2EE. However, the recipient must also have an encrypted Jabber account.
Is that why Signal collects your phone number? Hah.
What I recommend is any E2E app that doesn't collect your phone number or any identifiable info, that you can use as a standalone app on the desktop, where E2EE is the only way it works (not optional or opportunistic like Jabber/XMPP, WhatsApp, Matrix, Telegram, etc.), and that is developed outside the US by well-known/reputable dev(s).
Check out wire and briar if they meet these requirements. Personally, I would not use digital media if I don't want the US gov knowing about it entirely. The solution is legislative not technical. Kind of like how you get a free TSA body massage at the airport, the people have willed it.
“End-to-end encrypted” and “private identity mapping” are orthogonal properties: a chat system can have both, but doing so is significantly harder (both in terms of engineering complexity and teaching users to operate the system safely).
Signal chooses (or more accurately chose, since they’re working on eliminating it) to depend on telephone numbers for identity mapping, which was and is a reasonable design constraint given their target audience.
Bullshit! When you compromise the phone of someone you find out the phone numbers of everyone they talked to. I won't even get into how signal can abuse this willingly or against their will.
Signal is no better than just using iMessage or WhatsApp. The terribly deceptive thing about it is that it is marketed as a super-secure messaging app, better than the alternatives, but in the ways that matter most (not crypto, but metadata and plausible deniability) it makes such compromises. Explain to me why Signal on the desktop can't function without a mobile app?! Governments have complete control over mobile phone infrastructure and can perform targeted compromises by using Signal contacts for target selection.
Do not use Signal thinking it will protect you better than any other encrypted app. Governments and private parties are not out there cracking the crypto itself.
Do you have something _in evidence_ indicating that Signal is abusing contact information?
The rest of this is scattershot: what matters for the overwhelming majority of use cases is end-to-end encryption between parties that know each others' identities but aren't necessarily technically proficient enough to play key management games. This is the user story that matters for dissidents, journalists, public figures, and ordinary people: if you aren't servicing those people, then it's extremely likely that you're (1) servicing nobody at all, or (2) servicing people who treat security as a LARP rather than a practical concern.
> between parties that know each others' identities
And Signal needs to collect their identities and reveal them to each other despite the risk of one of them getting compromised?
> This is the user story that matters for dissidents, journalists, public figures, and ordinary people
When you are outed as a source, as the romantic partner of an estranged spouse, as the public figure losing an election because of an embarrassing message, etc., it matters. And you have not given me a technical reason why Signal can't protect people as they expect it to. None of these people are concerned about the FBI doing a forensic investigation or wiretapping them.
> servicing people who treat security as a LARP rather than a practical concern.
Or normal people who don't know enough to think about security and threat models, who simply trust you, the tech-savvy person, recommending them Signal, which will protect them, you know, the general population. Matter of fact, I would bet good money most Signal users don't even know you have to verify each other's codes in person for the E2EE to mean anything other than false security!
Non-technical users, including people who don’t know what “end-to-end encryption” means. The right to privacy isn’t just for dorks who practice keyring bonsai.
And those people assume their phone number will not be revealed to random contacts and their contacts! A phone number on its own is PII! It is in some contexts more dangerous than knowing your drivers license or social security number.
What utter deception! And why i distrust signal even more! The entire world uses whatsapp which has its own identifiers as do most messaging apps. Signal deviated and went out of its way to collect the one piece of information even more identifying than your full name and address! Lol
I don't know anybody who thinks this. If you use a standard population distribution: it's safe to assume a slight majority of Signal's user base remembers when phone numbers were publicly accessible through printed phone books. Contact sharing is a substantially less problematic subset of that.
But when you use any messaging app they show your nick (and Signal lets you set your name), so the natural assumption, which I too had, was that phone numbers are used for SMS only, to invite others; but on Signal my name/nick is used like on WhatsApp, Viber, etc.
> Contact sharing is a substantially less problematic subset of that.
HN rate limits me so please look at other comments i made on this thread about why this is decidedly more dangerous than just about any insecurity you know about. Nothing is more dangerous than false security especially when most people don't think in detail about security, they just assume signal will take care of it. I have an example about sources being revealed when a journalist's phone is compromised (many more examples).
For the general population, are you saying man-in-the-middle attacks are a greater risk than the other person's phone being compromised? Because if so I would strongly disagree with that and can provide sources to back that up (but save me time and look into all the Pegasus pwnages and mobile stealers). In which case, in the threat model that matters most to the general population, Signal is compromised by sharing the one piece of information that is so good at identifying people it is the most popular anti-fraud identifier: phone numbers!
My trust in it is even lesser by how everyone rallies in defense of signal and downvotes any critique of it like with this thread. Be wary of crap you're not allowed to question!
> the natural assumption, which I too had, was that phone numbers are used for SMS only, to invite others; but on Signal my name/nick is used like on WhatsApp, Viber, etc.
I don't know about Viber, but this isn't true for WhatsApp. If someone sends you a message on WhatsApp, you can see their phone number.
Again: the overwhelming threat model here is "two individuals that already know each other want to communicate privately." That's what Signal facilitates, and it does so pretty well given the purity compromises that need to happen to do that for non-technical users. They're not worried about leaking phone numbers, because they're already shared.
Finally: there's a good chance you're being downvoted here because (1) these comments are indistinguishable from FUD, and (2) you're making claims (and now talking about examples) without citing them. I'll lead by example here: we know that the FBI can only retrieve minimal metadata from Signal[1], and various foreign intelligence services have more luck deploying malware to phones[2] than they do actually breaking anything about Signal's design. Nation-state adversaries don't have trouble finding peoples' phone numbers.
> Again: the overwhelming threat model here is "two individuals that already know each other want to communicate privately." That's what Signal facilitates, and it does so pretty well given the purity compromises that need to happen to do that for non-technical users. They're not worried about leaking phone numbers, because they're already shared.
Well, there is no justification for that threat model beyond "our leader said so." Especially when they expressly fight state-level censorship and interference, but something as simple as someone shoulder surfing you defeats it. Threat models are for security professionals, not regular people. Regular people don't model threats or assess security risk properly. They don't know encryption is useless if you don't authenticate. And Signal's refusal to be independently usable outside of smartphones, given how much law enforcement and spies love to abuse mobile phone infrastructure, leaves me very suspicious of their intent. Making phone numbers opt-out just makes you less discoverable at best. They have 50 million dollars and various projects no one asked for, yet this is too difficult and complex? You still haven't given me a reason to accept that beyond "trust me, I know."
> Finally: there's a good chance you're being downvoted here because (1) these comments are indistinguishable from FUD, and (2) you're making claims (and now talking about examples) without citing them.
Disagreeing with you is FUD? What claims did I make that need citing? Please challenge me then.
For anyone who reads this thread: do you really want to use Signal, given the hostility a person gets for questioning their terribly questionable choices?
> we know that the FBI can only retrieve minimal metadata from Signal[1], and various foreign intelligence services have more luck deploying malware to phones[2] than they do actually breaking anything about Signal's design. Nation-state adversaries don't have trouble finding peoples' phone numbers.
Do you freaking realize that you are making my point for me here? The problem is being able to connect signal messages with phone numbers. Of course they know everyone's phone numbers! But reporter A talking to source B is all they need to know because they can get access to either's phones! There are very few cases where a real life adversary cannot at some point access one party's phone over time.
If the only protection is against man-in-the-middle attacks, then Signal is by far the weakest app in that category, because Wire, Briar, etc. I can just use on any device.
I had advocated for Signal for many years and have gotten burned by it more than any other messaging app. The worst security tools are the ones that lead you to trust them more than you should; the more cultishly supportive their supporters are, the more wary of them you should be.
For the target audience of Signal, iMessage on an iPhone is a better choice. For the real users of Signal that need higher security, Wire and Briar are better. Signal compromises on too much and then claims too many security guarantees.
> The right to privacy isn’t just for dorks who practice keyring bonsai.
I never said anything contrary to that.
I know that normal people are part of the intended audience - I'm interested in whether you think that Signal's target audience includes or excludes "keyring bonsai" users (which, admittedly, is an amusing and not entirely inaccurate way of describing much of the security community).
If it includes those users, then why couldn't Signal have been designed such that the use of phone numbers for identification is optional (but the default)?
I think the short version is that doing so substantially complicates Signal’s design (meaning more edge cases, more complicated threat models, more exploitable bugs) for a marginal user case. Parsimony is a significant virtue in E2EE designs.
But for those users: you can effectively use Signal without a phone number by using a virtual number or similar for the one-time registration process. That’s clunky and not ideal, but IMO is a reasonable hurdle for “bonsai” users.
> I think the short version is that doing so substantially complicates Signal’s design (meaning more edge cases, more complicated threat models, more exploitable bugs) for a marginal user case.
Marginal users are just as human as the rest of us. Can you show examples of how this complicates Signal's design? The idea in my head requires only marginally more complexity to serve ~hundreds of thousands of more users.
Closer to tens of thousands, if you use other “keyring bonsai” metrics (such as maintaining a PGP key). Signal’s intended userbase is O(humanity).
The complexity here is in crossing domains: Signal will need to decide how to communicate which “kind” of identity a user has, what that means, etc. They’ll need to decide whether to use random-but-intelligible identifiers (easy to make errors with) or allow people to configure identifies (which means storing more personal data, plus impersonation risks). And so forth.
> Closer to tens of thousands, if you use other “keyring bonsai” metrics
I'm talking about users that have the understanding and desire/need to disconnect their Signal identity from their phone number. That's hundreds of thousands, minimum, if not millions.
> Signal’s intended userbase is O(humanity).
This doesn't obviously interfere with Signal's ability to create Good privacy mechanisms, e.g. disassociation between identity and phone number.
> The complexity here is in crossing domains: Signal will need to decide how to communicate which “kind” of identity a user has, what that means, etc. They’ll need to decide whether to use random-but-intelligible identifiers (easy to make errors with) or allow people to configure identifies (which means storing more personal data, plus impersonation risks)
None of these obviously "substantially complicates Signal’s design" as you claimed earlier.
> communicate which “kind” of identity a user has
Tell the user that other users either have a "phone number" identity or a "certificate" identity. Done. They're already responsible for verifying that the phone number matches the person they think it does.
> what that means
Tell users that a "certificate identity" just means that that person isn't using a phone number. And they need to be extremely careful when interacting with people using these, and absolutely should verify them using a secure channel. Or just disable these entirely until the user taps the "about signal" button in the settings menu 7 times or something.
I don't see any problems here that can't be overcome with a very modest amount of engineering. And, because it's the right thing, they should invest that effort.
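To make "a very modest amount of engineering" concrete, here is one hypothetical way the two identity kinds could be modeled. This is purely a sketch of the argument above, not Signal's actual design, code, or protocol, and every name in it is made up.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class PhoneNumberIdentity:
    e164: str  # e.g. "+15550100"; verified via SMS, as today

@dataclass(frozen=True)
class CertificateIdentity:
    key_fingerprint: str  # tied to a public key, no phone number attached at all

# A user is one or the other; clients branch on which kind they see.
UserIdentity = Union[PhoneNumberIdentity, CertificateIdentity]

def display_label(identity: UserIdentity) -> str:
    """What a client UI might show, with a warning for non-phone identities."""
    if isinstance(identity, PhoneNumberIdentity):
        return identity.e164
    return f"cert:{identity.key_fingerprint[:8]}... (verify out-of-band)"
```

The design question the thread is circling is exactly the one in `display_label`: how the UI communicates the weaker discoverability and the stronger verification burden of a non-phone identity.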
No, that's the deception of Signal. The general population is already using other identifiers, like WhatsApp and Viber numbers (which almost every country outside the US uses even more than SMS). Signal refuses to make phone number collection and usage opt-out. With the tens of millions at their disposal, and with the time they spend on mobile payments, crypto, etc., you are telling me they can't auto-generate identifiers as an alternative to phone numbers? They can't make it alphanumeric and consider any ID that is all numbers a phone?
It's all culting around tech/crypto personalities and ignoring the obvious things that don't pass the smell test.
Explain to me why Signal is special as opposed to more popular apps made for the general population that also do E2EE? Explain to me specifically why phone numbers and mobile usage is not optional? Even after like a decade of people begging for it?
This is a lot like PGP email: the same circles of people promoted it (still do in some cases), but the government loves it because email metadata is unencrypted, and tech circles insist on email dependency in every app because of the same cult mindset, even though hostile middle parties love it. Everything I do on Amazon, Netflix, Uber, Slack, you name it: you can tell my whole life pattern just by looking at email subjects in the clear on an MTA! All because of the tech sector's refusal to apply critical thinking and creativity when it comes to these things.
So again I ask, if I am allowed to critically examine Signal: why is it special and unique that it needs phone numbers no matter what? Especially given that device compromise of the people you talk to is not in their threat model. E.g.: you are a source and the journalist's phone is compromised; that is exactly what governments do! If Signal didn't collect phone numbers, all they would see on the journalist's phone would be your nick or in-app ID, but thanks to Signal they can find out who the source is, and using exploit kits like Pegasus this way is not uncommon! Real people are put in danger by Signal.
Look at all my downvotes and tell me this is not tech sector conspiracy or at best culting after personalities.
You could easily snoop who is messaging whom, but it's very different than knowing what is being messaged. End to end encryption would protect the second. Carrying a second phone intuitively protects the first, but in reality does not.
It's the metadata they are accessing, not the call itself, which means the title is somewhat misleading, as no one would ever need a warrant in the first place!
Because movements that get large enough get co-opted by one side or the other, filled with bad-faith actors from the other side, and filtered through an agreed-upon media narrative.
AT&T did divest CNN at least, and no thanks to the US Gov't.
I can’t say that I’m terribly surprised. AT&T has been complicit in mass surveillance in the past. The first example that comes to mind is Room 641A: https://en.m.wikipedia.org/wiki/Room_641A
Frankly, I think it’s safer to assume that most (American) communications companies are involved in some level of surveillance.
America's government actually innovates a simple free market solution to the bureaucratic and regulatory burden of the Fourth Amendment, and Americans have the gall to complain. The nerve. Just imagine how many tax dollars are being saved on bribing judges alone.
That’s not historically how it works in the US. If the government comes to you and says, “hey, so we have the ability to regulate you, imprison you, etc. Now we can do that, or you can accept this payment to give us access to private records — you have a day to decide”
Most people call that blackmail, even if it is government.
Second, it’s still a search and seizure of your records. You made the call; AT&T was only the common carrier connecting you. Regardless of whether or not they paid to conduct the search. Now the judiciary is pretty slanted toward supporting law enforcement, so it will take time to play out. Often the higher courts are better at deciding these issues (i.e., less bias).
That said, there’s a tendency in the US justice system to protect the system. So we will see how it shakes out in 5 years while this makes its way through
Historically, here’s how it works: Suppose the police come to your door and say, “Hi, we’re investigating your neighbor for a crime and we would like to see the footage from your security camera for the last few days.” At that point, you can choose whether you share the records with the police. It’s up to you. You may demand to see a search warrant, but you do not need to. Your neighbor does not get a say in whether you help the police.
Here, the same pattern applies, but the scale of cooperation is staggering. “Quantity has a quality all its own,” on steroids. In the case of AT&T, their cooperation includes trillions of records and affects literally everyone who has used a phone since 1987.
The issue, according to the article, is not whether voluntary sharing of records is constitutional, but whether the public should have a right to know the details of how the system works.
> If government comes to you and says, “hey, so we have the ability to regulate you, imprison you, etc. Now we can do that, or you can accept this payment to give us access to private records — you have a day to decide”
Who said the Government blackmailed them? These companies are selling this data to every large corp asking, not just the Government. You give private corporations way too much credit here, if they can earn a buck they will sell your soul.
The government does blackmail. They will freeze you out of government and defense contracts if you do not cooperate. I would also bet that every other carrier has a similar arrangement, not just AT&T.
Furthermore, if the DOJ contracted them to build the system there would have been due diligence on the legality and process.
But basically all American telecom operators are selling this data to private corporations already; it is a standard contract, so there is no need for blackmail to buy the data. The only strange thing here is that they try to hide this project, not that it is ongoing.
And I'm pretty sure that most corporations that sell your call data to other private corporations see no issue with also selling it to the government. To them it is just another customer.
Nope. Private operators cannot access this data. American telcos are heavily regulated. This falls under customer proprietary network information (CPNI), to be shared only with the owner of the data (the customer) or with law enforcement (and internal use, but not for commercial use).
> This falls under customer private network information (CPNI)
Any source for that being the case here? CPNI is partially protected, but subsets of it can be freely shared and sold by companies. Do you have a link showing the data police accesses here goes under the protected category and not just the freely available category?
According to the CPNI Wikipedia article, private companies can access the following. Parts of it can only be shared inside the company, but a very large part, like location data, URLs, and demographic data, can be sold freely:
> Verizon shares CPNI "among our affiliates and parent companies (including Vodafone) and their subsidiaries unless you advise us not to". and states that it shares "URLs (such as search terms) of websites you visit when you use our wireless service, the location of your device ("location information"), and your use of [Application software [applications] and features" as well as other "information about your use of Verizon products and services (such as data and calling features, device type, and amount of use), as well as demographic and interest categories (such as gender, age range, sports fan, frequent diner, or pet owner)" with other non-affiliated companies
This is historically not how it works. You do not own the records the bakery keeps of when and what the oven is used for. If the cops want to pay for that record keeping you have no say even if you've ordered a cake from that bakery. The fact that you did or did not use the cake to smuggle a hammer into a prison is frankly unrelated.
This is a great article to share with friends who don't understand how dangerous this is.