This sounds a lot like the 'general warrants' which helped spark the first American revolution.
LA is tracking cars through cameras / license plate detection. When an FOIA request sought details on the tracking, the city's response was that all the data was part of an "active investigation" and would not be disclosed. I think this was the first time a city has tried to hide details of a surveillance dragnet like this, across a metro area, by claiming basically the entire city was being actively investigated. [1]
In that light, this development is extremely concerning, and I can only hope the judiciary will push back hard to protect some semblance of the 4th. The average person doesn't typically consider themselves a target, but as the number of dragnets increases, and if the Feds can perform mass hacking of our personal devices where the results are directly admissible in court (putting aside parallel construction for a moment), this mentality will have to change.
The writing has been on the wall for a while, but there's always been some comfort that at least the mass surveillance wouldn't be admissible. Now you start to see we're getting boiled like frogs. The direction this is all going leads to a very draconian future just 5-10 years out. I'm not sure there's any way to stop it; I don't think enough of the American people can get their heads around how this is cyanide to a free society.
It's not enough to campaign, vote, or even lobby. We need to form a vanguard party and run our own candidates and get them elected.
We should think more broadly than just legislators and chief executives. Consider sheriffs, judges (where they're elected), district attorneys, city controllers, public utilities commissions, etc.
Ideally, clever technology. We can at least try to stop digital dragnet surveillance with better technology. It might be possible to frustrate physical dragnet surveillance the same way.
As a fallback, there's always violence if people get pissed off enough.
A couple weeks ago, when I asked someone how to verify on demand that a BIOS isn't compromised, someone else quipped "Could be the processors too, better forge those by hand." https://news.ycombinator.com/item?id=7609780
In fact, it turns out the future is probably headed in that direction. All mobile phones are already compromised; every phone has a proprietary baseband chip with full remote DMA access that no amount of open software running on your phone can stop. And as laptops become more and more mobile, it's going to seem strange that we've spent so long trying to tether our mobile phones to our laptops. Perhaps future laptops are going to have 3G access embedded right into them which consumers can subscribe to for some low monthly fee. Consumers would probably love it, because it's very enticing: you get internet access in most of the world without having to find a public hotspot or tether your phone. No more dealing with hotel wifi; no more dealing with logging in to someone else's.
The takeaway is that your children may grow up in a world where it's impossible to guarantee the government can't get into your computer if it really wanted to. Desktop computers aren't ever going to go away, but hardware design seems to be trending towards having built-in theft prevention. One feature of theft prevention is having the ability to locate the computer, or send it remote kill signals. If trends like that do catch on with consumers, it's "gg no re," because once our hardware is compromised to the point of third parties being able to remotely access it on demand, we've all lost something precious, and there won't be any opportunity to fix it. The more I think about it, the more it seems like it's just a matter of time until this happens, precisely because once it's here, it's never going away.
More and more network adapters seem to have DMA access to your computer. It would be interesting if the protections afforded by open source software were defeated at the hardware level without most people noticing. There doesn't seem to be any way to defend against it, because open source hardware simply can't survive: no money is necessary to develop open source software, whereas large investment would be necessary for development of open source hardware down to the chip level.
> The takeaway is that your children may grow up in a world where it's impossible to guarantee the government can't get into your computer if it really wanted to.
If your adversary is a well-funded government, you need to have:
Secure software
Secure firmware
Secure hardware
Secure staff who follow procedure
Secure location
Armed guards
Etc
Most people cannot do all of this and thus have been vulnerable to governments for a long time.
Suggesting that your mobile communications data was ever secure when it was available to your telecoms provider seems odd to me.
> The takeaway is that your children may grow up in a world where it's impossible to guarantee the government can't get into your computer if it really wanted to.
This is impossible to guarantee today. Certainly if you run the zero-day magnets known as browsers, and even if not, there is always some possibility of physical intrusion.
> More and more network adapters seem to have DMA access to your computer.
With an IOMMU (VT-d or equivalent on other platforms), it should be possible to protect against malicious DMA from any source.
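If you want to sanity-check whether DMA remapping is actually active on your own machine, here's a rough sketch (Linux only, and assuming the usual /proc/cmdline and /sys/class/iommu locations; a populated /sys/class/iommu doesn't prove every device is actually constrained by it):

    #!/usr/bin/env python3
    # Rough check for an active IOMMU on a Linux box. A sketch, not proof.
    import os

    def kernel_cmdline_flags():
        # intel_iommu=on / amd_iommu=on are the usual opt-in boot flags
        with open("/proc/cmdline") as f:
            return [tok for tok in f.read().split() if "iommu" in tok]

    def iommu_units():
        # Each DMA remapping unit shows up under /sys/class/iommu when enabled
        path = "/sys/class/iommu"
        return sorted(os.listdir(path)) if os.path.isdir(path) else []

    if __name__ == "__main__":
        print("iommu-related boot flags:", kernel_cmdline_flags() or "none")
        units = iommu_units()
        print("iommu units:", units or "none found (DMA remapping may be off)")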
Also, not all phones have basebands with DMA access to main memory. I think iPhones do not, though I am not sure; some older iPhones have been attacked by turning on "auto answer", which demonstrates direct access to the microphone.
Unfortunately, projects such as DROPOUTJEEP confirm that the iPhone isn't to be trusted.
> This is impossible to guarantee today. Certainly if you run the zero-day magnets known as browsers, and even if not, there is always some possibility of physical intrusion.
Today you can use OS's such as Tails to prevent most exploits from embedding themselves into your computer. This is what Snowden used, for example. But if hardware becomes compromised, Tails will offer much less protection.
Here's an interesting section of the article:
The department must describe the computer it wants to target with as much detail as possible. For example, an investigator may be covertly communicating with a suspected child molester and know an IP address, and then obtain a warrant to use malware to find the actual location. In the case of botnets, malware might be used to try to free the compromised computers from a criminal’s control.
Imagine if child molesters begin using Tails. The government response may be to try to set up some kind of "Tails dragnet" via compromised network interfaces. It should be possible for a network adapter to detect that Tails is running. At that point, since it has DMA access, and since few people use Tails at any given time, it should be possible to instruct a network adapter to search through a computer's memory for evidence of activities that the government doesn't like. Since Tails offers strong anonymity protection, there's no way to describe a computer "as specifically as possible" other than to say "it's running Tails while watching child porn."
The unfortunate conclusion is that in the future, someone like Snowden might immediately be caught. "If someone is using a strong anonymity tool and GPG to hide their conversation, we should probably configure their network card to monitor their activity."
Once hardware begins to turn against you, there seems to be nothing anyone can do to protect themselves. Encryption doesn't work against an adversary that has access to your computer's memory.
Just thinking: the problem, it seems, is that end-to-end encryption is not really end-to-end. The user is the endpoint, not the computer.
From a UX point of view, a dongle between the screen/keyboard and the computer providing an encryption overlay could be a way to unambiguously protect information - so information never exists decrypted in the machine, just on the screen.
User input/output accessories are much more technologically static than software/hardware, so an open-source hardware solution may be possible?
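A toy sketch of the data flow that idea implies: the hypothetical dongle holds the key and does all the crypto, so the host only ever sees ciphertext. The keystream construction here is purely illustrative, not something a real device should ship (a real dongle would use a vetted AEAD cipher and proper key management), and the key only lives for one process run:

    #!/usr/bin/env python3
    # Toy model of a keyboard/display "crypto dongle": plaintext only exists on the
    # dongle side; the host stores ciphertext. Illustration only, not real crypto.
    import hashlib, hmac, os

    DONGLE_KEY = os.urandom(32)  # hypothetical secret that never leaves the dongle

    def _keystream(key, nonce, length):
        # HMAC-SHA256 in counter mode, just to have a deterministic keystream
        out = b""
        counter = 0
        while len(out) < length:
            out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
            counter += 1
        return out[:length]

    def dongle_encrypt(plaintext):
        # Runs on the dongle: keystrokes in, only (nonce, ciphertext) out to the host
        nonce = os.urandom(16)
        ks = _keystream(DONGLE_KEY, nonce, len(plaintext))
        return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

    def dongle_decrypt(nonce, ciphertext):
        # Also runs on the dongle, on the way to the screen
        ks = _keystream(DONGLE_KEY, nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, ks))

    if __name__ == "__main__":
        nonce, ct = dongle_encrypt(b"typed on the keyboard, never seen by the host")
        print("host stores only:", ct.hex())
        print("screen shows:", dongle_decrypt(nonce, ct).decode())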
>The unfortunate conclusion is that in the future, someone like Snowden might immediately be caught.
I think that is too naive. Snowden types don't assume they won't be caught; they probably assume that it is only a matter of time until they are caught, and play the cards they have in such a way that it becomes really hard to send your garden-variety CIA/DIA/spec-ops/defense contractors out on a pick-up operation, not only from a feasibility standpoint but from a geopolitical standpoint (e.g. What will Beijing's/Moscow's/D.C.'s response be if we run such an operation in their front yard? What precedents might we be setting?).
Also note that offensive/defensive technical capabilities aren't as asymmetric as they appear for all possible targets of nation states. For some targets, yes, but probably not so much for those with the technical knowledge to create/use such systems and their derivatives, which might very well be other nation states (or parties appearing to originate from them).
> Also note that offensive/defensive technical capabilities aren't as asymmetric as they appear for all possible targets of nation states. For some targets, yes, but probably not so much for those with the technical knowledge to create/use such systems and their derivatives.
If you concede that your computer has a chip with DMA access which can be used by the government, then you must concede that the same chip can monitor you for activity that triggers active surveillance. For example, I think Tails is going to force governments into monitoring at least which operating system you're using. There's no way to target a specific Tails user, so the only recourse is for the government to do dragnet surveillance of everyone using Tails, or ignore the activities of those using Tails. Since the latter seems politically untenable, the former is becoming more likely with time. When the government can passively check whether your activity fits the pattern of some kind of criminal activity, the situation is about as asymmetric as I can imagine. Is there really any technical knowledge that could protect you?
>If you concede that your computer has a chip with DMA access which can be used by the government, then you must concede that the same chip can monitor you for activity that triggers active surveillance.
What's DMA access? Direct Memory Access access?
That aside, I'm not willing to concede that every computer that has been or can be built can be exploited by a government out of the box remotely [because dragnet] (most of them, I will concede, probably can, and conversely anyone technical enough can probably exploit many systems in the same way for their own means [don't trust your spouse/friends/employees? bug them with remote backups of data to analyze in real time; hell, companies do such things now as-a-Service]). But continuing on with your conclusion of a dragnet (which is more or less present today), access isn't really the problem; you have a signal and noise problem, wherein you will have false positives and false negatives. The textbook example of the mal-possibilities is the NSA providing data which led to the targeting of phones in the Middle East, from which drone strikes were initiated and hit innocent civilians [0]. Just wait until this is happening within a country's national borders by domestic agencies; one day, someone is going to be taken out that wasn't meant to be taken out. Can't ignore the false negatives/positives forever, though governments seem to try very hard to do so. I think corporations are more forthright about the extent to which the data they collect can be used, because if you knowingly contract for / utilize bogus data for certain applications, someone else will eat your lunch eventually.
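To put some rough numbers on that signal/noise problem (every figure below is invented purely for illustration, just to show how a rare target class plus imperfect detection plays out):

    #!/usr/bin/env python3
    # Back-of-the-envelope base-rate math for a dragnet classifier.
    # All numbers are made up for illustration only.
    population   = 300_000_000   # people swept up in the dragnet
    true_targets = 3_000         # actual targets (hypothetical 1-in-100,000 base rate)
    tpr = 0.99                   # chance a real target gets flagged
    fpr = 0.001                  # chance an innocent person gets flagged anyway

    flagged_real     = true_targets * tpr
    flagged_innocent = (population - true_targets) * fpr
    precision = flagged_real / (flagged_real + flagged_innocent)

    print(f"flagged real targets:    {flagged_real:,.0f}")
    print(f"flagged innocent people: {flagged_innocent:,.0f}")
    print(f"odds a flagged person is a real target: {precision:.2%}")

With those made-up numbers, under 1% of the people flagged are actual targets; the rest are false positives.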
>activity is fitting the pattern of some kind of criminal activity
From a predator-prey/evolutionary standpoint, criminal activity is always evolving (typically every living being and the systems they rely on are).
Not to mention the time-sensitive nature of the systems that do the analysis: if $criminal_activity is always changing over a defined period of time, you risk getting no signal from those who conduct such $criminal_activity in less than that period, or that by the time the analysis has been done, any signals collected from the device will be moot (i.e. the computer was destroyed, thrown away, or even worse, passed along to / associated with someone else, which means anything thereafter associated with that system is akin to chasing a ghost within the machine).
>Is there really any technical knowledge that could protect you?
Well, since the focus is on Tails [but mostly on the dragnet], one can clone the sc[1] and go through it for what could possibly define one as a Tails user, replace that with something else, build one's own image and voila, you just avoided being in the dragnet. The thing about dragnets is that they can only really capture the lowest common denominator; deviate only slightly from that, and the adversary will have to expend resources on a targeted operation (any adversary, not just nation states, is technically capable of doing these things, and by definition that's not a dragnet). This is what happens today. Not in some far-off distant dystopian future meant (intended or not) to invoke fear in the ignorant/lazy. Yes, if one wants to avoid being in a dragnet with some of the tools they use, then one will take the steps necessary to keep such information obfuscated/opaque from the dragnet.
> This is what happens today. Not in some far-off distant dystopian future meant to invoke fear in the ignorant/lazy.
Why not talk with me without the snark? This topic seems like it interests you a lot, so it seems like we have some shared ground.
> one can clone the sc[1] and go through the source code for what could possibly define one as a Tails user, replace that with something else, build their own image and voila, you just avoided being in the dragnet.
This won't work because it's extremely difficult to analyze your network card and discover its behavior, and without this knowledge you'd be changing things blindly. There are far too many ways to detect an OS to change them all. Tweak-and-recompile would work if they use a naive and brittle heuristic like "look for the first 64 bytes of whatever is loaded into memory when Tails is booting up," but they wouldn't employ such a brittle heuristic in the first place because every time a new version of Tails is released, they'd need to update their entire infrastructure to look for a new pattern. Something like monitoring the network traffic for a unique "Tails signature" is more likely in this scenario; for example, how many computers start Tor immediately after a network card is connected? Detecting that condition would be a decent starting point for detecting Tails, and they'd want to combine it with some other hard-to-evade condition to cut down on false positives without introducing false negatives.
One interesting way to detect that someone is using Tails would be to notice that their system clock is set to UTC time. Most of the computers connected to the internet aren't using UTC, so UTC time plus Tor usage on startup is pretty commonly associated with anonymity OS's. That said, it seems like it might be difficult for the network card to detect whether the system clock is UTC time, but it's just an example of how difficult it is to fully conceal your usage of an anonymity tool. It's not just a matter of tweaking the source code.
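For what it's worth, the UTC fingerprint is easy to check on your own box. A tiny sketch (it only looks at the configured timezone offset, nothing else, so it illustrates the signal rather than any actual detection mechanism):

    #!/usr/bin/env python3
    # Is this machine's local clock effectively UTC? One of the small fingerprints
    # described above -- purely a local self-check.
    import time

    dst_active = time.daylight and time.localtime().tm_isdst
    offset = -(time.altzone if dst_active else time.timezone)  # seconds east of UTC

    if offset == 0:
        print("Local clock matches UTC -- one small signal an anonymity live OS can give off.")
    else:
        print(f"Local clock is UTC{offset / 3600:+.1f} hours.")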
This seems to prove the seriousness of this threat, though. Once you agree that it might be possible for your network card to be your adversary, there are endless ways that it can be used to defeat you. Hardware manufacturers have evidently been thinking along these lines, so why shouldn't we try to think of ways to prevent this from happening? As the BIOS exploits have shown, that dystopian future may be closer than anyone's comfortable admitting.
EDIT: Someone went through and downvote-bombed our whole conversation on both sides... I tried to correct it, but it looks like upvotes from Tor users under a certain karma threshold aren't registering, so I wasn't able to help fix it.
>Why not talk with me without the snark? This topic seems like it interests you a lot, so it seems like we have some shared ground.
>One interesting way to detect that someone is using Tails would be to notice that their system clock is set to UTC time. Most of the computers connected to the internet aren't using UTC, so something like that is pretty commonly associated with Tails. That said, it seems like it might be difficult for the network card to detect whether the system clock is UTC time, but it's just an example of how difficult it is to fully conceal your usage of an anonymity tool. It's not just a matter of tweaking the source code.
It's not out of snark (I apologize if it sounds like it; I'm not intentionally seeking to offend anyone), but mainly out of frustration about the conversation on how everything seems to be so difficult. Difficult to whom? Someone who cannot modify sc to a significant extent? Someone who just downloads the program and expects it to just work? Not just some random tweak; I mean going through looking at what the functions actually do, which remote connections they rely on at various stages, how data is generated and allocated in memory, what system calls are made, etc., and changing it according to one's threat model so that the program one compiles has the same functionality but is not recognized as the same program. Maybe that involves changing the system time. Again, trying to target someone doing this is trying to target someone actively adapting, probably faster than it takes for the dragnet to adapt, since like I said, dragnets mainly hinge on effectively going after the common denominator, which is usually the mindset of someone who downloads/uses a program/system and expects it to just work and address all of their concerns without doing anything themselves. In the end, anyone can try all they want to cut down on the false negatives and positives, but they will still exist, and that's where the "real" danger comes from for groups/orgs/gov's that go to such extents.
>Once you agree that it might be possible for your network card to be your adversary, there are endless ways that it can be used to defeat you.
If this is really in one's threat model, one is probably throwing away or using shared computers before this point… maybe from within a virtual machine on a large bank's network via an exploit one used (remote or local).
>so why shouldn't we try to think of ways to prevent this from happening?
Few people do this today for themselves; most do not. People today seem to have come to expect that someone else needs to protect them, which must have former cypherpunks laughing. As far as I'm concerned, we are already living in the dystopian future, and the few who take the steps to mitigate based on their threat model do. These issues have been around for a while, and those who cared all along took the steps they felt were necessary to protect themselves and still do. Maybe that involves not taking advantage of the latest Skinner box of the day; again, tradeoffs and threat models to consider. And those now made aware have to learn a lot to put themselves in the same shoes, if they even care enough to learn what they need to start protecting themselves and to continue to adapt to do so. Again, it's not like BIOS exploits suddenly became possible because Snowden profiteers told us about them, and because of all this I don't think it really is a serious threat (any more than it already was), because your adversaries are opening themselves up at the same time. This has always been an evolving landscape. Such is the world we live in and have always had.
Edit: No worries, as I've learned over time, down-voting isn't really effective for silencing ideas/discussion since it just attracts more interest to those who want to seek such information.
> projects such as DROPOUTJEEP confirm that the iPhone isn't to be trusted
That iPhones used to completely trust physically connected devices without any verification was obvious to anyone paying attention even before it was verified at Black Hat USA 2013 [1]. This was fixed in iOS 7. The evidence we have of DROPOUTJEEP says it is installed via "close access methods" [2]. I wouldn't be surprised if remote vulnerabilities exist that could be used to install it remotely, but I am aware of no public evidence that they are being exploited now.
> Once hardware begins to turn against you, there seems to be nothing anyone can do to protect themselves. Encryption doesn't work against an adversary that has access to your computer's memory.
In the future (or today, depending on your setup), IOMMU. In the present, there is no evidence that baseband backdoors of this type actually exist (as opposed to hacks). When the adversary adds backdoors deeper in the hardware? ...well, we'll see if that is discovered someday.
To editorialize a bit, I guess it can't hurt to worry about and try to head off anticipated future threats - it's not like anticipating different threats is mutually exclusive - but still, I somehow can't shake the feeling that people's emphasis on secret backdoors unduly weights threats that are easier to romanticize over more pragmatic but more dangerous ones.
The reason it's good to proactively think of future threats is that so many past concerns have proven to be true. Several months ago, for example, no one on Hacker News really believed that BIOS backdoors were much of a threat; today it's a well-established fact.
The tools of law enforcement probably aren't going to be revealed, and they're hard to discover. Nobody knew about the zero-day exploit employed against Tor browser, for example, and there are almost certainly many more tricks like that up their sleeve. They already take steps to conceal them; parallel construction is an unfortunate reality. And since there's not much justification for a whistleblower to reveal the techniques, it's unlikely someone will come out and talk about them. We'll probably need to think along the lines of "What's technologically possible, and how is it useful to law enforcement?" It's not a good idea to wait until a weapon is used before thinking about how to react to it.
The history of communications technology and how governments have reacted to the technology is actually quite fascinating. Wiretaps used to be extremely commonplace, and since there's not too much legal protection from the government rifling through your digital life at will (at least compared to getting permission for wiretapping your phone), it seems like it's better to err on the side of caution.
It's also important to realize that even though some governments follow due process, several powerful ones don't. Also, there are other global considerations. The US has made it pretty clear that their legal restrictions are designed to apply to US citizens, not any foreign person. You may be forced into a situation of choosing which governments you'll trust, especially when cross-nation collaboration becomes even more pervasive. If other countries adopt a similar attitude of "Our citizens are protected; other citizens are examined," then the US may simply outsource their databases of information to be examined by some other government, like any other member of the Five Eyes.
I understand your concern and skepticism though. It's a question I've often wrestled with myself.
> Nobody knew about the zero-day exploit employed against Tor browser, for example, and there are almost certainly many more tricks like that up their sleeve.
It had actually been patched already upstream, so it was not really a zero-day. I'm not sure if a patched Tor Browser Bundle had been released and people just hadn't upgraded, or if the patch hadn't made its way to the bundle yet.
> This is impossible to guarantee today. Certainly if you run the zero-day magnets known as browsers, and even if not, there is always some possibility of physical intrusion.
Bingo. Even if you go all-out with security and only browse the web with a pure text web browser through Tor running on a VM that you purge after every use, use full-disk encryption with plausible deniability, fully shut down your computer and wait until the RAM is cool before leaving, inspect your computer for NSA/other implants every time before boot, tape your webcam and mic, never use your real name, and whatever else you can think of, you're just going to go nuts from all the paranoia, as well as from realizing all the myriad ways your security could still be broken (don't forget to check the keyboard for a built-in logger and look inside your case for PCI cards you don't recognize, and hope they don't have any implants that look convincingly like something you'd recognize as yours). Never mind that this isn't a very viable way to do most things most people actually use their computers for, like personal email, online banking, social networking, and so on.
> The takeaway is that your children may grow up in a world where it's impossible to guarantee the government can't get into your computer if it really wanted to.
The government has always had access to everything if they a) really wanted to and b) had just cause. That's why search warrants, tailing suspects, court-approved phone taps, bank account freezes, etc etc etc exist.
The notion that the government ought to not be allowed into your computer, ever, doesn't seem grounded in either reality or historical precedent.
> The notion that the government ought to not be allowed into your computer, ever, doesn't seem grounded in either reality or historical precedent.
I didn't intend to argue that. I'm saying that strong anonymity OS's like Tails will force governments to do dragnet surveillance using compromised hardware in order to track suspects down. There is no way to tailor surveillance to an individual using Tails, because it's set up to hide your IP address at the OS level (assuming Tails is implemented correctly).
Assume child molesters begin using Tails or whatever environment prevents FBI browser exploits from working. What then? There's one recourse: the government can set up your network card to monitor when you're using Tails for unlawful activity. And since it's very difficult to come up with a "footprint" of an individual Tails user, i.e. some way to monitor or attack one specific individual, this is likely to force the government into monitoring all activity. This can be done via compromised hardware, like a network card, which can be remotely configured to monitor memory for specific trigger conditions like "user is running Tails, and main memory contains specific terms for underage children."
Sure, it sounds unlikely right now. But this is the general direction that technology has been headed in. How much ground should we concede in this debate? Is it ethical for a government to be able to subvert someone using strong anonymity tools if it forces them to broadly target everyone using such a tool?
More broadly, what mechanism should we approve of the government using to inject your computer with code? If the government has DMA access to everyone's computer, then that hardware could be configured to monitor which operating system you're using, and only triggered into actively targeting you specifically when certain conditions arise, such as using a strong anonymity tool, or a certain specialized browser that child pornographers also happen to use. Should the government be allowed to be proactive in its hunt for offenders? Are we comfortable with a hardware device watching which OS we're running? There are a lot of issues that seem worth thinking carefully about.
Yeah, it is a damn tough question. Criminals have more tools than ever for operating under the radar, so restricting agents to traditional rules for investigation & surveillance seems like a mistake. But on the other hand, how do you grant increased surveillance capabilities to counter increased covert capabilities, without ruining privacy? Basically, it's like privacy is caught in the crossfire.
Criminals have always had lots of tools available to them; by definition, they aren't restricted by law, which opens up many possibilities not available to the rest of society. Nobody ever said police work is (or should be) easy.
Despite that, the answer to how you grant increased surveillance capabilities is easy: you get a warrant.
It isn't a terribly difficult bar to reach - judges will hand out warrants quite easily. We - the citizens - just ask that those asking for such capabilities ask for them (each time...), and at least show they have some minimal sort of reason to want such easily-abused capabilities.
Requiring the warrant therefore shouldn't slow down legitimate investigations more than a trivial amount. If enforced, on the other hand, it does act as a "limiter" to sweeping abuses.
> Perhaps future laptops are going to have 3G access embedded right into them which consumers can subscribe to for some low monthly fee.
We're getting a bit off topic here, but my colleague has a 2-year-old Sony Vaio laptop that has this. He also has a SIM card for it that came for free with his €50,- internet/tv subscription (incl. more monthly GBs than he needs).
Wow, well I just had a moment of self-reflection on what a Pollyanna I am...my first thought at reading the headline was, "Oh good, now people like Aaron Swartz can't be threatened with 15 year prison sentences, or (in other cases) become felons for violating certain interpretations of Terms of Service". And of course, it is not that.
Didn't these folks take an oath to defend the US constitution? This pretty clearly violates 4th Amendment protections against unreasonable search and seizure.
Yes, I'm not a lawyer, so I don't know what "doctrine", "touchstone" or "Three Pronged Test" makes the clearly unconstitutional into something lawfully constitutional, but that's a lawyer problem.
Beyond practical considerations, like the fact that this makes the FBI into an ethically dubious organization, doesn't doing this kind of thing grate on lawyers and officers who take very solemn oaths against doing bad things? Clearly, this will have undesired side effects, and make the police into something even less trustworthy than they already are.
Probable cause is an under-appreciated term. Every American should know what it means. You'll see actors in movies and TV say to other actors posing as police that they can't enter their home without a warrant and that's not true.
Police can enter your home without a warrant, all they need is probable cause.
What is probable cause? Probable cause is when a police officer is 51% sure that a crime is occurring. So if police are 51% sure a crime is occurring in your home, they can enter, you can't stop them, and anything illegal they see in line of sight can be used to prosecute you.
Police can only go to the place where they think the crime is occurring, so if they have cause to think someone is doing something illegal on the first floor of your house, they can't climb the stairs to the second floor to find illegal drugs in your bedroom closet. Probable cause doesn't give police leeway to search your entire house just because they saw someone smoke pot through the kitchen window.
Having said that, when I say police can only go to the first floor, that really only means that they'll have a lot of trouble convincing prosecutors and judges that anything they found on the second floor will be admissible; they can physically go up to your second floor, and you should not stop them.
What police will do is try to use line of sight to find anchors to get deeper into a house. So if they barge in because a suspect ran into the living room, they'll try to find other things that will give them 51% probable cause to move further through the house.
tl;dr: Probably never try to physically or even verbally stop police from searching your house, car and person and property. They don't need a warrant, they just need to think something is up. The movies and TV are not reality.
Source: Some administration of justice classes in school, experience with police.
And even in the event that they do need a warrant, warrants can be issued easily through email and fax machines that are located in every single cop car.
So... if you deny the search, the police can often call a judge to write up a warrant really quick. Judges work overtime, on weekends and on night shifts for this kind of thing.
This seems not quite right to me... you need probable cause to get a warrant. You can't skip the warrant just because you think you have probable cause.
However, there are 'exigent circumstances' where the police can enter your house without a warrant. I think that comes down to things like hearing screams for help, preventing destruction of evidence, etc.
> tl;dr: Probably never try to physically or even verbally stop police from searching your house, car and person and property.
Actually, my understanding is you should always verbally explicitly refuse to consent to a search. But if they search anyway, of course you do not physically try to stop them. They will probably claim you gave consent and misplace the recording, but it's the best you can do.
Conversely, you do not need to give permission to police just because they ask. Even at police checkpoints, they do not have the right, without probable cause or your permission, to search your vehicle. If we don't stand up for our rights, we will lose them.
(Now, realistically, if you say "no", they will bring a dog over that they've trained to signal, thereby giving them "probable cause".)
If you say no and they say 'get out of the way', you had better get out of the way though; you can't physically prevent the police from searching your car if they decide to do it anyway. All you can do is hope your lawyer fights any evidence they get that way.
This is why a wise person accepts entry upon prepayment of a sum of money, as this allows the homeowner and police to both stay in honor, provided consideration is then given promptly thereafter.
> a proposal made public yesterday that would give federal agents greater leeway to secretly access suspected criminals’ computers in bunches, not simply one at a time.
"Secretly access" and "in bunches". Sounds rather like a general warrant to me, and without probable cause, except maybe to fearful know-nothings, where "hacking" is essentially the same as "magic".
Did you read the proposal? Or are you one to only gather information from biased news articles? Hell, the proposal isn't even up for public review yet.
> If the standing committee agrees to take up the matter, the proposal would be opened for public comment in August for six months. It could be amended before the comment period begins and would eventually need to be reviewed by Congress for changes.
We can't even _look_ at the primary source material until August, and you've already got an opinion on it. Until then, I'm going to post sarcastically to privacy advocates who likely haven't even read the damn thing yet.
And you've already closed your mind and think it's an "unconstitutional general warrant". Hah.
Out of curiosity - all I see in the news & on the boards is backlash against the government when these sorts of programs come up. Can anyone provide thoughtful suggestions on how the fundamental adaptation the feds are seeking - improved ability to conduct digital espionage - could be implemented in a positive way?
I do think the notion that traditional rules for search & seizure need to be updated for the modern age might hold some water. For example, destruction of evidence has become easier, and catching someone "in the act" is probably harder.
I don't believe in the idea that the feds should have zero access & zero surveillance ability. Unfortunately I am unsure how to give the ability to match (in spirit, not in letter) the functions they have in the real world without incurring a big privacy hazard.
"...no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
I'm tickled that they're actually getting warrants sometimes, but this sort of warrant doesn't sound like it would meet that test.
If you can't describe a physical location of your search, you're opening up all sorts of potential for mischief that would play off the technical ignorance of some judges and the inherent interconnectedness of the Internet.
"Machines connecting to the target machine or relaying traffic for the target machine", etc.
You have to be a pretty smart guy to become a judge and pass the whole confirmation grilling.
And from what I have seen recently, some of the judiciary have picked up enough tech know-how to not be fooled into rubber-stamping.
But you need wide programs to change the culture of the police departments and prosecutors. If "I was doing my job / following orders" is not an acceptable excuse at trial, it cannot justify putting the criminal behind bars by any means necessary.
“The proposed amendment would enable investigators to conduct a search and seize electronically stored information by remotely installing software on a large number of affected victim computers pursuant to one warrant issued by a single judge”
I wonder if we'll start to see researchers/people come across more things like this in the wild? Which makes me wonder if federal agents are going to be enlarging the attack surface against their own systems?
I believe there have already been at least 2 cases of the FBI using the very mass exploitation technique discussed in the article. The 2012 investigation dubbed "Operation Torpedo"[0] and the Freedom Hosting exploit in 2013 dubbed "Torsploit"[1].
There were 25 arrests in the first operation, and those suspects are currently fighting the search warrant because the FBI failed to give notice to those who had the virtual search warrant executed on them within the required 30 days.
Nothing has really come of the second operation (after over 10 months now) except for an arrest of someone the FBI was able to identify without the help of their exploit (they used the same user name on Tor as they did on the clearnet).
If your quote means that one warrant can force a victim of hacking to install FBI spyware on their systems, this is very troubling, and seems like it would dissuade many organizations from reporting a hack at all, because the cure will be worse than the disease.
I think this violates the 3rd Amendment too. The government wants to (among other things) be able to secretly monitor laptop cameras of people never suspected of wrongdoing. That's basically an electronic-age version of this 3rd Amendment case: http://www.volokh.com/2013/07/04/a-real-live-third-amendment...
So if a botnet operator infects my machine, gaining access to my files... and then the FBI gets a warrant to install further software on my machine, ostensibly to investigate the people operating the botnet, haven't they just gotten a "warrant" that entitles them to everything on my machine, independent of who owns it?
"secretly access suspected criminals’ computers in bunches, not simply one at a time."
Note the word "suspected". I really don't think it's a good idea to have a legal standard granting authority to a government worker when that standard is based, to one degree or another, on the level of that government worker's paranoia.
Sometimes I wish law enforcement would just suck it up, grow some balls and start pushing for the wholesale repeal of the 4th amendment by constitutional amendment. I mean, stop fucking around guys - I thought you guys were tough.
[1] - https://www.eff.org/deeplinks/2014/03/los-angeles-cops-argue...