Signal's Cellebrite Hack Is Already Causing Grief for the Law (gizmodo.com)
389 points by gumby on April 27, 2021 | 155 comments



In a previous role, law enforcement would send over requests for information from time to time. Once my scripts became better at gathering information, the requests turned from highly specific (ex: details about this individual, on this date and time, doing this particular action) to blanket statements (we think this person is called X or Y - can we get all info covering these two years).

General security/crime-prevention concerns were given as the reasons for these searches. I voiced my concerns to my managers, but they said that as long as we had a paper trail which clearly tied each request to the info provided, they didn't really care. I left the place shortly afterwards, but it highlights, I think, how quickly these things spiral out of control: how the specific turns general, how genuine belief turns into mere suspicion or blanket statements (ex - everyone in THAT area is a criminal, so we need all the info).


It also highlights the bias in law enforcement/intelligence that "stopping bad things from happening" is a good enough rationalisation to justify a lot of morally grey behaviour, at best. As a colleague once told me before starting on a support ticket for the federal police: "be careful, because they're trained to look for a suspect, not a root cause or solution".

What approximate time did you work there? Did you perhaps get to experience the difference between the situation before and after the Patriot Act, or before and after the emergence of ISIS? Or was this just the usual scope creep?

I'm old enough to have witnessed 9/11 live on TV and, as I see it, there has been a downward spiral in the areas of privacy, the bar for probable cause, and the presumption of innocence - particularly in the US, but almost equally egregiously here in Europe. Honestly, COVID isn't helping either, because it's just enough of a threat to be considered a security concern.


This was a few years ago in Europe, not the US. I think this was one of the positive aspects, since law enforcement here (especially when dealing with people from different countries) would have different expectations for the type of information they could receive or ask for.

A representative from Italy might ask for one specific element of information, another one from Germany might look for associative information (who a person is and who they were with), and one from the UK might ask for things as precise as the hour and minute at which a particular action occurred (this is due to the high number of cameras that cover London, for example, which could then be used to match movement around a region, captured on surveillance, to the data I would provide).

I would say most of the requests were reasonable from my point of view given the background they would provide - I guess it's also down to the person requesting the information. But as time progressed they figured out that I was able to get information or that we had information on a wide variety of elements... As I said initially, it goes from "Find info on John Doe, with an email account jdoe@email.com, with a date of birth of 33/33/2033 who did this activity at 11:30 PM on 01/01/2053" to "We have this email which is either jdoe@email or jdoe@gmail but we're not sure. Can you check all such occurrences?".

I think the point is also that Europe isn't THAT much better and while people might wave privacy laws and GDPR as proof of Europe's "superiority", I think a lot of those things look good on the surface, but they still don't protect you as much as you'd think.


It is dangerous, especially without warrants and without massive data trails of who requested the data and why, who looked it up, what they checked, and more. Not just from the law/security side, but from people in your position, probably at a security firm that some of this is outsourced to, or a government agency that has these capabilities.

For instance, take someone in your role, or even a team of someone from enforcement plus someone in your role: if one or two individuals in that situation are corrupt, it can turn into a blackmail or business-espionage scenario. Since requests/warrants are suppressed to keep a low profile, even finding out what information was looked up, and how, would be difficult, making it easy for scammers or organized criminals to get away with it. Someone could sell access to information and start targeting many people. Scams could be run on the wealthy, or small competitors suppressed.

The other side to this is: I wonder whether you worked with anyone who, or whether people in your role ever, looked up information on people they know or on potential competitors/enemies to use against them, even if those people did nothing wrong? What kind of oversight would be there to watch for that, especially in the more underground operations in law enforcement or national security? Cops in most places can now look up anything on your phone. Are they doing it to family/friends/business people they want some blackmail or an upper hand on? Are they spying on people for their own benefit, not for a crime? Probabilistically, yes.

The gaping hole in security is the human: a few bad people with access to that data and you have authoritarian powers, never before seen in history, ready to be abused. Giving individuals that much power will end badly; there is no way someone could control themselves if there is little oversight. If a law enforcement officer or security/intel professional has to provide no trail of that access, it will be abused. It will be abused even with the trail.


As a European: Privacy here is much better in the commercial world due to things like GDPR. But law enforcement and other parts of government aren't much better than the US in terms of data collection, and in some cases are even much worse. GDPR mostly doesn't apply to law enforcement, and unlike the US there are far fewer "intelligence services shall not target our own citizens" kinds of rules, which the US at least has in some form.


Also, in most parts of Europe, there is no "fruit of the poisonous tree" doctrine, meaning that law enforcement can use all the ill-gotten evidence they like, with extremely rare exceptions.


Parallel construction substantially negates the perceived penalty for such evidence though, so it’s still frequently sought, even when the police know it can’t be used directly.


I believe we, as developers, need to stop contributing our expertise to enabling such blatant surveillance, which hurts much more than it helps. We're ruining our own and our children's future by writing the code that enables this, by teaching people how to write such code, how to test such code, how to deliver such code. It's on us.


You can't teach someone how NOT to write a program, speaking on the technical side. The more you know the more capable your programs can be. It's about one's morals after that.


> The company claimed that the patches had “been released to address a recently identified security vulnerability. The security patch strengthens the protections of the solutions.” However, Vice also reports that the company did not “specifically say whether the addressed vulnerability is one and the same as the one disclosed by Marlinspike.”

IIRC Marlinspike didn't disclose one vulnerability, but a massive class of them: Cellebrite must process media in every possible format and thus relies on components from outside sources, FFmpeg being just one example. Cellebrite inherits security vulnerabilities from all those components, and doesn't appear to be diligent in keeping them patched (not that a perfect record of patching would protect them - the attack surface is enormous and probably most component developers don't provide sufficient security).

Integrating insecure code is likely a necessary decision for Cellebrite: Writing, maintaining, and updating their own code to process all types of media, and keep up-to-date with all changes to all those types of the media, seems impossible and certainly uneconomical. What could they do? Maybe they could sandbox each component but then they'd need a lot of horsepower in their device.


An interesting problem is that even if you sandbox everything perfectly, you still need to get accurate information out of the sandbox. If a sandboxed parser was compromised, you still can't trust that it reported true information to you.

A sandbox prevents malicious code from escaping, but not from lying to you.


Agreed - it just limits the damage.


>Writing, maintaining, and updating their own code to process all types of media, and keep up-to-date with all changes to all those types of the media, seems impossible and certainly uneconomical. What could they do?

There is a middle ground of "force update software on the device more than once a decade." Yes, FFmpeg will always have vulnerabilities, and recreating their work would cost an enormous amount and be less secure. Properly distributing security patches is an affordable alternative.


There is no abstract security benefit from updating some software; updates just change things. Software security is like a bubbly shell with holes and no thickness: patching a random hole means nothing.


It changes whether I could exploit the device. Realistically, I have no way of developing a new hole in their software, but I'm capable of becoming a script kiddie if I wanted to.


The "patch" is to not use the physical extraction thingy on iPhones anymore: https://www.google.com/amp/s/9to5mac.com/2021/04/27/cellebri...


Wow, that seems really obfuscatory. It's implausible that they only have a vulnerability to iPhone devices, but I guess that's the line of argumentation they'd use to limit the fallout. Sucks to be cellebrite.


I think it's related to the stolen Apple DLLs in their product. Any competent legal department would tell them to remove those immediately, because you don't want to get in a lawsuit with Apple over such a clear case.


It does seem very clearly a permissible use. Corellium beat Apple's copyright claims and they did far more than just embed a DLL.


That is incorrect, Corellium does not ship Apple code.


My understanding was that the copyright portion of the suit was over them doing exactly that? Maybe running it in the cloud vs shipping it, but e.g. https://techcrunch.com/2020/12/29/apples-lawsuit-against-cor... mentions iOS IPSW files.

Edit: it seems like they downloaded the file from Apple's site. The order says using the IPSW file was fair use, and that doesn't appear to depend on them downloading it vs shipping it. I don't think that makes a difference. The Cellebrite usage is also fair use for the same reasons (it doesn't compete with Apple, transformative usage, etc.)


> they'd need a lot of horsepower in their device.

Googling "cellebrite forensic extraction device" shows a rugged laptop-like device in a case the size of a desktop PC. I'm not sure if that's the real thing or not, but in any case it doesn't look like space is a concern. There's plenty of room in that suitcase for a modern CPU, lots and lots RAM, and as much battery power as you need.


The « rugged » part usually translates to thermal and reliability constraints, which are traded off against raw performance.


The friggin device probably costs over $100,000 and comes with a full-time support contract. They could add an extra $500 worth of CPU and RAM to the device, assuming they can find a competent evil kernel dev to do the work.


You'll find that dollars don't actually override physics. If you can't cool your extra fasty CPU, it's not going to be very fasty.

For reference, look at Panasonic's "fully rugged" Toughbooks. A baseline 31 will cost you $3700 for a 7300U, 16GB RAM, a 256GB SSD, and a 13.1" 1024x768 display. On base spec it'd be an $800 laptop, if you could even find one with a screen that bad and a CPU that outdated.

Except you can drop the 31 from 6ft, you can literally hose it down, you can rub it in a dirt pit, you can drive on it with a truck, and it's expected to survive and work fine. It also has a baseline 20h battery. That resistance, that ruggedness, comes with limitations. And it's not very useful to have an indestructible laptop if a gentle shake kills the electronics either.

Hence "thermal and reliability concerns". You look for parts you can cool with a relatively low thermal budget — which limits your performances — and you look for parts which have been designed and manufactured for industrial resistances, which also limits your performances. No matter how you slice it, a GTK Boxer won't outrace a Bugatti Chiron.


The parent comment means that hardware running in a rugged container is typically limited by thermal constraints. It's not the $500 that's the problem; a faster CPU could overheat in that packaging.


It costs around $10k.


And because Marlinspike specifically mentioned FFmpeg by name, I think the vulnerability was not in there.

At least that's what I would do.


He said it was an FFMPEG snapshot from 2012, which has had dozens of vulnerabilities since.


He even said that the vulnerability was not in there.

But since he advertised that one, anyone wanting to compromise Cellebrite has an easy target. So Cellebrite has to fix it.


Sandboxed code is not slower than unsandboxed code


Really? It seems like there should be some overhead ex. to verify whether each operation is allowed. Or does it not work like that?


The modern Linux kernel allows you to sandbox applications natively [0]. There are tools that make this easy to set up, for instance [1].

[0]: https://blog.cloudflare.com/sandboxing-in-linux-with-zero-li...

[1]: https://firejail.wordpress.com/


I use firejail and have the standard redirects in /usr/local/bin for most firejail-supported applications, but the performance hit is definitely noticeable. I'm on an i7-8650U laptop with 16 GB of RAM.

Sandboxing _can_ be without overhead, but in firejail's case it definitely isn't.


> performance hit is definitely noticeable

Could you please name which application, and how do you notice it?


I notice it by how long it takes to start applications when I run them through firejail vs. when I run them directly.

I don't know if it also makes a difference during execution.

EDIT: it's kinda amusing to get downvoted on this.

a) There's no performance issue. Just use linux sandboxes

b) There are these possible performance hits

c) I use those sandboxes and I have observed X performance issues.

a) Your performance issues are not valid performance issues, because I say so.

Ok, gotcha


That's really a caching issue. Normally dynamically linked bins share a lot of code pages (many are already loaded) due to SOs. In jailed apps these are duplicated and have to be loaded again for each app.


It’s a caching issue that manifests itself as a performance difference, which is all GP claimed.


I'm using a low-spec laptop, so I usually notice these things before others. With Firejail I don't notice any performance hit at all on any of the apps I've used: Telegram, various windows games under Wine, Chromium.


Those sandboxes are certainly not free to set up (internal kernel locking) fwiw.


There is some overhead, depending on how you define isolation.

seccomp has very little impact but it's not 0:

https://wiki.tizen.org/Security:Seccomp#Performance_Analysis

firejail or systemd isolation techniques (using a variety of methods such as mounting a private version of directories or initiating ACLs at application start) are extremely lightweight but not 0. Most processes (regardless of desktop or server) usually live long enough (more than a few seconds anyway) to justify any theoretical impact. Even firejail on an ancient desktop system will not change things so that it's noticeable by a user, imo.


To expand on vngzs's post, many key Linux syscalls (such as opening a socket/file descriptor) always check if a bit is set in a capabilities bitfield, so the only extra overhead is the one-time syscall to drop some set of capabilities.

Now, there is some overhead in forking a child process to have different capabilities than the parent, and some overhead in using a pipe/shared memory to communicate between the sandboxed child and the parent, but that overhead should be tiny compared to the work of decoding compressed video, etc.
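Concretely, the pattern looks something like this; a minimal Python sketch, not any real product's code (parse_media is just a made-up stand-in for the risky parser, and a real setup would also drop privileges and install a seccomp filter in the child before parsing):

    # Parse untrusted bytes in a child process and report the result back
    # over a pipe, so a compromised parser stays contained in the child.
    import json, os

    def parse_media(blob):                   # hypothetical untrusted parser
        return {"length": len(blob)}

    def parse_in_child(blob):
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                         # child: do the risky work
            os.close(r)
            try:
                result = parse_media(blob)
                os.write(w, json.dumps(result).encode())
            finally:
                os._exit(0)
        os.close(w)                          # parent: only reads the result
        with os.fdopen(r, "rb") as pipe_in:
            data = pipe_in.read()
        os.waitpid(pid, 0)
        return json.loads(data) if data else None

    print(parse_in_child(b"\x00" * 1024))

The per-file cost is a fork, a pipe, and a JSON round-trip, which is tiny next to actually decoding media.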


Speaking theoretically: If you want to isolate an application, it seems you need extra resources to run the isolation functions, to handle any trade-offs they introduce (it's hard to believe it's zero), and to reproduce needed functionality and services that are outside the isolated space.

Essentially, I'm imagining a bunch of isolated VMs. How is that wrong in practice?


There's a bunch of isolation that you're already paying the price of.

Your OS already has a layer that checks, when a process tries to open a file or access a bit of hardware, whether you have permission to access it. Your CPU has a layer that checks, when you try to read or write a given memory address, whether you have the right permissions. And so on.

A lot of sandboxing amounts to making those existing checks more restrictive - telling the OS "If this program running as John Doe asks to open a file, instead of treating it like John Doe treat it like someone with no files at all" - which is no slower than what the OS was doing anyway.

The sandboxing that comes with performance penalties is generally doing more than just sandboxing - for example, Ubuntu's 'snaps' have sandboxing, but they also duplicate a bunch of libraries for compatibility reasons.


On Linux, namespaces and seccomp have minimal overhead. seccomp can filter syscalls directly, so if you just need to convert formats, you could load the library in a helper process and drop privileges. Sure the helper process would require more resources, but it shouldn't take much. For more sophisticated isolation, Linux namespaces can be used.

Out of boredom, I started 400 containers with podman running only bash in Ubuntu. It used less than 1 GB of RAM in total. podman is likely overkill if you just want sandboxing, however.

I'm sure there are still tradeoffs to containerization but at least the overhead is minimal.


>Previously, data extraction was primarily used in only certain types of cases—typically child pornography or, sometimes, drug offenses. Now, however, cops’ first move is typically to find some sort of incriminating evidence on a suspect’s cell phone, he said, regardless of what kind of case it is.

...What? Are cops frequently scanning people’s entire phones as a first move now? I had heard that this occasionally happened at borders or with serious crimes, but I had no idea it was this widespread.


> ...What? Are cops frequently scanning people’s entire phones as a first move now?

Some people I know who were arrested during the series of Portland protests last year, whether or not they were even active participants... (you may have heard of some people being snatched from the streets into vans, for example)

...mentioned that their phones were missing their MicroSD cards when they got their items back.

Let that sink in.


Well...this is exactly how power corrupts, right? If you have a power, you're bound to use it.

Give a general a jet plane, and he will sooner or later bomb some people with it.

Give a politician the power to classify information, and sooner or later he will use it to cover up some shady stuff he did.

Give a cop the power to scan a phone, and sooner or later he will scan every phone he encounters.

This is exactly why the government should not get a "backdoor" into E2E encrypted chat: today it will be used to prevent a terrorist attack, tomorrow to send your mom to a gulag for thoughtcrime.


A disappointingly large amount of policing is picking a "suspect" and then trying to find a crime.


[flagged]


A previous line manager of mine was a former cop. I asked him one time about the thought process that is behind the choice to arrest or not.

Nobody — nobody — knows the entire legal code, because it is much too big. Cops necessarily have to look for suspicious behaviour and then afterwards see what fits.

I don’t like this at all, I want all legal codes to be simple enough for the average person to know their legal rights and legal responsibilities, but that’s the status quo.


Just look on a global scale. Nothing new here.


"Global scale" is not a citation.

Please show data that "a large amount" of policing is done this way, keeping in mind most of these arguments are in the context of the US.


[citation needed] is not a comment


Are you misquoting them on purpose? They said "A disappointingly large amount" not "a large amount".


Nope, HN doesn't show parent comments in threaded discussions and I'm on mobile with noprocrast. I'm sorry I missed one word, although it doesn't really affect my point at all.


> Nope, HN doesn't show parent comments in threaded discussions and I'm on mobile with noprocrast.

I haven’t used the noprocrast feature. But there is a “parent” link to see further up a thread when checking discussion replies. The description of noprocrast doesn’t seem to describe removing that.

TIL about noprocrast being a thing.

https://news.ycombinator.com/newsfaq.html


Oh even better: they take custody of your phone, just because they can.

A turned-off iPhone is completely unusable for most of their purposes, but it will still be taken and only given back after x months.


They can see what traffic stops, and then what starts when you pick it back up.


You say that as if the person with the confiscated device isn't going to get a replacement device. People can't go 15 minutes without looking at their device, let alone x months until they get it back. Also, personally, if I did receive a confiscated device back, I would only consider it useful as a trade-in. There's no way in hell I'd ever use it again, especially if it is a laptop.


After a car crash, the police may search your device for evidence of inattentive driving. Anything else they find on it is fair game.


Naively, I had always assumed this would be a first move from them.


Given what we know about how law enforcement really acts, it's not naive.


Makes me sick to read. 1984 is here.


There's a reason that I rarely carry my tracking device with me when I leave my house; I only take it when I head to another city. I don't see the need to, as it doesn't actually provide a lot of benefit, and it's fine to be without it for a few hours at a time. Sure, if I need mapping software it does - but the reality is that I want to have conversations with people at the times I want, not all the time. I also generally try to live notification-free nowadays (clearly this ebbs and flows).

This works surprisingly effectively even with a family, purely by running a voip client on my desktop, and ensuring my mobile forwards to my voip line, meaning I can still receive calls as necessary from schools, daycares, etc.

It's an experiment I intend to keep pushing further and further (and clearly also to pull back at times), but a fun one.


If the cops get their hands on your phone they will search it. No different than a backpack.


I know multiple people, unconnected from each other except through me, who've encountered this. Time to live in NZ I guess.



> Are cops frequently scanning people’s entire phones as a first move now?

Cops do whatever they want

> I had heard that this occasionally happened at borders or with serious crimes, but I had no idea it was this widespread.

See the first point


In the physical world we take measures to ensure evidence is not tampered with, like tamper-"resistant" packaging, etc. We don't have any such measures at all for digital "evidence." And what exactly is digital evidence, and how do we keep and maintain its integrity from beginning to end? We have trouble maintaining the security of mission-critical systems. How are we going to ensure there are no rogue elements in this process who conveniently can and will digitally frame us? They already use fake evidence in the physical world by planting stuff on us. Digital planting is so much easier.

On a similar note, nation state hackers already have false flag tools in their arsenal, where they “plant” evidence of other nations doing an attack.

https://en.wikipedia.org/wiki/Vault_7


"We don't have any such measures at all for the digital "evidence"."

This has not stopped courts from accepting digital evidence and the creation of a "digital forensics" industry. Perhaps the only persons who could successfully call this into question are the same people, so-called "experts", who are supporting its continued existence.


Right now, there are numerous trials across Europe (and other places too, of course) based in large part on the fallout of the EncoChat hack. https://en.wikipedia.org/wiki/EncroChat

At least in Sweden (where I can follow the discussions easily) the defence are raising questions about the legality and, to some extent, the validity of the evidence from EncroChat. So far it seems the courts are accepting the evidence.

Also, it is quite amusing to read the EncroChat logs from some of these trials. The user names selected and the messages sent sometimes show very bad opsec.


> Also, it is quite amusing to read the EncroChat logs from some of these trials.

That's interesting, where can I find those?


I'm not sure digital forensics is invalid, but it certainly needs a process for validating that the evidence wasn't tampered with, given how much easier tampering is. Independent review/auditing is insufficient because it could be fooled by tampering. Independent oversight, though, perhaps could work.

Imagine a scenario where FBI Bob and EFF Alice show up to image a suspect's hard drive. Both of them image it, Alice hashes it and throws away the image, Bob keeps the image; then if there's a dispute, Alice is called in and provides her hash, and if Bob's image doesn't match it, the tampering alarms go off.


Hashes of disk images are definitely recorded and stored as part of the forensics evidence trail. Being able to prove that the files you got came from the hard drive of the suspect means essentially a physical chain of custody until the image is taken, recording what was taken, and a digital chain with hashes/signatures starting from the imaging.

The hash verification you describe happens (without independent oversight, but enough to protect against a single bribed/malicious officer) and it does protect from post-factum altering of any digital evidence; you try to do the early (physical) parts quickly, and the majority of the analysis work comes after you have the images and their hashes - but everything from that analysis can be reproduced from the verified images if it's disputed.
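The verification step itself is nothing exotic; a toy Python sketch (file names are made up for the example):

    # Record a SHA-256 digest when the disk image is acquired, then verify
    # the copy used for analysis against that independently stored hash.
    import hashlib

    def sha256_of(path, chunk=1024 * 1024):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    recorded_at_acquisition = sha256_of("suspect_drive.img")   # stored offline
    later_copy = sha256_of("evidence_copy.img")                # what analysis used

    if later_copy != recorded_at_acquisition:
        print("tampering alarm: image no longer matches the acquisition hash")
    else:
        print("image matches the independently recorded hash")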


For this to work, FBI Bob and EFF Alice need to trust each other. Otherwise, if FBI Bob gets to connect to the hard drive first, the imaging process could tamper with the contents before EFF Alice has a chance to do the hashing. And if instead EFF Alice connects first, she could be wiping the incriminating data before taking her hash and FBI Bob imaging anything.

Of course, if they already trust each other (including all the used tools), then the whole exercise is moot.


Both parties could be connected to a bridge device that only allows read commands to pass through the actual drive.

They would need to trust that the bridge is not malicious. And that is a whole other rabbit hole. But I think it is possible for them to attest the firmware/gateware running on it, through some convoluted cryptographic ceremony.


What you could do, theoretically, is optically isolate the drive from the bridge, and have an optical splitter on the reader-to-drive direction whose signal is recorded by both parties.

They'd get a live notification of tampering, and could have independent signal blockers that could physically block the command from actually arriving at the drive (assuming they use a few hundred meters of coiled fiber as a delay line).

The receiver on the drive side could even be a single photodiode, which could be made to allow easy verification with, say, an electron microscope if you're really paranoid. There are probably ways to use field-suitable technology if you only need to ensure the photodiode has the same structure as what you expect.

Cryptography won't help you with trusting hardware. Delays and intervention-ability would help, though.


Not sure how optics would help. If your wire has some kind of side channel, optics can have it too. If you want to detect stray mysterious photons then you could do the same with electrons.

But in any case, this is not really the level of concern here. It's equipment that tampers with the device. The only way to be sure is to roll your own. Which holds for both sides. So the perfect system needs to be created by two adverse parties, which means it's impossible to do. QED.

(In the real world with physical proof this is different since tampering is much harder and it's a problem worked on for centuries. It's not bullet proof either but much more mature.)


Usually they use things like write blockers to prevent accidental modification of the data, at least with hard drives.


> they use things like write blockers

Q: How could you verify that a write blocker was actually used, and that it functioned as described?


Just as with most things for physical evidence: a paper trail documenting what was done, by whom, and when, plus the testimony of these people. Think the equivalent of "how do you verify that the fingerprints shown as exhibit A were actually taken off that gun?" - if it's disputed, the forensics expert testifies to what they did, and without very specific counterarguments the juries generally trust that it's true.


They could be extracting the data once into a Merkle tree and then peruse that. I've got to believe the reason they don't is that various intelligence agencies don't want them to.


You could reconstruct that any way you want to later.


Yes, you'd need to have separate storage for the intermediate product (or at least periodic incremental hashes) to ensure against tampering. The meatspace chain-of-custody rules would likely be adequate (at least to the current state of the legal art) to handle the two, were they stored separately.


This would require regulation of these devices, and they would have to have adequate hardware protection of the keys used to secure these Merkle hash trees.


Not really: if you copied them to a pair of USB keys, those could be handled like other physical evidence.

The point of the Merkle tree in this case is just to make tampering much more difficult/easier to suspect; independent entities could hash the result and compare for evidence of tampering.

This isn’t a case of absolute perfection; this is bringing things up to the current standard (aka “state of the art”) WRT courts and police procedure.

Sure, one could design all sorts of additional mechanisms (error correcting codes in the trees etc) but realistically, it’s tamper detection that matters, and it only needs to be as good as paper, candlesticks, fingerprints (sigh) or whatever else is already customary in the evidence room.
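For illustration, a minimal Python Merkle-root sketch over some made-up extracted records (not any vendor's actual format):

    # Hash each extracted record, pair the hashes up level by level, and keep
    # only the root. Independent parties who each compute the root over their
    # copy can compare a single value to detect tampering with any record.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(records):
        level = [h(r) for r in records]
        if not level:
            return h(b"")
        while len(level) > 1:
            if len(level) % 2:             # duplicate last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    extracted = [b"sms:2021-04-27 hello", b"photo:IMG_0001.jpg", b"call:+1555..."]
    print(merkle_root(extracted).hex())    # one hash commits to everything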


If you have no cryptographic protection then it's just a matter of writing code to tamper with the thing leaving no evidence of that tampering behind.


It's worth noting that the link you've posted is mostly about the CIA, and it debunks their use of false flag tools.


> We don’t have any such measures at all for the digital “evidence.”

This is false. The chain of custody and integrity of digital evidence is pretty much a solved problem. Digital forensics is a fairly mature field.

This is not to say there aren't issues with some of the tools, such as Cellebrite's clown show.


Your two paragraphs are contradicting each other.

The mere possibility that a "clown show" of that magnitude can exist is evidence for the lack of a solution to the problem of digital forensics.


This clown fiesta on Cellebrite's side is an example of the issues of trusting a compiler that Ken Thompson pointed out all those years ago[1].

Trust and Computers don't go hand in hand [2].

[1] https://softwareengineering.stackexchange.com/questions/1947...

[2] https://pluralistic.net/2020/12/05/trusting-trust/#thompsons...


This actually happened with someone way back when: https://www.quora.com/What-is-a-coders-worst-nightmare/answe...


Pretty sure we had to do the "trusting trust" patch set as a homework assignment during my CS undergrad. It's not actually super complicated. Backdoor login.c when you see it, and backdoor GCC when you see it.

Sounds like someone took code from their homework assignment and added a few fun extra credit features to screw with their psychology professor.

I can imagine how it would be pretty confusing to stumble across if you hadn't read the paper though :-)


"brutally dunked on Cellebrite" why is this phrase showing up in so many news articles now? It's an old phrase, but I swear I've seen it on the titles of hacker news articles like 10 times last week. Such a annoying phrase, at that. It makes everything sound like it was just a witty comeback on twitter.


I notice this a lot in journalism in general, and I assume it’s mostly a case of “journalist A uses a phrase, other journalists read it and like it so decide to use it in their next article”.


A great example of this is the word "refusenik". Originally a term used to refer to those refused permission to emigrate from the USSR, but weirdly the British press began using it to mean "someone who doesn't want to do something". That is basically what that word means now in English, you'd probably struggle to find someone using it who knows the original meaning.


The tech world does that a lot. Suddenly every developer is a polyglot and every framework is opinionated.


Cliches considered harmful.


On a similar note, it seems a lot of media articles copy and paste the same dumb phrases. Search any trending news and half of them are quoting the same exact thing in the title, maybe only one or two bother to throw in a synonym.


Largely unsubstantiated hypothesis:

Did you see "The Last Dance?" Popular lockdown fare made the Michael Jordan hagiography more prominent that it would otherwise have been.

Scotty Pippen v Patrick Ewing [1] suddenly became more prominent in the minds of many. We were lacking excitement and drama in our lives. Social media was full of this. Lazy writing follows the zeitgeist. Brutal, disrespectful dunking became more of a thing again...

[1] https://www.youtube.com/watch?v=xtOUpybXmzo


I think this one actually escaped from video games, specifically this video from 2011: https://www.youtube.com/watch?v=r-fIOrUTIOQ


After a Snickers commercial in 2009, the phrase was repeated and popularized by the general public.

https://youtube.com/watch?v=qarH-PTKulQ


Yeah, I'm sure that the Jordan biopic wasn't the source. I am suggesting it's a reason why an aggressive basketball metaphor has regained popularity of late. Could be very wrong, of course.


Words enter and leave common use all the time, lately I’ve also heard dunked a lot and it stood out to me too. Just accepted it as a new term people are using. Another recent one is oooof


Must have been a tough choice for the author between that and "slammed".


"Cellebrite got bodied, yo"


I mean, a motion was filed, but has that actually caused any real issues? Scattershot motions get filed constantly as the process of law unfolds. My understanding was these sorts of tools have case law around them such that they're de facto trusted barring some proof that their defects actually impacted your case.

It'd be like going in and saying that someone could reprogram the red light camera to superfluously give out tickets and record incorrect data. Almost certainly true (there's no way these systems are anywhere near secure), but try that argument in front of a judge and I believe you'll find it won't get you anywhere.


> It'd be like going in and saying that someone could reprogram the red light camera to superfluously give out tickets and record incorrect data.

Over here a guy got a speeding camera fine thrown out because the authority running the cameras couldn't prove the MD5 hash they were using was adequate to verify the evidence.

I can't quickly find a detailed explanation, but here's a reference: https://www.schneier.com/blog/archives/2005/08/the_md5_defen...

If my memory serves, the actual argument was that the MD5 hash was only covering the photograph, and that the speed/date/time stamps weren't included in the hashed data and therefore _could_ have been tampered with.
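The fix for that specific weakness is simply to hash the whole evidentiary record rather than the photo alone; a Python sketch with made-up field names (and SHA-256 rather than MD5, which is long broken for this purpose):

    # Hash the metadata (in a canonical encoding) together with the image,
    # so altering either one changes the recorded digest.
    import hashlib, json

    record = {
        "speed_kmh": 87,
        "timestamp": "2005-08-01T14:32:10",
        "camera_id": "A12",
    }

    with open("offence_photo.jpg", "rb") as f:
        photo = f.read()

    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode() + photo
    ).hexdigest()
    print(digest)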


DeepFake technology seems like it makes "bearing false witness" into an arms race.

Who does the better job of framing the opponent?

Does X start off getting busted, as though framed by Y, only to have the truth be X framing Y to appear as though framing X?


Not really. Experts can still detect if something is a deep fake, and the punishment for trying to pass one off as evidence is severe.


Adding another area of a court proceeding that relies on "experts" is already a problem. Even when the deep fakes become good enough that they can't reliably detect them, they will continue to claim they can.

And if they wrongly testify that a genuine video is fake, it's much harder to claim that they submitted false evidence.


> Not really. Experts can still detect if something is a deep fake

Currently. Currently, experts only can determine if something is a deep fake.

Considering that we're still in the infancy of deepfakes, this is not a reliable defence against deepfakes being used as evidence.


Yeah, if there's one thing the worldwide politics of the last several years has shown us, it is that punishment is swift and severe when governments knowingly lie to their citizens.

/s


Well they didn't need high tech means to do that, did they?


Lying to citizens and willfully falsifying evidence in a court of law are different things that are treated very differently.


> Experts can still detect if something is a deep fake

For how long?


Hard to predict the future. I predict a very long time (going from 99.9% perfect to 100% is really hard). Regardless though, today is not that day.


DeepFake technology democratizes the ability to produce believable fakes. Before only countries and large institutions could do so.


And that may not be a good thing. Now we won't just hear the occasional lying about WMDs in a far-away country, which is bad, but if your government wants to trick you, they'll succeed nonetheless. Now it can easily be the idiot neighbor who fakes some nonsense about you and damages your reputation for life with the right tools, until the general public has caught up and understands that a video can't be trusted anymore.


The end-game is witness signatures: a third party observes and records at least a signature, so that the message is authenticated to a specific time by a witness. Ideally by multiple witnesses, who all corroborate that the other witnesses saw the same thing.
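A rough sketch of the idea (Python, using Ed25519 from the cryptography package; the keys and message are made up for illustration):

    # Each witness signs the message hash together with the time they saw it,
    # so the message can later be tied to that time by independent parties.
    from datetime import datetime, timezone
    from hashlib import sha256
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    message = b"alleged chat transcript ..."
    witnesses = [Ed25519PrivateKey.generate() for _ in range(3)]

    attestations = []
    for key in witnesses:
        seen_at = datetime.now(timezone.utc).isoformat().encode()
        statement = sha256(message).digest() + b"|" + seen_at
        attestations.append((key.public_key(), seen_at, key.sign(statement)))

    # Anyone holding the message can later check each witness's attestation.
    for pub, seen_at, sig in attestations:
        pub.verify(sig, sha256(message).digest() + b"|" + seen_at)  # raises if forged
    print(f"{len(attestations)} witnesses corroborate the same message")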


One could have written the same thing about Photoshop, couldn't they have?


I suppose, but how many court cases turn on a photo?


I've heard[0] that they do, much more than they should, at least in the US. If that's the case, then there's a more fundamental problem of a justice system that has no clue about how technology works.

We have to first graduate to "sender can write anything in From: header of an e-mail" levels of understanding of technology before tackling deep learning.

--

[0] - I'm very much going by anecdotes here, so if they aren't representative, my whole comment is irrelevant.


A substantial amount of evidence heard in court is outright junk science. Photos are the least of their problems.


It still seems doubtful this kind of defence is going to hold up in court...

> So what you're claiming, Mr Defendant, is that these Signal messages found on your phone were not in fact written by you, but were placed there by a virus that someone else had put onto police equipment when they were previously arrested? And this virus also knew your exact writing style and knew to place incriminating messages to your childhood schoolfriends? And you didn't mention this possibility at your original trial?


If I could install an app on my phone that would destroy the machine being used for Cellebrite, I would do that in a second.


Bruce Schneier reports that Signal will do exactly that (sometimes):

https://www.schneier.com/blog/archives/2021/04/security-vuln...


Not police equipment. The device itself.

Furthermore, the attacker would _want_ to mimic the suspect's exact writing style.

The real problem I would assume is: The software used to produce the incriminating material has been alleged to produce incriminating material out of thin air at will. This certainly should change your perception of the material produced by this software (even if in this case you find it does not change your conclusions).


Remember though that in that situation they don't have to convince the judge that that actually happened - they only have to convince the judge that they should have been able to raise the possibility with the jury.


I wonder if/when we'll see an attack on Cellebrite's tools in the wild. Getting a ransomware trojan onto an airgapped evidence computer would be a neat trick.


Ransomware on an evidence computer could be hilarious... Even more so if it doesn't lock immediately but first waits to connect to a network and spread. Also because you don't want to be standing there when it happens (would get you arrested for a new crime on the spot).

You will need a very good lawyer to explain that you didn't hack them but they fucked up by trying to steal your data and "by accident" ran into a special file you had.

Then make sure you cannot legally be compelled to give them the decryption key. But I guess you should always be able to "forget" the key; you can't be forced to remember something.


That's assuming the individual is even aware that they're the attack vector. Apps can easily write a suitable file to their private storage that will deliver a payload.

And now I'm wondering - since the exploit only requires that the bad file be parsed, couldn't you put an appropriate exploit in place just by sending the file via email, SMS, or AirDrop? As long as it's someplace Cellebrite will see and process it, you're good.


Honestly, we'll probably see a PoC against Autopsy any day...

Budget Android phone containing basic payload against an open source ecosystem and boom... you got a research paper, baby!


There's a team at University of Michigan who can't do their "sending shit patches to the linux kernel" research anymore, surely they'd breeze through ethics signoff on doing this...


Minnesota


Mr. "Marlinspike" could probably earn a few (edit: more) quid as an expert witness thanks to that blog post.


If you look him up you’ll see he already does.


Think of the absurdity of this situation. We all get together and vote to pay one group of people to keep us safe from another group of people, and then cheer for the ability to evade the people we hired.

It sure seems like the overwhelming majority of law-abiding citizens with an opinion disagree with the extent of electronic surveillance (maybe it’s just this echo chamber). So, is democracy working? Are we all grossly misinformed about the terrible things that this prevents? How can democracy work if we are so uninformed?

People talk about abolishing police, and I wonder what would come of that. I believe that the popular will toward some form of community protective services is so strong that it would materialize out of necessity. But in what form? Even the cartels deliver world-class public safety for their territories, but only if you tolerate their own atrocities.

The justification of electronic surveillance is mostly stated in terms of physical crime prevention. This suggests that the forces on the ground are basically incompetent. Any sort of real-world crime that would require large-scale electronic communication involves a lot of people and leaves a lot of physical evidence. It just can’t go on within a community that trusts and invites law enforcement.

Likewise in the electronic world, certain levels of surveillance are welcomed. Most people would probably rather use a platform that recovered funds from hacks and scams. But this doesn’t seem to be its purpose. Actually we have no idea what is being watched or why, and the only thing we observe from officials is political shoe-banging against whatever group of citizens are the objects of today’s moral outrage. So it’s pretty easy to become extremely paranoid, and reject all forms of surveillance in favor of accepting the risk of all those terrible things that it may or may not have prevented.

So if this is the wrong idea, it seems pretty easy to fix with transparency. What do we really gain by operating in the shadows? I see this as somewhat similar to the comparison between proprietary and open source models. The Britannica is dead. Transparency allows trust and collaboration from sources that never would have been able to contribute. And what if criminals knew what evidence is being collected on them? My best guess is that 99% of criminals would give up at the first sign of trouble. Transparency is the ultimate force multiplier. The greatest victory is to win without a fight. We forget this. It feels good to fight and win and get a medal at the award ceremony. Transparent power is boring, safe, democratic.


The data extraction piece is bad enough.

I find the problems of integrity in that data to be worse. Tying right into that narrative is "Coded Bias", a documentary investigating the bias in facial recognition algorithms [0]. Available on Netflix [1].

[0] https://www.codedbias.com/about

[1] https://www.netflix.com/title/81328723


"Now, however, cops’ first move is typically to find some sort of incriminating evidence on a suspect’s cell phone, he said, regardless of what kind of case it is."


But for now, not if the suspect has an iPhone: https://www.google.com/amp/s/9to5mac.com/2021/04/27/cellebri...

Cellebrite has told all customers to stop using the physical device on iPhones. Could be related to the stolen Apple DLLs in their product. Any competent legal department would tell them to remove those immediately, because you don't want to get in a lawsuit with Apple over such a clear case.


The headline might be accurate if the lawyer actually gets a new trial for their client. At this point, zero grief for the law has actually been demonstrated.


The article states "that corrupted apps on a targeted phone could basically overwrite any data extracted by Cellebrite’s tools"

Traditionally, a non-volatile memory chip is forensically read by being removed from the board and read out directly. Each sector is read from its memory registers. This is a robust forensic method since 1) it's passive, 2) it's repeatable, and 3) you can fit the chip into another surrogate device and obtain the same results, or use a different piece of software to obtain the results.

If what Cellebrite is doing is altering the memory when it interrogates it, this breaks the chain of custody. The process cannot be repeated.


The described Cellebrite tool is not used to read chips removed from the board; it attaches to an unopened, unmodified phone over a standard external interface and extracts data over that.


I understand that. My intent was to point out the difference between traditional digital forensics and what Cellebrite does.


Cellebrite's "Physical Analyzer" can parse imported data, including disk images and chip images.


In that case it can't alter any evidence in an undetectable way - it can't alter the disk or chip itself; and it can't alter the image without changing the (already recorded offline) image hashes.


How do you know that? If the imported data pops the host process, it could take over the host and mount the phone read/write.


Naively assuming FFmpeg and others aren't too happy that their hard work gets used in questionable military or LE operations, perhaps a more restrictive license[1] could help fight companies like NSO Group, Cellebrite & Co (and continue the momentum Signal created). Best case, it would make it illegal for them to use the software; worst case, it would make it much harder for them to use public patches.

[1] https://news.ycombinator.com/item?id=14864197 (from the top comment):

> Linked below is an example to restrict military use. Note though, it's so broad reaching that it might scare away even non-military organizations for using your software. And it still doesn't address how you enforce such license. So there's lots of questions about the applicable of this example license. http://web.cs.ucdavis.edu/~rogaway/ocb/license2.pdf

  --------------------------
EDIT: Such a license would probably no longer be fully open source (afaik). Perhaps it's time to create a new breed of license that lets people build systems knowing their work isn't going to be used to guide a missile or get someone killed for their sexual, political or religious beliefs.


Wasn't Cellebrite already violating Apple's copyright?


Allegedly, afaik. The Signal post pointed out a DLL that looked like it belonged to Apple, and if so, it was unclear whether they had permission from Apple.


Then they probably don't give a shit about what the ffmpeg license says.


I don't think a license is going to stop them from adopting the software. Their current FFMpeg is 9 years out of date! That should illustrate the diligence already.


It's not illegal to use outdated software. It would be illegal to use tools without a license. That might very well put pressure on LEA to abandon Cellebrite for a vendor that makes stronger promises[1] here.

Also, LEA could very well now be reminded not to break the law themselves and to put pressure on vendors. Somebody in charge of procurement doesn't usually care much about the finer details of whether it works or is the right tool (they have other people to decide which tools they might find useful to procure). But procurement does take note of licensing cost and its terms.

Also, licensing cost has always been an issue when justifying use of Cellebrite. Any excuse to switch to a different product would just be icing on the cake. E.g. one seat for Cellebrite costs about as much as a Grayshift license for the whole team.

[1] whether that will improve things for the person that finds themselves as the target of these tools is another matter.


> That might very well put pressure on LEA to abandon Cellebrite for a vendor that makes stronger promises[1] here.

Hahaha. I can't wait till HN finds out about how rubbish the other vendors are.



