I am pretty surprised that they allowed IV reuse. A unique IV is explicitly stated as an assumption for AES-GCM (it's the first sentence of the security section of the AES-GCM Wikipedia page).
How could anyone design a TA (i.e. an application whose whole point is security, which is why it runs in the secure world) and still let the user set the IV through the API?
My understanding is that the TLS spec did not enforce a non-repeating nonce; it only suggested one and left the decision to implementers, which led to the vulnerabilities you explored.
This Samsung case is similar in a way: the TEE API gave users of the API a way to set the IV, which it should not have; the TA should make sure the IV is never repeated.
Since you have done prior research in this area, is using a counter for the IV still recommended even when the IV is 12 bytes? I assume the chance of a hardware random number generator (which I assume exists on most phones today) producing a colliding 12-byte random value is pretty low.
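Not the actual TA code, just a sketch under assumptions: the two standard ways (along the lines of NIST SP 800-38D) to produce a never-repeating 96-bit GCM IV inside the TA itself, so the caller never supplies one. The function and field names are made up for illustration.

    /* Hedged sketch, not the Samsung TA: generate the 96-bit GCM IV
     * inside the TA so callers cannot supply (or repeat) it. */
    #include <stdint.h>
    #include <string.h>

    /* Option 1 (deterministic): a fixed field plus a monotonically
     * increasing invocation counter, which must be persisted so it
     * never repeats across reboots. */
    static void iv_from_counter(uint8_t iv[12], uint32_t fixed_field,
                                uint64_t invocation_count)
    {
        memcpy(iv, &fixed_field, sizeof(fixed_field));                /* 4 bytes */
        memcpy(iv + 4, &invocation_count, sizeof(invocation_count));  /* 8 bytes */
    }

    /* Option 2 (random): 96 bits from the hardware RNG. Collisions
     * follow the birthday bound, which is why SP 800-38D caps the
     * random-IV construction at 2^32 invocations per key.
     * hw_rng_read() is a hypothetical helper. */
    /* static void iv_from_hwrng(uint8_t iv[12]) { hw_rng_read(iv, 12); } */

With 96 random bits a collision only becomes likely around 2^48 encryptions, so the random approach is fine in practice as long as the per-key invocation count stays far below that; the counter approach removes even that residual risk, but it needs reliable persistent storage for the counter.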
That isn't pragmatic; it silently breaks programs that rely on specified behavior just to fix one of the many self-inflicted security issues polkit has had over the last decade.
The sysctl could have three settings: 0 to do nothing, 1 to emit a warning, and 2 to fully enable the patch that blocks argc == 0. Use 1 by default so as not to break userspace, and let people opt in to 2 for the additional security.
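A rough standalone sketch of what that three-way policy could look like; the names are hypothetical, and this models the check as an ordinary C program rather than actual fs/exec.c code.

    /* Hedged sketch, not the real kernel patch: model the proposed
     * 0/1/2 policy for execve() calls made with argc == 0. */
    #include <stdio.h>

    enum argv0_policy { ARGV0_ALLOW = 0, ARGV0_WARN = 1, ARGV0_BLOCK = 2 };

    /* In the kernel this would be the sysctl; defaulting to 1 (warn)
     * keeps existing userspace working. */
    static enum argv0_policy policy = ARGV0_WARN;

    /* Return 0 to allow the exec, nonzero (think -EINVAL) to reject it. */
    static int check_empty_argv(int argc, const char *comm)
    {
        if (argc > 0 || policy == ARGV0_ALLOW)
            return 0;
        if (policy == ARGV0_WARN) {
            fprintf(stderr, "warning: %s executed with argc == 0\n", comm);
            return 0;
        }
        return -1; /* ARGV0_BLOCK */
    }

    int main(void)
    {
        printf("empty argv allowed? %s\n",
               check_empty_argv(0, "pkexec") == 0 ? "yes" : "no");
        return 0;
    }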
Which is fine: https://news.ycombinator.com/item?id=30208963 is pretty on the money here. Patch this behaviour, and fix the extremely small number of offending applications concurrently.
The question is whether we would really get any benefit from this: compilation would take longer by some amount x, which may or may not be less than the time the linking step currently takes.
That's a question worth considering. But fundamentally, it should be faster to write compiler output directly to the final executable than to write it to an object file that is then copied into the final executable.
Is the slow part of linking the "copy all the bytes into the executable" step (in which case avoiding a separate link is a clear win, since it saves a copy), or is it the "do all the relocations" work, which I think needs to be done anyway?
I put my question to a friend of mine who works on linkers, and his take was that for a single-threaded linker like ld.bfd the copy-bytes part would probably dominate, but that for a multithreaded linker like lld that part trivially parallelizes, so the slow part tends to be elsewhere. He also pointed me at a recent blog post by the lld maintainer on this topic: https://maskray.me/blog/2021-12-19-why-isnt-ld.lld-faster which I should go and read...
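For a sense of scale, here is a simplified sketch (not real ELF structs or actual linker code) of what "doing a relocation" amounts to: for an x86-64 PC-relative relocation the linker computes S + A - P and patches four bytes, while the copy-bytes part is essentially a memcpy of section contents into the output.

    /* Hedged sketch of applying one R_X86_64_PC32-style relocation;
     * the struct is simplified, not an actual ELF type. */
    #include <stdint.h>
    #include <string.h>

    struct reloc {
        uint64_t offset;  /* where in the output section to patch */
        uint64_t symbol;  /* resolved address of the referenced symbol */
        int64_t  addend;  /* constant from the relocation entry */
    };

    static void apply_pc32(uint8_t *section, uint64_t section_addr,
                           const struct reloc *r)
    {
        uint64_t place = section_addr + r->offset;                  /* P */
        int32_t value = (int32_t)(r->symbol + r->addend - place);   /* S + A - P */
        memcpy(section + r->offset, &value, sizeof(value));
    }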
If you have looked at Rust, you would understand that "Written in Rust" means something more than its face value.
"Written in Rust" means there will be fewer memory/concurrency bugs, because the language itself rules out whole classes of those bugs, and that is a feature I would definitely care about in any piece of software I use.
Most commonly used software, such as browsers, can abort on OOM; this isn't about those cases at all.
In situations where you do not want to abort, you usually can, and would want to, allocate a pool beforehand.
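A minimal sketch of that pattern (illustrative names, alignment handling omitted): reserve the pool once at startup, where a failure can be reported cleanly, and hand out chunks from it later without any allocation that could abort mid-operation.

    /* Hedged sketch of a preallocated pool; not from any particular
     * project, and alignment is ignored for brevity. */
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pool {
        unsigned char *base;
        size_t size;
        size_t used;
    };

    static int pool_init(struct pool *p, size_t size)
    {
        p->base = malloc(size);  /* the only allocation that can fail */
        if (!p->base)
            return -1;           /* report failure once, at startup */
        p->size = size;
        p->used = 0;
        return 0;
    }

    static void *pool_alloc(struct pool *p, size_t n)
    {
        if (p->size - p->used < n)
            return NULL;         /* exhausted: caller decides, no abort */
        void *out = p->base + p->used;
        p->used += n;
        return out;
    }

    int main(void)
    {
        struct pool p;
        if (pool_init(&p, 1 << 20) != 0) {
            fprintf(stderr, "could not reserve memory up front\n");
            return 1;
        }
        void *buf = pool_alloc(&p, 4096);
        printf("got %p from the pool\n", buf);
        free(p.base);
        return 0;
    }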
I have seen this type of reasoning before in HN comments, but from a user's perspective it does not make sense. Imagine every user sending the maximum amount of information, which we can see keeps increasing over time, via HTTP headers (including cookies), browser capabilities, hardware capabilities, etc. This "run with the herd" reasoning seems to suggest the best way to avoid fingerprinting is to send the maximum amount of information, "like everyone else". That only results in ever more information being sent to the online advertising industry. The probability that they can distinguish one user's fingerprint from another goes up as the amount of information sent increases. The objective of an online advertising company is to gather as much information as possible from users.
The objective for the user should be to send as little information as possible. If a fingerprint shows the user is not running JS and is providing only a very minimal, generic set of information, how much value is there in trying to serve ads to that user? Users who want better privacy should be trying to reduce the amount of information they send. Maybe the first movers in that effort are "fingerprinted" as being privacy-conscious, tech-savvy, etc. That is probably going to result in fewer ads served to them, not more. Eventually, when most users, "the herd", are sending the minimal amount of information, the fingerprints all look similar.
Think it through. Advertisers do not care about users who will not indiscriminately run JS. They go for the low-hanging fruit.
I'd posit that the biggest risk for advertisers is "plausible bullshit". Their ability to say "look at our huge tracking profiles" is dependent on both quantity and quality of data. If ad networks can't accurately sanitize their data, advertisers are going to balk at spending $6 per click for misprofiled audiences, when they can spray-and-pray "good enough" contextual ads for 30 cents a click.
Give me a VPN that regularly geolocates me at a Starbucks 30km out of town. Give me plugins that stuff my search history with a fixation on the Cincinnati Bengals and replacement parts for a 2013 Hyundai Accent. Yeah, they might see my actual traffic patterns, but the goal is to make it expensive and hard to filter the real use from the elaborate story.
Amongst the top 1% of tech-savvy users, maybe. In all my years of supporting 100,000s of “regular” users I’ve never encountered anyone with JS disabled.