
I am pretty surprised that they allowed reusing the IV. A unique IV is explicitly stated as an assumption for AES-GCM (it's the first sentence of the security section on the AES-GCM Wikipedia page).

How could anyone design a TA (i.e. an application whose whole point is security, and which therefore runs in secure mode) and allow the user to set the IV in the API?


> How could anyone design a TA (i.e. an application whose whole point is security, and which therefore runs in secure mode) and allow the user to set the IV in the API?

I mean... TLS did the same (in 1.2, it was fixed in 1.3). I co-authored a paper about it: https://www.usenix.org/conference/woot16/workshop-program/pr...


Thanks for the pointer to your interesting paper.

My understanding is that the TLS spec did not enforce non-repeating nonces; it only suggested them and left the decision to implementers, which led to the vulnerabilities you explored.

This Samsung one is similar in a way: the TEE API gave users of the API a way to set the IV, which it should not have; the TA should make sure the IV is never repeated.

Since you have done prior research in this area, is using a counter for the IV still recommended even when the IV is 12 bytes? I assume the chance of a hardware random number generator (which I assume exists on most phones today) producing a collision when generating 12-byte random values is pretty low.
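
For concreteness, here is a minimal sketch (in C, with names of my own choosing) of the deterministic IV construction that NIST SP 800-38D describes: a fixed field plus an invocation counter, so the IV cannot repeat until the counter space is exhausted. If I recall correctly, the same document also caps random 96-bit IVs at roughly 2^32 invocations per key to keep the collision probability below 2^-32, which is why counters are still often preferred.

  #include <stdint.h>
  #include <string.h>

  /* Deterministic 96-bit (12-byte) GCM IV: a 4-byte fixed field
   * (e.g. a per-device or per-key identifier) followed by an
   * 8-byte big-endian invocation counter. The caller must persist
   * the counter and stop encrypting before it would wrap. */
  struct iv_state {
      uint8_t  fixed[4];
      uint64_t counter;
  };

  /* Returns 0 on success, -1 once the counter space is exhausted. */
  static int next_iv(struct iv_state *st, uint8_t iv_out[12])
  {
      if (st->counter == UINT64_MAX)
          return -1;                          /* never reuse an IV */
      memcpy(iv_out, st->fixed, 4);
      for (int i = 0; i < 8; i++)             /* big-endian counter */
          iv_out[4 + i] = (uint8_t)(st->counter >> (8 * (7 - i)));
      st->counter++;
      return 0;
  }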


In my experience the odds are greater than 50% that people designing a system do the wrong thing with the IVs.

The NSA gave up on back doors, limiting the key size, etc. because people are too stupid to manage keys correctly.


"We do not break the userland"


The actual quote is:

  WE DO NOT BREAK USERSPACE! Seriously.
  How hard is this rule to understand?
  We particularly don't break user space
  with TOTAL CRAP.


Indeed. But a sysctl to switch the patch on/off seems like the pragmatic solution


That isn't pragmatic; it silently breaks programs that rely on specified behavior, just to fix one of the many self-inflicted security issues polkit has had over the last decade.


The sysctl could have three settings: 0 to do nothing, 1 to emit a warning, and 2 to fully enable the patch that blocks argc=0. Use 1 by default so as not to break userspace, and let people opt in to 2 for the additional security.


Which is fine: https://news.ycombinator.com/item?id=30208963 is pretty on the money here. Patch this behaviour, and fix the extremely low number of offending applications concurrently.


What specified behavior?


POSIX apparently explicitly allows calling programs with an empty argv, so it isn't just a Linux implementation detail that polkit failed to handle.
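
To make it concrete, here is a minimal sketch of such a caller (the target path is just a placeholder); the executed program starts with argc == 0 and argv[0] == NULL, which is exactly the case pkexec failed to handle.

  #include <unistd.h>

  int main(void)
  {
      /* An empty argv: the array starts with the NULL terminator,
       * so the executed program sees argc == 0. */
      char *argv[] = { NULL };
      char *envp[] = { NULL };

      /* "/usr/bin/some-program" is a placeholder target. */
      execve("/usr/bin/some-program", argv, envp);
      return 1;  /* only reached if execve() failed */
  }

A setuid program therefore has to check argc before indexing into argv, rather than assuming argv[0] is always there.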


Great article, I'm bookmarking it so I can point to it if and when required ;p


The question is whether we would really get any benefit from this: compilation would take longer by some amount x, which may or may not be less than the time the linking step currently takes.


That's a question worth considering. But fundamentally, it should be faster to write compiler output directly to the final executable than to write it to an object file that is then copied into the final executable.


Is the slow part of linking the "copy all the bytes into the executable" step (in which case avoiding a separate link is a clear win, saving a copy), or is it the "do all the relocations" work, which I think needs to be done anyway?


I put my question to a friend of mine who works on linkers, and his take was that for a single threaded linker like ld.bfd the copy-bytes part would probably dominate, but that for a multithreaded linker like lld that part trivially parallelizes and so the slow part tends to be elsewhere. He also pointed me at a recent blogpost by the lld maintainer on this topic: https://maskray.me/blog/2021-12-19-why-isnt-ld.lld-faster which I should go and read...


The approach I suggested would do away not only with the intermediate object files but also with the "do all the relocations" work.


If you have looked at Rust, you would understand that "Written in Rust" means something more than just its face value. Written in Rust means there will be fewer memory/concurrency bugs, because the language itself rules out whole classes of those bugs, and that is a feature I would definitely care about in any piece of software I use.


> If you have looked at Rust, you would understand that

That's needlessly inflammatory.


Most commonly used software, such as browsers, can abort on OOM; this isn't about those cases at all. In situations where you do not want to abort, you usually can, and would want to, allocate a pool beforehand.
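
A minimal sketch of what I mean by preallocating (sizes and names are made up): do the one allocation up front, while failing is still cheap, and serve later requests out of that arena so the steady-state path can no longer hit an allocation failure.

  #include <stdio.h>
  #include <stdlib.h>
  #include <stddef.h>

  /* A tiny bump arena, allocated once at startup.
   * Alignment is ignored here for brevity. */
  struct arena {
      unsigned char *base;
      size_t         size;
      size_t         used;
  };

  static int arena_init(struct arena *a, size_t size)
  {
      a->base = malloc(size);   /* the only malloc(): fail here or never */
      a->size = size;
      a->used = 0;
      return a->base ? 0 : -1;
  }

  static void *arena_alloc(struct arena *a, size_t n)
  {
      if (a->size - a->used < n)
          return NULL;          /* pool exhausted: report, don't abort */
      void *p = a->base + a->used;
      a->used += n;
      return p;
  }

  int main(void)
  {
      struct arena a;
      if (arena_init(&a, 1 << 20) != 0) {   /* 1 MiB, an arbitrary size */
          fprintf(stderr, "not enough memory at startup\n");
          return 1;
      }
      void *buf = arena_alloc(&a, 4096);    /* later requests can't OOM-abort */
      (void)buf;
      free(a.base);
      return 0;
  }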


Ah yes, modern git for people using modern C++. If your plain old git works fine for you, there is no need to go looking for modern git, IMHO.


Won't turning JS off make you one of the few people in the world who do this, and hence easier to fingerprint?


I have seen this type of reasoning before in HN comments but from a user's perspective it does not make sense. Imagine every user is sending a maximum amount of information, which we can see keeps increasing over time, via HTTP headers (including cookies), browser capabilities, hardware capabilities, etc. This "run with the herd" reasoning seems to suggest the best way to avoid fingerprinting is to send the maximum amount of information, "like everyone else". That only results in ever more information being sent to the online advertising industry. The probability they can distinguish one user fingerprint from another goes up as the amount of information sent increases. The objective of the online advertising services company is to gather as much information as possible from users.

The objective for the user should be to send as little information as possible. If a fingerprint shows the user is not running JS and is providing only a very minimal, generic set of information, how much value is there in trying to serve ads to that user? Users who want better privacy should be trying to reduce the amount of information they send. Maybe the first movers in that effort are "fingerprinted" as being privacy-conscious, tech-savvy, etc. That is probably going to result in fewer ads served to them, not more. Eventually, when most users, "the herd", are sending the minimal amount of information, the fingerprints will all look similar.

Think it through. Advertisers do not care about users who will not indiscriminately run JS. They go for the low-hanging fruit.


I'd posit that the biggest risk for advertisers is "plausible bullshit". Their ability to say "look at our huge tracking profiles" is dependent on both quantity and quality of data. If ad networks can't accurately sanitize their data, advertisers are going to balk at spending $6 per click for misprofiled audiences, when they can spray-and-pray "good enough" contextual ads for 30 cents a click.

Give me a VPN that regularly geolocates me at a Starbucks 30km out of town. Give me plugins that stuff my search history with a fixation on the Cincinnati Bengals and replacement parts for a 2013 Hyundai Accent. Yeah, they might see my actual traffic patterns, but the goal is to make it expensive and hard to filter the real use from the elaborate story.


You're just added to a (very large) pool of people who browse with JS turned off. Turning JS off by default is a common thing.


Amongst the top 1% of tech-savvy users, maybe. In all my years of supporting 100,000s of “regular” users, I’ve never encountered anyone with JS disabled.


This is the real question. Especially for vaccines that mimic the spike protein to make another attenuated virus look like a coronavirus.


An ASUS phone (the ZenFone 2) had this hardware super-resolution feature you are talking about.

