remcob's comments | Hacker News

You can verify in limited memory by repeatedly checking the result modulo a few small integers. If every check passes (and the product of the moduli exceeds the quantities involved), then by the Chinese remainder theorem the main result also holds.
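
A minimal sketch of the idea, assuming the claim being checked is a product a*b = c given as decimal strings (the real setting may differ): reduce each number digit by digit modulo several small primes, so memory stays constant regardless of the numbers' size.

    // Check a claimed product a*b = c without big integers, by reducing each
    // decimal string modulo several small primes. If the congruence holds for
    // every prime and the product of the primes exceeds the true value of a*b,
    // the CRT guarantees equality.

    fn mod_of_decimal(s: &str, m: u64) -> u64 {
        s.bytes()
            .filter(|b| b.is_ascii_digit())
            .fold(0u64, |acc, b| (acc * 10 + (b - b'0') as u64) % m)
    }

    fn check_product(a: &str, b: &str, c: &str, moduli: &[u64]) -> bool {
        moduli.iter().all(|&m| {
            let (am, bm, cm) = (mod_of_decimal(a, m), mod_of_decimal(b, m), mod_of_decimal(c, m));
            (am as u128 * bm as u128) % m as u128 == cm as u128
        })
    }

    fn main() {
        let moduli = [1_000_000_007, 998_244_353, 1_000_000_009];
        // 123456789 * 987654321 = 121932631112635269
        println!("{}", check_product("123456789", "987654321", "121932631112635269", &moduli)); // true
        println!("{}", check_product("123456789", "987654321", "121932631112635270", &moduli)); // false
    }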


Besides the data being signed, as already mentioned, the protocol is interactive and specific to passport documents, so you can't just put it on any programmable NFC tag. I also doubt you can buy off-the-shelf tags implementing the passport protocols, but you might be able to implement the protocol yourself on a general-purpose programmable one.

There are also optional sub-protocols that let the chip authenticate itself (i.e. prove it knows a private key). These prevent copying valid signed data to a different chip.
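
The flow behind such a sub-protocol (e.g. ICAO Active Authentication) is a simple challenge-response. A toy sketch with textbook-RSA numbers, nothing like the real key sizes or message formats: the public key travels with the signed passport data, the private key never leaves the genuine chip, so a cloned chip can't answer the challenge.

    fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
        let mut result = 1u64;
        base %= modulus;
        while exp > 0 {
            if exp & 1 == 1 {
                result = result * base % modulus;
            }
            base = base * base % modulus;
            exp >>= 1;
        }
        result
    }

    fn main() {
        // Textbook RSA toy key pair: n = 61 * 53, e = 17, d = 2753.
        let (n, e, d) = (3233u64, 17u64, 2753u64);

        let challenge = 1234 % n;                 // terminal sends a nonce (fixed here for the sketch)
        let response = mod_pow(challenge, d, n);  // genuine chip "signs" it with the private exponent

        // Terminal verifies with the public key (e, n) taken from the signed data.
        assert_eq!(mod_pow(response, e, n), challenge);
        println!("challenge answered correctly: chip knows the private key");
    }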


You can definitely run the protocol on a programmable smartcard (see for example https://jmrtd.org/), but without the required PKI certificates, nobody would accept your home-made passport.


Yeah, but since the USA doesn't sign on to anything above basic auth (MRZ unlock), everyone also has to work at that more basic level. Kinda unfortunate.


It’s well known that GPUs are good at cryptography, starting with hash functions (e.g. crypto mining), but also zero-knowledge proofs and multi-party computation.


They're not particularly good at cryptography, but they are good at highly parallel tasks like trying a bunch of hashes.
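
A rough sketch of that kind of workload, on CPU threads rather than a GPU, and with std's DefaultHasher standing in for a real cryptographic hash (it is not one): each thread scans a disjoint nonce range looking for a hash below a difficulty target, mining-style.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn toy_hash(nonce: u64) -> u64 {
        let mut h = DefaultHasher::new();
        nonce.hash(&mut h);
        h.finish()
    }

    fn main() {
        let threads = 8u64;
        let per_thread = 1 << 22;
        let target = u64::MAX >> 20; // "difficulty": top 20 bits must be zero
        let found = Arc::new(AtomicBool::new(false));

        let handles: Vec<_> = (0..threads)
            .map(|t| {
                let found = Arc::clone(&found);
                thread::spawn(move || {
                    for nonce in t * per_thread..(t + 1) * per_thread {
                        if found.load(Ordering::Relaxed) {
                            return; // someone else already found a hit
                        }
                        if toy_hash(nonce) <= target {
                            found.store(true, Ordering::Relaxed);
                            println!("thread {t} found nonce {nonce}");
                            return;
                        }
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
    }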


And now they sell gold-plated optical connectors.


And it's so totally rugged against tarnishing, unlike copper or brass contacts.

Would recommend.


Did you verify this through disassembly? These loops can still be replaced by a closed-form expression, and I wouldn’t be surprised if LLVM figured this out.


Sooooo.... anyone know a compiler that actually does the closed-form tactic on the loop(s)? If I'm seeing this right, in theory the program could be compiled down to something that finishes nearly instantly, with no loops at all?


If it turned the inner loop into a closed form expression, I would expect the outer loop to go through 10k iterations a lot faster than needing half a second.
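
For illustration, the kind of rewrite being discussed looks roughly like this (a hypothetical example, not the benchmark's actual code): a loop summing i % 7 has a closed form because the pattern 0..=6 repeats every 7 iterations.

    fn sum_mod7_loop(n: u64) -> u64 {
        (0..n).map(|i| i % 7).sum()
    }

    fn sum_mod7_closed(n: u64) -> u64 {
        let (full_cycles, rem) = (n / 7, n % 7);
        // each full cycle contributes 0+1+...+6 = 21; the tail contributes 0+1+...+(rem-1)
        full_cycles * 21 + rem * rem.saturating_sub(1) / 2
    }

    fn main() {
        for n in [0u64, 1, 6, 7, 8, 100, 1_000_003] {
            assert_eq!(sum_mod7_loop(n), sum_mod7_closed(n));
        }
        println!("closed form matches the loop");
    }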


Running the C code through Godbolt with the same Clang and -O3 optimizations, it looks like you're right: there are still 2 loops being performed. The loops are unrolled to do 4 iterations at a time, but they're otherwise the same; beyond that it didn't do much with them. Hilariously, there is a god-awful set of shifts and magic numbers (sketched below)... to save a single modulo operation from running once on the line declaring 'r'.

See here for why the Go version performs worse https://news.ycombinator.com/item?id=42251571
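
Those shifts and magic numbers are the usual strength-reduction trick: division/modulo by a constant becomes a multiply-high plus a shift. The constants below are for u32 division by 10, just to illustrate the technique; Clang's exact output for this program may differ.

    fn div10(x: u32) -> u32 {
        // multiply by the "magic" reciprocal of 10, then shift
        ((x as u64 * 0xCCCC_CCCD) >> 35) as u32
    }

    fn mod10(x: u32) -> u32 {
        x - 10 * div10(x)
    }

    fn main() {
        for x in [0u32, 9, 10, 12345, u32::MAX] {
            assert_eq!(div10(x), x / 10);
            assert_eq!(mod10(x), x % 10);
        }
        println!("multiply-and-shift matches / and % by 10");
    }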


For me it means I can fork the repo and start hacking on the code immediately, and it will have reasonable quality. With C++/Python and even Node I often find myself wasting half a day just getting it to build.


Yep. If it's a Python project, there's about a 60% chance it won't run on the first try after a fresh clone.

When I see a CLI tool written in Rust or Go, it usually just works out of the box without having to mess around with godawful pip environments or conda.


I say this as someone who cut their teeth on Python and loves it for a thousand reasons, but I have to agree. Python projects are abysmal. Rye and uv are promising (and I am very excited about them), but they aren't quite ready yet.

"Written in Rust" carries with it significant promises that only Go also has. (Go has a lot of the same promises, for having good tooling and the same mostly-statically-compiled philosophy.)

"Written in Rust" tells me a project is easy to install and easy to hack on. I am far less interested in using non-Rust projects, and I am definitely disinterested in making code contributions to non-Rust projects.

Case in point: It took me much longer to write this comment than it took to install and use marmite.


This. It's basically a signal of average quality. It's the old Python Paradox all over again. JavaScript/TypeScript is the bottom of the barrel in terms of quality, Python/C++ is higher than that, and Rust is at the top.


It actually uses a lot of modern cryptography (zero-knowledge proofs, multi-party computation, decentralized consensus, etc.) to avoid exactly that.


To avoid harvesting people’s biometrics for profit?


Rather, to obscure exactly that.


So what is the root of trust in this system then?


Sam?


The distance between two uniform random points on an n-sphere clusters around the equator. The article shows a histogram of the distribution in fig. 11. While it looks Gaussian, it is more closely related to the Beta distribution. I derived it in my notes (a quick simulation is also sketched below), as surprisingly I could not find it easily in the literature:

https://xn--2-umb.com/21/n-sphere
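
A quick self-contained simulation of the concentration (plain Rust, with a toy xorshift RNG and Box-Muller standing in for a proper sampler): fix one point at the "north pole"; the other point's polar angle is determined by its first coordinate t = cos(angle), whose density is proportional to (1 - t^2)^((n-3)/2), a shifted and scaled Beta. As n grows, t concentrates around 0, i.e. the point sits near the equator.

    struct Rng(u64);
    impl Rng {
        fn next_u64(&mut self) -> u64 {
            // xorshift64
            self.0 ^= self.0 << 13;
            self.0 ^= self.0 >> 7;
            self.0 ^= self.0 << 17;
            self.0
        }
        fn uniform(&mut self) -> f64 {
            (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
        }
        fn normal(&mut self) -> f64 {
            // Box-Muller transform
            let (u1, u2) = (self.uniform().max(1e-12), self.uniform());
            (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
        }
    }

    fn main() {
        let mut rng = Rng(0x9E3779B97F4A7C15);
        let trials = 20_000;
        for &n in &[3usize, 10, 100, 1000] {
            let (mut sum, mut sum_sq) = (0.0f64, 0.0f64);
            for _ in 0..trials {
                // A uniform point on the unit sphere in R^n: normalize a Gaussian vector.
                let v: Vec<f64> = (0..n).map(|_| rng.normal()).collect();
                let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
                let t = v[0] / norm; // cosine of the angle to the pole
                sum += t;
                sum_sq += t * t;
            }
            let mean = sum / trials as f64;
            let stdev = (sum_sq / trials as f64 - mean * mean).sqrt();
            println!(
                "n = {n:4}: mean(cos) = {mean:+.3}, stdev = {stdev:.3} (1/sqrt(n) = {:.3})",
                1.0 / (n as f64).sqrt()
            );
        }
    }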


> The distance between two uniform random points on an n-sphere clusters around the equator.

This sentence makes no sense to me.


He means it clusters around the distance from a pole to the equator.


Correct, I was too brief in my comment. It's explained in the article: without loss of generality you can call one of the two points the 'north pole', and then the other one will be distributed close to the equator.


Pick an equator on an n-sphere. It is a hyperplane of dimension (n-1) through the center, composed of all but one of your sphere's dimensions. The xy plane for a unit sphere in xyz, for example.

Uniformly distribute points on the sphere. For high n, almost all of them will be very near the equator you chose.

Obviously, in order for a point to not be close to this chosen equator, it has to project close to 0 on all dimensions spanning the equatorial hyperplane, and not close to 0 on the dimension making up the pole-to-pole axis.


My first thought is that it's rather obvious, but I'm probably wrong; can you help me understand?

The analogy I have in mind is: if you throw n dice, for large n, the likelihood of one specific chosen dice being high value and the rest being low value is obviously rather small.

I guess that the consequence is still interesting, that most random points in a high-dimensional n-sphere will be close to the equator. But they will be close to every arbitrarily chosen equator, so it's not that meaningful.

If the equator is defined as containing n-1 dimensions, then as n goes higher you'd expect it to "take up" more of the space of the sphere, hence most random points will be close to it. It is a surprising property of high-dimensional space, but I think that's mainly because we don't usually think about the general definition of an equator and how it scales to higher dimensions; once you understand that, it's not very surprising.


> The analogy I have in mind is: if you throw n dice, for large n, the likelihood of one specific chosen dice being high value and the rest being low value is obviously rather small.

You're exactly right, this whole thing is indeed a bit of an obvious nothingburger.


"clusters" is acting as a verb here, not a noun.


beautiful visualizations, how did you make them?


The first one IIRC with GeoGebra, all the rest with Matplotlib. The design goal was to maximize the 'data-ink ratio'.


Why stop there and not go all the way to

    pub fn read(path: Path) -> Bytes {
      File::open(path).read_to_end()
    }


How do you return an error in your example?


    pub fn read(path: Path) -> Result<Bytes> {
      File::open(path)?.read_to_end()
    }
isn't so bad either.


Exactly, and this is in my experience what most Rust code ends up looking like.

It compromises a bit on generality and (potential) performance to achieve better readability and succinctness. Often a worthwhile trade-off, but not something the standard library can always do.
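
For comparison, std's actual fs::read keeps the generality the shorter sketches give up: it is generic over anything path-like and spells out io::Result. A small usage example (assumes a Cargo.toml in the working directory):

    use std::fs;
    use std::path::PathBuf;

    fn main() -> std::io::Result<()> {
        // std's actual API: fs::read<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>>.
        // The AsRef<Path> bound is the "generality" the shorter sketches trade away:
        // it accepts &str, String, &Path, PathBuf, ...
        let a = fs::read("Cargo.toml")?;
        let b = fs::read(PathBuf::from("Cargo.toml"))?;
        assert_eq!(a, b);
        println!("read {} bytes", a.len());
        Ok(())
    }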


Throw an exception

proving the point of the article even further


You would actually use a Result type:

  use std::io;
  
  pub fn read(path: Path) -> io::Result<Bytes> {
    File::open(path)?.read_to_end()
  }
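
(For reference, a version of this sketch that actually compiles, with Vec<u8> standing in for the hypothetical Bytes type and a borrowed path, since Path is unsized:)

    use std::fs::File;
    use std::io::{self, Read};
    use std::path::Path;

    pub fn read(path: &Path) -> io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        File::open(path)?.read_to_end(&mut buf)?;
        Ok(buf)
    }

    fn main() -> io::Result<()> {
        let bytes = read(Path::new("Cargo.toml"))?; // any existing file works here
        println!("{} bytes", bytes.len());
        Ok(())
    }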


Sure, if you are allowed to change the signature; it makes it look uglier than just returning Bytes, though.


Adding to this great list: batch processing inputs can give you more throughput at the expense of latency.


If you make your batches small, you can get pretty much all of the benefit without adding (appreciable) latency. e.g. batch incoming web requests in 2-5 ms windows. Depending on what work is involved in a request, you might 10x your throughput and actually reduce latency if you were close to the limit of what your database could handle without batching.
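
A sketch of that small-window batching (names and limits are illustrative, not from any particular framework): collect requests from a channel until the batch is full or a few milliseconds have passed, then handle them together.

    use std::sync::mpsc;
    use std::thread;
    use std::time::{Duration, Instant};

    const MAX_BATCH: usize = 64;
    const WINDOW: Duration = Duration::from_millis(3);

    fn process_batch(batch: &[u32]) {
        // stand-in for e.g. one multi-row INSERT instead of 64 single-row ones
        println!("processing batch of {}", batch.len());
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<u32>();

        // Simulated clients sending requests.
        let producer = thread::spawn(move || {
            for i in 0..500u32 {
                tx.send(i).unwrap();
                thread::sleep(Duration::from_micros(100));
            }
        });

        // Batching loop: the trade-off is up to WINDOW of added latency per request.
        loop {
            let first = match rx.recv() {
                Ok(item) => item,
                Err(_) => break, // all senders dropped, we're done
            };
            let mut batch = vec![first];
            let deadline = Instant::now() + WINDOW;
            while batch.len() < MAX_BATCH {
                let now = Instant::now();
                if now >= deadline {
                    break;
                }
                match rx.recv_timeout(deadline - now) {
                    Ok(item) => batch.push(item),
                    Err(_) => break, // window elapsed or channel closed
                }
            }
            process_batch(&batch);
        }

        producer.join().unwrap();
    }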


This and other approaches can be found in other industries like warehousing, logging, and retail.

