
> Yubico has developed a library in-house that performs the underlying cryptographic operations (decryption, signing, etc.) for RSA and ECC.

No mention of why


Was there an alternative to use? Mostly gpg is used for these use-cases, which they support fully.

I've been moving from gpg to openssh and age for my uses.


I'm the main author of the Merkle-DAG CRDTs paper and I'm very happy to see this here. I'm a bit sorry now that I used the "Merkle-CRDT" name all over the paper, because Merkle-Search-Tree CRDTs also deserve that tag.

I think it is good to explain Merkle-DAG CRDTs (MD-CRDTs) as "merklelized" op-based CRDTs, and I would add that Merkle-Search-Tree CRDTs (MST-CRDTs) are akin to "state-based" CRDTs, to complete the analogy. This helps to bridge traditional and Merkle CRDTs, and I would probably introduce them the same way, but it is also not quite right once you go into more depth. I will try to explain why.

In MD-CRDTs we have a growing DAG that works like a chain of operations that are applied in the order in which the DAG is traversed. It looks a lot like an operation log, except it can have multiple branches and, if we follow my paper, operations do not need to have any total order (unlike op-based CRDTs). We know the latest operation in the DAG (the root node) happened AFTER its children, so there is an embedded notion of order. However, if it has multiple children, we don't know how to order the operations in their respective branches, and we don't need to because they should be commutative etc. For that to work, the paper explains that DAG nodes embed not operations but "state deltas". So in reality, MD-CRDTs are delta-based CRDTs where the Merkle-DAG structure stores the deltas as they are broadcast from replicas. It's just that deltas look a lot like operations.
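To make the "deltas that look like operations" point concrete, here is a minimal sketch in Go with made-up types (this is not the go-ds-crdt API): each DAG node embeds a delta for a grow-only set plus links to earlier nodes, and merging is just traversing from the roots and applying every reachable delta, in whatever branch order the traversal happens to take.

```go
package main

import "fmt"

// Delta is a state fragment broadcast by a replica. For a grow-only
// set CRDT it can simply be the set of added elements.
type Delta struct {
	Added []string
}

// Node is a Merkle-DAG node: a delta plus links (hashes) to the
// DAG nodes known when it was created. Hypothetical, simplified types.
type Node struct {
	Delta    Delta
	Children []string // hashes of prior DAG nodes
}

// Merge applies every delta reachable from the given roots.
// Order between branches does not matter because deltas commute.
func Merge(store map[string]Node, roots []string) map[string]bool {
	state := map[string]bool{}
	seen := map[string]bool{}
	queue := append([]string{}, roots...)
	for len(queue) > 0 {
		h := queue[0]
		queue = queue[1:]
		if seen[h] {
			continue
		}
		seen[h] = true
		n := store[h]
		for _, e := range n.Delta.Added {
			state[e] = true
		}
		queue = append(queue, n.Children...)
	}
	return state
}

func main() {
	// Two branches ("b" and "c") off a common ancestor "a":
	// their deltas commute, so traversal order is irrelevant.
	store := map[string]Node{
		"a": {Delta: Delta{Added: []string{"x"}}},
		"b": {Delta: Delta{Added: []string{"y"}}, Children: []string{"a"}},
		"c": {Delta: Delta{Added: []string{"z"}}, Children: []string{"a"}},
	}
	fmt.Println(len(Merge(store, []string{"b", "c"}))) // 3: x, y and z
}
```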

In MST-CRDTs, if my understanding is correct, replicas will be broadcasting the roots of the DAG pointing to the full state (similar to state-based CRDTs broadcasting the full state). However, thanks to merklelization etc., changes to the state only update sections of the full DAG, and diffing, retrieving or broadcasting only the changed parts is super easy and very compact. Since the node values will be CRDTs themselves, they can be merged on conflict just fine. In practice, this is also sort of dealing with delta-CRDTs.

The main thing in both is that reconstructing the full state from deltas is easy and efficient because everything is linked, so you just follow links and drop branches that you already have. Deltas without headaches.
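That "follow links and drop branches that you already have" step can be sketched like this (hypothetical names, not taken from any real implementation): given the DAG shape and the set of blocks already stored locally, a newly announced root yields exactly the blocks still to fetch, and the walk stops at the first block we already hold, because in a Merkle DAG having a block implies we already fetched everything beneath it.

```go
package main

import "fmt"

// links maps a block hash to the hashes it points to (the DAG shape).
// have is the set of blocks this replica already stores.
// Missing walks from a newly announced root and returns only the
// blocks we lack; as soon as we hit a block we have, we stop
// descending, because everything below it is already local too.
func Missing(links map[string][]string, have map[string]bool, root string) []string {
	var out []string
	var walk func(h string)
	walk = func(h string) {
		if have[h] {
			return // whole subtree already present: drop the branch
		}
		out = append(out, h)
		have[h] = true // mark so shared branches aren't visited twice
		for _, c := range links[h] {
			walk(c)
		}
	}
	walk(root)
	return out
}

func main() {
	links := map[string][]string{
		"new-root": {"delta2", "old-root"},
		"old-root": {"delta1"},
	}
	have := map[string]bool{"old-root": true, "delta1": true}
	fmt.Println(Missing(links, have, "new-root")) // [new-root delta2]
}
```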

Perhaps another way to conceptually understand both is that in MD-CRDTs, you add DAG nodes to the top of the DAG as new root nodes, leaving the bottom of the DAG untouched. In MST-CRDTs, you add DAG nodes to the bottom (the leaves) of the DAG, which cascades into having new root DAG nodes. In both cases you broadcast the new root DAG nodes and the other peers traverse and sync.

Now, what are the practical consequences for implementations:

- The main problem in pure MD-CRDTs (with no caveats) is the unbounded growth (à la blockchain) even when the state becomes smaller. This only matters if you plan to delete or overwrite things in the state. In MST-CRDTs it is "easy" to discard orphaned data. However, one of the main advantages of MD-CRDTs is that "syncing" happens from the last operation/delta. This can be important if you care about having a certain notion of order, i.e. that "elements added last" are the first ones to be synced into an empty replica. Another advantage may be the ability to reconstruct multiple views of the same state by re-traversing the DAG, which contains "everything".

- MST-CRDTs are very good for database tables: syncing is very efficient, things are compact. I haven't implemented them myself but they seem quite adequate for that use-case. When thinking of downsides, I wonder if they suffer from the cost of re-hashing as they grow with a very large number of keys and heavy usage (larger blocks to hash or more levels to update). One very important advantage of MST-CRDTs is that they are more resilient when a leaf DAG node is unretrievable. In MD-CRDTs an unretrievable DAG node can break DAG traversal and with it a large portion of the state, while in MST-CRDTs this affects only the key or set of keys under the missing DAG node.

In the end, for both the devil will be in implementation details: how good is your pubsub code? How good and parallelizable is your update-processing code? How good is your GC process? How good is your blockstore? All those things become performance bottlenecks even when the theoretical properties work perfectly.


I forgot: key-value store using MD-CRDTs was implemented here: https://github.com/ipfs/go-ds-crdt

The trickiest part was not the CRDT, but the DAG traversal with multiple workers processing parallel updates on multiple branches and switching CRDT-DAG roots as they finish branches.


Were you inspired by Git? How did you come up with MD-CRDTs?


I worked on IPFS Cluster, and people like my co-authors had been experimenting with this already, but it didn't have a name nor was it formalized. I needed a way to sync state and decided to write it down "properly" as a paper along the way.

I think it's very valuable to bring people into the topic which is very cool (that's why I added a very long intro to try to provide all the background).


People operating IPFS gateways might benefit from using NOpfs, pulling the official public gateways' badbits.deny list:

https://github.com/ipfs-shipyard/nopfs


The linked article with details on design (http://www.zenithair.com/stolch801/design/design.html) is very interesting and I wish every company marketing a product would write down their design choices like that.

Imagine if engineers designing a car had to write about the design rationale of their touch interfaces...


>rationale of their touch interfaces...

Automotive EE here, I can explain that.

It's cost. Done.

There is design and engineering in buttons and micros that are responsive, durable, perform how the user expects.

There is designer placement: ergonomics and accessibility. This button is perfect and looks great here, but for a 5th-percentile-height female it is outside the limits, so let's redesign a ton of things to move it down here where it is stupid for everyone.

Digital buttons have almost zero cost and can be changed at will. There is also no holding a vehicle on a lot for weeks because the micro that switch bank X uses is unavailable.

For annoying reference, some of the new GM trucks have no fog light buttons. You need to access that through the primary screen. It’s all cost.

It’s basically an afterthought now. If they get enough complaints they can change it, otherwise, it must be fine and within expected grumpy user margins.


Chevy (and I guess Tesla before them) is removing headlight switches entirely which is the topic of much justified angst. Safety controls should never only be accessible from a touch screen.

https://www.roadandtrack.com/news/a42953152/the-2023-chevy-c...


Can't wait to try that line with my boss:

> Katcherian claims that his team removed all possible bugs from the infotainment system.


Yeah any software engineer knows what a terrible BS statement this is.


Very annoyed at these digital buttons and especially buried options on touch screens (high-skilled driver, had a racing license, etc.). It's damn dangerous, increasing driver workload as you fumble around for the right control that used to be findable by touch.

OK, I get the cost. If that's the case, let us configure it! E.g., it's essential to have the windshield defrost in an upper corner of the button array so when it fogs suddenly in traffic at night, I can just reach over and hit it, instead of fumbling around blind...


Sorry, that would be more cost!

Plus, they need it to look fancy. If someone gets in your vehicle and the screen doesn’t look good, they might not buy the same vehicle.

It’s all cost, and digital isn’t free, but it’s way cheaper than physical buttons.


Oh geez, that's probably right. So, we're doomed?

I suppose the market is clearly moving away from anyone who actually drives e.g., see the lousy uptake of manual transmissions.

Yet this seems very serious. I think the job of the automobile's cockpit designer is to reduce the driver's workload and maximize the ease and reliability/repeatability of any needed action to control the car. This now seems to have been replaced with "provide the cheapest possible controls to minimally carry out some functions".

The result is that even significant functions that are carried out while driving require multi-layer on-screen menu navigation! I can hardly think of anything more hazardous: actively creating a function that requires continuous attention for multiple seconds (when you'll go 100m in 3-4 seconds); even with actual training to rapidly rotate my gaze between windscreen, mirrors, and dashboard, this is difficult, and likely mostly impossible for ordinary drivers.

Does management just not GAF about these issues? Crazy

EDIT: My first thought is a set of physical buttons with the logo for the function (heat, defrost, rear defrost, seat heater, etc.) that we can just pull out and arrange on the row where we want them. But then that's probably more expensive. At least let us move items around at the block level in the touchscreen menu tree and screens? Again, more coding, and I get why we can't just open-source it. Isn't there any way to get someone involved who has an understanding of actual driver kinesiology and workload?


1) To be fair, private IP ranges should no longer be published to the public DHT (there is a LAN DHT for them).

And of course what interfaces are announced is and has been configurable for as long as I remember ("NoAnnounce" setting in the config).


I think the point is not to force decentralization, but to allow it. When the protocol is open, there are more chances to break the monopolies through innovation, quality of service etc. and avoid lock-in.

In this case, IPFS breaks the monopoly of trust (only obtaining content from a single source because that source is the reputable source for it). Content addressed data makes the source irrelevant. Many possibilities open from there (i.e. breaking the monopoly of storage).


It is relatively new so not many people know about it, but now libp2p (and Kubo/go-ipfs) can be configured with a Resource Manager that can effectively keep a lot of the resource usage at bay: https://github.com/ipfs/kubo/blob/master/docs/config.md#swar...


Any chance that, by limiting RAM usage, you forced your application to heavily swap, clogging the disk and making your machine slow?

I have run a public gateway on 2GB of RAM. Later 4GB because it was subject to very heavy abuse, but it was perfectly possible. Perhaps it is a matter of knowing how to configure things and how to not inflict self-pain with wrong settings.


Yes, there is definitely a chance, but it was way worse when I gave it more or unlimited RAM. At least this way the machine was operational most of the time. I don't think it was swapping, but since the limit I applied also affected the page cache, it was likely reading its data from disk a lot more often than it would have if it could own the whole page cache. But this is basically the same effect as swapping.

Maybe there is a Goldilocks value I could find, but I didn't really need IPFS running that much so I just removed it.


The content ID is not the hash of the content, it is the hash of the root of the Merkle DAG that carries the content.

Doing it like that has many advantages: being able to verify hashes as small blocks are downloaded rather than only after downloading a huge file, being able to de-duplicate data, and being able to represent files, folders and any type of linked content-addressed data structure.

As long as your content is under 4MiB you can opt out of all this and have a content ID that is exactly the hash of the content.
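The scheme can be sketched with plain SHA-256 (real IPFS uses multihashes, CIDs and configurable chunkers, so everything here is illustrative, not the actual format): the file is split into blocks, each block gets its own hash, and the root node hashes the list of child hashes. The content's identifier is the hash of that root, not of the raw bytes, which is exactly why each block can be verified the moment it arrives.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashOf returns a hex digest of some bytes; a stand-in for a CID.
func hashOf(b []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

// rootHash builds a toy Merkle root: split data into fixed-size
// blocks, hash each one, then hash the concatenation of the child
// hashes. Verifying a downloaded block is just re-hashing it and
// comparing against the link in the parent node.
func rootHash(data []byte, blockSize int) (root string, leaves []string) {
	var childHashes []byte
	for i := 0; i < len(data); i += blockSize {
		end := i + blockSize
		if end > len(data) {
			end = len(data)
		}
		h := hashOf(data[i:end])
		leaves = append(leaves, h)
		childHashes = append(childHashes, h...)
	}
	return hashOf(childHashes), leaves
}

func main() {
	data := []byte("hello merkle dag, this is some file content")
	root, leaves := rootHash(data, 16)
	fmt.Println(len(leaves)) // 3 blocks of up to 16 bytes each
	// The identifier is the hash of the root node, not of the raw bytes:
	fmt.Println(root != hashOf(data)) // true
}
```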


As I just replied to "cle", there are some disadvantages to doing it the way it is, because one can't predict what content ID will be produced. Perhaps having the hash of the entire contents of the file point to the hash that is currently the content ID would solve this issue. To me, IPFS does not seem useful unless this issue is solved. Also, multiple hashes (different algorithms) of the file could point to the content ID/Merkle DAG; then if both SHA2 and SHA3 were used and one of them had a security issue, you could just use the one that is OK.


IPFS node garbage collection is not related to Golang GC.

