> Does the newest version of dat handle large files well (10gb)?

Large files work fine, but currently any change to a file rewrites it in its entirety. That means the history grows large until the GC kicks in, and any modified file has to be redownloaded in full.

The team spent a fair amount of time looking at a solution for partial file updates that works like inodes. They ultimately decided it was too difficult to pull off for now.
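
To make the cost concrete, here's a rough sketch using hyperdrive's writeFile API (names taken from the mafintosh/hyperdrive README; treat the exact signatures as assumptions, not a spec):

    // Sketch only: hyperdrive callback API per its README; confirm current signatures.
    var hyperdrive = require('hyperdrive')
    var archive = hyperdrive('./my-dataset')

    archive.writeFile('/data.bin', Buffer.alloc(10 * 1024 * 1024, 1), function (err) {
      if (err) throw err
      console.log('version after first write:', archive.version)

      // Change one byte: the whole file is rewritten, so the full 10 MB is
      // appended to the archive's history again (until GC reclaims it) and
      // peers re-download the entire file, not just the changed byte.
      var edited = Buffer.alloc(10 * 1024 * 1024, 1)
      edited[0] = 2
      archive.writeFile('/data.bin', edited, function (err) {
        if (err) throw err
        console.log('version after edit:', archive.version)
      })
    })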

> Does it handle tons of files nested in a few directories well?

Yep, no issues there

> What is the command line support like for multi-writer?

We're still deciding on how to handle multi-writer. It's a priority for us after the upcoming stable release.

> Do you have any metrics for how much Dat is currently being used?

Nothing concrete atm. If I had to guess, it'd be no more than 1k users.




> Does the newest version of dat handle large files well (10gb)?

I'd be curious if anyone wanted to try it ;) https://github.com/mafintosh/hyperdrive/blob/master/index.js...
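
If someone does want to test it, a streaming write keeps the 10gb out of memory. Assuming hyperdrive still exposes createWriteStream as in its README (verify against the index.js linked above), something like:

    // Sketch for trying a ~10gb file; assumes hyperdrive exposes
    // createWriteStream(name) as in its README -- confirm against index.js.
    var fs = require('fs')
    var hyperdrive = require('hyperdrive')

    var archive = hyperdrive('./big-file-test')
    fs.createReadStream('/path/to/10gb.bin')
      .pipe(archive.createWriteStream('/10gb.bin'))
      .on('finish', function () {
        console.log('imported, archive version:', archive.version)
      })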

> What is the command line support like for multi-writer?

There is an experimental multi-writer CLI using hyperdrive and kappa-db (github.com/kappa-db):

https://cobox.cloud


We've solved efficient partial file updates in Peergos, which is built on IPFS. Happy to talk you through our data structures if you're interested. The key ones are cryptree and Merkle-CHAMPs.
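
For anyone curious about the general idea: split files into chunks, address each chunk by its hash, and only re-upload the chunks whose hashes change. The sketch below is only an illustration; Peergos's real cryptree and Merkle-CHAMP structures add encryption and hash-trie indexing on top.

    // Illustration only: chunked, content-addressed storage. Peergos's actual
    // structures (cryptree, Merkle-CHAMPs) add encryption and hash-trie
    // indexing on top of this idea.
    const crypto = require('crypto')

    const CHUNK_SIZE = 1024 * 1024 // fixed-size chunks -- an assumption for the sketch
    const sha256 = buf => crypto.createHash('sha256').update(buf).digest('hex')

    // A "file" becomes a list of chunk hashes plus a root hash over that list.
    function fileManifest (buf) {
      const chunks = []
      for (let off = 0; off < buf.length; off += CHUNK_SIZE) {
        chunks.push(sha256(buf.slice(off, off + CHUNK_SIZE)))
      }
      return { chunks, root: sha256(chunks.join('')) }
    }

    const v1 = fileManifest(Buffer.alloc(4 * CHUNK_SIZE, 1))
    const edited = Buffer.alloc(4 * CHUNK_SIZE, 1)
    edited[0] = 2
    const v2 = fileManifest(edited)
    // Only chunks[0] and the root differ between v1 and v2, so a peer that
    // already has v1 fetches one new chunk instead of the whole file.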



