> Does the newest version of dat handle large files well (10gb)?
Large files work fine, but currently any change to a file rewrites it in its entirety. That means history will be large until the GC kicks in, and any modified file has to be redownloaded in full.
The team spent a fair amount of time looking at a solution for partial file updates that works like inodes. They ultimately decided it was too difficult to pull off for now.
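The rough shape of that inode-like idea is: content-address fixed-size chunks and keep a small per-file record of chunk hashes, so an edit only rewrites the chunks it actually touches instead of the whole file. Here's a minimal sketch of that idea (hypothetical chunk size and helper names, not Dat's actual internals):

```typescript
// Hypothetical sketch of the inode-style approach (not Dat's actual internals):
// content-address fixed-size chunks and keep a "file record" of chunk hashes,
// so a one-byte edit only rewrites the affected chunk plus the record.
import { createHash } from "crypto";

const CHUNK_SIZE = 64 * 1024; // 64 KiB chunks, chosen arbitrarily for the sketch

const sha256 = (data: Buffer): string =>
  createHash("sha256").update(data).digest("hex");

// Split a file into fixed-size chunks and return their hashes (the "inode" entries).
function chunkHashes(file: Buffer): string[] {
  const hashes: string[] = [];
  for (let off = 0; off < file.length; off += CHUNK_SIZE) {
    hashes.push(sha256(file.subarray(off, off + CHUNK_SIZE)));
  }
  return hashes;
}

// Count how many chunks would need to be rewritten/redownloaded after an edit.
function changedChunks(before: Buffer, after: Buffer): number {
  const a = chunkHashes(before);
  const b = chunkHashes(after);
  let changed = 0;
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if (a[i] !== b[i]) changed++;
  }
  return changed;
}

// Demo: flip one byte in the middle of a 10 MiB buffer.
const original = Buffer.alloc(10 * 1024 * 1024, 0xab);
const edited = Buffer.from(original);
edited[5 * 1024 * 1024] ^= 0xff;

console.log(`chunks total:   ${chunkHashes(original).length}`); // 160
console.log(`chunks changed: ${changedChunks(original, edited)}`); // 1, vs. a full rewrite today
```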
> Does it handle tons of files nested in a few directories well?
Yep, no issues there
> What is the command line support like for multi-writer?
We're still deciding on how to handle multi-writer. It's a priority for us after the upcoming stable release.
> Do you have any metrics for how much Dat is currently being used?
Nothing concrete atm. If I had to guess, it'd be no more than 1k.
We've solved efficient partial file updates in Peergos, which is built on IPFS. Happy to talk you through our data structures if you're interested. The key ones are cryptree and Merkle CHAMPs.
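For anyone curious why that's efficient: once chunks are content-addressed and linked under a Merkle structure, editing one chunk only changes that chunk plus the hashes on its path to the root. A toy sketch of that property (a plain binary Merkle tree for illustration, not our actual cryptree/CHAMP code):

```typescript
// Minimal binary Merkle tree over content-addressed chunks. Editing one chunk
// changes only that leaf plus the O(log n) hashes above it, which is what makes
// partial file updates cheap. Illustration only, not Peergos's real structures.
import { createHash } from "crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Build the levels of a binary Merkle tree bottom-up from leaf hashes.
function merkleLevels(leaves: string[]): string[][] {
  const levels = [leaves];
  while (levels[levels.length - 1].length > 1) {
    const prev = levels[levels.length - 1];
    const next: string[] = [];
    for (let i = 0; i < prev.length; i += 2) {
      next.push(sha256(prev[i] + (prev[i + 1] ?? prev[i])));
    }
    levels.push(next);
  }
  return levels;
}

// Count how many tree nodes differ between two versions of the same file.
function nodesChanged(a: string[][], b: string[][]): number {
  let changed = 0;
  for (let lvl = 0; lvl < a.length; lvl++) {
    for (let i = 0; i < a[lvl].length; i++) {
      if (a[lvl][i] !== b[lvl][i]) changed++;
    }
  }
  return changed;
}

// Demo: a "file" of 1024 chunks; edit a single chunk.
const chunks = Array.from({ length: 1024 }, (_, i) => sha256(`chunk-${i}`));
const edited = [...chunks];
edited[500] = sha256("chunk-500-edited");

const before = merkleLevels(chunks);
const after = merkleLevels(edited);
console.log(`nodes changed: ${nodesChanged(before, after)} of ${before.flat().length}`);
// => 11 of 2047: one leaf plus its 10 ancestors, instead of re-hashing everything
```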