This article gives me whiplash: it starts at a high level, dives, stops just before touching the implementation details (and all the weird stuff that makes you realise the developers were human after all), then shoots back up again.
> Git does not currently have the capability to update a packfile in real time without shutting down concurrent reads from that file. Such a change could be possible, but it would require updating Git’s storage significantly. I think this is one area where a database expert could contribute to the Git project in really interesting ways.
I'm not sure it would be super useful, because of the delta-ification process. Adding "literal" (non-delta) files to a pack isn't much of a gain (file contents are zlib-compressed either way).
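To illustrate the point (a rough sketch, not Git's actual code — the pack entry header is simplified away): a loose object is `"<type> <size>\0" + content`, zlib-compressed, and a non-delta pack entry is essentially the same content zlib-compressed after a short header, so the space savings from appending literals to a pack are marginal:

```python
import zlib

# A blob's contents, stored two ways. Git's loose-object layout is
# "blob <size>\0<content>", zlib-compressed; a non-delta pack entry
# is the content zlib-compressed after a small type/size header
# (omitted here for simplicity).
blob = b"hello world\n" * 100

loose = zlib.compress(b"blob %d\0" % len(blob) + blob)   # loose object
packed = zlib.compress(blob)                              # pack entry payload

# The compressed sizes are nearly identical; the real win in a pack
# comes from delta-encoding similar objects against each other.
diff = abs(len(loose) - len(packed))
```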
Also, presumably "shutting down concurrent reads from that file" is not much of a problem, because the file can just be unlinked and the kernel will reclaim it when the last reader closes it. Other than hung processes, the only downsides are extra disk space and some page-cache usage in the meantime.
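A minimal sketch of the POSIX behavior this relies on (standard `unlink` semantics, not Git-specific code): a reader that already has the packfile open keeps reading it after the name is removed, and the blocks are freed only when the last descriptor closes.

```python
import os
import tempfile

# Create a stand-in "packfile" and open it for reading.
fd, path = tempfile.mkstemp()
os.write(fd, b"packfile contents")
os.lseek(fd, 0, os.SEEK_SET)

# A repack can unlink the old pack out from under readers: the
# directory entry is gone, but the inode survives while open.
os.unlink(path)
assert not os.path.exists(path)

# The existing reader is unaffected.
data = os.read(fd, 64)
assert data == b"packfile contents"

os.close(fd)  # last close: now the kernel reclaims the space
```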