
It makes clean-up simpler: if the "last" chunk never arrived after N+timeout, it's obvious the upload didn't finish, and you can expunge it. It simplifies an implementation detail (how do you deal with partial uploads? make them easy to spot). Otherwise you basically have to trigger at the end of every chunk, check whether all the other chunks are there, and then do the 'completion'.
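As a rough sketch of that clean-up rule (all names and the timeout value here are hypothetical, not from any registry's actual implementation): a session that never saw its "last" chunk and has been idle past the timeout can be expunged with a single check.

```python
# Hypothetical in-memory model: session id -> {"finished": bool, "last_touch": seconds}
UPLOAD_TIMEOUT = 3600  # assumed idle timeout, in seconds

def expunge_stale(sessions, now):
    """Keep only sessions that either completed (saw their 'last' chunk)
    or are still within the idle timeout. Because completion is marked by
    the last chunk, cleanup never has to inspect individual chunks."""
    return {
        sid: s for sid, s in sessions.items()
        if s["finished"] or now - s["last_touch"] <= UPLOAD_TIMEOUT
    }
```

E.g. with `now=4000`, a finished session is kept, an unfinished one idle since `0` is expunged, and an unfinished one touched at `3000` survives.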

But that's an implementation detail, and I suspect isn't one that's meaningful or intentional. Your S3 approach should work fine btw; I've done it before in a prior life at a company shipping huge images, where $0.10/GB/month _really_ added up.

You lose the 'bells and whistles' of ECR, but those are pretty limited (imho).



In the case of a docker registry, isn’t the “final bit” just uploading the final manifest that actually references the layers you’re uploading?

At this point you’d validate that the layers exist and have been uploaded, otherwise you’d just bail out?

And those missing chunks would be handled by the normal registry GC, which evicts unreferenced layers?
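That validate-then-bail step can be sketched like this (a simplification, not a real registry's code; `blob_exists` is a hypothetical callable standing in for a HEAD request against blob storage):

```python
def validate_manifest(manifest, blob_exists):
    """Registry-side check at manifest push: every blob the manifest
    references (config + layers) must already have been uploaded,
    otherwise bail out. Blobs no manifest ever references are left
    for normal registry GC to evict."""
    digests = [manifest["config"]["digest"]]
    digests += [layer["digest"] for layer in manifest["layers"]]
    missing = [d for d in digests if not blob_exists(d)]
    if missing:
        raise ValueError(f"manifest references missing blobs: {missing}")
    return True
```

So the "final bit" is cheap: one existence check per referenced digest, and ordering of the earlier uploads never matters.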


It's been a long time, but I think you're correct. In my environment I didn't actually care: any failed push would be retried, so the layers would always eventually complete, and anything that for whatever reason didn't retry happened rarely enough that, at S3's prices, it wasn't worth doing anything clever.

I think OCI ordered manifests first to "open the flow", with the close happening only once the manifest's last entry had completed - which is what led to this ordered-upload problem.

If your uploader knows where the chunks are going to live (OCI is more or less CAS, so it's predictable), it can just put them there in any order as long as it's all readable before something tries to pull it.
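The "predictable" part is just content addressing: the storage key is a pure function of the blob's bytes, so an uploader can compute destinations up front and write in any order. A minimal sketch, assuming the common `blobs/sha256/<digest>` key layout (the layout itself is an assumption, not mandated here):

```python
import hashlib

def blob_key(content: bytes) -> str:
    """Content-addressed storage key: where a blob lives depends only on
    its bytes, so chunks can be uploaded in any order and readers can
    locate them before the manifest is finalized."""
    return f"blobs/sha256/{hashlib.sha256(content).hexdigest()}"
```

Any two uploaders (or a retry) pushing the same bytes land on the same key, which is also why dedup across images falls out for free.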




