If all you want is to generate zip files and not read them, you can use uncompressed blocks, which is barely more complicated than tar and a lot faster and less memory-hungry than many compression libraries: https://tools.ietf.org/html/rfc1951#page-11
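For reference, here is roughly what that looks like in zip itself: method 0 ("stored") writes the file bytes verbatim, so a single-entry archive is just three fixed-layout records plus a CRC-32. This is a minimal sketch I wrote from the zip record layouts, not the parent commenter's code; timestamps are zeroed and the name is assumed to be ASCII.

```javascript
// Single-entry zip, method 0 ("stored"): no compression at all.
function crc32(bytes) {
  const table = [];
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) c = c & 1 ? 0xEDB88320 ^ (c >>> 1) : c >>> 1;
    table[n] = c;
  }
  let crc = 0xFFFFFFFF;
  for (const b of bytes) crc = (crc >>> 8) ^ table[(crc ^ b) & 0xFF];
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

function storedZip(name, data) { // name: ASCII string, data: Uint8Array
  const nameBytes = new TextEncoder().encode(name);
  const crc = crc32(data);
  const out = new Uint8Array(30 + nameBytes.length + data.length +
                             46 + nameBytes.length + 22);
  const dv = new DataView(out.buffer);
  let p = 0;
  // --- local file header (30 bytes + name) ---
  dv.setUint32(p, 0x04034b50, true); p += 4;       // signature "PK\3\4"
  dv.setUint16(p, 20, true); p += 2;               // version needed
  p += 4;                                          // flags 0, method 0 = stored
  p += 4;                                          // DOS mod time/date (zeroed)
  dv.setUint32(p, crc, true); p += 4;              // CRC-32 of the data
  dv.setUint32(p, data.length, true); p += 4;      // compressed size
  dv.setUint32(p, data.length, true); p += 4;      // uncompressed size (same)
  dv.setUint16(p, nameBytes.length, true); p += 4; // name length, extra = 0
  out.set(nameBytes, p); p += nameBytes.length;
  out.set(data, p); p += data.length;              // the raw file bytes
  // --- central directory record (46 bytes + name) ---
  const cdStart = p;
  dv.setUint32(p, 0x02014b50, true); p += 4;       // signature "PK\1\2"
  dv.setUint16(p, 20, true); p += 2;               // version made by
  dv.setUint16(p, 20, true); p += 2;               // version needed
  p += 8;                                          // flags, method, time/date
  dv.setUint32(p, crc, true); p += 4;
  dv.setUint32(p, data.length, true); p += 4;
  dv.setUint32(p, data.length, true); p += 4;
  dv.setUint16(p, nameBytes.length, true); p += 2;
  p += 16;                                         // extra/comment/disk/attrs
                                                   // + local header offset = 0
  out.set(nameBytes, p); p += nameBytes.length;
  // --- end of central directory (22 bytes) ---
  const cdSize = p - cdStart;
  dv.setUint32(p, 0x06054b50, true); p += 4;       // signature "PK\5\6"
  p += 4;                                          // disk numbers (zero)
  dv.setUint16(p, 1, true); p += 2;                // entries on this disk
  dv.setUint16(p, 1, true); p += 2;                // entries in total
  dv.setUint32(p, cdSize, true); p += 4;           // central directory size
  dv.setUint32(p, cdStart, true); p += 4;          // central directory offset
  return out;                                      // comment length stays 0
}
```

For multiple entries you would repeat the local header + data pair per file, append one central-directory record each, and fix up the counts and offsets in the end record.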
I've been using this for a JavaScript bookmarklet (< 2000 characters) which automatically downloads all images from a web page on click.
And if one wants to list the contents of an archive, zip is better and orders of magnitude faster than tar, as it also has a "central directory", so one doesn't have to read the whole archive to get the list.
I've used that "central directory" approach even over the internet with a great success (downloading just that segment instead of the whole archive to get a list of what is inside).
But how did you make a bookmarklet that produces a zip archive, even an uncompressed one?
I assume that the author concatenated the image blobs into one buffer, then manually generated a file table. I believe there is a way to offer a “Download” button for buffer objects in JS.
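There is indeed: wrap the buffer in a Blob and click a temporary link with the download attribute. A minimal sketch (the function name and MIME type are my own choices):

```javascript
// Offer an in-memory byte buffer as a file download.
function saveBytes(bytes, filename) {
  const url = URL.createObjectURL(new Blob([bytes], { type: "application/zip" }));
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;   // suggests a filename instead of navigating
  a.click();
  URL.revokeObjectURL(url); // release the object URL once triggered
}
```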
A bookmarklet is just a bookmark whose URL starts with `javascript:`. Put the code into a link's href, and then you can right-click the link to bookmark it. Now, when you want to run that JavaScript code on some website, visit the website and click on the bookmark.
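For example, this href is a complete toy bookmarklet that counts the images on the current page (illustrative only, not the zip one):

```javascript
javascript:(function(){ alert(document.images.length + " images"); })();
```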
I know the basics of bookmarklets. What I wanted to know is which API calls you used inside the < 2000 character bookmarklet to achieve the functionality you described. I also believed that bookmarklets are limited to the security context of the web page, so I don't understand how you did all that. If I understood you correctly, you are generating a zip out of the images from the web page using only the bookmarklet?
> I know bookmarklets basics. What I wanted to know is which API calls you used inside of the < 2000 character bookmarklet to achieve the functionality you described.
To create the archive data, I used a Uint8Array and wrote the bytes into it.
To download the images, I used XMLHttpRequest.
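Presumably along these lines: with responseType set to "arraybuffer", the response arrives as raw bytes that can be CRC'd and copied into the archive buffer. A sketch (the promise wrapper and the name fetchBytes are mine, not the commenter's):

```javascript
// Fetch a URL as raw bytes via XMLHttpRequest.
function fetchBytes(url) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.responseType = "arraybuffer"; // bytes, not a decoded string
    xhr.onload = () => xhr.status === 200
      ? resolve(new Uint8Array(xhr.response))
      : reject(new Error("HTTP " + xhr.status + " for " + url));
    xhr.onerror = () => reject(new Error("failed to fetch " + url));
    xhr.send();
  });
}
```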
> I also believed that the bookmarklets are limited to the security context of the web page
That unfortunately seems to be true. However, besides the images on the same domain, it should also be possible to download images from other domains via cross-origin requests, if the remote server cooperates and sets the respective header; but I have not looked into that yet.
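"The respective header" is the Access-Control-Allow-Origin response header, which only the remote server can set. A quick probe of whether an image is readable cross-origin might look like this (a sketch; the URL is made up):

```javascript
// Succeeds only if the remote server replies with a matching
// Access-Control-Allow-Origin header; otherwise the browser blocks the read.
fetch("https://images.example.com/photo.jpg", { mode: "cors" })
  .then(res => res.arrayBuffer())
  .then(buf => console.log("readable, " + buf.byteLength + " bytes"))
  .catch(() => console.log("blocked by CORS"));
```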
This is such a great idea! You don't need compression because images on the web are generally compressed already. Can you share any of your code that does this?
Can you post your bookmarklet? I tried making a similar thing (using TamperMonkey) and ran into browser security restrictions preventing downloading of the files.
I tried TamperMonkey for a different project, but it kept deleting my scripts, so I haven't pursued it any further.
Bookmarklets execute in the context of the currently opened website, so downloading images from the same origin is usually not a problem. Remote images can be an issue, though. I tried setting `Access-Control-Allow-Origin: *`, but then other downloads would fail, so I left it as it was, since it worked for the websites I was interested in.