That's too bad; the site got my hopes up when I saw the 'unlimited size' capability, but it's really limited by the size of the target.
If you have any desire to support this kind of usage, I for one don't actually care what happens after the backup. It would not be an updated backup, just a fresh copy of the files to a set of disks every so often.
In addition to the target no longer being a limiting factor size-wise, using independently mountable disks (no spanning, RAID, etc.) can simplify recovery and improve long-term recoverability of files. Even if a few disks fail in storage, the files on the others remain perfectly accessible without special software or hardware, and if multiple backup disk sets exist, the total number of files lost would likely be very low.
So I've been thinking about this scenario. Tell me this - say you have a mix of files of different sizes. Would you expect the program to try to pack as many files as possible onto each drive (to maximize per-disk use), or is copying files in order and switching disks when the next file doesn't fit OK?
If it's the latter, I think I might be able to cook up something command-line for this sort of case. This is still a very niche scenario, but you aren't the first one to ask about it.
Thanks for at least considering it! The latter would work in my scenario (just copy until the next file doesn't fit, then pause and prompt for a new target). The files I work with are seldom larger than a few hundred MB, and many are only a few dozen KB. Even in the case where a few GB are wasted per disk due to unusually large files showing up, a few GB is nothing compared to the overall size of drives these days. I routinely waste at least that much when doing similar multi-disk backups manually by selecting files which fit. Simplicity and ease of use are big benefits when it comes to backing up hundreds of TBs.
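The sequential-fill strategy described above (copy in order, start a new disk when the next file doesn't fit) can be sketched roughly like this. This is a hypothetical illustration, not part of any existing tool; the file names, sizes, and capacity are made-up example values:

```python
# Sketch of the sequential-fill strategy: copy files in order and
# start a new disk whenever the next file doesn't fit on the current one.

def plan_disks(files, capacity):
    """Partition (name, size) pairs into per-disk lists, preserving order."""
    disks = [[]]          # list of disks, each a list of file names
    free = capacity       # space remaining on the current disk
    for name, size in files:
        if size > capacity:
            raise ValueError(f"{name} ({size}) exceeds disk capacity")
        if size > free:   # next file doesn't fit: this is where the tool
            disks.append([])   # would pause and prompt for a new target
            free = capacity
        disks[-1].append(name)
        free -= size
    return disks

# Hypothetical example: 1000-unit disks, four files of mixed sizes.
files = [("a.dat", 700), ("b.dat", 400), ("c.dat", 300), ("d.dat", 650)]
print(plan_disks(files, 1000))  # [['a.dat'], ['b.dat', 'c.dat'], ['d.dat']]
```

Note that a.dat leaves 300 units unused on the first disk even though c.dat would fit there; as discussed, that per-disk waste is accepted in exchange for keeping the copy order simple and predictable.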