The blog post you are linking to is outdated: they are honoring robots.txt files. From the FAQ:
> Some sites are not available because of robots.txt or other exclusions. What does that mean? Such sites may have been excluded from the Wayback Machine due to a robots.txt file on the site or at a site owner’s direct request.
If you exclude them in your robots.txt file, they will also retroactively remove your site from the index.
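As a sketch, the exclusion would look something like this in your robots.txt, assuming the crawler still identifies itself as `ia_archiver` (the user-agent the Wayback Machine has historically honored for exclusions):

```
# Ask the Internet Archive's crawler not to index (and, per the FAQ,
# to retroactively exclude) this site. The user-agent name is an
# assumption based on its historically documented identifier.
User-agent: ia_archiver
Disallow: /
```

Note that under this behavior the directive is not just prospective: once the crawler sees it, previously archived snapshots become unavailable too.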
I would absolutely love an option that meant "archive and make available forever from this point backwards" to protect against domain expirations and re-registration (possibly by domain squatters or content farms).
I hope you're right! The lack of an update on that post, combined with the FAQ saying the opposite, makes it even harder for me to know what their policy actually is. Respecting robots.txt is a civilized thing to do, and I hope they do it.