
You're absolutely right, but this is a strange situation. It would take a pretty good NLP system to figure out that this is the case, and all else being equal I don't know how the archive could be told not to archive certain data other than through robots.txt, which is the machine-readable form. Otherwise you could always use the absence of one to do an end-run around the other.

So for practical reasons it is probably best both to claim copyright on the page and to set up robots.txt to specifically forbid the pieces you don't want spread around from being indexed/archived.
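As a minimal sketch, something like the following robots.txt would ask both general crawlers and the Internet Archive's crawler to skip a directory (the /private/ path is just a placeholder; ia_archiver is the user-agent the Wayback Machine has historically honored):

  # Block the Internet Archive's crawler from the sensitive directory
  User-agent: ia_archiver
  Disallow: /private/

  # Block all other well-behaved crawlers from the same directory
  User-agent: *
  Disallow: /private/

Keep in mind this is purely advisory: compliant crawlers respect it, but nothing technically enforces it, which is why the copyright claim still matters.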

In the eyes of the law probably only the copyright bit matters.



