The practice of "cloaking" has been around for ages, and I'm sure Google has (or at least had) countermeasures against it.
I'm not sure on what grounds someone could sue over crawling from random, unaffiliated addresses as long as the crawling isn't causing a denial of service (Google could always fetch robots.txt from its main IPs and use that to throttle the crawling from random IPs, so as to remain compliant).
> The reason being that some pay-walled news article websites won't be indexed properly, as the “unofficial IP-ed” Googlebot will not get the paywalled content.
Good riddance? That would be a welcome change.