I’ve been wondering to myself for many years now whether the web is for humans or machines. I personally can’t think of a good reason to specifically try to gate bots when it comes to serving content. Trying to post content or trigger actions could obviously be problematic under many circumstances.
But I find that when it comes to simple serving of content, human vs. bot is not usually what you’re trying to filter or block on. As long as a given client is not abusing your systems, then why do you care if the client is a human?
> As long as a given client is not abusing your systems, then why do you care if the client is a human?
Well, that's the rub. The bots are abusing the systems. The bots are accessing the content at rates thousands of times faster and more often than humans. The bots also have access patterns unlike your expected human audience (downloading gigabytes or terabytes of data multiple times, over and over).
And these bots aren't some being with rights. They're tools unleashed by humans. It's humans abusing the systems. These are anti-abuse measures.
Then you look up their IP address's abuse contact, send an email and get them to either stop attacking you or get booted off the internet so they can't attack you.
And if that doesn't happen, you go to their ISP's ISP and get their ISP booted off the Internet.
Actual ISPs and hosting providers take abuse reports extremely seriously, mostly because they're terrified of getting kicked off by their own ISP. And there's no end to that chain - just a series of ISPs from them to you, and you might end up convincing your own ISP or some intermediary to block traffic from them. However, as we've seen recently, rules don't apply if enough money is involved. But I'm not sure whether these shitty interim solutions come from ISPs ignoring abuse when money is involved, or from people not knowing that abuse reporting is taken seriously to begin with.
Anyone know if it's legal to return a never-ending stream of /dev/urandom based on the user-agent?
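Legality aside (IANAL), mechanically it's trivial. A minimal sketch in Python, where the user-agent substrings are purely hypothetical placeholders rather than a vetted bot list:

```python
# Sketch only: stream never-ending random bytes to clients whose
# User-Agent matches a blocklist; serve a normal page to everyone else.
# The substrings below are hypothetical examples, not a real bot list.
import os
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BLOCKED_UA_SUBSTRINGS = ("badbot", "examplescraper")  # hypothetical

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if any(s in ua for s in BLOCKED_UA_SUBSTRINGS):
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            try:
                while True:                          # never-ending body
                    self.wfile.write(os.urandom(64 * 1024))
            except (BrokenPipeError, ConnectionResetError):
                pass                                 # client gave up
            return
        body = b"hello, human\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

The catch is that it ties up one of your connections and some of your own upstream bandwidth per bot, so it's more of a tarpit than a free win.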
> Then you look up their IP address's abuse contact, send an email and get them to either stop attacking you or get booted off the internet so they can't attack you.
You would be surprised at how many ISPs will not respond. Sure, Hetzner will respond, but these abusers are not using Hetzner at all. If you actually study the problem, these are residential ISPs in various countries (including the US and Europe, mind you). At best the ISP will respond one-by-one to their customers and scan their computers (and by that point the abusers have already switched to another IP block), and at worst the ISP literally has no capability to control this because they cannot trace their CGNATted connections (short of blocking connections to your site, which is definitely nuclear).
> And if that doesn't happen, you go to their ISP's ISP and get their ISP booted off the Internet.
Again, the IP blocks are rotated, so by the time they would respond you need to do the whole reporting rigmarole again. Additionally, these ISPs would instead suggest that you blackhole these requests or use a commercial solution (aka Cloudflare or something else), because at the end of the day the residential ISPs are national entities that would quite literally trigger geopolitical concerns if you disconnected them.
They’re not cutting you off for torrenting because they think it’s the right thing to do. They’re cutting you off for torrenting because it costs them money if rights holders complain.
> These the same residential providers that people complain cut them off for torrenting?
Assume that you are in the shoes of Anubis users. Do you have a reasonable legal budget? No? From experience, most ISPs will not really respond unless either their own network has become unstable as a consequence, or legal has advised them to cooperate. Realistically, by the time they read your plea the activity has already died off (on their network), and the best they can do is give you the netflows to do your own investigation.
> You think they wouldn't cut off customers who DDoS?
This is not your typical DDoS where the stability of the network links is affected (at the ISP level, not just your server); this is a very asymmetrical one that seemingly blends in as normal browsing. Unless you have a reasonable legal budget, they will suggest using RTBH (https://www.cisco.com/c/dam/en_us/about/security/intelligenc...) or a commercial filtering solution if need be. And that assumes they're sympathetic to your pleas; in the worst case you're dealing with state-backed ISPs that are known not to respond at all.
When I was migrating my server and checking logs, I saw a slew of hits in the rolling logs. I reverse-resolved the IP and found a company specializing in "Servers with GPUs". I found their website; they have "Datacenters in the EU", but the company itself is located elsewhere.
They're certainly positioning themselves for providing scraping servers for AI training. What will they do when I say that one of their customers just hit my server with 1000 requests per second? Ban the customer?
Let's be rational. They'll laugh at that mail and delete it. Bigger players use "home proxying" services which use residential blocks for egress and make one request per host. Some people are cutting whole countries off with firewalls.
Playing by the old rules won't get you anywhere, because all these gentlemen have taken their computers and gone to work elsewhere. Now all we have are people who think they need no permission because what they do is awesome, anyway (which it is not).
A startup hosting provider you say - who's their ISP? Does that company know their customer is a DDoS-for-hire provider? Did you tell them? How did they respond?
At the minimum they're very likely to have a talk with their customer: "keep this shit up and you're outta here."
Please, read literally any article about the ongoing problem. The IPs are basically random, come from residential blocks, requests don’t reuse the same IP more than a bunch of times.
Are you sure that's AI? I get requests that are overtly from AI crawlers, and almost no other requests. Certainly all of the high-volume crawler-like requests overtly say that they're from crawlers.
And those residential proxy services cost their customer around $0.50/GB up to $20/GB. Do with that knowledge what you will.
> Then you look up their IP address's abuse contact, send an email
Good luck with that. Have you ever tried? AWS and Google have abuse emails. Do you think they read them? Do you think they care? It is basically impossible to get AWS to shut down a customer's systems, regardless of how hard you try.
I believe ARIN has an abuse email registered for a Google subnet, with a comment that they believe it's correct, but that no one answered the last time they tried it, three years ago.
ARIN and the other internet registries don't maintain these records themselves; the owners of the IP netblocks do. Some registries have introduced mandatory abuse contact information (I think at least RIPE) and send a link to confirm the mailbox exists.
The hierarchy is: abuse contact of netblock. If ignored: abuse contact of AS. If ignored: Local internet registry (LIR) managing the AS. If ignored: Internet Registry like ARIN.
I see a possibility of automation here.
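The first hop, at least, is easy to script: all the regional registries expose abuse contacts over RDAP. A rough sketch in Python, assuming the public rdap.org bootstrap redirector; it only covers the first step of that chain and ignores the nested entities some registries use:

```python
# Rough sketch: look up the abuse contact(s) for an IP via RDAP.
# Uses the public rdap.org bootstrap service, which redirects to the
# right regional registry. No retries, no AS-level fallback, and
# nested entities (common in some registries' responses) are skipped.
import json
import urllib.request

def abuse_contacts(ip: str) -> list[str]:
    with urllib.request.urlopen(f"https://rdap.org/ip/{ip}") as resp:
        data = json.load(resp)
    emails = []
    for entity in data.get("entities", []):
        if "abuse" not in entity.get("roles", []):
            continue
        # vcardArray looks like ["vcard", [["email", {}, "text", "abuse@..."], ...]]
        for field in entity.get("vcardArray", ["vcard", []])[1]:
            if field[0] == "email":
                emails.append(field[3])
    return emails

if __name__ == "__main__":
    print(abuse_contacts("203.0.113.7"))  # placeholder documentation IP
```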
Also, report to DNSBL providers like Spamhaus. They rely on reports to blacklist single IPs, escalate to whole blocks and then the next larger subnet, until enough customers are affected.
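The lookup side of that (checking whether an IP is already listed) is just a reversed-octet DNS query. A small sketch against Spamhaus ZEN; note that Spamhaus restricts free queries (e.g. through large public resolvers), so treat it as illustrative:

```python
# Sketch: check an IPv4 address against a DNSBL such as Spamhaus ZEN.
# Listed IPs resolve to a 127.0.0.x answer; unlisted ones return NXDOMAIN.
# Spamhaus limits free lookups (notably via big public resolvers), so
# this is illustrative rather than production guidance.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))   # 1.2.3.4 -> 4.3.2.1
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True                                   # got an answer: listed
    except socket.gaierror:
        return False                                  # NXDOMAIN: not listed
```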
Well, that's the meta-rub: if they're abusing, block abuse. Rate limits are far simpler, anyway!
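A per-client token bucket really is only a few lines; a minimal in-memory, single-process sketch (real deployments usually push this into the reverse proxy or a shared store so the limit holds across workers):

```python
# Minimal per-IP token-bucket rate limiter. In-memory and single-process,
# so a sketch of the idea rather than something to deploy as-is.
import time

RATE = 2.0      # tokens refilled per second
BURST = 20.0    # bucket capacity

_buckets: dict[str, tuple[float, float]] = {}  # ip -> (tokens, last_seen)

def allow(ip: str) -> bool:
    now = time.monotonic()
    tokens, last = _buckets.get(ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1.0:
        _buckets[ip] = (tokens, now)
        return False          # over the limit: respond with 429
    _buckets[ip] = (tokens - 1.0, now)
    return True
```

The weak link, of course, is the "per client" part once the requests come from tens of thousands of addresses.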
In the interest of bringing the AI bickering to HN: I think one could accurately characterize "block bots just in case they choose to request too much data" as discrimination! Robots of course don't have any rights so it's not wrong, but it certainly might be unwise.
Not when the bots are actively programmed to thwart them by using far-flung IP address carousels, request pacing, spoofed user agents and similar techniques. It's open war these days.
That very much reads like the rant of someone who is sick and tired of the state of things.
I'm afraid it doesn't change anything in and of itself, and some sort of solution for only allowing the users you're okay with is what's direly needed all across the web.
Though reading about the people trying to mine crypto on a CI solution, it feels like sometimes it won't just be LLM scrapers you need to protect against, but any number of malicious people.
At that point, you might as well run an invite only community.
Source Hut implemented Anubis, and it works really well. I almost never see the waiting screen, and afterwards it whitelists me for a very long time, so I can work without any limitations.
I just worry about the idea of running public/free services on the web, due to the potential for misuse and bad actors, though making things paid also seems sensible, e.g. what was linked: https://man.sr.ht/ops/builds.sr.ht-migration.md
OK, but my answer was about how to react to request pacing.
If the abuser is using request pacing to make fewer requests, then that's making the abuser less abusive. If your complaint is instead that the pacing is tuned just low enough not to bring your server down while still costing you money, then you can counteract that by tuning your rate limits even further down.
The tens of thousands of distinct IP addresses are another (and perfectly valid) issue, but that was not the point I was answering.
> I personally can’t think of a good reason to specifically try to gate bots
There's been numerous posts on HN about people getting slammed, to the tune of many, many dollars and terabytes of data from bots, especially LLM scrapers, burning bandwidth and increasing server-running costs.
I'm genuinely skeptical that those are all real LLM scrapers. For one, a lot of content is in CommonCrawl and AI companies don't want to redo all that work when they can get some WARC files from AWS.
I'm largely suspecting that these are mostly other bots pretending to be LLM scrapers. Does anyone even check if the bots' IP ranges belong to the AI companies?
For a long time there have been spammers scraping in search of email addresses to spam. There are all kinds of scraper bots with unknown purpose. It's the aggregate of all of them hitting your server, potentially several at the same time.
When I worked at Wikimedia (so ending ~4 years ago) we had several incidents of bots getting lost in a maze of links within our source repository browser (Phabricator) which could account for > 50% of the load on some pretty powerful Phabricator servers (Something like 96 cores, 512GB RAM). This happened despite having those URLs excluded via robots.txt and implementing some rudimentary request throttling. The scrapers were using lots of different IPs simultaneously and they did not seem to respect any kind of sane rate limits. If googlebot and one or two other scrapers hit at the same time it was enough to cause an outage or at least seriously degrade performance.
Eventually we got better at rate limiting and put more URLs behind authentication but it wasn't an ideal situation and would have been quite difficult to deal with had we been much more resource-constrained or less technically capable.
If a bot claims to be from an AI company, but isn't from the AI company's IP range, then it's lying and its activity is plain abuse. In that case, you shouldn't serve them a proof of work system; you should block them entirely.
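For the crawlers that are honest about their identity, checking is doable. Google, for example, documents forward-confirmed reverse DNS for verifying Googlebot; here's a sketch of that check (most other crawlers publish IP-range files instead, which you'd test with the ipaddress module):

```python
# Sketch: forward-confirmed reverse DNS, the check Google documents for
# verifying Googlebot. Reverse-resolve the IP, require a googlebot.com /
# google.com hostname, then confirm the hostname resolves back to the IP.
import socket

def is_verified_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        infos = socket.getaddrinfo(host, None)         # forward-confirm
    except socket.gaierror:
        return False
    return ip in {info[4][0] for info in infos}
```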
Blocking abusive actors can be very non-trivial. The proof-of-work system mitigates the amount of effort that needs to be spent identifying and blocking bad actors.
Why? When there are hundreds of hopeful AI/LLM scrapers more than willing to do that work for you, what possible reason would you have to do that work yourself? The more typical and common human behavior is perfectly capable of explaining this. No reason to reach for some kind of underhanded conspiracy theory when simple incompetence and greed are more than adequate to explain it.
Google really wants everyone to use its spyware-embedded browser.
There are tons of other "anti-bot" solutions that don't have a conflict of interest with those goals, yet the ones that become popular all seem to further them instead.
The good thing about proof of work is that it doesn't specifically gate bots.
It may have some other downsides; for example, I don't think Google is possible in a world where everyone requires proof of work (some may argue that's a good thing). But it doesn't specifically gate bots; it gates mass scraping.
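The mechanics are hashcash-style: the server issues a random challenge, the client burns CPU finding a nonce whose hash clears a difficulty target, and the server verifies it with a single hash. A bare-bones sketch; Anubis works along these lines, but this is not its exact protocol:

```python
# Bare-bones hashcash-style proof of work: find a nonce such that
# sha256(challenge + nonce) starts with DIFFICULTY zero hex digits.
# Illustrative only; not Anubis's actual challenge format.
import hashlib
import os

DIFFICULTY = 4  # leading zero hex digits (~16**4 = 65k hashes on average)

def make_challenge() -> str:
    return os.urandom(16).hex()          # server side: random challenge

def solve(challenge: str) -> int:
    nonce = 0                            # client side: brute-force search
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

if __name__ == "__main__":
    c = make_challenge()
    n = solve(c)
    print(n, verify(c, n))
```

A human pays that cost once per visit; a scraper paying it on millions of pages is where the economics bite.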
Things like Google are still possible; operators would just need to whitelist such services.
Alternatively shared resources similar in spirit to common crawl but scaled up could be used. That would have the benefit of democratizing the ability to create and operate large scale search indexes.
As both a website host and website scraper I can see both sides of it. The website owners have very little interest in opening their data up; if they did they'd have made an API for it. In my case it's scraping supermarket prices so obviously big-grocery doesn't want a spot light on their arbitrary pricing patterns.
It's frustrating for us scrapers, but from their perspective opening up to bots is just a liability. Besides bots spamming the servers and getting around rate limits with botnets and noise, any new features built on top of bots probably won't benefit them. If I made a bot service that split your order across multiple supermarkets, or bought items as prices drop, that wouldn't benefit the companies. All the work they've put into their site is to bring them to the status quo, and they want to keep it that way. The companies don't want an open internet; only we do. I'd like to see some transparency laws so that large companies need to publish their pricing.
The issue is not whether it’s a human or a bot. The issue is whether you’re sending thousands of requests per second for hours, effectively DDOSing the site, or if you’re behaving like a normal user.
Example problem that I've seen posted about a few times on HN: LLM scrapers (or at least, an explosion of new scrapers) mindlessly crawling every single HTTP endpoint of a hosted git service instead of just cloning the repo (and entirely ignoring robots.txt).
The point of this is that there has recently been a massive explosion in the number of bots that blatantly, aggressively, and maliciously ignore and attempt to bypass anti-abuse gates (mass IP/VPN switching, user-agent swapping, etc.).