
For small scale sites you don't even need to do much that requires human intervention. Most bots (or at least most bot-actions) seem to invest very little in sophisticated techniques and rely instead on finding vulnerable servers by casting a very wide net. As long as that is true, you can filter out 99+% of the noise by applying very simple but slightly bespoke techniques.

As long as there continue to be enough cookie-cutter blog/forum/ecommerce sites out there for the bots to exploit, very simple techniques (JS-populated form fields or request parameters, very basic validation of the HTTP headers, taking into account the rate or frequency at which requests are made, etc.) will quickly and cheaply identify almost all of the bot activity.
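The checks above can be sketched in a few lines. This is a hedged, minimal illustration, not any framework's real API: the function name, the `js_token` field, and the token value are all invented for the example. The idea is that a tiny piece of inline JS fills a hidden form field before submit; naive bots that never execute JS fail that check, and the header and rate checks catch much of the rest.

```python
from collections import deque

# Hypothetical token that a small inline <script> writes into a hidden
# form field; bots that POST the form without running JS won't have it.
FORM_TOKEN = "not-a-bot"

def looks_like_bot(form, headers, history, now, max_per_minute=30):
    """Return True if a request trips any of the cheap heuristics.

    form    -- dict of submitted form fields
    history -- deque of prior request timestamps for this client
    now     -- current timestamp in seconds
    """
    # 1. JS-populated form field: missing or wrong means no JS ran.
    if form.get("js_token") != FORM_TOKEN:
        return True
    # 2. Very basic header validation: real browsers send these.
    for required in ("User-Agent", "Accept"):
        if not headers.get(required):
            return True
    # 3. Request rate: drop timestamps older than 60s, then check
    #    how many requests this client made in the last minute.
    while history and now - history[0] > 60:
        history.popleft()
    history.append(now)
    return len(history) > max_per_minute
```

In practice the per-client `history` would live in a dict keyed by client IP (or behind something like memcached); the point is only that each check is a couple of lines and costs almost nothing per request.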

Of course sophisticated or dedicated bots will still pose a problem, but assuming you're not just standing up a popular off-the-shelf platform without any hardening or customization, you'll need to get pretty big (or otherwise valuable) before attracting that kind of attention.

A reasonable analogy here is the observation that simply running sensitive services on non-standard ports (e.g., not running SSH on port 22) will eliminate a ridiculous volume of malware probes against your system. To be clear, that's no substitute for actual robust security practices -- you almost certainly shouldn't have something like SSH world-visible to begin with -- but given how trivially easy it is to do something like changing the default port for services you're not expecting the public at large to reach, it's absurd that servers are compromised every day by dumb scripts blindly probing the Internet for well-known, long-ago-patched vulnerabilities.
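For what the port move looks like concretely: in OpenSSH it's a one-line change to sshd_config (2222 here is just an arbitrary example port; pick your own, and open it in the firewall before reloading, or you can lock yourself out of a remote box):

```
# /etc/ssh/sshd_config -- move SSH off the default port 22
Port 2222

# then reload the daemon, e.g.:
#   sudo systemctl reload sshd
```

This does nothing against an attacker who actually scans your host, which is exactly why it's a noise filter and not a security measure.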



