> The operational tempo achieved proves the use of an autonomous model rather than interactive assistance. Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.
The assumption that no human could ever (program a computer to) do multiple things per second, or have their code do different things depending on the result of the previous request, is... interesting. A trivial script does both, as the sketch below shows.
(The observation is not original to me; someone on Twitter pointed it out.)
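For illustration, a minimal sketch of that kind of scripting: plain concurrent requests at a sustained multiple-per-second rate, with follow-ups that depend on each result. No model is involved; the base URL and paths are made up for the example.

```python
# Minimal sketch: a plain script doing several requests per second and
# branching on the results. BASE and PATHS are illustrative, not from
# the report.
import concurrent.futures
import urllib.request

BASE = "http://localhost:8080"  # assumed test server
PATHS = ["/admin", "/login", "/api", "/backup"]

def probe(path: str) -> str:
    """Return 'found' if the path answers 200, else 'missing'."""
    try:
        with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as resp:
            return "found" if resp.status == 200 else "missing"
    except Exception:
        return "missing"

# Fan out requests concurrently, then act differently depending on the
# result of the previous request -- the thing "no human's code could do".
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(zip(PATHS, pool.map(probe, PATHS)))
    follow_ups = [p + "/config" for p, r in results.items() if r == "found"]
    list(pool.map(probe, follow_ups))
```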
Great point; it might just be pure ignorance. Even OSS pentesting tooling such as Metasploit has great capabilities. I can see how an LLM could be leveraged to build custom modules on top of those tools, or how you could add basic LLM "decision making" (sketched below), but this is just another additive tool in the chain.
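As a hedged sketch of what that additive layer might look like: an existing scanner runs, and a model picks the next step from a fixed menu based on the output. The endpoint, model name, and action menu are all assumptions for illustration, not anything confirmed by the report.

```python
# Minimal sketch: wiring basic LLM "decision making" around existing
# pentest tooling. Assumes nmap is installed and an OpenAI-compatible
# chat endpoint at LLM_URL; both are illustrative choices.
import json
import subprocess
import urllib.request

LLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed endpoint

def run_scan(target: str) -> str:
    # Plain service/version scan; any existing tool could sit here.
    out = subprocess.run(
        ["nmap", "-sV", target], capture_output=True, text=True, timeout=300
    )
    return out.stdout

def choose_next_step(scan_output: str) -> str:
    # Ask the model to pick one follow-up action from a fixed menu,
    # based on the previous result -- the "decision making" layer.
    body = json.dumps({
        "model": "local-model",  # assumed model name
        "messages": [{
            "role": "user",
            "content": "Given this scan output, reply with exactly one of: "
                       "BRUTE_FORCE_SSH, CHECK_WEB_VULNS, STOP.\n\n"
                       + scan_output,
        }],
    }).encode()
    req = urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    result = run_scan("scanme.nmap.org")  # nmap's sanctioned test host
    print("next step:", choose_next_step(result))
```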
Every time you have your work "checked over by other serious people", it eliminates about 90% of the mistakes. Have it checked over twice and 99% of the mistakes have been eliminated, and so on. But it never gets to 0% mistakes. That's my experience, anyway.
Serious people like to look at things through a magnifying glass. Which makes them miss a lot.
I've seen printed books checked by paid professionals that contained a "replace all" applied without context, creating a grammar error on every single page. Or ones where everyone just forgot to add page numbers. Or a large cookbook where the index and page numbers didn't match, making it almost impossible to navigate.
I'm talking about pre-AI work, with a publisher. Apparently it wasn't obvious to them.
Edited November 14 2025:
Added an additional hyperlink to the full report in the initial section
Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"