I feel we're going to have a hard time over the next months with a stream of these "magic tools" that solve already-solved problems and try to milk money out of managers who have no clue.
Static analysis paired with AI is the middle ground that makes sense to me (working in a similar security space). But the hard part needs to be regular computer science and the AI comes second.
> But the hard part needs to be regular computer science and the AI comes second.
Yes, indeed. The AI could be used to prefilter the list of warnings generated by static analysis to reduce the number of false positives. To achieve that, an AI could use the history of the project's static analysis results to find likely false positives. Or an AI could propose a patch to avoid a warning. If that patch is automatically compiled and passed through the test suite and the whole CI pipeline, it could reduce the manual effort of dealing with the findings of static analysis tools.
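The prefilter step described above could look something like this minimal sketch. Everything here is hypothetical: `Warning`, `prefilter`, `triage`, and the `classify_with_llm` callback are illustrative stand-ins, not any real tool's API; the model call is stubbed out with a plain function.

```python
# Sketch: AI-assisted triage of static-analysis warnings.
# Step 1: drop warnings historically marked false-positive.
# Step 2: let a model (stubbed here) score what's left.

from dataclasses import dataclass

@dataclass(frozen=True)
class Warning:
    rule: str       # e.g. "null-deref"
    location: str   # stable fingerprint, e.g. "file.c:42"

def prefilter(warnings, history):
    """Drop warnings whose (rule, location) was a known false positive."""
    known_fp = {key for key, verdict in history.items()
                if verdict == "false-positive"}
    return [w for w in warnings if (w.rule, w.location) not in known_fp]

def triage(warnings, history, classify_with_llm):
    """Prefilter by project history, then keep what the model flags as real."""
    survivors = prefilter(warnings, history)
    return [w for w in survivors if classify_with_llm(w) == "likely-real"]

if __name__ == "__main__":
    # Past triage verdicts, keyed by (rule, location).
    history = {("unused-var", "a.c:10"): "false-positive"}
    warnings = [Warning("unused-var", "a.c:10"),
                Warning("null-deref", "b.c:7")]
    # Stub classifier standing in for a real model call.
    kept = triage(warnings, history,
                  lambda w: "likely-real" if w.rule == "null-deref" else "noise")
    print([w.rule for w in kept])  # ['null-deref']
```

The point is the ordering: the cheap, deterministic history lookup runs first, and the expensive, fallible model call only sees what survives it.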
But leaving out the static analysis tools would lose so much value.
We combine static analysis + LLMs to do better detection, triaging and auto-fixing because static analysis alone is broken in many ways.
We've been able to reduce tickets by ~30% for customers with false-positive detection, and we can now detect classes of vulnerabilities in business and code logic that were previously undetectable.