The promise of static application security testing (SAST) has always been the “shift-left” dream: catching vulnerabilities before they ever hit production. But for too long, that promise has been undermined by a frustrating reality: an overwhelming volume of alerts and high false-positive rates. This noise leads to alert fatigue, wasted developer time and a loss of trust in the very tools designed to protect our codebase.
Meanwhile, large language models (LLMs) have emerged as powerful code analysis tools, capable of pattern recognition and code generation. Yet they suffer from weaknesses of their own: slow processing, inconsistency and the potential for hallucination.
In our opinion, the path to next-generation code security is not choosing one over the other, but integrating their strengths. So, along with Kiarash Ahi, founder of Virelya Intelligence Research Labs and co-author of the framework, I decided to do exactly that. Our hybrid framework combines the deterministic rigor and speed of traditional SAST with the contextual reasoning of a fine-tuned LLM, delivering a system that doesn’t just find vulnerabilities but also validates them. The results were stark: a 91% reduction in false positives compared with standalone SAST tools, transforming security from a reactive burden into an integrated, more efficient process.
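
To make that division of labor concrete, here is a minimal sketch of the validation loop in Python. Everything in it is illustrative: the report schema, the Finding fields, the prompt wording and the query_llm client are assumptions made for the sketch, not our production implementation. What matters is the shape of the pipeline: the SAST tool proposes candidate findings deterministically, and the LLM adjudicates each one using the surrounding code as context.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    rule_id: str
    file: str
    line: int
    message: str


def load_sast_findings(report_path: str) -> list[Finding]:
    # Parse a generic JSON report; this schema is illustrative, not any
    # specific tool's format (a SARIF parser would slot in here instead).
    raw = json.loads(Path(report_path).read_text())
    return [
        Finding(f["rule_id"], f["file"], f["line"], f["message"])
        for f in raw["findings"]
    ]


def code_context(finding: Finding, radius: int = 10) -> str:
    # Pull the lines around the flagged location so the model reasons
    # about the finding in context rather than in isolation.
    lines = Path(finding.file).read_text().splitlines()
    start = max(finding.line - 1 - radius, 0)
    end = min(finding.line + radius, len(lines))
    return "\n".join(lines[start:end])


def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a fine-tuned model; wire this
    # to whatever inference client you actually use.
    raise NotImplementedError("connect to your model endpoint")


def llm_validate(finding: Finding, snippet: str) -> bool:
    # The LLM acts as a second-stage judge over a deterministic finding.
    prompt = (
        "You are a security triage assistant. Given a SAST finding and "
        "the surrounding code, answer TRUE_POSITIVE or FALSE_POSITIVE.\n\n"
        f"Rule: {finding.rule_id}\nMessage: {finding.message}\n\n"
        f"Code:\n{snippet}"
    )
    return query_llm(prompt).strip().upper().startswith("TRUE")


def triage(report_path: str) -> list[Finding]:
    # SAST proposes candidates quickly and deterministically; the LLM
    # filters out the ones that do not hold up in context.
    return [
        f for f in load_sast_findings(report_path)
        if llm_validate(f, code_context(f))
    ]
```

Keeping the SAST stage authoritative for discovery and the LLM strictly as a validator is the design choice that bounds the model’s inconsistency: it can only suppress findings, never invent new ones.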
