
Researchers at Protect AI have released Vulnhuntr, a free, open source tool that can find zero-day vulnerabilities in Python codebases using Anthropic’s Claude artificial intelligence (AI) model.
The tool, available on GitHub, provides detailed analysis of the code, proof-of-concept exploits for the vulnerabilities identified, and confidence ratings for each flaw, Protect AI said in its announcement.
Vulnhuntr breaks the codebase into smaller chunks rather than overwhelming the large language model’s (LLM) context window by loading in entire files at once. By analyzing the code in a loop, the tool maps out the application and reconstructs the call chain from user input to server output. This way, the LLM can focus on specific sections of the codebase, which the research team says helps reduce both false positives and false negatives.
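That chunk-and-loop strategy can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not Vulnhuntr's actual implementation: the `mock_llm_analyze` function stands in for a real Claude API call, and the chunking simply splits a module into one chunk per top-level function.

```python
# Hypothetical sketch of chunked, iterative LLM analysis: split the code into
# per-function chunks, feed one chunk at a time to the model, and follow the
# model's requests for more context until a call chain from user input to a
# dangerous sink is reconstructed.
import ast
import textwrap


def split_into_chunks(source: str) -> dict[str, str]:
    """Split a Python module into one chunk per top-level function."""
    tree = ast.parse(source)
    return {
        node.name: ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
    }


def mock_llm_analyze(chunk: str, chain: list[str]) -> dict:
    """Stand-in for an LLM call: flags a chunk containing an RCE-style sink,
    and names any callee whose source it wants to see next."""
    vulnerable = "os.system" in chunk  # toy heuristic, not real analysis
    expand = None
    if "run_command(" in chunk and "def run_command" not in chunk:
        expand = "run_command"
    return {"vulnerable": vulnerable, "expand": expand}


def trace_call_chain(source: str, entry: str) -> list[str]:
    """Iteratively analyze chunks, following the model's context requests,
    and return the reconstructed call chain from the entry point."""
    chunks = split_into_chunks(source)
    chain, current = [], entry
    while current is not None and current in chunks:
        chain.append(current)
        result = mock_llm_analyze(chunks[current], chain)
        if result["vulnerable"]:
            break
        current = result["expand"]
    return chain


demo = textwrap.dedent("""
    import os

    def handle_request(user_input):
        return run_command(user_input)

    def run_command(cmd):
        os.system(cmd)  # potential RCE sink
""")

print(trace_call_chain(demo, "handle_request"))  # ['handle_request', 'run_command']
```

Because each iteration only sends one chunk plus the chain traced so far, the prompt stays well within the model's context window even for large codebases.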
Various prompt-engineering techniques guide the LLM in the analysis.
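One common prompt-engineering pattern for this kind of tool is to pin the model to a single vulnerability class per query and demand structured output. The template below is purely illustrative; the wording, the `VULN_PROMPTS` table, and the `build_prompt` helper are assumptions, not Vulnhuntr's actual prompts.

```python
# Illustrative vulnerability-specific prompt template (an assumption, not the
# tool's real prompts): one focused instruction per vulnerability class, plus
# the current code chunk and the call chain traced so far.

VULN_PROMPTS = {
    "LFI": (
        "Trace whether any user-controlled value reaches a file-open call "
        "without path normalization or an allow-list check."
    ),
    "SSRF": (
        "Trace whether any user-controlled value reaches an outbound HTTP "
        "request without host validation."
    ),
}


def build_prompt(vuln_type: str, code_chunk: str, call_chain: list[str]) -> str:
    """Assemble a prompt that focuses the LLM on one vulnerability class,
    the current chunk, and the partial call chain."""
    return "\n\n".join([
        f"You are auditing Python code for {vuln_type} vulnerabilities.",
        VULN_PROMPTS[vuln_type],
        f"Call chain so far: {' -> '.join(call_chain)}",
        f"Code:\n{code_chunk}",
        "Respond with: a confidence score (1-10), the tainted parameter, "
        "and a proof-of-concept input if exploitable.",
    ])


prompt = build_prompt(
    "LFI", "def read(path): return open(path).read()", ["read"]
)
```

Keeping one vulnerability class per prompt narrows what the model must attend to, which is consistent with the article's point about focusing the LLM on specific sections of the codebase.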
The tool currently focuses on the following types of vulnerabilities that can be exploited remotely: arbitrary file overwrite (AFO), local file inclusion (LFI), server-side request forgery (SSRF), cross-site scripting (XSS), insecure direct object references (IDOR), SQL injection (SQLi), and remote code execution (RCE).
Vulnhuntr’s team says the tool has already discovered more than a dozen zero-day vulnerabilities in popular Python projects on GitHub, including gpt_academic, FastChat, and Ragflow.