Over 36 security vulnerabilities have been disclosed in open-source artificial intelligence (AI) and machine learning (ML) models, exposing various platforms to risks of remote code execution (RCE) and unauthorized data access. These flaws, detected in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported through Protect AI's Huntr bug bounty program, with some vulnerabilities receiving critical CVSS scores.
Critical Vulnerabilities in Lunary
The Lunary toolkit for production-grade language models (LLMs) is the most impacted, with two severe vulnerabilities:
Additionally, Lunary is affected by another IDOR issue (CVE-2024-7473, CVSS score: 7.5), where attackers can alter another user's prompt data by adjusting the user-controlled "id" parameter.
Flaws in ChuanhuChatGPT and LocalAI
In ChuanhuChatGPT, a critical path traversal vulnerability (CVE-2024-5982, CVSS score: 9.1) enables arbitrary code execution, sensitive data exposure, and directory creation.
LocalAI, an open-source self-hosted LLM platform, also faces two vulnerabilities:
Deep Java Library and NVIDIA NeMo AI Framework
The Deep Java Library (DJL) has an RCE vulnerability caused by an arbitrary file overwrite in the untar function (CVE-2024-8396, CVSS score: 7.8). NVIDIA has also patched a path traversal issue in its NeMo AI framework (CVE-2024-0129, CVSS score: 6.3), which could result in code execution and data tampering.
New Jailbreak Technique Exploits ChatGPT Safeguards
Alongside these security flaws, a recent report from Mozilla’s 0Day Investigative Network (0Din) describes a jailbreak method for OpenAI’s ChatGPT. By encoding malicious prompts in hexadecimal and emojis, attackers can bypass ChatGPT’s safety mechanisms and prompt it to perform harmful actions, such as SQL injection creation.
"This jailbreak technique takes advantage of language model optimizations," said researcher Marco Figueroa, "where instructions are followed step-by-step without broader context awareness."
Introducing Vulnhuntr for AI Security
To address zero-day vulnerabilities in Python-based AI/ML projects, Protect AI has introduced Vulnhuntr, an open-source static code analyzer. By splitting code into manageable segments that fit within LLM context limits, Vulnhuntr can identify security weaknesses from user input to server output and detect potential exploits across the project.
Protecting AI/ML Supply Chains
With open-source AI/ML frameworks becoming popular across industries, these security weaknesses highlight the importance of safeguarding the AI/ML supply chain. It is essential for users to apply timely updates and patches to prevent exploits and maintain data security.
© 2016 - 2025 Red Secure Tech Ltd. Registered in England and Wales under Company Number: 15581067