Open-Source AI Models Face Security Risks, Including RCE and Data Theft

Over 36 security vulnerabilities have been disclosed in open-source artificial intelligence (AI) and machine learning (ML) tools, exposing various platforms to risks of remote code execution (RCE) and unauthorized data access. These flaws, identified in projects such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty program, and several carry critical CVSS scores.

Critical Vulnerabilities in Lunary

Lunary, a production toolkit for large language models (LLMs), is the most impacted, with two severe vulnerabilities:

  1. CVE-2024-7474 (CVSS score: 9.1): This Insecure Direct Object Reference (IDOR) flaw allows authenticated users to view or delete other users' data, leading to unauthorized access and potential data loss.
  2. CVE-2024-7475 (CVSS score: 9.1): An access control weakness lets attackers modify the SAML configuration to log in as unauthorized users, exposing sensitive information.

Additionally, Lunary is affected by another IDOR issue (CVE-2024-7473, CVSS score: 7.5), where attackers can alter another user's prompt data by adjusting the user-controlled "id" parameter.
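The core of an IDOR flaw like CVE-2024-7473 is that the server trusts a client-supplied identifier without checking ownership. The following minimal sketch (hypothetical data and function names, not Lunary's actual code) shows the missing check:

```python
# Hypothetical in-memory store standing in for prompt records.
PROMPTS = {
    "p1": {"owner": "alice", "text": "draft"},
    "p2": {"owner": "bob", "text": "secret"},
}

def get_prompt(session_user: str, prompt_id: str) -> str:
    """Return a prompt only if the authenticated user owns it."""
    prompt = PROMPTS.get(prompt_id)
    # The IDOR fix: verify ownership instead of trusting the
    # client-controlled "id" parameter alone.
    if prompt is None or prompt["owner"] != session_user:
        raise PermissionError("not authorized")
    return prompt["text"]
```

Without the ownership comparison, any authenticated user could read or modify another user's record simply by changing the "id" value in the request.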

Flaws in ChuanhuChatGPT and LocalAI

In ChuanhuChatGPT, a critical path traversal vulnerability (CVE-2024-5982, CVSS score: 9.1) enables arbitrary code execution, sensitive data exposure, and directory creation.
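Path traversal bugs of this class arise when a client-supplied filename is joined to a storage directory without verifying that the resolved path stays inside it. A minimal defensive sketch (assumed directory layout, not ChuanhuChatGPT's actual code):

```python
import os

BASE_DIR = "/srv/app/uploads"  # assumed storage root for illustration

def safe_path(filename: str) -> str:
    """Resolve a user-supplied filename and reject traversal attempts."""
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    # Sequences like "../../etc/passwd" normalize to a path outside
    # BASE_DIR and are rejected here.
    if not candidate.startswith(os.path.realpath(BASE_DIR) + os.sep):
        raise ValueError("path traversal attempt blocked")
    return candidate
```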

LocalAI, an open-source platform for self-hosted LLMs, is affected by two vulnerabilities:

  1. CVE-2024-6983 (CVSS score: 8.8): An arbitrary code execution flaw that can be triggered via a malicious configuration file upload.
  2. CVE-2024-7010 (CVSS score: 7.5): A timing attack vulnerability that allows attackers to deduce valid API keys by analyzing response times, exposing the API keys one character at a time.
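The standard mitigation for a timing attack like CVE-2024-7010 is to compare secrets in constant time rather than with `==`, which can return as soon as the first character differs and thereby leak how much of a guessed key is correct. A hedged sketch (the key value is an assumption for illustration):

```python
import hmac

VALID_API_KEY = "sk-example-0000"  # assumed stored secret

def check_api_key(candidate: str) -> bool:
    # hmac.compare_digest takes time independent of where the strings
    # first differ, defeating character-by-character timing probes.
    return hmac.compare_digest(candidate, VALID_API_KEY)
```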

Deep Java Library and NVIDIA NeMo AI Framework

The Deep Java Library (DJL) has an RCE vulnerability caused by an arbitrary file overwrite in the untar function (CVE-2024-8396, CVSS score: 7.8). NVIDIA has also patched a path traversal issue in its NeMo AI framework (CVE-2024-0129, CVSS score: 6.3), which could result in code execution and data tampering.
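The DJL flaw belongs to a well-known archive-extraction class (often called "Zip Slip"): an entry named something like `../evil.txt` escapes the destination directory and overwrites arbitrary files. An illustrative guard in Python (DJL itself is Java; this is a sketch of the general mitigation, not its patch):

```python
import os
import tarfile

def safe_extract(archive_path: str, dest: str) -> None:
    """Extract a tar archive, rejecting entries that escape dest."""
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # An entry whose resolved path leaves dest would allow an
            # arbitrary file overwrite, so refuse the whole archive.
            if not target.startswith(os.path.realpath(dest) + os.sep):
                raise ValueError(f"unsafe tar entry: {member.name}")
        tar.extractall(dest)
```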

New Jailbreak Technique Exploits ChatGPT Safeguards

Alongside these security flaws, a recent report from Mozilla's 0Day Investigative Network (0Din) describes a jailbreak method for OpenAI's ChatGPT. By encoding malicious prompts in hexadecimal and emojis, attackers can bypass ChatGPT's safety mechanisms and steer it toward harmful outputs, such as generating SQL injection exploit code.

"This jailbreak technique takes advantage of language model optimizations," said researcher Marco Figueroa, "where instructions are followed step-by-step without broader context awareness."

Introducing Vulnhuntr for AI Security

To address zero-day vulnerabilities in Python-based AI/ML projects, Protect AI has introduced Vulnhuntr, an open-source static code analyzer. By splitting code into manageable segments that fit within LLM context limits, Vulnhuntr can identify security weaknesses from user input to server output and detect potential exploits across the project.
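The chunking idea can be illustrated in a few lines. This is a hypothetical sketch of the general technique (splitting source into context-window-sized segments), not Vulnhuntr's actual implementation:

```python
def chunk_source(source: str, max_lines: int = 80) -> list[str]:
    """Split source code into segments of at most max_lines lines,
    each small enough to fit in an LLM context window."""
    lines = source.splitlines()
    return [
        "\n".join(lines[i:i + max_lines])
        for i in range(0, len(lines), max_lines)
    ]
```

Each segment could then be passed to a model for review, with the analyzer stitching the per-segment findings back into a whole-project call chain from user input to server output.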

Protecting AI/ML Supply Chains

With open-source AI/ML frameworks becoming popular across industries, these security weaknesses highlight the importance of safeguarding the AI/ML supply chain. It is essential for users to apply timely updates and patches to prevent exploits and maintain data security.


© 2016 - 2025 Red Secure Tech Ltd. Registered in England and Wales under Company Number: 15581067