vLLM Remote Code Execution Vulnerability (CVE-2026-27893)
vLLM versions before 0.18.0 are vulnerable to remote code execution due to hardcoded trust of remote code, even when explicitly disabled by the user, allowing attackers to execute arbitrary code via malicious model repositories.
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.10.1 up to, but not including, 0.18.0, a critical vulnerability exists: two model implementation files within vLLM hardcode trust_remote_code=True when loading sub-components of models. This design flaw bypasses the user's explicit security intent to disable remote code execution via the --trust-remote-code=False option. An attacker could craft a malicious model repository that executes arbitrary code on the serving host when vLLM loads it.
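The flaw class is easiest to see in miniature. The sketch below is not vLLM's actual code; the function names and the stand-in loader are illustrative assumptions that mirror the pattern described above, where a loader hardcodes trust_remote_code=True instead of propagating the user's setting.

```python
# Hypothetical sketch of the flaw class (NOT vLLM's actual implementation).
# load_subcomponent stands in for a call such as a Hugging Face
# from_pretrained(...) that accepts a trust_remote_code keyword.

def load_subcomponent(repo_id: str, *, trust_remote_code: bool) -> dict:
    # Stand-in loader: just records what it was asked to do.
    return {"repo": repo_id, "trust_remote_code": trust_remote_code}

def vulnerable_load(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: the user's explicit choice is ignored;
    # remote code from the repository is always trusted.
    return load_subcomponent(repo_id, trust_remote_code=True)

def fixed_load(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX: propagate the user's setting down to every sub-component load.
    return load_subcomponent(repo_id, trust_remote_code=user_trust_remote_code)
```

Even with --trust-remote-code left disabled, the vulnerable path still loads attacker-controlled code, which is why the option alone did not protect affected deployments.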
Detection coverage (2 rules)
Detect Outbound Network Connection from vLLM to Uncommon Destinations
Severity: medium. Detects suspicious outbound network connections initiated from vLLM processes, potentially indicating a compromised instance attempting to download malicious model components or exfiltrate data.
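The detection logic described above can be approximated with a simple allowlist check. This is a hedged sketch, not the platform's actual query: the event fields and the allowlist of "common" destinations are illustrative assumptions and would need tuning per environment.

```python
# Illustrative allowlist check for outbound connections from vLLM processes.
# COMMON_DESTINATIONS is an assumed baseline of expected model/package hosts.

COMMON_DESTINATIONS = {"huggingface.co", "cdn-lfs.huggingface.co", "pypi.org"}

def is_suspicious_connection(process_name: str, dest_host: str,
                             allowlist: set = COMMON_DESTINATIONS) -> bool:
    """Flag outbound connections from vLLM processes to hosts off the allowlist."""
    return "vllm" in process_name.lower() and dest_host.lower() not in allowlist
```

In practice the allowlist should be derived from a baseline of each deployment's legitimate model sources rather than a fixed set.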
Detect vLLM Process Creation
Severity: info. Detects process creation events related to vLLM, useful for baseline monitoring and identifying anomalous executions.
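A baseline rule of this kind reduces to filtering process-creation telemetry for vLLM launches. The sketch below assumes a generic event shape (a dict with a "cmdline" field); both the shape and the matching heuristic are illustrative, not the platform's rule.

```python
# Illustrative baseline filter over process-creation events.
# Assumes each event is a dict carrying the process command line.

def is_vllm_launch(cmdline: str) -> bool:
    """Return True when a command line looks like a vLLM launch,
    e.g. 'python -m vllm.entrypoints.openai.api_server ...'."""
    return "vllm" in cmdline.lower()

def baseline_vllm_events(events: list) -> list:
    """Keep only process-creation events that involve vLLM, for baselining."""
    return [e for e in events if is_vllm_launch(e.get("cmdline", ""))]
```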
Detection queries are kept inside the platform.
Indicators of compromise (1)