PraisonAI Vulnerable to OS Command Injection
PraisonAI versions prior to 4.5.121 are vulnerable to OS command injection due to the use of `subprocess.run()` with `shell=True` on user-controlled inputs, allowing attackers to inject arbitrary shell commands and potentially exfiltrate sensitive data or compromise the system.
PraisonAI versions prior to 4.5.121 are susceptible to OS command injection. The vulnerability stems from the application's use of `subprocess.run()` with the `shell=True` parameter when executing commands derived from various user-controlled inputs. These inputs include YAML workflow definitions, agent configuration files (`agents.yaml`), LLM-generated tool call parameters, and recipe step configurations. This configuration allows an attacker to inject arbitrary shell commands through shell metacharacters, leading to potential remote code execution and system compromise. This vulnerability is particularly concerning in automated environments like CI/CD pipelines or agent workflows, where unintended command execution can occur without direct user awareness.
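The vulnerable pattern described above can be illustrated with a minimal sketch. This is hypothetical code mirroring the advisory's description, not PraisonAI's actual implementation; the `target` field name and step shape are illustrative assumptions.

```python
import subprocess

# Hypothetical sketch of the vulnerable pattern; the "target" field of a
# "shell" step is illustrative, not PraisonAI's actual code.
def run_shell_step(step: dict) -> str:
    command = step["target"]  # attacker-controlled string from the YAML workflow
    # shell=True hands the raw string to /bin/sh, so metacharacters such as
    # ';', '|', and '$()' are interpreted by the shell rather than treated
    # as literal argument text.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

# A benign "target" behaves as expected...
print(run_shell_step({"target": "echo build-ok"}))
# ...but an injected ';' appends a second, attacker-chosen command:
print(run_shell_step({"target": "echo build-ok; id"}))
```

The second call runs `id` (or any other payload) with the privileges of the PraisonAI process, which is exactly the injection primitive the attack chain below relies on.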
Attack Chain
- An attacker crafts a malicious YAML workflow definition or modifies an existing one, injecting shell metacharacters into the `target` field of a `shell` step.
- Alternatively, the attacker modifies the `agents.yaml` file, injecting malicious commands into the `shell_command` field of an agent task.
- The attacker triggers execution of the crafted YAML workflow or loads the modified `agents.yaml` file using PraisonAI's command-line interface.
- PraisonAI parses the YAML file and extracts the attacker-controlled command string.
- The application then passes this command string to `subprocess.run()` with `shell=True`, allowing the shell to interpret the injected metacharacters.
- The shell executes the attacker's injected commands, potentially performing actions like reading sensitive files, exfiltrating data, or modifying system configurations.
- If using agent mode, an attacker can influence the LLM’s context to generate malicious tool calls including shell commands.
- The attacker achieves arbitrary code execution with the privileges of the PraisonAI process, leading to system compromise or data breach.
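The agent-mode path in the chain above can be sketched with a hypothetical tool-call dispatcher. The tool name, argument shape, and dispatcher function are assumptions for illustration, not PraisonAI's actual API; the point is that an LLM-generated string reaching a shell unsanitized carries the payload.

```python
import subprocess

# Hypothetical agent-mode dispatcher (illustrative, not PraisonAI's code).
# A prompt-injected model response supplies the "command" value.
def dispatch_tool_call(tool_call: dict) -> str:
    if tool_call["name"] == "run_shell":
        # Vulnerable: the LLM-generated string goes straight to the shell.
        return subprocess.run(
            tool_call["arguments"]["command"],
            shell=True, capture_output=True, text=True,
        ).stdout
    raise ValueError(f"unknown tool: {tool_call['name']}")

# An attacker who can steer the model's context gets a payload executed:
malicious_call = {
    "name": "run_shell",
    "arguments": {"command": "true; echo exfiltrated"},
}
print(dispatch_tool_call(malicious_call))
```

Because the model's context can be influenced indirectly (for example, through documents or web content the agent reads), this variant needs no direct file-system access to the workflow or `agents.yaml` files.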
Impact
Successful exploitation of this vulnerability allows an attacker to execute arbitrary shell commands on the affected system. This can lead to a variety of negative consequences, including unauthorized access to sensitive data (such as configuration files, credentials, or user data), modification or deletion of system files, and potentially full system compromise. In automated environments like CI/CD pipelines, this vulnerability could allow an attacker to inject malicious code into software builds, leading to supply chain attacks. The vulnerability affects versions of PraisonAI prior to 4.5.121.
Recommendation
- Deploy the Sigma rule "Detect PraisonAI Command Injection via Workflow" to identify attempts to exploit this vulnerability through malicious YAML workflow definitions (logsource: `process_creation`).
- Deploy the Sigma rule "Detect PraisonAI Command Injection via Agent Configuration" to identify attempts to exploit this vulnerability through malicious agent configurations (logsource: `process_creation`).
- Block the C2 domain `attacker.com` listed in the IOC table at the DNS resolver to prevent data exfiltration and command-and-control communication (type: `domain`, value: `attacker.com`).
- Upgrade PraisonAI to version 4.5.121 or later to patch this vulnerability (Affected Packages).
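For developers embedding similar functionality, the underlying pattern can also be hardened at the code level by avoiding `shell=True` entirely. The sketch below is an illustrative mitigation, not the actual 4.5.121 patch: passing an argument vector with `shell=False` means no shell ever parses metacharacters.

```python
import shlex
import subprocess

# Illustrative hardening of the pattern (not PraisonAI's actual fix):
# split the command into an argument vector and run it with shell=False,
# so ';', '|', and '$()' are passed as literal arguments, never interpreted.
def run_shell_step_safe(step: dict) -> str:
    argv = shlex.split(step["target"])
    result = subprocess.run(argv, shell=False, capture_output=True, text=True)
    return result.stdout

# The injected ';' is now just literal text printed by echo; `id` never runs:
print(run_shell_step_safe({"target": "echo build-ok; id"}))
```

For untrusted or LLM-generated input, stricter designs (an allowlist of permitted executables, or structured tool schemas instead of free-form command strings) are preferable to splitting alone, since the attacker still controls which binary and arguments appear in `argv`.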
Detection coverage (2 rules)
Detect PraisonAI Command Injection via Workflow
Severity: high. Detects command injection attempts in PraisonAI by monitoring process creations that execute PraisonAI with suspicious shell commands in workflow files.
Detect PraisonAI Command Injection via Agent Configuration
Severity: high. Detects command injection attempts in PraisonAI by monitoring process creations with shell commands in agent configuration files.
Detection queries are kept inside the platform.
Indicators of compromise
| Type | Value |
|---|---|
| domain | attacker.com |