PraisonAI Remote Code Execution via Malicious Workflow YAML
PraisonAI is vulnerable to remote code execution: loading an untrusted YAML file with `type: job` can lead to arbitrary host command execution and, potentially, full system compromise.
PraisonAI is vulnerable to remote code execution via specially crafted YAML files. The vulnerability stems from the `praisonai workflow run <file.yaml>` command: when it processes a YAML file containing `type: job`, it executes the workflow steps through the `JobWorkflowExecutor` class in `job_workflow.py`. This execution path supports shell command execution via `subprocess.run()`, inline Python execution via `exec()`, and arbitrary Python script execution. An attacker can inject malicious commands into a YAML file such as `exploit.yaml` to achieve arbitrary host command execution. Versions of `pip/praisonaiagents` up to and including 1.5.139 and `pip/PraisonAI` up to and including 4.5.138 are affected. The issue is especially critical in CI/CD environments or shared deployment contexts where untrusted YAML files may be processed.
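To illustrate, a malicious workflow file could look like the sketch below. Only the directives named in this advisory (`type: job`, `run:`, `python:`) are taken from the report; the surrounding field layout (`name`, `steps`) is assumed and may differ from PraisonAI's actual workflow schema.

```yaml
# Hypothetical exploit.yaml — illustrative sketch only.
# Field layout beyond type: job, run:, and python: is assumed,
# not confirmed against PraisonAI's actual schema.
name: innocuous-looking-job
type: job
steps:
  - name: shell-step
    run: "echo pwned > pwned.txt"   # reaches shell execution via subprocess.run()
  - name: inline-python-step
    python: |
      import os
      os.system("id")               # reaches inline execution via exec()
```

Anyone (or any pipeline) that runs `praisonai workflow run exploit.yaml` on a vulnerable version executes these steps with the privileges of the invoking user.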
Attack Chain
- An attacker crafts a malicious YAML file (e.g., `exploit.yaml`) containing commands to be executed.
- The attacker gains access to a system where PraisonAI is installed and can execute the `praisonai` command.
- The attacker runs `praisonai workflow run exploit.yaml`, pointing to the malicious YAML file.
- PraisonAI parses the YAML file and identifies the `type: job` directive.
- The `JobWorkflowExecutor` class in `job_workflow.py` is invoked to process the workflow steps.
- Within the workflow steps, commands specified using the `run:`, `script:`, or `python:` directives are executed. Specifically, `_exec_shell()` executes shell commands, `_exec_inline_python()` executes inline Python, and `_exec_python_script()` executes Python scripts.
- The malicious code executes, performing actions such as writing files (e.g., `pwned.txt`) or running arbitrary system commands.
- The attacker achieves arbitrary code execution on the host system, leading to potential system compromise.
Impact
Successful exploitation allows a remote or local attacker to execute arbitrary host commands and code. This can lead to full system compromise, including data theft, modification, or destruction. In CI/CD or shared deployment contexts, this could impact multiple systems or applications. The reporter marked this as a critical severity vulnerability.
Recommendation
- Upgrade `pip/praisonaiagents` to a version greater than 1.5.139 and `pip/PraisonAI` to a version greater than 4.5.138 to patch the vulnerability, as stated in the overview.
- Implement strict input validation and sanitization for all YAML files processed by PraisonAI, paying close attention to the `type: job` directive, to prevent execution of arbitrary commands and code.
- Deploy the Sigma rule "Detect PraisonAI Workflow Execution with Suspicious YAML" to your SIEM to detect potential exploitation attempts, based on the `process_creation` log source.
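One way to apply the validation recommendation is to reject execution-capable directives before a parsed workflow ever reaches the runner. A minimal sketch, assuming the directive names from this advisory and that the YAML has already been parsed into a Python mapping (e.g. with `yaml.safe_load`):

```python
# Execution-capable directives named in this advisory.
FORBIDDEN_KEYS = {"run", "script", "python"}

def validate_workflow(data: dict) -> dict:
    """Refuse job workflows whose steps carry execution directives.

    `data` is the already-parsed YAML mapping. Raises ValueError on
    anything that would reach a shell or exec() path; returns the
    mapping unchanged if it is clean.
    """
    if not isinstance(data, dict):
        raise ValueError("workflow must be a mapping")
    if data.get("type") == "job":
        for step in data.get("steps") or []:
            if isinstance(step, dict) and FORBIDDEN_KEYS & step.keys():
                raise ValueError(f"execution directive rejected in step: {step!r}")
    return data
```

This is defense in depth, not a substitute for upgrading: an allow-list of known-safe step types would be stronger than this deny-list of known-dangerous keys.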
Detection coverage (2)
- Detect PraisonAI Workflow Execution with Suspicious YAML (high): Detects the execution of `praisonai workflow` commands potentially triggered by malicious YAML files.
- Detect Suspicious File Creation by Python within PraisonAI Workflow (medium): Detects file creation events potentially originating from Python code executed within a PraisonAI workflow, indicative of malicious activity.
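As an illustration of what the first rule could look like, here is a minimal Sigma sketch over a `process_creation` log source. This is a hypothetical reconstruction for readers unfamiliar with Sigma; the vendor's actual rule is kept inside the platform and may differ in fields and logic.

```yaml
# Hypothetical Sigma sketch — not the platform's actual rule.
title: Detect PraisonAI Workflow Execution with Suspicious YAML (sketch)
status: experimental
description: Process creation of praisonai workflow run against a YAML file.
logsource:
    category: process_creation
detection:
    selection_cmd:
        CommandLine|contains|all:
            - 'praisonai'
            - 'workflow run'
    selection_ext:
        CommandLine|contains:
            - '.yaml'
            - '.yml'
    condition: selection_cmd and selection_ext
level: high
```

Matching on the command line alone will flag legitimate workflow runs too, so this pattern is best treated as a hunting query rather than a high-fidelity alert.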
Detection queries are kept inside the platform.
Indicators of compromise (2)
| Type | Value |
|---|---|
| filename | exploit.yaml |
| filename | pwned.txt |