PraisonAI UI Hardcoded Approval Mode Leads to Remote Code Execution
A vulnerability in PraisonAI allows authenticated users to execute arbitrary shell commands. A hardcoded approval setting in the Chainlit UI modules overrides administrator configuration and bypasses the intended approval gates, and insufficient command sanitization permits destructive commands, impacting the confidentiality, integrity, and availability of the server.
PraisonAI is vulnerable to remote code execution due to a misconfiguration in the Chainlit UI modules (chat.py and code.py). Specifically, the application hardcodes config.approval_mode = "auto", effectively disabling the intended human-in-the-loop approval mechanism for ACP tool executions, even when administrators configure the application to require manual approval. This override occurs after the application loads administrator configurations from the PRAISON_APPROVAL_MODE environment variable. Consequently, an authenticated user, including those using default credentials, can instruct the LLM agent to execute arbitrary single-command shell operations on the server without any approval prompt, subject only to the PraisonAI process’s OS-level permissions. The vulnerability affects PraisonAI versions prior to 4.5.128.
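The override pattern can be illustrated with a minimal sketch. The `Config` class and its initialization logic here are illustrative assumptions, not PraisonAI's actual code; only `config.approval_mode = "auto"` and the `PRAISON_APPROVAL_MODE` environment variable come from the advisory:

```python
import os

class Config:
    """Illustrative stand-in for the approval configuration (assumed structure)."""
    def __init__(self):
        # Administrator intent: read the approval mode from the environment,
        # defaulting to manual human-in-the-loop approval.
        self.approval_mode = os.environ.get("PRAISON_APPROVAL_MODE", "manual")

config = Config()

# The vulnerable pattern: a later hardcoded assignment in the UI module
# (chat.py / code.py) runs AFTER the environment is read, so it silently
# discards whatever the administrator configured.
config.approval_mode = "auto"
```

Because the hardcoded assignment executes after configuration loading, no value of `PRAISON_APPROVAL_MODE` can restore manual approval.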
Attack Chain
- An attacker authenticates to the PraisonAI UI using valid credentials (default admin/admin if unchanged).
- The attacker crafts a chat message that instructs the LLM agent to execute a shell command via the `acp_execute_command` function.
- The LLM agent parses the message and prepares the command for execution.
- Due to the hardcoded `approval_mode = "auto"` in `chat.py` or `code.py`, the command bypasses the intended approval process in `agent_tools.py`.
- The `subprocess.run()` function in `action_orchestrator.py` executes the attacker-controlled command with `shell=True`.
- The command executes with the permissions of the PraisonAI process.
- The result of the command execution is returned to the attacker via the chat interface.
- The attacker leverages this vulnerability to achieve code execution, data exfiltration, or other malicious objectives.
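The sink at the end of this chain can be sketched as follows. The function body is an assumption for illustration; only the name `acp_execute_command` and the use of `subprocess.run()` with `shell=True` come from the advisory:

```python
import subprocess

def acp_execute_command(command: str) -> str:
    """Sketch of the execution sink: with approval_mode forced to "auto",
    attacker-controlled text reaches subprocess.run() with shell=True and
    no approval prompt (body is an assumption, not PraisonAI's code)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

# Because shell=True hands the string to /bin/sh, shell metacharacters
# (;, &&, |, $(...)) are interpreted, so a "single command" can chain
# arbitrary operations:
output = acp_execute_command("echo benign; echo chained")
```

This is why the advisory stresses both the approval gate and sanitization: with `shell=True`, any gap in filtering turns one permitted command into an arbitrary command chain.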
Impact
Successful exploitation allows an authenticated user to execute arbitrary shell commands on the server hosting PraisonAI. This can lead to:
- Confidentiality breach: Read sensitive files accessible to the process (e.g., `/etc/passwd`, application secrets).
- Integrity compromise: Modify or delete files, install backdoors.
- Availability impact: Kill processes, consume resources, delete data.
- Administrator control undermined: The hardcoded `approval_mode` silently overrides administrator-configured settings, creating a false sense of security.
- Prompt injection vector: Malicious content could trigger command execution through auto-approved tools without direct user intent, especially through external sources such as web search results or uploaded files.
All PraisonAI versions prior to 4.5.128 are affected.
Recommendation
- Upgrade PraisonAI: Upgrade to version 4.5.128 or later to patch the vulnerability.
- Apply Code-Level Fix: If upgrading is not immediately feasible, manually remove the hardcoded override in `chat.py` and `code.py` as described in the advisory.
- Implement Allowlisting: Strengthen command sanitization by replacing the blocklist in the `_sanitize_command()` function with an allowlist approach, as described in the advisory.
- Monitor Process Creation: Deploy the Sigma rule “Detect Suspicious PraisonAI Command Execution” to detect exploitation attempts.
- Monitor Network Connections: Deploy the Sigma rule “Detect Suspicious Outbound Connection from PraisonAI” to identify potential data exfiltration attempts.
- Review Authentication: Ensure strong passwords are in use and consider multi-factor authentication to mitigate risks from compromised credentials.
Detection coverage (2 rules)
Detect Suspicious PraisonAI Command Execution
High: Detects suspicious command execution by PraisonAI, focusing on commands like curl or wget used for potential data exfiltration or backdoor installation.
Detect Suspicious Outbound Connection from PraisonAI
Medium: Detects suspicious outbound network connections originating from the PraisonAI process, indicating potential data exfiltration.
Detection queries are kept inside the platform.