FlowiseAI AirtableAgent Remote Code Execution via Prompt Injection
A remote code execution vulnerability exists in FlowiseAI's AirtableAgent.ts due to insufficient input verification when using Pandas, allowing attackers to inject malicious code into the prompt and execute arbitrary code via Pyodide.
FlowiseAI is susceptible to a remote code execution (RCE) vulnerability within the AirtableAgent function. This function, designed to retrieve and process datasets from Airtable.com, is flawed due to the lack of input sanitization. Specifically, user-supplied input is directly incorporated into a prompt template, which is then used to generate Python code executed by Pyodide. By injecting malicious payloads into the prompt, an attacker can bypass the intended behavior of the language model and execute arbitrary Python code, leading to complete system compromise. The vulnerability resides in AirtableAgent.ts and is triggered when the input variable, containing user-supplied data, is passed to the LLMChain without proper validation.
Attack Chain
- An attacker crafts a malicious payload containing a prompt injection designed to execute arbitrary code.
- The attacker submits the crafted payload via the FlowiseAI application to the AirtableAgent function.
- The payload is passed into the
inputvariable without sanitization and incorporated into the prompt template withinsystemPrompt. - The LLMChain uses the crafted prompt, including the injected code, to generate a
pythonCodestring. - The generated
pythonCodestring, containing the malicious code, is passed to thepyodide.runPythonAsync()function. - Pyodide executes the malicious Python code, leading to remote code execution on the FlowiseAI server.
- The attacker gains control of the FlowiseAI instance, potentially accessing sensitive data or pivoting to other systems on the network.
Impact
Successful exploitation of this vulnerability allows for complete remote code execution on the FlowiseAI server. This could lead to the compromise of sensitive data stored within Airtable datasets, as well as the potential for lateral movement to other systems on the network. The lack of input validation opens the door to attackers using prompt injection to bypass security measures and gain unauthorized access.
Recommendation
- Apply input sanitization and validation to the
inputvariable within the AirtableAgent function inAirtableAgent.tsbefore it is incorporated into the prompt template. - Implement strict output filtering on the
pythonCodegenerated by the LLMChain to prevent the execution of potentially malicious code. - Deploy the Sigma rule to detect prompt injection attempts targeting the AirtableAgent function.
- Regularly audit and update FlowiseAI dependencies, including Pyodide and Pandas, to address any known security vulnerabilities.
Detection coverage 2
Detect FlowiseAI AirtableAgent Prompt Injection
criticalDetects potential prompt injection attempts targeting the FlowiseAI AirtableAgent by looking for suspicious keywords in HTTP request parameters.
Detect Python Code Execution via Pyodide in FlowiseAI
highDetects execution of Python code via Pyodide within FlowiseAI, potentially indicating successful prompt injection and code execution.
Detection queries are kept inside the platform. Get full rules →