Skip to content
Threat Feed
critical advisory

FlowiseAI AirtableAgent Remote Code Execution via Prompt Injection

A remote code execution vulnerability exists in FlowiseAI's AirtableAgent.ts due to insufficient input verification when using Pandas, allowing attackers to inject malicious code into the prompt and execute arbitrary code via Pyodide.

FlowiseAI is susceptible to a remote code execution (RCE) vulnerability within the AirtableAgent function. This function, designed to retrieve and process datasets from Airtable.com, is flawed due to the lack of input sanitization. Specifically, user-supplied input is directly incorporated into a prompt template, which is then used to generate Python code executed by Pyodide. By injecting malicious payloads into the prompt, an attacker can bypass the intended behavior of the language model and execute arbitrary Python code, leading to complete system compromise. The vulnerability resides in AirtableAgent.ts and is triggered when the input variable, containing user-supplied data, is passed to the LLMChain without proper validation.

Attack Chain

  1. An attacker crafts a malicious payload containing a prompt injection designed to execute arbitrary code.
  2. The attacker submits the crafted payload via the FlowiseAI application to the AirtableAgent function.
  3. The payload is passed into the input variable without sanitization and incorporated into the prompt template within systemPrompt.
  4. The LLMChain uses the crafted prompt, including the injected code, to generate a pythonCode string.
  5. The generated pythonCode string, containing the malicious code, is passed to the pyodide.runPythonAsync() function.
  6. Pyodide executes the malicious Python code, leading to remote code execution on the FlowiseAI server.
  7. The attacker gains control of the FlowiseAI instance, potentially accessing sensitive data or pivoting to other systems on the network.

Impact

Successful exploitation of this vulnerability allows for complete remote code execution on the FlowiseAI server. This could lead to the compromise of sensitive data stored within Airtable datasets, as well as the potential for lateral movement to other systems on the network. The lack of input validation opens the door to attackers using prompt injection to bypass security measures and gain unauthorized access.

Recommendation

  • Apply input sanitization and validation to the input variable within the AirtableAgent function in AirtableAgent.ts before it is incorporated into the prompt template.
  • Implement strict output filtering on the pythonCode generated by the LLMChain to prevent the execution of potentially malicious code.
  • Deploy the Sigma rule to detect prompt injection attempts targeting the AirtableAgent function.
  • Regularly audit and update FlowiseAI dependencies, including Pyodide and Pandas, to address any known security vulnerabilities.

Detection coverage 2

Detect FlowiseAI AirtableAgent Prompt Injection

critical

Detects potential prompt injection attempts targeting the FlowiseAI AirtableAgent by looking for suspicious keywords in HTTP request parameters.

sigma tactics: execution techniques: T1059.004 sources: webserver, linux

Detect Python Code Execution via Pyodide in FlowiseAI

high

Detects execution of Python code via Pyodide within FlowiseAI, potentially indicating successful prompt injection and code execution.

sigma tactics: execution techniques: T1059.008 sources: process_creation, linux

Detection queries are kept inside the platform. Get full rules →