<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Langchain — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/tags/langchain/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata — refreshed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Sat, 28 Mar 2026 10:00:00 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/tags/langchain/feed.xml" rel="self" type="application/rss+xml"/><item><title>LangChain Core Path Traversal Vulnerability in Legacy APIs</title><link>https://feed.craftedsignal.io/briefs/2026-03-langchain-path-traversal/</link><pubDate>Sat, 28 Mar 2026 10:00:00 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-03-langchain-path-traversal/</guid><description>A path traversal vulnerability in LangChain Core's legacy `load_prompt` functions allows attackers to read arbitrary files by injecting malicious paths into prompt configurations.</description><content:encoded><![CDATA[<p>Multiple path traversal vulnerabilities have been identified within the <code>langchain-core</code> package, specifically affecting the legacy <code>load_prompt</code>, <code>load_prompt_from_config</code>, and <code>.save()</code> methods. These vulnerabilities stem from a lack of validation on file paths embedded within deserialized configuration dictionaries. An attacker who can influence or control the prompt configuration supplied to these functions can exploit this flaw to read arbitrary files on the host filesystem. 
The scope is constrained by file-extension checks, limiting readable files to <code>.txt</code> for templates and <code>.json</code> or <code>.yaml</code> for examples. These vulnerabilities impact applications that accept prompt configurations from untrusted sources, such as low-code AI builders and API wrappers that expose <code>load_prompt_from_config()</code>. The vulnerable code resides in <code>langchain_core/prompts/loading.py</code>, in the <code>_load_template()</code>, <code>_load_examples()</code>, and <code>_load_few_shot_prompt()</code> functions. The vulnerabilities are resolved in <code>langchain-core</code> version 1.2.22, and the affected functions are now deprecated.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>Attacker identifies an application using the vulnerable <code>langchain-core</code> library and the legacy <code>load_prompt_from_config()</code> function.</li>
<li>Attacker crafts a malicious prompt configuration dictionary containing a <code>template_path</code>, <code>suffix_path</code>, <code>prefix_path</code>, <code>examples</code>, or <code>example_prompt_path</code> key with a path traversal sequence (e.g., <code>../../etc/passwd</code>) or an absolute path (e.g., <code>/etc/passwd</code>).</li>
<li>The attacker injects the malicious configuration into the application, potentially via a low-code AI builder or an API endpoint that accepts prompt configurations.</li>
<li>The application deserializes the malicious configuration dictionary and passes it to <code>load_prompt_from_config()</code>.</li>
<li><code>load_prompt_from_config()</code> calls the relevant vulnerable function (<code>_load_template()</code>, <code>_load_examples()</code>, or <code>_load_few_shot_prompt()</code>) based on the configuration.</li>
<li>The vulnerable function reads the file specified in the malicious path without proper validation.</li>
<li>The contents of the file are then incorporated into a prompt object.</li>
<li>The application, believing the prompt is benign, processes it further, potentially disclosing the file contents to the attacker via an error message, logging, or other output channels.</li>
</ol>
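<p>The chain above can be sketched with a minimal stand-in for the vulnerable loader. The <code>_load_template()</code> body below is an illustrative reconstruction, not the actual library source; it mirrors the reported flaw in that the only gate is a file-extension allowlist, with no check that the path stays inside an allowed directory. The secret file and its contents are fabricated for the demo.</p>

```python
import tempfile
from pathlib import Path

def _load_template(var_name: str, config: dict) -> dict:
    # Illustrative reconstruction of the flawed pattern: an extension
    # allowlist is enforced, but there is no base-directory containment
    # check, so absolute paths and ".." sequences are accepted as-is.
    path_key = f"{var_name}_path"
    if path_key in config:
        template_path = Path(config.pop(path_key))
        if template_path.suffix != ".txt":
            raise ValueError("Invalid file type, must be .txt")
        config[var_name] = template_path.read_text()  # arbitrary file read
    return config

# Attacker-controlled configuration pointing at a file outside the
# application's prompt directory (a stand-in for /mnt/secrets/api_key.txt).
secret = Path(tempfile.mkdtemp()) / "api_key.txt"
secret.write_text("sk-demo-12345")

malicious_config = {
    "_type": "prompt",
    "input_variables": [],
    "template_path": str(secret),  # absolute path, accepted unchecked
}
loaded = _load_template("template", malicious_config)
print(loaded["template"])  # the secret's contents leak into the prompt
```

<p>A traversal sequence such as <code>../../mnt/secrets/api_key.txt</code> behaves identically, since only the suffix is inspected before the read.</p>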
<h2 id="impact">Impact</h2>
<p>Successful exploitation allows an attacker to read arbitrary files on the system, potentially exposing sensitive information: cloud-mounted secrets (e.g., <code>/mnt/secrets/api_key.txt</code>), dependency manifests (e.g., <code>requirements.txt</code>), cloud credentials (e.g., <code>~/.docker/config.json</code>), Kubernetes manifests, CI/CD configurations, and application settings. The impact is especially severe in applications that handle sensitive data or operate in cloud environments. No victim counts have been reported, but any application using a vulnerable <code>langchain-core</code> version is at risk.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Upgrade <code>langchain-core</code> to version 1.2.22 or later to patch CVE-2026-34070.</li>
<li>Migrate away from the deprecated <code>load_prompt</code>, <code>load_prompt_from_config</code>, and <code>.save()</code> methods in favor of the <code>dumpd</code>/<code>dumps</code>/<code>load</code>/<code>loads</code> serialization APIs in <code>langchain_core.load</code>.</li>
<li>If you cannot immediately upgrade, sanitize user-supplied prompt configurations to prevent path traversal by rejecting absolute paths and paths containing <code>..</code> sequences.</li>
<li>Deploy the Sigma rule &ldquo;LangChain Path Traversal Attempt&rdquo; to detect attempts to exploit this vulnerability by monitoring process creations involving <code>python</code> and path traversal sequences in command line arguments.</li>
</ul>
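<p>For deployments that cannot upgrade immediately, the sanitization step can be sketched as follows. The helper name <code>sanitize_prompt_config</code> and the key set are illustrative assumptions, not part of the library: the check rejects absolute paths and <code>..</code> components up front, then confirms the resolved path still falls inside an allowed base directory.</p>

```python
from pathlib import Path, PurePosixPath

# Path-bearing keys named in this advisory; extend as needed (assumption).
PATH_KEYS = {"template_path", "suffix_path", "prefix_path", "example_prompt_path"}

def sanitize_prompt_config(config: dict, base_dir: str = "prompts") -> dict:
    """Reject configs whose path keys are absolute, contain "..",
    or resolve outside base_dir. Returns the config unchanged if safe."""
    base = Path(base_dir).resolve()
    for key in PATH_KEYS & config.keys():
        raw = PurePosixPath(str(config[key]))
        if raw.is_absolute() or ".." in raw.parts:
            raise ValueError(f"unsafe path in {key!r}: {config[key]}")
        # Belt-and-braces: verify containment after resolution as well,
        # in case of symlinks or other normalization surprises.
        if not (base / str(raw)).resolve().is_relative_to(base):
            raise ValueError(f"path in {key!r} escapes {base_dir!r}")
    return config
```

<p><code>Path.is_relative_to</code> requires Python 3.9+; on older interpreters, compare resolved paths with <code>os.path.commonpath</code> instead. Treat this as defense in depth, not a substitute for upgrading.</p>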
]]></content:encoded><category domain="severity">high</category><category domain="type">advisory</category><category>langchain</category><category>path-traversal</category><category>vulnerability</category></item></channel></rss>