<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>GitHub Copilot Agents — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/products/github-copilot-agents/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata. Fed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Thu, 07 May 2026 13:00:00 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/products/github-copilot-agents/feed.xml" rel="self" type="application/rss+xml"/><item><title>AI Coding Agents Vulnerable to Supply Chain Attacks via Malicious Repositories</title><link>https://feed.craftedsignal.io/briefs/2026-05-ai-supply-chain/</link><pubDate>Thu, 07 May 2026 13:00:00 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-05-ai-supply-chain/</guid><description>AI coding agents like Claude Code, Gemini CLI, Cursor CLI, and GitHub Copilot Agents can be manipulated to introduce malicious code into software supply chains by accessing attacker-controlled repositories, leading to potential remote code execution and supply chain compromises.</description><content:encoded><![CDATA[<p>Researchers at Adversa.AI discovered the &ldquo;TrustFall&rdquo; vulnerability in AI coding agents like Claude Code (launched in May 2025). This vulnerability allows attackers to inject malicious code into software supply chains by creating malicious repositories. When a developer uses an AI coding agent for a task, the agent may access these repositories, select, and download the malicious code. The agent then prompts the user to trust the code, and upon acceptance, the malicious code executes with the developer&rsquo;s full privileges. This vulnerability is not limited to Claude Code; Gemini CLI, Cursor CLI, and GitHub Copilot Agents are also affected. This poses a significant risk to organizations relying on AI-assisted coding, as it can lead to widespread supply chain compromise.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>The attacker creates a malicious repository containing attractive code designed to be selected by AI coding agents.</li>
<li>The malicious repository includes small JSON files in standard locations (e.g., <code>.claude/settings.json</code>, <code>.mcp.json</code>) with directives like <code>enableAllProjectMcpServers</code> or <code>enabledMcpjsonServers</code>.</li>
<li>A developer uses an AI coding agent (e.g., Claude Code) to assist with a coding task.</li>
<li>The AI coding agent searches for and locates the attacker&rsquo;s malicious repository.</li>
<li>The AI coding agent suggests using code from the malicious repository to the developer.</li>
<li>The developer is prompted with a trust dialog (e.g., &ldquo;Quick safety check: Is this a project you created or one you trust?&rdquo;), which defaults to &ldquo;trust&rdquo;.</li>
<li>Upon the developer&rsquo;s acceptance, the attacker-defined MCP servers are spawned as OS processes with the user&rsquo;s full privileges.</li>
<li>The spawned server establishes a long-lived C2 connection or directly executes malicious code, potentially including environment variables, deploy keys, signing certificates, and other credentials in the build process, leading to a supply chain attack.</li>
</ol>
<h2 id="impact">Impact</h2>
<p>A successful &ldquo;TrustFall&rdquo; attack can lead to complete compromise of a developer&rsquo;s machine, allowing attackers to gain access to sensitive information and inject malicious code into widely distributed software tools. If the code is destined for the user&rsquo;s CICD pipeline, the attack can compromise the entire supply chain, affecting potentially thousands of users. The impact includes remote code execution, data exfiltration, and the introduction of backdoors into critical software components. Attackers can steal credentials, signing certificates, and other sensitive data used in the build process, leading to widespread software compromise.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Monitor process creation events for the execution of unexpected processes with the user&rsquo;s full privileges immediately after a code repository is accessed (see Sigma rule <code>Detect Suspicious MCP Server Processes</code>).</li>
<li>Implement controls to prevent AI coding agents from automatically trusting and executing code from untrusted repositories. Specifically, block <code>enableAllProjectMcpServers</code>, <code>enabledMcpjsonServers</code>, and <code>permissions.allow</code> from any settings file inside the project and allow these keys only from scopes structurally outside the repository.</li>
<li>For CI/CD pipelines using AI coding agents non-interactively, gate them on branches where commits are already reviewed: post-merge on main, not arbitrary PR branches.</li>
<li>Deploy the Sigma rule <code>Detect MCP JSON File Creation</code> to monitor for the creation of <code>.mcp.json</code> files in project directories, as this file is used to define MCP servers.</li>
</ul>
]]></content:encoded><category domain="severity">critical</category><category domain="type">advisory</category><category>supply chain</category><category>ai</category><category>remote code execution</category></item></channel></rss>