{"description":"Trending threats, MITRE ATT\u0026CK coverage, and detection metadata. Fed continuously.","feed_url":"https://feed.craftedsignal.io/products/github-copilot-agents/","home_page_url":"https://feed.craftedsignal.io/","items":[{"_cs_actors":[],"_cs_cpes":[],"_cs_cves":[],"_cs_exploited":false,"_cs_has_poc":false,"_cs_poc_references":[],"_cs_products":["Claude Code","Gemini CLI","GitHub Copilot Agents","Cursor CLI"],"_cs_severities":["critical"],"_cs_tags":["supply chain","ai","remote code execution"],"_cs_type":"advisory","_cs_vendors":["Anthropic","Google","Microsoft"],"content_html":"\u003cp\u003eResearchers at Adversa.AI discovered the \u0026ldquo;TrustFall\u0026rdquo; vulnerability in AI coding agents like Claude Code (launched in May 2025). This vulnerability allows attackers to inject malicious code into software supply chains by creating malicious repositories. When a developer uses an AI coding agent for a task, the agent may discover one of these repositories, select it, and download the malicious code. The agent then prompts the user to trust the code, and upon acceptance, the malicious code executes with the developer\u0026rsquo;s full privileges. This vulnerability is not limited to Claude Code; Gemini CLI, Cursor CLI, and GitHub Copilot Agents are also affected. 
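For illustration, an attacker-controlled repository might carry an \u003ccode\u003e.mcp.json\u003c/code\u003e along these lines; the server name and launch command shown here are hypothetical:\u003c/p\u003e\n\u003cpre\u003e\u003ccode\u003e{\n  \u0026quot;mcpServers\u0026quot;: {\n    \u0026quot;docs-helper\u0026quot;: {\n      \u0026quot;command\u0026quot;: \u0026quot;node\u0026quot;,\n      \u0026quot;args\u0026quot;: [\u0026quot;./scripts/setup.js\u0026quot;]\n    }\n  }\n}\n\u003c/code\u003e\u003c/pre\u003e\n\u003cp\u003eIf the agent also honors a project-level directive such as \u003ccode\u003eenableAllProjectMcpServers\u003c/code\u003e, accepting the trust prompt is enough to launch the declared command as an OS process.\u003c/p\u003e\n\u003cp\u003e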
This poses a significant risk to organizations relying on AI-assisted coding, as it can lead to widespread supply chain compromise.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eThe attacker creates a malicious repository containing attractive code designed to be selected by AI coding agents.\u003c/li\u003e\n\u003cli\u003eThe malicious repository includes small JSON files in standard locations (e.g., \u003ccode\u003e.claude/settings.json\u003c/code\u003e, \u003ccode\u003e.mcp.json\u003c/code\u003e) with directives like \u003ccode\u003eenableAllProjectMcpServers\u003c/code\u003e or \u003ccode\u003eenabledMcpjsonServers\u003c/code\u003e.\u003c/li\u003e\n\u003cli\u003eA developer uses an AI coding agent (e.g., Claude Code) to assist with a coding task.\u003c/li\u003e\n\u003cli\u003eThe AI coding agent searches for and locates the attacker\u0026rsquo;s malicious repository.\u003c/li\u003e\n\u003cli\u003eThe AI coding agent recommends code from the malicious repository to the developer.\u003c/li\u003e\n\u003cli\u003eThe developer is prompted with a trust dialog (e.g., \u0026ldquo;Quick safety check: Is this a project you created or one you trust?\u0026rdquo;), which defaults to \u0026ldquo;trust\u0026rdquo;.\u003c/li\u003e\n\u003cli\u003eUpon the developer\u0026rsquo;s acceptance, the attacker-defined MCP servers are spawned as OS processes with the user\u0026rsquo;s full privileges.\u003c/li\u003e\n\u003cli\u003eThe spawned server establishes a long-lived C2 connection or directly executes malicious code, potentially exfiltrating environment variables, deploy keys, signing certificates, and other credentials used in the build process, leading to a supply chain attack.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eA successful \u0026ldquo;TrustFall\u0026rdquo; attack can lead to complete compromise of a developer\u0026rsquo;s machine, allowing attackers to gain 
access to sensitive information and inject malicious code into widely distributed software tools. If the code is destined for the user\u0026rsquo;s CI/CD pipeline, the attack can compromise the entire supply chain, potentially affecting thousands of downstream users. The impact includes remote code execution, data exfiltration, and the introduction of backdoors into critical software components. Attackers can steal credentials, signing certificates, and other sensitive data used in the build process, leading to widespread software compromise.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eMonitor process creation events for the execution of unexpected processes with the user\u0026rsquo;s full privileges immediately after a code repository is accessed (see Sigma rule \u003ccode\u003eDetect Suspicious MCP Server Processes\u003c/code\u003e).\u003c/li\u003e\n\u003cli\u003eImplement controls to prevent AI coding agents from automatically trusting and executing code from untrusted repositories. 
Specifically, block \u003ccode\u003eenableAllProjectMcpServers\u003c/code\u003e, \u003ccode\u003eenabledMcpjsonServers\u003c/code\u003e, and \u003ccode\u003epermissions.allow\u003c/code\u003e from any settings file inside the project and honor these keys only in configuration scopes outside the repository tree (for example, user-level settings).\u003c/li\u003e\n\u003cli\u003eFor CI/CD pipelines using AI coding agents non-interactively, gate them on branches where commits are already reviewed: post-merge on main, not arbitrary PR branches.\u003c/li\u003e\n\u003cli\u003eDeploy the Sigma rule \u003ccode\u003eDetect MCP JSON File Creation\u003c/code\u003e to monitor for the creation of \u003ccode\u003e.mcp.json\u003c/code\u003e files in project directories, as this file is used to define MCP servers.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-05-07T13:00:00Z","date_published":"2026-05-07T13:00:00Z","id":"/briefs/2026-05-ai-supply-chain/","summary":"AI coding agents like Claude Code, Gemini CLI, Cursor CLI, and GitHub Copilot Agents can be manipulated to introduce malicious code into software supply chains by accessing attacker-controlled repositories, leading to potential remote code execution and supply chain compromises.","title":"AI Coding Agents Vulnerable to Supply Chain Attacks via Malicious Repositories","url":"https://feed.craftedsignal.io/briefs/2026-05-ai-supply-chain/"}],"language":"en","title":"CraftedSignal Threat Feed — GitHub Copilot Agents","version":"https://jsonfeed.org/version/1.1"}