{"description":"Trending threats, MITRE ATT\u0026CK coverage, and detection metadata — refreshed continuously.","feed_url":"https://feed.craftedsignal.io/tags/shadow-ai/","home_page_url":"https://feed.craftedsignal.io/","items":[
{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["high"],"_cs_tags":["AI","agentic-soc","shadow-ai"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eOrganizations are rapidly adopting AI tools, deploying AI agents, and building AI-powered software, which introduces new attack surfaces. These new surfaces are often unprotected by traditional security controls. This rapid adoption of AI has led to the rise of shadow AI, where employees adopt AI tools without oversight and engineering teams deploy models and agents without adequate visibility and runtime protection. CrowdStrike is releasing new innovations across its Falcon platform to extend AI detection and response (AIDR) capabilities to secure AI workforce adoption and development across endpoints, SaaS environments, and cloud environments. Specifically, CrowdStrike is providing AI Detection and Response for desktop AI applications like ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor. This will give security teams visibility into employees’ use of these AI apps, including full prompt content, and the ability to detect prompt attacks, data leaks, and access control and content policy violations.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAn attacker gains initial access to an endpoint, potentially through social engineering or exploiting a software vulnerability (Initial Access).\u003c/li\u003e\n\u003cli\u003eThe attacker leverages a personal AI agent like OpenClaw, taking advantage of its high system permissions and minimal governance, to execute terminal commands (Execution).\u003c/li\u003e\n\u003cli\u003eThe AI agent is used to browse the web and interact with files on the system (Execution).\u003c/li\u003e\n\u003cli\u003eThe attacker leverages the AI agent\u0026rsquo;s capabilities to autonomously take actions that mimic legitimate user behavior, making detection difficult (Defense Evasion).\u003c/li\u003e\n\u003cli\u003eThe AI agent is used to access sensitive data stored on the endpoint, such as credentials, intellectual property, or customer data (Credential Access, Discovery).\u003c/li\u003e\n\u003cli\u003eThe AI agent is used to exfiltrate the stolen data to an external server controlled by the attacker (Exfiltration).\u003c/li\u003e\n\u003cli\u003eThe attacker uses prompt injection techniques to manipulate AI agents to perform malicious actions (Execution).\u003c/li\u003e\n\u003cli\u003eThe attacker gains access to sensitive data, intellectual property, or customer data, leading to financial loss, reputational damage, or regulatory fines (Impact).\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eSuccessful exploitation of AI agents can lead to significant data breaches, exposing sensitive information like customer data, intellectual property, and financial records. The rise of \u0026ldquo;living off the AI land\u0026rdquo; (LOTAIL) techniques makes it harder to detect malicious activity, allowing attackers to remain undetected for longer periods. This can cause financial losses and reputational damage. The sectors most impacted are those heavily adopting AI, including technology, finance, and healthcare, though all sectors are potentially vulnerable.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDeploy the Falcon AIDR browser extension from the Falcon console to monitor employee AI interactions and detect prompt attacks and data leaks across a range of AI tools on endpoints (AIDR Feature).\u003c/li\u003e\n\u003cli\u003eUtilize AI Discovery in CrowdStrike Falcon Exposure Management to identify AI-related components such as LLMs, Model Context Protocol (MCP) servers, and IDE extensions running across endpoints (Falcon Exposure Management).\u003c/li\u003e\n\u003cli\u003eMonitor Falcon AIDR alerts for suspicious activities related to Microsoft Copilot Studio agents, including prompt injection attacks, data leaks, and policy violations (Falcon AIDR).\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-30T06:41:52Z","date_published":"2026-03-30T06:41:52Z","id":"/briefs/2026-04-securing-ai-agents/","summary":"CrowdStrike is introducing new capabilities to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud environments by providing detection and response (AIDR) for desktop AI applications, discovery of AI-related components, and runtime security for agents built in Microsoft Copilot Studio to combat attacks like living off the AI land (LOTAIL) by securing the agentic interaction layer.","title":"Securing AI Agents and Governing Shadow AI","url":"https://feed.craftedsignal.io/briefs/2026-04-securing-ai-agents/"},
{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["medium"],"_cs_tags":["AI","AI-Security","Shadow-AI","Endpoint-Security","SaaS","Cloud"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eCrowdStrike is addressing the emerging threat landscape created by the rapid adoption of AI tools and agents within organizations. The increasing use of personal AI agents, particularly on developer machines, introduces new attack vectors such as \u0026ldquo;living off the AI land\u0026rdquo; (LOTAIL) exploits, indirect prompt injection, and agentic tool chain attacks. The rise of shadow AI, where employees adopt AI tools without oversight, exacerbates the issue. CrowdStrike\u0026rsquo;s new innovations extend AI Detection and Response (AIDR) capabilities to cover desktop AI applications (ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor) and expand platform capabilities to secure AI workforce adoption and development across endpoints, SaaS environments, and cloud environments. Falcon AIDR will leverage the Falcon sensor to enable deployment of the Falcon AIDR browser extension from the Falcon console and obtain desktop application telemetry via the sensor\u0026rsquo;s container network interface capability.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eInitial Access (via AI Agent):\u003c/strong\u003e An attacker gains initial access by compromising an AI agent running on an endpoint, potentially through prompt injection or other vulnerabilities in the agent\u0026rsquo;s design.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePrivilege Escalation:\u003c/strong\u003e The attacker leverages the compromised AI agent\u0026rsquo;s existing system permissions, which may be elevated, to gain further access to the system. AI agents often have high privileges to execute terminal commands, browse the web, and interact with files.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLiving off the AI Land (LOTAIL):\u003c/strong\u003e The attacker uses the compromised AI agent to perform malicious actions that appear as legitimate user behavior, such as executing terminal commands, browsing websites, or interacting with files.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLateral Movement:\u003c/strong\u003e The attacker utilizes the AI agent\u0026rsquo;s network connectivity to discover and access other systems within the network, including LLM runtimes, MCP servers, and IDE extensions.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eData Exfiltration:\u003c/strong\u003e The attacker uses the AI agent to exfiltrate sensitive data from the compromised systems, such as source code, credentials, or other confidential information.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSupply Chain Compromise:\u003c/strong\u003e The attacker uses access to development environments via compromised AI tools to introduce malicious code into the software supply chain.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePolicy Violation:\u003c/strong\u003e The attacker manipulates the AI agent to violate content policies or access control rules, potentially leading to unauthorized access to sensitive data or systems.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eSuccessful attacks targeting AI agents and shadow AI can lead to significant data breaches, intellectual property theft, and supply chain compromises. The lack of visibility and governance over AI deployments creates a growing attack surface that traditional security controls are ill-equipped to handle. Compromised AI agents can be used to perform a wide range of malicious activities, including data exfiltration, lateral movement, and the introduction of malicious code into the software supply chain. The impact can range from financial losses and reputational damage to the compromise of critical infrastructure and sensitive government systems.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDeploy the Sigma rule \u0026ldquo;AI Desktop Application Usage Detected\u0026rdquo; to identify and monitor the use of AI desktop applications such as ChatGPT, Gemini, and others within your environment. This rule uses \u003ccode\u003eprocess_creation\u003c/code\u003e logs to detect the execution of these applications (see rule below).\u003c/li\u003e\n\u003cli\u003eEnable and configure AI Discovery in CrowdStrike Falcon Exposure Management to gain visibility into AI-related components running across endpoints, including AI apps, LLM runtimes, MCP servers, and IDE extensions. This leverages \u003ccode\u003eFalcon for IT\u003c/code\u003e telemetry as described in the overview.\u003c/li\u003e\n\u003cli\u003eImplement Falcon AIDR policies to monitor and protect agents built in Microsoft Copilot Studio against prompt injection attacks, data leaks, and policy violations.\u003c/li\u003e\n\u003cli\u003eReview and update access control policies for AI agents to minimize the potential impact of a compromise, focusing on the principle of least privilege.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-28T21:52:45Z","date_published":"2026-03-28T21:52:45Z","id":"/briefs/2026-03-shadow-ai-governance/","summary":"CrowdStrike is introducing innovations to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud environments by extending AI detection and response (AIDR) capabilities to cover desktop AI applications and provide visibility into AI-related components, helping to prevent prompt attacks, data leaks, and policy violations.","title":"CrowdStrike Innovations Secure AI Agents and Govern Shadow AI","url":"https://feed.craftedsignal.io/briefs/2026-03-shadow-ai-governance/"},
{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["high"],"_cs_tags":["ai","shadow-ai","prompt-injection","data-leak","endpoint-security"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eCrowdStrike is addressing the emerging attack surface presented by the rapid adoption of AI tools, AI agents, and AI-powered software. Traditional security controls are insufficient to protect against novel threats like indirect prompt injection and agentic tool chain attacks, exacerbated by shadow AI. The CrowdStrike Falcon platform is being enhanced with AI Detection and Response (AIDR) capabilities to secure AI workforce adoption and development across endpoints, SaaS environments, and cloud environments. These enhancements include extending runtime security guardrails to agents built in Microsoft Copilot Studio and enhancing endpoint AI security capabilities. These capabilities aim to enable organizations to confidently and securely accelerate AI development and adoption.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAn attacker gains initial access to a system, potentially through compromised credentials or a software vulnerability, targeting a developer machine with deployed AI tools.\u003c/li\u003e\n\u003cli\u003eThe attacker exploits a personal AI agent like OpenClaw running on the endpoint, leveraging its autonomy and system permissions for malicious purposes (Living off the AI Land - LOTAIL).\u003c/li\u003e\n\u003cli\u003eThe compromised AI agent executes terminal commands, browses the web, and interacts with files, mimicking legitimate user behavior.\u003c/li\u003e\n\u003cli\u003eThe attacker leverages prompt injection techniques to manipulate the AI agent\u0026rsquo;s behavior and access sensitive data.\u003c/li\u003e\n\u003cli\u003eThe AI agent is used to access and exfiltrate sensitive data from the endpoint or connected network, bypassing traditional data loss prevention (DLP) controls.\u003c/li\u003e\n\u003cli\u003eThe attacker uses the AI agent to move laterally within the network, accessing other systems and resources.\u003c/li\u003e\n\u003cli\u003eThe attacker deploys malicious code or tools through the compromised AI agent, further compromising the environment.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eThe exploitation of AI agents and shadow AI can lead to significant data breaches, intellectual property theft, and reputational damage. Organizations face an increasing AI visibility and governance gap. Successful attacks can compromise sensitive data handled by AI applications and agents, leading to regulatory fines and legal liabilities. The lack of visibility into AI component deployments introduces supply chain risks and exploitable vulnerabilities.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDeploy CrowdStrike Falcon AIDR to gain visibility into employees\u0026rsquo; use of AI applications, including full prompt content, and to detect prompt attacks, data leaks, and access control and content policy violations (CrowdStrike Falcon AIDR).\u003c/li\u003e\n\u003cli\u003eUtilize AI Discovery in CrowdStrike Falcon Exposure Management to automatically discover AI-related components running across endpoints in real time, including AI apps and agents, LLM runtimes, MCP servers, and IDE extensions (CrowdStrike Falcon Exposure Management).\u003c/li\u003e\n\u003cli\u003eImplement runtime security guardrails using Falcon AIDR to monitor Microsoft Copilot Studio agents for prompt injection attacks, data leaks, and policy violations in real time (Falcon AIDR).\u003c/li\u003e\n\u003cli\u003eEnable Sysmon process creation logging to activate the \u0026ldquo;Detect Suspicious AI Agent Processes\u0026rdquo; rule below.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-28T09:23:42Z","date_published":"2026-03-28T09:23:42Z","id":"/briefs/2026-03-securing-ai-agents/","summary":"CrowdStrike is enhancing its Falcon platform with AI Detection and Response (AIDR) to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud, addressing threats like prompt injection attacks, data leaks, and policy violations.","title":"CrowdStrike Falcon Enhancements Secure AI Agents and Govern Shadow AI","url":"https://feed.craftedsignal.io/briefs/2026-03-securing-ai-agents/"},
{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["medium"],"_cs_tags":["AI-Security","Shadow-AI","Endpoint-Security"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eCrowdStrike is addressing the emerging security challenges posed by the rapid adoption of AI tools and agents within organizations. The increasing use of AI, particularly on endpoints and within SaaS environments, creates new attack surfaces that traditional security measures are ill-equipped to handle. These surfaces include vulnerabilities related to prompt injection, agentic tool chain attacks, and data leaks. The rise of shadow AI, where employees adopt AI tools without proper oversight, further exacerbates these challenges. CrowdStrike\u0026rsquo;s new innovations extend the Falcon platform\u0026rsquo;s AI Detection and Response (AIDR) capabilities across endpoints, SaaS environments, and cloud environments, providing enhanced visibility, governance, and threat detection for AI adoption and development. The goal is to enable organizations to securely accelerate AI initiatives while mitigating the associated risks.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAn attacker gains initial access to an endpoint, potentially a developer machine, through social engineering or exploiting a software vulnerability.\u003c/li\u003e\n\u003cli\u003eThe attacker leverages a compromised AI agent, such as OpenClaw, or an AI-powered application installed on the endpoint.\u003c/li\u003e\n\u003cli\u003eThe compromised AI agent executes commands on the endpoint, leveraging the agent\u0026rsquo;s high system permissions, to enumerate sensitive files and network resources.\u003c/li\u003e\n\u003cli\u003eThe attacker performs an indirect prompt injection attack against an AI application, modifying the application\u0026rsquo;s behavior to leak sensitive data.\u003c/li\u003e\n\u003cli\u003eThe compromised agent initiates a connection to a command-and-control (C2) server to exfiltrate stolen data.\u003c/li\u003e\n\u003cli\u003eThe attacker exploits a misconfigured Model Context Protocol (MCP) server within the development environment to access sensitive AI models and training data.\u003c/li\u003e\n\u003cli\u003eThe attacker leverages a Copilot Studio agent with insufficient security guardrails to access and exfiltrate sensitive data from a SaaS application.\u003c/li\u003e\n\u003cli\u003eThe attacker successfully exfiltrates sensitive data and potentially gains persistent access to the environment, impacting data confidentiality and integrity.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eA successful attack targeting AI agents and shadow AI can lead to significant data breaches, intellectual property theft, and reputational damage. Organizations may experience compliance violations due to the leakage of sensitive data. The lack of visibility and governance over AI deployments can result in widespread vulnerabilities and increased attack surfaces, potentially affecting thousands of endpoints and cloud environments. The compromise of AI models and training data can lead to the manipulation of AI systems, causing them to make incorrect decisions or provide malicious outputs.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDeploy the Sigma rule \u003ccode\u003eDetect AI Application Usage\u003c/code\u003e to identify the use of desktop AI applications like ChatGPT, Gemini, and Copilot on endpoints to gain visibility into shadow AI (logsource: \u003ccode\u003eprocess_creation\u003c/code\u003e).\u003c/li\u003e\n\u003cli\u003eUtilize Falcon Exposure Management\u0026rsquo;s AI Discovery capabilities to identify AI-related components running on endpoints, including LLMs, MCP servers, and IDE extensions, to manage AI-related risks.\u003c/li\u003e\n\u003cli\u003eMonitor network connections from processes associated with AI tools for suspicious outbound traffic to detect potential data exfiltration attempts (logsource: \u003ccode\u003enetwork_connection\u003c/code\u003e).\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-28T08:12:22Z","date_published":"2026-03-28T08:12:22Z","id":"/briefs/2026-03-ai-security/","summary":"CrowdStrike is enhancing its Falcon platform with new AI detection and response capabilities to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud environments, addressing threats like prompt injection and data leaks.","title":"CrowdStrike Falcon Enhancements for Securing AI Agents and Governing Shadow AI","url":"https://feed.craftedsignal.io/briefs/2026-03-ai-security/"}
],"language":"en","title":"CraftedSignal Threat Feed — Shadow-AI","version":"https://jsonfeed.org/version/1.1"}