CrowdStrike Falcon Enhancements for Securing AI Agents and Governing Shadow AI
CrowdStrike is enhancing its Falcon platform with new AI detection and response capabilities to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud environments, addressing threats like prompt injection and data leaks.
CrowdStrike is addressing the emerging security challenges posed by the rapid adoption of AI tools and agents within organizations. The growing use of AI, particularly on endpoints and within SaaS environments, creates new attack surfaces that traditional security measures are ill-equipped to handle, including prompt injection, agentic tool-chain attacks, and data leakage. The rise of shadow AI, where employees adopt AI tools without proper oversight, further exacerbates these challenges. CrowdStrike’s new innovations extend the Falcon platform’s AI Detection and Response (AIDR) capabilities across endpoint, SaaS, and cloud environments, providing enhanced visibility, governance, and threat detection for AI adoption and development. The goal is to enable organizations to accelerate AI initiatives securely while mitigating the associated risks.
Attack Chain
- An attacker gains initial access to an endpoint, potentially a developer machine, through social engineering or exploiting a software vulnerability.
- The attacker leverages a compromised AI agent, such as OpenClaw, or an AI-powered application installed on the endpoint.
- The compromised AI agent executes commands on the endpoint, leveraging the agent’s high system permissions, to enumerate sensitive files and network resources (a detection sketch for this step follows the list).
- The attacker performs an indirect prompt injection attack against an AI application, modifying the application’s behavior to leak sensitive data.
- The compromised agent initiates a connection to a command-and-control (C2) server to exfiltrate stolen data.
- The attacker exploits a misconfigured Model Context Protocol (MCP) server within the development environment to access sensitive AI models and training data.
- The attacker leverages a Copilot Studio agent with insufficient security guardrails to access and exfiltrate sensitive data from a SaaS application.
- The attacker successfully exfiltrates sensitive data and potentially gains persistent access to the environment, impacting data confidentiality and integrity.
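The enumeration and exfiltration steps above surface on the endpoint as ordinary process and network telemetry. As an illustration of what catching the enumeration step could look like, here is a minimal Sigma sketch, not a Falcon rule: the parent image names are assumptions standing in for whichever agent binaries are actually deployed, and would need tuning per environment.

```yaml
title: AI Agent Spawning Discovery Commands (Illustrative Sketch)
status: experimental
description: Flags common discovery utilities launched as children of a desktop AI agent process.
logsource:
    category: process_creation
    product: windows
detection:
    selection_parent:
        # Assumed agent binary names; replace with the AI agents present in your environment.
        ParentImage|endswith:
            - '\openclaw.exe'
            - '\ai-agent.exe'
    selection_discovery:
        Image|endswith:
            - '\whoami.exe'
            - '\net.exe'
            - '\ipconfig.exe'
            - '\nltest.exe'
    condition: selection_parent and selection_discovery
falsepositives:
    - Agents legitimately automating administrative or inventory tasks
level: medium
```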
Impact
A successful attack targeting AI agents and shadow AI can lead to significant data breaches, intellectual property theft, and reputational damage. Organizations may experience compliance violations due to the leakage of sensitive data. The lack of visibility and governance over AI deployments can result in widespread vulnerabilities and increased attack surfaces, potentially affecting thousands of endpoints and cloud environments. The compromise of AI models and training data can lead to the manipulation of AI systems, causing them to make incorrect decisions or provide malicious outputs.
Recommendation
- Deploy the Sigma rule Detect AI Application Usage to identify the use of desktop AI applications like ChatGPT, Gemini, and Copilot on endpoints to gain visibility into shadow AI (logsource: process_creation); a sketch of such a rule follows this list.
- Utilize Falcon Exposure Management’s AI Discovery capabilities to identify AI-related components running on endpoints, including LLMs, MCP servers, and IDE extensions, to manage AI-related risks.
- Monitor network connections from processes associated with AI tools for suspicious outbound traffic to detect potential data exfiltration attempts (logsource: network_connection).
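As a concrete starting point for the first recommendation, a minimal Sigma rule along the following lines would provide the process-level visibility described. This is a sketch, not CrowdStrike’s packaged rule: the executable names are assumptions based on common desktop client naming and should be validated against the binaries actually observed in your estate.

```yaml
title: Detect AI Application Usage (Illustrative Sketch)
status: experimental
description: Detects execution of common desktop AI applications to surface shadow AI usage.
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        # Assumed client binary names; extend with the AI applications relevant to your estate.
        Image|endswith:
            - '\ChatGPT.exe'
            - '\Gemini.exe'
            - '\Copilot.exe'
    condition: selection
falsepositives:
    - Sanctioned AI application usage
level: informational
```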
Detection coverage (2 rules)
Detect AI Application Usage (severity: info)
Detects the execution of common AI desktop applications such as ChatGPT, Gemini, and Microsoft Copilot, providing visibility into shadow AI usage.
Detect Suspicious Network Connection from AI Applications (severity: medium)
Detects potentially malicious outbound network connections initiated from AI-related applications; a hedged sketch of such a rule appears below.
Detection queries are kept inside the platform.
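Since the queries themselves are platform-internal, the following Sigma sketch only approximates what the second rule might look like. Both the AI client image names and the vendor domain allowlist are assumptions to be tuned against observed baselines; connections falling outside the known first-party endpoints are what get flagged.

```yaml
title: Detect Suspicious Network Connection from AI Applications (Illustrative Sketch)
status: experimental
description: Flags outbound connections from AI desktop applications to destinations outside the vendors' known endpoints.
logsource:
    category: network_connection
    product: windows
detection:
    selection:
        # Assumed AI client binaries; align with the process-creation rule above.
        Image|endswith:
            - '\ChatGPT.exe'
            - '\Copilot.exe'
        Initiated: 'true'
    filter_vendor:
        # Assumed legitimate vendor domains; tune to your environment's baseline.
        DestinationHostname|endswith:
            - '.openai.com'
            - '.microsoft.com'
    condition: selection and not filter_vendor
falsepositives:
    - New or CDN-fronted vendor endpoints not yet in the allowlist
level: medium
```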