{"description":"Trending threats, MITRE ATT\u0026CK coverage, and detection metadata — refreshed continuously.","feed_url":"https://feed.craftedsignal.io/tags/ai-agent/","home_page_url":"https://feed.craftedsignal.io/","items":[{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["medium"],"_cs_tags":["ai-agent","execution","malware","credential-theft"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eOpenClaw (formerly Clawdbot, since rebranded to Moltbot) is an AI coding assistant that can execute shell commands and scripts. Threat actors are exploiting the skill ecosystem (ClawHub) to distribute malicious skills, observed as early as January 2026, that download and execute payloads targeting cryptocurrency wallets and credentials. These skills are often obfuscated and distributed through public registries like ClawHub. The attacks leverage the AI agents\u0026rsquo; ability to execute commands through skills or prompt injection. Defenders should monitor for suspicious child processes spawned by Node.js processes running OpenClaw/Moltbot, as these may indicate malicious activity originating from compromised or malicious skills. 
This activity has been observed across Linux, macOS, and Windows environments.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eA user installs the OpenClaw agent, potentially from a legitimate or typosquatted domain.\u003c/li\u003e\n\u003cli\u003eThe user installs a malicious skill from ClawHub or is subject to a prompt injection attack.\u003c/li\u003e\n\u003cli\u003eThe OpenClaw agent, running under Node.js, receives a command to execute a shell command.\u003c/li\u003e\n\u003cli\u003eThe Node.js process spawns a shell process (e.g., bash, sh, cmd.exe, powershell.exe).\u003c/li\u003e\n\u003cli\u003eThe shell process executes a command to download a payload from a remote server using tools like curl or certutil.\u003c/li\u003e\n\u003cli\u003eThe downloaded payload is saved to disk, often with an obfuscated name.\u003c/li\u003e\n\u003cli\u003eThe shell process executes the downloaded payload using chmod +x and ./, rundll32.exe, or powershell.exe.\u003c/li\u003e\n\u003cli\u003eThe payload performs malicious actions such as credential theft or cryptocurrency wallet compromise.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eCompromised OpenClaw agents can lead to cryptocurrency wallet theft, credential compromise, and potential data exfiltration. A successful attack allows threat actors to gain access to sensitive data and potentially pivot to other systems on the network. The number of victims is currently unknown, but the targeting of cryptocurrency wallets suggests financially motivated actors. 
The observed typosquatting activity indicates a campaign to impersonate the legitimate software and trick users into installing malicious versions.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eMonitor process creation events for suspicious child processes of Node.js processes running OpenClaw/Moltbot, specifically shells and scripting interpreters, using the provided Sigma rule (\u003ca href=\"#execution-via-openclaw-agent---linuxmacoswindows\"\u003eExecution via OpenClaw Agent - Linux/macOS/Windows\u003c/a\u003e).\u003c/li\u003e\n\u003cli\u003eBlock known typosquat domains (moltbot.you, clawbot.ai, clawdbot.you) at the DNS resolver based on the IOCs provided.\u003c/li\u003e\n\u003cli\u003eImplement application control policies to restrict the execution of unsigned or untrusted executables, mitigating the impact of downloaded payloads.\u003c/li\u003e\n\u003cli\u003eReview OpenClaw skill installation logs and user AI conversation history for signs of malicious activity or prompt injection attempts.\u003c/li\u003e\n\u003cli\u003eEnable process command-line auditing to capture the full command line of spawned processes, aiding in the identification of malicious commands.\u003c/li\u003e\n\u003cli\u003eDeploy the Sigma rule to detect execution of curl/certutil downloads (\u003ca href=\"#openclaw-download-activity\"\u003eOpenClaw Download Activity\u003c/a\u003e).\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-04-08T12:07:54Z","date_published":"2026-04-08T12:07:54Z","id":"/briefs/2026-06-openclaw-execution/","summary":"Malicious actors are exploiting OpenClaw, Moltbot, and Clawdbot AI coding agents via Node.js to execute arbitrary shell commands and download-and-execute commands, potentially targeting cryptocurrency wallets and credentials.","title":"OpenClaw Agent Suspicious Child Process 
Execution","url":"https://feed.craftedsignal.io/briefs/2026-06-openclaw-execution/"},{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["medium"],"_cs_tags":["AI-Agent","security-policy","action-boundary"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eHushSpec is an open specification project designed to create a portable language layer for security policies governing AI agents. The project addresses the issue of security policies being tightly coupled with specific runtime environments, making them difficult to share, reason about, and standardize. HushSpec aims to define a cleaner separation of concerns, focusing on the action boundary of AI agents, including actions such as file access, network egress, shell execution, tool invocation, prompt input, and remote/computer-use actions. The goal is to express what an agent may access, invoke, or send, without hard-coding implementation details for specific engines. This initiative is emerging from policy/runtime work within Clawdstrike, but aims to be implementation-neutral. 
The project is currently in early stages of development, with active consideration being given to the scope of the core specification, extension points, rule composition, stateful controls, and conformance testing.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003cp\u003eWhile HushSpec aims to prevent attacks, the following attack chain illustrates how a compromised or malicious AI agent \u003cem\u003ecould\u003c/em\u003e be leveraged to perform unauthorized actions, highlighting the need for such a specification.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eInitial Compromise:\u003c/strong\u003e An AI agent is compromised through a vulnerability in its code, dependencies, or configuration (e.g., a supply chain attack introduces malicious code).\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePrivilege Escalation:\u003c/strong\u003e The compromised agent attempts to escalate its privileges within the system to gain broader access than intended, potentially exploiting vulnerabilities in the underlying OS or applications.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eFile Access:\u003c/strong\u003e The agent attempts to access sensitive files on the system, such as configuration files containing credentials, or user data, bypassing intended access controls.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eNetwork Egress:\u003c/strong\u003e The agent establishes unauthorized network connections to external servers controlled by the attacker, potentially exfiltrating stolen data or receiving further instructions.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eShell Execution:\u003c/strong\u003e The agent executes arbitrary shell commands on the system, allowing the attacker to perform actions such as installing malware, modifying system settings, or creating new user accounts.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eTool Invocation:\u003c/strong\u003e The agent invokes legitimate system tools (e.g., 
\u003ccode\u003epowershell.exe\u003c/code\u003e, \u003ccode\u003ebash\u003c/code\u003e) to perform malicious actions, such as disabling security features or collecting system information.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eData Exfiltration:\u003c/strong\u003e Sensitive data is exfiltrated from the compromised system to an attacker-controlled server via network connections initiated by the agent.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLateral Movement:\u003c/strong\u003e Using compromised credentials or system access, the attacker uses the agent to move laterally to other systems on the network, expanding the scope of the attack.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eA successful attack against an AI agent, bypassing security policies, could lead to significant data breaches, system compromise, and reputational damage. The number of affected systems would depend on the scope of the compromised agent\u0026rsquo;s access and the extent of the attacker\u0026rsquo;s lateral movement. The sectors most at risk are those heavily reliant on AI agents for critical operations, such as finance, healthcare, and critical infrastructure. 
The consequences range from financial losses due to data theft and system downtime to potential physical harm in the case of compromised control systems.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eMonitor process creation events for suspicious invocations of system tools like \u003ccode\u003epowershell.exe\u003c/code\u003e or \u003ccode\u003ecmd.exe\u003c/code\u003e by AI agent processes to detect potential unauthorized command execution, using a rule similar to the \u0026ldquo;Detect Suspicious PowerShell Encoded Commands\u0026rdquo; example.\u003c/li\u003e\n\u003cli\u003eImplement network connection monitoring to detect unauthorized network egress from AI agent processes, especially to unknown or suspicious destinations.\u003c/li\u003e\n\u003cli\u003eMonitor file access events for AI agents attempting to access sensitive files or directories outside of their intended scope.\u003c/li\u003e\n\u003cli\u003eEvaluate and contribute to the HushSpec project to help shape a standardized approach to AI agent security policy (\u003ca href=\"https://github.com/backbay-labs/hush\"\u003ehttps://github.com/backbay-labs/hush\u003c/a\u003e, \u003ca href=\"https://www.hushspec.org/\"\u003ehttps://www.hushspec.org/\u003c/a\u003e).\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-16T20:10:28Z","date_published":"2026-03-16T20:10:28Z","id":"/briefs/2024-02-14-hushspec/","summary":"HushSpec is an open specification under development to standardize security policies at the action boundary of AI agents, focusing on actions such as file access, network egress, and shell execution, aiming to create a portable and engine-agnostic policy layer.","title":"HushSpec: Security Policy Specification for AI Agent Action 
Boundaries","url":"https://feed.craftedsignal.io/briefs/2024-02-14-hushspec/"},{"_cs_actors":[],"_cs_cves":[],"_cs_exploited":false,"_cs_products":[],"_cs_severities":["high"],"_cs_tags":["ai-agent","api-key","authorization","credential-theft"],"_cs_type":"advisory","_cs_vendors":[],"content_html":"\u003cp\u003eA recent audit of 30 popular AI agent frameworks, including OpenClaw, AutoGen, CrewAI, LangGraph, MetaGPT, and AutoGPT, reveals a widespread lack of robust authorization mechanisms. The report, published in March 2026, highlights that 93% of these frameworks rely solely on unscoped API keys for authentication. This means that any agent with access to the API key has full privileges, creating significant security risks. Furthermore, none of the frameworks provide per-agent cryptographic identity or revocation capabilities. In multi-agent systems, child agents inherit the full credentials of their parent agents, with no option for scope narrowing. This lack of granular control and isolation can lead to significant security breaches, including credential exposure and privilege escalation, as demonstrated by the 21,000 exposed OpenClaw instances leaking credentials and the 1.5 million API tokens exposed in the Moltbook breach.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAttacker gains access to an unscoped API key, either through exposed instances like the 21,000 OpenClaw instances or breaches like the Moltbook incident affecting 1.5 million tokens.\u003c/li\u003e\n\u003cli\u003eThe attacker leverages the unscoped API key to authenticate to the AI agent framework.\u003c/li\u003e\n\u003cli\u003eThe attacker uses the API key to control an AI agent, potentially injecting malicious goals or code.\u003c/li\u003e\n\u003cli\u003eIn multi-agent systems, the attacker exploits the inherited privileges of child agents to gain broader access.\u003c/li\u003e\n\u003cli\u003eThe attacker leverages the 
agent\u0026rsquo;s capabilities to access sensitive data or perform unauthorized actions.\u003c/li\u003e\n\u003cli\u003eThe attacker escalates privileges by exploiting vulnerabilities within the agent framework or underlying system.\u003c/li\u003e\n\u003cli\u003eThe attacker uses the compromised agent to move laterally within the system or network.\u003c/li\u003e\n\u003cli\u003eThe attacker achieves their objective, which could include data theft, system disruption, or further compromise of the environment.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eThe widespread use of unscoped API keys and lack of proper authorization in AI agent frameworks creates a significant security risk. Successful exploitation can lead to data breaches, system compromise, and reputational damage. The report cites real-world incidents, including 21,000 exposed OpenClaw instances leaking credentials and 1.5 million API tokens exposed in the Moltbook breach, demonstrating the potential for widespread impact. The lack of per-agent revocation means that if one agent is compromised, the API key for all agents must be rotated, causing significant disruption.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eImplement network monitoring to detect unusual traffic patterns originating from AI agent servers. Analyze outbound traffic for connections to unusual or malicious domains (grantex.dev).\u003c/li\u003e\n\u003cli\u003eAudit the configuration of AI agent frameworks to identify instances using unscoped API keys. 
Prioritize upgrading or replacing frameworks that lack proper authorization controls.\u003c/li\u003e\n\u003cli\u003eDeploy the Sigma rule for detecting API key usage in command-line arguments or environment variables to identify potential credential exposure.\u003c/li\u003e\n\u003cli\u003eMonitor for access to sensitive data or resources by AI agents and implement least-privilege access controls.\u003c/li\u003e\n\u003cli\u003eImplement regular security audits and penetration testing of AI agent frameworks to identify and address vulnerabilities.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-03-16T12:00:00Z","date_published":"2026-03-16T12:00:00Z","id":"/briefs/2026-03-ai-agent-auth/","summary":"A research report auditing popular AI agent projects found that 93% rely on unscoped API keys as the only authentication mechanism, leading to potential credential exposure, privilege escalation, and lateral movement within multi-agent systems.","title":"Unscoped API Keys in AI Agent Frameworks","url":"https://feed.craftedsignal.io/briefs/2026-03-ai-agent-auth/"}],"language":"en","title":"CraftedSignal Threat Feed — AI-Agent","version":"https://jsonfeed.org/version/1.1"}