<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>AI-Agent — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/tags/ai-agent/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata — refreshed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Wed, 08 Apr 2026 12:07:54 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/tags/ai-agent/feed.xml" rel="self" type="application/rss+xml"/><item><title>OpenClaw Agent Suspicious Child Process Execution</title><link>https://feed.craftedsignal.io/briefs/2026-06-openclaw-execution/</link><pubDate>Wed, 08 Apr 2026 12:07:54 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-06-openclaw-execution/</guid><description>Malicious actors are exploiting OpenClaw, Moltbot, and Clawdbot AI coding agents via Node.js to execute arbitrary shell commands and download-and-execute commands, potentially targeting cryptocurrency wallets and credentials.</description><content:encoded><![CDATA[<p>OpenClaw (formerly Clawdbot, rebranded to Moltbot) is an AI coding assistant that can execute shell commands and scripts. Threat actors are exploiting the skill ecosystem (ClawHub) to distribute malicious skills, observed as early as January 2026, that execute download-and-execute commands, targeting cryptocurrency wallets and credentials. These skills are often obfuscated and distributed through public registries like ClawHub. The attacks leverage the AI agents&rsquo; ability to execute commands through skills or prompt injection. Defenders should monitor for suspicious child processes spawned by Node.js processes running OpenClaw/Moltbot, as these may indicate malicious activity originating from compromised or malicious skills. This activity has been observed across Linux, macOS, and Windows environments.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>A user installs the OpenClaw agent, potentially from a legitimate or typosquatted domain.</li>
<li>The user installs a malicious skill from ClawHub or is subject to a prompt injection attack.</li>
<li>The OpenClaw agent, running under Node.js, receives an instruction (from the malicious skill or injected prompt) to run a shell command.</li>
<li>The Node.js process spawns a shell process (e.g., bash, sh, cmd.exe, powershell.exe).</li>
<li>The shell process executes a command to download a payload from a remote server using tools like curl or certutil.</li>
<li>The downloaded payload is saved to disk, often with an obfuscated name.</li>
<li>The shell process executes the downloaded payload, typically via chmod +x followed by direct invocation (./) on Linux/macOS, or via rundll32.exe or powershell.exe on Windows.</li>
<li>The payload performs malicious actions such as credential theft or cryptocurrency wallet compromise.</li>
</ol>
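<p>Step 4 is the most reliable detection chokepoint. The following minimal Sigma sketch illustrates the logic; the process names and command-line keywords are assumptions that should be tuned to how OpenClaw/Moltbot is actually launched in your environment.</p>
<pre><code class="language-yaml">title: Execution via OpenClaw Agent - Shell Spawned by Node.js (sketch)
status: experimental
description: Shell interpreter spawned by a Node.js process whose command line
  references OpenClaw/Moltbot. Keywords are illustrative placeholders.
logsource:
  category: process_creation
detection:
  selection_parent:
    ParentImage|endswith:
      - '/node'
      - '\node.exe'
    ParentCommandLine|contains:
      - 'openclaw'
      - 'moltbot'
      - 'clawdbot'
  selection_child:
    Image|endswith:
      - '/bash'
      - '/sh'
      - '/zsh'
      - '\cmd.exe'
      - '\powershell.exe'
  condition: selection_parent and selection_child
falsepositives:
  - Skills that legitimately run shell commands as part of normal operation
level: medium
</code></pre>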
<h2 id="impact">Impact</h2>
<p>Compromised OpenClaw agents can lead to cryptocurrency wallet theft, credential compromise, and potential data exfiltration. A successful attack allows threat actors to gain access to sensitive data and potentially pivot to other systems on the network. The number of victims is currently unknown, but the targeting of cryptocurrency wallets suggests financially motivated actors. The observed typosquatting activity indicates a campaign to impersonate the legitimate software and trick users into installing malicious versions.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Monitor process creation events for suspicious child processes of Node.js processes running OpenClaw/Moltbot, specifically shells and scripting interpreters, using the provided Sigma rule (<a href="#execution-via-openclaw-agent---linuxmacoswindows">Execution via OpenClaw Agent - Linux/macOS/Windows</a>).</li>
<li>Block known typosquat domains (moltbot.you, clawbot.ai, clawdbot.you) at the DNS resolver based on the IOCs provided.</li>
<li>Implement application control policies to restrict the execution of unsigned or untrusted executables, mitigating the impact of downloaded payloads.</li>
<li>Review OpenClaw skill installation logs and user AI conversation history for signs of malicious activity or prompt injection attempts.</li>
<li>Enable process command-line auditing to capture the full command line of spawned processes, aiding in the identification of malicious commands.</li>
<li>Deploy the Sigma rule to detect execution of curl/certutil downloads (<a href="#openclaw-download-activity">OpenClaw Download Activity</a>); a minimal sketch follows this list.</li>
</ul>
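<p>A minimal sketch of the download-activity rule referenced above, covering step 5 of the attack chain. The parent command-line keywords and the certutil flag are assumptions to adjust per deployment.</p>
<pre><code class="language-yaml">title: OpenClaw Download Activity (sketch)
status: experimental
description: curl/wget or certutil download launched from an OpenClaw/Moltbot
  Node.js process. Keywords are illustrative placeholders.
logsource:
  category: process_creation
detection:
  selection_parent:
    ParentCommandLine|contains:
      - 'openclaw'
      - 'moltbot'
  selection_unix_download:
    Image|endswith:
      - '/curl'
      - '/wget'
    CommandLine|contains: 'http'
  selection_windows_download:
    Image|endswith: '\certutil.exe'
    CommandLine|contains: '-urlcache'
  condition: selection_parent and (selection_unix_download or selection_windows_download)
falsepositives:
  - Skills that legitimately fetch remote resources over HTTP(S)
level: medium
</code></pre>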
]]></content:encoded><category domain="severity">medium</category><category domain="type">advisory</category><category>ai-agent</category><category>execution</category><category>malware</category><category>credential-theft</category></item><item><title>HushSpec: Security Policy Specification for AI Agent Action Boundaries</title><link>https://feed.craftedsignal.io/briefs/2024-02-14-hushspec/</link><pubDate>Mon, 16 Mar 2026 20:10:28 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2024-02-14-hushspec/</guid><description>HushSpec is an open specification, currently under development, that standardizes security policies at the action boundary of AI agents (file access, network egress, shell execution, and similar actions) to create a portable, engine-agnostic policy layer.</description><content:encoded><![CDATA[<p>HushSpec is an open specification project that aims to create a portable language layer for security policies governing AI agents. It addresses a common problem: security policies are tightly coupled to specific runtime environments, which makes them difficult to share, reason about, and standardize. HushSpec targets the action boundary of AI agents, covering file access, network egress, shell execution, tool invocation, prompt input, and remote/computer-use actions. The goal is to express what an agent may access, invoke, or send, without hard-coding implementation details for specific engines. The initiative is emerging from policy/runtime work within Clawdstrike but aims to be implementation-neutral. The project is in the early stages of development; the scope of the core specification, extension points, rule composition, stateful controls, and conformance testing are all under active consideration.</p>
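<p>No concrete syntax has been published yet, so any example is speculative. The hypothetical YAML sketch below (every key name is invented for illustration) conveys the kind of portable, default-deny action-boundary rules the specification aims to express.</p>
<pre><code class="language-yaml"># Hypothetical action-boundary policy. HushSpec has not published a syntax;
# all keys and values below are invented purely for illustration.
version: 0.1-draft
agent: build-assistant
default: deny                              # anything not explicitly allowed is blocked
rules:
  - action: file.read
    allow: ['./src/**', './docs/**']
    deny: ['~/.ssh/**', '**/.env']         # keep credentials out of scope
  - action: net.egress
    allow: ['api.github.com:443']          # explicit allow-list for egress
  - action: shell.exec
    allow: ['git', 'npm']                  # named binaries only
    deny: ['curl', 'certutil']             # block download-and-execute
  - action: tool.invoke
    allow: ['search_docs', 'run_tests']    # invented tool names
</code></pre>
<p>The value of such a layer is that the same policy file could, in principle, be enforced by any compliant runtime rather than being rewritten for each engine.</p>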
<h2 id="attack-chain">Attack Chain</h2>
<p>While HushSpec aims to prevent attacks, the following attack chain illustrates how a compromised or malicious AI agent <em>could</em> be leveraged to perform unauthorized actions, highlighting the need for such a specification.</p>
<ol>
<li><strong>Initial Compromise:</strong> An AI agent is compromised through a vulnerability in its code, dependencies, or configuration (e.g., a supply chain attack introduces malicious code).</li>
<li><strong>Privilege Escalation:</strong> The compromised agent attempts to escalate its privileges within the system to gain broader access than intended, potentially exploiting vulnerabilities in the underlying OS or applications.</li>
<li><strong>File Access:</strong> The agent attempts to access sensitive files on the system, such as configuration files containing credentials, or user data, bypassing intended access controls.</li>
<li><strong>Network Egress:</strong> The agent establishes unauthorized network connections to external servers controlled by the attacker, potentially exfiltrating stolen data or receiving further instructions.</li>
<li><strong>Shell Execution:</strong> The agent executes arbitrary shell commands on the system, allowing the attacker to perform actions such as installing malware, modifying system settings, or creating new user accounts.</li>
<li><strong>Tool Invocation:</strong> The agent invokes legitimate system tools (e.g., <code>powershell.exe</code>, <code>bash</code>) to perform malicious actions, such as disabling security features or collecting system information.</li>
<li><strong>Data Exfiltration:</strong> Sensitive data is exfiltrated from the compromised system to an attacker-controlled server via network connections initiated by the agent.</li>
<li><strong>Lateral Movement:</strong> Using compromised credentials or system access, the attacker uses the agent to move laterally to other systems on the network, expanding the scope of the attack.</li>
</ol>
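<p>Steps 4 and 7 (unauthorized egress) are observable today without waiting for the specification. A minimal Sigma sketch over network-connection telemetry follows; the agent process names and allow-listed destinations are assumptions.</p>
<pre><code class="language-yaml">title: Unexpected Network Egress from AI Agent Process (sketch)
status: experimental
description: Outbound connection initiated by an agent runtime process to a
  destination outside an expected allow-list. Names are placeholders.
logsource:
  category: network_connection
detection:
  selection:
    Image|endswith:
      - '/node'
      - '\node.exe'                        # placeholder agent runtimes
    Initiated: 'true'
  filter_allowed:
    DestinationHostname|endswith:
      - 'api.github.com'                   # invented allow-list entries
      - 'registry.npmjs.org'
  condition: selection and not filter_allowed
falsepositives:
  - Legitimate but unlisted service dependencies
level: medium
</code></pre>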
<h2 id="impact">Impact</h2>
<p>A successful attack against an AI agent, bypassing security policies, could lead to significant data breaches, system compromise, and reputational damage. The number of affected systems would depend on the scope of the compromised agent&rsquo;s access and the extent of the attacker&rsquo;s lateral movement. The sectors most at risk are those heavily reliant on AI agents for critical operations, such as finance, healthcare, and critical infrastructure. The consequences range from financial losses due to data theft and system downtime to potential physical harm in the case of compromised control systems.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Monitor process creation events for suspicious invocations of system tools like <code>powershell.exe</code> or <code>cmd.exe</code> by AI agent processes to detect potential unauthorized command execution, using a rule similar to the &ldquo;Detect Suspicious PowerShell Encoded Commands&rdquo; example (a minimal sketch follows this list).</li>
<li>Implement network connection monitoring to detect unauthorized network egress from AI agent processes, especially to unknown or suspicious destinations.</li>
<li>Monitor file access events for AI agents attempting to access sensitive files or directories outside of their intended scope.</li>
<li>Evaluate and contribute to the HushSpec project to help shape a standardized approach to AI agent security policy (<a href="https://github.com/backbay-labs/hush">https://github.com/backbay-labs/hush</a>, <a href="https://www.hushspec.org/">https://www.hushspec.org/</a>).</li>
</ul>
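<p>A minimal Sigma sketch of the encoded-command detection referenced above; the parent process names are placeholders for whatever runtime hosts your agents.</p>
<pre><code class="language-yaml">title: Detect Suspicious PowerShell Encoded Commands (sketch)
status: experimental
description: PowerShell launched with an encoded command by a process
  associated with an AI agent runtime. Parent images are placeholders.
logsource:
  category: process_creation
  product: windows
detection:
  selection_powershell:
    Image|endswith:
      - '\powershell.exe'
      - '\pwsh.exe'
    CommandLine|contains: '-enc'           # also matches -EncodedCommand
  selection_parent:
    ParentImage|endswith:
      - '\node.exe'
      - '\python.exe'                      # placeholder agent runtimes
  condition: selection_powershell and selection_parent
falsepositives:
  - Administrative automation that legitimately uses encoded commands
level: high
</code></pre>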
]]></content:encoded><category domain="severity">medium</category><category domain="type">advisory</category><category>AI-Agent</category><category>security-policy</category><category>action-boundary</category></item><item><title>Unscoped API Keys in AI Agent Frameworks</title><link>https://feed.craftedsignal.io/briefs/2026-03-ai-agent-auth/</link><pubDate>Mon, 16 Mar 2026 12:00:00 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-03-ai-agent-auth/</guid><description>A research report auditing popular AI agent projects found that 93% rely on unscoped API keys as the only authentication mechanism, leading to potential credential exposure, privilege escalation, and lateral movement within multi-agent systems.</description><content:encoded><![CDATA[<p>A recent audit of 30 popular AI agent frameworks, including OpenClaw, AutoGen, CrewAI, LangGraph, MetaGPT, and AutoGPT, reveals a widespread lack of robust authorization mechanisms. The report, published in March 2026, highlights that 93% of these frameworks rely solely on unscoped API keys for authentication. This means that any agent with access to the API key has full privileges, creating significant security risks. Furthermore, none of the frameworks provide per-agent cryptographic identity or revocation capabilities. In multi-agent systems, child agents inherit the full credentials of their parent agents, with no option for scope narrowing. This lack of granular control and isolation can lead to significant security breaches, including credential exposure and privilege escalation, as demonstrated by the 21,000 exposed OpenClaw instances leaking credentials and the 1.5 million API tokens exposed in the Moltbook breach.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>Attacker gains access to an unscoped API key, either through exposed instances like the 21,000 OpenClaw instances or breaches like the Moltbook incident affecting 1.5 million tokens.</li>
<li>The attacker leverages the unscoped API key to authenticate to the AI agent framework.</li>
<li>The attacker uses the API key to control an AI agent, potentially injecting malicious goals or code.</li>
<li>In multi-agent systems, the attacker exploits the inherited privileges of child agents to gain broader access.</li>
<li>The attacker leverages the agent&rsquo;s capabilities to access sensitive data or perform unauthorized actions.</li>
<li>The attacker escalates privileges by exploiting vulnerabilities within the agent framework or underlying system.</li>
<li>The attacker uses the compromised agent to move laterally within the system or network.</li>
<li>The attacker achieves their objective, which could include data theft, system disruption, or further compromise of the environment.</li>
</ol>
<h2 id="impact">Impact</h2>
<p>The widespread use of unscoped API keys and lack of proper authorization in AI agent frameworks creates a significant security risk. Successful exploitation can lead to data breaches, system compromise, and reputational damage. The report cites real-world incidents, including 21,000 exposed OpenClaw instances leaking credentials and 1.5 million API tokens exposed in the Moltbook breach, demonstrating the potential for widespread impact. The lack of per-agent revocation means that if one agent is compromised, the API key for all agents must be rotated, causing significant disruption.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Implement network monitoring to detect unusual traffic patterns originating from AI agent servers, and analyze outbound traffic for connections to unusual or malicious domains such as grantex.dev.</li>
<li>Audit the configuration of AI agent frameworks to identify instances using unscoped API keys. Prioritize upgrading or replacing frameworks that lack proper authorization controls.</li>
<li>Deploy the Sigma rule for detecting API key usage in command-line arguments or environment variables to identify potential credential exposure (a minimal sketch follows this list).</li>
<li>Monitor for access to sensitive data or resources by AI agents and implement least-privilege access controls.</li>
<li>Implement regular security audits and penetration testing of AI agent frameworks to identify and address vulnerabilities.</li>
</ul>
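<p>A minimal Sigma sketch for the command-line exposure recommendation above; the token patterns are assumptions and must be tuned to the key formats your providers actually issue.</p>
<pre><code class="language-yaml">title: Possible API Key in Process Command Line (sketch)
status: experimental
description: Key-like arguments observed in process command lines on AI agent
  hosts. Patterns are illustrative and prone to false positives.
logsource:
  category: process_creation
detection:
  selection:
    CommandLine|contains:
      - 'api_key='
      - 'apikey='
      - 'Authorization: Bearer '
      - 'sk-'                              # common vendor key prefix; adjust
  condition: selection
falsepositives:
  - Developers passing short-lived test keys on the command line
level: medium
</code></pre>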
]]></content:encoded><category domain="severity">high</category><category domain="type">advisory</category><category>ai-agent</category><category>api-key</category><category>authorization</category><category>credential-theft</category></item></channel></rss>