<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Data-Leak — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/tags/data-leak/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata — refreshed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Mon, 30 Mar 2026 17:40:59 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/tags/data-leak/feed.xml" rel="self" type="application/rss+xml"/><item><title>Parse Server LiveQuery Protected Field Leak via Shared Mutable State</title><link>https://feed.craftedsignal.io/briefs/2024-01-02-parse-server-livequery-leak/</link><pubDate>Mon, 30 Mar 2026 17:40:59 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2024-01-02-parse-server-livequery-leak/</guid><description>Parse Server versions before 8.6.65 and between 9.0.0 and 9.7.0-alpha.9 are vulnerable to a data leak where protected fields and authentication data can be exposed to unauthorized clients due to shared mutable objects across concurrent LiveQuery subscribers.</description><content:encoded><![CDATA[<p>Parse Server, an open-source backend for web and mobile applications, is susceptible to a vulnerability in its LiveQuery functionality. This issue stems from the concurrent handling of multiple subscribers using shared mutable objects. Specifically, when several clients subscribe to the same class via LiveQuery, event handlers process each subscriber concurrently, leading to a situation where sensitive data filters modify shared objects in-place. 
This can leak protected fields and authentication data to clients that should not have access to them, or deliver incomplete objects to clients that should see the data. The vulnerability affects Parse Server deployments that use LiveQuery with protected fields or afterEvent triggers while multiple clients are subscribed to the same class. Versions before 8.6.65 and versions 9.0.0 up to (but not including) 9.7.0-alpha.9 are vulnerable. Patches address the issue by deep-cloning the shared objects, ensuring isolation between subscribers.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>Attacker identifies a Parse Server deployment using LiveQuery with protected fields or afterEvent triggers.</li>
<li>Attacker determines the server is running a vulnerable version of Parse Server (e.g., 9.6.0).</li>
<li>Attacker subscribes to a LiveQuery for a specific class containing protected fields.</li>
<li>A legitimate user subscribes to the same LiveQuery for the same class.</li>
<li>The server processes the legitimate user&rsquo;s event first. Because that user is authorized, the sensitive data filter leaves the protected field in the shared object (or an afterEvent trigger enriches it with authentication data).</li>
<li>The server then processes the attacker&rsquo;s subscription against the same shared object. Because per-subscriber filtering mutates and reuses that one object instead of an isolated copy, the attacker receives the protected field and any trigger-added data intact.</li>
<li>Attacker gains unauthorized access to data they should not be able to view.</li>
<li>The attacker can leverage the leaked authentication data to further compromise the application or access other sensitive resources.</li>
</ol>
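<p>The shared-state failure above can be sketched outside Parse Server. The following TypeScript is a hedged illustration (all names are mine, not Parse Server internals): a per-subscriber afterEvent hook or field filter mutates one shared event object, so one subscriber&rsquo;s enrichment leaks to the next while another&rsquo;s stripping corrupts views elsewhere; deep-cloning per subscriber, as the patch does, isolates them.</p>

```typescript
// Minimal sketch (assumed names, not Parse Server's real internals) of the
// shared-mutable-state bug: every concurrent LiveQuery subscriber is handed
// the SAME event object, so per-subscriber hooks corrupt each other's views.
type ParseObject = Record<string, unknown>;

interface Subscriber {
  id: string;
  // Stand-ins for afterEvent triggers and protected-field filtering.
  afterEvent?: (obj: ParseObject) => void;
  stripFields?: string[];
}

// Vulnerable pattern: all subscribers share one object reference.
function dispatchShared(event: ParseObject, subs: Subscriber[]): Map<string, ParseObject> {
  const out = new Map<string, ParseObject>();
  for (const sub of subs) {
    sub.afterEvent?.(event);                         // mutates shared state
    for (const f of sub.stripFields ?? []) delete event[f];
    out.set(sub.id, event);                          // same reference for everyone
  }
  return out;
}

// Patched pattern: deep-clone per subscriber, as the fix does for shared objects.
function dispatchIsolated(event: ParseObject, subs: Subscriber[]): Map<string, ParseObject> {
  const out = new Map<string, ParseObject>();
  for (const sub of subs) {
    const copy = structuredClone(event);             // isolate each subscriber
    sub.afterEvent?.(copy);
    for (const f of sub.stripFields ?? []) delete copy[f];
    out.set(sub.id, copy);
  }
  return out;
}

// The admin's trigger attaches auth data meant only for the admin; the
// attacker is processed next and receives the same mutated object.
const subs: Subscriber[] = [
  { id: "admin", afterEvent: (o) => { o.sessionToken = "r:abc123"; } },
  { id: "attacker", stripFields: ["email"] },
];

const leaky = dispatchShared({ email: "a@b.c" }, subs);
console.log(leaky.get("attacker"));  // -> { sessionToken: "r:abc123" }: leaked token

const safe = dispatchIsolated({ email: "a@b.c" }, subs);
console.log(safe.get("attacker"));   // -> {}: no leaked token
```

<p>Note that the buggy dispatcher fails in both directions at once: the attacker receives the admin&rsquo;s trigger-added token, and the admin&rsquo;s reference loses the stripped email field, matching the incomplete-object symptom described above.</p>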
<h2 id="impact">Impact</h2>
<p>This vulnerability could lead to the exposure of sensitive information, including protected fields and authentication data, to unauthorized users. The number of affected deployments is unknown, but any Parse Server instance using LiveQuery with protected fields or afterEvent triggers is potentially at risk. Successful exploitation could result in data breaches, privacy violations, and unauthorized access to sensitive application resources. The severity is high due to the potential for widespread data leakage and the lack of a non-disruptive workaround prior to patching.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Upgrade Parse Server to version 8.6.65 or later, or version 9.7.0-alpha.9 or later to patch CVE-2026-34363.</li>
<li>Monitor Parse Server logs for unusual LiveQuery subscription patterns that might indicate an attempted exploitation. While there are no specific rules provided here, correlate server logs with application usage to detect anomalies.</li>
<li>If unable to immediately patch, consider disabling LiveQuery functionality or removing protected fields as a temporary mitigation, though this will impact application functionality.</li>
</ul>
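<p>To triage a fleet against the ranges above, a small version check helps. This is a hedged sketch (the helper names are mine, not an official tool) that flags versions before 8.6.65, and 9.0.0 up to but not including 9.7.0-alpha.9, as vulnerable:</p>

```typescript
// Hedged triage helper for this advisory's affected ranges:
//   vulnerable if  version < 8.6.65, or 9.0.0 <= version < 9.7.0-alpha.9.
function parse(v: string): { nums: number[]; pre: string | null } {
  const [core, pre] = v.split("-", 2);
  return { nums: core.split(".").map(Number), pre: pre ?? null };
}

// Compare major.minor.patch triples; missing components count as 0.
function cmpCore(a: number[], b: number[]): number {
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) - (b[i] ?? 0);
  }
  return 0;
}

function isVulnerable(version: string): boolean {
  const { nums, pre } = parse(version);
  const major = nums[0];
  if (major < 8) return true;                        // everything below 8.6.65
  if (major === 8) return cmpCore(nums, [8, 6, 65]) < 0;
  if (major === 9) {
    const c = cmpCore(nums, [9, 7, 0]);
    if (c < 0) return true;                          // 9.0.0 .. 9.6.x
    if (c > 0) return false;                         // 9.7.1 and later
    // Exactly 9.7.0: the release is patched; alpha builds before alpha.9
    // are vulnerable (semver: a prerelease sorts before its release).
    if (pre === null) return false;
    const m = /^alpha\.(\d+)$/.exec(pre);
    return m ? Number(m[1]) < 9 : true;              // unknown prerelease: assume vulnerable
  }
  return false;                                      // 10+ assumed patched
}
```

<p>For example, <code>isVulnerable("9.6.0")</code> returns true while <code>isVulnerable("8.6.65")</code> returns false.</p>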
]]></content:encoded><category domain="severity">high</category><category domain="type">advisory</category><category>parse-server</category><category>livequery</category><category>data-leak</category><category>cve-2026-34363</category></item><item><title>CrowdStrike Falcon Enhancements Secure AI Agents and Govern Shadow AI</title><link>https://feed.craftedsignal.io/briefs/2026-03-securing-ai-agents/</link><pubDate>Sat, 28 Mar 2026 09:23:42 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-03-securing-ai-agents/</guid><description>CrowdStrike is enhancing its Falcon platform with AI Detection and Response (AIDR) to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud, addressing threats like prompt injection attacks, data leaks, and policy violations.</description><content:encoded><![CDATA[<p>CrowdStrike is addressing the emerging attack surface created by the rapid adoption of AI tools, AI agents, and AI-powered software. Traditional security controls are insufficient against novel threats such as indirect prompt injection and agentic tool chain attacks, a gap that shadow AI widens further. The CrowdStrike Falcon platform is being enhanced with AI Detection and Response (AIDR) capabilities to secure AI adoption and development across endpoints, SaaS, and cloud environments. The updates extend runtime security guardrails to agents built in Microsoft Copilot Studio and strengthen endpoint AI security, aiming to let organizations accelerate AI development and adoption confidently and securely.</p>
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>An attacker gains initial access to a system, potentially through compromised credentials or a software vulnerability, targeting a developer machine with deployed AI tools.</li>
<li>The attacker exploits a personal AI agent like OpenClaw running on the endpoint, leveraging its autonomy and system permissions for malicious purposes (Living off the AI Land - LOTAIL).</li>
<li>The compromised AI agent executes terminal commands, browses the web, and interacts with files, mimicking legitimate user behavior.</li>
<li>The attacker leverages prompt injection techniques to manipulate the AI agent&rsquo;s behavior and access sensitive data.</li>
<li>The AI agent is used to access and exfiltrate sensitive data from the endpoint or connected network, bypassing traditional data loss prevention (DLP) controls.</li>
<li>The attacker uses the AI agent to move laterally within the network, accessing other systems and resources.</li>
<li>The attacker deploys malicious code or tools through the compromised AI agent, further compromising the environment.</li>
</ol>
<h2 id="impact">Impact</h2>
<p>The exploitation of AI agents and shadow AI can lead to significant data breaches, intellectual property theft, and reputational damage. Organizations face an increasing AI visibility and governance gap. Successful attacks can compromise sensitive data handled by AI applications and agents, leading to regulatory fines and legal liabilities. The lack of visibility into AI component deployments introduces supply chain risks and exploitable vulnerabilities.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Deploy CrowdStrike Falcon AIDR to gain visibility into employees&rsquo; use of AI applications, including full prompt content, and to detect prompt attacks, data leaks, and access control and content policy violations.</li>
<li>Utilize AI Discovery in CrowdStrike Falcon Exposure Management to automatically discover AI-related components running across endpoints in real time, including AI apps and agents, LLM runtimes, MCP servers, and IDE extensions.</li>
<li>Implement runtime security guardrails using Falcon AIDR to monitor Microsoft Copilot Studio agents for prompt injection attacks, data leaks, and policy violations in real time.</li>
<li>Enable Sysmon process creation logging to activate the &ldquo;Detect Suspicious AI Agent Processes&rdquo; rule below.</li>
</ul>
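<p>For reference, a Sysmon-based &ldquo;Detect Suspicious AI Agent Processes&rdquo; rule of the kind mentioned above could be sketched in Sigma as follows. This is an illustrative sketch only: the agent binary names are hypothetical placeholders, not confirmed indicators, and should be replaced with the AI runtimes actually observed in your environment.</p>

```yaml
title: Detect Suspicious AI Agent Processes
status: experimental
description: Illustrative sketch - flags AI agent runtimes spawning shells; tune process names to your environment before deploying.
logsource:
  product: windows
  category: process_creation   # requires Sysmon Event ID 1 (process creation logging)
detection:
  selection_parent:
    ParentImage|endswith:
      - '\openclaw.exe'        # hypothetical agent binary, placeholder only
      - '\ai-agent.exe'        # placeholder
  selection_child:
    Image|endswith:
      - '\cmd.exe'
      - '\powershell.exe'
  condition: selection_parent and selection_child
falsepositives:
  - Legitimate agent automation; baseline before alerting
level: medium
```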
]]></content:encoded><category domain="severity">high</category><category domain="type">advisory</category><category>ai</category><category>shadow-ai</category><category>prompt-injection</category><category>data-leak</category><category>endpoint-security</category></item></channel></rss>