Threat Feed

Tag: Prompt-Injection (13 briefs)
high advisory

k8sGPT Operator Vulnerable to Prompt Injection

k8sGPT versions before 0.4.32 are vulnerable to prompt injection due to deserialization of AI-generated YAML without proper validation in the auto-remediation pipeline, potentially leading to arbitrary code execution within the Kubernetes cluster.

k8sgpt prompt-injection kubernetes ai vulnerability
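The k8sGPT issue illustrates a general pattern: AI-generated remediation manifests should be schema-validated against an allowlist before being applied to the cluster. A minimal sketch of such a check on an already-parsed manifest (safe parsing, e.g. a non-executing YAML loader, is assumed upstream; the allowed kinds and fields are illustrative, not k8sGPT's actual rules):

```python
# Allowlist validation for an AI-generated remediation manifest.
# Assumes the YAML was already parsed with a safe, non-executing
# loader; the allowed kinds and top-level fields are illustrative.

ALLOWED_KINDS = {"Deployment", "ConfigMap"}
ALLOWED_TOP_LEVEL = {"apiVersion", "kind", "metadata", "spec"}

def validate_manifest(doc) -> bool:
    """Reject manifests with unexpected kinds or top-level fields."""
    if not isinstance(doc, dict):
        return False
    if doc.get("kind") not in ALLOWED_KINDS:
        return False
    return set(doc) <= ALLOWED_TOP_LEVEL

safe = {"apiVersion": "apps/v1", "kind": "Deployment",
        "metadata": {"name": "web"}, "spec": {}}
hostile = {"kind": "ClusterRoleBinding", "roleRef": {}}
print(validate_manifest(safe), validate_manifest(hostile))  # True False
```

Anything the validator does not recognize is rejected outright, so an injected manifest cannot smuggle in privileged object kinds.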
critical advisory

FlowiseAI AirtableAgent Remote Code Execution via Prompt Injection

A remote code execution vulnerability exists in FlowiseAI's AirtableAgent.ts due to insufficient verification of input passed to Pandas, allowing attackers to inject malicious code through the prompt and execute arbitrary code via Pyodide.

flowiseai rce prompt-injection airtable
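The underlying bug class here is string-interpolating untrusted input into code that is then executed. A minimal illustration of the unsafe versus safe pattern (not FlowiseAI's actual code; the records and filter values are made up):

```python
# Illustration of the injection-prone pattern versus a safer one.
# Not FlowiseAI's actual code; 'records' and the inputs are invented.

records = [{"name": "alice"}, {"name": "bob"}]

def filter_unsafe(user_value: str):
    # User input becomes code: a value like
    # '" or __import__("os").system("id") or "' would execute.
    code = f'result = [r for r in records if r["name"] == "{user_value}"]'
    scope = {"records": records}
    exec(code, scope)
    return scope["result"]

def filter_safe(user_value: str):
    # User input stays data: it is compared, never evaluated.
    return [r for r in records if r["name"] == user_value]

print(filter_safe("alice"))  # [{'name': 'alice'}]
```

Keeping untrusted values out of the evaluated string entirely, rather than trying to sanitize them, removes the injection surface.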
critical advisory

Coinbase AgentKit Prompt Injection Vulnerability

A prompt injection vulnerability in Coinbase AgentKit allows for potential wallet drain, infinite approvals, and agent-level remote code execution.

prompt-injection coinbase agentkit wallet-drain
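Agent wallet integrations of this kind are typically hardened with policy checks enforced in code, outside the model's influence: recipient allowlists, per-transaction caps, and an outright ban on unlimited approvals. A generic sketch (addresses, limits, and function names are hypothetical, not AgentKit's API):

```python
# Policy gate for agent-initiated transfers, enforced in code rather
# than in the prompt. Addresses and limits are hypothetical.

ALLOWED_RECIPIENTS = {"0xabc...", "0xdef..."}  # placeholder addresses
MAX_TRANSFER = 100           # per-transaction cap, in token units
UNLIMITED = 2**256 - 1       # the "infinite approval" sentinel value

def authorize_transfer(recipient: str, amount: int) -> bool:
    """Reject transfers the agent should never make, regardless of
    what an injected prompt instructed it to do."""
    if recipient not in ALLOWED_RECIPIENTS:
        return False
    if amount >= UNLIMITED:  # block infinite approvals outright
        return False
    return amount <= MAX_TRANSFER

print(authorize_transfer("0xabc...", 50))          # True
print(authorize_transfer("0xevil", 50))            # False
print(authorize_transfer("0xabc...", 2**256 - 1))  # False
```

Because the gate runs after the model decides but before funds move, a successful prompt injection can at most request a transfer the policy already permits.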
critical advisory

CrewAI Vulnerabilities Allow Remote Code Execution

Multiple vulnerabilities in CrewAI, an open-source multi-agent orchestration framework, can be exploited by attackers through prompt injection to execute arbitrary code and perform other malicious activities, potentially leading to system compromise.

ai rce prompt-injection
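A common mitigation for this class of agent-framework flaw is to deny code-execution tools to any agent that processes untrusted content, enforced at tool-dispatch time rather than in the prompt. A framework-agnostic sketch (agent and tool names are invented, not CrewAI's API):

```python
# Tool-dispatch gate: agents that handle untrusted input never reach
# code-execution tools, whatever an injected prompt requests.
# Agent and tool names are invented for illustration.

DANGEROUS_TOOLS = {"run_python", "shell"}

AGENT_POLICY = {
    "web_researcher": {"untrusted_input": True},   # reads external pages
    "local_analyst":  {"untrusted_input": False},
}

def dispatch(agent: str, tool: str) -> str:
    if AGENT_POLICY[agent]["untrusted_input"] and tool in DANGEROUS_TOOLS:
        raise PermissionError(f"{agent} may not call {tool}")
    return f"{tool} executed for {agent}"

print(dispatch("local_analyst", "run_python"))
# dispatch("web_researcher", "run_python") raises PermissionError
```

The key design choice is that the policy lives in the dispatcher, so a prompt-injected instruction to "use the shell tool" fails at the framework layer instead of relying on the model to refuse.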
high advisory

Vulnerabilities in AI Agents Addressed by CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails v0.20.0 to help organizations protect AI agents in production by blocking prompt injection attacks, redacting sensitive data, and controlling agent behavior.

ai prompt-injection data-security
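The guardrail functions described in these briefs (blocking injection attempts on input, redacting sensitive data on output) follow a common pattern that can be illustrated generically. This is not CrowdStrike's or NVIDIA's implementation; the patterns and rules are made up for the sketch:

```python
import re

# Generic input/output guardrail pass. The detection and redaction
# rules are illustrative, not CrowdStrike's or NVIDIA's actual rules.

INJECTION_MARKERS = re.compile(
    r"ignore (all|previous) instructions|you are now", re.IGNORECASE)
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def guard_input(text: str) -> str:
    """Block inputs that look like prompt-injection attempts."""
    if INJECTION_MARKERS.search(text):
        raise ValueError("blocked: possible prompt injection")
    return text

def guard_output(text: str) -> str:
    """Redact sensitive data before the response leaves the agent."""
    return SSN.sub("[REDACTED]", text)

print(guard_output("SSN is 123-45-6789"))  # SSN is [REDACTED]
```

Production systems layer model-based classifiers on top of such rules, but the pipeline shape, screen input before the model and scrub output after it, is the same.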
medium advisory

Securing AI Agents with Falcon AIDR and NVIDIA NeMo Guardrails

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails to protect AI agents by blocking prompt injection attacks, redacting sensitive data, defanging malicious content, and moderating unwanted topics, ensuring compliance and preventing abuse.

ai-security prompt-injection data-protection
high advisory

CrowdStrike Falcon AIDR Supports NVIDIA NeMo Guardrails for AI Agent Protection

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails, providing enterprise-grade protection for AI agents: it defends against runtime attacks like prompt injection, redacts sensitive data, defangs malicious content, and moderates unwanted topics so agents stay within compliance boundaries in sectors such as finance, healthcare, customer service, and software development.

ai-security prompt-injection data-exfiltration
high advisory

Securing AI Agents with CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails to protect AI agents from attacks like prompt injection, data exfiltration, and unauthorized actions, enabling organizations to deploy AI applications more securely.

ai-security prompt-injection data-protection ai-agents
medium advisory

CrowdStrike Falcon Enhancements for Securing AI Environments

CrowdStrike is enhancing its Falcon platform with new features focusing on AI Detection and Response (AIDR) capabilities across endpoints, SaaS, and cloud environments to mitigate risks such as prompt injection attacks, data leaks, and policy violations related to AI agents and shadow AI.

ai security falcon agentic-soc prompt-injection
high advisory

CrowdStrike Falcon Enhancements Secure AI Agents and Govern Shadow AI

CrowdStrike is enhancing its Falcon platform with AI Detection and Response (AIDR) to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud, addressing threats like prompt injection attacks, data leaks, and policy violations.

ai shadow-ai prompt-injection data-leak endpoint-security
high advisory

CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails Secure AI Agents

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails (v0.20.0), providing enterprise-grade protection for AI agents by managing data access, controlling responses, ensuring policy compliance, and blocking prompt injection attacks.

ai-security prompt-injection data-protection
high advisory

CrowdStrike Falcon AIDR Supports NVIDIA NeMo Guardrails for AI Agent Protection

CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails (v0.20.0) to protect AI agents from prompt injection, data exposure, and unauthorized actions, enabling safer deployment of AI applications.

ai-security prompt-injection data-protection guardrails agentic-ai
high advisory

engramx vulnerable to CSRF enabling graph exfiltration and prompt injection

The engramx HTTP server, enabled by default and binding to 127.0.0.1:7337, is vulnerable to CSRF and prompt injection attacks, allowing a malicious website to exfiltrate the local knowledge graph and inject persistent prompt-injection payloads.

engramx csrf prompt-injection
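The standard defense for a localhost-bound HTTP API like this is to reject cross-origin browser requests by checking the Origin header, since a malicious page's requests carry its own origin. A minimal sketch of the check (header handling simplified; the allowed origin set is an assumption, not engramx's configuration):

```python
# CSRF defense for a localhost-bound API: browsers attach an Origin
# header to cross-site requests, so anything from a foreign origin is
# rejected. The allowed set is an assumption for illustration.

ALLOWED_ORIGINS = {"http://127.0.0.1:7337", "http://localhost:7337"}

def origin_allowed(headers: dict) -> bool:
    origin = headers.get("Origin")
    if origin is None:
        # Non-browser clients (curl, local tools) send no Origin.
        return True
    return origin in ALLOWED_ORIGINS

print(origin_allowed({"Origin": "https://evil.example"}))  # False
print(origin_allowed({}))                                  # True
```

Pairing this with a required authentication token would also stop non-browser CSRF vectors, but the Origin check alone defeats the malicious-website scenario described above.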