k8sGPT Operator Vulnerable to Prompt Injection
2 rules 2 TTPs
k8sGPT versions before 0.4.32 are vulnerable to prompt injection: the auto-remediation pipeline deserializes AI-generated YAML without proper validation, potentially leading to arbitrary code execution within the Kubernetes cluster.
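As a minimal sketch of the unsafe-deserialization class this entry describes (k8sGPT itself is written in Go; the Python/PyYAML code below is an illustrative analogue, not the project's actual implementation):

```python
import yaml  # PyYAML

# Imagine this YAML came back from the model in an auto-remediation step.
llm_output = "replicas: 3"

# Vulnerable pattern: a full/unsafe loader constructs arbitrary objects from
# tags such as !!python/object/apply, so injected YAML can execute code.
# data = yaml.unsafe_load(llm_output)

# Safer pattern: safe_load builds only plain scalars, lists, and dicts,
# and rejects object-constructing tags outright.
data = yaml.safe_load(llm_output)
print(data)  # {'replicas': 3}

try:
    yaml.safe_load("!!python/object/apply:os.system ['id']")
except yaml.YAMLError:
    print("object-constructing tag rejected")
```

The broader point: any YAML produced under attacker influence (including via prompt injection into the model) must be parsed with a loader that cannot instantiate objects, then schema-validated before use.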
FlowiseAI AirtableAgent Remote Code Execution via Prompt Injection
2 rules 1 TTP
A remote code execution vulnerability exists in FlowiseAI's AirtableAgent.ts due to insufficient input validation when generating Pandas code, allowing attackers to inject malicious code through the prompt and execute arbitrary code via Pyodide.
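A minimal sketch of this bug class (Flowise's actual code is TypeScript; the function and payload names below are hypothetical): attacker-influenced text is interpolated into Python source that will later be executed, so a crafted prompt can break out of the intended expression.

```python
import re

def build_query_unsafe(column: str) -> str:
    # Vulnerable: the untrusted value becomes part of the generated code,
    # which is later executed (in Flowise's case, via Pyodide).
    return f"result = df['{column}'].sum()"

def build_query_safe(column: str) -> str:
    # Mitigation: allowlist-validate untrusted input before it reaches
    # generated code, or avoid generating code from untrusted text at all.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", column):
        raise ValueError("invalid column name")
    return f"result = df['{column}'].sum()"

# A prompt-injection payload closes the string literal and smuggles its own
# statements into the generated program:
payload = "x'].sum()\nimport os  # attacker-controlled code\n_ = df['x"
print(build_query_unsafe(payload))  # generated source now contains 'import os'
```

The allowlist check rejects the payload outright, which is why strict input validation (rather than escaping) is the usual fix for code-generation sinks.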
Coinbase AgentKit Prompt Injection Vulnerability
2 rules 2 IOCs
A prompt injection vulnerability in Coinbase AgentKit allows for potential wallet drain, infinite approvals, and agent-level remote code execution.
CrewAI Vulnerabilities Allow Remote Code Execution
3 rules 3 TTPs 4 CVEs
Multiple vulnerabilities in CrewAI, an open-source multi-agent orchestration framework, can be exploited via prompt injection to execute arbitrary code and perform other malicious actions, potentially leading to system compromise.
Vulnerabilities in AI Agents Addressed by CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails
2 rules 5 TTPs
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails v0.20.0 to help organizations protect AI agents in production by blocking prompt injection attacks, redacting sensitive data, and controlling agent behavior.
Securing AI Agents with Falcon AIDR and NVIDIA NeMo Guardrails
3 rules 4 TTPs
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails to protect AI agents by blocking prompt injection attacks, redacting sensitive data, defanging malicious content, and moderating unwanted topics, ensuring compliance and preventing abuse.
CrowdStrike Falcon AIDR Supports NVIDIA NeMo Guardrails for AI Agent Protection
2 rules 2 TTPs
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails, providing enterprise-grade protection for AI agents. It defends against runtime attacks like prompt injection, redacts sensitive data, defangs malicious content, and moderates unwanted topics so agents stay within compliance boundaries in sectors such as finance, healthcare, customer service, and software development.
Securing AI Agents with CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails
2 rules 1 TTP
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails to protect AI agents from attacks like prompt injection, data exfiltration, and unauthorized actions, enabling organizations to deploy AI applications more securely.
CrowdStrike Falcon Enhancements for Securing AI Environments
2 rules 2 TTPs
CrowdStrike is enhancing its Falcon platform with new AI Detection and Response (AIDR) capabilities across endpoints, SaaS, and cloud environments to mitigate risks such as prompt injection attacks, data leaks, and policy violations involving AI agents and shadow AI.
CrowdStrike Falcon Enhancements Secure AI Agents and Govern Shadow AI
2 rules 3 TTPs
CrowdStrike is enhancing its Falcon platform with AI Detection and Response (AIDR) to secure AI agents and govern shadow AI across endpoints, SaaS, and cloud, addressing threats like prompt injection attacks, data leaks, and policy violations.
CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails Secure AI Agents
2 rules 1 TTP
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails (v0.20.0), providing enterprise-grade protection for AI agents by managing data access, controlling responses, ensuring policy compliance, and blocking prompt injection attacks.
CrowdStrike Falcon AIDR Supports NVIDIA NeMo Guardrails for AI Agent Protection
2 rules 6 TTPs
CrowdStrike Falcon AIDR now supports NVIDIA NeMo Guardrails (v0.20.0) to protect AI agents from prompt injection, data exposure, and unauthorized actions, enabling safer deployment of AI applications.
engramx vulnerable to CSRF enabling graph exfiltration and prompt injection
2 rules 2 TTPs
The engramx HTTP server, enabled by default and bound to 127.0.0.1:7337, is vulnerable to CSRF and prompt injection, allowing a malicious website to exfiltrate the local knowledge graph and plant persistent prompt-injection payloads.