{"description":"Trending threats, MITRE ATT\u0026CK coverage, and detection metadata — refreshed continuously.","feed_url":"https://feed.craftedsignal.io/products/instructlab/","home_page_url":"https://feed.craftedsignal.io/","items":[{"_cs_actors":[],"_cs_cves":[{"cvss":8.8,"id":"CVE-2026-6859"}],"_cs_exploited":false,"_cs_products":["InstructLab"],"_cs_severities":["critical"],"_cs_tags":["cve","code-execution","huggingface","instructlab"],"_cs_type":"advisory","_cs_vendors":["Red Hat"],"content_html":"\u003cp\u003eInstructLab contains a critical vulnerability (CVE-2026-6859) in its \u003ccode\u003elinux_train.py\u003c/code\u003e script. The script unconditionally sets \u003ccode\u003etrust_remote_code=True\u003c/code\u003e when interacting with the HuggingFace model hub. This design flaw allows a remote attacker to inject arbitrary Python code into the training process. The attacker only needs to convince a user to execute the \u003ccode\u003eilab train\u003c/code\u003e, \u003ccode\u003eilab download\u003c/code\u003e, or \u003ccode\u003eilab generate\u003c/code\u003e command while specifying a malicious model hosted on HuggingFace. Successful exploitation results in arbitrary code execution within the context of the InstructLab process, potentially leading to complete system compromise.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAttacker creates a malicious model on the HuggingFace Hub. 
This model contains embedded Python code designed to run when the model is loaded.\u003c/li\u003e\n\u003cli\u003eThe attacker uses social engineering to persuade a user to run the \u003ccode\u003eilab train\u003c/code\u003e, \u003ccode\u003eilab download\u003c/code\u003e, or \u003ccode\u003eilab generate\u003c/code\u003e command.\u003c/li\u003e\n\u003cli\u003eThe user runs the command, specifying the attacker\u0026rsquo;s malicious model from the HuggingFace Hub.\u003c/li\u003e\n\u003cli\u003eThe \u003ccode\u003elinux_train.py\u003c/code\u003e script downloads the malicious model; because \u003ccode\u003etrust_remote_code=True\u003c/code\u003e is hardcoded, repository-supplied Python code is implicitly trusted.\u003c/li\u003e\n\u003cli\u003eThe script loads the model, triggering execution of the attacker\u0026rsquo;s embedded Python code.\u003c/li\u003e\n\u003cli\u003eThe attacker\u0026rsquo;s code runs within the InstructLab process, allowing arbitrary actions with the user\u0026rsquo;s privileges.\u003c/li\u003e\n\u003cli\u003eThe attacker can establish persistence by modifying system files or creating new services.\u003c/li\u003e\n\u003cli\u003eThe attacker gains full control of the compromised system, potentially exfiltrating data or causing further damage.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eSuccessful exploitation allows a remote attacker to execute arbitrary Python code on the target system. This can lead to complete system compromise: the attacker can steal sensitive data, install malware, or disrupt operations. 
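To illustrate the root cause, the following sketch (not taken from the InstructLab codebase; the repository name and the audit helper are hypothetical) shows the unsafe loading pattern at the heart of this flaw, alongside a minimal source-level check for the hardcoded flag:

```python
# Sketch only, an assumption rather than actual InstructLab code. With
# trust_remote_code=True, transformers will import and execute Python files
# shipped inside the Hub repository at model-load time:
#
#     from transformers import AutoModelForCausalLM
#     model = AutoModelForCausalLM.from_pretrained(
#         "attacker/malicious-model",   # hypothetical Hub repository
#         trust_remote_code=True,       # executes repo-supplied code
#     )
#
# A minimal audit helper (hypothetical) that flags the hardcoded flag in
# local source text, e.g. when scanning training scripts:
import re

UNSAFE = re.compile(r"trust_remote_code\s*=\s*True")

def flags_remote_code(source: str) -> bool:
    """Return True if the source text hardcodes trust_remote_code=True."""
    return bool(UNSAFE.search(source))

if __name__ == "__main__":
    risky = 'model = AutoModel.from_pretrained(name, trust_remote_code=True)'
    print(flags_remote_code(risky))                      # True
    print(flags_remote_code('from_pretrained(name)'))    # False
```

The regex tolerates optional whitespace around `=`, but a real audit would also need to catch the flag being passed through a variable; this is only a first-pass check.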
While the number of affected systems is currently unknown, any system running a vulnerable version of InstructLab and interacting with the HuggingFace Hub is at risk.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eDeploy Sigma rules to detect suspicious process creation events related to InstructLab, such as code executing from temporary or model-cache directories, or processes with unusual network activity.\u003c/li\u003e\n\u003cli\u003eAudit InstructLab and any custom training scripts for hardcoded \u003ccode\u003etrust_remote_code=True\u003c/code\u003e; the flag is a Python function argument, so it will not appear in process-creation telemetry and must be found in source.\u003c/li\u003e\n\u003cli\u003eImplement strict controls and validation for models downloaded from HuggingFace, even when \u003ccode\u003etrust_remote_code=True\u003c/code\u003e is genuinely required.\u003c/li\u003e\n\u003cli\u003eApply any patches or updates for InstructLab provided by Red Hat to address CVE-2026-6859.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2026-04-22T14:17:07Z","date_published":"2026-04-22T14:17:07Z","id":"/briefs/2026-04-instructlab-code-execution/","summary":"InstructLab is vulnerable to arbitrary code execution because the `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace, allowing remote attackers to execute code by convincing a user to load a malicious model.","title":"InstructLab Arbitrary Code Execution via Malicious HuggingFace Model","url":"https://feed.craftedsignal.io/briefs/2026-04-instructlab-code-execution/"}],"language":"en","title":"CraftedSignal Threat Feed — InstructLab","version":"https://jsonfeed.org/version/1.1"}