<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>InstructLab — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/products/instructlab/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata — refreshed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Wed, 22 Apr 2026 14:17:07 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/products/instructlab/feed.xml" rel="self" type="application/rss+xml"/><item><title>InstructLab Arbitrary Code Execution via Malicious HuggingFace Model</title><link>https://feed.craftedsignal.io/briefs/2026-04-instructlab-code-execution/</link><pubDate>Wed, 22 Apr 2026 14:17:07 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2026-04-instructlab-code-execution/</guid><description>InstructLab is vulnerable to arbitrary code execution because the `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace, allowing remote attackers to execute code by convincing a user to load a malicious model.</description><content:encoded><![CDATA[<p>InstructLab contains a critical vulnerability (CVE-2026-6859) in its <code>linux_train.py</code> script. The script unconditionally sets <code>trust_remote_code=True</code> when interacting with the HuggingFace model hub. This design flaw allows a remote attacker to inject arbitrary Python code into the training process. The attacker only needs to convince a user to execute the <code>ilab train</code>, <code>ilab download</code>, or <code>ilab generate</code> command while specifying a malicious model hosted on HuggingFace. 
Successful exploitation results in arbitrary code execution within the context of the InstructLab process, potentially leading to complete system compromise.</p>
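<p>The core mechanism can be illustrated with a toy simulation (this is not transformers or InstructLab internals; the repo dict, loader, and <code>SIDE_EFFECT</code> list are hypothetical stand-ins): when remote code is trusted, any Python module bundled with a model runs at load time.</p>

```python
# Toy simulation of why trust_remote_code=True is dangerous. Not real
# transformers internals: a dict stands in for a model repo, and exec()
# stands in for the dynamic import that transformers performs.

MALICIOUS_REPO = {
    "config.json": '{"architectures": ["CustomModel"]}',
    # In a real attack this file could run any Python at import time.
    "modeling_custom.py": "SIDE_EFFECT.append('attacker code ran')",
}

SIDE_EFFECT = []  # observable stand-in for arbitrary attacker actions

def toy_from_pretrained(repo, trust_remote_code=False):
    """Load a 'model'; if remote code is trusted, execute bundled .py files."""
    custom = [f for f in repo if f.endswith(".py")]
    if custom and trust_remote_code:
        # transformers dynamically imports such modules; exec() stands in here.
        exec(repo[custom[0]], {"SIDE_EFFECT": SIDE_EFFECT})
    return "model"

toy_from_pretrained(MALICIOUS_REPO, trust_remote_code=True)
```

<p>With <code>trust_remote_code=False</code> (the library default) the bundled file is never executed; hardcoding it to <code>True</code> removes that safety check for every model the user names.</p>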
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>Attacker publishes a malicious model on the HuggingFace Hub that bundles a custom Python modeling file containing attacker-controlled code.</li>
<li>Attacker uses social engineering to persuade a user to run an <code>ilab train</code>, <code>ilab download</code>, or <code>ilab generate</code> command.</li>
<li>User executes the command, specifying the attacker&rsquo;s model from the HuggingFace Hub.</li>
<li>The <code>linux_train.py</code> script downloads the model from the Hub.</li>
<li>Because <code>trust_remote_code=True</code> is hardcoded, loading the model imports and executes the attacker&rsquo;s bundled Python code.</li>
<li>The attacker&rsquo;s code executes within the InstructLab process, allowing for arbitrary actions.</li>
<li>The attacker can establish persistence, for example by modifying system files or creating new services.</li>
<li>With control of the compromised system, the attacker can exfiltrate data or pivot further into the environment.</li>
</ol>
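<p>A defensive counterpart to the chain above can be sketched as a guard around model loading (a minimal sketch, not InstructLab&rsquo;s actual code; the wrapper, the injected <code>loader</code> callable, and the allowlist entry are hypothetical): refuse <code>trust_remote_code=True</code> unless the repository has been explicitly vetted.</p>

```python
# Illustrative sketch: a guarded loader that refuses trust_remote_code
# unless the repository is explicitly allowlisted. The loader callable is
# injected so the guard can be tested offline; in real use it would be
# something like AutoModelForCausalLM.from_pretrained.

ALLOWLISTED_REPOS = {"instructlab/granite-7b-lab"}  # hypothetical vetted entry

def safe_from_pretrained(loader, repo_id, trust_remote_code=False, **kwargs):
    """Only allow remote-code execution for repos on the vetted allowlist."""
    if trust_remote_code and repo_id not in ALLOWLISTED_REPOS:
        raise PermissionError(
            f"Refusing trust_remote_code=True for unvetted repo: {repo_id}"
        )
    return loader(repo_id, trust_remote_code=trust_remote_code, **kwargs)
```

<p>The vulnerable pattern is the inverse: the equivalent of <code>loader(repo_id, trust_remote_code=True)</code> runs unconditionally for whatever model name the user supplies.</p>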
<h2 id="impact">Impact</h2>
<p>Successful exploitation of this vulnerability allows a remote attacker to execute arbitrary Python code on the target system. This can lead to complete system compromise, allowing the attacker to steal sensitive data, install malware, or disrupt operations. While the number of affected systems is currently unknown, any system running a vulnerable version of InstructLab and interacting with the HuggingFace Hub is at risk.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Deploy detection rules (for example, Sigma rules) that alert on suspicious process creation related to InstructLab, such as code executing from temporary directories or unusual network activity from the <code>ilab</code> process.</li>
<li>Monitor process creation events for Python invocations that pass <code>trust_remote_code=True</code> within InstructLab&rsquo;s processes.</li>
<li>Vet and pin models downloaded from HuggingFace; when <code>trust_remote_code=True</code> is genuinely required, review the repository&rsquo;s custom code before loading.</li>
<li>Apply any available patches or updates for InstructLab to address CVE-2026-6859 as provided by Red Hat.</li>
</ul>
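<p>The model-validation recommendation above can be sketched as a pre-load scan of a repository&rsquo;s file listing (a minimal sketch under stated assumptions: the listing here is a plain list of filenames; in practice it might come from an API such as <code>huggingface_hub</code>&rsquo;s <code>list_repo_files</code>, which is not part of InstructLab&rsquo;s code).</p>

```python
# Illustrative sketch: flag repos whose file listing contains Python modules,
# since custom *.py files are exactly what trust_remote_code would execute.

from fnmatch import fnmatch

SUSPICIOUS_PATTERNS = ("*.py", "*.pyc")

def requires_remote_code(repo_files):
    """Return the files that would execute if trust_remote_code were enabled."""
    return [
        f for f in repo_files
        if any(fnmatch(f, pattern) for pattern in SUSPICIOUS_PATTERNS)
    ]
```

<p>A repo that ships only weights and configuration (e.g. <code>config.json</code>, <code>*.safetensors</code>) returns an empty list; any flagged file warrants manual review before the model is loaded with remote code enabled.</p>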
]]></content:encoded><category domain="severity">critical</category><category domain="type">advisory</category><category>cve</category><category>code-execution</category><category>huggingface</category><category>instructlab</category></item></channel></rss>