<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Keras (&gt;= 3.13.0, &lt; 3.13.2) — CraftedSignal Threat Feed</title><link>https://feed.craftedsignal.io/products/keras--3.13.0--3.13.2/</link><description>Trending threats, MITRE ATT&amp;CK coverage, and detection metadata — refreshed continuously.</description><generator>Hugo</generator><language>en</language><managingEditor>hello@craftedsignal.io</managingEditor><webMaster>hello@craftedsignal.io</webMaster><lastBuildDate>Wed, 03 Jan 2024 12:00:00 +0000</lastBuildDate><atom:link href="https://feed.craftedsignal.io/products/keras--3.13.0--3.13.2/feed.xml" rel="self" type="application/rss+xml"/><item><title>Keras Model Loader Vulnerable to Denial-of-Service via Malicious HDF5 Shape Bombs</title><link>https://feed.craftedsignal.io/briefs/2024-01-03-keras-dos/</link><pubDate>Wed, 03 Jan 2024 12:00:00 +0000</pubDate><author>hello@craftedsignal.io</author><guid isPermaLink="true">https://feed.craftedsignal.io/briefs/2024-01-03-keras-dos/</guid><description>Keras model loader is vulnerable to denial-of-service by loading specially crafted .keras files containing HDF5-based weight files with maliciously oversized dataset metadata, leading to immediate memory exhaustion during model loading.</description><content:encoded><![CDATA[<p>A denial-of-service vulnerability exists in Keras versions 3.0.0 through 3.12.0 and 3.13.0 through 3.13.1 due to improper handling of HDF5 dataset metadata within .keras model files. An attacker can craft a malicious .keras archive containing a valid model.weights.h5 file, where the HDF5 dataset declares an extremely large shape while storing minimal data. This &ldquo;shape bomb&rdquo; exploits the KerasFileEditor, which loads user-supplied .keras model files. When Keras attempts to load the model, it executes <code>result[key] = value[()]</code>, causing h5py to allocate RAM proportional to the dataset&rsquo;s declared shape (e.g., 8.88 PiB). This leads to immediate memory exhaustion, Python/TensorFlow crashes, Jupyter kernel kills, system instability, and ultimately, a full Denial of Service. This allows an attacker to crash any environment or pipeline that loads untrusted .keras models, including MLOps backends, training services, model upload endpoints, or automated pipelines. The vulnerability was reported on May 6, 2026.</p>
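<p>To make the quoted pattern concrete: <code>value[()]</code> asks h5py for the full dataset, so the allocation is governed by the shape declared in the HDF5 metadata rather than by the bytes actually stored on disk. The snippet below is a minimal, hedged illustration of that read pattern alongside a guarded variant that checks the declared allocation first; it is a sketch of the idea, not Keras&rsquo;s actual loader code, and the 1 GiB cap is an arbitrary assumed policy value.</p>
<pre><code>import h5py
import numpy as np

MAX_DECLARED_BYTES = 2**30  # assumed 1 GiB policy cap; tune per deployment

def read_weights_unsafely(h5_path):
    """Mirrors the pattern quoted in the advisory: value[()] materializes the
    full declared shape, so memory use follows metadata, not file size."""
    result = {}
    with h5py.File(h5_path, "r") as f:
        for key, value in f.items():
            if isinstance(value, h5py.Dataset):
                result[key] = value[()]  # allocates prod(shape) * itemsize bytes
    return result

def read_weights_guarded(h5_path):
    """Same traversal, but refuses any dataset whose declared size is implausible."""
    result = {}
    with h5py.File(h5_path, "r") as f:
        for key, value in f.items():
            if not isinstance(value, h5py.Dataset):
                continue
            declared = int(np.prod(value.shape, dtype=np.uint64)) * value.dtype.itemsize
            if declared &gt; MAX_DECLARED_BYTES:
                raise ValueError(f"{key}: declares {declared:,} bytes, refusing to read")
            result[key] = value[()]
    return result
</code></pre>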
<h2 id="attack-chain">Attack Chain</h2>
<ol>
<li>An attacker crafts a malicious <code>.keras</code> file.</li>
<li>The malicious <code>.keras</code> file includes a <code>model.weights.h5</code> file.</li>
<li>The <code>model.weights.h5</code> file contains HDF5 dataset metadata declaring an extremely large shape (e.g., 50,000,000 x 50,000,000).</li>
<li>The HDF5 dataset uses gzip compression to keep the file on disk small (100-400 KB); see the inspection sketch after this list.</li>
<li>A victim system attempts to load the malicious <code>.keras</code> model using KerasFileEditor.</li>
<li>Keras attempts to read the entire dataset into memory via <code>value[()]</code>, which hands the read off to h5py.</li>
<li>h5py attempts to allocate RAM proportional to the declared shape, leading to extreme memory exhaustion (e.g., 50,000,000 &times; 50,000,000 four-byte values is roughly 10^16 bytes, or about 8.88 PiB).</li>
<li>The Python/TensorFlow interpreter crashes, resulting in a Denial of Service.</li>
</ol>
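<p>As a rough illustration of steps 3-7, the following sketch opens the <code>.keras</code> archive (a zip file), pulls the <code>model.weights.h5</code> member into memory, and prints each dataset&rsquo;s declared logical size next to the member&rsquo;s compressed size on disk, without ever reading the weight data. The member name comes from the advisory; the traversal and reporting format are assumptions, not a documented Keras or h5py workflow.</p>
<pre><code>import io
import zipfile

import h5py
import numpy as np

def report_declared_sizes(keras_path, member="model.weights.h5"):
    """Compare on-disk size with the memory a full value[()] read would need."""
    with zipfile.ZipFile(keras_path) as zf:
        info = zf.getinfo(member)
        print(f"{member}: {info.compress_size:,} bytes compressed in the archive")
        buf = io.BytesIO(zf.read(member))  # small even for a shape bomb

    with h5py.File(buf, "r") as h5f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                declared = int(np.prod(obj.shape, dtype=np.uint64)) * obj.dtype.itemsize
                print(f"  {name}: shape={obj.shape}, {declared:,} bytes if fully read")
        h5f.visititems(visit)

# Usage sketch:
# report_declared_sizes("suspect_model.keras")
</code></pre>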
<h2 id="impact">Impact</h2>
<p>Successful exploitation of this vulnerability leads to a denial-of-service condition. Observed effects include immediate memory exhaustion (allocation attempts exceeding 8 PiB), crashes of the Python/TensorFlow interpreter, and killed Jupyter kernels. This can break automated model-upload pipelines and crash MLOps servers that process user-submitted models. In one proof-of-concept, a Google Colab compute quota dropped from 83 hours to 4 hours after only a few tests. Platforms that accept user-uploaded Keras models, such as training services, inference endpoints, and AutoML tools, are particularly exposed.</p>
<h2 id="recommendation">Recommendation</h2>
<ul>
<li>Implement input validation on <code>.keras</code> files to check for excessively large declared HDF5 dataset shapes before loading models; a hedged pre-flight sketch follows this list.</li>
<li>Monitor Python/TensorFlow processes for abnormal memory allocation patterns indicative of a memory exhaustion attack.</li>
<li>Apply patches and updates for Keras to address CVE-2026-0897 as they become available from Google.</li>
<li>Deploy the Sigma rule &ldquo;Detect Suspicious Keras Model Loading&rdquo; to identify potential exploitation attempts based on process execution and file access patterns.</li>
<li>Block access to the malicious URL <code>https://drive.google.com/file/d/1XAj57epTBWpj93GwHprHvb14WS9wpl5m/view?usp=drivesdk</code> at the network perimeter.</li>
</ul>
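<p>One way to apply the first recommendation without hand-parsing HDF5 is to attempt the load in a throwaway, memory-capped child process, so that a shape bomb ends in a contained failure rather than host-wide exhaustion. The sketch below is an assumption-laden illustration: it relies on Linux <code>RLIMIT_AS</code> semantics, Keras 3&rsquo;s <code>keras.saving.load_model</code>, and an arbitrary 8 GiB cap; the function names are hypothetical and this is not an official Keras mitigation.</p>
<pre><code>import multiprocessing
import resource

def _trial_load(path, max_bytes):
    # Cap this child's address space; an oversized allocation then fails inside
    # the child (MemoryError / non-zero exit) instead of exhausting host RAM.
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    import keras  # imported in the child so the cap covers the whole load
    keras.saving.load_model(path)

def preflight_check(path, max_bytes=8 * 1024**3):
    """Return True if the model loads within the cap; load for real only afterwards."""
    ctx = multiprocessing.get_context("spawn")  # fresh interpreter, no inherited state
    proc = ctx.Process(target=_trial_load, args=(path, max_bytes))
    proc.start()
    proc.join()
    return proc.exitcode == 0

# Usage sketch (run under an `if __name__ == "__main__":` guard because of spawn):
# if preflight_check("uploaded_model.keras"):
#     model = keras.saving.load_model("uploaded_model.keras")
</code></pre>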
]]></content:encoded><category domain="severity">medium</category><category domain="type">advisory</category><category>keras</category><category>denial-of-service</category><category>hdf5</category><category>model-loading</category><category>shape-bomb</category></item></channel></rss>