{"description":"Trending threats, MITRE ATT\u0026CK coverage, and detection metadata — refreshed continuously.","feed_url":"https://feed.craftedsignal.io/tags/model-loading/","home_page_url":"https://feed.craftedsignal.io/","items":[{"_cs_actors":[],"_cs_cves":[{"cvss":7.5,"id":"CVE-2026-0897"}],"_cs_exploited":false,"_cs_products":["Keras (\u003e= 3.0.0, \u003c= 3.12.0)","Keras (\u003e= 3.13.0, \u003c 3.13.2)","Google Colab"],"_cs_severities":["medium"],"_cs_tags":["keras","denial-of-service","hdf5","model-loading","shape-bomb"],"_cs_type":"advisory","_cs_vendors":["Google"],"content_html":"\u003cp\u003eA denial-of-service vulnerability exists in Keras versions 3.0.0 through 3.12.0 and 3.13.0 through 3.13.1 due to improper handling of HDF5 dataset metadata within .keras model files. An attacker can craft a malicious .keras archive containing a valid model.weights.h5 file, where the HDF5 dataset declares an extremely large shape while storing minimal data. This \u0026ldquo;shape bomb\u0026rdquo; exploits the KerasFileEditor, which loads user-supplied .keras model files. When Keras attempts to load the model, it executes \u003ccode\u003eresult[key] = value[()]\u003c/code\u003e, causing h5py to allocate RAM proportional to the dataset\u0026rsquo;s declared shape (e.g., 8.88 PiB). This leads to immediate memory exhaustion, Python/TensorFlow crashes, Jupyter kernel kills, system instability, and ultimately, a full Denial of Service. This allows an attacker to crash any environment or pipeline that loads untrusted .keras models, including MLOps backends, training services, model upload endpoints, or automated pipelines. 
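\u003c/p\u003e
\u003cp\u003eThe cited allocation figure follows directly from the declared dataset shape. A minimal sketch of the arithmetic, assuming the 50,000,000 x 50,000,000 float32 example shape described in this advisory:\u003c/p\u003e
\u003cpre\u003e\u003ccode\u003e```python
# Memory h5py must allocate for the declared (not the stored) dataset shape.
# The shape and dtype below are the advisory's example values.
rows = 50_000_000
cols = 50_000_000
itemsize = 4  # float32 occupies 4 bytes per element

declared_bytes = rows * cols * itemsize   # 10**16 bytes, from ~100 KB on disk
declared_pib = declared_bytes / 2 ** 50   # bytes -> pebibytes

print(round(declared_pib, 2))  # -> 8.88
```\u003c/code\u003e\u003c/pre\u003e
\u003cp\u003e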
The vulnerability was reported on May 6, 2026.\u003c/p\u003e\n\u003ch2 id=\"attack-chain\"\u003eAttack Chain\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eAn attacker crafts a malicious \u003ccode\u003e.keras\u003c/code\u003e file.\u003c/li\u003e\n\u003cli\u003eThe malicious \u003ccode\u003e.keras\u003c/code\u003e file includes a \u003ccode\u003emodel.weights.h5\u003c/code\u003e file.\u003c/li\u003e\n\u003cli\u003eThe \u003ccode\u003emodel.weights.h5\u003c/code\u003e file contains HDF5 dataset metadata declaring an extremely large shape (e.g., 50,000,000 x 50,000,000).\u003c/li\u003e\n\u003cli\u003eThe HDF5 dataset uses gzip compression to keep the file size small (100-400 KB).\u003c/li\u003e\n\u003cli\u003eA victim system attempts to load the malicious \u003ccode\u003e.keras\u003c/code\u003e model using KerasFileEditor.\u003c/li\u003e\n\u003cli\u003eKeras attempts to load the entire dataset into memory using \u003ccode\u003evalue[()]\u003c/code\u003e, which hands the full read off to h5py.\u003c/li\u003e\n\u003cli\u003eh5py attempts to allocate RAM proportional to the declared shape, leading to extreme memory exhaustion (e.g., 8.88 PiB).\u003c/li\u003e\n\u003cli\u003eThe Python/TensorFlow interpreter crashes, resulting in a Denial of Service.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"impact\"\u003eImpact\u003c/h2\u003e\n\u003cp\u003eSuccessful exploitation of this vulnerability leads to a denial-of-service condition. Observed damage includes immediate memory exhaustion (8+ PiB allocation attempts), crashes of the TensorFlow/Python interpreter, and the killing of Jupyter kernels. This can break automated model-upload pipelines and crash MLOps servers that process user models. In one proof-of-concept, a Google Colab compute quota dropped from 83 hours to 4 hours after only a few tests. 
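\u003c/p\u003e
\u003cp\u003eScreening uploads before they ever reach the Keras loader is practical, because HDF5 stores dataset shapes as metadata that can be inspected without touching chunk data. A minimal pre-load check using h5py directly (the helper name and the 1 GiB threshold are illustrative assumptions, not Keras API):\u003c/p\u003e
\u003cpre\u003e\u003ccode\u003e```python
# Flag datasets whose declared in-memory size exceeds a configurable cap.
# oversized_datasets() and MAX_DECLARED_BYTES are illustrative, not Keras API.
import h5py

MAX_DECLARED_BYTES = 1 * 1024 ** 3  # 1 GiB per dataset; tune per deployment

def oversized_datasets(weights_path):
    flagged = []

    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            declared = obj.dtype.itemsize
            for dim in obj.shape:
                declared *= dim  # bytes implied by the declared shape
            if declared > MAX_DECLARED_BYTES:
                flagged.append((name, declared))

    with h5py.File(weights_path, 'r') as f:
        f.visititems(visit)  # walks every group and dataset recursively
    return flagged  # non-empty: refuse to hand the file to the Keras loader
```\u003c/code\u003e\u003c/pre\u003e
\u003cp\u003eRunning this against the \u003ccode\u003emodel.weights.h5\u003c/code\u003e member extracted from an untrusted \u003ccode\u003e.keras\u003c/code\u003e archive screens out shape bombs with only a metadata read.\u003c/p\u003e
\u003cp\u003e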
Platforms allowing user-uploaded Keras models, such as training services, inference endpoints, and AutoML tools, are particularly vulnerable.\u003c/p\u003e\n\u003ch2 id=\"recommendation\"\u003eRecommendation\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eImplement input validation on \u003ccode\u003e.keras\u003c/code\u003e files to check for excessively large HDF5 dataset shapes before loading models.\u003c/li\u003e\n\u003cli\u003eMonitor Python/TensorFlow processes for abnormal memory allocation patterns indicative of a memory exhaustion attack.\u003c/li\u003e\n\u003cli\u003eUpgrade Keras to a fixed release (3.13.2 or later) to address CVE-2026-0897.\u003c/li\u003e\n\u003cli\u003eDeploy the Sigma rule \u0026ldquo;Detect Suspicious Keras Model Loading\u0026rdquo; to identify potential exploitation attempts based on process execution and file access patterns.\u003c/li\u003e\n\u003cli\u003eBlock access to the malicious URL \u003ccode\u003ehttps://drive.google.com/file/d/1XAj57epTBWpj93GwHprHvb14WS9wpl5m/view?usp=drivesdk\u003c/code\u003e at the network perimeter.\u003c/li\u003e\n\u003c/ul\u003e\n","date_modified":"2024-01-03T12:00:00Z","date_published":"2024-01-03T12:00:00Z","id":"/briefs/2024-01-03-keras-dos/","summary":"Keras model loader is vulnerable to denial-of-service by loading specially crafted .keras files containing HDF5-based weight files with maliciously oversized dataset metadata, leading to immediate memory exhaustion during model loading.","title":"Keras Model Loader Vulnerable to Denial-of-Service via Malicious HDF5 Shape Bombs","url":"https://feed.craftedsignal.io/briefs/2024-01-03-keras-dos/"}],"language":"en","title":"CraftedSignal Threat Feed — Model-Loading","version":"https://jsonfeed.org/version/1.1"}