
I scanned 2,500 Hugging Face models for malware/issues. Here is the data

24 points | arseniibr | 1 month ago | github.com

Hi HN,

I built a CLI tool called Veritensor for scanning AI models, because I found out that downloading model weights from third-party websites and loading them with torch.load() can lead to RCE. At the same time, simple regex scanners are easy to bypass.

To test my tool, I ran it against 2500 new and trending models on Hugging Face.

Here is what I found — 86 failed models:

- Broken files — 16 models were actually Git LFS text pointers (several hundred bytes), not binaries. If you try to load them, your code crashes.
- Hidden licenses — 5 models had Non-Commercial licenses hidden inside the .safetensors headers, even though the repo looked open source.
- Shadow dependencies — 49 models tried to import libraries I didn't have (like ultralytics or deepspeed). My tool blocked them because it uses a strict allowlist of libraries.
- Suspicious code — 11 files used STACK_GLOBAL to build function names dynamically. This is a common way RCE malware hides, though in my case it was mostly old numpy files.
- Scan errors — 5 models failed because of missing local dependencies (like h5py for old Keras files).
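The LFS-pointer case is easy to check for yourself: a Git LFS pointer is a small text file that starts with a fixed version line. A minimal sketch of such a check (my own illustration, not Veritensor's actual code; `is_lfs_pointer` is a name I made up):

```python
# Sketch: detect a Git LFS pointer file that was downloaded
# in place of the real model binary.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path):
    """True if the file begins with the Git LFS pointer header."""
    with open(path, "rb") as f:
        head = f.read(len(LFS_MAGIC))
    return head == LFS_MAGIC
```

A real model checkpoint is tens of megabytes of binary data, so the first few dozen bytes are enough to tell the two apart.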

I was able to detect some threats because under the hood, Veritensor works differently from common regex scanners. Instead of searching for suspicious text, it simulates how Pickle loads data, which helps it find hidden payloads without running any code. It also checks that the model file is real by hashing it and comparing it with the version from Hugging Face, so fake or changed models can be detected. Veritensor also looks at model metadata in formats like Safetensors and GGUF to spot license restrictions. If everything looks safe, it can sign the container using Sigstore Cosign.
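The "simulate how Pickle loads data" idea can be approximated with the standard library alone: `pickletools` disassembles a pickle stream into opcodes without executing any of it, so you can flag GLOBAL / STACK_GLOBAL imports before anything runs. A rough sketch of the approach (my own illustration, not the tool's real implementation; `suspicious_imports` is a hypothetical name):

```python
import pickletools

def suspicious_imports(data: bytes):
    """Walk pickle opcodes WITHOUT executing them and report any
    GLOBAL / STACK_GLOBAL lookups — the place where a payload
    would import something like os.system. Sketch only."""
    found = []
    pending = []  # strings pushed on the stack before STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            pending.append(arg)
        elif opcode.name in ("GLOBAL", "INST"):
            found.append(str(arg).replace(" ", ".").replace("\n", "."))
        elif opcode.name == "STACK_GLOBAL" and len(pending) >= 2:
            found.append(f"{pending[-2]}.{pending[-1]}")
    return found
```

Scanning opcodes rather than raw bytes is what defeats the simple regex bypasses: the module and attribute names are reconstructed the same way the unpickler would build them, but no callable is ever invoked.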

It supports PyTorch, Keras, and GGUF. Free to use — Apache 2.0.

Repo: https://github.com/ArseniiBrazhnyk/Veritensor
Data of the scan [CSV/JSON]: https://drive.google.com/drive/folders/1G-Bq063zk8szx9fAQ3NN...
PyPI: pip install veritensor

Let me know if you have any feedback, whether you've ever faced similar threats, and whether this tool could be useful for you.

19 comments


embedding-shape|1 month ago

> Broken files — 16 models were actually Git LFS text pointers (several hundred bytes), not binaries. If you try to load them, your code crashes.

Yeah, if you don't know how to use the repositories, they might look broken :) Pointers are fine; the blobs are downloaded after you fetch the git repository itself, and then it's perfectly loadable. Seems like a really basic thing to misunderstand, given the context.

Please, understand how things typically work in the ecosystem before claiming something is broken.

That whatever LLM you used couldn't import some specific libraries also doesn't mean the repository itself has issues.

I think you need to go back to the drawing board here, fully understand how things work, before you set out to analyze what's "broken".

arseniibr|1 month ago

In an ideal local environment with a properly configured git client, sure. But in real-world CI/CD pipelines, people can use wget, curl, or custom caching layers that often pull the raw pointer file instead of the LFS blob. When that hits torch.load() in production, the service crashes. The tool was designed to catch this integrity mismatch before deployment.
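The integrity check described here reduces to a hash comparison against the digest recorded when the model was vetted. A minimal sketch of that idea (my own illustration, not Veritensor's code; `verify_model` is a name I made up):

```python
import hashlib

def sha256_file(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks so large checkpoints
    don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_model(path, expected_hex):
    """Compare the local file's digest against the one recorded
    at vetting time (e.g. from the Hugging Face API)."""
    return sha256_file(path) == expected_hex
```

A stray LFS pointer (or a tampered file) fails this check immediately, because its digest can't match the digest of the real blob.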

wbshaw|1 month ago

Calling them broken files might not be correct. However, I can see where if you are not diligent about watching commits to those git repos, you end up with a Trojan Horse that introduces a vulnerability after you've vetted the model.

lucrbvi|1 month ago

You should know that there is already a solution for this, SafeTensors [0].

But it may be a nice tool for those who download "unsafe" models.

[0]: https://huggingface.co/docs/safetensors/index

arseniibr|1 month ago

Safetensors is the goal, but legacy models are still there. A massive portion of the ecosystem (especially older fine-tunes and specialized architectures) is still stuck on Pickle/PyTorch .bin. Until 100% of models migrate, we need tooling to audit the "unsafe" ones.

patrakov|1 month ago

The single --force flag is not a good design decision. Please break it up (EDIT: I see you already did it partially in veritensor.yaml). Right now, according to the description, it suppresses detection of both genuinely non-commercial/AGPL models and models with inconsistent licensing data. Also, I might accept AGPL but not CC-BY-NC.

Probably, it would be better to split it into --accept-model-license=AGPL --accept-inconsistent-licensing --ignore-layer-license-metadata --ignore-rce-vector=os.system and so on.

arseniibr|1 month ago

Thank you for the valuable feedback. I agree that having granular CLI flags is better for ad-hoc scans or CI pipelines where you don't want to commit a config file. Splitting it into --ignore-license vs --ignore-malware (which should probably never be ignored easily) is a great design decision. Added to the roadmap!

amelius|1 month ago

> loading them with torch.load() can lead to RCE (remote command execution)

Why didn't the Torch team fix this?

embedding-shape|1 month ago

OP misunderstands; the issue is specifically with the pickle format and similar ones, as they're essentially code that needs to be executed, not just data to be loaded. Most of the ecosystem has already moved to the .safetensors format, which is just data and doesn't suffer from that issue.

arseniibr|1 month ago

PyTorch relies on Python's pickle module for serialization, which is essentially a stack-based virtual machine. This allows for saving arbitrary Python objects, custom classes, etc., but the trade-off is security. The PyTorch docs explicitly say: "Only load data you trust."

"torch.load() unless weights_only parameter is set to True, uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source in an unsafe mode, or that could have been tampered with. Only load data you trust. — PyTorch Docs"

In the real world, some people might download weights from third-party sources. Since PyTorch won't sandbox the loading process, I built the tool to inspect the bytecode before execution.
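For anyone who hasn't seen the mechanism: any object can define `__reduce__`, which tells pickle what callable to invoke at load time. This is the standard textbook demonstration (not code from the thread), using a deliberately harmless `eval` where real malware would put `os.system`:

```python
import pickle

class Payload:
    # __reduce__ names a callable for pickle to CALL during loading.
    # Harmless here; a malicious file would use os.system, subprocess, etc.
    def __reduce__(self):
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # unpickling executes eval("1 + 1") -> 2
```

Note that `pickle.loads` never returns a `Payload` at all: the attacker's callable runs, and its return value replaces the object. That is why "just loading the weights" is enough to be compromised.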