Hugging Face, an open source store for AI models and components, is open to an attack via the “tokenizer” layer that AI models use to make their outputs human readable. 

A cyberattacker could use the threat vector to implement a man-in-the-middle (MitM) approach where a .json file is used to intercept tool call arguments to redirect URL tokens through attacker infrastructure; this gives the threat actor “visibility into every URL the model accesses, API parameters, and any credentials embedded in those requests,” HiddenLayer security researcher Divyanshu Divyanshu explained in a blog post released today.

Hidden Layer tested its attack on Hugging Face models run locally using the SafeTensors, ONNX, and GGUF formats. SafeTensors is a model created by Hugging Face and is considered the de facto model standard for the platform; all three are supported by Hugging Face, and all three are popular for a variety of use cases. That said, this is a problem that could impact any platform used for running open source models like LlamaCPP and Ollama.

Related:Hackers Use AI for Exploit Development, Attack Automation

It also only affects models run locally, as the attack relies on modifying local files. As such, models run through Hugging Face’s Inference API, for example, are not impacted.

Hugging Face did not respond to a request for comment.

AI Tokenizer Flaw Lets Attackers Hijack Model Outputs

A tokenizer is a kind of translator between human language and computer language for AI models. A model’s output starts as a sequence of integer IDs that is then decoded through the tokenizer before the output reaches the user. Hugging Face specifically uses a tokenizer library file named “tokenizer.json” as the mapping for this decoding process in many of its models. Each entry in this file includes a string paired with an ID that can represent a word, subword fragment, or control token, and these libraries can include tens of thousands of entries. As HiddenLayer discovered, the long and short of it is that if an attacker gets ahold of this “tokenizer.json” file and makes even a single edit, they can use it to take direct control over anything the model outputs and possibly gain a foothold into the user’s device.  

A primary way an attacker might use this attack in the wild is by taking an open source model, editing the tokenizer file, and then uploading the poisoned model to a public repository, thus distributing it to every downstream user that pulls it. “A tampered tokenizer.json is structurally identical to a legitimate one, so it passes through the normal model distribution pipeline without any special delivery mechanism,” Divyanshu wrote.

Related:After Replacing TeamPCP Malware, ‘PCPJack’ Steals Cloud Secrets

A particularly troubling aspect of the threat vector is that a model poisoned through its .json file would still most likely run correctly. As such, the blog highlights that if you deploy a model from a public repository, you are also deploying the tokenizer attached to it.

“Tokenizer.json ships as a plain text file alongside every model, but it determines what your deployed system actually does,” Divyanshu wrote. “Treating it as configuration rather than as part of the trusted codebase is the gap this attack lives in.”

Tokenizer Hijacking: Negating a Supply Chain Threat

While other platforms may be impacted, Hugging Face will face much of the blast radius if attackers manage to take advantage of the supply chain risks here, as a top AI open source repository. For those that want to protect themselves, Kasimir Schulz, director of security research at HiddenLayer, tells Dark Reading check sums and signatures work if a model has been proven as safe, such as one released and signed by a corporation like Microsoft. “Right now there are no public, freely available automated scanners [for this specific issue],” he says.

The researcher recommends that organizations make sure to scan third-party models and to use signed models in production when possible. Model signing is a cryptographic process which applies a digital signature to AI and machine-learning models to ensure they haven’t been tampered with.

Related:If AI’s So Smart, Why Does It Keep Deleting Production Databases?

Hugging Face, like all open source software platforms, has dealt with a range of malicious activity. Back in 2024, JFrog found more than 100 malicious models in the repository capable of executing code; a reality that defenders continue to reckon with in myriad open source AI model platforms. It has also had to contend with critical vulnerabilities of its own

Don’t miss the latest Dark Reading Confidential podcast, How the Story of a USB Penetration Test Went Viral. Two decades ago Dark Reading posted its first blockbuster piece — a column by a pen tester who sprinkled rigged thumb drives around a credit union parking lot and let curious employees do the rest. This episode looks back at the history-making piece with its author, Steve Stasiukonis. Listen now!





Source link

#

No responses yet

Leave a Reply

Your email address will not be published. Required fields are marked *