Companies Eager for Tools Are Putting AI’s Transformative Power Ahead of Security
Hackers targeting Ollama, a popular open-source project for running artificial intelligence models, could cause a big “Probllama” for users who haven’t yet patched.
Security researchers from Wiz disclosed Monday that they had discovered an easy-to-exploit remote code execution vulnerability in Ollama whose name probably couldn’t be better: Probllama.
The flaw, tracked as CVE-2024-37032, received a patch on May 7. A “large number” of instances hadn’t upgraded as of earlier this month, Wiz said.
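For administrators unsure whether a deployment is affected, one quick check is to compare the server’s reported version against the patched release. The sketch below is a minimal illustration in Python; it assumes the default port 11434, Ollama’s documented /api/version endpoint, a plain x.y.z version string, and that v0.1.34 is the first fixed release per Wiz’s advisory.

```python
# Minimal sketch: compare a running Ollama server's version to the patched release.
# Assumes the default port, the /api/version endpoint and that v0.1.34 is the
# first release containing the Probllama fix (verify against the project's notes).
import requests

FIXED = (0, 1, 34)

resp = requests.get("http://127.0.0.1:11434/api/version", timeout=5)
resp.raise_for_status()
version = resp.json()["version"]  # e.g. "0.1.33"; assumes a plain x.y.z string
installed = tuple(int(part) for part in version.split(".")[:3])

if installed < FIXED:
    print(f"Ollama {version} predates the fix for CVE-2024-37032 -- upgrade.")
else:
    print(f"Ollama {version} includes the Probllama patch.")
```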
Companies have “largely sidelined” AI security measures in favor of the technology’s transformative power, while customers rapidly adopt new tools and infrastructure to gain a competitive edge, Wiz said. But these tools are the “perfect targets” for threat actors: at such an early stage of development they usually lack standardized security features such as authentication, and it is relatively easy to find flaws in a young code base.
Ollama doesn’t have a built-in authentication process, so the researchers recommend not exposing installations to the internet unless the companies using them put their own security controls in place. “The critical issue is not just the vulnerabilities themselves but the inherent lack of authentication support in these new tools,” said Sagi Tzadik, a Wiz researcher.
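Because the API answers any request it receives, exposure is straightforward to test for: an unauthenticated call that succeeds means anyone who can reach the port can use the server. Below is a minimal sketch, assuming the default port and the documented /api/tags endpoint, which lists locally available models; the address is a placeholder.

```python
# Minimal sketch: probe an Ollama endpoint for unauthenticated access.
# Assumes the default port (11434) and the /api/tags endpoint; the host
# below is a placeholder address used purely for illustration.
import requests

OLLAMA_URL = "http://203.0.113.10:11434"  # placeholder address

try:
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    if resp.ok:
        # An unauthenticated 200 response means the server is usable by
        # anyone who can reach the port -- the exposure Wiz warns about.
        print("Ollama API reachable without authentication:", resp.json())
    else:
        print("Server responded with status:", resp.status_code)
except requests.RequestException as exc:
    print("No reachable Ollama API:", exc)
```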
Other tools similar to Ollama, such as TorchServe and Anyscale’s Ray, were also previously vulnerable to RCE flaws.
Ollama simplifies the process of packaging and deploying artificial intelligence models. Its server provides API endpoints, including one that allows users to download models from Ollama’s registry and other private registries.
Compatible with LLMs such as Meta’s Llama, Microsoft’s Phi and Mistral’s models, Ollama is one of the most popular projects for running AI models, with hundreds of thousands of pulls per month on Docker Hub and more than 70,000 stars on GitHub.
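For context, downloading a model through that API is a single HTTP call. The sketch below is a minimal illustration against a locally running server; it assumes the default port, and the request field is “name” or “model” depending on the Ollama version.

```python
# Minimal sketch: pulling a model through Ollama's HTTP API on a local server.
# Assumes the default port 11434; depending on the Ollama version, the request
# field is "name" or "model". The endpoint streams JSON progress objects.
import json
import requests

payload = {"name": "llama3"}  # model identifier from Ollama's registry

with requests.post(
    "http://127.0.0.1:11434/api/pull",
    json=payload,
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            # Each line is a JSON object describing download progress.
            print(json.loads(line).get("status"))
```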
When users pull a model from a private registry, hackers can supply a malicious manifest file and potentially compromise the environment hosting a vulnerable Ollama server. Attackers exploit the flaw by sending a specially crafted HTTP request to the Ollama API server, which is publicly exposed by default in Docker installations.
The flaw is caused by insufficient validation on the server side, Tzadik said.
“When pulling a model from a private registry – by querying the http://[victim]:11434/api/pull API endpoint – it is possible to supply a malicious manifest file that contains a path traversal payload in the digest field,” he said in an email.
Hackers could use that payload to read and corrupt files on the compromised system and ultimately achieve remote code execution. The issue was “extremely severe” in Docker installations, Tzadik said.
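Conceptually, the fix comes down to treating the digest as untrusted input before it is ever joined into a filesystem path. The sketch below illustrates that kind of server-side check in Python; it is not Ollama’s actual Go code, and the storage directory shown is a placeholder.

```python
# Conceptual sketch (not Ollama's actual Go code) of the kind of server-side
# check that blocks a path traversal payload in a digest field: only accept a
# well-formed sha256 digest before using it to build an on-disk path.
import os
import re

DIGEST_RE = re.compile(r"^sha256:[a-f0-9]{64}$")
BLOB_DIR = "/var/lib/ollama/blobs"  # placeholder storage directory

def blob_path(digest: str) -> str:
    """Return a safe on-disk path for a manifest digest, or raise."""
    if not DIGEST_RE.fullmatch(digest):
        # Rejects values like "../../../../etc/passwd" that would otherwise
        # escape the blob directory when joined into a path.
        raise ValueError(f"malformed digest: {digest!r}")
    path = os.path.normpath(os.path.join(BLOB_DIR, digest.replace(":", "-")))
    if not path.startswith(BLOB_DIR + os.sep):
        # Belt-and-suspenders check that the resolved path stays in BLOB_DIR.
        raise ValueError("digest resolves outside the blob directory")
    return path
```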
Laura Adams is a tech enthusiast residing in the UK. Her articles cover the latest technological innovations, from AI to consumer gadgets, providing readers with a glimpse into the future of technology.