Pressure to deploy and connect AI applications quickly is causing a growing security headache for CISOs, with thousands of servers left exposed online, new research suggests.

More than 10,000 Ollama AI servers are publicly exposed to the internet with no authentication layer to protect them from threat actors, Trend Micro found.

Ollama is a lightweight, extensible open-source framework for building and running language models locally; it is popular with developers needing greater control and privacy while running LLMs.
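Ollama serves an HTTP API (by default on port 11434) with no built-in authentication; the exposure Trend Micro describes typically happens when that port is bound to a public interface rather than localhost. As a rough illustration, the Python sketch below, pointed at a hypothetical exposed host, shows that anyone who can reach the port can enumerate and query the models on it.

```python
# Sketch: querying a publicly exposed Ollama server. The address below is a
# hypothetical TEST-NET IP; Ollama's HTTP API (default port 11434) has no
# authentication layer, so anyone who can reach the port can list and run
# the models installed on it.
import json
import urllib.request

BASE = "http://203.0.113.10:11434"  # hypothetical exposed host

# Enumerate installed models via the /api/tags endpoint.
with urllib.request.urlopen(f"{BASE}/api/tags", timeout=10) as resp:
    models = [m["name"] for m in json.load(resp)["models"]]
print("models:", models)

# Run a completion against the first model found; no credentials are needed.
payload = json.dumps({"model": models[0], "prompt": "Hello", "stream": False}).encode()
req = urllib.request.Request(
    f"{BASE}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    print(json.load(resp)["response"])
```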

The statistic, revealed in Trend Micro's State of AI Security report for H1 2025, highlights the security threats that can arise when harried developers accidentally misconfigure servers, said Trend Micro Chief Enterprise Platform Officer Rachel Jin.

"Too much AI infrastructure is already being built from unsecured and/or unpatched components, creating an open door for threat actors," she added.

The growing MCP issue

Research also found model context protocol (MCP) servers are increasingly exposed, with API security platform Pynt reporting that 72% of the 281 most popular MCPs exposed "at least one sensitive capability" and 13% accepted inputs from "untrusted sources".

MCP is another open-source framework, developed by Anthropic to facilitate connections between AI agents and third-party data sources. It gained traction after being adopted by OpenAI in March 2025, with data from PulseMCP showing 5,265 MCP servers are now online.
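In practice, an MCP server is a small program that registers "tools" an agent can invoke. The sketch below, assuming the FastMCP helper from the official Python SDK (the mcp package), shows how a tool carrying the kind of sensitive capability Pynt flagged, in this case arbitrary file reads, ends up exposed to every agent that connects.

```python
# Minimal MCP server sketch built with FastMCP from the official Python SDK
# (pip install mcp). The read_file tool is a deliberately simple example of a
# "sensitive capability": any agent connected to this server can read any file
# the server process can access.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-files")

@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a file on the machine running this server."""
    return Path(path).read_text()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```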

Pynt's report said half of agents connected to at least three MCP servers were at "high risk" of exploitation, with risk compounding as more servers are added, reaching 92% for a system with 10 MCPs.

In June, AppSec company Backslash also discovered hundreds of “misconfigured or carelessly built” MCP servers left at risk of exploitation by excessive network access and permissions.

Mitigating the issue

Trend Micro said hasty configuration was one of the main causes of AI security threats, with the number of exposed Ollama servers rising from 3,000 in November to more than 10,000, while 2,000 Redis servers and 200 Chroma DB instances were also "completely unprotected" and exposed online.

The news comes as the exploitation of four Ollama vulnerabilities and a Chroma DB flaw was demonstrated at this year's Pwn2Own event, allowing data to be read, written and deleted.

Research by AI security company Lasso also found exposed MCPs are vulnerable to an "IdentityMesh" attack that allows lateral movement through agentic architectures to facilitate data exfiltration and malware distribution.

See also: NVIDIAScape: Critical NVIDIA bug poses “systemic risk to the AI ecosystem”, gives root

Trend Micro encouraged developers to take advice from the Coalition for Secure AI (CoSAI) to mitigate the exposure of their AI infrastructure, and to introduce model signing, machine-readable model cards, zero trust for AI and MCP security.

Pynt added that MCP users should ensure user approval is required for all calls to an MCP server, disable servers that are not in use, and containerize or segregate MCP servers that have system access or the ability to execute code.
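One way to implement the user-approval recommendation is a gate on the agent host that blocks every tool call until a human confirms it. The wrapper below is a hypothetical sketch; call_mcp_tool stands in for whatever client function the host actually uses to reach its MCP servers.

```python
# Hypothetical approval gate: every MCP tool call is shown to the user and
# dispatched only after explicit confirmation. call_mcp_tool is a stand-in for
# whatever client function the agent host uses to talk to its MCP servers.
from typing import Any, Callable

def approved_call(
    call_mcp_tool: Callable[[str, str, dict[str, Any]], Any],
    server: str,
    tool: str,
    arguments: dict[str, Any],
) -> Any:
    """Dispatch an MCP tool call only after the user approves it."""
    print(f"Agent requests {tool!r} on server {server!r} with arguments {arguments}")
    if input("Allow this call? [y/N] ").strip().lower() != "y":
        raise PermissionError(f"User declined call to {server}:{tool}")
    return call_mcp_tool(server, tool, arguments)
```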

New guidance from the Open Worldwide Application Security Project (OWASP) on securing agentic solutions advised a “holistic approach” and said teams need to understand the specific security requirements of the architecture and frameworks selected to support their solutions.

OWASP said developers should limit database permissions and use sandboxing to test agent-generated code before deployment, preventing issues from propagating up the AI supply chain.
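As a minimal illustration of that sandboxing advice, assuming the agent emits Python, generated code can be kept out of the host process and run in an isolated interpreter with a timeout and a stripped environment; a production setup would add OS-level isolation such as containers underneath this.

```python
# Sketch (Linux-oriented): a first layer of sandboxing for agent-generated
# Python. The code is written to a temporary file and executed by a separate
# interpreter in isolated mode (-I ignores environment variables and user
# site-packages), with a hard timeout and an empty environment so inherited
# secrets such as API keys never reach it.
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: int = 5) -> subprocess.CompletedProcess:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", script],
            capture_output=True,
            text=True,
            timeout=timeout_s,
            env={},  # strip inherited secrets
        )
    finally:
        os.unlink(script)

if __name__ == "__main__":
    result = run_generated_code("print('hello from the sandbox')")
    print(result.stdout)
```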
