The UK’s National Cyber Security Centre (NCSC) is keen to bring in fresh blood to its Vulnerability Research Initiative, including across AI. 

Contrary to a flurry of press releases from excited cybersecurity vendors hitting The Stack’s inbox this week, the VRI is not new and has been running quietly as a formal initiative since at least 2019, we confirmed.

The NCSC’s VRI is not a government bug bounty programme (i.e. there’s no financial reward for vulnerability disclosure; something that may dissuade those who are not mission-driven, or keen to build a good relationship with the front window of the UK’s friendly signals/cyber intelligence agency.)

“Developing deep understanding and expertise of technologies, security mitigations and products takes time. Technology growth is constant, ever complex, security is improving, and thus VR is getting harder. This means the NCSC demand for VR continues to grow,” the NCSC noted this week.

“In future we want to extend our engagement with experts on particular topics, e.g. application of AI to VR,” its team said in a July 14 blog. “If you’d like to contact the team, you can reach us at vri@ncsc.gov.uk.

“We’d like to know about your VR skillset and areas of expertise.”

(Don’t, for the love of god, spam them with rubbish AI-generated vulnerability disclosures that an overstretched team will have to triage; in fact, don’t send disclosures to that email address at all. Guidance on vulnerability disclosure to HMG can be found here.)

See also: Hallucinated vulnerability disclosure for Curl generates disgust

The public notice comes as security researchers continue to explore the potential for genAI to support vulnerability research/exploit development.

A powerful case in point was the recent use by one researcher of a DeepSeek model to quickly develop an exploit for a vulnerability in the Langflow toolchain.

They had spotted a vulnerability report, used the LLM to cross-reference the patched version against the earlier vulnerable one, and through this automated reverse engineering quickly identified the attack path. The incident suggested that as soon as a public pull request (PR) to fix a bug lands, anyone with an LLM can potentially weaponise it at pace.
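To make that concrete, here is a minimal sketch of that patch-diffing workflow, not the researcher’s actual tooling: it pulls the unified diff between two versions from GitHub’s compare API and asks DeepSeek’s OpenAI-compatible chat endpoint what the fix implies about the pre-patch code. The repository tags and prompt are illustrative assumptions.

```python
# Sketch only: LLM-assisted patch diffing, under the assumptions noted above.
import os
import requests

OWNER, REPO = "langflow-ai", "langflow"   # target repo (illustrative)
VULN_TAG, FIXED_TAG = "1.2.0", "1.3.0"    # hypothetical version tags

# GitHub's compare endpoint returns a raw unified diff when asked for one.
diff = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/compare/{VULN_TAG}...{FIXED_TAG}",
    headers={"Accept": "application/vnd.github.diff"},
    timeout=30,
).text

# Hand the diff to the model and ask what the patch was fixing.
resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json={
        "model": "deepseek-chat",
        "messages": [{
            "role": "user",
            "content": "This diff is a security fix. Explain the vulnerability "
                       "in the pre-patch code and how it might be reached:\n\n"
                       + diff[:50_000],  # truncate to stay within context limits
        }],
    },
    timeout=120,
).json()

print(resp["choices"][0]["message"]["content"])
```

The point is less the specific model than the shape of the loop: diff, prompt, read; the same few dozen lines work against any hosted LLM API.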

As one enterprise CISO noted to The Stack at the time, a key takeaway for defenders is to watch repositories, not just CVEs: consider subscribing to security-relevant PR feeds and diffing them automatically. Assume patch-diff disclosure, and release clear advisories with patches; partial obscurity buys no time at all. Monitor outgoing traffic to LLM APIs for pasted diffs of your codebase, and ensure layered authentication, they concluded.
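On the first of those points, a minimal sketch of what “diff them automatically” could look like against the GitHub API; the repository and the keyword heuristic are our illustrative assumptions, not the CISO’s tooling.

```python
# Sketch only: flag recently merged PRs in a dependency whose titles look
# security-relevant, then pull their diffs for triage.
import requests

REPO = "langflow-ai/langflow"  # a dependency you ship (illustrative)
KEYWORDS = ("security", "sanitize", "injection", "auth", "cve")

# Most recently updated closed PRs; merged ones carry a merged_at timestamp.
prs = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc"},
    timeout=30,
).json()

for pr in prs:
    if pr.get("merged_at") and any(k in pr["title"].lower() for k in KEYWORDS):
        # diff_url serves the unified diff for the merged change
        diff = requests.get(pr["diff_url"], timeout=30).text
        print(f"Possible security fix: #{pr['number']} {pr['title']}")
        print(diff[:500])  # feed the full diff to your triage pipeline
```

Run on a schedule, this closes much of the gap between a fix landing in a public PR and an advisory appearing, which is precisely the window the Langflow episode showed attackers can now exploit.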

“Even with auth added, Langflow still lets authenticated users run arbitrary Python. Treat it like Jupyter — behind SSO and network segmentation,” as they told us in May, preferring not to be named. 

See also: CIA CIO La’Naia Jones on AI and the spy agency's tech priorities
