I’ve reported quite a few vulnerabilities over the years, and the process is never perfect. Sometimes you’re chasing the right contact, sometimes the bug is debated as “not really a bug,” and often the fix takes longer than expected, writes Nir Chako, Senior Security Researcher, Pentera.

But regardless of where the friction arises, the rules tend to be accepted by both vendors and researchers. They look like this: Responsible disclosure = Report → Fix → Public disclosure (usually within ~90 days).

At its core, responsible disclosure is built on a shared understanding: researchers and vendors working together to reduce risk. That final step, public disclosure, isn’t ceremonial; it’s essential. It’s how we hold systems accountable, how defenders stay informed, and how the broader community learns and adapts.

Importantly, responsible disclosure has never been about shaming vendors. Bugs happen. Complex systems break in unexpected ways. Attackers are clever and persistent, often chaining together issues that weren’t exploitable on their own.

There’s no shame in someone discovering a vulnerability. In fact, fixing it transparently is one of the best ways to demonstrate security maturity. So what happens when that final step is quietly taken off the table?

A Vulnerability Reported. A Problem Buried.

Not long ago, I discovered a significant vulnerability in a widely used software product. I submitted it directly to the vendor’s official channel, along with detailed reproduction steps and context.

It wasn’t just acknowledged; it was validated. The company agreed it was a real issue, classifying it as high severity with a CVSS score of 7.5. But their response didn’t come with a timeline for a fix or a plan for disclosure. Instead, they kindly pointed me to the fine print of their submission process.

By submitting the vulnerability, I had implicitly agreed to a set of legal terms that prohibited any form of public disclosure, indefinitely. I asked whether a different disclosure process would have led to a different outcome, but was told the policy stands regardless of how the report is submitted.

They also informed me that no CVE would be issued.

Effectively, the vulnerability, though acknowledged and potentially impactful, was now locked in a box. No advisory. No community alert. No patching timeline. And no way for customers to even know they were exposed, unless the vendor chose to tell them.

Responsible Disclosure > Silence by Design.

On paper, the process looked fine: a report submitted, validated, and rewarded. But in practice, the structure served a different purpose. It prioritized limiting liability over enabling transparency.

And that’s a problem. Vulnerabilities don’t go away because they’re hidden. Attackers don’t respect terms of service. If an issue exists, someone else, potentially with more nefarious intentions, will eventually find it. Meanwhile, the only person being restricted is the one trying to fix the problem.

This is where responsible disclosure begins to lose its meaning. If the rules prevent transparency, researchers are left in an untenable position: follow the process and stay silent, or speak out and risk being labeled irresponsible even if the disclosure is technically accurate and security-motivated.

The Shifting Dynamics of Vulnerability Disclosure

This isn’t an isolated incident. Increasingly, I’m seeing vendors deploy disclosure programs not as pathways to transparency, but as tools for control. What was once a collaborative effort is becoming a one-sided agreement: “Give us your research. But you don’t get to talk about it.”

The goal seems obvious: avoid headlines, avoid scrutiny, and quietly delay or deprioritize remediation.

It’s a troubling evolution. Responsible disclosure has always relied on a delicate balance: vendors get time to fix, and researchers retain the right to publish. That balance creates trust. And that trust ensures the system works even when friction arises.

But when vendors break that balance, when they invoke legal language to prevent researchers from publishing their findings (after responsibly disclosing them), they undermine the very system they claim to support.

The New Dilemma

This situation left me with a question I’ve never had to ask in such stark terms:

If responsible disclosure prevents public awareness, is it still responsible?

There’s no easy answer. I continue to believe in working with vendors and giving them time to remediate. But I also believe defenders have a right to know when they’re exposed. And I don’t believe researchers should be forced to choose between staying silent and being labeled reckless, especially when the alternative is quiet inaction from the vendor.

A Call for Industry-Wide Standards

What we need now isn’t more good intentions. It’s clearer structure. Because today, vendors can sidestep disclosure, decline a CVE, prohibit publication, and keep customers in the dark, all without violating any industry rule. That’s not the failure of a single company. It’s a failure of the framework.

This is why I’m encouraged by CISA’s recent announcement outlining its updated vision for the CVE program. Their goal is to reframe CVE as a public good, with an emphasis on timely, transparent, and standardized vulnerability disclosure. In a world of increasingly complex software supply chains, that kind of leadership is badly needed.

But vision alone won’t fix the problem. We need consistent, enforceable standards that ensure:

  • Vulnerabilities acknowledged by vendors can’t be contractually buried
  • CVEs can’t be withheld solely to avoid attention
  • Researchers retain the right to publish after a reasonable disclosure window

Organizations like CISA, MITRE, and the CVE Program should play a leading role, not just as administrators, but as advocates for transparency. And yet, we’ve seen how fragile even those institutions can be.

Earlier this year, MITRE’s CVE program nearly lost funding, threatening the backbone of coordinated disclosure. A last-minute intervention kept it afloat, but it was a wake-up call. If that system falters, more power may shift to the vendors themselves, and some are already showing how they’ll use it.

That’s not responsible disclosure. It’s reputation management, and while PR and Comms teams may celebrate, the security community should not.

A System Out of Balance

Security research is the engine that drives progress. It’s how we find the weaknesses before attackers do. But if that research is censored, not for purposes of national security, but by vendors protecting their brand, then we’re not safer. We’re creating a false sense of security.

To be clear, the team I worked with on this issue was professional and respectful. They engaged in good faith, but the structure they were operating within, and the terms I apparently had no choice but to accept, led to an outcome where the vulnerability is known, but cannot be shared.

If I challenge that silence by publishing, I’ll be framed as irresponsible. I may face legal threats. The vendor, meanwhile, faces no obligation to disclose, because outside its own team, only I know the vulnerability exists.

That’s the system. And it’s one we need to fix.

Are we okay with a model where transparency is optional for vendors but mandatory for researchers? Where silence is enforced, and accountability is a matter of choice?

If we want responsible disclosure to mean something moving forward, we need to ensure the responsibility goes both ways.
