AI rush means bugs are a question of security AND safety

Bugcrowd founder says now is the time for the crowd to identify bias and other AI threats

Enterprises and vendors alike are generating a rash of unintended consequences as they rush AI-flavoured platforms and products to market, the founder of crowd-sourced security vendor Bugcrowd has warned.

Casey Ellis was speaking in the wake of Bugcrowd snagging $102 million of growth-stage investment, which it will use to “double down on innovation” and focus on “adjacent use cases”. This could include M&A activity, Ellis said.

“When it comes to innovation, it comes down to applying the crowd in more ways out to market. There's a really interesting opportunity in the market right now to actually redefine what pen testing actually is,” Ellis said.

The company is increasingly using the crowd-sourced model it pioneered to help it identify AI risks, such as bias. In January it emerged that Bugcrowd is working with the US Department of Defense’s Chief Digital and Artificial Intelligence Office on a bias bounty program. “That's a brand new field of security and safety research,” said Ellis.

The Biden executive orders on AI, he said, had sharpened the focus on the potential of bias in training data, and its potential scaling and unintended consequences. “We've started to basically deploy the creativity of the crowd into identifying bias, so that it can be fixed.”

However, bias is just one threat raised by AI, said Ellis, which as a security and safety domain is “vast, and we’re only as a collective kind of wrapping our heads around it”.

He explained, “What AI is doing is bringing a bit of a convergence between safety and security.”

This had been true of the debate around election security and information warfare, as well as the automotive sector, he said. “This is a phenomenon that’s been coming over the hill for quite some time. Now AI has forced the conversation. The main reason that we do security is to keep users safe.”

He said that AI and ML were hardly new territory for Bugcrowd, which had been using machine learning, AI, and NLP within its platform for years. “With the depositing of generative AI and LLMs onto the internet over the past 15 months or so, that's given us new opportunities to use generative AI technology within the platform.”

There are two prongs to its AI strategy, Ellis said. One is how it is “attacking it,” such as its efforts around bias detection.

Bugcrowd is working with the likes of Anthropic, Google and Amazon. “We've been working with OpenAI since probably about two months before ChatGPT dropped.” Working with such organizations made it easier to build a bigger crowd of experts in the domain, he said.

“The other is how we’re making use of it,” he said.

Internally, it is using GenAI in the triage queue to help its people understand vulnerabilities more quickly, “So they can address reproducibility and severity, and then how can we make sure that that's getting to the customer so they can get to the point quickly.”

See also: Researchers go back to the 80s to jailbreak LLMs :-)

One aspect he highlighted was GenAI’s ability to address the fact that “everyone has a different mode of communication.” An individual might be a phenomenal hacker, he said, “But they don't necessarily have a developed skill set when it comes to communicating risk and impact to someone who speaks English.”

Further ahead, he said, “What gets really exciting going forward is [the ability to] fine tune and custom train models to be able to actually look at the overall dataset of what the crowd is doing on the platform and start getting into vulnerability prediction.” That includes working out “what the bug is actually worth.”

And that is more imperative than ever, because of the way AI has captured the imagination of the market. The fact that organizations can build on LLMs so quickly accelerates the rush to market, he said.

“What that does is it creates a need for speed when it comes to getting serial input on what might go wrong, and that's where we fit in.”

Hackers, good and bad, were using AI as a tool to make themselves more effective. Meanwhile, AI itself was a target in terms of bias testing.

And, he said, “There’s AI as a threat in terms of the integration of AI into existing systems and the unintended consequences that come from that.”

This really isn't unique to AI, he said. “It's this idea of integrating and pushing technology out to market as quickly as possible to compete with others that are doing the same thing. And then kind of skipping security in the process.”

In that respect, at least, this wave of innovation is the same as every other wave that’s crashed before. The difference is scale.
