Appian’s CSO has highlighted the need for AI bills of materials, or AI-BOMs, as concern grows about companies and service providers rushing AI-based services into production without taking full account of the security implications.

Andrew Cunje was speaking in the wake of JP Morgan CISO Patrick Opet’s open letter about the security shortcuts cloud companies were taking, which he said were causing a “dangerous concentration risk”.

He said there were two ways to think about Opet’s comments. “One is, you probably don't want to race so far ahead before you have a strategy in place for what you're going to be building, or what or how your company is going to be using a tool like AI.”

Equally important, Cunje said, is “making sure that you're selecting a vendor that is highly trusted.” He pointed out that Appian’s AI offerings are based on private models hosted for customers. The company expanded its AI agents strategy this week, with CEO Matt Calkins saying its approach would give customers full visibility into how their agents worked.

But more formal, even officially sanctioned, approaches were likely, he continued. Cunje said there had been a strategic shift over the last two years in security partnerships between public and private entities.

This was partly about the modernization of systems, but also a demand for more transparency. “We're seeing a lot of the nation states, including US Federal, including Australia, asking to see vulnerability data. They want to get direct feeds of this type of stuff.”

Software bills of materials (SBOMs) were part of this shift, he said. “We're even looking into the idea of an AI-BOM. So where you can see for the AI that you're using with Appian, these are all of the services, these are all of the packages, these are the models that are leveraged.”

This would create an “artifact to build trust”, he said. “I think there’s a bit of leading the industry as well…creating as much trust and transparency in AI, but then two, leading customers to solid decisions.”

This was similar to Google deprecating old protocols “and pushing customers in the correct direction.”

“[It] would be a slice of the SBOM with slightly more detail and characteristics. This is something that, you know, is in the works now.”
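To make the idea concrete, here is a minimal sketch of what such an “SBOM slice with more detail” might look like. It is loosely modeled on the CycloneDX SBOM format, which already defines a machine-learning-model component type; the specific component names, versions, and property fields below are purely illustrative assumptions, not anything Appian has published.

```python
import json

# Hypothetical AI-BOM: a CycloneDX-style inventory listing the services,
# packages, and models an AI feature leverages. All names are made up.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {   # a conventional SBOM entry: a software package
            "type": "library",
            "name": "example-inference-runtime",
            "version": "2.1.0",
        },
        {   # the AI-specific "slice": a model, with extra characteristics
            "type": "machine-learning-model",
            "name": "example-private-llm",
            "version": "2025-05",
            "properties": {
                "hosting": "private, per-customer",  # illustrative property
            },
        },
        {   # an external service the AI feature depends on
            "type": "service",
            "name": "example-embedding-service",
        },
    ],
}

def models_in(bom: dict) -> list[str]:
    """List the models declared in an AI-BOM, e.g. for a vendor review."""
    return [c["name"] for c in bom["components"]
            if c["type"] == "machine-learning-model"]

print(models_in(ai_bom))        # which models does this product leverage?
print(json.dumps(ai_bom)[:40])  # the trust artifact itself is just structured data
```

The point of the artifact is that it is machine-readable: a customer or regulator asking “which models does this use?” can answer the question from a direct feed rather than a questionnaire.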

He said this tied in with the EU AI Act, the NIST AI Risk Management Framework, “and we're seeing a number of the other 30 frameworks that we already deal with from a regulatory perspective start to think about AI.

"So as those emerge, we'll be bringing all of that guidance in and implementing their recommendations. But also we are trying to lead the pack there a little bit.”

A putative AI-BOM won’t solve all AI security concerns, of course. General enterprise security disciplines were still required, he said.

Cunje said “good hygiene around identity is critical to building infrastructure.” And this had to extend beyond humans. “With the rise of AI agents, [they] are another identity that you have to maintain and have good, secure processes and good hygiene around as well.”
