Large language models are largely commoditized with the main players now offering little differentiation in terms of performance or accuracy, Appian’s CTO claimed this week.
The process automation and low-code vendor put AI, particularly agentic AI, at the heart of its user conference in Denver.
But it centred the conversation on what AI can do within the context of business processes, dialling back on some of the wilder promises rivals have made for the technology.
Appian CTO Mike Beckley said in an interview with The Stack that, “AI has been one of those elements that’s come in waves and always been over-promised and underdelivered.”
In 1999, when the company was founded, he said, “It was downright dreadful. We used to make fun of the AI recommendation algorithms used for e-commerce sites at the time.”
The technology has clearly come on in leaps and bounds since then. But, he continued, its customer base is “serious organizations doing serious things. Health care, finance, government, national security and so on.”
When it came to banking, for example, regulators might be worried about the use of AI if it led to applicants being denied mortgages for the wrong reasons.
“We’re not using AI that way. We’re using AI to make sure all the paperwork is in order. Which still leads to much better results.”
Likewise, one example cited repeatedly throughout the conference was insurance companies using agentic AI to speed up claims processing.
Cynics might suggest that insurance companies have little interest in speeding up insurance claims, as their business depends on not paying out.
Beckley said, “I can't speak for all the insurance industry, but if they want to go slow, they don't need me.”
He added, “The insurance industry learned a long time ago…the claim size grows the more you wait. So they want to use the AI technology to get a validation of what the damages were in that car crash as fast as possible.”
The firm has in fact seen tremendous adoption of AI by its insurance customers and across its customer base as a whole, according to Beckley.
“We consumed more tokens in Q1 than we consumed in all of 2024. It's a parabolic increase in AI adoption amongst our customers and on the insurance front.”
One insurance client had used Appian’s process automation to enter the reinsurance business, generating $2bn in new business, and was now looking to use AI to speed up client onboarding, he claimed.
Beckley said that regulated businesses had traditionally been conservative about adopting new technology. But, “I would say that we are seeing significant leading investment in those industries, mostly because they have the capital to do it. There isn't a shortage of interest in every industry. It's just there's usually a shortage of means.”
Appian’s own AI proposition draws heavily on Amazon Bedrock, AWS’s managed foundation model service. The cloud giant has invested in Appian, and the two firms have a strategic collaboration agreement.
“That allows us to bring large language models and vector databases and other associated technologies for securely deploying models to important business cases, and do that much faster,” said Beckley. “And do it in a way which eliminates the need for our customers to be concerned about the how.”
At the same time, he said, “There are different model providers, and today, most of our pioneering work has been done with Anthropic.”
There were innovations around LLMs every day, he continued, which meant applications needed to keep improving as the technology did. But at the same time, “Your intake process for your patients can't break because the large language model had some cool new feature.”
Large language models are increasingly commoditized, he said. “The difference in accuracy between these models is sometimes less than a percentage point by the benchmarks, and the costs of using them are falling dramatically.”
Nevertheless, “Customers themselves today are experimenting with all of those open source frameworks for agentic technology, or working directly with what they get from the LLM providers.” This had nothing to do with process, he said.
But it made more sense to embed AI in existing processes than to try to build and impose guardrails after the fact. This also guarded against hallucinations, he said, since agents, for example, can only do what they’re allowed to do.
“So if you're willing to give up on the idea that we should just let AI do everything and instead say, wait a minute, maybe AI is here to help us. And if we set the goals and we set the constraints and we provide the tools, then AI can provide creative solutions for us. But it's still up to us to be in charge.”