Security pros and tech leaders might be agog over the possibilities and threats posed by AI, but they are failing to tackle an incontinent approach to identity management or resolve competing security and dev priorities.
Sysdig’s 2024 Cloud Native Security and Usage Report shows that while generative AI has dominated mainstream tech discourse for over a year, it has yet to embed itself meaningfully in cybersecurity teams.
Sysdig cybersecurity strategist Crystal Morin pointed out that Gen AI and LLMs were an “enterprise level topic”, with plenty of individual use for tasks like crafting emails or copy, or assisting with code writing.
“But as far as security implementation and use goes, it's a little slower,” she said, though this will change over time. “I'm sure the data that we found is going to change tremendously over the next year.”
Machine learning and other techniques are already embedded in cybersecurity, for anomaly detection, for example. Overall, data analysis and correlation account for 85 percent of AI adoption.
AI in IAM? It's coming...
There are clear opportunities to use gen AI tools for identity management, vulnerability management, and flagging unused permissions, said Morin.
“I think orgs are just being cautiously optimistic about it. They need to find out how to use AI securely before they just go full blown.”
They also need to address a backlog of other, far more mundane security problems, such as identity management.
Sysdig found that just 20 percent of cloud native app protection users were making the effort to review and manage identities on a weekly basis.
An amazing 98 percent of permissions granted go unused. Almost two-thirds of cloud users and roles were actually “non humans”, it found.
But, said Morin, developers likely needed many permissions when working on a project – but once projects are complete, “that full privileged access test user needs to be removed. And that's not necessarily happening.”
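The stale-permission problem Morin describes can be sketched in a few lines: compare what a principal was granted against what it actually exercised, and anything left over is a revocation candidate. This is an illustrative sketch only; the permission names and access-log shape are made up, not a real cloud provider's API.

```python
# Hypothetical sketch: flag permissions that were granted but never used,
# the pattern Sysdig's report says covers 98 percent of grants.

def find_unused_permissions(granted: set[str], access_log: list[dict]) -> set[str]:
    """Return granted permissions that never appear in the access log."""
    used = {entry["permission"] for entry in access_log}
    return granted - used

# Example data: a project's test user kept its full grants after launch.
granted = {"s3:GetObject", "s3:PutObject", "iam:CreateUser", "ec2:TerminateInstances"}
access_log = [
    {"principal": "ci-bot", "permission": "s3:GetObject"},
    {"principal": "ci-bot", "permission": "s3:PutObject"},
]

stale = find_unused_permissions(granted, access_log)
print(sorted(stale))  # candidates for revocation
```

A real implementation would pull grant and usage data from the cloud provider's IAM access logs rather than in-memory lists, but the diff is the core of the technique.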
The continuing use of public registries for downloading images or components was also “disappointing”, Morin said. Public registry use for hosting or downloading container images was 66 percent, the highest level seen since the firm started the study back in 2019.
“We know that we should not be using public sources. Because there is a higher risk of vulnerabilities not being fixed. You're relying on open-source software to make those fixes for you.”
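One way to surface the public-registry exposure Morin describes is to scan image references and flag any that resolve to a public registry rather than an internal one. The sketch below is illustrative: the registry hostnames are example values, and a real check would run inside an admission controller or CI step.

```python
# Illustrative check: flag container images pulled from public registries.

PUBLIC_REGISTRIES = {"docker.io", "quay.io", "ghcr.io"}

def registry_of(image: str) -> str:
    """Return the registry host of an image reference (docker.io if omitted)."""
    if "/" not in image:
        return "docker.io"              # bare names default to Docker Hub
    first = image.split("/", 1)[0]
    # A registry host contains a dot or a port; otherwise it's a namespace.
    return first if ("." in first or ":" in first) else "docker.io"

images = ["nginx:1.25", "registry.internal/app:2.1", "quay.io/team/tool:latest"]
public = [i for i in images if registry_of(i) in PUBLIC_REGISTRIES]
print(public)  # images that should be mirrored to an internal registry
```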
This was compounded by a lack of management of resource limits, which would otherwise help spot potential issues. “Crypto mining is the easiest example,” she said.
This was typically down to “convenience”, with tech pros not wanting to limit themselves and then have to repeatedly ask for more resources.
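The resource-limit gap above is straightforward to audit: any container deployed without a CPU limit can soak up a whole node, which is exactly what a cryptominer wants. The data shape here is made up for illustration; a real check would query the Kubernetes API or a monitoring agent.

```python
# Illustrative sketch: spot containers running without CPU limits, the gap
# that makes crypto mining easy to miss.

def containers_without_limits(workloads: list[dict]) -> list[str]:
    """Return names of containers that set no CPU limit."""
    return [w["name"] for w in workloads if w.get("cpu_limit") is None]

workloads = [
    {"name": "web-frontend", "cpu_limit": "500m"},
    {"name": "batch-job", "cpu_limit": None},    # unbounded: miner-friendly
    {"name": "test-runner", "cpu_limit": None},
]
print(containers_without_limits(workloads))
```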
Another neglected area, said Morin, was drift control, the idea “that an immutable workload should not change during runtime; therefore, any observed change is potentially evidence of malicious activity.”
Just 25 percent of cloud users received drift alerts, and just 4 percent fully leveraged drift control policies by automatically blocking unexpected executions. Needless to say, false positives are a big concern, and Sysdig said the numbers “speak broadly to the state of security maturity regarding continuous delivery and infrastructure automation practices”.
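The drift-control idea described above can be sketched as: record the executables present in an immutable image at deploy time, then treat any process launched from a path outside that baseline as drift. The paths and policy behavior below are illustrative, not Sysdig's implementation; the alert-versus-block split mirrors the detect-only posture most organizations stop at.

```python
# Minimal drift-control sketch: anything outside the deploy-time baseline
# is drift, handled as an alert or a hard block depending on policy.

BASELINE = {"/usr/bin/python3", "/usr/local/bin/app"}  # captured at deploy

def check_exec(path: str, enforce: bool = False) -> str:
    """Return 'allow', 'alert', or 'block' for an observed execution."""
    if path in BASELINE:
        return "allow"
    return "block" if enforce else "alert"  # most orgs stop at alerting

print(check_exec("/usr/local/bin/app"))        # allow
print(check_exec("/tmp/xmrig"))                # alert (detect-only mode)
print(check_exec("/tmp/xmrig", enforce=True))  # block (the 4 percent)
```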
Nevertheless, “critical and high vulnerabilities in use” were down by half, suggesting teams were paying down their “high risk vulnerability debt when presented with actionable, well-scoped remediation priorities.”
Overall, shift left remains a “not yet fully realized” goal. Runtime scans showed a 91 percent vulnerability policy failure rate, while CI/CD build pipelines had a 71 percent failure rate. The report said the shift-left mantra suggests these numbers should be flipped, with dev orgs spotting failed builds, correcting the code, then redeploying.
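Flipping those numbers means failing builds in the pipeline instead of letting findings surface at runtime. A shift-left gate can be sketched as below; the severity names and scan-result shape are illustrative, not any particular scanner's output format.

```python
# Hedged sketch of a shift-left vulnerability gate: fail the CI/CD build
# when a scan reports critical or high findings, so fixes happen pre-deploy.

BLOCKING = {"critical", "high"}

def gate(findings: list[dict]) -> bool:
    """Return True if the build should pass (no blocking-severity findings)."""
    return not any(f["severity"] in BLOCKING for f in findings)

scan = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "low"},
]
print("pass" if gate(scan) else "fail")  # fail: fix and rebuild, don't deploy
```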
This could be down to a disconnect between Dev (Ops) and Security teams, said Morin. Or it could be security looking to help dev speed things along, “and we’ll take care of it on the back end.”
What was really needed was cohesive teamwork, with each group understanding the other’s real needs.
But, said Morin, this “could be a great opportunity at this point for security leaders to say, hey, let's really start working together on this. Let's go get some happy hours. Let's go out and have some fun and get to know each other and then come back to the office and get stuff done.”