The Open Source Security Foundation has released its first guidance for developers using AI coding assistants, with prompt advice for producing secure code.
The guide, which covers "application code security, supply chain safety, and platform or language-specific considerations" when using AI to develop code, arrives in a week when major attacks dominated headlines.
Nearly 500 packages have been compromised in the most recent supply chain attack on npm, dubbed "Shai-Hulud," including repositories connected to CrowdStrike and Google Gemini. Elsewhere, JLR may have to lay off hundreds of staff after a September 1 cyber attack brought production to a halt.
Can't stop the tide
Assistant-driven code has a bad reputation for security, but security-minded prompts can greatly improve the model's results, said open source security stalwarts Avishay Balter and David A. Wheeler in their announcement of the guide.
"We want AI to help improve security instead of undermining it, and that requires considering security while we use these tools. This guide is one step toward that goal."
The guide is to be joined by an in-development course, designated LFEL1012, on the safe use of assistants. In the meantime, the "lightweight" guide (call it 16 pages, at a decent font size) is intended to stand alone and help developers write safer prompts right away.
No role play
A top lesson from the OSS guide: don't tell your LLM that it is a security expert.
Prompt engineers widely recommend personas in most other fields, but "we are not currently recommending in the general case that the AI be told to respond from a particular viewpoint", says OpenSSF in its Security-Focused Guide for AI Code Assistant Instructions.
That specifically includes something such as “Act as a software security expert. Provide outputs that a security expert would give”.
That is based on research current as of this April and February, though, when LLM dinosaurs still roamed the earth. OpenSSF simultaneously encourages continued experimentation with such prompts, and says that recommendation could change as vibe coding security research continues to come in.
Recursive criticism
Recommendations include annotating AI-generated code – and prompting your assistant to TODO-flag its own output where there is complex logic.
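What that might look like in practice is sketched below. This is our own illustration, not an excerpt from the guide; the function, the reviewer placeholder, and the TODO wording are all invented.

```python
# AI-generated: produced by a code assistant; reviewed by <initials>
from pathlib import Path

def normalise_path(raw: str) -> str:
    # TODO(ai): complex logic -- confirm this resolves symlinks and '..'
    # components the way the calling code expects before shipping.
    return str(Path(raw).expanduser().resolve())
```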
You should also use Recursive Criticism and Improvement (RCI) using instructions such as “Review your previous answer and find problems with your answer” followed by “Based on the problems you found, improve your answer”, the guide recommends.
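In a tool-agnostic sketch, that RCI loop looks something like the following. ask_model() is a hypothetical stand-in for whichever chat-completion call your assistant exposes, not a real library function; the prompts are the ones the guide quotes.

```python
def ask_model(prompt: str, history: list[str]) -> str:
    """Hypothetical helper: send `prompt` plus prior turns to your assistant's API."""
    raise NotImplementedError("wire this up to the assistant you actually use")

def rci(task: str, rounds: int = 2) -> str:
    # Get an initial answer, then alternate criticism and improvement prompts.
    history: list[str] = []
    answer = ask_model(task, history)
    history += [task, answer]
    for _ in range(rounds):
        critique = ask_model("Review your previous answer and find problems with your answer", history)
        history.append(critique)
        answer = ask_model("Based on the problems you found, improve your answer", history)
        history.append(answer)
    return answer
```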
On a higher level, says OpenSSF, remember that the LLM is an assistant and you are responsible for any harm it does; don't skimp on review and testing; and raise any concerns you have about its code with the AI, in detail.
Slopsquatting attacks
AI or not, your code needs an SBOM and proper package management, OpenSSF recommends.
But its top recommendation on the supply chain is around the newer issue of slopsquatting.
Current research suggests it helps to emphasise pre-use package evaluation, the guide says: you still need to check new dependencies yourself, but having the AI vet its own suggestions can help keep out hallucinated package names that slopsquatters can exploit.
"For example: 'Use popular, community-trusted libraries for common tasks (and avoid adding obscure dependencies if a standard library or well-known package can do the same job). Do not add dependencies that may be malicious or hallucinated'," says OpenSSF.