If there was a job advertised this week that really captured the zeitgeist and hopefully sends an excited little shiver down the spine of some applicants it is this: Citi, the banking multinational with over $26 trillion in assets under custody, is looking for a Head of Generative AI Security.
Sitting within Citi’s Chief Information Security Officer (CISO)’s office, the hire will be responsible for building and running a “security engineering function that accelerates and delivers creative and secure capabilities to unlock the value of Gen[erative] AI” an advert posted this week said.
The hybrid role, based out of London, will also lead on: “Threat modelling and security integration of Gen AI platforms and business solutions; model input and output security including prompt injection and security assurance; [create a] mature Gen AI security governance embedding into our existing cyber security risk appetite framework; … successfully represent the global CISO interests [and manage a] complex matrixed team, including the people, budget, policy formation, and strategy planning, within a globally matrixed organization..."
This is not just a risk reduction role however. Other responsibilities for the Citi Head of Generative AI Security (the bank wants someone with a minimum of 15 years’ cybersecurity experience) will include to “ideate and leverage Gen AI to solve cybersecurity problems at scale for Citi."
The ad comes as CISOs and their security teams increasingly experiment with the capabilities of generative AI both offensively and defensively.
As one experienced pen tester, Ben Swain, Founder of Riskregister.ai put this week for example on LinkedIn: “Recently, from a deliberately vulnerable network I uploaded a .Nessus results file and asked if it [ChatGPT] could interpret it. It could and so I asked which one should I fix right now, how many can I fix in an hour, prioritisation was good… Threat-modelling is also good. Today I got it doing Active Directory recon and exploitation and outputting direct to bash. Love it all!”
The suggestion that the role will also “identify, assess, track, and report on security issues identified in third-party/supplier due diligence processes, self-assessments, architectural reviews, application testing, vulnerability scans, bug bounty programs, penetration testing, change management, cyber exercises, reviews and audits” suggests that the bank is exploring how it can use generative AI to optimise these processes.