OpenAI
Open models hold potential for “substantial harms, such as risks to security, equity, civil rights, or other harms due to, for instance, affirmative misuse, failures of effective oversight, or lack of clear accountability mechanisms”
The company says it is releasing examples "to give the public a sense of what AI capabilities are on the horizon", while one expert, NVIDIA's Dr Jim Fan, emphasised that "if you think OpenAI Sora is a creative toy like DALLE... think again"
"In every insider threat case, there is a combination of network activity and employee behaviour. The malicious activity crosses both physical and electronic modalities..."
"To data scientists and developers in the domain, the answers to these questions may be laughably obvious and the questions naive, but to most end-users they will not be."
"We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft"
Altman was "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."