OpenAI says it will start referring some users' prompts to human reviewers – and then potentially the police – if its team detects a risk of harm.

Anthropic, meanwhile, said in terms updated on August 28 that it will use customers' data (across the Free, Pro, and Max tiers) to train its Claude models unless they opt out – and will retain that data for five years.

(The change applies only to consumer users; the new terms do “not apply to services under our Commercial Terms, including Claude for Work, Claude Gov, Claude for Education, or API use, including via third parties such as Amazon Bedrock and…Vertex AI.”)

… plus c'est la même chose?

OpenAI, for its part, said it was making the shift for safeguarding reasons: “When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team… who are authorized to take action.”

Plus ça change, plus c'est la même chose in a surveillance-capitalism economy where the consumer is a data cash cow? Perhaps. Yet whilst the twin prerogatives of harm-avoidance guardrails and telemetry for product improvement are hardly novel behaviours in SaaS-land, they do once more cast generative AI use into the enterprise privacy spotlight.

After all, CISOs and CIOs around the world have been grappling with how to manage employee AI use – and often failing: “Even with organizations that have Enterprise licenses, we see a huge amount of inadvertent personal account use. CASB/SASE tools just don't have the granularity to enforce at this level,” as Harmonic Security CEO Alastair Paterson puts it.
