Microsoft told customers on Thursday it believed it had fully remediated CW1226324, a bug that meant Microsoft 365 Copilot Chat had been processing sensitive emails since at least January 21.
Microsoft said in an update to customers seen by The Stack that some impacted users were reporting that Copilot was once again honouring sensitivity labels and staying out of restricted content.
"We've received feedback from some affected users that the deployment fix has resolved impact," the update reads.
"We're monitoring the fix as it fully saturates in the remaining affected environments, and we expect full remediation by our next scheduled update."
"We've received reports of an issue"
Microsoft issued a service alert for CW1226324 on February 3, but it went largely unnoticed until it was first reported by Sergiu Gatlan at BleepingComputer two weeks later.
The bug may have flown under the radar due to the innocuous description Microsoft applied to the issue throughout.
"We've received reports of an issue with the Microsoft 365 Copilot chat improperly summarizing email messages," Microsoft's first alert on the bug states.
Users had, in fact, reported what amounts to a data loss prevention (DLP) policy bypass: Copilot ingested email messages in Outlook's Drafts and Sent Items folders, regardless of whether they were marked as restricted or whether an organisation had DLP policies in place.
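To make the failure mode concrete, here is a deliberately simplified sketch of the kind of exclusion check that, per Microsoft's later statement, Copilot is "designed" to apply before content reaches the assistant. This is not Microsoft's code; every name and label below is invented for illustration. The reported bug behaved as if this filter were skipped for items in Drafts and Sent Items.

```python
# Purely illustrative sketch -- not Microsoft's implementation.
# Models an ingestion-time check that keeps labelled mail out of AI context.
from dataclasses import dataclass

@dataclass
class Message:
    subject: str
    body: str
    folder: str                            # e.g. "Drafts", "Sent Items"
    sensitivity_label: str | None = None   # e.g. "Confidential"

# Hypothetical deny-list: labels whose content must stay out of the assistant.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}

def eligible_for_assistant(msg: Message) -> bool:
    """True only if the message carries no restricted sensitivity label."""
    return msg.sensitivity_label not in RESTRICTED_LABELS

def build_context(mailbox: list[Message]) -> list[Message]:
    # The bug effectively amounted to this filter not being applied:
    # labelled drafts and sent items flowed into summaries anyway.
    return [m for m in mailbox if eligible_for_assistant(m)]

if __name__ == "__main__":
    mailbox = [
        Message("Q3 numbers", "...", "Drafts", "Confidential"),
        Message("Lunch?", "...", "Sent Items"),
    ]
    for m in build_context(mailbox):
        print(m.subject)  # only "Lunch?" should reach the assistant
```

The design point the incident underlines is that such a check has to sit at ingestion time, before content ever enters the model's context; once a labelled message has been summarised, no downstream control can unwind the exposure.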
Microsoft did not respond to questions about whether Copilot exfiltrated any data from organisations, or whether sensitive information leaked beyond trust boundaries within them, the latter a question of interest to various regulators The Stack spoke to.
3 hours, but 28 days
A log of Microsoft alerts from February 3, seen by The Stack, shows Redmond identified a "code issue" – and had started to deploy a patch – within three hours of the "service degradation issue" first being logged.
A week later, Microsoft tagged the issue as ongoing since January 21, suggesting Copilot had been incorrectly ingesting emails for nearly two weeks by the time it was first logged as a bug.
It's still unclear whether the new timeline relates to an update that broke DLP or to the first user reports of the behaviour.
On February 18, Microsoft said it was reaching out to "a subset" of affected users, and on February 19 it said it was still monitoring as its fix "saturates in the remaining affected environments."
DLP trust
The incident raises fundamental questions about DLP and trust boundaries when using AI tools: when you layer AI over an environment, where does it stand with regard to DLP?
"This doesn’t mean that sensitivity labels and the DLP policy are bad. It just means that a bug is stopping them working as expected," said Tony Redmond, an Office 365 MVP responsible for the long-running Office 365 for IT Pros ebook series.
He characterised the incident as an "embarrassing security glitch" that points to poor Microsoft code review.
But it also speaks to trust assumptions, argued AWS governance architect Harry Mylonas.
"I define this as a terminal failure of 'contractual sovereignty'. Most C-Suites believe that if they apply a label, they have satisfied their duty of care. This incident proves that in a shared global control plane, your security is only as strong as the provider's latest code push."
Update | Microsoft provided a statement after publication of this article. It reads, in full:
"We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers."
Views on the incident? Get in touch.