Microsoft has admitted to a technical error that caused its AI-powered workplace assistant to unintentionally access and summarise some users’ confidential emails.

The company has promoted Microsoft 365 Copilot Chat as a secure generative AI tool for organisations using Microsoft 365. However, it confirmed that a recent issue resulted in the system surfacing information from emails stored in users’ Draft and Sent folders — including messages marked as confidential.

Microsoft said it has since deployed a configuration update worldwide for enterprise customers and stressed that the problem did not grant access to anyone who was not already authorised to view the information.

In a statement, the company explained it had “identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop.”

While Microsoft said its access controls and data protection policies remained intact, it acknowledged that the behaviour did not align with the intended Copilot experience, which is designed to exclude protected content from AI responses.

The issue was first reported by tech outlet Bleeping Computer, which cited a service alert indicating that emails carrying confidentiality labels were being incorrectly processed by Copilot Chat. According to the notice, the tool’s work tab summarised emails even when sensitivity labels and data loss prevention policies were in place to restrict sharing.

Reports suggest Microsoft became aware of the issue in January. A related service notice was also shared on an IT support dashboard for NHS workers in England, where the root cause was described as a “code issue.”

The NHS confirmed that any processed draft or sent emails would remain accessible only to their creators and that no patient information had been exposed.

Concerns over rapid AI rollout

Copilot Chat integrates with applications such as Microsoft Outlook and Microsoft Teams, allowing users to summarise emails, generate responses and retrieve information across workplace systems.

Although enterprise AI tools typically include enhanced security safeguards, some experts argue that incidents like this underline the risks of deploying generative AI at speed. As companies race to integrate advanced AI features into existing platforms, they warn that occasional data handling errors may be difficult to avoid — even with strict governance frameworks in place.
