A bug in Microsoft’s Copilot Chat feature allowed the AI to read and summarize customers’ confidential emails without permission, a vulnerability that had been active since January.
The vulnerability, tracked as CW1226324, bypassed data loss prevention (DLP) policies: emails labeled “confidential” were processed by Microsoft 365 Copilot Chat despite restrictions meant to keep sensitive information out of the large language model. The issue affects paying Microsoft 365 customers who use the AI-powered chat feature within Office applications, including Word, Excel, and PowerPoint. The bug was first reported by BleepingComputer, and Microsoft later confirmed it. According to those reports, Microsoft began rolling out a fix earlier in February.
Microsoft has not disclosed how many customers were affected by the bug. The company’s confirmation highlights the ongoing challenge of securing AI-powered features, particularly those that process sensitive information, and raises concerns about the risks of using AI tools in business environments where confidential data is routinely handled.
In a related development, the European Parliament’s IT department has blocked built-in AI features on work-issued devices, citing concerns that the tools could upload confidential correspondence to the cloud. The move underscores growing awareness of the risks posed by AI-powered tools and the need for robust security measures to protect sensitive information.