Microsoft on Thursday revealed that it’s delaying the rollout of the controversial artificial intelligence (AI)-powered Recall feature for Copilot+ PCs.
To that end, the company said it intends to shift Recall from general availability to a preview, available first through the Windows Insider Program (WIP) in the coming weeks.
“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” it said in an update.
“This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users.”
First unveiled last month, Recall was originally slated for a broad release on June 18, 2024, but it has since come under fire after being widely panned as a privacy and security risk and an alluring target for threat actors looking to steal sensitive information.
The feature is designed to capture screenshots of everything users do on their PCs and turn them into a searchable database using an on-device AI model.
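For illustration, the following minimal Python sketch shows how a Recall-style pipeline could work in principle: periodic screen captures are run through local OCR (standing in here for the on-device AI model) and indexed in an on-device full-text-search database. The library choices (Pillow, pytesseract, SQLite's FTS5) and every name in the sketch are assumptions made for the sake of the example, not Microsoft's implementation; pytesseract additionally requires a local Tesseract install.

```python
# Hypothetical Recall-style pipeline: capture -> extract text -> index locally.
# NOT Microsoft's implementation; library choices are illustrative assumptions.
import sqlite3
import time
from datetime import datetime

from PIL import ImageGrab   # screen capture (Windows/macOS)
import pytesseract          # local OCR, standing in for the on-device AI model

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(ts, text)")

def capture_and_index() -> None:
    """Grab a screenshot, extract its text, and store it for later search."""
    image = ImageGrab.grab()
    text = pytesseract.image_to_string(image)
    db.execute("INSERT INTO snapshots VALUES (?, ?)",
               (datetime.now().isoformat(), text))
    db.commit()

def search(query: str) -> list[tuple[str, str]]:
    """Full-text search over everything captured so far."""
    return db.execute(
        "SELECT ts, snippet(snapshots, 1, '[', ']', '...', 10) "
        "FROM snapshots WHERE snapshots MATCH ?",
        (query,),
    ).fetchall()

if __name__ == "__main__":
    for _ in range(3):              # capture a few snapshots
        capture_and_index()
        time.sleep(5)
    print(search("password"))       # anything on screen becomes queryable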
Windows Central also reported that Microsoft was “overly secretive” about Windows Recall during development and chose not to test it publicly as part of the Windows Insider Program.
The backlash prompted Redmond to make Recall an opt-in feature and to introduce a slew of other security changes, including requiring users to authenticate via Windows Hello in order to view the captured content.
It also reiterated that the feature is further protected by “just in time” decryption, which ensures the snapshots are decrypted and made accessible only when the user authenticates using their biometrics or a PIN.
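In other words, the snapshots are meant to stay encrypted at rest, with decryption deferred to the moment an authenticated user asks to view them. The short, hypothetical Python sketch below illustrates that pattern; the authenticate() stand-in and the Fernet key handling are assumptions for illustration only, since Windows Hello actually gates access through OS-level mechanisms rather than anything shown here.

```python
# Hypothetical illustration of "just in time" decryption, NOT Microsoft's code:
# snapshots are stored encrypted, and plaintext exists only after a successful
# authentication check.
from cryptography.fernet import Fernet

def authenticate() -> bool:
    """Stand-in for Windows Hello biometric/PIN verification (demo only)."""
    return input("Enter PIN: ") == "1234"

key = Fernet.generate_key()      # in practice the key would be OS-protected
vault = Fernet(key)

snapshot = vault.encrypt(b"screenshot bytes")   # encrypted at rest on disk

def view_snapshot(blob: bytes) -> bytes:
    # Decrypt just in time: only after the user proves their identity,
    # and without ever persisting the plaintext.
    if not authenticate():
        raise PermissionError("authentication required to view snapshots")
    return vault.decrypt(blob)

if __name__ == "__main__":
    print(view_snapshot(snapshot))
```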
The delay follows Microsoft President Brad Smith's testimony before the House Committee on Homeland Security about the tech giant's security lapses in recent years, including high-profile breaches by Chinese and Russian state-sponsored hackers.
Smith, in his written testimony, said Microsoft is committed to prioritizing security, describing it as “more important even than the company’s work on artificial intelligence.”
If anything, the move highlights the growing scrutiny and caution surrounding the deployment of AI capabilities, as companies grapple with balancing innovation against responsible and trustworthy use of the technology.
The development comes days after Apple unveiled a new approach called Private Cloud Compute (PCC) that aims to perform AI processing tasks in the cloud in a privacy-preserving manner.