An AI-powered system could soon be tasked with evaluating up to 90% of updates made to Meta-owned apps like Instagram and WhatsApp, according to internal documents reportedly seen by NPR.

The move marks a significant shift from human-led privacy evaluations to automation. Since a 2012 agreement with the Federal Trade Commission (FTC) — originally established when the company was still known as Facebook — Meta has been required to conduct privacy risk assessments on its products and updates.

Until now, these reviews have primarily been performed by human privacy experts, who assess how product changes could impact users’ personal data and overall privacy.

The introduction of AI into this process is being positioned as a way to streamline and scale compliance efforts across Meta’s sprawling suite of services. However, the change may raise concerns among privacy advocates and regulators about the effectiveness and accountability of automated decision-making in safeguarding user data.

Under the new system, Meta reportedly said, product teams will be asked to fill out a questionnaire about their work and will then usually receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before it launches.

This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

In a statement, a Meta spokesperson said the company has “invested over $8 billion in our privacy program” and is committed to “deliver[ing] innovative products for people while meeting regulatory obligations.”
