Facebook parent Meta has been accused of “neglecting” an initiative designed to remove harmful content and misinformation online, in an escalating dispute about how the tech giant works with human rights groups to moderate its platforms.
Researchers at the media non-profit Internews on Wednesday released a scathing report into the social media company’s Trusted Partner program, a longstanding initiative through which 645 global human rights and civil society groups can report damaging material such as hate speech and threats to activists and journalists.
Internews argued that recent lay-offs at the company have left the initiative “under-resourced and understaffed”, leading to “operational failures”.
“This lack of resourcing undermines a critical program focused on user safety and platform integrity,” said Rafiq Copeland, global platform accountability adviser at Internews, and the report’s author. “It is our hope that this program and others like it can be reinvigorated. People’s lives depend on it.”
Meta has undergone significant restructuring in recent months, including a flattening of the management structure and job cuts affecting about 20,000 staff, against a backdrop of tough macroeconomic conditions and increased demands from investors.
Dubbed the “year of efficiency”, the cuts have raised fears that content moderation in particular could be endangered as financial resources are directed towards areas such as building new artificial intelligence products.
Among other criticisms, Internews found that response times to reports of dangerous content or actions were “erratic” and could take months, except in cases related to the war in Ukraine, which were prioritised.
In another sign of a deepening rift with partners, Internews said Meta first agreed to collaborate on its investigation when proposed in 2021, before later declining to participate in 2022 without explanation.
In comments included in the report, Meta disputed the characterisation of many of its claims but said it was “working to develop new methods of sharing information about the overall impact and performance” of the program.
It also acknowledged “the need for clear reporting guidelines and tracking mechanisms for Trusted Partner reports”, adding it was in the process of “developing standard reporting templates, tailored for different harmful content types . . . to further facilitate reporting from partners.” Meta did not respond to further requests for comment.
The critical report included surveys and interviews with 24 trusted partners, who described difficulties with the company’s reporting mechanism, a lack of consultation with experts on its policies and a lack of transparency.
Meta has previously come under fire from human rights groups for failing to sufficiently police its platforms in areas of conflict such as Myanmar.
The social media giant has also faced allegations that it has failed to be transparent and glosses over its failings — an accusation even made by its own independent oversight board, a “Supreme Court”-style body set up to rule on sensitive content issues.
Meta, which receives about 1,000 reports a month from trusted partners, told Internews that more than 50 people work on the program across its content and policy teams. But the non-profit said Meta was “arguably deliberately obfuscating” by not making clear what percentage of these staffers’ time was devoted to the program.