Facebook Inc. Chief Executive Officer Mark Zuckerberg pushed his idea this week that Big Tech can self-police content by publishing reports and data on how well the industry removes objectionable posts. The problem: Facebook already has such a system in place, and it has done little to improve accountability, according to outside experts.

“Transparency can help hold the companies accountable as to what accuracy and effectiveness they’re achieving,” Zuckerberg told Congress on Thursday. Facebook wouldn’t have to change much if such a system were the industry norm, he added. “As a model, Facebook has been doing something to this effect for every quarter.”

Zuckerberg has pushed his proposal many times amid widening calls to make social media companies more responsible for the content users post. As tech platforms come under fire for an increase in harmful posts, from hate speech to threats of violence, U.S. lawmakers are debating how to reform Section 230 of the Communications Decency Act, which shields companies from liability for user-generated content.

While a crackdown on Big Tech has been deliberated for years, the call for renewed action comes after social media companies were criticized for helping spread the misinformation that fueled the Capitol riots in January and false claims about Covid-19. Thursday’s hearing brought Congress no closer to a legislative solution, giving Facebook an opportunity to influence the outcome.

“If one company does something, it at least allows the discussion to move forward,” said Jenny Lee, a partner at Arent Fox LLP who has represented technology clients on Section 230.

However, the self-reported numbers aren’t as transparent as they sound. Facebook, for instance, reported in February that more than 97 per cent of content categorized as hate speech was detected by its software before being reported by a user, and that it acted on 49 per cent of bullying and harassing content on its main social network in the fourth quarter before users flagged it, up from 26 per cent in the third quarter. But the denominator of that equation is the content Facebook took down -- not the total amount of harmful content on the platform. And Facebook doesn’t share how many people viewed the posts before they were removed, or how long they stayed up.
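As a rough, hypothetical illustration of that denominator problem -- the figures below are invented and this is not Facebook’s actual methodology -- the following Python sketch computes a proactive rate the way the report frames it, AI-detected removals divided by total removals, and contrasts it with a rate measured against an estimate of all harmful content, including posts that are never caught at all.

    # Hypothetical illustration of why a high "proactive rate" says little about
    # how much harmful content goes unaddressed. All counts are invented.
    ai_detected_removals = 970       # posts removed after AI flagged them first
    user_reported_removals = 30      # posts removed only after users reported them
    never_actioned_estimate = 2_000  # harmful posts never caught at all (unknowable from the report)

    total_removals = ai_detected_removals + user_reported_removals

    # The report's framing: share of *removed* content that AI found before any user report.
    proactive_rate = ai_detected_removals / total_removals

    # An alternative framing: share of *all* harmful content, caught or not, that AI found.
    all_harmful_content = total_removals + never_actioned_estimate
    detection_rate = ai_detected_removals / all_harmful_content

    print(f"Proactive rate (report's denominator): {proactive_rate:.0%}")      # 97%
    print(f"Share of all harmful content caught by AI: {detection_rate:.0%}")  # 32%

The first number can stay high even if the second is low, because posts that are never detected or reported simply don’t enter the calculation.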

“It was a bit shocking and frustrating that Zuckerberg was mentioning that report as something that the industry should aspire to,” said Fadi Quran, campaigns director at Avaaz, which tracks misinformation and other harmful content on Facebook. The social media company discloses how much violent content it removes, but, Quran added, “did they take it down within minutes or within days?”

The report focuses on AI detection, which means it doesn’t disclose how much content users flag as policy violations, what share of that content is removed once reported, or whether Facebook reviews those reports at all.

A system like Facebook’s, which relies on machine learning, has significant flaws when applied broadly, according to Emma Llansó, a director at the Center for Democracy & Technology. “You really start increasing the risk that the automated systems are going to miss something by having false negatives, and have false positives where totally acceptable speech is taken down in error.”
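To make the trade-off Llansó describes concrete, here is a simplified, hypothetical sketch -- the scores and labels are invented and this is not how Facebook’s systems work -- showing how moving an automated classifier’s removal threshold exchanges false negatives (missed violations) for false positives (acceptable posts removed in error).

    # Hypothetical illustration of the false-positive / false-negative trade-off
    # in automated moderation. Each post is (classifier score, actually violates policy?).
    posts = [
        (0.95, True), (0.90, True), (0.75, False), (0.70, True),
        (0.55, False), (0.40, True), (0.30, False), (0.10, False),
    ]

    def removal_errors(threshold):
        """Count errors if every post scoring at or above `threshold` is removed automatically."""
        false_positives = sum(1 for score, violates in posts if score >= threshold and not violates)
        false_negatives = sum(1 for score, violates in posts if score < threshold and violates)
        return false_positives, false_negatives

    for threshold in (0.3, 0.6, 0.9):
        fp, fn = removal_errors(threshold)
        print(f"threshold={threshold:.1f}: {fp} acceptable posts removed in error, "
              f"{fn} violating posts missed")

Lowering the threshold catches more violations but removes more acceptable speech; raising it does the reverse. No setting eliminates both kinds of error.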

The pitfalls of Facebook’s reliance on AI were outlined earlier this year by the company’s external oversight board, an independent panel Facebook created to review its most contentious content decisions. The board recently overturned Instagram’s decision to remove an image raising awareness of breast cancer symptoms, even though breast cancer awareness is an allowed exception to the company’s nudity policy.

“The incorrect removal of this post indicates the lack of proper human oversight which raises human rights concerns,” the board wrote. “The detection and removal of this post was entirely automated.” The panel requested that users be notified when their content is taken down by AI and given the option to have a human review an appeal.

Facebook has said giving users that kind of option would be operationally difficult. The company’s services have more than 3 billion users and about 15,000 content moderators, some of whom are working from home due to the pandemic -- and can’t review the most sensitive content outside the office for legal reasons.

The shortage of human staff, along with AI that is still in development, poses particular challenges for Facebook’s global network. “We need to build systems that handle this content in more than 150 languages,” Zuckerberg said Thursday. “And we need to do it quickly. And unfortunately, there are some mistakes in trying to do this quickly and effectively.”

The content transparency reports contain no data about the languages or geography of the posts Facebook enforces its rules against. They also say nothing about misinformation -- another key area of concern for lawmakers.

“That transparency report gives almost zero transparency,” Quran said.