
AUSTIN (Nexstar) — Former Facebook employee Frances Haugen captured the world’s attention in October 2021 when she went public with complaints that the company’s own research shows how it magnifies hate and misinformation. On Monday, the Facebook whistleblower spoke at South by Southwest to draw further attention to issues she said the social media giant has yet to address.

Haugen, who worked at Google and Pinterest before joining Facebook (now Meta) in 2019, said she had asked to work in an area of the company that fights misinformation after losing a friend to online conspiracy theories.

“It’s really easy to be dismissive of the severity of misinformation. But I lived with watching a college-educated, smart, insightful, funny person, go down the rabbit hole,” she said. “That made me realize that once we begin to not have an ability to have shared facts … we don’t have a path back to reconciliation.”

In April 2018, CEO Mark Zuckerberg testified before Congress with promises that artificial intelligence, or AI, technology would be the solution for misbehavior on Facebook, like fake news, propaganda and hate speech.

“Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content,” Zuckerberg told lawmakers in the 10-hour hearing.

During her panel at SXSW, Haugen disputed this and criticized Facebook’s reliance on AI as a means of fact checking and regulating content on its platform.

“Facebook has claimed third party fact checking will save us; AI will save us. And the reality is it doesn’t,” she said.

The data scientist said that, according to Facebook’s own research, AI helps remove only 3% to 5% of hate speech content, 0.08% of violence-inciting content and 8% of graphic violent content.

“The AI cannot fix itself. It has to have humans watching who have context, who can understand what are the spots we need to fix,” Haugen said.

She also pointed to a relatively new Twitter feature that requires users to click on an article before sharing it, saying research shows that “tiny bit of friction” lowers misinformation by 10% to 15%.

“If you have to click on a link, you can still share it and say whatever you want, but you have to click on it. Have you been censored? I don’t think so,” Haugen said.

Haugen said she believes Zuckerberg could add features to Facebook that would help decrease misinformation and disinformation without censoring users, but chooses not to out of fear of losing profits.

“I cannot listen to Mark Zuckerberg say earnestly that he is a defender of freedom of speech … he has tools he could use today to stop misinformation that don’t involve picking winners and losers in the marketplace of ideas,” she said.

Haugen offered a variety of proposed solutions to these problems, but all of her pitches pointed back to greater transparency from tech giants.

“We have to have legislatively-supported, required transparency,” she said. “The solution here is about putting humans in the loop … because the problem with Facebook is not bad people. It’s not bad ideas. It’s about product choices that give the most reach to the most extreme ideas.”

Zuckerberg also made a virtual appearance at SXSW to talk about the metaverse, an immersive 3D “world” blending virtual and augmented reality.