One of the ideas Zuckerberg presented Friday indicates that the company wants to go further in “disrupting fake news economics,” and is considering more policies like the one it just announced, along with stronger “ad farm detection.”
Another promises stronger detection of misleading content. “This means better technical systems to detect what people will flag as false before they do it themselves,” Zuckerberg wrote.
News Feed can already make some guesses about whether a post is authentic based on the user behavior around it. On Friday, Zuckerberg specified that Facebook currently watches for things like “people sharing links to myth-busting sites such as Snopes” to determine whether a post might be misleading or false. Zuckerberg didn’t go into specifics about what more Facebook might be looking to do on this front.
Facebook also indicated that it’s trying to find ways to rely more on users and third parties to help flag and classify fake stories. Zuckerberg listed “easy reporting” methods for users, and said the company would listen more to “third party verification” services like fact-checking sites. He also said Facebook was considering how to use third-party and user reports of fake news as a source for displaying warnings on fake or misleading content.
Facebook would also improve the quality of the articles that appear under “related articles” beneath news stories posted to the site. And, Zuckerberg said, the company would “continue to work with journalists and others in the news industry” on the issue.
While Facebook has attracted the majority of scrutiny this week, it is hardly the only company struggling to address the spread of fake news on the Internet. On Monday, the top Google hit for the search “final election count” was a site falsely reporting that Trump had won the popular vote. Like Facebook, Google has also taken steps this week to try to stop fake news writers from using its ad services to make money.