Facebook has long relied on its users to report problematic content, but it soon found itself with another problem on its hands: users who falsely report accurate content as untrue.
It’s not the first platform to confront this. In the wake of a new wave of fake news, Russian interference, and policy abuse, other platforms, such as Twitter, are changing their approach to risk. Twitter now analyzes the behavior of other accounts within a person’s network as a risk factor, which helps it judge whether that person’s tweets should be amplified.
So Facebook has recently begun assigning users a reputation score that serves as an indicator of their trustworthiness. The score is not meant to be an absolute measure of a user’s overall credibility; rather, it is one more signal among the behavioral clues that Facebook will, from now on, take into account.
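Facebook has not published how the score actually works, but the idea of weighting a reporter’s history can be sketched roughly like this. Everything here is a hypothetical illustration: the function name, the starting score, and the step size are invented, not Facebook’s implementation.

```python
# Hypothetical sketch of a reporter-trustworthiness signal like the one the
# article describes. Each user starts at a neutral score; a report that
# fact-checkers later overturn (the flagged story was accurate) lowers it,
# while a confirmed report raises it. All names and weights are invented.

def update_trust(score: float, report_confirmed: bool,
                 step: float = 0.1) -> float:
    """Nudge a trust score up for a confirmed report, down for an
    overturned one, clamping the result to the range [0.0, 1.0]."""
    score += step if report_confirmed else -step
    return max(0.0, min(1.0, score))

# A user who repeatedly flags accurate stories as false drifts toward 0,
# so their future reports could be weighted less in a review queue.
score = 0.5
for confirmed in [False, False, True, False]:
    score = update_trust(score, confirmed)
print(round(score, 2))  # 0.3
```

The point of a sketch like this is only that the score is relative and cumulative, a weight on future reports rather than a public verdict on the user.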
In addition, Facebook is monitoring which publishers its users consider more trustworthy.
Of course, these systems are not very transparent, and the companies that run them are not keen on discussing them openly. The scores Facebook assigns are not made public.
Facebook also relies on third-party fact-checking services such as Snopes and PolitiFact, but according to a report, even the fact-checkers are bothered by Facebook’s lack of transparency: although they review the stories, it is apparently Facebook that ultimately decides which stories stay and which don’t.