Facebook Drops “Fake News” Red Flags After Users Appear Not to Have Trusted Them

Just one year after introducing its “fake news” red flags, Facebook has discontinued the tool after having to face the fact that it just didn’t work. In fact, many users shared articles tagged by the social media giant precisely because they had been tagged. Facebook, it appears, does not have the credibility it seems to have thought it had.

The “Disputed” article flagging feature — introduced in December 2016 — was designed to stop the spread of articles that Facebook’s leadership considers misinformation. Of course, Facebook — with its record of manipulating users’ newsfeeds, censoring posts expressing views that differ from those of its leadership, and violating users’ privacy — is not, perhaps, the best judge of what is and is not “fake news.” It is certainly not the sole arbiter of truth.

In announcing its decision to ditch the “fake news” red flags, Facebook Product Manager Tessa Lyons wrote in a blog post, “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs — the opposite effect to what we intended.” In other words, for those already inclined to doubt the veracity of Facebook calling an article “fake news,” the red flag served more as a “hey, maybe there’s something to this” flag.


In her blog post (odd that it wasn’t a Facebook note), Lyons explained that the “Disputed” flag would be replaced with a feature that lists “related articles” next to the information that Facebook deems unreliable. This new feature (to replace the old new feature) is based on research conducted by Facebook showing that when questionable articles had related articles next to them, users were less likely to repost them. Lyons wrote, “False news undermines the unique value that Facebook offers: the ability for you to connect with family and friends in meaningful ways. It’s why we’re investing in better technology and more people to help prevent the spread of misinformation.”

There is no question that misinformation is spread on social media. This writer has seen more than his fair share of it. But Facebook’s attempts to “prevent the spread of misinformation” are coupled with its own efforts to spread misinformation. That is a recipe for failure. The selective spread of misinformation is still misinformation.

Facebook has been found to have routinely manipulated the way users view information. This has been accomplished by a variety of methods, including psychological manipulation, censorship (including blocking posts altogether), and filtering users’ newsfeeds so that they don’t see certain articles — even when those articles are posted by friends.

In July 2015, in the wake of the Supreme Court’s overstepping of its constitutional boundaries by forcing the issue of same-sex “marriage,” Facebook conducted a psychological experiment on its users, manipulating many to change their profile pictures using the “Celebrate Pride” rainbow filter. This was not Facebook’s first foray into psychological experimentation using its users as unwitting lab rats. As The Atlantic reported at the time, in June of 2014, “Facebook manipulated what users saw when they logged into the site as a way to study how it would affect their moods.” This was accomplished by creating what were essentially “test groups” and filtering their timelines to show some groups only positive posts from their friends while showing other groups only negative posts. Facebook then monitored the “mood” that each user selected from the drop-down menu when making a post of their own.

In a previous experiment, the social media giant manipulated the way people voted in the 2010 congressional elections. By filtering the political posts and ads that were shown to Facebook users, and monitoring the websites those users visited after seeing those posts and ads, Facebook was able to steer users to a particular point of view and influence their voting patterns.

Then in April 2016, Facebook altered the algorithm that determines which posts and articles users see. The new formula meant that even if a large number of your friends shared a particular article, you might never see it in your newsfeed. The company has also blocked users from posting certain things, essentially censoring those users.

During the 2016 elections, Facebook was seen burying posts and articles favorable to Donald Trump while promoting posts and articles favorable to Hillary Clinton.

Given that Facebook has such an obvious agenda, it is little wonder that users aren’t ready to take the company’s “fake news” flags at face value. The new model of listing “related articles” next to “disputed” articles is likely just another attempt by Facebook to help guide the thinking of unwitting users.

Addressing the change, Lyons wrote, “Overall, we’re making progress. Demoting false news (as identified by fact-checkers) is one of our best weapons because demoted articles typically lose 80 percent of their traffic. This destroys the economic incentives spammers and troll farms have to generate these articles in the first place.”

One is left to wonder what it will take to destroy the economic (and ideological) incentives of Facebook to manipulate, censor, and control the flow of information.

Photo: coffeekai/iStock Editorial/Getty Images Plus