Facebook Asks Users: “Are You Concerned that Someone You Know Is Becoming an Extremist?”

The Silicon Valley thought police at Facebook are testing a new feature that asks users if they fear that their friends are becoming radicalized by content they’ve seen on the platform. On Thursday, multiple Facebook users reported receiving pop-up messages asking if they need “support” from the tech giant in escaping “violent extremist groups.”

Among the first to report the Orwellian new message, which Facebook says it is “testing,” was RedState editor Kira Davis, who tweeted, “Hey has anyone had this message pop up on their FB? My friend (who is not an ideologue but hosts lots of competing chatter) got this message twice. He’s very disturbed.”

Davis’s message was accompanied by a screenshot of one of the new Facebook messages, which read: “Are you concerned that someone you know is becoming an extremist? We care about preventing extremism on Facebook. Others in your situation have received confidential support.”

The frightening message — which sounds as if the tech giant is attempting to instill a Soviet-style informer culture in its users — is accompanied by a link that the recipient may follow to “get support.”

Twitter users then began reporting another Facebook message that claims: “You may have been exposed to harmful extremist content recently.” That disturbing notice was followed by the statement: “Violent groups try to manipulate your anger and disappointment. You can take action now to protect yourself and others.”


According to Facebook, the new messages are part of a test of a program meant to assist users who have been exposed to what Facebook terms “extremist content.”

In an e-mail statement, a Facebook spokesperson explained:

This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk. We are partnering with NGOs and academic experts in this space and hope to have more to share in the future.

The test is reportedly running only in the United States at present and is considered a pilot program for a broader global approach to preventing radicalization on the site.

Facebook said that the test was a part of its commitment to the Christchurch Call to Action, an agreement signed in 2019 in the aftermath of shootings at two mosques in Christchurch, New Zealand, which were live-streamed on Facebook. The Christchurch Call to Action is an attempt to keep so-called extremist content off the Internet.

The “Call to Action” was signed by dozens of governments, including New Zealand, France, the European Commission, and the United Kingdom. Among the tech-industry signers are Facebook, Amazon, Google, and Twitter. Initially, the Trump administration declined to sign the agreement on behalf of the United States, citing First Amendment concerns. Then, in May of this year, the Biden administration quietly signed on.

In addition to Facebook’s commitment to the Christchurch Call to Action, Democrats in the United States have been urging Facebook to address what they call “misinformation” on its platform. In the wake of the January 6 unrest at the Capitol, Democrats on the House Energy and Commerce Committee sent a letter to Facebook urging the tech giant to do more to address what they called “dangerous and divisive rhetoric” on the site.

“The Committee is deeply concerned about dangerous and divisive rhetoric thriving on Facebook’s platform and is considering legislation to address these issues. From conspiracy theorists peddling false information to extremist voices urging and organizing violence, Facebook has become a breeding ground for polarization and discord,” the committee wrote.

The letter was signed by Representatives Frank Pallone, Jr. (D-N.J.), Mike Doyle (D-Pa.), Diana DeGette (D-Colo.), and Jan Schakowsky (D-Ill.).

Conservatives argue that Facebook has no right to restrict content on its platform and no business defining what content should be deemed “extremist.” As an Internet platform, Facebook enjoys protections against liability under Section 230 of the Communications Decency Act. Under those protections, Facebook cannot be held liable for posts made by others on its site.

As soon as Internet platforms begin censoring information on their sites, conservatives argue, they are acting as publishers rather than neutral websites and no longer deserve that protection from liability.

Admittedly, Facebook and the other tech giants are in a tough spot. They face pressure from Democrats in America and from left-wing NGOs worldwide to limit content those groups find objectionable. But if Facebook does so, it risks rightly being called out for censorship by conservatives.

When faced with such a dilemma, it would be prudent for Facebook — at least in America — to err on the side of freedom.