Facebook Rolls Out Its Plan for Problematic Content: Remove, Reduce, Inform

Scammers, fake news creators, and purveyors of suggestive content will have a range of new Facebook features to contend with in the coming weeks. The company plans to remove contentious content from the site, reduce the reach of content that isn’t taken down, and inform audiences when they’ve encountered such material. We break down the action items below.

Holding Groups Accountable

As Facebook users and brands shift their conversations away from the news feed and into niche groups, the company is attempting to thwart the spread of offensive, divisive, and inaccurate content that may get shared – even privately. In a blog post, Facebook claims an unnamed technology allows it to “proactively detect many types of violating content posted in groups before anyone reports them and sometimes before few people, if any, even see them.”

Group administrators and moderators should know that they will now be held accountable for posts that violate the platform’s Community Standards. When member posts contain violations, Facebook will look at who approved the content for group visibility and could remove the entire group if it believes the admins have acted recklessly.

To help keep track of these violations, a new Group Quality feature for admins (similar to the Page Quality tab introduced earlier this year) will provide an overview of flagged content and false news found in the group. A group with multiple violations, or one that shares links to malicious or false news websites, will have its reach downgraded.

In addition to these new features, members will soon be able to remove all of their posts and comments from a group should they choose to leave it.

Penalizing Fringe Link Sharing

For years, fringe publishers have been using shocking, divisive, and partisan headlines to provoke engagement on Facebook. Now, articles from those sites will begin to see a decrease in reach as the platform stifles their influence on the news feed.

The new ranking signal, known as Click-Gap, “looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on news feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content.” In short, the algorithm will cross-check the performance of links both on and off Facebook to determine whether the hosting website is reputable.
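Facebook hasn’t published how Click-Gap is computed, but the idea described above – comparing a domain’s on-platform click performance with its standing in the wider web graph – can be sketched in a few lines. The function name, inputs, and example numbers below are purely illustrative assumptions, not Facebook’s implementation.

```python
def click_gap_score(facebook_clicks: int, total_facebook_clicks: int,
                    inbound_web_links: int, total_web_links: int) -> float:
    """Illustrative Click-Gap-style signal (hypothetical, not Facebook's code).

    Compares a domain's share of outbound clicks on the platform with its
    share of inbound links on the open web. A score well above 1.0 suggests
    the domain performs far better on the news feed than its off-platform
    authority would predict -- a large "click gap".
    """
    click_share = facebook_clicks / total_facebook_clicks
    link_share = inbound_web_links / total_web_links
    if link_share == 0:
        # No measurable presence in the web graph at all.
        return float("inf")
    return click_share / link_share

# Hypothetical example: a fringe domain drawing 2% of the platform's outbound
# clicks while holding only 0.01% of inbound web links scores around 200.
score = click_gap_score(20_000, 1_000_000, 100, 1_000_000)
```

A real system would presumably weight these shares, smooth them over time, and combine the score with many other ranking signals; the point here is only the on-platform vs. off-platform comparison.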

Moderating Risky Instagram Content

Even if your photo and video content doesn’t unequivocally violate Instagram’s Community Guidelines, that doesn’t mean it’s safe. Posts that the platform finds inappropriate (including sexually or violently suggestive content) will now be excluded from Explore and hashtag search pages.  

Bringing Verification to Messenger

As Messenger use increases, Facebook will begin displaying Verified Badges on conversations with brand pages that have earned the checkmark. The move aims to help users avoid scammers who use fake accounts to impersonate someone they are not.

Once those fake accounts are identified, users will be able to block them easily through an updated block option. A new list of settings will also let users control whether people “such as friends of your friends, people with your phone number or people who follow you on Instagram” can reach them via Messenger at all.