
United States | Facebook changes the settings for accessing problematic content


(Washington) Should information flagged as false by fact-checkers be hidden on social media? In the United States, Facebook now leaves that choice to its users, a significant change officially intended to give less power to its algorithm, but one that could benefit conspiracy theorists, according to some specialists.

Until now, the algorithm demoted by default any content flagged by Facebook’s partner fact-checking organizations, including AFP, pushing it toward the bottom of users’ feeds.

But a new setting now allows users to make this choice themselves, potentially making such false or misleading content more visible.

The option lets users either “reduce further” the visibility of content deemed problematic, pushing it “even lower on the feed so that you can’t see them at all”, or “not reduce” it, with the opposite effect: such posts become easier to reach and more likely to be seen.

“We’re giving people on Facebook the ability to better control the algorithm that prioritizes content on their feed,” said a spokesperson for parent company Meta, adding that the change meets “the expectations of users who want to have the ability to decide what they see on our apps”.

Introduced last May, the feature was never specifically announced by Facebook, leaving American users to discover it for themselves in the settings.

The change comes amid a highly polarized political climate in the United States, where the moderation of content on social networks is a particularly sensitive subject.

Conservatives accuse the government of pressuring platforms to censor or remove content under the pretext of fact-checking.

On Tuesday, for example, a federal judge in Louisiana restricted meetings between senior administration officials or federal agencies and social networks on questions of content verification.

And disinformation researchers at respected institutions, such as Stanford University’s Internet Observatory, stand accused of promoting censorship, which they deny, and face lawsuits from conservative activists as well as a congressional commission of inquiry.

For many researchers, this new setting, introduced by Facebook about 18 months before the 2024 presidential election, raises fears of an explosion of problematic content on social networks.

“Reducing the visibility of content that fact-checkers consider problematic is central to Facebook’s strategy to fight misinformation,” David Rand, a professor at MIT, told AFP. “Allowing people to simply change that seems to me to put this program in great jeopardy.”

Meta, for its part, sought to be reassuring, noting that such content will still be labeled as having been identified as misleading or false by independent fact-checkers, and said it is considering offering the option in other countries.

“This is the result of work we have been doing in this area for a long time, and it will align the features available to Facebook users with those that already exist on Instagram,” said the group’s spokesperson.

The network also now lets users decide how often they are shown “low quality content”, such as clickbait or spam, and “sensitive content”, meaning violent or shocking posts.

Fact-checking organizations, which cannot possibly verify all content, are regularly attacked online by people disputing their assessments, even when the material in question is plainly false or incomplete.

According to Emma Llanso of the Center for Democracy and Technology, a user “lacking confidence in the role of verifiers will be inclined” to activate the new feature “to try to avoid seeing the verifications carried out”. Facebook should study the effects on “exposure to misinformation” before rolling the feature out elsewhere in the world, “ideally by sharing the results,” she said.
