https://sputnikglobe.com/20220328/questions-still-unanswered-about-possible-biased-moderation-on-major-social-media-platforms-1094248386.html
Questions Still Unanswered About Possible Biased Moderation on Major Social Media Platforms
Sputnik International
Sputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
2022
03:14 GMT 28.03.2022 (Updated: 12:56 GMT 14.04.2023) Algorithm-ranked newsfeeds were introduced as a way of helping users avoid missing out on information they consider important amid the endless stream of news. At the same time, as tech giants have taken control of what users see first, they have faced accusations of biased moderation.
A heated debate over the so-called shadowban has been raging in the aftermath of the tumultuous presidential elections of 2016 and 2020, when Republicans claimed that social media was silencing conservative voices.
The open moderation of different categories of content, from disinformation to hate speech, is understandable, although these policies have also faced criticism. Shadowbans, however, are difficult to track: they do not fall directly under platforms’ published moderation policies and protocols, and the underlying algorithms remain undisclosed.
The most common problems that shadowbanned users face are their username or hashtag not appearing in search suggestions, a drop in follower engagement, and likes or replies being blocked.
Social media networks acknowledge that they reduce the visibility of some accounts, usually those categorized as spammers or advertisers, but the question of whether such measures can be applied to “ideologically undesirable” content remains unanswered.
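As an illustration only, the kind of silent visibility reduction described above could be modeled as a penalty applied during feed ranking. The function, account names, engagement weights, and penalty value below are all hypothetical assumptions for the sketch, not any platform’s actual algorithm:

```python
# Hypothetical sketch: how silent visibility reduction ("shadowbanning")
# could work inside a feed-ranking step. All names and weights are
# illustrative assumptions, not any real platform's implementation.

def rank_feed(posts, flagged_accounts, penalty=0.1):
    """Score posts by engagement, silently downweighting flagged authors."""
    ranked = []
    for post in posts:
        score = post["likes"] + 2 * post["reposts"]
        if post["author"] in flagged_accounts:
            # The post is never removed -- it is merely pushed down,
            # which is what makes the intervention hard to detect.
            score *= penalty
        ranked.append((score, post["author"]))
    ranked.sort(reverse=True)
    return [author for _, author in ranked]

posts = [
    {"author": "alice", "likes": 100, "reposts": 10},
    {"author": "bob",   "likes": 90,  "reposts": 40},
]
# Without a flag, bob's higher engagement ranks him first...
print(rank_feed(posts, flagged_accounts=set()))    # ['bob', 'alice']
# ...with a silent flag, alice ranks first despite identical content.
print(rank_feed(posts, flagged_accounts={"bob"}))  # ['alice', 'bob']
```

The point of the sketch is that nothing visible changes for the flagged account itself, which is precisely why such interventions are hard to prove or disprove from the outside.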
Some have doubted the effectiveness of shadowbans, but the issue appears to have a significant impact in today’s aggressive information environment, where every second at the top of the news cycle is at a premium.
Possible biased algorithms or shadowbans appear to be insidious, as they change the very way information is perceived. Purported manipulation of subscriber counts, views, likes, reposts, and appearances in users’ newsfeeds can construct opinions based on faulty assessments, related, for example, to the perceived scale of support or opposition and various other social sentiments.
An analysis by Dr Robert Epstein, who also testified before the Senate on this issue some years ago, showed that Google searches “can take a 50/50 split among undecided voters and change it to a 90/10 split with no one knowing they have been manipulated.” Epstein called the effect of search engine manipulation “one of the most powerful forms of influence ever discovered in the behavioral sciences.”
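The shift Epstein describes can be illustrated with a toy probability model. The persuasion rate and the linear relationship below are invented parameters chosen so the example reproduces the quoted 50/50-to-90/10 figure; they are not Epstein’s measured values or methodology:

```python
# Toy model of the search-ranking effect Epstein describes. Assumes
# (hypothetically) that undecided voters mostly read top-ranked results
# and are swayed by whichever candidate the ranking favors. The
# persuasion_rate parameter is an invented illustration, not a
# measured quantity.

def undecided_split(favor_a_in_top_results, persuasion_rate=0.8):
    """Return (share for A, share for B) among undecided voters.

    favor_a_in_top_results: fraction of top results favorable to A.
    Starts from a 50/50 baseline and shifts persuaded voters toward
    whichever candidate dominates the top of the ranking.
    """
    baseline = 0.5
    shift = persuasion_rate * (favor_a_in_top_results - 0.5)
    share_a = baseline + shift
    return round(share_a, 2), round(1 - share_a, 2)

print(undecided_split(0.5))  # neutral ranking: (0.5, 0.5)
print(undecided_split(1.0))  # fully one-sided ranking: (0.9, 0.1)
```

Under these assumed numbers, a ranking that is entirely one-sided moves a 50/50 split to 90/10 without any voter seeing an overt advertisement, which is the crux of Epstein’s concern.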
The breadth of the voter samples reviewed has been questioned, with some media outlets noting that Facebook, Google, Twitter, and their peers “want their products to be used by everyone.” Nevertheless, the controversy reignited after Project Veritas published a 2019 video in which Google’s head of responsible innovation, Jen Gennai, discusses the corporation’s plans to adapt its algorithms in a way that would prevent then-President Donald Trump from winning the 2020 presidential election. The video was later removed from YouTube.
“Just imagine search results keep driving you to bias stories against your core values, will you start to question your core values? Will you change your mind? What if only nasty news were the search results on the person you actually like and think would be someone you would vote for?” said cybersecurity expert Gary Miliefsky.
Twitter has explained its policies by saying that it “does not shadowban accounts.” The company's moderation policy states: "We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still click through and see these Tweets if they so choose."
Ahead of the most recent US presidential election, Facebook CEO
Mark Zuckerberg said in an interview that the platform would be able to handle misinformation and attempts at “meddling” in the elections.
“If you’re saying something that is just wrong, we don’t take that down, but we stop it from spreading, generally. That’s a much more sensitive topic,” said Zuckerberg.
The vague formulation of what the company considers “just wrong” has left many users concerned. Letting the platforms decide what is “wrong” or “right” could lead to some users being treated differently or unfairly.
Among recent examples are not only Facebook’s decision to let its users spread hate and calls for violence against the Russian military, but also Twitter’s tolerance toward calls for murder by Senator Lindsey Graham, who tweeted that assassinating President Vladimir Putin would be a “great service” to Russia and the world.
The New York Times’ recent admission that emails from Hunter Biden’s “laptop from hell” were authentic also demonstrates that miscalculations can be made when deciding what is “wrong.” Meanwhile, Twitter’s blocking of the New York Post’s reports about Hunter Biden in 2020, in the midst of the presidential race, led to only sporadic coverage of the story, which might have affected the outcome of the election.
The accounts of some state institutions, for example, the Hungarian government and the Swiss Federal Communications Office, along with many mass media outlets, including Russia Today and Sputnik News Agency, are believed to have been subjected to shadowbans.
These possible manipulations have raised concerns amid growing uncertainty about major platforms’ political impartiality. As many non-extremist political and advocacy groups around the world have accused Twitter and Facebook of favoring certain sets of political and social beliefs, the platforms’ ability to secure a safe and open environment for discussion has been put in serious doubt.
So far, these reports have failed to provide any direct evidence of manipulation of what we encounter on social media, but it is clear that further study is necessary to confirm or refute these allegations.