Twitter on Tuesday introduced a new feature that lets users flag content that may contain misinformation, a problem that has grown significantly during the pandemic.
“We’re testing a feature for you to report Tweets that seem misleading – as you see them,” the social network said from its safety and security account. Starting Tuesday, some users in the United States, South Korea and Australia will see a button allowing them to select “it’s misleading” after clicking “report tweet.”
Users can then be more specific, flagging the misleading tweet as potentially containing misinformation about “health,” “politics” or “other.”
“We’re assessing if this is an effective approach so we’re starting small,” the San Francisco-based company said. “We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.”
Twitter, like Facebook and YouTube, frequently comes under fire from critics who say it does not do enough to stop the spread of misinformation. The platform, however, lacks the resources of its Silicon Valley neighbours, even as it tries to keep its policies consistent and stable. In March, Twitter began blocking users who had been warned five times about spreading false information about vaccines.
The network has said it hopes to eventually use a system that relies on both human and automated analysis to detect suspicious posts.
During the initial phase of the Covid-19 vaccine rollout, rumours spread widely, prompting President Biden to say that Facebook and other platforms were responsible for “killing” people by allowing false information about the shots to spread.