Twitter, one of the most popular social media platforms, announced on Friday that it will place warning labels on posts in order to restrict the spread of misinformation about the election and the coronavirus.
This is the latest step Twitter is taking to prevent the spread of deliberately false information as voters cast their ballots amid a pandemic. Like Facebook and other social media platforms, Twitter has announced a series of new rules to stop continuing hoaxes and false claims whose main purpose is to deceive voters.
How will Twitter restrict the spread of misinformation?
The popular social media company, which has millions of users, will take aggressive steps to limit the impact of posts it labels as false or misleading. Most importantly, tweets containing false claims from American political figures, candidates, parties and other high-profile U.S. users will be hidden from users and placed behind warning screens. Users will have no access to these tweets unless they click past the warnings to read them.
“Twitter will hide misleading tweets behind warning screens, similar to the ones it has used for posts that break its rules but are left up because of public interest,” the company said.
The warning will read: “Some or all of the content shared in this Tweet is disputed and may be misleading.” That label will also appear prominently above the tweet once users click past the warning screen.
This will make it far more difficult for such tweets to spread, since users will be unable to share them freely. Users won’t be allowed to reply to them or retweet them without adding a comment. And the tweets will not be recommended by Twitter’s algorithms, meaning users won’t see them in their main timelines.
The measures are meant to further reduce the visibility of misleading information and to encourage people to reconsider whether they want to amplify these tweets, Twitter executives Vijaya Gadde and Kayvon Beykpour wrote in a blog post on Friday.
Will Twitter succeed in restricting the spread of misinformation?
Despite its continuing efforts to stop misinformation, Twitter has in most cases declined to remove posts outright. It previously used these kinds of warnings on tweets that violated its rules but which it determined should remain online because of public interest, including abusive posts from political leaders and harmful tweets about the coronavirus.
The expanded use of warning labels is likely to have a notable and lasting impact on one of Twitter’s most prolific and controversial users: President Trump. He has continued to make false claims, about issues such as mail-in voting, that Twitter has labeled as deceptive. Under the new policy, more of his posts could be hidden behind warning labels and thus have their visibility reduced.
As the election is fast approaching (less than a month to go), social media companies are increasingly alarmed at the potential that their platforms will be used to manipulate or intimidate voters or to undermine the legitimacy of the election.
Facebook has vowed to enforce more stringent rules on voter intimidation ahead of the election.
The two most popular social media platforms, Twitter and Facebook, have fought hard to curtail the viral spread of misinformation and hoaxes, which often spread faster than fact checks and corrections can catch up.
There are some boundaries Twitter says users cannot cross from now on. On Friday, the company clarified that it would take down posts that try to interfere with the election process or its aftermath, including calls for “violent action.”
Twitter’s new rules aim to prevent confusion around the 2020 vote. The company gave more details on plans to label posts that claim victory before election results have been officially announced. It will direct users to official information about the election and will only consider a race “authoritatively called” if it has been announced by state election officials or in independent, public projections from at least two “authoritative, national news outlets.”
Earlier this week, Facebook made the announcement that it would also crack down on voter intimidation, including removing posts that use “militarized language” in urging people to monitor polling places. Concerns have been growing over possible confrontations after Donald Trump Jr., the president’s son, posted a video on social media calling for people to join an “Army for Trump.” Facebook also plans to label premature claims of victory.
What else will Twitter do to restrict the spread of misinformation?
Other measures Twitter announced on Friday encourage users to think before posting. If a user tries to retweet something labeled as misinformation, she will be shown an alert directing her to “credible information about the topic” before she can continue.
The changes to how misleading information is displayed and shared — whether from high-profile figures or everyday users — go into effect next week and will be permanent.
Some additional restrictions will take effect on Oct. 20 and extend at least until the end of election week.
During that time, Twitter will temporarily prompt users to “quote tweet” — adding their own commentary — rather than simply retweet a post. It will also stop recommending tweets from people whom users do not already follow, a step meant to slow viral amplification.
And it will make changes to the trends it recommends to U.S. users, adding a description to explain why a given term is trending.
The company said these measures were put in place to help people more quickly gain an informed understanding of the high-volume public conversation in the U.S. and to restrict the spread of false information.