Who should we blame when misinformation pops up on a social media platform like Facebook? The person reading the fake news, the one who originally posted it, or Facebook for allowing the post? Amid the pandemic and the upcoming 2020 US elections, here's how Facebook is trying to steer itself toward being a reliable social media platform.
Facebook's "dos and dont's" in removing a fake post
One solution that has arguably worked against COVID-19 myths is Facebook's fact-checking tool.
If you look around the platform nowadays, you'll notice fact-checking labels on posts. They let users identify whether what they have read, or are about to read, is legitimate information.
The Coronavirus Information Center, meanwhile, redirects Facebook users to a reliable WHO or government site so they can fact-check the content of a post shared online.
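To illustrate the idea, here is a minimal Python sketch of how a fact-check label might gate a post and point readers to an authoritative source. This is not Facebook's actual implementation, which isn't public; the Post fields, rating values, and the URL routing are assumptions for illustration only.

```python
# Minimal sketch, assuming a hypothetical fact-check rating on each post.
from dataclasses import dataclass
from typing import Optional

WHO_INFO_URL = "https://www.who.int/emergencies/diseases/novel-coronavirus-2019"

@dataclass
class Post:
    text: str
    fact_check_rating: Optional[str]  # e.g. "false", "partly_false", or None

def render_post(post: Post) -> str:
    """Prepend a warning and a link to an authoritative source when a
    third-party fact-checker has rated the post false or partly false."""
    if post.fact_check_rating in ("false", "partly_false"):
        return (f"[Rated {post.fact_check_rating} by fact-checkers. "
                f"See {WHO_INFO_URL}]\n{post.text}")
    return post.text

print(render_post(Post("5G towers spread COVID-19", "false")))
```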
WhatsApp sharing limit
WhatsApp, Facebook's messaging sibling, also created a way to curb misinformation.
In April, WhatsApp rolled out additional user-protection layers that flag messages that have been forwarded many times and limit how much further they can spread, one chat at a time.
Messenger also follows this scheme.
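As a rough sketch of how such a forward limit could work: once a message's forward count crosses a threshold, it can only be passed on to one chat at a time. The threshold and data model below are assumptions; the real protocol and its metadata are not public.

```python
# Hedged sketch of a forward-count limit, assuming a threshold of 5.
HIGHLY_FORWARDED_THRESHOLD = 5

class Message:
    def __init__(self, text: str, forward_count: int = 0):
        self.text = text
        self.forward_count = forward_count

def forward(message: Message, recipients: list) -> list:
    """Return the recipients the forward is actually delivered to."""
    if message.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        recipients = recipients[:1]  # highly forwarded: one chat at a time
    message.forward_count += 1
    return recipients

chain_letter = Message("Forward this to everyone!", forward_count=7)
print(forward(chain_letter, ["Alice", "Bob", "Carol"]))  # ['Alice']
```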
One thing they should have done
Though we're grateful that Facebook has been doing a lot for the platform, there's one thing it still isn't doing.
That is removing groups outright, according to ZDNet.
You see, Facebook only makes anti-vaccination and anti-quarantine groups, the ones that share much of the fake news, disappear from suggestions and the public feed.
The groups themselves reportedly are not being removed effectively; they simply become harder to find.
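The distinction matters: demoting a group only hides it from recommendations, while removing it takes it off the platform entirely. Here is a toy sketch of that difference, with made-up group names and a hypothetical "flagged" field:

```python
# Toy illustration of demotion vs. removal; the group data is invented.
groups = [
    {"name": "Local Gardening Tips", "flagged": False},
    {"name": "Anti-Quarantine Now", "flagged": True},
]

def suggested_groups(all_groups):
    # Demotion: flagged groups vanish from suggestions and the public
    # feed, but they still exist and members can keep posting in them.
    return [g for g in all_groups if not g["flagged"]]

def remove_flagged(all_groups):
    # Removal: flagged groups are deleted from the platform entirely.
    all_groups[:] = [g for g in all_groups if not g["flagged"]]

print(len(groups), "groups on the platform")       # 2
print(len(suggested_groups(groups)), "suggested")  # 1 (demoted, not gone)
remove_flagged(groups)
print(len(groups), "groups after removal")         # 1
```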
Could they have done something better?