Potentially Passive Aggressive
Harmfully misleading.
My feelings were hurt so this post is wrong.
> This doesn't make sense to me. If there's literally a tag on the post informing people that it's fake, why does it matter how many people see it?

Facebook tried it, and it made the problem worse: the flagged posts stuck out more, and therefore more people read them.
Best practice would be to simply make them disappear.
I remember a while back Facebook would automatically display a Politifact article or whatever beneath some links. But that stopped incredibly quickly.
Fuck them for using the Bernie tweet as an example though.
This doesn't make sense to me. If there's literally a tag on the post informing people that it's fake, why does it matter how many people see it?
> I just assumed people who got their news from Facebook just read article headlines. I didn't realize people would actually click the links themselves.

The reason Facebook stopped doing it is that it drove up traffic to those sites. Highlighting a link in bright colors, even under a fake-news warning, brings attention to it, and therefore more clicks. If the same technique is applied to Twitter, the same thing will happen.
The problem is that highlighting these things actually gives the fake content more visibility and helps set people's opinions about the content in stone. This approach didn't work when Facebook tried it, and it won't work here; people know a lot of what they see is misleading but don't want to put in the time to understand why. The only way to stop the spread of fake and misleading information is to delete it entirely, and if they don't plan to do that at some point in the process, it's doomed to fail. Ideally, content that amasses enough reports from verified accounts with a good reputation to warrant this tag should be automatically deleted and only brought back after moderators have approved it.
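Roughly the shape of what I mean, as a toy sketch; every name, weight, and threshold here is invented, not any platform's actual system:

```python
# Toy sketch: reputation-weighted reports auto-hide a post until a
# moderator reviews it. All weights and thresholds below are made up.
from dataclasses import dataclass, field

AUTO_HIDE_THRESHOLD = 50.0  # arbitrary score at which a post gets pulled

@dataclass
class Reporter:
    verified: bool
    reputation: float  # 0.0 (spammy) .. 1.0 (reliable), assumed tracked by the platform

@dataclass
class Post:
    text: str
    hidden: bool = False
    report_score: float = 0.0
    reporter_ids: set = field(default_factory=set)

def report(post: Post, reporter_id: str, reporter: Reporter) -> None:
    """Weight each report by the reporter's standing; ignore duplicates."""
    if reporter_id in post.reporter_ids:
        return
    post.reporter_ids.add(reporter_id)
    # Verified accounts in good standing count for far more than random
    # reports, which blunts report-spam brigades.
    weight = (5.0 if reporter.verified else 1.0) * reporter.reputation
    post.report_score += weight
    if post.report_score >= AUTO_HIDE_THRESHOLD:
        post.hidden = True  # pulled until a moderator approves it

def moderator_approve(post: Post) -> None:
    """A human reviewer restores the post and clears its score."""
    post.hidden = False
    post.report_score = 0.0
```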
What do y'all want? These companies obviously don't have perfect fact-checking or video AI yet, and even their 10,000+ contractors don't get through all their complaints quickly, so should they do nothing, like Facebook and YouTube?

I'd rather they try to get something like this going so that, while they review info, the conspiracy assholes in the GOP have orange warning labels all over their pages. I'm sure it won't be perfectly fair and will have some abuses, but we don't know the system yet. It may be set up so that verified fact-checkers get a portal linking them to controversial tweets, rather than tweets getting flagged just because random users chose to spam reports. We don't know how it will be when it finally rolls out.
The other part of this is that being flagged for misinformation should also reduce the organic reach of the post, so things like RTs, etc., would not be surfaced in people's feeds.
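Conceptually something like this; the field names and the penalty multiplier are completely made up, just to show the shape of it:

```python
# Toy sketch: flagged posts keep only a fraction of their ranking score,
# and RTs of flagged posts are dropped from the feed entirely.
from dataclasses import dataclass

FLAG_PENALTY = 0.1  # assumption: a flagged post keeps 10% of its reach

@dataclass
class Tweet:
    text: str
    engagement: float      # whatever base ranking score the feed uses
    flagged: bool = False  # carries the "misleading" tag
    is_retweet: bool = False

def feed_score(t: Tweet) -> float:
    return t.engagement * (FLAG_PENALTY if t.flagged else 1.0)

def build_feed(candidates: list[Tweet]) -> list[Tweet]:
    # Never surface RTs of flagged posts; rank everything else by score.
    visible = [t for t in candidates if not (t.flagged and t.is_retweet)]
    return sorted(visible, key=feed_score, reverse=True)
```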