
Jeremy

Member
Oct 25, 2017
6,639
The right simply doesn't use the same epistemological standards to judge claims that the left does, so acting like pointing out facts from experts is going to matter is pointlessly optimistic.
 

WillyFive

Avenger
Oct 25, 2017
6,989
Facebook tried it and made the problem worse: the flagged fakes stuck out more, so more people read them.

Best practice would be to simply make them disappear.
 

TrueSloth

Member
Oct 27, 2017
6,070
On one hand, if implemented well, it can help people better understand and criticize the news they consume. On the other hand, it's going to become politicized: people will insist their news sources are being silenced because they're accurate and Twitter just doesn't like them.
 
Oct 25, 2017
1,355
It being powered by users almost guarantees that this will be a trash fire. I can't wait to be warned about everything, including equal rights, video game opinions, Star Wars trivia, and "good" anime.
 
OP

BAD

Member
Oct 25, 2017
9,569
USA
What do y'all want? These companies obviously don't have perfect fact-checking or video AI yet, and even their 10,000+ contractors can't get through all the complaints quickly, so should they do nothing, like Facebook and YouTube?

I'd rather they try to get something like this going so that, while they review info, the conspiracy assholes in the GOP have Orange all over their pages. I'm sure it won't be perfectly fair and will see some abuse, but we don't know the system yet. It may be set up so that verified fact checkers get a portal and are linked to controversial tweets before those get flagged just because random users chose to spam reports. We don't know how it will work when it finally rolls out.
 

Conkerkid11

Avenger
Oct 25, 2017
13,990
I remember a while back Facebook would automatically display a Politifact article or whatever beneath some links. But that stopped incredibly quickly.

Fuck them for using the Bernie tweet as an example though.

Facebook tried it and made the problem worse: the flagged fakes stuck out more, so more people read them.

Best practice would be to simply make them disappear.
This doesn't make sense to me. If there's literally a tag on the post informing people that it's fake, why does it matter how many people see it?
 

WillyFive

Avenger
Oct 25, 2017
6,989
I remember a while back Facebook would automatically display a Politifact article or whatever beneath some links. But that stopped incredibly quickly.

Fuck them for using the Bernie tweet as an example though.


This doesn't make sense to me. If there's literally a tag on the post informing people that it's fake, why does it matter how many people see it?

The reason Facebook stopped doing it is because it drove up traffic to those sites. Highlighting a link in bright colors, even under the warning of fake news, brings attention to it, and therefore more clicks. If the same technique is applied to Twitter, the same thing will happen.
 

Tawpgun

Banned
Oct 25, 2017
9,861
Meh I'm down to try it as trial for a month or two.

I don't think it would work well but I'm curious.
 

Conkerkid11

Avenger
Oct 25, 2017
13,990
The reason Facebook stopped doing it is because it drove up traffic to those sites. Highlighting a link in bright colors, even under the warning of fake news, brings attention to it, and therefore more clicks. If the same technique is applied to Twitter, the same thing will happen.
I assumed people who got their news from Facebook just read the article headlines, though. I didn't realize people would actually click the links themselves.
 

Dali

Member
Oct 27, 2017
6,184
On some level I'm convinced the people that eat the shit they are fed by pundits and alt right talking heads know it's bullshit but just don't care. So they'll just ignore warnings and write them off as "fake news." This is a good idea for more insidious lies like the Bernie one in the example.
 

Dyle

One Winged Slayer
The Fallen
Oct 25, 2017
30,126
What do y'all want? These companies obviously don't have perfect fact-checking or video AI yet, and even their 10,000+ contractors can't get through all the complaints quickly, so should they do nothing, like Facebook and YouTube?

I'd rather they try to get something like this going so that, while they review info, the conspiracy assholes in the GOP have Orange all over their pages. I'm sure it won't be perfectly fair and will see some abuse, but we don't know the system yet. It may be set up so that verified fact checkers get a portal and are linked to controversial tweets before those get flagged just because random users chose to spam reports. We don't know how it will work when it finally rolls out.
The problem is that highlighting these things as they are actually gives the fake content more visibility and helps set people's opinions about the content in stone. This approach didn't work when Facebook tried it and it won't work here; people know a lot of what they see is misleading but don't want to put in the time to understand why. The only way to stop the spread of fake and misleading information is to delete it entirely, and if they don't plan to do that at some point in the process, it's doomed to fail. Ideally, content that amasses enough reports from verified accounts with a good reputation to warrant this tag should be automatically deleted and only brought back after moderators have approved it.
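Something like the rule above could work as a weighted report threshold. A rough sketch, with entirely made-up names, thresholds, and reputation scale:

```python
# Hypothetical sketch: auto-hide a post once enough reports arrive from
# reputable verified accounts, pending moderator review. The threshold
# and the 0-1 reputation scale are invented for illustration.

REPORT_THRESHOLD = 4.0         # total weighted reports needed to hide
MIN_REPORTER_REPUTATION = 0.7  # ignore reports from low-reputation accounts

def should_auto_hide(reports):
    """reports: list of (is_verified, reputation) tuples, one per report."""
    weight = sum(
        rep for verified, rep in reports
        if verified and rep >= MIN_REPORTER_REPUTATION
    )
    return weight >= REPORT_THRESHOLD

# Five reputable verified reports outweigh one unverified report.
print(should_auto_hide([(True, 0.9)] * 5 + [(False, 0.3)]))  # True
```

The point of weighting by reputation is that spam reports from random throwaway accounts never reach the threshold on their own.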
 
Oct 27, 2017
7,741
The platform should use some objective measure of expertise or skill as a weighting for expert opinion on a topic, and dole out warnings about the accuracy of info when a verified expert / thought leader on a particular topic raises a red flag.

Connections, rankings, followers, etc. on a network like LinkedIn could be useful for determining expert / thought leader status, as would be the frequency of citations of literature authored by an expert on a topic.

It's almost like we are unwilling to take what we've learned from peer review and the scientific method and apply it to social media.
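A minimal sketch of that weighting idea, scoring each flag by the flagger's estimated expertise (all inputs, names, and thresholds here are hypothetical, not any real platform's formula):

```python
import math

# Hypothetical expertise score: log-scaled citation count plus a 0-1
# network rank (e.g. derived from professional-network standing).
def expertise_weight(citations, network_rank):
    return math.log1p(citations) + network_rank

def warning_score(flags):
    """flags: list of (citations, network_rank) per expert raising a flag."""
    return sum(expertise_weight(c, r) for c, r in flags)

# Two experts flag a post; show a warning once their combined weight
# clears a made-up threshold.
flags = [(120, 0.8), (15, 0.4)]
print(warning_score(flags) > 5.0)  # True
```

The log scaling mirrors how citation counts behave in peer review: the difference between 0 and 10 citations matters far more than the difference between 1,000 and 1,010.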
 

finalflame

Product Management
Banned
Oct 27, 2017
8,538
I remember a while back Facebook would automatically display a Politifact article or whatever beneath some links. But that stopped incredibly quickly.

Fuck them for using the Bernie tweet as an example though.


This doesn't make sense to me. If there's literally a tag on the post informing people that it's fake, why does it matter how many people see it?
The other part of this is that being flagged for misinformation should reduce the organic reach of the post, so things like RTs would not be surfaced on people's feeds.
 

WillyFive

Avenger
Oct 25, 2017
6,989
I assumed people who got their news from Facebook just read the article headlines, though. I didn't realize people would actually click the links themselves.

Article headlines offer no information. Most of it is clickbait, especially the stuff aimed at the type of people who would believe what fake news sites sell.
 

maximumzero

Member
Oct 25, 2017
23,009
New Orleans, LA
These being community-led means you'll have folks reporting actual fake news and the Trump cult reporting anything that goes against Dear Leader.

Basically everything's gonna be labeled fake news outside of cat videos in the end.
 

samoyed

Banned
Oct 26, 2017
15,191
Yes let's crowdsource fact checking because as we all know social media users are great at policing themselves.
 

dude

Member
Oct 25, 2017
4,705
Tel Aviv
And who gets to decide what is labeled misleading and, just as important, what isn't?
I don't get the people who asked for this. Why give some corporation the power to literally decide what is fact?