Mark Zuckerberg might think that fake news on Facebook didn’t sway the election, but Associate Professor Zeynep Tufekci (and many others) aren’t buying it. In a piece for the New York Times (where she is a regular contributor and a must-read), Tufekci writes:
He is also contradicting Facebook’s own research.
In 2010, researchers working with Facebook conducted an experiment on 61 million users in the United States right before the midterm elections. One group was shown a “go vote” message as a plain box, while another group saw the same message with a tiny addition: thumbnail pictures of their Facebook friends who had clicked on “I voted.” Using public voter rolls to compare the groups after the election, the researchers concluded that the second post had turned out hundreds of thousands of voters.
In 2012, Facebook researchers again secretly tweaked the newsfeed for an experiment: Some people were shown slightly more positive posts, while others were shown slightly more negative posts. Those shown more upbeat posts in turn posted significantly more of their own upbeat posts; those shown more downbeat posts responded in kind. Decades of other research concurs that people are influenced by their peers and social networks.
All of this renders preposterous Mr. Zuckerberg’s claim that Facebook, a major conduit for information in our society, has “no influence.”
Mike Caulfield, in a series of posts on his blog (which, again, are all must-reads), breaks down just how powerful fake news has become on the platform and how it can also skew Google search results (I’m sold; I’m moving to DuckDuckGo). He also shows how Facebook, in its very design, is built to keep you from leaving the platform, making it more likely that you will spread fake news and believe it.
His example is a piece from the fake news site Denver Guardian, comparing the number of shares it received with the number of shares of newsworthy pieces from legitimate news sources. The chart is from Caulfield’s post:
So what do we do? Facebook has banned ads from fake news sites, but that doesn’t address the sharing of fake news among users. There is word that a band of renegade Facebook employees may be taking up the cause, but that relies on the company to clean house, which I am personally skeptical of (given its “real name” policy and its refusal to deal with trolls, deciding instead to ban academics). Caulfield and Kin Lane are now working on a tool to help educate people about the spread of fake news, but short of building a tool, there are some concrete actions we can all take right now, and even share with our students.
Avoid websites that end in “lo” (e.g., Newslo). These sites specialize in taking a piece of accurate information and then packaging it with other false or misleading “facts.”
Watch out for websites that end in “.com.co” as they are often fake versions of real news sources.
Watch out if known/reputable news sites are not also reporting on the story. Sometimes lack of coverage is the result of corporate media bias and other factors, but there should typically be more than one source reporting on a topic or event.
Odd domain names generally signal odd and rarely truthful news.
A lack of author attribution may signify that a news story is suspect and requires verification, though not always.
Some news organizations also let bloggers post under the banner of particular news brands; however, these posts often do not go through the same editing process (e.g., BuzzFeed Community Posts, Kinja blogs, Forbes blogs).
Check the “About Us” tab on websites or look up the website on Snopes or Wikipedia for more information about the source.
If the story makes you REALLY ANGRY, it’s probably a good idea to keep reading about the topic via other sources to make sure the story wasn’t purposefully trying to make you angry (with potentially misleading or false information) in order to generate shares and ad revenue.
It’s always best to read multiple sources of information to get a variety of viewpoints and media frames. Some sources not specifically included in this list (although their practices at times may qualify them for addition), such as The Daily Kos, The Huffington Post, and Fox News, vacillate between providing legitimate, problematic, and/or hyperbolic news coverage, requiring readers and viewers to verify and contextualize information with other sources.
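As a purely illustrative sketch (not part of the original tips), the domain heuristics above — site names ending in “lo” and fake-clone domains ending in “.com.co” — could even be expressed as a tiny checker. The function name and suffix rules here are my own assumptions, not a vetted blocklist, and a heuristic this crude will produce false positives:

```python
from urllib.parse import urlparse

def looks_suspect(url: str) -> bool:
    """Rough, hypothetical sketch of the domain heuristics above.

    Flags a URL when its hostname ends in ".com.co" (often a fake
    clone of a real news source) or when the site-name label just
    before the TLD ends in "lo" (e.g., newslo).
    """
    host = (urlparse(url).hostname or "").lower()
    if host.endswith(".com.co"):
        return True
    labels = host.split(".")
    # labels[-2] is the registrable site name, e.g. "newslo" in newslo.com
    if len(labels) >= 2 and labels[-2].endswith("lo"):
        return True
    return False

print(looks_suspect("http://www.newslo.com/story"))     # flagged
print(looks_suspect("http://example.com.co/story"))     # flagged clone-style domain
print(looks_suspect("https://www.nytimes.com/section")) # not flagged
```

Note how blunt this is: a perfectly legitimate domain like buffalo.com would also be flagged, which is exactly why these tips call for human judgment and multiple sources rather than an automatic filter.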
There is also another list, from Fight With Facts, Not Rumors. It contains some really helpful reminders, like checking the date of the piece (a step I frequently skip, even when I click through to read the article) and checking sources, as we report on and give signal boosts to those issues that do, in fact, need a boost.
I’ll leave this post with a quote from Kin Lane:
As I prepare to dive in and support this work I wanted to remind folks that this is not exclusive to Facebook and Twitter, or just during this election. We suck at understanding history or considering the future when we adopt new technologies -- this is often intentional. We need to make sure we are this critical when any new technology comes along, and work hard to understand the historical motives and ideology behind the tech, as well as get better at exploring possible dystopian futures brought on by each technological tool we are unleashing in our personal and professional worlds.
May we all remember this.
What are you doing to help your students and social media circles understand and see through fake news?