Reality has had a tough year. When the president of the United States is denying that thousands of Americans died in a hurricane, journalists face an uphill battle.
That battle has been particularly fraught on social networks, where malicious actors have spent the past several years peddling hoaxes and sowing division.
But today I bring you at least one reason for optimism. “Trends in the Diffusion of Misinformation on Social Media,” a new study from authors at Stanford University and New York University, analyzed the performance of stories posted on fake news sites from January 2015 to July 2018. Here’s what they found (emphasis mine):
Interactions with these sites on both Facebook and Twitter rose steadily through the end of 2016. Interactions then fell sharply on Facebook while they continued to rise on Twitter, with the ratio of Facebook engagements to Twitter shares falling by approximately 60 percent. We see no similar pattern for other news, business, or culture sites, where interactions have been relatively stable over time and have followed similar trends on the two platforms both before and after the election.
For the study, authors Hunt Allcott, Matthew Gentzkow, and Chuan Yu assembled a list of 570 sites that had been identified as peddlers of false stories in previous research. They then measured engagements for a range of publishers — big mainstream ones, small mainstream ones, and niche business and culture sites — alongside the fake ones.
Here’s the key paragraph from the study’s findings:
The results show that interactions with the fake news sites in our database rose steadily on both Facebook and Twitter from early 2015 to the months just after the 2016 election. Interactions then declined by more than half on Facebook, while they continued to rise on Twitter. The ratio of Facebook engagements to Twitter shares was roughly steady at around 40:1 from the beginning of our period to late 2016, then fell to roughly 15:1 by the end of our sample period. In contrast, interactions with major news sites, small news sites, and business and culture sites have all remained relatively stable over time, and have followed similar trends on Facebook and Twitter both before and after the 2016 election. While this evidence is far from definitive, we see it as consistent with the view that the overall magnitude of the misinformation problem may have declined, at least temporarily, and that efforts by Facebook following the 2016 election to limit the diffusion of misinformation may have had a meaningful impact.
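The study's "approximately 60 percent" figure follows directly from the two ratios it reports. A quick check, using only the 40:1 and 15:1 values quoted above:

```python
# Facebook engagements per Twitter share, as reported by the study.
ratio_late_2016 = 40   # roughly 40:1 through late 2016
ratio_mid_2018 = 15    # roughly 15:1 by the end of the sample period

# Relative decline in the Facebook-to-Twitter engagement ratio.
decline = (ratio_late_2016 - ratio_mid_2018) / ratio_late_2016
print(f"Relative decline in the ratio: {decline:.1%}")  # prints "Relative decline in the ratio: 62.5%"
```

A 62.5 percent drop, which the authors round to "approximately 60 percent."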
So what are the caveats? The authors mention a few. One, new publishers of fake news pop up and disappear all the time. This study measures only the performance of longer-lasting sites — although, given their relative stability on the platform, they are likely some of the largest peddlers of misinformation. As the authors note, fake-news publishers often change their domain names to evade detection and further confuse people.
Here are a few more. The extent to which bad content is shared publicly is only one way to measure the health of a platform. Platforms also strike against misinformation preemptively by banning fake accounts as they are created, filtering hoaxes from search results and trends, and so on. The ratios identified in this study don’t take those into account. Moreover, sometimes people share bad content in order to debunk it — a quote-tweet saying “This is garbage,” for example. It does not appear that the authors took this kind of sharing into account, though I’m inquiring about it.
Finally, Facebook in particular has shrunk the reach of nearly all news sites, thanks to changes to the News Feed rolled out earlier this year. That shouldn’t affect how we view the ratios identified by the study, but it’s important to remember that most people are seeing less news in their feed, period.
Still, I appreciate the value of this study, and the work by the platforms that it represents. Often executives offer us only boilerplate statements about “making progress”; this study offers a look at what progress might look like.
It also offers a look at the magnitude of the problem ahead. On Twitter, false news stories get between 4 million and 6 million engagements a month, and have since the election, the authors found. On Facebook, fake-news engagement has fallen dramatically from 2016, when it hovered around 200 million — roughly the same as engagement on legitimate news stories. But fake news still gets 70 million engagements a month — more than enough to pollute the information ecosystem to the point of making it unreliable.
I’d still rather look for the silver lining here. Facebook demonetized publishers of fake news by preventing them from using its advertising tools. Partnerships with fact checkers allowed trusted sources to evaluate whether news stories were false, so Facebook could down-rank them accordingly. And it deprioritized news articles in favor of posts from friends and family — a mixed blessing, to be sure, but perhaps useful in the narrow case of discouraging hoaxes.
I’d love it if future studies examined the relative effectiveness of each of these approaches. I also think there’s an opportunity here for the platforms to share best practices with each other, and (in more limited ways) with the public.
In the meantime, I’m glad to see some progress that can be quantified. Here are the charts:
Ryan Gallagher has another Dragonfly scoop. Censored terms in Google’s “exploratory” search engine will include “human rights,” “student protest,” and “Nobel Prize” in Mandarin, he writes:
Google built a prototype of a censored search engine for China that links users’ searches to their personal phone numbers, thus making it easier for the Chinese government to monitor people’s queries, The Intercept can reveal.
The search engine, codenamed Dragonfly, was designed for Android devices, and would remove content deemed sensitive by China’s ruling Communist Party regime, such as information about political dissidents, free speech, democracy, human rights, and peaceful protest.
Jay Rosen interviewed Jack Dorsey, who rolls out a new metaphor for Twitter: a public park. I’m going to digest this for a bit before I have any comment. But there are lots of good things in here — it’s maybe the best Dorsey interview of the current cycle. He talks about the intricacies of editing tweets; about how the Black Twitter community began putting presence indicators in their display names; and lots more. But this is the most relevant snippet given the past few months’ discussion of Twitter’s self-concept:
DORSEY: Upon further reflection, we are being used more like what you would find in Washington Square Park. You walk into Washington Square Park and there’s a bunch of people who, when I walk in, there’s a bunch of people there who are not expecting me to walk in and aren’t expecting me to do the things that I intend to do and might see it out of the corner of their eye and might come over and listen or interact or whatnot. In that public square, there’s all these things that happen and some are amazing, and some are stupid, and some are silly, and some are really terrible. There’s a guy in the corner with a megaphone broadcasting his thoughts and then he recognizes you and he says, “Jay, get the hell over here. You’re a terrible person and I hate you,” and all these other things. And it’s completely directed at you.
And at that point, people recognize it and they tell him to stop, or the park stewards or police come over and say, “Here’s a warning and if you keep attacking this one person who doesn’t want it and is not even paying attention to you, then you’re out.” So that action right there was not neutrality, it was being impartial to the conduct and with an eye towards more of the collective, with an eye towards like, “We need to make Washington Square Park something that people actually want to be at and recognize that there’s going to be people who choose unhealthy behaviors and we’re going to at least demonstrate what is not healthy and what could be healthier.”
Vietnam is one of a growing number of countries demanding that tech companies store data about their users locally — in part so they can crack down on dissent, critics say. Vietnam is now formally asking Facebook to open an office in the country so that user data can be stored there. Facebook no-commented:
In July, seventeen U.S. lawmakers urged the chief executives of Facebook and Google to resist changes stipulated by the new law.
Last week, acting information minister Hung said Vietnam should promote home-grown social networks in order to compete with Google and Facebook and capture more of the social network market share in Vietnam, state media reported.
Davey Alba, who just wrote a long piece on Facebook and the Philippines, finds a post that says the execution of Sen. Antonio Trillanes IV, a high-profile critic of Philippine President Rodrigo Duterte, is “fun to think about.” Facebook says it doesn’t violate the community guidelines:
And so it remains — with 12,000 reactions, nearly 3,000 shares, and more than 1,200 comments. Here’s one: “Can we just shoot him now and explain later?” Here’s another: “Just cremate Trillanes alive.” (Facebook deleted a handful of comments after BuzzFeed News inquired about the post.)
Facebook has activated its Crisis Response page for Hurricane Florence in North Carolina.
“If there’s anything worse than an old-school restaurant being uncool,” writes Alex Vadukul, “it’s an old-school restaurant suddenly becoming hot.” Forlini’s, an old and basically unremarkable Italian restaurant in New York City, suddenly became hot thanks to Instagram:
Interest in Forlini’s undeniably accelerated last May after Vogue magazine hosted its lavish pre-Met Gala party there. Alexa Chung, Kate Bosworth and Hailey Baldwin attended, a D.J. played until morning, and the event had its own hashtag: “#spaghettiandMetballs.”
The Forlini’s selfie became a coveted social media accomplishment shortly after the spectacle. Recently, a Vogue writer named Brooke Bobb happened to be dining at Forlini’s, and she had some thoughts on the phenomenon. “It’s becoming a spot,” said Ms. Bobb, 31. “One of those places that has become Instagrammable. It’s not really about the food. It’s about looking cool on the couches. Getting a million likes from sitting in the booths and posing like models. That’s just what happens when something goes viral now.”
NFL star Odell Beckham Jr. is the star of “I Am More: OBJ,” a new documentary series coming to Facebook Watch.
The 16-episode series, which bows Friday, Sept. 14, is produced by Uninterrupted, the sports-media company founded by LeBron James and his business partner Maverick Carter. Subsequent episodes of “I Am More: OBJ” will hit on Fridays throughout the course of the NFL season on Facebook Watch and the show’s page at facebook.com/IAMMOREOBJ.
Twitch Hires Its First Head of Diversity and Inclusion
Four of Amazon-owned streaming service Twitch’s top executives are now women, Lisa Marie Segarra reports:
Rangachary and Weaver also bring the total number of women in Twitch’s C-Suite to four out of seven. It’s an anomaly in the male-dominated tech industry, especially for a company with an emphasis on gaming, an industry that is dominated by men.
Tech Giants Join Forces to Score AI Chips ($)
Facebook is among a group of tech giants involved in benchmarking the latest AI chips, Aaron Tilley reports. This is a very contentious process!
The companies involved in creating the new benchmark say they’re eager to make the test fair. Last May, Google, Baidu, Intel, Arm and a number of AI chip startups were among the first to join the project. Since then, The Information has learned, the U.S.-based tech giants investing the most in AI—Facebook, Microsoft and Amazon—have quietly joined the effort as unofficial participants, as has Nvidia, an important maker of chips for AI applications, according to multiple participants in the project.
The outcome of the project could help determine how investments are made in the artificial intelligence chip market, forecast to be worth more than $90 billion in 2025.
The problem with real news — and what we can do about it
Rob Wijnberg, who crowdfunded a Dutch news site called De Correspondent, says that the current news environment is “all about sensational, exceptional, negative, and current events,” which has badly warped our sense of reality. “It gives us a deeply skewed view of probability, history, progress, development, and relevance,” Wijnberg writes. He proposes that other organizations follow his lead:
At De Correspondent in the Netherlands, we try to tell precisely those stories that aren’t news, but news-worthy nevertheless. Or, as we often say, that reveal not the weather but the climate. Those stories are written by correspondents who don’t have a news-driven schedule to meet, and thus can take the time they need to develop an area of expertise and learn to recognize and describe the truly influential developments of our time. Our ultimate goal: to replace the sensational with the foundational and the recent with the relevant.
Let’s end the week on a heartwarming note. While the social world is known for its ruthless copying, the lower-stakes world of internet browser building is positively collegial. What else to make of the apparently longstanding tradition in which browser makers celebrate one another with cake?