Facebook is up there with Rupert Murdoch’s News Corp when it comes to sowing the seeds of doubt about climate science, conducting just six fact checks a month on billions of posts per day. Elizabeth Minter reports.
“Climate change is real. The science is unambiguous and the need to act grows more urgent by the day.” So proclaimed Facebook in a September 2020 blog post announcing its plan to step up the fight against climate change.
Yet Facebook’s actions belie that commitment, according to a global report by Stop Funding Heat, released this week.
Facebook conducts just six climate fact-checks per month, according to publicly available records, even while Facebook users share 4.75 billion items each day.
Facebook’s public policies contain no mention of climate misinformation. The misinformation policies Facebook does have are not effectively enforced, with loopholes allowing climate denial content to go viral, the report finds.
And while charities and campaigns have to jump through hoops to post ads such as “Fracking is ruining our community” and “How can we better tackle climate change?”, corporate ads, even those that indulge in greenwashing, have free rein.
The report On the Back Burner: How Facebook’s Inaction on Misinformation Fuels the Global Climate Crisis evaluated more than 100 academic studies, empirical reports and journalistic investigations. It found that Facebook is failing to tackle climate denial on its platform and prevent the spread of climate misinformation.
The report’s lead researcher and writer, Sean Buchan, said: “Facebook is playing the fiddle while the planet burns.”
Climate change not an ‘impending harm’
Facebook’s Community Standards note that: “Reducing the spread of false news on Facebook is a responsibility that we take seriously.”
Facebook has in fact acted to remove false news, banning all references to Holocaust denial and prohibiting “misrepresentation” regarding voter registration or voting, for example. In April 2020, CEO Mark Zuckerberg personally extended the “harmful” definition to cover public health misinformation.
But there are no signs of this being extended to cover climate misinformation, the report notes.
Last September, Nick Clegg, Facebook’s Vice President of Global Affairs, speaking in the context of climate change misinformation, said that “we only remove stuff where there is an obvious link to immediate and impending real-world harm” – the clear implication being that Facebook does not consider climate change an “impending harm”.
Muddying the waters further, when US senators asked Facebook last year whether “the spread of false information on the climate crisis [is] included in Facebook’s understand[ing] of false news”, Facebook responded definitively: “Yes.” (False news is Facebook’s general term for misinformation.)
However, as of last month, climate change was not mentioned at all in the Community Standards, nor in any of the 2020 Community Standards reports.
Open to corporate misinformation
Facebook’s advertising policies specifically mention discrimination, health misinformation, political misinformation, and even a specific clause on vaccines, but again there is no mention of climate change. The report argues that this leaves ads open to corporate misinformation.
“While ads relating to social or political issues, including environmental politics, require pre-approval by Facebook and disclaimers in the ad, Facebook’s definition excludes items of a commercial nature, even if they relate specifically to renewable or sustainable energy or fossil fuels.”
Facebook’s September 2020 blog Stepping Up The Fight Against Climate Change mentions two specific initiatives:
- The Climate Science Information Center, a portal-like page that “connect[s] people to factual and up-to-date climate information”; and
- A third-party fact-checking program that actively works to reduce “false news” on the platform. Some “70 independent fact checking organisations covering over 60 languages … can and do check climate science content.”
However, the report says the first initiative, the CSIC, has only been rolled out in 16 countries and fails to use best practice to debunk myths.
As for the fact-checking program, Facebook’s full list of fact-checking partners includes just one climate science specialist.
“General fact-checks of climate content likely to appear on Facebook occur at a frequency of around 1.5 per week. …
Ultimately, we cannot know the full extent of climate fact-checking on the platform unless Facebook shares this information.”
The report notes that there are doubts that a fact-checking program is capable of keeping up with misinformation campaigns.
A recent essay in Technology Review outlined two major arguments: first, Facebook has no internal incentives to ever combat misinformation appropriately. But even if it did, its misinformation strategy is “tenuous at best”.
The essayist Karen Hao writes:
“Misinformation and hate speech constantly evolve. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way.
“An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out.
“Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post… making their message illegible to the AI while still obvious to a human.”
The report notes that climate misinformation is set to rise this year, with the pivotal 26th Conference of Parties climate summit in Glasgow.
With vested interests no doubt seeking to stop climate action and undermine science by any means necessary, including using social platforms, it is critical for Facebook to step up its game if its public commitments are to mean anything.