Read Time: 8 mins
The real ‘fake news’: how to spot misinformation and disinformation online
by Andrea Bellemare, CBC News
July 04, 2019
For starters, let’s stop calling it ‘fake news’
So you think a story, photo or video you’ve seen online might be fake — or exaggerated, at least. Maybe you spotted a photo that’s generating outrage or ridicule, or a headline that seems too bizarre to be accurate.
But you’re not sure.
How do you know if what you’re seeing is real? How can you find out where it’s coming from?
This guide will give you some tips on how to evaluate what you’re reading and seeing, so you’ll be better equipped to decide whether to trust it.
First off, we’re going to avoid using the term “fake news.” The Digital, Culture, Media and Sport Committee of Parliament in the United Kingdom recommended against using “fake news” in favour of more specific terms:
“The term ‘fake news’ is bandied around with no clear idea of what it means, or agreed definition. The term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader. We recommend that the Government rejects the term ‘fake news,’ and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation.’”
For this guide, we’ll use the terms ‘misinformation’ and ‘disinformation’ instead.
The worst kind of disinformation might be incredibly hard to spot, but much of it isn’t, and you can easily equip yourself to be a more critical news consumer.
What is the difference between disinformation and misinformation?
The U.K. government has very useful definitions of both terms. Here, we’ve simplified those definitions to make them easier to understand.
Disinformation is the deliberate creation and/or sharing of false information in order to mislead.
Misinformation is the act of sharing information without realizing it’s wrong.
What does disinformation look like?
Kaleigh Rogers, CBC’s senior reporter covering disinformation, investigated claims made in a blog post about Justin Trudeau that circulated widely on social media. The post claimed Justin Trudeau’s government sent $465 million in foreign aid to Afghanistan, only to see it “disappear.”
Rogers found that the figure of $465 million is partly correct — the federal government announced that funding in 2016 — but that amount is actually just part of the total foreign aid Canada has sent to Afghanistan.
Other claims in the article were either misleading or wrong. A report cited to bolster the “disappearance” claim is actually a report about U.S. aid to Afghanistan that doesn’t mention Canada at all. Rogers also detailed the amount of funding former prime minister Stephen Harper set aside for Afghanistan and cited international criticism directed at Trudeau for not providing enough foreign aid.
The Canadian Anti-Hate Network also told Rogers that the group behind the post regularly circulates false or misleading stories online to spread anti-immigrant and anti-Muslim sentiment.
What does misinformation look like?
Radio-Canada’s disinformation reporter Jeff Yates was struck by how popular a story from CBC P.E.I. was on Facebook. The story was about a new law in the province that punishes drivers who illegally pass school buses by suspending their drivers’ licences for a period of time.
The story was picked up on social media and posted to many pages in the United States because people thought the law applied to their own communities. It was the most popular CBC News story on the social media platform in the past year (June 2018 – June 2019) and generated 5.8 million Facebook interactions — 37 times more interactions than there are people living in P.E.I.
Some people who posted the story outside of P.E.I. knew it didn’t apply to their region; others mistakenly believed it did. That made this a case of misinformation: people spread the story in the genuine, but wrong, belief that the law applied to them.
What kinds of misinformation and disinformation are out there?
The U.K. Parliament’s Digital, Culture, Media and Sport Committee suggested some useful definitions for the kinds of fake content you’re likely to see online:
• Fabricated content: completely false content.
• Manipulated content: content that includes distortions of genuine information or imagery — a headline, for example, that is made more sensationalist to serve as “clickbait.”
• Imposter content: material involving impersonation of genuine sources — by using the branding of an established news agency, for instance.
• Misleading content: information presented in a misleading way — by, for example, presenting comment as fact.
• False context of connection: factually accurate content that is shared with false contextual information — for example, a headline that does not reflect the content of an article.
• Satire and parody: humorous but false stories presented as if they are true. Although this isn’t usually categorized as fake news, it may unintentionally fool readers.
Let’s look at those categories in more detail:
Fabricated content
These are the stories, images or websites that are totally fake. They may come from obscure outlets or from social media accounts with few followers, and the websites themselves may try to appear legitimate.
Radio-Canada’s Jeff Yates discovered a group of websites posing as English-language, Quebec-based local newspapers. One of them, The Sherbrooke Times, looked like a genuine local outlet — but no such newspaper exists. The site’s office was listed in Toronto, and its articles were poor translations of articles taken from French-language Quebec media. The network of fake sites is actually based in Ukraine, and Yates discovered its goal was to generate ad revenue.
Recently, The Tyee debunked an ad circulating online that appeared to show NDP leader Jagmeet Singh standing in front of a $5.5 million mansion. The headline on the ad said, “Jagmeet Singh Shows off His New Mansion.” The photo of Singh was a real photo taken by a Reuters photographer. The house shown in the photo is a real mansion available for rent called the Villa Fiona; Jagmeet Singh doesn’t own it and it’s located in Los Angeles, not B.C.
Imposter content
An example of this type of content would be a story that appears to come from a reputable online news source. The story might have the correct branding and colours but still seem slightly ‘off’, or the headline might be something the real outlet would never publish. One giveaway is the URL: it may be subtly wrong, contain extra letters or numbers, or end in something other than the outlet’s usual .com or .ca.
A recent example is this fake version of The Washington Post, distributed both in print and as a website (democracyawakensinaction.org), with false stories about U.S. President Donald Trump departing the White House. (The URL of the website — my-washingtonpost.com — gave the game away.)
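The URL check described above can even be automated in a rough way: compare a link’s domain against a list of outlets you trust and flag near-misses. Here is a minimal sketch, assuming a small, hypothetical allow-list of legitimate domains (real tools use far larger lists and smarter matching):

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list for illustration only
KNOWN_DOMAINS = {"washingtonpost.com", "cbc.ca", "reuters.com"}

def extract_domain(url: str) -> str:
    """Pull the host out of a URL, dropping any 'www.' prefix."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def lookalike_check(url: str, threshold: float = 0.8):
    """Flag domains that closely resemble, but don't match, a known outlet."""
    domain = extract_domain(url)
    if domain in KNOWN_DOMAINS:
        return None  # exact match: the domain itself is not suspicious
    for known in KNOWN_DOMAINS:
        # Similarity ratio in [0, 1]; high but not 1.0 suggests spoofing
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return f"'{domain}' resembles '{known}' but is a different site"
    return None

print(lookalike_check("https://my-washingtonpost.com/article"))  # flags the lookalike
```

A real checker would also handle subdomains, homoglyphs (e.g. a Cyrillic “а” in place of a Latin “a”) and country-code variants, which simple string similarity misses.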
Misleading content
Online content can become misleading when an opinion piece is circulated online as objective reporting, when one element of a story is blown out of proportion to attract clicks, or when an entire story is presented by a special interest group as proving or disproving something — when it actually might not do anything of the sort.
Jeff Yates investigated a story that was the most popular piece of misleading news in Quebec for a time — and turned out to be something that originated in Antibes Juan-les-Pins, France. In Antibes, the mayor’s assistant sent a letter to parents of schoolchildren saying that any requests to remove or increase the number of pork dishes in school cafeterias for religious or personal reasons would be denied due to the principle of secularism.
The mayor’s assistant never mentioned the religion of those making the requests, but a far-right blog drew a link to Muslims. One of the blog’s writers wrote an open letter supporting the mayor of Antibes, saying he was right to refuse any “concession to Islam.”
This is an example of a fact (no change to pork dishes on school menus) that was altered to fit an agenda — in this case, anti-Muslim attitudes.
A similar open letter subsequently circulated on social media in Quebec, congratulating the mayor of Dorval. It even included a note the mayor’s secretary supposedly sent to parents — which actually was just the open letter from the right-wing French blog with the locations changed.
Here we see misleading content sliding into fabrication: the mayor of Dorval never said any such thing and none of the content reflects anything that actually happened in Dorval.
False context of connection
This commonly happens during a natural disaster — when, for example, photos circulate purporting to show a terrible flood, while the images themselves might not be from the actual event mentioned, the same location, the same year or even the same continent. The origins of these images are often easy to track through reverse-image searches — but they can fool a lot of people in the meantime.
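Reverse-image tools work by comparing compact “fingerprints” of images rather than the pixels themselves, so a recompressed or resized copy still matches the original. Here is a toy sketch of the average-hash idea behind such matching, using made-up 2×2 grayscale “images” (real tools operate on much larger images and databases):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
recompressed = [[12, 198], [28, 225]]   # same photo, slightly re-encoded
different = [[200, 10], [220, 30]]      # an unrelated image

print(hamming(average_hash(original), average_hash(recompressed)))  # 0 (a match)
print(hamming(average_hash(original), average_hash(different)))     # 4 (no match)
```

This is why a reverse-image search can trace a “flood photo” back to an event years earlier even when the recirculated copy has been cropped or recompressed.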
Kaleigh Rogers found that some CBC stories were being shared online as if they were new, such as a story from 2014 about an RCMP study that found hundreds of cases of police corruption. Some of the comments on the story posts mentioned Prime Minister Justin Trudeau, even though the study period was from 1995 to 2005 and Trudeau wasn’t elected prime minister until 2015.
Satire and parody
Stories written as satire or parody are sometimes passed around as if they’re true. The American satirical news outlet The Onion fools people regularly.
Jack Warner, the former vice president of FIFA, famously cited an Onion story in his defence when he was indicted on corruption charges in the U.S. in 2015. In video posted to his Facebook page, Warner said “all this thing has stemmed from the failed U.S. bid to host a World Cup” — a rumour the L.A. Times noted had been stoked by Russian President Vladimir Putin.
In the video, Warner holds up a copy of an Onion article titled, “FIFA Frantically Announces 2015 Summer World Cup In United States.”
“If FIFA is so bad, why is it the USA wants to keep the FIFA World Cup?” Warner asked.
(The U.S. is hosting the World Cup in 2026, along with Canada and Mexico.)
Deepfakes
A deepfake is video, audio or images that have been altered with artificial intelligence software to make it seem as if a real person said or did something they didn’t actually say or do. The term “deepfake” is a combination of the words “deep learning” (by A.I.) and “fake.”
One good example is a video of actor Bill Hader from an appearance on ‘Late Night with Conan O’Brien’ in 2005. During his conversation with O’Brien, Hader imitated actor Al Pacino. In a deepfake released this year, Pacino’s face appears on Hader’s body during the imitation.
Actor Jordan Peele and Buzzfeed famously made a deepfake video to demonstrate the dangers implicit in the technology. In it, former U.S. President Barack Obama appears to be speaking about the dangers of deepfakes; Peele provided Obama’s “voice” and video of Obama was matched to Peele’s “performance.”
There are several different methods for making deepfakes, some more complicated than others.
Some deepfakes are easy to spot because the people in the videos don’t look quite real (a phenomenon known as “the uncanny valley”), or look like they’re wearing masks that “slip” as they move around. The Daily Dot also notes that skin tones might change near the edge of a person’s face, or the person might have double chins or double eyebrows.
Another way to spot a deepfake — according to an American professor who makes them — is to watch the eyes; performers in deepfake videos sometimes don’t blink as often as real people.
“When a deepfake algorithm is trained on face images of a person, it’s dependent on the photos that are available on the internet that can be used as training data. Even for people who are photographed often, few images are available online showing their eyes closed,” wrote Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, State University of New York.
Lyu also said that, since he published his post about blinking, he’s seen videos that have fixed the blinking problem.
Because the technology is always improving, deepfakes will only get harder to detect. That’s why it’s so important to find out where a video came from, who made it and whether other versions show the same person doing the same thing. Context can help determine whether the video is real.
That notorious video of U.S. House Speaker Nancy Pelosi that made her appear drunk is not a deepfake. The video was slowed down — a standard editing technique — and Pelosi did give the speech shown in the video. So this was an altered video circulated with misleading information to make it appear to be something it was not. (Some people have dubbed such videos “shallowfakes.”)
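The slow-down trick behind such “shallowfakes” requires no AI at all: it just stretches the footage’s timing. A minimal sketch of the idea, treating a video clip as a simple list of frames (real editors interpolate frames and pitch-shift audio rather than crudely duplicating):

```python
def slow_down(frames, factor=2):
    """Naively 'slow' footage by repeating each frame `factor` times,
    stretching the clip's duration with no AI involved."""
    return [copy for frame in frames for copy in [frame] * factor]

# A 2-frame "clip" played at half speed becomes 4 frames:
print(slow_down(["frame1", "frame2"]))  # ['frame1', 'frame1', 'frame2', 'frame2']
```

Because the edit is so simple, comparing the clip against the original footage (or its original runtime) is usually enough to expose it.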