Why Using Facebook Should Require a Media Literacy Test


We don’t let people drive motor vehicles until they’ve completed driver training and passed an exam, for a very good reason: vehicles are dangerous to drivers, passengers, and pedestrians. Social networks, and the misleading and harmful content they spread, are also dangerous to society, so some media literacy – and testing – should be a requirement for using them.

Social media companies like Facebook and Twitter would surely oppose such an idea, calling it onerous and extreme. But they deliberately underestimate the enormity of the threat that disinformation poses to democratic societies.

The Capitol riot gave us a glimpse of the kind of America disinformation helped create – and illustrates why it is so dangerous. On January 6, the nation witnessed an unprecedented attack on our seat of government, one that left seven people dead and lawmakers fearing for their lives. The rioters who caused this chaos planned their march on the Capitol on social media, including in Facebook groups, and were spurred to violent action by months of misinformation and conspiracy theories about the presidential election, which they believed had been “stolen” from Donald Trump.

While major social networks have made significant investments in combating misinformation, it may be impossible to remove all, or even most, of it. That’s why it’s time to shift some of the focus away from efforts to curb misinformation and its spread, and toward giving people the tools to recognize and reject it.

Media literacy should certainly be taught in schools, but this kind of training should also be available where people actually encounter misinformation: on social media. Large social networks that distribute news and information should require users to take a short media literacy course, followed by a quiz, before logging in. If necessary, social networks should be compelled to do so by force of law.

Moderation is hard

So far, we’ve relied on the major social networks to protect their users from misinformation. They use AI to locate misleading content and then remove it, label it, or reduce its spread. The law even protects social networks from lawsuits over the content moderation decisions they make.

But relying on social media to control misinformation is clearly not enough.

First, the tech companies that run social media often have a financial incentive to let misinformation linger. The content delivery algorithms they use favor hyper-partisan, often half-true or outright false content, because it consistently gets the most engagement in the form of likes, shares, and comments. It creates ad views. It’s good for business.

Second, major social networks are forced into an endless process of expanding censorship as propagandists and conspiracy theorists find ever more ways to spread fake content. Facebook and other companies (like Parler) have learned that a purist approach to free speech – allowing any speech that is not illegal under US law – is simply not practical in digital spaces. Censoring certain types of content is responsible and good. In its latest capitulation, Facebook announced on Monday that it would ban any post promoting debunked theories about vaccines (including those for COVID-19), such as the claim that they cause autism. But it is impossible for even well-meaning censors to keep up with the endless ingenuity of disinformation purveyors.

There are logistical and technical reasons for this. Facebook relies on 15,000 content moderators (most of them contractors) to police the posts of its 2.7 billion users worldwide. It is increasingly turning to AI models to find and moderate harmful or false posts, but the company itself admits that those models cannot understand certain kinds of harmful speech, such as that embedded in memes or videos.

That’s why it may be better to help consumers of social content detect and reject misinformation and refrain from spreading it.

“I’ve recommended that platforms do media literacy training directly, on their sites,” says Paul Barrett, a disinformation and content moderation researcher and deputy director of New York University’s (NYU) Stern Center for Business and Human Rights. “There is also the question of whether there should be a media literacy button on the site, looking you in the face, so that a user can access media literacy information at any time.”

A quick introduction

Social media users, young and old, desperately need tools to recognize both misinformation (false content spread innocently, out of ignorance of the facts) and disinformation (false content knowingly spread for political or financial reasons), including the skills to find out who created a piece of content and to analyze why.

These are important elements of media literacy, which also involves the ability to cross-check information against additional sources, assess the credibility of authors and sources, recognize the presence or absence of rigorous journalistic standards, and create and/or share media in a way that reflects its credibility, according to the United Nations Educational, Scientific and Cultural Organization (UNESCO).

Assembling a toolkit of basic media literacy skills – perhaps labeled explicitly as “media literacy” tools – and presenting it directly on social media sites serves two purposes. It arms social media users with practical tools for analyzing what they see, and it warns them that they are likely to encounter biased or misleading information on the other end of the connection.

This is important because social networks don’t just make misleading or fake content available; they deliver it in a way that can disarm a user’s bullshit detector. The algorithms used by Facebook and YouTube favor content likely to elicit an emotional, often partisan, reaction from the user. And if a member of Party A comes across a report of a shameful act committed by a leader of Party B, they may believe it and then share it without realizing that the ultimate source of the information is Party A. Often, the creators of such content distort (or entirely discard) the truth to maximize its emotional or partisan punch.

Such content performs extremely well on social media: a 2018 Massachusetts Institute of Technology study of Twitter content found that lies are 70% more likely to be retweeted than the truth, and that lies reach 1,500 people about six times faster than the truth does.

But media literacy training also works. The RAND Corporation reviewed the available research on the effectiveness of media literacy education and found ample evidence across numerous studies that subjects became less likely to fall for fake content after various kinds of media literacy training. Other organizations, including the American Academy of Pediatrics, the Centers for Disease Control and Prevention, and the European Commission, have reached similar conclusions and have strongly recommended media literacy training in schools.

Facebook has already taken steps to embrace media literacy. It has partnered with the Poynter Institute to develop media literacy training tools for children, millennials, and seniors. The company also donated $1 million to the News Literacy Project, which teaches students to scrutinize the origins of a story, make and critique news judgments, detect and dissect viral rumors, and recognize confirmation bias. Facebook also hosts a “media literacy library” on its site.

But all of this is voluntary. Requiring training and a quiz as a condition of access to the site is something else entirely. “Platforms would be very hesitant to do this because they fear turning away users and reducing engagement,” says NYU’s Barrett.

If social networks do not act voluntarily, they could be compelled to require media literacy training by a regulatory body such as the Federal Trade Commission. From a regulatory perspective, this might be easier to accomplish than asking Congress to mandate media literacy in public schools. It could also be a more targeted way to mitigate the real risks posed by Facebook than other proposals, such as breaking up the company or removing its shield against lawsuits stemming from user content.

Americans became broadly aware of misinformation when the Russians weaponized Facebook to interfere in the 2016 election. But while Robert Mueller’s report proved that Russians spread misinformation, the causal line between that disinformation and actual voting decisions remained unclear. For many Americans, January 6 made the threat disinformation poses to our democracy real.

As misinformation on social media directly causes more tangible damage, it will become even clearer that people need help sharpening their bullshit detectors before they go online.
