I distinctly remember being shown and told, in elementary and middle school, how people lie on the internet. We were taught that Wikipedia, unfamiliar websites, and other sources can be edited by people who have no obligation to tell the truth, whether that means leaving out information or fabricating something as silly as the Pacific Northwest Tree Octopus. Perhaps everyone needs that reminder again.
As social media has grown, though, falsified content has left individual websites and forums and gone mainstream, spreading across large platforms like Facebook, Twitter, and Reddit.
What is ‘Fake News’ anyway?
I’ve seen everything from a bit of truth being warped and exaggerated for a political end to surreal fabrications one might find on sites like Infowars. It can be upsetting to see these types of posts in the first place and even more disheartening to read the replies.
Social media makes it easy for one person to mistakenly spread false information to their friends and family, which increases the trust others may place in it when seeing it for the first time. It also invites personal investment in comment-section yelling wars and rabbit holes of increasingly extreme content. False content can even spread faster than verified content.
Disinformation through fake news sites has in recent years also made the jump from the national and international to the local, with sites filling the reporting gaps in news deserts with propagandistic content. Meanwhile, Facebook groups in which individuals share false content produced by organizations create different realities for different groups of people, depending on what information they are exposed to.
This is not to say that bias and misrepresentation are absent from so-called ‘mainstream’ sources; they are present, and there are whole outlets dedicated to calling them out. Without acknowledging those concerns, public distrust in large outlets, whether for genuine or manipulated reasons, only fuels the march of disinformation.
As mentioned in the “Spread of Fake News” chapter of Mobile and Social Media Journalism, Lee McIntyre defines the post-truth world we live in now as the “political subordination of reality”. I think this is a good analysis of where this state of existence can lead: an erosion of truth itself into something entirely utilitarian, used to advance a conflict and recruit to a political end.
So then, what is to be done?
The people who consume that kind of fake news content are the ones who can be shown otherwise. So what is the solution? How do you fix ‘fake’ news when many sources of fake news and their promoters frame themselves as a truthful alternative?
I believe media literacy efforts should be redoubled in schools and in public information initiatives to show people the signs (insertion of opinion, missing context, unfamiliar site, etc.) that a website may not be trustworthy. Those in teaching and mentoring positions should not assume media literacy just because someone is a frequent user of social media.
More people should be equipped to recognize media that is propagandistic in nature and to treat analytical works as what they are: arguments meant to prompt you to consider a point of view based on the sources they use and those sources’ validity, rather than news itself. Encourage people to scrutinize the sourcing of opinion content and to be generally skeptical of this type of article, as a lot of what passes as opinion online can easily be false.
Journalists and teachers should make efforts to speak with students in elementary, middle, and high school about what the news media is, what it does, and what it is not, in order to instill greater media literacy and a connection to legitimate reporting and what should go into it. It’s going to take a lot more than Snopes or PolitiFact articles that might not even be read to convince someone an article is wrong. People need to understand more about how media is written, how it functions, and what differentiates something that can be verified from something engineered for a different purpose.
I don’t think the social media companies intentionally want to spread fake news, but they should still take action to prevent it, since their engagement-maximizing algorithms are a contributor. Marking disputed articles as such, listing additional sources alongside them, and removing consistently proven bad actors is helpful, as Facebook-owned platforms and Twitter have done. However, many have come to see “big tech” in a bad light, so these measures alone may not be trusted, given the companies’ profit and engagement motives, and there are legitimate concerns over these companies deciding what is and is not disinformation.
Again, though, it is not the bad actors themselves who need to be kept from trusting misinformation, but those at the beginning edges of the rabbit hole. Legislation requiring social media companies to publish explicit content guidelines and to provide reasoning for bans and post deletions could also help increase trust and transparency.
It is a divided time right now, with people able to curate, or be pushed into, a seemingly infinite number of sequestered political rabbit holes. While this cannot be fixed immediately, we have to realize that most people do not know how to “use” the internet, and phenomena like QAnon, harmful alternative medicine, and more show that not everyone can recognize that a Pacific Northwest Tree Octopus might not be a real animal.