The disinformation disease: Will the cure help or hinder?
This week my four-year-old daughter started at a new kindy. She reported that one of the boys had said that she was “not a good baker in the sandpit.” Objectively he might have had a point, but I recited that old saw “Sticks and stones may break my bones, but words will never hurt me.” As fatherly advice, this seemed to work. However, it is not at all clear that this adage still commands unanimous agreement.
For example, the Department of Internal Affairs is undertaking a Content Regulatory Review premised on the belief that digital media “has resulted in a significant increase of potential for New Zealanders to be exposed to harmful content”. Since “content” is defined as “any communicated material,” it seems that words may indeed hurt.
Then there is governmental concern about “disinformation”. According to the 2020 briefing to Incoming Ministers on COVID-19, a key focus of the public health response has been “countering misinformation, rumour and disinformation.” Last year the Department of Prime Minister and Cabinet advertised for a senior analyst to “actively monitor open source social media channels” to help curb disinformation.
One does not need to look hard for examples of objectionable online content. The 2019 attack on Christchurch’s Al Noor Mosque was livestreamed. A scroll through Twitter will make you question our collective sanity. Recently police partially blamed social media as an incentive for children to carry out ram raids. But we can acknowledge all of this and still be wary of content regulation by government bodies.
The key questions are: Who decides what content is harmful, and how do they decide? The Content Regulatory Review’s definition of “harm” is exceptionally broad. It includes damage to an individual’s emotional and mental wellbeing, as well as content that causes individuals to lose trust in “key public institutions.” The dangers of catching too much in this definitional dragnet are obvious. Does harm include being distressed or offended by what we read? What about when institutions deserve to lose our trust?
Curbing “disinformation” is also fraught. The label can easily cover opinions and facts that are politically inconvenient, unpopular, or merely unproven. An excellent blog post by the IT and legal expert “The Countrey Justice” summed it up well:
“…there seems to be an assumption that citizens are unable to make up their own minds about the validity of certain content and that essentially the whole of society is gullible and needs to be protected from itself. This is no more than a form of, at best, patronizing paternalism … fostered by a strong belief that the few know what is best for the many.”
And this paternalism is unnecessary. According to the Acumen Edelman 2022 Trust Barometer, New Zealanders are already world leaders in our scepticism of the media, particularly social media.
We shouldn’t lie, and we should be polite in online discourse. But when the Government decides which content we can consume, we are being treated like four-year-olds in the sandpit.