Big tech regulation needs more than industry codes

By 'Alapasita Pomelile | August 02, 2022

This past week saw the birth of the Aotearoa New Zealand Code of Practice for Online Safety and Harms. Five of the world's big tech companies (Meta, Google, TikTok, Twitch, and Twitter), in a joint effort to reduce harmful online content, are now signatories to this industry code, which sets the benchmark for online safety in the Asia-Pacific region. The Code is a framework outlining principles and voluntary commitments to safer online practices on digital platforms.
Because the internet is a borderless terrain, this industry code can be seen as a step in the regulatory direction. On the other hand, it raises questions: can we really regulate online content and harm? As discussed in a previous column, what constitutes “harm”? And who gets to decide what that is?

Big tech companies dictate and curate the online world in our modern society. We get our news, our schooling, and our communication with others, and with the world, through their platforms. We interact with technology and digital systems daily, which means we engage on these platforms' terms. Tech companies constantly shape our understanding of the world, so much so that we often forget they operate in a largely unregulated environment. So when tech companies, which hold vast digital power and are unaccountable for it, lead the charge on any form of regulation, societies need to pay attention.

Big tech companies decide the rules of engagement within their platforms. Whether we're passive or active users, we are beholden to their rules, terms, and conditions. Digital platforms can enforce their laws in a way any king could only dream of. These companies also collect data on us, further consolidating their digital power. Knowing we are watched, we self-regulate our behaviour without anyone needing to tell us yes or no.
In 2020, Meta CEO Mark Zuckerberg called for governments to collaborate with online platforms to develop and adopt new regulation for online content, noting, “It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services—all with their own policies and processes—we need a more standardized approach.” Industry codes are a step in that direction. However, what is often missing from the discussion is this: if people are still involved in creating and using digital systems and platforms, is regulating online content enough?

In addition to traditional modes of regulation, governance, certification, and rules applied to digital platforms and products, should we also regulate people?
For policymakers, it's time to step up and assess which roles and functions within the influential tech industry need oversight. Tech companies cannot be left to decide for themselves what's right and wrong. For the public, it may be a matter of regulating our own online behaviour: is the right to freedom of speech a licence to be unwise and unruly with our words online? When engaging online, it's healthy to check in every now and then and ask yourself, what sort of person are you becoming as you're immersed in this digital world?
