A new initiative promises to bring together the biggest social media firms in a code of conduct meant to combat misinformation and hate speech online.
The Aotearoa Code of Practice for Online Safety and Harms requires companies to regulate the amount of harmful content on their digital platforms in New Zealand. Created by non-profit online safety organisation Netsafe, the initiative has already drawn support from Meta (Facebook & Instagram), Google (YouTube), Amazon (Twitch), Twitter, and TikTok.
Code of conduct for social media
Netsafe’s Aotearoa Code tasks companies with keeping their platforms and services free from misinformation and hate speech. Members of the public who believe a signatory has breached the code can file an official complaint. Companies found to have breached it face sanctions, including expulsion from the agreement.
Each member company is required to publish an annual report on how it is adhering to the code. The report focuses on how the company uses its systems, policies, processes, and tools to prevent the spread of harmful content.
The code covers seven themes that social media companies must address: bullying or harassment, child sexual exploitation and abuse, hate speech, incitement of violence, misinformation, disinformation, and violent or graphic content.
Netsafe CEO Brent Carey describes the Aotearoa Code as the first of its kind in the world.
“Having this code, which is filling some regulatory gaps, is a good first step to try to address some of these emerging issues, especially around hate speech, misinformation and disinformation,” Carey said, as quoted by Stuff NZ.
The drafters of the Aotearoa Code clarified that it was not intended to replace obligations to existing laws or other voluntary regulations. The code serves as a “living document in that it is required to be regularly reviewed”.
Reducing harmful content online
Several events led to the creation of the Aotearoa Code. Earlier this year, thousands of people forcibly occupied Parliament grounds to protest New Zealand’s vaccine mandates and Covid restrictions — an occupation widely believed to have been fuelled by misinformation spread on social media.
In 2019, a terrorist attack on a mosque in Christchurch was livestreamed on Facebook. In the 24 hours following the shootings, the company removed 1.5 million videos of the attack from its platform.
Other countries have also begun adopting initiatives to prevent the spread of harmful content on social media, with some enacting legislation to curb such behaviour online.