Ofcom, the regulator newly charged with enforcing the UK's online safety regime, has unveiled its first draft codes of practice following the Online Safety Act's recent passage into law. These initial codes focus primarily on illegal online content, covering issues such as child sexual abuse material, grooming, and fraud.
Revealing New Guidelines
After the Online Safety Act's years-long journey through the UK's legislative process, The Verge reported that regulator Ofcom has published its first guidelines on how technology companies can comply with the sweeping legislation.
The proposals set out the steps that social media platforms, search engines, online and mobile games, and adult content websites should take to tackle illegal content such as child sexual abuse material (CSAM), terrorism-related content, and fraud.
The guidelines released today are just that: proposals. Ofcom's aim is to gather input and feedback before the codes are put to the UK Parliament for approval toward the end of next year.
Ofcom's proposals also address other forms of illegal harm, including content that promotes suicide or self-harm, harassment, revenge pornography, sexual exploitation, and the distribution of drugs and firearms.
Under the proposals, search services must surface "crisis prevention information" when users enter suicide-related queries, and companies must carry out risk assessments when updating recommendation algorithms to ensure the changes don't amplify illegal content.
Users who suspect a service isn't complying with the rules will be able to lodge complaints directly with Ofcom. The regulator can fine violators up to £18 million or 10 percent of their global turnover, whichever is higher; for a company with £1 billion in annual turnover, for example, that ceiling would be £100 million. In severe cases, offending websites may be blocked in the UK.
Confronting More Sensitive Issues
While the current consultation primarily addresses less contentious aspects of the Online Safety Act, such as reducing the dissemination of content that was already illegal in the UK, future updates will confront more sensitive issues.
These include content that is legal but potentially harmful to children, underage access to pornography, and protections for women and girls. Ofcom will also need to address a controversial section of the Act that critics argue could have significant implications for end-to-end encryption in messaging applications.
That section empowers Ofcom to require online platforms to use so-called "accredited technology" to detect CSAM. Encrypted messaging services such as WhatsApp, along with digital rights groups, argue that such scanning would mean breaking apps' encryption systems and infringing on user privacy.
Gill Whitehead, Ofcom's online safety lead, has indicated that the regulator intends to open consultations on this matter next year, leaving the provision's full implications for encrypted messaging services uncertain.
While today's consultation doesn't focus on artificial intelligence (AI), Daily Gazette reported that AI-generated content isn't exempt from the regulations. The Online Safety Act aims to address online harms in a "technology-neutral" manner, regardless of how content is created.
AI-generated CSAM would therefore fall within the rules because it is CSAM, and a deepfake used for fraud would be covered because of the fraud. Ofcom's focus is on regulating the content, not the technology behind it.