Over the coming weeks we’ll be publishing a series of articles on the Online Safety Bill. Today, as an introduction, we’ll look at the basics of the Bill and what it means for organisations going forward.
As part of the Government’s promise to make the UK the safest place in the world to be online, it has strengthened the Online Safety Bill since the first draft was published in May 2021. The Bill, which also aims to protect free speech, has undergone Parliamentary scrutiny, and a clearer version has now been published.
One of the key areas the Bill covers is imposing rules on organisations that host user-generated content. These are organisations that allow users to post their own content or use the platform to speak to other users online. The big hitters here are platforms such as YouTube (Google), Facebook (Meta) and TikTok (ByteDance). The Bill also covers search engines, which will be required to focus on minimising the likelihood of harmful search results being shown to users. The Bill states that those who fail to comply with its regulations could face fines from the regulator of up to 10% of the organisation’s revenue or, in the most extreme cases, a ban. All platforms will need to remove illegal content, particularly content relating to terrorism or child abuse and sexual exploitation.
There will also be obligations on platforms that are likely to be used by children to prevent them from seeing content that, whilst not illegal, would still be harmful to children. This includes content relating to topics such as self-harm or eating disorders. In addition, platforms that show pornographic content will need to ensure that children are unable to access it. The largest platforms will also need to revisit how they govern certain content that is legal but still harmful and that adults can access, including harassment, abuse, and content relating to self-harm and eating disorders.
These organisations will need to clearly outline their stance on these topics in their terms and conditions, state what they will and won’t allow, and ensure that they enforce these rules. Giving users greater control over the content they see is also a focus of the Bill, and organisations will need to give users greater power to choose who they interact with and what content they see. Another big change is giving users the option to verify their identity. One of the big issues with social media platforms is the ability to send abuse to people anonymously, so this is a step in the right direction. However, with many people believing that verifying your identity online should be a requirement in order to cut down on online abuse, there may still be work to do on this front.
The Government has highlighted that freedom of expression will not be impacted as a result of the Bill: the focus is on making sure that organisations have systems and processes in place to prevent harm to users, as opposed to imposing state regulation or removal of content. The final key obligation on larger-scale organisations will be the requirement to put systems in place to prevent the publishing of harmful advertising on their platforms, which aims to stop scam adverts that have a terrible impact on the users who fall victim to them.
The full Online Safety Bill can be read here.