The UK Online Safety Bill
Privacy versus protection: weighing the legal and operational constraints.
The UK Online Safety Bill received its first reading in Parliament on 17th March 2022. That stage is purely a formal introduction of the Bill to the House of Commons. Before the Bill becomes law it will be debated in the Commons and the Lords, and scrutinised by committees of both Houses, which have the power to make amendments. Now is the time for interested parties to examine the proposed contents of the Bill and make their views known.
At present the Bill runs to 225 pages and is unlikely to get much shorter. Its core aim is to protect children and other vulnerable users from exploitation online. That is a laudable goal, but to achieve it the Bill must change some of the ways businesses use the Internet.
Providers of user-to-user and search services will have a duty of care to protect users, and will face liability if suitable protection measures are not in place. This does not apply only to the big players such as Facebook or Google. Any business providing a service that allows users to contact each other or share data could be covered by the new law, including the web forums and chat agents commonly used for interaction with customers. Few companies code these from scratch, but where an existing engine such as phpBB (a web forum) or HelpScout (a live chat service) is in use, the host will still be responsible for its safe use.
The Internet is continually evolving. By the time this Act becomes law there may be new services in general use that fall under the user-to-user duty of care. As a first step, businesses should review the services they currently use or are considering adopting, and ask those providers how they plan to comply with the new Act. At a minimum, a service should provide suitable security to protect users and have a system in place to monitor and report inappropriate content.
Businesses need to ensure that any services they are responsible for do not become engines for the transmission or storage of potentially harmful material. Procedures for this should already be in place. A simple example is the moderation of comments on a webpage: messages are held before publication, obvious spam is filtered automatically, and a human administrator reviews the rest, deleting the spam and publishing only comments that are good for the business. An unmoderated comment system with no spam filter could quickly be taken over for almost any purpose, and the site operator would be liable.
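To make that hold-and-review flow concrete, here is a minimal sketch in Python. The keyword list, data structures and function names are illustrative assumptions, not part of any particular forum engine; a real deployment would use a trained spam filter and persistent storage.

```python
from dataclasses import dataclass
from typing import List

# Illustrative keyword list; a real system would use a trained spam filter.
SPAM_KEYWORDS = {"cheap pills", "casino", "crypto giveaway"}

@dataclass
class Comment:
    author: str
    text: str
    status: str = "held"  # every comment starts held, never auto-published

def auto_filter(comment: Comment) -> Comment:
    """Reject obvious spam automatically; everything else stays held."""
    lowered = comment.text.lower()
    if any(keyword in lowered for keyword in SPAM_KEYWORDS):
        comment.status = "rejected"
    return comment

def review_queue(comments: List[Comment]) -> List[Comment]:
    """Return the comments still awaiting a human moderator's decision."""
    return [c for c in map(auto_filter, comments) if c.status == "held"]

def moderate(comment: Comment, approve: bool) -> None:
    """A human administrator makes the final publish/reject decision."""
    comment.status = "published" if approve else "rejected"

if __name__ == "__main__":
    queue = review_queue([
        Comment("alice", "Great article, thanks."),
        Comment("spammer", "Visit our casino for a crypto giveaway!"),
    ])
    for c in queue:
        moderate(c, approve=True)  # in practice, a moderator's judgement
        print(c.author, c.status)
```

The essential design point is that nothing is published by default: automation only rejects, and publication always requires a human decision.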
The Online Safety Bill has prompted debate on the issue of privacy versus protection. The UK government-backed ‘No Place to Hide’ campaign aims to discourage social media companies from enforcing end-to-end encryption on their communication channels. End-to-end encryption is already the default on WhatsApp and is offered on platforms such as Teams and Zoom, while others such as Facebook Messenger plan to make it the default. The core of the conflict is between the right to personal privacy and the duty of care to protect the vulnerable. The implication is that anyone who needs to encrypt conversations must have something to hide.
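To illustrate why end-to-end encryption frustrates monitoring, here is a small sketch using the PyNaCl library. The choice of library is an assumption for illustration; the platforms above use their own protocols, such as the Signal protocol in WhatsApp. The point it demonstrates is that a message is encrypted with the recipient's public key, so the relaying server only ever sees ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The service provider relays only this ciphertext and cannot decrypt it:
# without one of the two private keys there is no Box that can open it.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```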
Without end-to-end encryption a content provider can monitor traffic, highlight signs of illicit activity and investigate it further, and a police body could require the provider to divulge evidence of illegal activity. This makes compliance with the Online Safety Bill easier, but it requires records of such activity to be kept, and, like any computer files, those records would themselves be vulnerable to a data breach.
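As a rough sketch of what such server-side monitoring implies, the Python fragment below scans unencrypted messages against an illustrative watch-list and writes flagged items to a log. The watch-list, field names and log format are assumptions for illustration only; note that the log file is exactly the kind of record that would need protecting against a breach.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative watch-list; real systems use far more sophisticated detection.
WATCHLIST = {"illegal sale", "exploitation"}

# Flagged traffic goes to a log the provider must retain and protect:
# this file is itself sensitive data and a target for any breach.
logging.basicConfig(filename="flagged_traffic.log", level=logging.INFO)

def scan_message(sender: str, recipient: str, text: str) -> bool:
    """Flag and record a message if it matches the watch-list."""
    if any(term in text.lower() for term in WATCHLIST):
        logging.info(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "sender": sender,
            "recipient": recipient,
            "text": text,  # retained as potential evidence
        }))
        return True
    return False

if __name__ == "__main__":
    print(scan_message("a", "b", "Nothing to see here"))   # False
    print(scan_message("a", "b", "Exploitation for sale")) # True, and logged
```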