Food and beverage companies, the automotive industry, and financial services are all subject to regulation and accountability measures designed to ensure high standards of ethics, fairness, and transparency. Tech companies, on the other hand, have often argued that any legislation will limit their ability to act effectively, turn profits, and keep doing whatever made them powerful in the first place. Currently, there’s a slate of bills and legislation around the world that finally aims to curtail these powers, like the UK’s long-awaited Online Safety Bill. That bill will pass in 2023, but its limitations mean that it won’t be effective.

The Online Safety Bill has been in the works for several years and effectively places the duty of care for monitoring illegal content onto platforms themselves. It could also impose an obligation on platforms to restrict content that is technically legal but could be considered harmful, which would set a dangerous precedent for free speech and the protection of marginalized groups.

In 2020 and 2021, YouGov and BT (along with the charity I run, Glitch) found that 1.8 million people surveyed said they’d suffered threatening behavior online in the past year. Twenty-three percent of those surveyed were members of the LGBTQIA community, and 25 percent said they had experienced racist abuse online. In 2023, legislation aimed at tackling some of these harms will come into effect in the UK, but it won’t go far enough.

Campaigners, think tanks, and experts in this area have raised numerous concerns about the effectiveness of the Online Safety Bill as it currently stands. The think tank Demos emphasizes that the bill doesn’t specifically name minoritized groups, such as women and the LGBTQIA community, even though these communities tend to be disproportionately affected by online abuse. The Carnegie UK Trust notes that while the term “significant harm” is used in the bill, there are no specific processes to define what this is or how platforms would have to measure it. Academics and other groups have raised the alarm over the bill’s proposal to drop the previous Section 11 requirement that Ofcom should “encourage the development and use of technologies and systems for regulating access to [electronic] material.” Still others have raised concerns about the removal of clauses around education and future-proofing, which would leave the legislation reactive and ineffective, unable to account for harms caused by platforms that haven’t yet gained prominence.

Platforms have to change, and other countries have passed legislation trying to make this happen. Germany enacted NetzDG in 2017, becoming the first country in Europe to take a stance against hate speech on social networks: platforms with more than 2 million users have a seven-day window to remove illegal content or face fines of up to 50 million euros. In 2021, EU lawmakers set out a package of rules for Big Tech giants in the Digital Markets Act, which stops platforms from giving their own products preferential treatment. And in 2022, we saw progress with the EU AI Act, which involved extensive consultation with civil society organizations to adequately address concerns around marginalized groups and technology, a working arrangement that campaigners in the UK have been calling for.
In Nigeria, the federal government issued a new internet code of practice in an attempt to address misinformation and cyberbullying, including specific clauses to protect children from harmful content. In 2023, the UK will pass legislation aimed at tackling similar harms, finally making progress on a regulatory body for tech companies. Unfortunately, the Online Safety Bill won’t contain adequate measures to actually protect vulnerable people online, and more will need to be done.