Scientific American


We Need Product Safety Regulations for Social Media

As social media more frequently exposes people to brutality and untruths, we need to treat it like a consumer product, and that means product safety regulations


Like many people, I’ve used Twitter, or X, less and less over the last year. There is no single reason for this: the system has simply become less useful and fun. But when the terrible news about the attacks in Israel broke recently, I turned to X for information. Instead of updates from journalists (which is what I used to see during breaking news events), I was confronted with graphic images of the attacks that were brutal and terrifying. I wasn’t the only one; some of these posts had been viewed millions of times and reposted by thousands of people.

This wasn’t an ugly episode of bad content moderation. It was the strategic use of social media to amplify a terror attack made possible by unsafe product design. This misuse of X could happen because, over the past year, Elon Musk has systematically dismantled many of the systems that kept Twitter users safe and laid off nearly all the employees who worked on trust and safety at the platform. The events in Israel and Gaza have served as a reminder that social media is, before anything else, a consumer product. And like any other mass consumer product, using it carries big risks.

When you get in a car, you expect it will have functioning brakes. When you pick up medicine at the pharmacy, you expect it won’t be tainted. But it wasn’t always like this. The safety of cars, pharmaceuticals and dozens of other products was terrible when they first came to market. It took much research, many lawsuits, and regulation to figure out how to get the benefits of these products without harming people.

Like cars and medicines, social media needs product safety standards to keep users safe. We still don’t have all the answers on how to build those standards, which is why social media companies must share more information about their algorithms and platforms with the public. A bipartisan bill now before Congress would give users the information they need to make informed decisions about which social media products they use, and it would let researchers get started figuring out what those product safety standards could be.

Social media risks go beyond amplified terrorism. The dangers that attention-maximizing algorithms pose to teens, and particularly to girls, whose brains are still developing, have become increasingly clear. Other product design elements, often called “dark patterns,” built to keep people scrolling longer also appear to tip young users into social media overuse, which has been linked to eating disorders and suicidal ideation. This is why 41 states and the District of Columbia are suing Meta, the company behind Facebook and Instagram. The complaint accuses the company of engaging in a “scheme to exploit young users for profit” and of building product features to keep kids logged on to its platforms longer, while knowing that doing so was damaging to their mental health.

Whenever they are criticized, Internet platforms have fallen back on the same defenses. They say it’s their users’ fault for engaging with harmful content in the first place, even if those users are children or the content is financial fraud. They also claim to be defending free speech. It’s true that content moderation can be used to suppress legitimate speech, and some repressive regimes abuse this process. But the issues we currently face aren’t really about content moderation. X’s policies already prohibit violent terrorist imagery; the content was widely seen anyway only because Musk took away the people and systems that stop terrorists from leveraging the platform. Meta isn’t being sued over the content its users post but over the product design decisions it made while allegedly knowing they were dangerous to its users. Platforms already have systems to remove violent or harmful content. But if their feed algorithms recommend content faster than their safety systems can remove it, that is simply unsafe design.

More research is desperately needed, but some things are becoming clear. Dark patterns like autoplaying videos and endless feeds are particularly dangerous to children, whose brains are not yet fully developed and who often lack the maturity to put their phones down. Engagement-based recommendation algorithms disproportionately recommend extreme content.

In other parts of the world, authorities are already taking steps to hold social media platforms accountable for their content. In October, European regulators formally warned X about the spread of terrorist and violent content, as well as hate speech, on the platform. Under the Digital Services Act, which came into force in Europe this year, platforms are required to take action to stop the spread of this illegal content and can be fined up to 6 percent of their global revenues if they don’t do so. If this law is enforced, maintaining the safety of their algorithms and networks will be the most financially sound decision for platforms to make, since ethics alone do not seem to have generated much motivation.

In the U.S., the legal picture is murkier. The case against Facebook and Instagram will likely take years to work through our courts. Yet there is something that Congress can do now: pass the bipartisan platform transparency bill. This bill would finally require platforms to disclose more about how their products function so that users can make more informed decisions. Moreover, researchers could begin the work needed to make social media safer for everyone.

Two things are clear: First, online safety problems are leading to real, offline suffering. Second, social media companies can’t, or won’t, solve these safety problems on their own. And those problems aren’t going away. As X is showing us, even safety issues like the amplification of terror that we thought were solved can pop right back up. As our society moves online to an ever-greater degree, the idea that anyone, even teens, can just “stay off social media” becomes less and less realistic. It’s time we require social media to take safety seriously, for everyone’s sake.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Laura Edelson is an assistant professor of computer science at Northeastern University and former chief technologist at the Department of Justice's Antitrust Division.