What Is the Social Media Regulation Act?

The Social Media Regulation Act is a new law that aims to restrict the online interactions of individuals under 18 and to prevent companies from enticing minors onto their platforms.

The first version of this law in the United States has been enacted in the state of Utah and is set to take effect on March 1, 2024.

Under this law, social media companies must implement a curfew for minors in the state, prohibiting them from accessing their accounts between 10:30 PM and 6:30 AM (a sketch of how such a check might work follows the list below). The legislation requires companies to give a parent or guardian access to their child's accounts, and adults must confirm their age to access social media platforms or risk losing their accounts. The legislation is a response to concerns raised by experts and policymakers across the country regarding the potential negative impact of social media on the mental health of young users, according to Axios' Kim Bojórquez and Erin Alberty.

Here are some details from the version of the law passed in the state of Utah:

  • Requires a social media company to obtain the consent of a parent or guardian before a Utah resident under the age of 18 may maintain or open an account.
  • Prohibits a social media company from permitting a Utah resident to open an account if that person does not meet age requirements under state or federal law.
  • Requires that for accounts held by a Utah minor, certain social media companies:
    • Shall prohibit direct messaging with certain accounts
    • May not show the minor's account in search results
    • May not display advertising
    • May not collect, share, or use personal information from the account, with certain exceptions
    • May not target or suggest ads, accounts, or content
    • Shall limit hours of access, subject to parental or guardian direction
  • Requires a social media company to provide a parent or guardian access to the content and interactions of an account held by a Utah resident under the age of 18.

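Several of the requirements above, the overnight curfew and the parental-consent rule in particular, amount to access-control checks a platform would have to run when a Utah minor tries to use an account. The sketch below is purely illustrative and not taken from the statute or any platform's code; the `Account` fields and the `can_access` function are assumptions for the example, and the default curfew window reflects the 10:30 PM to 6:30 AM hours described above.

```python
from dataclasses import dataclass
from datetime import time

# Default curfew window described in the Utah law: 10:30 PM to 6:30 AM,
# adjustable at the direction of a parent or guardian.
CURFEW_START = time(22, 30)
CURFEW_END = time(6, 30)

@dataclass
class Account:
    age: int                    # verified age of the account holder
    is_utah_resident: bool
    has_parental_consent: bool  # consent from a parent or guardian

def in_curfew(now: time, start: time = CURFEW_START, end: time = CURFEW_END) -> bool:
    """True if the local time falls inside the overnight curfew window."""
    # The window wraps past midnight, so it is "after start OR before end".
    return now >= start or now < end

def can_access(account: Account, now: time) -> bool:
    """Hypothetical access check applying the law's default rules."""
    if not account.is_utah_resident:
        return True                   # the law covers Utah residents only
    if account.age < 18:
        if not account.has_parental_consent:
            return False              # minors need parental consent to hold an account
        if in_curfew(now):
            return False              # default overnight curfew for minors
    return True

# A 16-year-old with parental consent, checked at 11:00 PM, is blocked by the curfew.
print(can_access(Account(16, True, True), time(23, 0)))  # False
```
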
This latest iteration of a law aimed at regulating minors' use of social media is more restrictive than the 2022 California law, the Age-Appropriate Design Code Act, which bars tech firms from using children's personal data in any way that may be harmful to their physical or mental well-being. Web platforms commonly accessed by children must implement data privacy measures, including setting user preferences to high-privacy by default, presenting privacy policies in language that children can comprehend, and prohibiting the use of children's personal information for any purpose other than the one for which it was originally collected.
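
The "high-privacy by default" requirement is essentially a rule about initial configuration: accounts likely to belong to children start with the most protective settings already enabled. The sketch below only illustrates that idea; the setting names are invented for the example and do not come from the Act.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    profile_searchable: bool
    targeted_ads: bool
    share_with_third_parties: bool
    precise_location: bool

def default_settings(likely_child: bool) -> PrivacySettings:
    """Return initial settings; accounts likely held by children get the most protective defaults."""
    if likely_child:
        # High-privacy by default: no search visibility, targeting, sharing, or location use.
        return PrivacySettings(False, False, False, False)
    return PrivacySettings(True, True, False, True)

print(default_settings(likely_child=True))
```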

As of 2022, 34 of the 50 US states, including Arkansas, Texas, Ohio, Louisiana, New Jersey, and New York, have enacted or are considering laws designed to regulate the use of social media by minors or by the public in general.

Why Do Governments Want to Regulate Social Media?

Governments aim to restrict social media access for minors to protect them from potential harm such as cyberbullying, exposure to inappropriate content, and exploitation. Social media can have a negative impact on children's mental health and well-being, and governments want to mitigate these risks by regulating social media companies and imposing restrictions on the use of their platforms by minors. In addition, social media can be used as a tool for online radicalization and propaganda, which can pose a threat to national security. Governments seek to strike a balance between protecting minors and upholding free speech and privacy rights.

Social Media and Mental Health

Nearly all researchers now agree that there are correlations between (crude measures of) time spent using social media and (crude self-report measures of) mental health problems, but there is heated disagreement about the size and significance of these effects.

Social media can have a negative impact on children's mental health and well-being in a variety of ways. One of the main concerns is that social media can expose children to cyberbullying, which can be just as harmful as physical bullying. Cyberbullying can involve harassment, threats, and humiliation, and it can lead to feelings of isolation, anxiety, and depression. Social media can also create unrealistic expectations and foster feelings of inadequacy and low self-esteem. This is especially true with image-based platforms such as Instagram, which can promote an unattainable and idealized standard of beauty that can negatively impact young users' body image and self-worth.

Moreover, social media can lead to addiction and sleep deprivation, which can further exacerbate mental health problems. Studies have found that excessive social media use can disrupt sleep patterns, reduce the quality of sleep, and increase feelings of fatigue and irritability. This can have a negative impact on children's cognitive and emotional functioning and can contribute to mood disorders such as anxiety and depression.

Finally, social media can create a sense of social isolation and loneliness, despite providing the illusion of connectedness. Social media can lead to a lack of face-to-face interactions and a reduced sense of belonging, which can contribute to feelings of depression and anxiety. Overall, while social media can provide numerous benefits for children, including educational and social opportunities, it is important to recognize the potential risks and implement measures to mitigate them.

Social Media Regulation Around The World

The European Union

The European Union (EU) has been at the forefront of creating internet and social media laws that address a range of issues affecting digital platforms. A major area of focus has been on content moderation, with the EU introducing regulations that require platforms to take a proactive approach to remove illegal and harmful content.

These regulations aim to ensure that online platforms operate in a responsible and transparent manner, and that user-generated content does not violate laws or infringe on the rights of others. In addition to removing illegal content, the EU also requires platforms to have an appeals and review process for content moderation decisions, giving users a means to challenge decisions made by platforms.

Another important aspect of the EU's internet and social media laws is the imposition of transparency obligations on platforms' terms of service and content moderation decisions. This requires platforms to be more open about their policies and the actions they take to moderate content, providing users with greater clarity on how their data is being used.

The EU has also taken steps to regulate the algorithms used by social media platforms to scale content moderation practices. This has become increasingly important as the volume of user-generated content continues to grow, making it more difficult for platforms to moderate content manually. The EU's regulations aim to ensure that algorithms used by platforms are transparent, accountable, and fair, and that they do not unfairly discriminate against any particular group of users.

Germany

Since 2018, Germany's NetzDG law has been in effect and applies to companies with over two million registered users within the country. These companies are mandated to establish procedures for reviewing complaints about content hosted on their platform, with a requirement to remove any material that is deemed to be obviously illegal within 24 hours of notification. In addition, companies are required to provide updates every six months about their progress in implementing these measures.

Individuals may face a fine of up to €5 million ($5.6 million; £4.4 million), while companies may be fined up to €50 million for failing to comply with these regulations. The aim of the law is to create a safer online environment by ensuring that companies take responsibility for monitoring and moderating the content that they host.
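
To make the 24-hour requirement concrete, the sketch below shows one way a deadline check could work: given the time a complaint about obviously illegal content was received, it computes the removal deadline and reports whether the takedown happened in time. This is an assumption-laden illustration, not a description of any platform's actual compliance tooling.

```python
from datetime import datetime, timedelta

# NetzDG: content that is obviously illegal must be removed within 24 hours of notification.
REMOVAL_WINDOW = timedelta(hours=24)

def removal_deadline(notified_at: datetime) -> datetime:
    """Deadline by which obviously illegal content must be taken down."""
    return notified_at + REMOVAL_WINDOW

def handled_in_time(notified_at: datetime, removed_at: datetime) -> bool:
    """True if the takedown happened within the 24-hour window."""
    return removed_at <= removal_deadline(notified_at)

# A complaint received at 09:00 must be resolved by 09:00 the next day.
notified = datetime(2018, 6, 1, 9, 0)
print(handled_in_time(notified, datetime(2018, 6, 1, 20, 0)))  # True
print(handled_in_time(notified, datetime(2018, 6, 2, 12, 0)))  # False
```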

Australia

In 2019, Australia introduced the Sharing of Abhorrent Violent Material Act, which imposes criminal penalties on social media companies for the dissemination of violent material. The act also allows tech executives to face jail sentences of up to three years and companies to be fined up to 10% of their global turnover.

The introduction of the act was prompted by the live-streaming on Facebook of the mosque shootings in Christchurch, New Zealand, which highlighted the need for greater regulation of social media platforms to prevent the spread of harmful content. The act seeks to hold social media companies accountable for the content shared on their platforms and to provide a strong deterrent against the dissemination of violent and abhorrent material.

China

Twitter, Google, and WhatsApp are among the sites that are blocked in China. Instead, Chinese providers such as Weibo, Baidu, and WeChat are used. The Chinese government has been successful in limiting access to virtual private networks that allow users to bypass the blocks on sites. The Cyberspace Administration of China has closed hundreds of websites and mobile apps, with a focus on illegal gambling and piracy. China employs hundreds of thousands of cyber-police who monitor social media and censor politically sensitive messages. Censored words are automatically filtered from social platforms, and new sensitive words are added to the list.
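
The automatic filtering of censored words is, at its simplest, keyword matching against a blocklist that is continually extended. The sketch below is a deliberately simplified illustration of that mechanism; real systems reportedly also rely on image recognition, account-level signals, and large teams of human reviewers.

```python
# A deliberately simplified keyword filter: posts containing any term on the
# blocklist are suppressed, and new sensitive terms can be added at any time.
blocked_terms: set[str] = {"example-banned-phrase"}

def add_sensitive_term(term: str) -> None:
    """Extend the blocklist with a newly designated sensitive term."""
    blocked_terms.add(term.lower())

def is_blocked(post: str) -> bool:
    """True if the post contains any blocked term (case-insensitive substring match)."""
    text = post.lower()
    return any(term in text for term in blocked_terms)

add_sensitive_term("another-banned-phrase")
print(is_blocked("This post mentions another-banned-phrase."))  # True
print(is_blocked("This post is fine."))                          # False
```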
