Meta Platforms is rolling out its “Teen Accounts” feature on Facebook and Messenger on Tuesday. The accounts come with more stringent privacy settings for young users.
This is the latest effort to quiet critics who say the social networking company is not doing enough to protect young users from online harms.
Meta expands teen protections with stricter controls and content filters
As part of the new policy updates announced Tuesday, teens under 16 will no longer be able to host live videos without parental approval or share images that may contain nudity via direct messages.
The company said that enhanced privacy and parental controls introduced on Instagram last year will address concerns about how teens spend their time on social media.
With Teen Accounts, users under 18 are shielded from sensitive content, restricted from messaging certain accounts, and prevented from having publicly discoverable profiles. While users aged 16 and 17 can adjust these settings, younger teens require parental consent to make any changes.
According to Meta, 97% of teens aged 13 to 15 have opted to keep these protections enabled, and over 54 million users currently have Teen Accounts.
Importantly, Meta’s teen-specific content filters will override recent policy changes that allow limited hate speech under certain contexts. For users under 18, this means content containing derogatory language targeting transgender or non-binary individuals will remain blocked.
“There is no change to how we treat content that exploits children or content that encourages suicide, self-injury or eating disorders, nor do our bullying and harassment policies change for people under 18,” a Meta spokesperson said.
Meta has stepped up its efforts to safeguard teens following years of intense scrutiny from lawmakers, parents, and regulators over its failure to adequately protect young users online.
Last year, more than 30 U.S. states filed a lawsuit against the company, accusing it of exploiting young people through its platforms.
Meta CEO Mark Zuckerberg also faced tough questioning during a congressional hearing focused on shielding children from online sexual predators. In a packed hearing room, he issued a public apology to the families of children who had been victims of sexual exploitation on social media.
Judge allows key claims to proceed in landmark case
U.S. District Judge Yvonne Gonzalez Rogers noted in a 102-page decision that many consumer protection claims brought by the attorneys general of 34 states are “cognizable.”
She denied Meta’s bid to dismiss part of the states’ claims under the Children’s Online Privacy Protection Act, or COPPA, which prohibits collecting data from social media users younger than 13 without notifying and obtaining permission from their parents.
Meta sought to have these claims thrown out, arguing that neither Facebook nor Instagram is directed at children.
While Meta argued that third-party content aimed at children shouldn’t classify its platforms as being directed at children, Judge Gonzalez Rogers disagreed. She ruled that content hosted by a platform—even if posted by third parties—can be considered when determining whether the platform or parts of it are directed at children under the law.
The judge found that Meta’s design, development, and implementation of certain product features could reasonably be seen as unfair or unconscionable under relevant federal and state laws. However, she also noted that Section 230 of the Communications Decency Act—which protects online platforms from liability for user-generated content—places limitations on parts of the case against the company.
Regarding the states’ consumer protection claims, Gonzalez Rogers wrote that some Facebook and Instagram features the states say get children hooked are protected under Section 230 from liability for content posted by users.
The judge said she had already concluded in a related ruling from 2023 that, by challenging these features, the lawsuits directly target the platforms’ roles as publishers of third-party content and run afoul of Section 230.
The protected features include infinite scroll and autoplay, ephemeral content, disruptive audiovisual and vibration notifications and alerts, and quantification and displays of “likes.”
However, other features—such as appearance-altering filters, tools that restrict time spent on the platform, and Instagram’s multiple-account function—aren’t shielded under Section 230 because they don’t involve the publishing of third-party content, Gonzalez Rogers said, again referencing her ruling from 2023.