- cross-posted to:
- technology@lemmy.world
Summary
A Danish study found Instagram’s moderation of self-harm content to be “extremely inadequate,” with Meta failing to remove any of 85 harmful posts shared in a private network created for the experiment.
Despite Meta's claim that it proactively removes 99% of such content, the platform's algorithms were shown to promote self-harm networks by connecting users to one another.
Critics, including psychologists and researchers, accuse Meta of prioritizing engagement over safety, with vulnerable teens at risk of severe harm.
The findings suggest potential non-compliance with the EU’s Digital Services Act.
The content teens watch there is questionable.