• Opinionhaver@feddit.uk · ↑12 ↓5 · 1 day ago

    I couldn’t agree more. Human moderators, especially unpaid ones, simply aren’t the way to go, and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent, but it’s an extremely blunt tool with a ton of collateral damage. I’d much rather tell an AI moderator what I am and am not interested in seeing and have it analyze the content to decide what needs to be filtered out.

    Take this thread for example:

    Cool. I think he should piss on the 3rd rail.

    This pukebag is just as bad as Steve. Fuck both of them.

    What a cunt.

    How else is anyone going to filter out hateful, zero-value content like this without an intelligent moderation system? People come up with new insults faster than I can add them to my filter list. AI could easily filter out 95% of toxic content like this.
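
    To be concrete about what I mean, here’s a minimal sketch of a per-user filter. Nothing in it is a real Lemmy feature; the scorer is a made-up stand-in for whatever model (an LLM, a hosted moderation API, a local classifier) would actually do the work:

    ```python
    # Minimal sketch of a per-user "AI moderator": the user sets a tolerance,
    # a model scores each comment, and anything over the limit gets hidden.
    # score_toxicity is a hypothetical stand-in, not a real Lemmy or library API.
    from typing import Callable

    def filter_feed(comments: list[str],
                    score_toxicity: Callable[[str], float],
                    tolerance: float = 0.8) -> list[str]:
        """Keep only comments whose toxicity score stays under the user's tolerance."""
        return [c for c in comments if score_toxicity(c) < tolerance]

    # Trivial keyword scorer, only here so the example runs end to end.
    def dummy_scorer(text: str) -> float:
        insults = {"cunt", "pukebag"}
        return 1.0 if any(word in text.lower() for word in insults) else 0.1

    feed = ["Interesting policy change.", "What a cunt."]
    print(filter_feed(feed, dummy_scorer))  # -> ['Interesting policy change.']
    ```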

    • Viri4thus@feddit.org · ↑4 ↓2 · 1 day ago

      Translation: An AI would allow me to maybe have an echo chamber since human moderators won’t work for me for free.

    • MissGutsy@lemmy.blahaj.zone · ↑3 · 1 day ago

      Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.

      Context: Last year there was a big wave of CSAM and gore spam across multiple instances. Some had to shut down temporarily because they couldn’t keep up with moderation. I don’t remember the name of the tool, but some people made a program that uses AI to try to recognize these types of images and filter them out (rough sketch of the idea at the end of this comment). This heavily reduced the amount of moderation needed during these attacks.

      Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn’t need to go through large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now; many of them have developed mental health issues through their work and don’t get any medical support. So no matter what you think of AI and whether it’s moral, this is actually one of the few good applications in my opinion.
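
      Here’s the rough sketch I mentioned. I don’t know how the real tool works internally, so the scoring function and thresholds below are pure assumptions about the general shape of a pre-upload filter:

      ```python
      # Hypothetical pre-upload image filter; unsafe_score stands in for whatever
      # model the real tool uses, and the thresholds are made up for illustration.
      def handle_upload(image_bytes: bytes, unsafe_score, *,
                        reject_above: float = 0.9,
                        review_above: float = 0.6) -> str:
          """Decide what happens to an image before it is stored or shown to anyone."""
          score = unsafe_score(image_bytes)   # callable: bytes -> float in [0, 1]
          if score >= reject_above:
              return "rejected"               # never stored, no human has to see it
          if score >= review_above:
              return "held_for_review"        # moderators only see borderline cases
          return "accepted"

      # Fake model that rates everything as borderline, just to show the flow:
      print(handle_upload(b"\x89PNG...", lambda img: 0.7))  # -> held_for_review
      ```

      The point of the middle "held_for_review" band is that humans only have to look at a small borderline slice instead of everything.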

      • mPony@lemmy.world · ↑3 · 1 day ago

        Moderators at Facebook have been arguing these points for a while now; many of them have developed mental health issues through their work and don’t get any medical support

        How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.

        • boonhet@lemm.ee · ↑1 · 1 day ago

          The real answer? They use people in countries like Nigeria that have fewer laws.

        • MissGutsy@lemmy.blahaj.zone · ↑1 · 1 day ago

          I agree, but it’s also not surprising. I think somebody else posted the article about Kenyan Facebook moderators somewhere in this comment section if you want to know more.

    • Womble@lemmy.world · ↑2 ↓1 · edited · 1 day ago

      Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?

      At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.

      That, combined with a level of human review for people who feel they have been unfairly auto-moderated, seems entirely reasonable to me.
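
      As a toy illustration of those "sliders": per-topic tolerances compared against per-topic scores from some classifier. The topic names, the scores, and the idea of a per-topic model are all assumptions made up for this example:

      ```python
      # Toy version of per-user tolerance sliders; topics, scores, and the notion
      # of a per-topic classifier are assumptions made for this illustration.
      def passes_sliders(scores: dict[str, float], sliders: dict[str, float]) -> bool:
          """Show a post only if every topic score stays within the user's tolerance."""
          return all(scores.get(topic, 0.0) <= limit for topic, limit in sliders.items())

      my_sliders = {"insults": 0.3, "politics": 0.9}   # low tolerance for insults
      post_scores = {"insults": 0.7, "politics": 0.2}  # as rated by some model
      print(passes_sliders(post_scores, my_sliders))   # -> False: hidden from this user
      ```

      Anything hidden this way could then land in the human-review queue mentioned above for people who think the auto-moderation got it wrong.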