• TheTechnician27@lemmy.world · 2 days ago

    It’s a two-pass solution, but it makes it a lot more reliable.

    So your technique to “make it a lot more reliable” is to ask an LLM a question, then run the LLM’s answer through an equally unreliable LLM to “verify” the answer?

    We’re so doomed.

    • Apepollo11@lemmy.world · 1 day ago (edited)

      Give it a try.

      The key is in the different prompts. I don’t think I should really have to explain this, but different prompts produce different results.

      Ask it to create something, it creates something.

      Ask it to check something, it checks something.

      Is it flawless? No. But it’s pretty reliable.

      It’s literally free to try it now using ChatGPT.
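
      If it helps, the create-then-check idea looks roughly like this as code (a minimal sketch using the OpenAI Python SDK; the model name, prompts, and the ask() helper are placeholders I picked, not anything official):

      ```python
      # Minimal sketch of the two-pass idea: one prompt to create something,
      # then a second, different prompt to check it.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def ask(prompt: str) -> str:
          """Send a single prompt and return the model's text reply."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": prompt}],
          )
          return response.choices[0].message.content

      # Pass 1: ask it to create something.
      question = "Write a Python function that parses an ISO 8601 date string."
      draft = ask(question)

      # Pass 2: ask it to check the first answer, using a different prompt.
      review = ask(
          "You are reviewing someone else's answer to a question. "
          "List any mistakes you find, or reply 'Looks correct' if there are none.\n\n"
          f"Question: {question}\n\nAnswer:\n{draft}"
      )

      print(review)
      ```

      Two calls instead of one, but the second prompt puts the model in a checking frame rather than a creating one, which is the whole point.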

        • Apepollo11@lemmy.world · 1 day ago

          Hey, maybe you do.

          But I’m not arguing anything contentious here. Everything I’ve said is easily testable and verifiable.