• Womble@lemmy.world · edited · 13 hours ago

      He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.

      Oh no, the dystopian horror…

  • Eggyhead@fedia.io · 17 hours ago

    If it’s trained carefully, professionally, and responsibly, exclusively on bona fide medical research data, I can see it being a boon to healthcare professionals. I just don’t know if I can trust that that will happen in the timeline we live in.

    • gndagreborn@lemmy.world · 36 minutes ago

      OpenEvidence is a legit tool my colleagues and classmates use every day. OpenAI is leagues behind them, especially in terms of HIPAA compliance.

  • fullsquare@awful.systems · 19 hours ago

    damn i see that chatbots don’t want to stay behind rfk jr in body count

    will they learn that safety regulations are written in blood? who am i kidding, that’s not their blood

  • NarrativeBear@lemmy.world · edited · 18 hours ago

    Most PCs no longer have floppy disk readers or CD drives, so where are they going to put the placebo or drugs in? /s