misk@sopuli.xyz to Technology@lemmy.world · English · 1 month ago
Apple study exposes deep cracks in LLMs' "reasoning" capabilities (arstechnica.com)
HeyListenWatchOut@lemmy.world · 1 month ago
…a spellchecker on steroids. Ask literally any of the LLM chatbots out there still using headless GPT instances from 2023 how many Rs there are in "strawberry," and enjoy. 🍓
Semperverus@lemmy.world · 1 month ago
This problem is due to the fact that the AI isn't using English words internally; it's tokenizing. There are no Rs in {35006}.
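For anyone curious what that looks like in practice, here's a minimal sketch using OpenAI's open-source tiktoken tokenizer. The exact IDs and sub-word splits shown in the comments are illustrative; they depend on the model's vocabulary.

```python
# Minimal sketch: how a GPT-style tokenizer sees "strawberry".
# Assumes the open-source `tiktoken` package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # vocabulary used by GPT-3.5/4-era models
token_ids = enc.encode("strawberry")

print(token_ids)                              # integer IDs, e.g. [496, 675, 15717] (illustrative)
print([enc.decode([t]) for t in token_ids])   # sub-word pieces, e.g. ['str', 'aw', 'berry']

# The model never receives individual letters, only these IDs, so
# "count the Rs" has no direct answer at the level of its input representation.
```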
Sterile_Technique@lemmy.world · 1 month ago
That was both hilarious and painful. And I don't mean to always hate on it - the tech is useful in some contexts, I just can't stand that we call it 'intelligence'.
Pieisawesome@lemmy.world · 1 month ago
LLMs don't see words, they see tokens. They were always just guessing.