schizoidman@lemm.ee to Technology@lemmy.world · English · 2 days ago
DeepSeek's distilled new R1 AI model can run on a single GPU | TechCrunch (techcrunch.com)
Irdial@lemmy.sdf.org · English · edited 2 days ago
On my Mac mini running LM Studio, it managed 1702 tokens at 17.19 tok/sec and thought for 1 minute. If accurate, high-performance models were better able to run on consumer hardware, I would use my 3060 as a dedicated inference device.
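For anyone wanting to reproduce that tok/sec number, here's a minimal sketch that times a request against LM Studio's local OpenAI-compatible server. It assumes the server is running on the default localhost:1234 endpoint; the model identifier is a placeholder, not the exact name LM Studio will show for the distilled R1 build.

```python
# Minimal sketch: measure generation throughput (tokens/sec) from a local
# LM Studio server. Assumes the OpenAI-compatible server is enabled on the
# default port (localhost:1234); the model name below is a placeholder.
import time
import requests

URL = "http://localhost:1234/v1/chat/completions"
payload = {
    "model": "deepseek-r1-distill-qwen-8b",  # placeholder identifier
    "messages": [
        {"role": "user", "content": "Explain KV caching in one paragraph."}
    ],
    "max_tokens": 512,
}

start = time.time()
resp = requests.post(URL, json=payload, timeout=600)
elapsed = time.time() - start

data = resp.json()
completion_tokens = data["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s "
      f"-> {completion_tokens / elapsed:.2f} tok/sec")
```

Note that wall-clock timing like this folds the "thinking" phase into the average, which is why a run can report ~17 tok/sec overall while still spending a full minute reasoning before the visible answer.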