• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 3rd, 2023

  • Multiple reasons, but it starts with terrible education. At best they don’t know how to verify information, and at worst they don’t want to — they believe they win when they piss off someone they don’t like. This means the information they get on current topics is generally pushed by someone either paying an algorithm to target them or by things like astroturfing.

    There’s a lot of value for the hegemonic class in getting people to not care about the climate, endless war, genocide, and human rights in general.

    And then you have the out-of-touch authoritarian liberals/neoliberals who control the liberal side of the mainstream media.

  • No doubt LLMs are not the be-all and end-all. That said, especially after seeing what the next-gen ‘thinking models’ like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate beyond my best optimistic guess of 2 years ago; hell, even 6 months ago.

    Even if all progress stopped tomorrow on the software side, the benefits from purpose-built silicon would make them even cheaper and faster. And that purpose-built hardware is coming very soon.

    Open models are about 4–6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low/mid-end consumer hardware.


  • I’d agree with the first part, but to say all AI is snake oil is just untrue and out of touch. There are a lot of companies that slap “AI” on literally anything, and I can see how that is snake oil.

    But real, innovative AI — everything from protein folding to robotics — is here to stay, good or bad. It’s already too valuable for governments to ignore. And AI is improving at a rate that I think most are underestimating (faster than Moore’s law).