Yeah, this is where I’m at. Actual movie-level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.
I’d be overjoyed if we had decently functional AI that could be trusted to do the kinds of jobs humans don’t want to do, but instead we have hyped-up autocomplete that’s too unreliable to be trusted to run anything (see the shitshow of openclaw when people do).
There are places where machine learning has pushed real progress and will continue to, but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.
Folks don’t seem to realize what LLMs are; if they did, they wouldn’t be wasting trillions trying to stuff them into everything.
Like, yes, it is a minor technological miracle that we can build these massively multidimensional maps of human language use and use them to chart human-like vectors through language space that stay coherent for tens of thousands of tokens. But no amount of chaining these stochastic parrots together gets around the facts that a computer cannot be held responsible, that algorithms have no agency no matter how often you call them “agents”, and that the people who let chatbots make decisions must ultimately be culpable for those decisions.
It’s not “AI”; it’s an n-th-dimensional globe and the ruler we use to draw lines on that globe. Like all globes, it is at best a useful fiction representing a limited perspective on a much wider world.
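The “map of language” idea above can be sketched in a few lines: words become points in a vector space, and “distance” on the map is just the angle between vectors. The three-dimensional vectors here are made up for illustration (real models learn thousands of dimensions from text statistics), so treat this as a toy, not how any particular model works.

```python
import math

# Toy "map": each word is a point in a tiny 3-dimensional space.
# These coordinates are invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "apple": [0.1, -0.2, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# On this toy map, "king" sits closer to "queen" than to "apple".
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))
```

An LLM is doing something in this spirit at enormous scale: plotting a path through that space one token at a time. Nothing in the geometry confers agency or accountability.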
Yeah, the hype is really leaning on that singularitarian angle and the investor class is massively overextended.
I’m glad that the general public is finally coming down the hype cycle, because this peak of inflated expectations has lasted way too long, but it should have been obvious three years ago.
Like, I get that I’m supposedly brighter and better educated than most folks, but I really don’t feel like you need college-level coursework in futures studies to avoid obvious scams like cryptocurrency and “AI”.
I feel like it has to be deliberate, a product of marketing, because some of the most interesting new technologies have languished in obscurity for years, since their potential is disintermediative and wouldn’t offer a path to further expanding the corporate dominion over computing.
What we have now is “neat.” It’s freaking amazing it can do what it does. However, it is not the AI from science fiction.
Lots of it is very, very good and totally functional. It’s just that for normal people, “AI” is now synonymous with chatbots.
This is so well said.
I’m stealing this.
I’m going to use it to explain why I simultaneously have so much derision for modern AI while also enjoying it.
I like McDonald’s toys. I just don’t use them for big person work.