Why not both?
Because it’s a naive take. Technology can help us make the world a better place for all - standing in the way are greedy pigs and ignorant, asshole politicians.
“AI” isn’t the problem, our approach to it is.
Kinda sorta.
AI, or rather LLMs, can barf out a lot of passable text quickly. That can be a useful starting point for something real, if a human mind is willing and able to review and repair it. It’s like having an idiot intern.
But the number of people who use LLMs in a way that reflects an understanding of their limitations is vanishingly small. Most people just don’t assume that something that looks valid needs to be fully and critically reviewed. That’s why we’ve had multiple cases of lawyers having ChatGPT write their legal briefs based on hallucinated legal precedent.
That’s not a problem of the technology though, that’s human idiocy.