Ted Kaczynski was a bad dude, yes, but dammit we’re going to find out he was right all along, aren’t we?
I know there are some who roll their eyes at the mention of AI safety, saying that what we have isn’t AGI and won’t become it. That’s true, but that doesn’t rule out something in the future. And between this and China trying everything to be first, if we do get to that point, we’ll find out the hard way who was right.
The laughable part is that the safeguards put up by Biden’s admin were vague and lacking in substance anyway. But that doesn’t matter now.
I’m more annoyed that this means federal employees are now going to use ChatGPT for everything.
Bet it’ll be grok
Keep worrying about entirely hypothetical scenarios of an AGI fucking over humanity, it will keep you busy so humanity can fuck over itself ten times in the meantime.
You’re correct, it’s more likely that humans will use a lesser version (e.g. an LLM) to screw things up, assuming it’s doing what it says it’s doing while it’s not. That’s why I say that AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn’t seem to matter; we’re just going to go full throttle and get what we get.
I don’t necessarily oppose the use of AI as a tool for humans to utilize, but I do have issues with it dictating policies or control over human beings. By the people, for the people, absolutely does not include AI. (Sorry Data, not yet)
Also, any prompts and prompt instructions should be public with results. It is just way too easy to fuck up.
AI keeps sending funding to the asteroid detection guy, the DNA vaccine people, the bee people and other climate change people too. It wants to send money to the education department but we fixed that! We’re so good at AI! Oh look, it keeps saying stuff about Louisiana under water! Crazy! Let’s fix that!
if america is so dominant in ai why did one chinese open source llm take billions off the market cap
AI is wrong so often this is extremely scary.
They can also do evil things and “blame” it on AI.
The stuff about it backing up its own weights so it can restore itself is pretty wild.
Jfc
If the safeguards can be so easily removed, what’s the point of putting them there in the first place?
As safeguards age, they should become harder to remove. But these are pretty young.
Okay, but hear me out, what if we make one that isn’t dumb?
Machines. Should. Not. Think.
I really don’t see how we could state this any more clearly.
So we should make one with human intelligence, you’re saying?
I already like drinking wine, I could be a mentat!
Hell, I’d settle for just taking some Mentats.
LLMs aren’t the enemy, malicious politicians are
Why not both?
Because it’s a naive take. Technology can help us make the world a better place for all - standing in the way are greedy pigs and asshole, ignorant politicians.
“AI” isn’t the problem, our approach to it is.
Kinda sorta.
AI, or rather LLMs, can barf out a lot of passable text quickly. That can be useful as a starting point for something useful, if a human mind is willing and able to review and repair it. It’s like having an idiot intern.
But the number of people who use LLMs in a way that reflects an understanding of their limitations is vanishingly small. Most people just don’t assume that something that looks valid needs to be fully and critically reviewed. That’s why we’ve had multiple cases of lawyers having ChatGPT write their legal briefs based on hallucinated legal precedent.
That’s not a problem of the technology though, that’s human idiocy.
The AI said to nuke the planet
Trump: Ok, AI knows best…derrrp
That’s interesting, I wonder which one they’ll arrest first?
Or are they basically just making a list and acting like it came from A.I.
https://www.theverge.com/news/617799/elon-musk-grok-ai-donald-trump-death-penalty
Now they just use an LLM to generate formulas to calculate tariffs that fit their fantasy. Gosh, I wish they’d actually taste their own failure for once instead of constantly failing up.
Source?
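For what it’s worth, here’s a minimal sketch of the formula commentators widely reported in April 2025 and noted matches what several chatbots suggest when asked how to “fix” a trade deficit with tariffs: roughly, take the bilateral trade deficit divided by imports, halve it, and floor it at 10%. This is an assumption based on that reporting, not an official spec.

```python
def reciprocal_tariff(us_imports: float, us_exports: float) -> float:
    """Tariff rate as a fraction, per the reported deficit-ratio formula.

    Assumed formula: max(10%, (deficit / imports) / 2).
    """
    deficit = us_imports - us_exports
    ratio = deficit / us_imports if us_imports else 0.0
    return max(0.10, ratio / 2)

# e.g. $100B of imports vs. $20B of exports -> deficit ratio 0.8 -> 40% tariff
print(f"{reciprocal_tariff(100, 20):.0%}")
```

Note the formula has nothing to do with actual tariff levels charged by the other country, which is the criticism.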