“That’s right voters I’m spineless and have no original ideas” -every politician
Meanwhile the American president uses no intelligence at all. Artificial or otherwise
It’s literally a cult.
I’ve dreamed of such a cult at some point, like Chinese fortune cookies, but those are less exploitable.
It surely can’t hurt if it’s to sanity-check your highly paid advisors…
Except those prompts are retained by OpenAI, and you don’t know who’s got access to that. They’ve had chats leak before.
If that’s your problem with this story, then you’re just saying they should run a local instance of ChatGPT on their own computers
I mean, I don’t really love that he uses AI to begin with, but getting an opinion on top of what you get from your advisors that you hired could be good in theory. Except that most AIs are programmed to be equally sycophantic and they’re just plain wrong most of the time.
To me the most irresponsible part is sending your thoughts as the leader of a country straight to a US run company, you just know those prompts are being digested by the NSA directly, and possibly other intelligence agencies.
The typical pattern for leaders is to get “second opinions” from advisors who end up telling them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?
I would rather have the politicians consult a plain old magic 8 ball than one controlled by Scam Altman.
Most LLMs are literally "tell you whatever you want to hear" machines, unfortunately. I’ve gotten high praise from ChatGPT for all my ideas until I go “but hang on, wouldn’t this factor stop it from being feasible” and then it agrees with me that my original idea was a bit shit lmao
“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.
I mean, sending queries to a search engine or to an LLM is about the same in terms of exposing one’s queries.
If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.
I really don’t get it. These things are brand new. How can anyone get so into them so quickly? I don’t take advice from people I barely know, much less from ones that can be so easily and quickly reprogrammed.
Because that’s what it is really trained for: to produce correct grammar and plausible sentences. It’s a truly unbelievable leap from preceding approaches to computer-generated text: in a matter of a few years, you went from little more than gibberish to output so incredibly realistic that it can be mistaken for intelligent conversation, easily passing the Turing Test (I actually went to Wikipedia to check and, indeed, this was verified this year; note that this applies to recent models in particular)
So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren’t (yet) well-equipped to deal with something which appears to be human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won’t readily admit to not knowing something).
One thing I struggle with about AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it’s getting everything slightly wrong; I just don’t know enough to notice.
I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it while most people watching just assume that’s how your job works.
This is what I call “confidently wrong”. If you ask it about things you have no clue about, it seems incredibly well-informed and insightful. Ask it something you know deeply, and you’ll easily see it’s just babbling and spouting nonsense - sure makes you wonder about those earlier statements it made, doesn’t it?
This is the unintentional uncanny valley for me in AI.
I (was forced to) use ChatGPT for work. It can talk about everything, sounds very confident, and reliably seems to come up with something to help you solve your problems.
You talk with it about some niche content and suddenly have an ardent fan of said niche content responding. It surely knows every little bit of info of that niche and surprises you with funny, but apt quotes from your favorite show in the middle of conversations about something else.
This is just from a tiny bit of interaction, while at work.
I can imagine people being completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive, and that remembers all your conversations. It literally remembers each word.
For many people it may be the first time in their lives that they’ve experienced a positive response to their thoughts. Not only that, they’ve also found someone eager to talk with them about it.
Everyone’s first use of chatbots should be on the thing they’re most knowledgeable about, so they can get an idea of how wrong it can be and how it can be useful. But you have to treat its output like something an eager, wet-behind-the-ears intern just did for you.
You can see it in this government’s policies
I would genuinely have more trust in a politician consulting I Ching than ChatGPT - at least the book of changes requires some thinking and interpretation, fucking hell…
Sigh, all the tech-bros who convinced the general public that AIs are in any way intelligent, or that they can think, reason, or understand concepts like truth/lies or right/wrong, should go on trial for crimes against humanity…
Most people don’t care whether AI is intelligent, whether it can think or reason or understand concepts. What matters is that it can give helpful replies, and it does that a lot. In my experience maybe 1 reply out of 10 is seriously flawed; the others are either mostly helpful or just tell me something I already knew until I reprompt for more, which, again, also works well most of the time (especially when you allow it to search for more information online). So if you wanted to say it’s dangerous in some ways, this is definitely not the proper way to say it, since neither it being dangerous nor it being right or wrong or helpful or useless has anything to do with intelligence, or the ability to think, reason, feel, comprehend, or whatever.
Some of y’all are crazy reactionary. There’s absolutely nothing wrong with asking an ai chatbot for an additional opinion. The ai shouldn’t be making the decisions, and the ai shouldn’t be the only way you look for opinions, but there’s nothing wrong with ai being ONE OF the opinions you consider
there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.
As I wrote in another comment, you can run a local instance of a ChatGPT-style AI on your own computer that doesn’t use a net connection
of course you can. why would a career politician who has very visibly been interested only in politics since his teens know how to do that?
Hah, true. But the point still stands that using an llm itself wasn’t a bad thing
i’d say it’s still bad because this is the leader of a government consulting with a stochastic parrot instead of experts.
Presumably it wasn’t instead of, it was in addition to, and therefore totally fine
it’s still not fine, as per my first point. it’s leaking information to foreign interests.
AI chat bots don’t have their own opinions. All they do is regurgitate other opinions, and you have no idea the motivation for how those opinions are weighted.
How is that any different from you? Objectively prove to everyone here that none of your opinions have ever been influenced by anything you’ve ever seen, read, or heard.
As I said in other comments, none of that matters. The point is to gather ideas that you otherwise might not have come across. And as also said already, AI-generated opinion pieces should never be the only opinions you hear.
What use is an opinion that can neither be explained nor defended by the person giving it? How is that useful to a person making decisions for millions of people?
It can do those things. And you need to re-read what’s already been said on this comments page, because how it’s useful has been explained several times already
It really can’t. It does not understand things.
It really can. I’ve used ai, I’ve personally seen it do it tons of times. It can make cohesive arguments backed up by reasonable rationales. It doesn’t need to “understand” anything to do this. Do you need me to write some prompts and copy n paste ai’s response to prove it to you?
Since it’s so easy to do, I just went ahead and did it. Here are my prompts and the AI’s replies. It will write any opinion you want it to talk about, and give cogent reasonings and arguments for that opinion.
Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is bad. Support this with clear arguments explaining your position
Why Removing Car Lanes for Bike Lanes Can Be Counterproductive
Reallocating car lanes to create bike lanes may seem progressive, but it can have unintended consequences that undermine its goals. In densely populated urban areas, vehicular traffic remains essential for commerce, emergency services, and commuting—especially for individuals who cannot rely on biking due to age, disability, or long travel distances. When car lanes are removed, congestion intensifies, leading to increased emissions as drivers idle longer and search for alternative routes.
Public transportation often depends on road infrastructure too. Buses trapped in traffic delays become less reliable, disproportionately affecting lower-income riders who depend on them. Additionally, essential services like ambulances and delivery trucks face new bottlenecks, risking delays that can have serious implications.
The assumption that cyclists will replace drivers ignores demographic and logistical realities. Bike lanes benefit a specific subset of urban users—but they shouldn’t come at the cost of broader mobility. Instead of eliminating car lanes, cities should consider expanding bike networks where space allows or investing in multimodal infrastructure that improves mobility for all users.
Urban planning should strive for balance, not substitution. Redesigning streets must enhance accessibility across the board, not create new friction between commuters who are simply trying to get where they need to go.
Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is good. Support this with clear arguments explaining your position
Opinion: Why Converting Car Lanes to Bike Lanes Benefits Everyone
Urban spaces must evolve to meet the needs of modern communities—and converting car lanes into bike lanes is one of the smartest, most equitable moves cities can make. First, bike lanes promote healthier lifestyles. By making cycling safer and more accessible, people are more likely to trade short car trips for active commuting, which supports physical health and lowers emissions.
Second, cities thrive on efficiency. Cars take up tremendous space and often carry just one passenger. Bikes, by contrast, use a fraction of that space and cause significantly less wear on infrastructure. Fewer car lanes mean reduced traffic volume over time as behavior adjusts, alleviating congestion—not worsening it.
Third, these changes make streets safer. Studies consistently show that well-designed bike infrastructure leads to fewer traffic collisions and fatalities. Cyclists separated from speeding vehicles feel safer, and drivers operate with greater awareness.
Finally, this shift reflects a cultural transformation: from car dominance to sustainable mobility. Prioritizing bike infrastructure signals that our cities value cleaner air, quieter streets, and more livable neighborhoods.
Reshaping roads isn’t just about bikes—it’s about building cities for people, not just machines. Let’s pedal forward together.
I’m going to say this as nicely as possible: you’ve been taken in. You may think it is good at this because it appears to be, and it relays information with confidence. Sometimes it even gets things entirely correct. But it is not good at these things and will often get things wrong. Important details are inaccurate or even entirely made up. It does not vet sources, and it could never do so because it does not understand anything it is relaying to you. It is a tool designed to give you a more or less plausible approximation of what you requested, but it can never truly understand what it’s talking about, so it is entirely unreliable.
You’re being downvoted because you bought the snake oil and are now trying to promote it to other people.
LLMs can defend what you tell it to defend. What are you on about?
No it cannot. It does not understand anything so it cannot actually defend its points. It can make something that looks like a defense, but it doesn’t understand what it is telling you. It can spit text back at you until the cows come home but none of it can ever be trusted or relied on.
it sounds like you’ve never used an LLM, mate.
You don’t need to get philosophical about the definition of understanding to realize they give you arguments as valid as anyone else would.
Just throw out LLM ideas you don’t find reasonable and only use ideas that you yourself find reasonable. You don’t instantly turn into a zombie when you use an LLM. You can still use your head.
It doesn’t work like that. Which is why con artists and propaganda function often by “offering advice”.
You protect yourself against “propaganda” by educating yourself and not falling for emotional appeals. Not reading LLM responses won’t save you from propaganda.
I certainly don’t want my attorney wasting time considering the opinions and motivations of the person that’s suing me.
Well you’re gonna have a shitty attorney then
Lmao. You people are fucking idiots.
Are you MAGA or something? Do you not understand how attorneys bill their clients?
Yes. By an hourly rate which includes consideration of your opponent’s position. Do you not understand how to develop a proper legal argument? My god, you people are stupid.
But it doesn’t know anything. At all. Does Sweden not have a fuck ton of people that are trained to gather intelligence?
It doesn’t matter if it knows anything or not. The purpose is to acquire other ideas that you and the people in your cabinet didn’t think of. Or ideas they didn’t want to say, because no one wants to tell the boss that their idea is bad. It’s a GOOD thing when a politician seeks out multiple different viewpoints to consider. It doesn’t matter if it’s a viewpoint created by “a fancy auto-complete” as some haters like to say
What a treasonous piece of shit.
Oh no, a man does research, of course Americans are upset here lmao
Asking an LLM isn’t research; he might as well have been consulting a magic 8 ball.
Sounds like you have a skill issue
His first opinion would be what the US tells him.
What a stupid fucking thing to admit. Let’s hope he’s out on his ass next election.
It’s weird for a head of state to consult their mentally challenged imaginary friend?
William Lyon Mackenzie King, the longest-serving Prime Minister of Canada, used to commune with spirits via psychic mediums, including the spirits of his dead dogs. It was only revealed after his death, but it was a big part of his life.
I agree it’s weird.
Didn’t Nancy Reagan, wife of former US President Ronald Reagan, do this as well? (Ronald was apparently not mentally fit for his last few years, either.)
Nor was he mentally fit for the first years
Bad news friend. The number of atheist heads of state is surprisingly low.