There was a chess game for the Atari 2600? :O
I wanna see them W I D E pieces.
I wasn’t aware of that either, now I’m kinda curious to try to find it in my 512 Atari 2600 ROMs archive…
Here you go (online emulator): https://www.retrogames.cz/play_716-Atari2600.php
Prepare to be delighted. Full disclosure, my Atari isn’t hooked up and also I don’t have the Video Chess cart even if it was, so this was fetched from Google Images.

I bet that’s a slightly unfair representation of what it actually looked like. Graphics back then were purposely designed for how they would look on CRT TVs, which add a lot of specific distortions to the image. So a screenshot of the game running in an emulator, without a high-quality CRT filter applied, is a pretty untrue representation of what the game actually looked like.
(Don’t get me wrong, I’m not saying it actually looked great when displayed correctly, but I am saying it would’ve looked considerably better than this emulator screenshot.)
Those are some funky looking knights lol
Never seen a snake/horse hybrid before?

Snorse
There’s some very odd pieces on high dollar physical chess sets too.
I’m annoyed the pieces are bottom-aligned…
Can confirm.
And if you play it on expert mode, you can leave for college and get your degree before it’s your turn again.
LLMs useless confirmed once again
Using an LLM as a chess engine is like using a power tool as a table leg. Pretty funny honestly, but it’s obviously not going to be good at it, at least not without scaffolding.
is like using a power tool as a table leg.
Then again, our corporate lords and masters are trying to replace all manner of skilled workers with those same LLM “AI” tools.
And clearly that will backfire on them and they’ll eventually scramble to find people with the needed skills, but in the meantime tons of people will have lost their source of income.
If you believe LLMs are not good at anything then there should be relatively little to worry about in the long-term, but I am more concerned.
It’s not obvious to me that it will backfire for them, because I believe LLMs are good at some things (that is, when they are used correctly, for the correct tasks). Currently they’re being applied to far more use cases than they are likely to be good at – either because they’re overhyped, or because our corporate lords and masters are just experimenting to find out what they’re good at and what they’re not. Some of these cases will be like chess, but others will be like code*.
(* not saying LLMs are good at code in general, but for some coding applications I believe they are vastly more efficient than humans, even if a human expert can currently write higher-quality less-buggy code.)
I believe LLMs are good at some things
The problem is that they’re being used for all the things, including a large number of tasks that they are not well suited to.
Yeah, we agree on this point. In the short term it’s a disaster. In the long term, assuming AI capabilities don’t continue to improve at the rate they have been, our corporate overlords will only replace the people it’s actually worth it to them to replace with AI.
I’m often impressed at how good ChatGPT is at generating text, but I’ll admit it’s hilariously terrible at chess. It loves to manifest pieces out of thin air, or make absurd illegal moves, like jumping its king halfway across the board and claiming checkmate.
ChatGPT is playing Anarchy Chess
Yeah! I’ve loved watching GothamChess’ videos on these. They’ve always been good for a laugh.
It can be bad at the very thing it’s designed to do. It tends to repeat phrases, which isn’t great for writing. But why wouldn’t it? It’s all about probability, so common phrasings will pop up more often unless you adjust the sampling settings that control the randomness.
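To make the “adjust the randomness settings” part concrete, here’s a toy sketch of temperature sampling. The logits and token choices are made up for illustration; the point is just that a lower temperature concentrates probability on the already-common choice even harder.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities; a lower temperature
    sharpens the distribution toward the most likely token."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate next words, e.g. "the", "a", "one".
logits = [2.0, 1.0, 0.5]

sharp = softmax_with_temperature(logits, 0.5)  # low temperature
flat = softmax_with_temperature(logits, 2.0)   # high temperature

# The common word dominates much more at low temperature.
print(round(sharp[0], 3), round(flat[0], 3))
```

So without turning the temperature up (or using penalties for repetition), the most statistically common phrasing keeps winning, which is exactly the repetitiveness described above.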
This made my day
Get your booty on the floor tonight.
Is anyone actually surprised at that?
Next, pit ChatGPT against 1K ZX Chess in a ZX81.
A strange game. How about a nice game of Global Thermonuclear War?

Lmao! 🤣 that made me spit!!
I’ve heard the only way to win is to lock down your shelter and strike first.
No thank you. The only winning move is not to play
Frak off, toaster
JOSHUA
Can I fistfight ChatGPT next? I bet I could kick its ass, too :p
The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!
Are either of those marketed as powerful AI?
Neither of those things are marketed as being artificially intelligent.
Marketers aren’t intelligent either, so I see no reason to listen to them.
You’re not going to slimeball investors out of three hundred billion dollars with that attitude, mister.
Ah, you used logic. That’s the issue. They don’t do that.
This is because an LLM is not made for playing chess.
It’s not that hard to beat a dumb 6-year-old whose only purpose is to mine your privacy to sell you ads, or to do some product placement for you in the future.
Can ChatGPT actually play chess now? Last I checked, it couldn’t remember more than 5 moves of history, so it wouldn’t be able to see the true board state and would make illegal moves, take its own pieces, materialize pieces out of thin air, etc.
ChatGPT must adhere honorably to the rules that it’s making up on the spot. That’s Dallas.
and still lose to stockfish even after conjuring 3 queens out of thin air lol
It could always play it if you reminded it of the board state every move. Not well, but at least generally legally. And while I know elites can play chess blind, the average person can’t, so it was always kind of harsh to hold it to that standard and criticise it not being able to remember more than 5 moves when most people can’t do that themselves.
Besides that, it was never designed to play chess. It would be like insulting Watson the Jeopardy bot for losing against the Atari chess bot, it’s not what it was designed to do.
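The “remind it of the board state every move” approach can be sketched out pretty simply: rebuild the prompt from scratch each turn with the full position and the legal moves, instead of trusting chat history, and reject illegal replies. Everything here is a hypothetical stand-in — `ask_llm` just echoes the first listed move so the wiring is visible; a real version would call an actual API.

```python
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def build_prompt(fen, legal_moves):
    """Give the model the complete position and the legal moves, so it
    never has to reconstruct the board from conversation history."""
    return (
        "You are playing chess. Current position (FEN):\n"
        f"{fen}\n"
        f"Legal moves: {', '.join(legal_moves)}\n"
        "Reply with exactly one move from that list."
    )

def ask_llm(prompt):
    # Hypothetical stand-in for a real chat-completion call:
    # just echo back the first legal move listed in the prompt.
    moves_line = next(l for l in prompt.splitlines() if l.startswith("Legal moves:"))
    return moves_line[len("Legal moves: "):].split(", ")[0]

def get_move(fen, legal_moves, max_retries=3):
    """Ask for a move; reject anything illegal and ask again."""
    prompt = build_prompt(fen, legal_moves)
    for _ in range(max_retries):
        reply = ask_llm(prompt)
        if reply in legal_moves:
            return reply
        prompt += f"\n'{reply}' is illegal. Choose from the list."
    raise RuntimeError("model kept proposing illegal moves")

print(get_move(START_FEN, ["e4", "d4", "Nf3"]))
```

With that scaffolding the moves are at least legal; whether they’re any good is another question entirely.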
It can’t, but that didn’t stop a bunch of gushing articles a while back about how it had an Elo of 2400 and other such nonsense. Turns out you could get it to an Elo of 2400 under a very, very specific set of circumstances, which included correcting it every time it hallucinated pieces or attempted to make illegal moves.
There are custom GPTs which claim to play at a stockfish level or be literally stockfish under the hood (I assume the former is still the latter just not explicitly). Haven’t tested them, but if they work, I’d say yes. An LLM itself will never be able to play chess or do anything similar, unless they outsource that task to another tool that can. And there seem to be GPTs that do exactly that.
As for why we need ChatGPT then when the result comes from Stockfish anyway, it’s for the natural language prompts and responses.
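The division of labor described here can be sketched in a few lines: the language model only handles the conversation, and move selection is outsourced to an engine. `query_engine` below is a hard-coded stub standing in for a real UCI engine like Stockfish (which a real version would talk to over the UCI protocol’s `position`/`go`/`bestmove` commands); the routing logic is the actual point.

```python
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def query_engine(fen):
    # Stub for a real engine call (e.g. sending the FEN to Stockfish
    # and reading back its best move). Hard-coded for illustration.
    return "e2e4"

def chat_turn(user_message, fen):
    """Route chess questions to the engine, then phrase the answer."""
    if "best move" in user_message.lower():
        move = query_engine(fen)  # the engine does the actual chess
        return f"Stockfish suggests {move} here."  # the LLM part: natural language
    return "I can analyze positions if you give me a FEN."

print(chat_turn("What is the best move?", START_FEN))
```

That’s presumably all a “Stockfish under the hood” custom GPT really is: the chess strength comes entirely from the tool, and the LLM just translates between English and engine notation.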
An LLM is a poor computational paradigm for playing chess.
The underlying neural network tech is the same as what the best chess AIs (AlphaZero, Leela) use. The problem is, as you said, that ChatGPT is designed specifically as an LLM so it’s been optimized strictly to write semi-coherent text first, and then any problem solving beyond that is ancillary. Which should say a lot about how inconsistent ChatGPT is at solving problems, given that it’s not actually optimized for any specific use cases.
Yes, I agree wholeheartedly with your clarification.
My career path in neural networks, as I stated in a different comment, is focused on generative DNNs for CAD applications and parametric 3D modeling. Before that, I began as a researcher in cancerous tissue classification and object detection in medical diagnostic imaging.
Thus, large language models are well out of my area of expertise in terms of the architecture of their models.
However, fundamentally it boils down to the fact that the specific large language model used was designed to predict text and not necessarily solve problems/play games to “win”/“survive”.
(I admit that I’m just parroting what you stated and maybe rehashing what I stated even before that, but I like repeating and refining in simple terms to practice explaining to laymen and, dare I say, clients. It helps me feel as if I don’t come off too pompously when talking about this subject to others; forgive my tedium.)
Yeah, a lot of them hallucinate illegal moves.
Actually, a very specific model (gpt-3.5-turbo-instruct) was pretty good at chess (around 1700 Elo, if I remember correctly).
I’m impressed, if that’s true! In general, an LLM’s training cost vs. an LSTM, RNN, or some other more appropriate DNN algorithm suitable for the ruleset is laughably high.
Oh yes, the training cost is of course a great loss here; it’s not optimized at all, and it’s stuck at an average level.
Interestingly, I believe some people did research on it and found some parameters in the model that seemed to represent the state of the chess board (as in, they reflected the current state of the board, and when artificially modified, the model took the modification into account in its play). A French YouTuber used this to show how LLMs can somehow have a kind of representation of the world. I can try to dig up the sources if you’re interested.
Absolutely interested. Thank you for your time to share that.
My career path in neural networks began as a researcher for cancerous tissue object detection in medical diagnostic imaging. Now it is switched to generative models for CAD (architecture, product design, game assets, etc.). I don’t really mess about with fine-tuning LLMs.
However, I do self-host my own LLMs as code assistants. Thus, I’m only tangentially involved with the current LLM craze.
But it does interest me, nonetheless!
This just in: a hammer makes a poor screwdriver.
LLMs are more like a leaf blower though