• @over_clox@lemmy.world
      10 months ago

      I wasn’t aware of that either; now I’m kinda curious to try to find it in my archive of 512 Atari 2600 ROMs…

    • @FMT99@lemmy.world
      10 months ago

      Prepare to be delighted. Full disclosure: my Atari isn’t hooked up, and I don’t have the Video Chess cart anyway, so this was fetched from Google Images.

      • Beacon
        10 months ago

        I bet that’s a slightly unfair representation of what it actually looked like. Graphics back then were purposely designed for how they would look on CRT TVs, which add a lot of specific distortions to the image. So a screenshot of a game running in an emulator, without a high-quality CRT filter applied, is a very untrue representation of what the game actually looked like.

        (Don’t get me wrong, I’m not saying it actually looked great when displayed correctly, but I am saying it would’ve looked considerably better than this emulator screenshot.)

      • Optional
        10 months ago

        Can confirm.

        And if you play it on expert mode, you can leave for college and get your degree before it’s your turn again.

  • @jsomae@lemmy.ml
    10 months ago

    Using an LLM as a chess engine is like using a power tool as a table leg. Pretty funny honestly, but it’s obviously not going to be good at it, at least not without scaffolding.
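
The "scaffolding" mentioned here might look something like the sketch below: let the model propose moves, but wrap it in a loop that rejects anything illegal. Everything here is hypothetical — `llm_propose_move` stands in for a real model call, and in practice the legal-move set would come from a chess library rather than being hard-coded.

```python
import random

def llm_propose_move(board_state: str, history: list[str]) -> str:
    """Stand-in for a real LLM call: returns a plausible-looking move that
    may or may not be legal in the given position."""
    candidates = ["e2e4", "d2d4", "g1f3", "e1e8"]  # the last is nonsense
    return random.choice(candidates)

def scaffolded_move(board_state: str, history: list[str],
                    legal_moves: set[str], max_retries: int = 10) -> str:
    """The scaffolding: reject illegal suggestions and re-ask, since the
    model alone can't be trusted not to hallucinate moves."""
    for _ in range(max_retries):
        move = llm_propose_move(board_state, history)
        if move in legal_moves:
            return move
    # Give up on the model and fall back to any legal move.
    return next(iter(legal_moves))

legal = {"e2e4", "d2d4", "g1f3"}  # would come from a real chess library
move = scaffolded_move("startpos", [], legal)
assert move in legal
```

The point of the wrapper is that the guarantee of legality comes from the validator, not from the model.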

    • @kent_eh@lemmy.ca
      10 months ago

      is like using a power tool as a table leg.

      Then again, our corporate lords and masters are trying to replace all manner of skilled workers with those same LLM “AI” tools.

      And clearly that will backfire on them and they’ll eventually scramble to find people with the needed skills, but in the meantime tons of people will have lost their source of income.

      • @jsomae@lemmy.ml
        10 months ago (edited)

        If you believe LLMs are not good at anything then there should be relatively little to worry about in the long-term, but I am more concerned.

        It’s not obvious to me that it will backfire for them, because I believe LLMs are good at some things (that is, when they are used correctly, for the correct tasks). Currently they’re being applied to far more use cases than they are likely to be good at – either because they’re overhyped, or because our corporate lords and masters are just experimenting to find out what they’re good at and what they’re not. Some of these cases will be like chess, but others will be like code*.

        (* not saying LLMs are good at code in general, but for some coding applications I believe they are vastly more efficient than humans, even if a human expert can currently write higher-quality less-buggy code.)

        • @kent_eh@lemmy.ca
          10 months ago

          I believe LLMs are good at some things

          The problem is that they’re being used for all the things, including a large number of tasks that they are not well suited to.

          • @jsomae@lemmy.ml
            10 months ago

            Yeah, we agree on this point. In the short term it’s a disaster. In the long term, assuming AI’s capabilities don’t continue to improve at the rate they have been, our corporate overlords will only replace the workers it’s actually worth it to them to replace with AI.

  • @Nurse_Robot@lemmy.world
    10 months ago

    I’m often impressed at how good ChatGPT is at generating text, but I’ll admit it’s hilariously terrible at chess. It loves to manifest pieces out of thin air or make absurd illegal moves, like jumping its king halfway across the board and claiming checkmate.

    • Rhaedas
      10 months ago

      It can be bad at the very thing it’s designed to do. It often repeats phrases, which isn’t great for writing. But why wouldn’t it? It’s all about probability, so common phrasings will pop up more unless you adjust the variables that determine the randomness.
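
The "variables that determine the randomness" are mainly the sampling temperature. A minimal sketch of the idea, using toy next-token scores I made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw next-token scores to probabilities. Lower temperature
    sharpens the distribution, so the most common continuation wins
    almost every time; higher temperature flattens it."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores: a stock phrase the model has seen constantly vs. two rarer ones.
logits = [3.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)

# The stock phrase dominates when sampling is "cold", which is why the
# same phrasing keeps popping up unless you turn the randomness up.
assert cold[0] > hot[0]
```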

  • @Lembot_0003@lemmy.zip
    10 months ago

    The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!

  • @vane@lemmy.world
    10 months ago

    It’s not that hard to beat a dumb 6-year-old whose only purpose is to mine your privacy to sell you ads, or to product-place some shit for you in the future.

  • @Furbag@lemmy.world
    10 months ago

    Can ChatGPT actually play chess now? Last I checked, it couldn’t remember more than 5 moves of history, so it wouldn’t be able to see the true board state and would make illegal moves, take its own pieces, materialize pieces out of thin air, etc.

    • Robust Mirror
      10 months ago (edited)

      It could always play if you reminded it of the board state every move. Not well, but at least generally legally. And while I know elite players can play chess blind, the average person can’t, so it was always kind of harsh to hold it to that standard and criticise it for not being able to remember more than 5 moves when most people can’t do that themselves.

      Besides that, it was never designed to play chess. It would be like insulting Watson, the Jeopardy bot, for losing against the Atari chess bot; it’s not what it was designed to do.
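
"Reminding it of the board state every move" usually means putting the full position into every prompt. A small sketch of what that prompt-building could look like; the exact wording and the use of FEN/UCI notation are my assumptions, not anything a particular product does:

```python
def build_chess_prompt(fen: str, history: list[str]) -> str:
    """Restate the full position on every turn, so the model never has to
    rely on its own (unreliable) memory of earlier moves."""
    lines = [
        "You are playing chess. Reply with one legal move in UCI notation.",
        f"Current position (FEN): {fen}",
        "Moves so far: " + (" ".join(history) if history else "(none)"),
        "Your move:",
    ]
    return "\n".join(lines)

# Position after 1. e4, with the move history spelled out explicitly.
prompt = build_chess_prompt(
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1",
    ["e2e4"],
)
print(prompt)
```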

    • @skisnow@lemmy.ca
      10 months ago (edited)

      It can’t, but that didn’t stop a bunch of gushing articles a while back about how it had an Elo of 2400 and other such nonsense. It turns out you could get it to an Elo of 2400 under a very, very specific set of circumstances, which included correcting it every time it hallucinated pieces or attempted an illegal move.

    • Pamasich
      10 months ago

      There are custom GPTs which claim to play at Stockfish level, or to be literally Stockfish under the hood (I assume the former is still the latter, just not explicitly). I haven’t tested them, but if they work, I’d say yes. An LLM itself will never be able to play chess or do anything similar unless it outsources that task to another tool that can, and there seem to be GPTs that do exactly that.

      As for why we need ChatGPT then when the result comes from Stockfish anyway, it’s for the natural language prompts and responses.
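
That division of labour might be sketched like this: the LLM-ish layer handles the conversation, and the chess itself is delegated. The engine call is stubbed here (a real one would talk UCI to a Stockfish process), and all the function names are made up for illustration:

```python
import re

def engine_best_move(fen: str) -> str:
    """Stub for the engine call. A real implementation would speak UCI to a
    Stockfish process and read back its 'bestmove' line."""
    return "e7e5"  # canned reply, good enough for the sketch

def chat_chess_turn(user_message: str, fen: str) -> str:
    """The LLM-shaped part of the pipeline: pull the move out of natural
    language, let the engine do the actual chess, and phrase the reply
    conversationally."""
    match = re.search(r"\b([a-h][1-8][a-h][1-8])\b", user_message)
    if not match:
        return "I couldn't find a move in that message; try UCI notation like e2e4."
    user_move = match.group(1)
    reply_move = engine_best_move(fen)  # all the chess strength lives here
    return f"Nice, {user_move}! I'll answer with {reply_move}."

reply = chat_chess_turn(
    "I'll open with e2e4, good luck!",
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1",
)
print(reply)
```

The natural-language wrapping is the only part the LLM adds; the playing strength is entirely the engine's.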

  • NeilBrü
    10 months ago

    An LLM is a poor computational paradigm for playing chess.

    • @Bleys@lemmy.world
      10 months ago

      The underlying neural-network tech is the same as what the best chess AIs (AlphaZero, Leela) use. The problem is, as you said, that ChatGPT is designed specifically as an LLM, so it’s been optimized strictly to write semi-coherent text first, and any problem-solving beyond that is ancillary. Which should say a lot about how inconsistent ChatGPT is at solving problems, given that it’s not actually optimized for any specific use case.

      • NeilBrü
        10 months ago (edited)

        Yes, I agree wholeheartedly with your clarification.

        My career path in regards to neural networks, as I stated in a different comment, is focused on generative DNNs for CAD applications and parametric 3D modeling. Before that, I began as a researcher in cancerous-tissue classification and object detection in medical diagnostic imaging.

        Thus, large language models are well out of my area of expertise in terms of the architecture of their models.

        However, fundamentally it boils down to the fact that the specific large language model used was designed to predict text, not necessarily to solve problems or play games to “win”/“survive”.

        (I admit that I’m just parroting what you stated and maybe rehashing what I stated even before that, but I like repeating and refining in simple terms to practice explaining to laymen and, dare I say, clients. It helps me feel as if I don’t come off too pompously when talking about this subject to others; forgive my tedium.)

      • NeilBrü
        10 months ago

        I’m impressed, if that’s true! In general, an LLM’s training cost is laughably high compared to an LSTM, RNN, or some other DNN architecture better suited to the ruleset.

        • @Takapapatapaka@lemmy.world
          10 months ago

          Oh yes, the training cost is of course a big loss here; it’s not optimized at all, and it’s stuck at an average level.

          Interestingly, I believe some people did research on this and found parameters in the model that seem to represent the state of the chess board (as in, they reflect the current state of the board, and when they are artificially modified, the model takes the modification into account in its play). A French youtuber used this to show how LLMs can somehow have a kind of representation of the world. I can try to dig up the sources if you’re interested.
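
Research like that typically uses a linear probe: a small classifier trained to read some board fact straight out of the model's activations. A toy sketch of the idea, with synthetic "activations" standing in for real transformer hidden states (which we obviously don't have here); every name and the data-generating setup are my own invention for illustration:

```python
import math
import random

random.seed(0)
D = 8  # toy hidden-state width; real transformer activations are far wider

# Synthetic stand-in for model activations: assume square occupancy is
# linearly encoded in the hidden state, which is exactly the hypothesis
# a linear probe tests.
true_direction = [random.uniform(-1, 1) for _ in range(D)]

def make_example():
    hidden = [random.uniform(-1, 1) for _ in range(D)]
    occupied = sum(w * h for w, h in zip(true_direction, hidden)) > 0
    return hidden, 1 if occupied else 0

train = [make_example() for _ in range(500)]

# The probe itself: logistic regression on the activations, trained by SGD.
probe = [0.0] * D
lr = 0.5
for _ in range(50):
    for hidden, y in train:
        z = sum(w * h for w, h in zip(probe, hidden))
        z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp()
        p = 1 / (1 + math.exp(-z))
        grad = p - y
        probe = [w - lr * grad * h for w, h in zip(probe, hidden)]

test_set = [make_example() for _ in range(200)]
correct = sum(
    (sum(w * h for w, h in zip(probe, hidden)) > 0) == (y == 1)
    for hidden, y in test_set
)
accuracy = correct / len(test_set)
# High accuracy means the "board fact" is readable from the activations
# with a linear map -- the sense in which the model "represents" the board.
assert accuracy > 0.85
```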

          • NeilBrü
            10 months ago (edited)

            Absolutely interested. Thank you for taking the time to share that.

            My career path in neural networks began with research on cancerous-tissue object detection in medical diagnostic imaging. Now it has switched to generative models for CAD (architecture, product design, game assets, etc.). I don’t really mess about with fine-tuning LLMs.

            However, I do self-host my own LLMs as code assistants. Thus, I’m only tangentially involved with the current LLM craze.

            But it does interest me, nonetheless!