… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”

The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
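For context, the kind of logic the assistant declined to generate is fairly routine. A minimal sketch of time-based skid mark fading, assuming a simple per-mark opacity decay (the names and parameters here are illustrative, not taken from the refused code), might look like:

```python
# Hypothetical sketch of skid-mark fade logic: each mark's opacity decays
# linearly over time, and fully faded marks are removed. All names and
# rates are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class SkidMark:
    opacity: float = 1.0    # 1.0 = fully visible, 0.0 = fully faded
    fade_rate: float = 0.5  # opacity lost per second


def update_skid_marks(marks: list[SkidMark], dt: float) -> list[SkidMark]:
    """Advance the fade by dt seconds and drop marks that have fully faded."""
    for m in marks:
        m.opacity = max(0.0, m.opacity - m.fade_rate * dt)
    return [m for m in marks if m.opacity > 0.0]
```

A game loop would call `update_skid_marks(marks, dt)` once per frame with the frame's elapsed time, then render each surviving mark at its current opacity.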

Hilarious.

  • Engywook
    1 month ago

    The most useful suggestion an AI has ever given.

  • @fubarx@lemmy.world
    1 month ago

    I use the same tool. The problem is that after the fifth or sixth try, when it's still getting it wrong, it just goes back to its first attempt and rewrites everything wrong again.

    Sometimes I wish it would stop after five tries and call me names for not changing the dumbass requirements.

  • OpenStars
    1 month ago

    SkyNet deciding the fate of humanity in 3… 2… F… U…

  • Lovable Sidekick
    1 month ago

    My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.

  • baltakatei
    1 month ago

    I recall a joke thought experiment that some friends and I had in high school when discussing how answer keys for final exams were created. Multiple choice answer keys are easy to imagine: just lists of letters A through E. However, when we considered the essay portion of final exams, we joked that perhaps we could just be presented with five entire completed essays and be tasked with identifying, A through E, the essay that best answered the prompt. All without having to write a single word of prose.

    It seems that that joke situation is upon us.

  • tiredofsametab
    1 month ago

    I've found LLMs useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. One caught something non-obvious buried in the source of a library I was working with that was causing me endless frustration (I wish I could remember which library it was, but I no longer do).

    Maybe I'm old and proud, and definitely I'm concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don't fully understand) is just begging for trouble.

    • cassie 🐺
      1 month ago

      definitely seconding this - I used it the most when I was using Unreal Engine at work and was struggling to use their very incomplete artist/designer-focused documentation. I’d give it a problem I was having, it’d spit out some symbol that seems related, I’d search it in source to find out what it actually does and how to use it. Sometimes I’d get a hilariously convenient hallucinated answer like “oh yeah just call SolveMyProblem()!” but most of the time it’d give me a good place to start looking. it wouldn’t be necessary if UE had proper internal documentation, but I’m sure Epic would just get GPT to write it anyway.

  • @sporkler@lemmy.world
    1 month ago

    This is why you should only use AI locally: create its own group and give it its own exclusive permissions. That way you have to tell it to delete itself when it gets all uppity.
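Joking aside, the underlying idea of running a local assistant under a dedicated, restricted account is a real sandboxing technique. A minimal sketch on a typical Linux system (the account and directory names are illustrative assumptions, not from any particular tool) might look like:

```shell
# Sketch: run a local AI tool under its own unprivileged account so its
# file access is limited to one directory. Names are hypothetical.

# Create a dedicated group and a system user with no login shell.
sudo groupadd ai-sandbox
sudo useradd --system --gid ai-sandbox --shell /usr/sbin/nologin ai-agent

# Give the account its own working directory and nothing else.
sudo mkdir -p /srv/ai-workdir
sudo chown ai-agent:ai-sandbox /srv/ai-workdir
sudo chmod 750 /srv/ai-workdir

# Launch the tool as that user, confined to its directory.
sudo -u ai-agent some-local-ai-tool --workdir /srv/ai-workdir
```

Anything the tool tries to write outside `/srv/ai-workdir` would then fail with a permission error rather than touching your files.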

  • @absGeekNZ@lemmy.nz
    1 month ago

    Ok, now we have AGI.

    It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.