• @flamingo_pinyata@sopuli.xyz
    7 months ago

    But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • Victor
      7 months ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.

    • @glitchdx@lemmy.world
      7 months ago

      The fact that it’s not a person is a feature, not a bug.

      OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions and uses the slang of brainrot influencers. Very “fellow kids” energy. It’s also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.

    • @saltesc@lemmy.world
      7 months ago

      Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

      It’s good fun watching friends ask it stuff they’re already experienced in. Then the penny drops.

    • @Telorand@reddthat.com
      7 months ago

      In some ways, it’s like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it’s this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.

      It’s the conspiracy theorists’ and reactionaries’ dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.

  • @N0body@lemmy.dbzer0.com
    7 months ago

    people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

    Preying on the vulnerable is a feature, not a bug.

    • Deceptichum
      7 months ago

      These same people would be dating a body pillow or trying to marry a video game character.

      The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

      • Muad'dib
        7 months ago

        More ways to be an addict means more hooks means more addicts.

      • @tiguwang@lemm.ee
        7 months ago

        Me and Serana are not just in love, we’re involved!

        Even if she’s an ancient vampire.

          • kate
            7 months ago

            ??? Have you met my blahaj?? How DARE you

          • NostraDavid
            7 months ago

            What if it’s either that, or suicide? I imagine that people who make that choice don’t have many options left, due to monetary, physical, or mental issues that keep them from making another one.

            • @BradleyUffner@lemmy.world
              7 months ago

              I’m confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.

              I think my statement about “a wrong turn in life” is being interpreted too strongly; it wasn’t intended to be such a strong and absolute statement of failure. Someone who’s taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I’m not saying their life is worthless. I’ve made a TON of wrong turns myself.

              • @liv@lemmy.nz
                7 months ago

                Trouble is, your statement was in answer to @morrowind@lemmy.ml’s comment that labeling lonely people as losers is problematic.

                Also it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status etc can all produce similar effects of loneliness in people who did nothing to “cause” it.

    • NostraDavid
      7 months ago

      That was clear from GPT-3, day 1.

      I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not too long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…

    • @Tylerdurdon@lemmy.world
      7 months ago

      I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others to such a high degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiment with baby monkeys. The videos from that study are interesting but make me feel pretty bad about what we do to nature. Anywho, there you have it.

      • @graphene@lemm.ee
        7 months ago

        And the number of connections and friends the average person has has been in free fall for decades…

        • @trotfox@lemmy.world
          7 months ago

          I dunno. I connected with more people on reddit and Twitter than irl tbh.

          Different connection but real and valid nonetheless.

          I’m thinking places like r/stopdrinking, petioles, bipolar; shit’s been therapy for me tbh.

          • @in4apenny@lemmy.dbzer0.com
            7 months ago

            At least you’re not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.

      • Paragone
        7 months ago

        That utter-desperation is engineered into our civilization.

        What happens when you prevent the “inferiors” from having living-wage, while you pour wallowing-wealth on the executives?

        They have to overwork, to make ends meet, is what, which breaks parenting.

        Then, when you’ve broken parenting for a few generations, the manufactured ocean-of-attachment-disorder manufactures a plethora of narcissism, which itself produces mass-shootings.

        2024 was down 200 mass-shootings in the US of A, from the peak of 700/year to only 500.

        You are seeing engineered eradication of human-worth, for moneyarchy.

        Isn’t ruling-over-the-destruction-of-the-Earth the “greatest thrill-ride there is”?

        We NEED to do objective calibration of the harm that policies & political-forces do, & put force against what is actually harming our world’s human-viability.

        Not what the marketing-programs-for-the-special-interest-groups want us acting against, the red herrings…

        They’re getting more vicious, we need to get TF up & begin fighting for our species’ life.

        _ /\ _

    • @Vespair@lemm.ee
      7 months ago

      And it’s beyond obvious in the way LLMs are conditioned, especially if you’ve used them long enough to notice trends. Where early on their responses were straight to the point (inaccurate as hell, yes, but that’s not what we’re talking about in this case), today they are meandering and full of straight engagement bait - programmed to feign some level of curiosity and ask stupid and needless follow-up questions to “keep the conversation going.” I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services, personally.

      There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.

  • tisktisk
    7 months ago

    I plugged this into gpt and it couldn’t give me a coherent summary.
    Anyone got a tldr?

    • tisktisk
      7 months ago

      For those genuinely curious: I made this comment before reading, only as a joke–had no idea it would be funnier after reading

    • veee
      7 months ago

      It’s short and worth the read, however:

      tl;dr you may be the target demographic of this study

    • Skua
      7 months ago

      Based on the votes it seems like nobody is getting the joke here, but I liked it at least

  • @MuskyMelon@lemmy.world
    7 months ago

    Same type of addiction as people who think the Kardashians care about them or schedule their whole lives around going to Disneyland a few times a year.

  • bizarroland
    7 months ago

    Long story short, people that use it get really used to using it.

  • @kibiz0r@midwest.social
    7 months ago

    those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

    That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.

    Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.

    • @theunknownmuncher@lemmy.world
      7 months ago

      It’s a roundabout way of writing “it’s really shit for this use case, and people that actively try to use it that way quickly find that out”

    • @Siegfried@lemmy.world
      7 months ago

      AI and ads… I think that is the next dystopia to come.

      Think of asking ChatGPT about something and it randomly looks for excuses to push you to buy Coca-Cola.

      • @proceduralnightshade@lemmy.ml
        7 months ago

        “Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”

        edit: how does this make you feel

      • @glitchdx@lemmy.world
        7 months ago

        that is not a thought i needed in my brain just as i was trying to sleep.

        what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks

      • @cardfire@sh.itjust.works
        7 months ago

        That sounds really rough, buddy. I know how you feel, and that project you’re working on is really complicated.

        Would you like to order a delicious, refreshing Coke Zero™️?

        • potoooooooo ☑️
          7 months ago

          I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?

          • Dale
            7 months ago

            Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.

  • @Blazingtransfem98@discuss.online
    7 months ago

    I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • kingthrillgore
        7 months ago

        It’s when you give the wheel to someone less qualified than Jesus: Generative AI

      • NostraDavid
        7 months ago

        Andrej Karpathy (one of the founders of OpenAI; he left OpenAI, worked for Tesla from 2017 to 2022, returned to OpenAI for a bit, and is now working on his startup “Eureka Labs - we are building a new kind of school that is AI native”) made a tweet defining the term:

        There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

        People ignore the “It’s not too bad for throwaway weekend projects” part, and try to use this style of coding to create “production-grade” code… Let’s just say it’s not going well.

        source (xcancel link)

        • BombOmOm
          7 months ago

          The amount of damage a newbie programmer without a tight leash can do to a code base/product is immense. Once something is out in production, you have to deal with it forever. That temporary fix they push is still going to be in use a decade later, and if you break it, now you have to explain to the customer why the thing that’s been working for them for years is gone and what you plan to do to remedy the situation.

          A newbie without a leash just pushing whatever an AI hands them into production. Oh boy, are senior programmers going to be sad for a long, long time.

    • @Zron@lemmy.world
      7 months ago

      Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI-generated voice. I’ve seen so many fake videos and straight-up scams using it that I now instinctively mistrust any voice that sounds like male or femaleAI.wav.

      Could never fall in love with an AI voice; I’d always assume it was sent to steal my data so some kid can steal my identity.

      • @Lazhward@lemmy.world
        7 months ago

        I thought the voice in Her was customized to individual preference. Which I know is hardly relevant.

  • @Siegfried@lemmy.world
    7 months ago

    There is something I don’t understand… OpenAI collaborates on research that probes how awful its own product is?