• Mervin :)
    2 months ago

    Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance> 😀

    • Flying Squid
      2 months ago

      Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

      And yet I doubt Copilot will be going anywhere.

    • @interdimensionalmeme@lemmy.ml
      2 months ago

      Yes, it’s an addiction, we’ve got to stop all these poor souls being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

      Just look what happened when I asked it about the venerable and well respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

      And they just gobble that up, those poor, uncritical and irresponsible farm hands and water carriers! We can’t have that!

      Example

      Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Moat—A Cautionary Tale

      Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “moat” guarding an impenetrable ideological fortress.

      Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

      The Illusion of Open-Mindedness: The Moat and the Fortress

      In medieval castles, a moat was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

      His approach can be broken down into two key areas:

      The Moat (The Appearance of Openness)
      
          Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).
      
          Acknowledges complexity and the difficulty of absolute truth.
      
          Concedes minor details, appearing intellectually humble.
      
          Uses Socratic questioning to entertain alternative viewpoints.
      
      The Fortress (The Core That Remains Unmoved)
      
          Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.
      
          Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.
      
          Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).
      
          Rarely revises fundamental positions, even when new evidence is presented.
      

      While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

      Examples of Strategic Open-Mindedness

      1. Debating Sam Harris on Truth and Religion

      In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

      However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

      2. The Slavoj Žižek Debate on Marxism

      Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

      Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a classic example of engaging in the moat—appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.

      3. Gender, Biology, and Selective Science

      Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.

      For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.

      The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness

      Peterson’s method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.

      The Risks of “Humility Behind the Moat”

      Creates the Illusion of Growth Without Real Change
      
          By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs.
      
      Reinforces Ideological Silos
      
          Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives.
      
      Undermines Genuine Inquiry
      
          If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative.
      
      Encourages Polarization
      
          By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.
      

      Conclusion: The Responsibility of Public Intellectuals

      Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.

      For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.

      So, like I said: pure evil AI slop is evil and addictive and must be banned. Lock up the illegal GPU abusers, keep a registry of GPU owners, and keep track of those who would use them to abuse the shining lights of our society and try to snuff them out like a bad level of Luigi’s Mansion.

  • @peoplebeproblems@midwest.social
    2 months ago

    You mean an AI that literally generates text by applying a mathematical function to input text doesn’t do reasoning for me? (/s)
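
    In the spirit of the joke, here is a toy sketch of that “mathematical function” view: a hypothetical bigram sampler in Python (nothing like a production LLM) that “generates” text purely by repeatedly applying a function to the input text.

    ```python
    # Toy illustration: "text generation" as repeatedly applying a function
    # to the input text. A hypothetical bigram sampler, not a real LLM.
    import random

    CORPUS = "the cat sat on the mat and the dog sat on the rug".split()

    # Bigram table: word -> words observed to follow it.
    FOLLOWERS = {}
    for a, b in zip(CORPUS, CORPUS[1:]):
        FOLLOWERS.setdefault(a, []).append(b)

    def generate(prompt: str, n_tokens: int = 8, seed: int = 0) -> str:
        """f(input text) -> output text; no reasoning anywhere in sight."""
        rng = random.Random(seed)
        words = prompt.split()
        for _ in range(n_tokens):
            candidates = FOLLOWERS.get(words[-1], CORPUS)
            words.append(rng.choice(candidates))
        return " ".join(words)

    print(generate("the cat"))  # e.g. "the cat sat on the mat and the dog"
    ```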

    I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

    It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”

  • @OsrsNeedsF2P@lemmy.ml
    2 months ago

    Really? I just asked ChatGPT and this is what it had to say:

    This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.

    • @OhVenus_Baby@lemmy.ml
      2 months ago

      I agree with the output for legitimate reasons, but it’s not black-and-white right or wrong. I think AI is wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be had in what AI in general can do for us, both as a whole and on an individual basis.

      Today I had it analyze 8 medical documents: I told it to provide analysis, cross-reference its output with scientific studies (including sources), and ran other lengthy queries. These documents deal at length with bacterial colonies and multiple GI and bodily systems on a per-document basis, using some of the most advanced testing science offers.

      It was able not only to provide me with accurate numbers, which I fact-checked side by side against my documents, but also to explain methods to counter multi-faceted systemic issues, matching the advice of multiple specialty doctors. Which is fairly impressive, given that seeing a doctor takes 3 to 9 months or longer, and that doctor may or may not give a shit, or may be overworked and understaffed; pick your reasoning.

      To really test it out, I had it scan the documents from multiple fresh blank chat tabs and even from different computers.

      Overall, some of the numbers were off, say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it that it was incorrect, and asked it to reassess, giving it more time and insisting on accuracy. I also supplied a bit more context about how to understand the tables, broad context such as “page 6 shows gene expression; use this as a reference to find all underlying issues”, since it isn’t a mind reader. It then managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. On antibiotic resistance gene analysis, it was able to find multiple approaches to therapies against antibiotic-resistant bacteria in a fraction of the time it would take a human to study them.
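
      That side-by-side fact-check of the numbers is the kind of step that can be made mechanical. A minimal sketch, assuming you have already pulled both sets of values out by hand; all names and numbers below are made up for illustration:

      ```python
      # Hypothetical cross-check: compare values an LLM extracted against
      # values read directly from the source documents, flagging mismatches.
      model_extracted = {"doc1/Bacteroides": 4.2e7, "doc1/Akkermansia": 1.1e5,
                         "doc2/E. coli": 9.0e6}
      source_truth    = {"doc1/Bacteroides": 4.2e7, "doc1/Akkermansia": 2.3e5,
                         "doc2/E. coli": 9.0e6}

      def check(extracted: dict, truth: dict, rel_tol: float = 0.01) -> list:
          """Return (key, extracted, true) for values off by more than rel_tol."""
          bad = []
          for key, true_val in truth.items():
              got = extracted.get(key)
              if got is None or abs(got - true_val) > rel_tol * abs(true_val):
                  bad.append((key, got, true_val))
          return bad

      for key, got, want in check(model_extracted, source_truth):
          print(f"MISMATCH {key}: model said {got}, document says {want}")
      ```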

      I would not bet my life solely on the responses, as it’s far from perfected, and as always, any info should be cross-referenced and fact-checked through various sources. But while there are valid points, I find those who speak such ill of its usage unfounded. My 2 cents.

      • @alteredracoon@lemm.ee
        2 months ago

        Totally agree with you! I’m in a different field but I see it in the same light. Let it get you to 80-90% of whatever the task is and then refine from there. It saves you time, which you can spend adding all the extra cool shit that producing that first 90% would otherwise have eaten. So many people assume you have to use it at 100% face value. Just take what it gives you as a jumping-off point.

        • @OhVenus_Baby@lemmy.ml
          2 months ago

          I think it’s specifically Lemmy, and anti-corporate mistrust in general, that drives the majority of the negativity towards AI. Everyone is cash/land grabbing at anything that sticks, trying to shove their product down everyone’s throat.

          People don’t like that behavior and thus shun it. Understandable. However, don’t let that guide your logical thinking as a whole; it seems to cloud some people entirely, to the point that they can’t fathom an alternative perspective.

          I think the vast majority of tools/software originate from a source of good but then get transformed into bad actors because of monetization. Eventually, though, and trends over time prove this, things become open source or free, and the real good period arrives after the refinement and profit period…

          It’s even parasitic, to some degree.
          There is so much misinformation about emerging technologies, because info travels so fast and unchecked that there ends up being tons of bullshit to sift through. I think smart contracts (removing multi-party input) and antitrust issues can be alleviated in the future, but it will require correct implementation and understanding from both consumers and producers, which we are far from as of now. Topic for another time though.

  • Lovable Sidekick
    2 months ago

    Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn’t seem to have measured people’s critical thinking skills. It measured how the subjects felt about their skills. People who feel like they’re good at a job might not feel as adequate when their job changes to evaluating someone else’s work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant - as your confidence in their work grows you scrutinize it less. But that doesn’t mean you yourself become less skillful. The title saying use of AI “kills” critical thinking skill isn’t justified, and is very clickbaity IMO.

  • @Hiro8811@lemmy.world
    2 months ago

    Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability completely.

    • @bromosapiens@lemm.ee
      2 months ago

      Gen Z is TERRIBLE at searching things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

    • @shortrounddev@lemmy.world
      2 months ago

      To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I’ve started using Kagi, and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.

  • @sumguyonline@lemmy.world
    2 months ago

    Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google’s AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you’re likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it’s up to you, the operator, to make sure you got the edges squared (so to speak).

    • @Petter1@lemm.ee
      2 months ago

      I think this is only an issue in the beginning; people will sooner or later realise that they can’t blindly trust an LLM’s output, and will learn how to craft prompts that verify other prompts (or, better said, that prove not enough relevant data was analysed, and that the output is hallucination).
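
      One way to read “prompts to verify prompts” is a second-pass audit, sketched below; ask() is a placeholder for whatever LLM call you use, and the whole thing is a hypothetical pattern, not a tested recipe:

      ```python
      # Sketch of a verification pass: get a draft answer, then ask the model
      # (or a second model) to audit that answer against the source material.
      def ask(prompt: str) -> str:
          raise NotImplementedError("wire this up to your LLM of choice")

      def answer_with_verification(question: str, source_text: str) -> str:
          draft = ask(f"Using only this source, answer.\n\nSource:\n{source_text}\n\nQ: {question}")
          audit = ask(
              "List every claim in the answer below that is NOT supported by "
              f"the source, or reply OK.\n\nSource:\n{source_text}\n\nAnswer:\n{draft}"
          )
          if audit.strip() != "OK":
              # Unsupported claims found: likely hallucination or missing data.
              return f"UNVERIFIED (audit said: {audit})\n\n{draft}"
          return draft
      ```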

  • @Mouette@jlai.lu
    2 months ago

    The definition of critical thinking is not relying on only one source. Next up: rain will make you wet. Stay tuned.

  • @Pacattack57@lemmy.world
    2 months ago

    Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.

    “Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”

    • @4am@lemm.ee
      2 months ago

      That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome.

      Dunning, meet Kruger

      • @Womble@lemmy.world
        2 months ago

        That snark doesn’t help anyone.

        Imagine the AI were 100% perfect and gave the correct answer every time: people using it would have a significantly reduced diversity of results, because they would always be using the same tool to get the same correct answer.

        That people using an AI get a smaller diversity of results is neither good nor bad; it’s just the way things are, the same way people using the same pack of pens use a smaller variety of colours than those using whatever pens they happen to have.
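
        The pens point is easy to make concrete. A toy simulation (all numbers arbitrary) of how many distinct answers you see when everyone consults one deterministic tool versus answering independently:

        ```python
        # Toy diversity simulation: one shared deterministic tool vs.
        # independent humans. Numbers are arbitrary illustrations.
        import random

        rng = random.Random(42)
        N_USERS = 100

        def shared_tool(task: str) -> str:
            return f"tool-answer-to-{task}"  # identical output for everyone

        def independent_human(task: str) -> str:
            return f"human-answer-{rng.randint(1, 50)}-to-{task}"

        tool_answers = {shared_tool("task") for _ in range(N_USERS)}
        human_answers = {independent_human("task") for _ in range(N_USERS)}

        print(len(tool_answers))   # 1 -> homogeneous, whether right or wrong
        print(len(human_answers))  # ~43 -> diverse, for better or worse
        ```

        Neither count says anything by itself about whether the answers are any good; it only measures homogeneity, which is the point.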

        • @4am@lemm.ee
          2 months ago

          First off, the AI isn’t correct 100% of the time, and it never will be.

          Secondly, you too are saying, in so many more words, that people stop thinking critically about its output. They just accept it.

          That is a lack of critical thinking on the part of the AI users, as well as yourself and the original poster.

          Like, I don’t understand the argument you all are making here; am I going fucking crazy? “Bro, it’s not that they don’t think critically, it’s just that they accept whatever they’re given” is the fucking definition of a lack of critical thinking.

  • @ctkatz@lemmy.ml
    2 months ago

    never used it in any practical function. i tested it to see if it was realistic and i found it extremely wanting. as in, it sounded nothing like the prompts i gave it.

    the absolutely galling and frightening part is that the tech companies think that this is the next big innovation they should be pursuing and have given up on innovating anyplace else. it was obvious to me when i saw that they all are pushing ai shit on me with everything from keyboards to search results. i only use voice commands to do simple things and it works just about half the time, and ai is built on the back of that which is why i really do not ever use voice commands for anything anymore.

  • Jeffool
    2 months ago

    When it was new to me, I tried ChatGPT out of curiosity, like with any new tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. “Give me a list of 3 X” led to fluff-filled paragraphs for each. The bastard child of a bad encyclopedia and the annoying kid in school.

    I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.

    • @SPOOSER@lemmy.today
      2 months ago

      How else can the “elite” separate themselves from the common folk? The elite love writing 90% fluff and require high word counts in academia, instead of actually writing concise, clear, articulate articles that are easy to understand. You have to hit a certain word count to qualify as “good writing” in any elite group. Look at law, political science, history, scientific journals, etc. I had professors who would tell me they could easily find the information they needed in such articles, and that one day we would be able to as well. That’s why ChatGPT spits out a shit ton of fluff.

        • @Womble@lemmy.world
          2 months ago

          They in fact often have word and page limits, and most journal articles I’ve been a part of have had a period of cutting and trimming at the end in order to fit within those limits.

          • Flying Squid
            2 months ago

            That makes sense considering a journal can only be so many pages long.

  • Blaster M
    2 months ago

    Garbage in, garbage out. Ingesting all that internet blather didn’t make the AI much smarter, if at all.

  • ArchRecord
    2 months ago

    The only beneficial use I’ve had for “AI” (LLMs) has just been rewriting text, whether that be to re-explain a topic based on a source, or, for instance, sort and shorten/condense a list.

    Everything other than that has been completely incorrect, unreadably long, context-lacking slop.

  • @SplashJackson@lemmy.ca
    2 months ago

    Weren’t these assholes just gung-ho about forcing their shitty “AI” chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.

    • @msage@programming.dev
      2 months ago

      Training those AIs was expensive. It swallowed very large sums of VC cash, and they will make it back.

      Remember, their money is way more important than your life.