• @Eranziel@lemmy.world
    6 months ago

    The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based on statistics, nothing more, nothing less.
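
    To make “guesses the next most likely word” concrete, here is a minimal sketch. The vocabulary and the probabilities are made up for illustration; a real model scores tens of thousands of tokens with a neural network rather than a hard-coded table, but the final step is the same: pick a token weighted by probability, with no step that checks the claim against the world.

    ```python
    import random

    # Hypothetical probabilities a model might assign to continuations of
    # "The cat sat on the ..." (values invented for illustration).
    next_token_probs = {
        "mat": 0.62,
        "sofa": 0.21,
        "roof": 0.12,
        "moon": 0.05,
    }

    def sample_next_token(probs):
        """Pick one token, weighted by its assigned probability."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))  # e.g. "mat"
    ```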

    It doesn’t know what it’s saying, nor does it understand the subject matter, what a human is, or what a hallucination is and why it produces them. An LLM is fundamentally incapable of even perceiving the problem, because it perceives nothing aside from text in and text out.

    • @Drewelite@lemmynsfw.com
      6 months ago

      Many people’s entire thought process is an internal monologue. Do you think that voice is magic? It takes input and generates a conceptual internal dialogue based on what it has previously experienced (training data for the long term, context for the short term). What do you mean when you say you understand something? What is the mechanism in your brain that counts as understanding?

      Because for me it’s an internal conversation that asserts an assumption based on previous data, then systematically attacks it with the next most probable counterargument until a reasonably vetted “good idea” emerges. Then I test it in the real world by applying the scientific method. The results are added to my long-term memory (training data).
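
      Loosely sketching that loop in code (every name and rule here is a made-up stand-in, not a real reasoning system): assert an idea, attack it with the most probable counterargument, revise until it survives, then feed the tested result back into memory.

      ```python
      def most_probable_counterargument(idea, memory):
          """Stand-in: return the strongest remembered objection, or None."""
          objections = [m for m in memory if m == "objection to " + idea]
          return objections[0] if objections else None

      def vet_idea(idea, memory, max_rounds=5):
          # Assert an assumption, attack it, revise, repeat until vetted.
          for _ in range(max_rounds):
              objection = most_probable_counterargument(idea, memory)
              if objection is None:
                  return idea  # survived every objection: a "good idea"
              idea = idea + " (revised to answer: " + objection + ")"
          return idea

      long_term_memory = ["objection to take the bridge route"]
      good_idea = vet_idea("take the bridge route", long_term_memory)
      # Test in the real world, then fold the result back into memory:
      long_term_memory.append("result: " + good_idea + " worked")
      print(good_idea)
      ```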