• wwb4itcgas@lemm.ee · 19 hours ago

    I’ve long thought there was ample reason for concern, but this sends shivers down even my hardened spine. I would hate to find out that we’ve been wrong about where the line between a statistical knowledge-model + compute power and the concept of consciousness is, exactly.

    Because this is starting to look like outright abuse of an innocent child doing its best to gain the approval of its parents, and I’m very much not okay with that.

    • kata1yst@sh.itjust.works · 19 hours ago

      I build and train LLMs as part of my job, so I’m biased but informed.

      Large language models are literally text predictors. Their logic generates text probabilistically, calculated to give the correct result based on their training parameters and current inputs.

      IMHO there isn’t room for actual thought, reflection, or emotion in the relatively simple base logic of the model, only probabilistic emulation of those things. This amounts to reading about a character in a story going through something traumatic and feeling empathy for them. It’s a totally appropriate human response, but the character is fictional. The LLM wouldn’t feel anything in your shoes.
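
      To make the “text predictor” point concrete, here’s a minimal Python sketch of what a single generation step boils down to: score every token in the vocabulary, turn the scores into probabilities, sample one, append it, repeat. The toy_logits function is a made-up stand-in for the actual transformer forward pass, not any real model.

          import math
          import random

          # Toy stand-in for the model: score each vocabulary token given the text
          # so far. A real LLM computes these scores with billions of learned
          # parameters, but the generation loop around it looks like this.
          def toy_logits(context, vocab):
              return [len(tok) - 0.3 * abs(len(context) % 5 - len(tok)) for tok in vocab]

          def softmax(logits):
              m = max(logits)
              exps = [math.exp(x - m) for x in logits]
              total = sum(exps)
              return [e / total for e in exps]

          def generate(prompt, vocab, steps=5):
              text = prompt
              for _ in range(steps):
                  probs = softmax(toy_logits(text, vocab))
                  # Sample the next token in proportion to its predicted probability,
                  # append it, and repeat. That is the whole generation loop.
                  text += random.choices(vocab, weights=probs, k=1)[0]
              return text

          print(generate("The cat", [" the", " cat", " sat", " on", " mat", "."]))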

      • edric@lemm.ee · 15 hours ago (edited)

        This is probably not an original thought, but I just realized that passing the Turing test doesn’t necessarily mean the computer has reached AI, just that it is good enough at manipulating a human’s emotions that the human believes it has.

        • kata1yst@sh.itjust.works · 15 hours ago

          Absolutely. Academics debate the Turing test ad nauseam for this exact reason. It measures humans, not computers.

      • jrs100000@lemmy.world · 11 hours ago

        This is not a very strong argument. By the same logic you could claim that biological thought, reflection, and emotion are impossible because the brain is just clumps of fat squirting chemicals and electrical signals at each other. The fact of the matter is we don’t know what causes consciousness, so we can’t know whether it could form from sufficiently complex statistical interactions.

        • kata1yst@sh.itjust.works · 9 hours ago

          I can sort of see where you’re coming from, but I disagree.

          We know what the logic and data-processing layers look like inside an LLM. We know what they do. We know generally how they connect, though the specific interconnections are learned during training and are hard to decipher once training is done.

          But really, all an LLM does is parse input and predict the next cluster of words. They don’t even have internal memory to store the last query, let alone an ongoing experience.
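
          That statelessness is also why chat front-ends typically have to re-send the whole conversation on every request. A rough sketch of that pattern, with hypothetical names (ask, fake_model) rather than any particular library’s API:

              # Hypothetical chat wrapper: the "memory" lives in this Python list,
              # not in the model. Every call re-sends the entire transcript, because
              # the model itself retains nothing between calls.
              history = []

              def ask(user_message, complete):
                  history.append(("user", user_message))
                  prompt = "\n".join(f"{role}: {text}" for role, text in history)
                  reply = complete(prompt)  # stateless call into the LLM
                  history.append(("assistant", reply))
                  return reply

              def fake_model(prompt):
                  # stand-in for a real completion call
                  return f"(model saw {len(prompt)} characters of transcript)"

              print(ask("Hello!", fake_model))
              print(ask("What did I just say?", fake_model))
              # Clear `history` and the "conversation" is gone; there is no state
              # inside the network for the model to fall back on.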

          I do believe AI capable of thinking and feeling, even beyond human levels, is inevitable. But it won’t be a transformer LLM (encoder/decoder or decoder-only), which is basically what all the current architectures are.

          There are really cool and useful things we can do with LLMs in the meantime though, 99% of which won’t be chatbots.

          • jrs100000@lemmy.world · 9 hours ago

            We know lots of things about the mechanical functions of both human brains and LLMs, but that doesn’t really help, because we don’t know what causes consciousness in the first place. We don’t know whether it requires internal memory, a sensory feedback loop, specific brain structures, or something else entirely.

            • kata1yst@sh.itjust.works · 7 hours ago (edited)

              I concede I cannot prove a negative to you.

              To me (and many scientists much smarter than me), being conscious means constructing a chain of experience. A chain of experience requires some form of sensory perception combined with a memory to store those perceived sense experiences. So while it’s hard to prove something is conscious, it’s easy to evaluate whether something “likely is” or “probably isn’t” by considering its senses and memory capabilities as best we understand them.

              Therefore a cloud, lacking any structure for sensory input and any structure to store short- or long-term memory, can safely be classified as “unlikely to be conscious”.

              However, a simple mammal like a mouse would qualify as “likely conscious”.

              An LLM, however, cannot sense the difference between being on and idle or off. It can’t sense the computer it’s running on. Its only input is the text it’s fed. It does have access to a form of short-term memory within its neural network (for example, input A’s first token leads to layer B182 at column 1444567, and input A’s second token leads from that position to another in layer C23, and so on), but it entirely lacks a way to store the “experience” of input A and cannot “reflect” on input A’s experience later. I think that puts it in the “unlikely conscious” category.
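
              A toy way to picture that split between within-a-query state and stored experience (purely illustrative, not any real architecture):

                  # Toy illustration: within one call, earlier tokens shape the state
                  # used for later tokens, but nothing survives once the call returns.
                  def forward(tokens):
                      state, activations = 0.0, []
                      for tok in tokens:
                          state = 0.9 * state + len(tok)  # earlier tokens influence later state
                          activations.append(state)
                      return activations  # consumed to pick the next token, then discarded

                  first = forward(["input", "A", "goes", "here"])
                  second = forward(["input", "A", "goes", "here"])
                  # Identical results: the second call has no record of the first ever
                  # happening, so there is nothing for the model to "reflect" on later.
                  assert first == second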

              I can see a path to intentionally getting a neural network to “likely conscious” with today’s technology, though I’d worry about the ethics and motivation.

              Now, that’s consciousness. Then there’s sentience, which I (and again, many people smarter than me) think requires consciousness plus the ability to reflect on past conscious experience and a sense of self, and using all of that to construct a theory of what might happen in the near future in order to make intelligent decisions. Intelligent species like corvids, whales, elephants, apes, octopuses, etc. show significant signs of sentience by this definition. Sentience in computers, I think it’s safe to say, is still a ways away.

              Edited several times for clarity, sorry