Part of the problem is that there is no hard line between what is conscious and what isn’t.
Oh sure, there are things you can definitely call conscious or non-conscious. A dog is conscious, a rock is non-conscious. But what about the things that fall somewhere in the murky middle?
Are jellyfish conscious? They have no brain, and seem to react only to the most basic of stimuli. On the other hand, they do exhibit behaviour like courtship and mating, show aversion to things that harm them, and have been shown to be capable of at least a rudimentary form of learning, associating certain stimuli with specific outcomes to help them avoid predators or obstacles.
How about plants? Are they conscious? They certainly react to some stimuli. They communicate with nearby plants to warn them of dangers and to share nutrients in times of stress. More studies are needed, but evidence has been emerging that not only do plants respond to music, but different plants have different musical tastes.
It’s one of the hard questions in philosophy, and one I’m not sure we’ll ever be able to fully answer.
I don’t disagree with the ambiguity here, but let’s focus on LLMs.
I don’t understand the argument that an LLM is more conscious than traditional software. Take, for example, a spreadsheet with a timed macro.
A spreadsheet with a timed macro stores state in both short-term memory (RAM) and long-term memory (the spreadsheet table itself), and it accepts many inputs. The macro timer gives it repeatable, continuing execution, and a chance to process both past and present inputs with some complexity.
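To make that concrete, here’s a rough Python sketch of what such a macro boils down to (the file name and row schema are made up for illustration; a JSON file stands in for the spreadsheet table):

```python
import json
import pathlib
import time

TABLE = pathlib.Path("sheet.json")   # long-term memory: the table, persisted to disk
state = {"tick": 0}                  # short-term memory: lives only in RAM

def macro():
    # Read back everything written on previous runs (past inputs)...
    rows = json.loads(TABLE.read_text()) if TABLE.exists() else []
    # ...combine with the present input, and write the result back to long-term memory.
    rows.append({"tick": state["tick"], "rows_seen_so_far": len(rows)})
    TABLE.write_text(json.dumps(rows))
    state["tick"] += 1

while True:                          # the macro timer: repeatable, continuing execution
    macro()
    time.sleep(60)
```

Each run sees the accumulated results of every previous run, so the loop has a persistent history it can act on.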
An LLM has limited short-term memory (its current place in the calculation, stored in VRAM) and no long-term memory. It can accept only one input at a time and cannot recall past experience to construct a chain of consciousness.
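Contrast that with a single LLM call, which, absent an external harness, is just a function from one input to one output. A toy stand-in makes the point; no real model is needed:

```python
def generate(prompt: str) -> str:
    """Toy stand-in for one LLM forward pass (hypothetical; not a real model)."""
    activations = f"<activations derived from {prompt!r}>"  # exists only during this call
    return activations

first = generate("Are you conscious?")
second = generate("What did I just ask you?")  # nothing carried over: the state was discarded
```

Any apparent memory in a chat interface comes from the surrounding software re-feeding the transcript as part of the next single input, not from the model itself retaining anything.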
The spreadsheet is objectively more likely to be conscious than the LLM. But no one argues for spreadsheet consciousness, because it’s ridiculous. People argue for LLM consciousness simply because we sentient, messy meat bags are evolutionarily programmed to construct theories of mind for everything, and in so doing we mistakenly personify things that remind us of ourselves. LLMs simply feel “alive-ish” enough to pass our evolutionary sniff test.
But in reality, they’re less conscious than the spreadsheet.