
I build and train LLMs as part of my job, so I’m biased but informed.
Large language models are literally text predictors. They generate text probabilistically, calculating the output most likely to be correct given their learned training parameters and the current input.
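For what it's worth, the whole generation loop really is just "score every token, turn the scores into probabilities, sample one, repeat." Here's a toy sketch of that shape (made-up vocabulary and a stand-in function instead of a trained network, not any real model's code):

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context: list[str]) -> np.ndarray:
    # Stand-in for the trained network: deterministic scores derived from
    # the context. In a real LLM, billions of learned parameters produce
    # these logits instead.
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(VOCAB))

def next_token(context: list[str], temperature: float = 1.0) -> str:
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax: scores -> probabilities
    return np.random.choice(VOCAB, p=probs)  # sample the next token

context = ["the", "cat"]
for _ in range(4):
    context.append(next_token(context))
print(" ".join(context))
```

Everything the model "says" falls out of repeating that one step. There's no separate place where deliberation or feeling happens.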
IMHO there isn’t room for actual thought, reflection, or emotion in the relatively simple base logic of the model, only probabilistic emulation of those things. Empathizing with one is like reading about a character in a story going through something traumatic and feeling empathy for them. It’s a totally appropriate human response, but the character is fictional. The LLM wouldn’t feel anything in your shoes.
Absolutely. Academics debate the Turing test ad nauseam for this exact reason: it measures humans, not computers.