Companies don’t exactly highlight the fact that their AIs are prone to hallucinating.
Funny enough, if you question Gemini enough, it will eventually cave and admit that its answers aren't always right and that it's still improving. LOL
The problem, as I suggested, is that you already need to know the correct answer in order to effectively cross-examine what it told you.