Maybe I’m not understanding how this works. I just started testing it, but it appears to run a live web search (it takes a while and adds a globe icon for each new source it’s reviewing) and then summarizes for trends, which is exactly what I would do. If I were doing it myself, I would only skim a few articles anyway to find common themes. I’m not writing a research paper on this; I’m just looking to see whether there’s a notable trend across the source articles.
Now, I have it start a summary and then I check the sources to see whether I trust them. Overall, it seems to reach the same goal, just quicker, which also makes me more likely to look things up and see whether they’re supported by facts or completely made up. I realize AI can also hallucinate and the sources aren’t comprehensive, but isn’t it at least better than nothing?
Pretty much all of the AI tools available now have been shown to hallucinate, even when they start from an internet search.
I’ve had AI tools spit out real-looking URLs that led to 404 pages, because they had hallucinated those links. It’s a place to start your research, maybe to refine your questions, but I wouldn’t trust it much with the actual research.
An LLM, a large language model, which is what a tool like Mistral is, doesn’t really use knowledge: it predicts what the next logical text is going to be, based on the information it was trained on. It doesn’t think, it doesn’t reason; it just predicts what the next words are likely to be.
It doesn’t even understand text; that’s why so many of them claimed there were just 2 Rs in “strawberry”. It doesn’t treat text as text: it works on tokens (chunks of words mapped to numbers), not individual letters.
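If it helps to picture it, here’s a toy Python sketch of that “predict the next token” loop. This is purely illustrative: the vocabulary, probabilities, token split, and ID numbers below are all made up, and a real model learns its probabilities over tens of thousands of subword tokens from enormous training sets. The basic loop is the same idea, though, and it shows why letter-counting questions go wrong.

```python
# Made-up "learned" table: given the last token, how likely is each next token?
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 0.9, "up": 0.1},
    "ran": {"away": 0.8, "home": 0.2},
}

def generate(prompt_tokens, steps=3):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        options = next_token_probs.get(tokens[-1])
        if not options:
            break
        # "Prediction" here is just: take the most probable next token, append, repeat.
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(generate(["the"]))  # -> "the cat sat down"

# Why "how many Rs in strawberry" trips it up: the model never sees the word
# as letters. A (hypothetical) subword tokenizer might split it like this and
# hand the model only ID numbers:
toy_tokens = ["str", "aw", "berry"]   # made-up split
toy_ids = [4812, 675, 15717]          # made-up IDs; numbers are all the model sees
# Counting the letter "r" needs the raw characters, which the model doesn't operate on:
print("strawberry".count("r"))        # -> 3
```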
You can use it to rewrite a text for you, perhaps even to summarize one (though there’s still the possibility of hallucinations there), but I wouldn’t ask it to do research for you.
Why are you using AI for research? It’s glorified predictive text.
This is really really helpful, thank you. I appreciate the explanation.