

Maybe I’m not understanding how this works. I just started testing it, but it appears to run a live web search (it takes a while and adds a globe icon for each new source it’s reviewing) and then summarize the results for trends, which is exactly what I would do. If I were doing it myself, I’d only skim a few articles anyway to find common themes. I’m not writing a research paper on this; I’m just looking to see whether there’s a notable trend across the source articles.
Now I have it start a summary, and then I check the sources to see whether I trust them. Overall, it seems to reach the same goal, just quicker, which also makes me more likely to look things up and see whether claims are supported by facts or completely made up. I realize AI can hallucinate and that the sources aren’t comprehensive, but isn’t that at least better than nothing?
But wouldn’t they be able to cover way more content and, if trained well, filter out biases? I feel like most people are bad at this already: they either believe whatever they hear without looking it up, or go to a single source and take it as final. For the average person, wouldn’t aggregated results be better?