I'm still not sure if this is true, but I've seen a couple of people post that OnlyOffice may be Russian-owned, with a location in Latvia just to avoid the association. I really hope not, because I do like the UI more. I tried asking Le Chat about it and got this. I'd love to get someone else's take on it, though.
OnlyOffice is owned by Ascensio System SIA, a Latvian-based IT company with headquarters in Riga. The company was founded by Lev Bannov, who is also the CEO of OnlyOffice. The ownership structure has undergone significant changes, with the Singapore holding company ONLYOFFICE Capital Group Pte. Ltd. now being the ultimate owner of Ascensio System SIA through its UK subsidiary, Ascensio System Limited.
OnlyOffice has faced allegations and concerns regarding its ties to Russia. Some sources claim that OnlyOffice is a Russian company that has attempted to mask its origins by using a Latvian company as a front. These allegations suggest that the company has connections to the Russian government and military, and that it has set up shell companies to avoid being associated with Russia following the Russian invasion of Ukraine.
The company has been accused of using its Latvian headquarters as a way to enter the international market while continuing to operate and develop its products in Russia under the name R7 Office. These concerns have led some companies to sever ties with OnlyOffice due to its alleged support for the Russian invasion of Ukraine and its failure to condemn the conflict.
Despite these claims, it is important to note that OnlyOffice is officially headquartered in Riga, Latvia, and operates under Ascensio System SIA. The company has a complex ownership structure, with a Singapore holding company owning the UK branch, which in turn owns the Latvian branch.
source 1 source 2 source 3
AI chatbots are not, and probably never will be, good tools for researching information.
But wouldn't they be able to cover way more content and, if trained well, filter out biases? I feel like most people are bad at this already: they either believe whatever they hear without looking it up, or go to one source and take it as final. For the average person, wouldn't aggregated results be better?
Oh no, people are using AI slop to generate Lemmy comments
Can you help me understand this better? The response didn't sound biased, and it was all supported with links showing where each claim was drawn from. I also only asked about this after seeing two separate mentions of it in other comments. I don't want to spread false information, which is why I included all the sources I got this from and clearly stated where it came from.
Again, not trying to be rude at all; I'm genuinely curious.
Why are you using AI for research? It's glorified predictive text.
Maybe I'm not understanding how this works. I just started testing it, but it appears to run a live web search (it takes a while and adds a globe icon for each new source it reviews) and then summarizes for trends, which is exactly what I would do. If I were doing it myself, I would only skim a few articles anyway to find common themes. I'm not writing a research paper on this; I'm just looking to see whether there is a notable trend in the source articles.
Now I have it start a summary, and then I check the sources to see if I trust them. Overall, it seems to reach the same goal, just quicker, and it actually makes me more likely to look things up to see whether claims are supported by facts or completely made up. I realize AI can hallucinate and the sources are not comprehensive, but isn't it at least better than nothing?
Pretty much all of the AI tools available now have been shown to hallucinate, even when they start with an internet search.
I've had AI tools spit out real-looking URLs that led to 404 pages, because the links themselves were hallucinated. It's a place to start your research, maybe to refine your questions, but I wouldn't trust it much with the actual research.
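If you want to sanity-check the links a chatbot hands you before trusting them, a few lines of Python will at least catch the dead ones (just a sketch; the URL list here is a made-up placeholder, paste in whatever links you were actually given):

```python
# Quick check: do the URLs a chatbot cited actually resolve?
import urllib.request
import urllib.error

# Placeholder links for illustration -- replace with the ones you were given.
cited_urls = [
    "https://example.com/some-article",
    "https://example.org/another-source",
]

for url in cited_urls:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(resp.status, url)
    except urllib.error.HTTPError as e:
        print(e.code, url, "<- dead, possibly hallucinated")
    except urllib.error.URLError as e:
        print("ERR", url, e.reason)
```

Note this only proves a page exists, not that it says what the bot claims it says; you still have to read it.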
An LLM (large language model), which is what a tool like Mistral is, doesn't really use knowledge: it predicts what the next logical text is going to be, based on the material it was trained on. It doesn't think, it doesn't reason, it just predicts what the next words are likely to be.
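To make "it just predicts the next word" concrete, here's a toy word-level version of the idea (my own illustration, not how Mistral actually works; real models use huge neural networks over tokens, but the job, continuing the text plausibly based on training data, is the same):

```python
# Toy next-word predictor: count which word follows which in some
# training text, then always emit the most frequent follower.
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word the model does not think "
    "the model does not reason the model predicts text"
)

follows = defaultdict(Counter)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def predict_next(word):
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

w = "the"
out = [w]
for _ in range(6):
    w = predict_next(w)
    if w is None:
        break
    out.append(w)

# Prints something like: "the model predicts the model predicts the"
# -- fluent-looking, locally plausible, and not "knowing" anything.
print(" ".join(out))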
It doesn't even understand text as text; it works on tokens, chunks of characters, rather than individual letters. That's why so many models claimed there were just two Rs in "strawberry" (there are three).
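The strawberry example makes the difference obvious. Ordinary code sees the actual characters, so counting letters is trivial; a model only sees token IDs, so it can't count anything, it can only predict what answer text usually follows a question like that (the token split in the comment is just an example; the exact chunks depend on the tokenizer):

```python
# Code operates on the actual characters:
print("strawberry".count("r"))  # 3

# An LLM never sees those characters. It sees IDs for chunks such as
# ["str", "aw", "berry"] (exact split varies by tokenizer), so the
# letter count simply isn't in its input.
```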
You can use it to rewrite a text for you, perhaps even to summarize one (though there's still the possibility of hallucinations there), but I wouldn't ask it to do your research for you.
This is really, really helpful, thank you. I appreciate the explanation.