That’s just the thing, though, and it’s the point I am making: in practice, synthetic data can give you the same effect as original data. In some sense, training an LLM is like a lossy compression algorithm: you are trying to fit petabytes of data into a few hundred gigabytes as efficiently as possible. To compress that much, the model has to lose specifics, so it only captures general patterns. The same is true for any artificial neural network, so if you train another network on the original data yourself, you will also lose specifics during training and end up with a model that only knows general patterns. Hence, if you train a model on synthetic data, the information missing from that synthetic data is largely information the model you are training would have lost anyway, so you don’t necessarily get bad results.
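To make that concrete, here’s a toy sketch of the idea, which is basically knowledge distillation: a small student model is trained once on the original labels and once on a teacher’s outputs (the “synthetic” data), and lands in roughly the same place. This is a pure-numpy cartoon with made-up shapes and names, nothing like a real LLM pipeline, just the shape of the argument:

```python
import numpy as np

# Toy distillation sketch: a student fits a teacher's outputs instead of the
# original labels. All names and sizes here are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                  # "original data" features
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)               # original labels

# "Teacher": stands in for a big model already trained on the original data;
# its soft outputs play the role of synthetic training data.
teacher_probs = 1 / (1 + np.exp(-(X @ true_w)))

def train_logreg(X, targets, steps=500, lr=0.1):
    # Tiny "student": logistic regression trained by gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - targets) / len(X)   # cross-entropy gradient
    return w

w_original = train_logreg(X, y)                  # student on original labels
w_synthetic = train_logreg(X, teacher_probs)     # student on teacher outputs

X_test = rng.normal(size=(1000, 20))
y_test = (X_test @ true_w > 0)
for name, w in [("original", w_original), ("synthetic", w_synthetic)]:
    acc = ((X_test @ w > 0) == y_test).mean()
    print(f"student trained on {name} data: test accuracy {acc:.3f}")
```

Both students end up with essentially the same test accuracy, which is the point: the specifics the synthetic data dropped are specifics the student was going to drop anyway.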
But yes, when I was talking about synthetic data I had in mind data generated purely by an LLM. Of course I agree that translating documents, OCRing documents, etc., to generate new data is generally a good thing as well. I just disagree with your final statement that it is critical to have a lot of high-quality original data. The approach of making AIs better by just feeding them more and more data is already plateauing in the industry and showing diminishing returns. GPT-3.5 to GPT-4 was a massive leap, but the jump to GPT-4.5, which uses an order of magnitude more compute mind you, is negligible.
Just think about it. Humans are way smarter than ChatGPT, and we don’t need the energy of a small country and petabytes of all the world’s information to solve simple logical puzzles, just a hot pocket and a glass of water. The problem is clearly in how we are training these models, not in a lack of data. We have plenty of data. Recent breakthroughs have come from finding cleverer ways to use the data rather than just piling on more and more of it.
For example, many models have recently adopted reasoning techniques: rather than simply spitting out an answer, the model generates an internal dialog before producing the answer, so it “thinks” about the problem for a bit. These reasoning models perform far better on complex questions. OpenAI first invented the technique but kept it under lock and key; the smaller company DeepSeek managed to replicate it and made their methods open source for everyone, and then Alibaba built it into their Qwen line in a new model called QwQ, which dropped recently, performs almost as well as GPT-4 on some benchmarks, and can be run on consumer hardware with as little as 24GB of VRAM.
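Roughly, the trick looks like this. The sketch below only illustrates the prompt-and-parse shape, with a canned generate() stand-in instead of a real model call, so don’t read it as how OpenAI or DeepSeek actually implement it:

```python
import re

def generate(prompt: str) -> str:
    # Stand-in for a real model call (e.g. a local QwQ instance). Returns a
    # canned response so the sketch runs without any model installed.
    return ("<think>Start with 12 apples, eat 4, that leaves 8; "
            "buy 6 more, 8 + 6 = 14.</think> You have 14 apples.")

def ask_with_reasoning(question: str):
    # Reasoning-style models emit an internal scratchpad before the visible
    # answer; the caller usually shows only the part after the scratchpad.
    prompt = f"Think step by step inside <think> tags, then answer.\n\n{question}"
    raw = generate(prompt)
    match = re.search(r"<think>(.*?)</think>\s*(.*)", raw, re.DOTALL)
    thoughts, answer = (match.group(1), match.group(2)) if match else ("", raw)
    return thoughts.strip(), answer.strip()

thoughts, answer = ask_with_reasoning(
    "I have 12 apples, eat 4, then buy 6 more. How many do I have?")
print("scratchpad:", thoughts)
print("answer:", answer)
```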
All the major breakthroughs happening recently are coming not from having more data but from using the data in cleverer ways. Just recently a diffusion LLM dropped which produces text output but borrows the same techniques used in image generation: rather than generating token by token, it outputs a random sequence all at once and keeps refining it until it makes sense. This approach took hold in image generation because an uncompressed image is megabytes of data while an LLM response is only a few kilobytes, so generating an image the sequential way an LLM generates text would simply be too slow; yet applying the image-generation method to what LLMs do turns out to produce reasonable outputs faster than any traditional LLM.
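Here’s a cartoon of that refinement loop, assuming nothing about the real architecture. In an actual diffusion LM the denoiser is a trained network predicting tokens from context; here a trivial stand-in just nudges a noisy sequence toward a target sentence so you can watch the whole output sharpen in parallel over a few steps:

```python
import random

# Toy diffusion-style refinement: start from pure noise and correct all
# positions in parallel each step, instead of emitting tokens one at a time.
TARGET = list("diffusion models refine whole sequences in parallel")
VOCAB = "abcdefghijklmnopqrstuvwxyz "

def denoise_step(seq, fix_prob):
    # Stand-in "denoiser": each still-noisy position is corrected with
    # probability fix_prob; a real model would predict tokens instead.
    return [t if t == g or random.random() > fix_prob else g
            for t, g in zip(seq, TARGET)]

random.seed(0)
seq = [random.choice(VOCAB) for _ in TARGET]   # fully random starting text
for step in range(6):                          # a handful of parallel passes
    seq = denoise_step(seq, fix_prob=0.5)
    print(f"step {step}: {''.join(seq)}")
```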
This is a breakthrough that just happened; here’s an IBM article on it from 3 days ago!
https://www.ibm.com/think/news/diffusion-models-llms
The breakthroughs are really not happening in huge data collection right now. Companies will still steal all your data because big data collection is still profitable to sell to advertisers, but it’s not at the heart of the AI revolution right now. That is coming from computer science geniuses who figure out how to use the data in more effective ways.
Personally I think general knowledge is kind of a useless metric, because you’re not really developing “intelligence” at that point, just a giant dictionary, and of course bigger models will always score better simply because they are bigger. In some sense training an ANN is kinda like compressing a ton of knowledge, so the more parameters it has, the less lossy the compression and the more it knows. But having an absurd amount of knowledge isn’t what makes humans intelligent; most humans know very little. It’s problem solving. If we have a problem-solving machine as intelligent as a human, we can just give it access to the internet for that information. Making a model bigger with more general knowledge isn’t, imo, genuine “progress” in intelligence. The recent improvements from adding reasoning are a better example of genuine gains in intelligence.
These bigger models only score better because they have memorized so much that they have seen similar questions before. Genuine improvements to intelligence, and genuine progress in this field, come when people figure out how to improve the results without more data. These massive models already have more data than any human could access in hundreds of lifetimes. If they aren’t beating humans on every single test with that much data, then clearly something else is wrong.