April 05, 2025

The influence of large language models on academic publishing

As of 2025, the influence of artificial intelligence on academic publishing is surprisingly small. In contrast to social media, which has been flooded with AI-generated images, the number of AI-generated academic papers is small or even zero. To investigate the subject, we first have to count the papers.
According to public databases such as OpenAlex and Google Scholar, there are around 160 to 250 million papers available, with roughly 5 million new papers added every year. The number of new papers has grown in recent years by a modest 4% annually, which can be explained by higher author productivity or a small increase in the number of authors. So in general the situation is fairly static, which can be explained by the strict peer review system used for conference proceedings and academic journals.
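To make the growth figures concrete, here is a small sketch of what 4% annual growth implies for yearly output. The 5 million new papers per year and the 4% rate come from the numbers above; the function name and the ten-year horizon are illustrative assumptions, not part of any cited source:

```python
# Illustrative projection of yearly paper output at a constant growth rate.
# Starting value (5 million new papers/year) and growth rate (4%) are taken
# from the text; the ten-year horizon is an arbitrary choice for this sketch.

def project_new_papers(start_millions=5.0, growth_rate=0.04, years=10):
    """Return the projected number of new papers (in millions) per year."""
    counts = []
    current = start_millions
    for _ in range(years):
        counts.append(round(current, 2))
        current *= 1 + growth_rate
    return counts

projection = project_new_papers()
print(projection[0])   # 5.0 million in the first year
print(projection[-1])  # about 7.12 million after a decade of 4% growth
```

Even compounded over a decade, 4% growth only lifts output from 5 to about 7 million papers per year, which supports the point that the overall situation is fairly static.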
Peer review means that before an eight-page paper gets published, two or more human reviewers have to confirm that its content is valid and of high quality.
The open question is whether artificial intelligence, and especially large language models, can increase the paper count in the academic domain, for example by writing 10 million new papers per year instead of the current 5 million human-written ones.
One obstacle is the need for novelty in scholarly writing. What academic authors try to achieve is to write something new. This can be a new subject or a new perspective on an existing issue. Even if not all published papers are excellent, the claim is always that a report or essay is innovative. This kind of requirement is difficult to reproduce with large language models. At least for now, it is not possible to generate high-quality academic papers with a computer program.
