糖心Vlog

Fear of stigma blamed as 0.1 per cent of papers declare AI use

Worries over admitting ChatGPT use for editing and drafting may explain extremely low disclosure rates, study suggests

Published on February 25, 2026
Last updated February 25, 2026
Source: iStock/Mikhail Rudenko

Only one in 40 scientific papers suspected of deploying artificial intelligence (AI) writing tools admits using them, says a new study, which suggests the stigma of admitting ChatGPT use might explain this exceptionally low figure.

Analysing more than 75,000 research papers published since 2023, researchers at Peking University found only 0.1 per cent, or 76 in total, explicitly disclosed the use of AI writing tools despite 70 per cent of journals having clear rules on this matter.
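The headline figures are easy to verify: a minimal back-of-the-envelope check, using only the numbers reported in the article (the variable names are illustrative, not from the study itself):

```python
# Sanity-check of the reported disclosure rate: 76 disclosures
# among "more than 75,000" papers analysed since 2023.
papers_analysed = 75_000   # lower bound given in the article
disclosed = 76             # papers explicitly disclosing AI writing tools

rate_pct = disclosed / papers_analysed * 100
print(f"Disclosure rate: {rate_pct:.2f}%")  # about 0.10%, matching the reported 0.1 per cent
```

Since 75,000 is only a lower bound on the sample, the true rate is at most this value, consistent with the "0.1 per cent" rounding in the text.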

Nonetheless, that low disclosure rate had risen substantially from early 2023, shortly after the launch of ChatGPT, when it was just 0.01 per cent, reaching 0.43 per cent by the first quarter of 2025, explains the study, published in the journal PNAS.

“Despite this growth, transparency lags dramatically behind actual adoption,” explain the study’s authors Yongyuan He and Yi Bu, who found a far greater proportion of the papers had likely used AI writing tools for general tasks such as writing assistance, editing and readability improvements.


“For every 40 papers showing statistical evidence of AI usage, only one formally disclosed it,” concludes the paper.

Disclosure rates did not significantly differ depending on whether a journal had a policy requiring a statement on AI use or not, it adds, suggesting there is still “ambiguity regarding the extent to which usage requires formal disclosure”.


With a few exceptions, academic publishers have opted against outright bans on the use of AI, with nearly all of the 5,114 journals surveyed allowing AI for writing and editing, and 63 per cent permitting its use for language and grammar checking, the paper explains.

However, the failure to admit AI use might also be driven by concerns “about how such disclosures will be perceived by editors, reviewers and the wider scientific community”, the paper argues.

“[Authors] might worry that admitting to the use of AI could cast doubt on the originality of their intellectual contributions, potentially leading to stricter scrutiny, bias during peer review, or negative impact on their reputation,” it says.

“Academia has yet to establish a consensus on norms governing the use of AI, and relevant policies are still evolving, which contributes to authors’ cautious approach in deciding whether and how to disclose their use of AI.”


jack.grove@timeshighereducation.com



Reader's comments (3)

G
The fundamental problem here is what you mean by AI. I have used Dragon voice recognition software for many years, because of a disability. That's a form of AI. So is a spellchecker. I don't use ChatGPT or any of the other recent writing assistants. I would resent having Dragon placed in the same category, but can't see how it could be avoided.
1 in 40 papers SUSPECTED of using "AI". Other than fear of stigma, the article fails to consider an alternative explanation: the authors simply did not use "AI", and the grounds for marking a paper as suspect are incorrect. Just what were the grounds for suspicion used in the study? This renders the whole article deeply flawed. There is no science here, merely guesswork, so why publish it? Do the job properly and the results might be interesting and worth reporting.
