- Climate
What is peer-review? Why one ‘peer-reviewed’ Grok-written paper doesn’t disprove climate change
When is a scientific paper not a quality scientific paper?
On 21 March 2025, a paper emerged claiming that humans aren’t responsible for climate change. One of the paper’s authors, Jonathan Cohler, claimed on X that it was the “first-ever peer-reviewed climate science paper”. Curiously, the paper credits the AI large language model Grok 3 as its lead author.
The paper quickly gained traction online. Robert Malone, a well-known source of vaccine-related misinformation, shared it in a blogpost that has garnered more than one million views. A 25 March 2025 article from the website Slay News called it the “first-ever artificial intelligence-led peer-reviewed climate science paper” and stated that it “has confirmed” that climate change is “a hoax”.
This idea is incorrect – the paper hasn’t confirmed anything. There’s overwhelming evidence that human activity is changing the climate[1], far outweighing papers like this one. Below, we’ll show why this paper isn’t credible. We’ll start by talking about how peer-review works.
Main Takeaways:
- Peer-review is a standard part of science. Done well, peer-review helps ensure that scientists conduct quality work. Yet some “peer-review” does not live up to this standard. Merely calling a paper “peer-reviewed” does not mean the peer-review was effective.
- No credible journal would accept a paper that credits an AI author. AI large language models are prone to spreading incorrect information, and a large language model can’t assume responsibility.
- There’s overwhelming evidence, collected over decades, that humans are changing Earth’s climate. A single paper does not outweigh all that evidence – this isn’t how science works.
Proper peer-review is a key part of quality science, but not all peer-review is properly done
Peer-review is a way for scientists to check each other’s work. It’s a key part of modern research, including in climate science – virtually every credible climate science paper is peer-reviewed. Thus, calling any new climate science paper the “first-ever peer-reviewed” one is incorrect.
Here’s a brief description of how it works in most cases.
When a researcher or a group of researchers wants to publish some of their work, they write a paper describing their research and how it fits into the bigger picture, then submit that paper to a journal. There are thousands of different journals around the world – some like Nature and Science publish all sorts of research, but most publish research specific to one field or subfield.
If a journal thinks a paper may be right for them, then it approaches reviewers to examine the paper. In most cases, the reviewers are not paid by the journal, but are volunteers from the scientific community – the authors’ peers, in other words.
After reading a paper, a reviewer can make one of three recommendations. First, they can recommend that the journal publish the paper without edits – this is rare. Second, if they judge the paper too flawed to salvage, they can recommend the journal reject it entirely. Third, they can return the paper with questions for the authors or suggestions for improvement.
If reviewers take the third option, then the journal can send the paper back to the authors, who can address the comments. Then the paper is sent back to the reviewers, who can again accept the paper, recommend it be rejected, or send it back with more comments. This cycle often repeats, with the paper going back and forth between the authors and the reviewers multiple times. The peer-review process can thus last many months.
If the paper reaches a stage where all its reviewers are content, it can move along for the journal to publish.
So we can see that peer-review’s effectiveness depends on the quality of the reviewers and the thoroughness of their reviews. A good reviewer is an expert in the paper’s field. In a thorough review, multiple reviewers carefully evaluate whether the paper is good science.
María de los Ángeles Oviedo-García, Professor of Marketing and Administration at the University of Seville, who has studied peer-review, told Science Feedback:
“peer-reviewers should be able to identify research flaws in the manuscript (those flaws might be key, and then make the manuscript unpublishable, or not and be solved so the manuscript will have further rounds of peer-review) such as identifying gaps in the relevant previous research the authors work with in order to set the research problem/hypothesis or assessing the relevance of the results and its accordance with the collected data.”
Most (but not all) journals keep the process confidential. In many cases, this is intended to make reviews more effective – a reviewer can speak more freely about a paper if its authors don’t know the reviewer’s identity, for instance. But in less reputable journals, the closed nature of the process can also make it difficult to judge a peer-review’s quality from the outside.
Debora Weber-Wulff, Professor for Media and Computing at the HTW Berlin, who has studied academic integrity, told Science Feedback:
“Many so-called predatory publishers state that they do peer-review, but it is impossible from the outside to judge the quality of the peer-review. I have had journals run spelling checks and call that peer-review […] Just stating that peer-review happens does not mean that actual peers critically reviewed the paper.”
There are more examples of peer-review not living up to its name: numerous cases have come to light of fake reviews, and of researchers creating fake identities to review their own papers. We shouldn’t take these as evidence against science, but they show that simply calling a paper peer-reviewed isn’t a mark of good science if the peer-review wasn’t conducted properly.
With that in mind, let’s examine the paper in question.
The quality of the Grok-written paper’s supposed “peer-review” is highly doubtful
This paper appeared on a website calling itself Science of Climate Change. If we take the website at face value, its articles are examined by at least two reviewers. But, as we’ve seen, simply because a journal says that peer-review happens does not mean that it actually meets that standard.
Weber-Wulff noted to Science Feedback:
“The article was apparently submitted on 2025-03-06 and accepted 12 days later. Serious peer-review takes time”
Science Feedback did not find “Science of Climate Change” when we searched its name in several databases of scientific literature, including SCImago, the Web of Science, and Index Copernicus, indicating that it isn’t accepted as a credible scientific journal. This is a sign that the paper’s peer-reviewers, if they exist, are not experts in climate science.
Science of Climate Change describes its objective as “to publish…scientific contributions, which contradict the often very unilateral climate hypotheses of the IPCC and thus, to open the view to alternative interpretations of climate change”. This suggests that the website’s true purpose is to only publish papers supporting a particular conclusion – in other words, that it’s openly biased.
This isn’t how science works – quality science should limit its bias as much as possible. Bias can hurt science by leading to inaccurate results.
Examining the website and the paper’s authors gives us more evidence of bias. The website is published by the so-called “Norwegian Climate Realists”, an organization dedicated to opposing climate science. The paper’s list of authors includes well-known climate change deniers such as Willie Soon and David Legates, who have a history of pushing climate misinformation. They are not considered credible by the scientific community.
A better mark of a paper’s credibility is whether it has been published in a credible journal, which is far more likely to use higher-quality peer-review. You can quickly gauge this by searching a journal’s name in SCImago, which rates journals by how often other scientists have cited them. If SCImago rates a journal as Q1, the journal is within the top 25% of all journals in its field, and it’s likely to be credible.
Grok’s involvement is evidence against the paper’s credibility
The Science of Climate Change paper’s authors created it using Grok 3 – an AI large language model developed by Elon Musk’s company xAI. According to the paper, Grok 3 generated most of the paper’s content, which the other authors then edited.
The authors credit Grok 3 as a fellow author. In itself, this is a red flag. What’s more, Grok 3 is the study’s first author – a slot usually reserved for the scientist who contributed the most work to a paper.
Crediting an AI tool as an author at all is a sign that neither the authors nor the journal is reputable. When Science Feedback asked Weber-Wulff whether a paper crediting an AI as author would pass at a reputable journal, she replied:
“Absolutely not! Neither as author and most certainly not as first author, since all authors must take responsibility for the entire paper. An automaton cannot assume responsibility. Since Large Language Models are in essence text-extruding machines that are based on stochastic methods, they cannot be responsible for anything.”
Crediting an AI tool as an author is forbidden by many journals, which instead ask authors to simply disclose in their paper whether and how they’ve used AI. The Committee on Publication Ethics, which sets guidelines that many journals follow, agrees with this view:
“AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.”
Oviedo-García told Science Feedback:
“even if the manuscript passes peer-review, editors (editor in chief and handling editor) should have prevented this manuscript from publication for not respecting authorship criteria.”
In addition to their inability to take responsibility, large language models like Grok are well-known for often getting facts wrong. While many AI developers have worked to prevent their models from repeating inaccurate information, Grok’s developers in particular have faced criticism for failing to do so. So, Grok’s heavy involvement here is further reason not to trust the paper.
A single paper cannot “confirm” a scientific idea
There is an immense body of evidence unequivocally showing the climate is warming, at rates that are unprecedented in recent history, and that this warming is driven by greenhouse gases from human activities, especially burning fossil fuels[1]. Scientists have collected this evidence over decades.
Although the IPCC is often depicted as a single entity making heavy-handed decisions, the reality is that its statements result from the scientific processes we’ve described. That’s why its reports often cite hundreds of other papers – they’re trying to summarize all of this evidence to best make sense of it.
If there really were evidence that climate change has been exaggerated – and, to be absolutely clear, scientists don’t have such evidence – it would need to form a consistent pattern strong enough to outweigh all of that. A single paper wouldn’t be enough.
References:
- 1 – IPCC. (2022) Climate Change 2022: Impacts, Adaptation and Vulnerability.