Summary: Single-source brief on AI grade inflation offers useful data but relies almost entirely on one researcher with no corroborating or dissenting voices.
Critique: The ChatGPT era prompts a boom in A-graded coursework
Source: Axios
Authors: Josephine Walker
URL: https://www.axios.com/2026/05/16/ai-grade-inflation-college-classes
What the article reports
A UC Berkeley professor, Igor Chirikov, published a study finding that since ChatGPT's release in 2022, "excellent" grades rose 30% in AI-amenable classes at an unnamed Texas research university, while grades in non-AI-amenable classes stayed flat. The piece frames this as evidence of AI-driven grade inflation and quotes Chirikov on potential remedies including AI-integrated assignments.
Factual accuracy — Adequate
The core claim — a 30% rise in "excellent" grades since 2022 in AI-amenable classes — is attributed to Chirikov's study and is internally consistent with the described methodology (homework-weighted vs. exam-weighted classes, 2018–2025 data window). No outright factual errors are detectable from the text. However, the study's findings are presented without a link, citation, or journal name, making independent verification impossible. The article does surface the study's unnamed-university limitation itself: "Chirikov didn't name the university used in the study." By contrast, the claim that "Universities and colleges were already concerned about how many students are earning A's and B's" is asserted as background fact without any sourced data point to anchor it — a notable gap for a claim that could easily be quantified.
Framing — Adequate
- "boom in students earning A's" — The headline and lede use "boom," a word connoting dramatic growth, without first establishing a baseline. A 30% rise in one grade category at one university is noteworthy, but "boom" implies a widespread, dramatic change the article doesn't fully document.
- "AI-proficient rather than knowledgeable about their subjects" — This interpretive framing in the "Why it matters" section treats a contested assumption (that AI use implies a lack of subject knowledge) as settled fact, without attribution to any source.
- "getting crafty to crack down on AI-fueled cheating" — The terms "crack down" and "cheating" are editorially charged; the article neither establishes that all AI use in homework constitutes academic dishonesty nor quotes any faculty or institutional voice on actual policies.
- On the positive side, the closing Chirikov quote — "AI just exacerbates the existing trends" — does provide some moderating nuance and is allowed to stand on its own without further editorial amplification.
Source balance
| Voice | Affiliation | Stance |
|---|---|---|
| Igor Chirikov | UC Berkeley professor / study author | Central — supports the grade-inflation-via-AI thesis |
| (no second source) | — | — |
Ratio: 1 supportive : 0 critical : 0 neutral. No skeptical researcher, no university administrator, no student, no faculty member outside the study is quoted. The entire evidentiary and interpretive load rests on one researcher who is also the author of the study being reported on.
Omissions
- Study publication details — The article never names the journal, preprint server, or publication status of Chirikov's study. Readers cannot locate or assess it independently.
- Pre-AI grade inflation baseline — The piece notes grades have risen since the "early 2000s" but gives no numbers. Without a baseline rate, the 30% post-2022 rise cannot be contextualized as exceptional or expected.
- Dissenting or alternative interpretations — No researcher who questions the AI-causation hypothesis (e.g., pandemic-era grading policy changes, instructor adaptation) is quoted. The causal link between ChatGPT availability and grade rises is asserted, not demonstrated in the text.
- Institutional policy landscape — Many universities have published AI-use policies since 2022; none are cited, leaving readers without a sense of what guardrails already exist.
- Student perspective — No student voice appears, despite students being the subjects of the described behavior.
What it does well
- The article is transparent about a key limitation of the underlying study — "Chirikov didn't name the university" — and surfaces his own reasoning for that choice, letting readers assess it.
- The contrast between AI-amenable and non-AI-amenable classes ("In classes where it's not — like sculpture and lab-based courses — grades remained flat") is a clean methodological detail that gives readers a sense of the study's internal controls.
- Chirikov's self-moderating quote — "There are many cases when students can select easier courses and get easier A's" — is included, preventing the piece from overstating AI as the sole cause.
- The format is appropriately scoped for a 430-word brief; it does not overreach structurally.
Rating
| Dimension | Score | One-line justification |
|---|---|---|
| Factual accuracy | 7 | No outright errors, but the study is uncited and background claims are unsourced |
| Source diversity | 3 | One source — the study's own author — carries the entire piece |
| Editorial neutrality | 6 | "Boom," "crack down," and the unattributed AI-equals-ignorance framing tilt the piece, though moderating quotes are included |
| Comprehensiveness/context | 5 | Missing study citation, pre-AI baseline data, dissenting researchers, and student voices |
| Transparency | 7 | Byline present, study limitation disclosed, but no study link or journal name |
Overall: 6/10 — A serviceable brief on an important trend, undermined by single-source reliance and missing context that would let readers assess the study's strength independently.