The 2002 New England Journal of Medicine article by Madsen et al., often cited as pivotal in “debunking” the link between MMR vaccination and autism, contains numerous red flags that point to serious flaws and, potentially, elements of scientific misconduct or fraud. A thorough forensic analysis reveals multiple issues involving inappropriate handling of confounders, diagnostic biases, misrepresented incidence rates, and modeling weaknesses. Here’s a critique of the study, line by line and layer by layer:
Inaccurate and Obsolete Autism Rates
The authors reported 7.7 per 10,000 for autistic disorder and 22.2 per 10,000 for other autism spectrum disorders in 8-year-old children. But by 2002, multiple studies had already documented rates several times higher. Even CDC data showed rising prevalence by the late 1990s. For example:
- The California DDS (1999) report cited in the study noted a 373% increase in autism caseloads between 1987 and 1998.
- The authors mention in passing that Danish rates rose to >10 per 10,000 in 2000, but then claim their lower prevalence is valid and comparable.
Conclusion: They relied on outdated diagnostic tracking systems, which likely underreported true autism prevalence, especially in the unvaccinated group, which was younger and therefore less likely to have been diagnosed yet. This artificially suppresses the apparent autism incidence in the unvaccinated group, potentially reversing the actual signal.
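To see how far differential ascertainment alone can push a comparison, here is a minimal arithmetic sketch in Python. Every number in it is an assumption chosen for illustration (a 30-per-10,000 true risk, a 90% capture fraction in the older vaccinated group, and varying capture fractions in the unvaccinated group); it demonstrates the mechanism, not a reanalysis of the Danish data.

```python
# Hypothetical arithmetic only (no study data): identical true autism risk
# in both groups, but the younger unvaccinated group has only a fraction
# of its true cases captured by the registry at end of follow-up.

true_risk = 30 / 10_000        # assumed true risk, identical in both groups
capture_vax = 0.90             # older vaccinated group: most cases diagnosed

for capture_unvax in (0.90, 0.60, 0.30):
    observed_rr = (true_risk * capture_vax) / (true_risk * capture_unvax)
    print(f"unvaccinated capture {capture_unvax:.0%}: observed RR = {observed_rr:.2f}")
```

Under a true null (RR = 1.0), undercapture in the younger group alone drives the observed ratio as high as 3.0. The distortion runs in whichever direction the differential ascertainment points, which is exactly why a comparison against a young, under-diagnosed group cannot be read at face value.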
Model Overfitting via Confounder Overadjustment
In their multivariable Poisson regression model, the authors included seven covariates:
- Age (as a time-dependent variable)
- Calendar year
- Sex
- Birth weight (5 levels)
- Gestational age (3 levels)
- Mother’s education (5 levels)
- Socioeconomic status (6 levels)
Many of these are collinear or act as surrogates for one another (e.g., mother’s education vs. socioeconomic status; gestational age vs. birth weight), inflating the risk of overfitting the model. Additionally, age was the dominant confounder, since diagnosis rates depend heavily on age. Yet the model treated age, calendar year, birth cohort, and diagnostic delay, all intertwined temporal factors, as if they were independent, diluting statistical power.
Conclusion: This is classic overadjustment bias, where true effects may be nullified by excessive modeling of correlated variables. The low event count (316 autistic cases in a cohort of over 500,000) further compounds the problem of sparse data with too many predictors.
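A small simulation makes the variance-inflation mechanism concrete. This is synthetic data, not the study's: two covariates correlated at roughly r = 0.95 stand in for mother's education and socioeconomic status, the exposure has no true effect, and events are kept sparse (a few hundred in 20,000) to mimic the low-event-count setting. It is a sketch of the statistical phenomenon, not a reanalysis.

```python
# Synthetic-data sketch of overadjustment with collinear covariates:
# watch the standard errors of the correlated pair inflate when both
# enter the Poisson model together.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
z1 = rng.normal(size=n)                     # stand-in for mother's education
z2 = 0.95 * z1 + 0.31 * rng.normal(size=n)  # correlated SES proxy (r ~ 0.95)
exposure = rng.binomial(1, 0.8, size=n)     # "vaccinated" indicator, null effect

# True model: sparse Poisson outcome driven by z1 only.
y = rng.poisson(np.exp(-5.0 + 0.3 * z1))

data = {"exposure": exposure, "z1": z1, "z2": z2}
for cols in (["exposure", "z1"], ["exposure", "z1", "z2"]):
    X = sm.add_constant(np.column_stack([data[c] for c in cols]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    ses = dict(zip(["const"] + cols, np.round(fit.bse, 3)))
    print(cols, "->", ses)
```

Adding the second, nearly redundant covariate roughly triples the standard errors of the correlated pair without improving what the model can say about the exposure, which is the sense in which overadjustment squanders the information in an already sparse outcome.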
Misclassification and Diagnostic Delay Bias
The average age at MMR vaccination was 17 months, but the mean diagnosis age was 4 years and 3 months, meaning many vaccinated children had years of exposure time before a potential diagnosis. In contrast, unvaccinated children tended to be younger and less likely to be diagnosed at all.
They acknowledge that “much of the follow-up for the unvaccinated group involved young children, in whom autism is often undiagnosed,” yet they do not correct for this delay in diagnosis.
Conclusion: This leads to immortal time bias and diagnostic latency bias, both of which undercount autism in the unvaccinated group. Their claim that this somehow did not affect the results is scientifically untenable.
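The follow-up-age mechanism can be demonstrated with a synthetic cohort. All parameters below are assumptions for illustration (a 0.6% true risk in both groups, diagnosis possible only after age 3, and follow-up ages drawn to make the unvaccinated group younger); the function name observed_per_10k is ours, not the study's.

```python
# Synthetic-cohort sketch: identical true risk, but a case can only be
# *observed* if follow-up extends past the age at which diagnosis
# becomes possible. Young follow-up deflates observed prevalence.
import numpy as np

rng = np.random.default_rng(1)

def observed_per_10k(follow_up_ages, true_risk=0.006, dx_age=3.0):
    affected = rng.random(follow_up_ages.size) < true_risk
    observed = affected & (follow_up_ages > dx_age)
    return observed.mean() * 10_000

vax_fu = rng.uniform(2.0, 10.0, size=400_000)   # mostly followed past age 3
unvax_fu = rng.uniform(1.0, 4.0, size=100_000)  # mostly followed before age 3

print("vaccinated, observed cases per 10,000:  ", round(observed_per_10k(vax_fu), 1))
print("unvaccinated, observed cases per 10,000:", round(observed_per_10k(unvax_fu), 1))
```

With identical true risk, the younger group's observed prevalence comes out at well under half the older group's, purely because its follow-up so often ends before a diagnosis could be recorded.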
Use of Time-Dependent Exposure Misclassifies Outcomes
The authors treated vaccination status as time-dependent: children counted as unvaccinated until they received MMR, at which point they were reclassified as vaccinated. However, if a child was diagnosed with autism at any point, their vaccination status was recorded as of that date.
This means:
- A child could receive a diagnosis and then a vaccine later, and still be counted as an unvaccinated case of autism.
- Conversely, any autism diagnosis that occurred post-vaccination was assumed not to be caused by the vaccine unless it appeared very soon afterward.
Conclusion: This reclassification design inflates autism rates in the unvaccinated group and breaks the temporality rule for causal inference—a hallmark of p-hacking and statistical obfuscation.
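The classification rule the article describes can be written out explicitly. The sketch below is our reconstruction under stated assumptions, not the study's actual code: Child, classify, and the simplified rules (status frozen at the diagnosis date, person-time split at the MMR date) are hypothetical stand-ins for the design as described above.

```python
# Reconstruction (simplified, hypothetical) of time-dependent exposure
# classification: unvaccinated person-time until the MMR date, vaccinated
# after it; a diagnosis is credited to whichever status held on that date.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Child:
    follow_up_end: float             # age (years) at end of follow-up
    mmr_age: Optional[float] = None  # age at MMR; None if never vaccinated
    dx_age: Optional[float] = None   # age at diagnosis; None if none

def classify(c: Child):
    """Return (unvaccinated person-years, vaccinated person-years,
    group credited with the diagnosis or None)."""
    end = min(c.dx_age, c.follow_up_end) if c.dx_age is not None else c.follow_up_end
    switch = c.mmr_age if (c.mmr_age is not None and c.mmr_age < end) else end
    dx_group = None
    if c.dx_age is not None:
        vaccinated_first = c.mmr_age is not None and c.mmr_age <= c.dx_age
        dx_group = "vaccinated" if vaccinated_first else "unvaccinated"
    return switch, end - switch, dx_group

# Diagnosed at 4, vaccinated at 5: still counted as an UNVACCINATED case.
print(classify(Child(follow_up_end=8.0, mmr_age=5.0, dx_age=4.0)))
# Vaccinated at ~17 months, diagnosed at 4.25: a VACCINATED case.
print(classify(Child(follow_up_end=8.0, mmr_age=1.4, dx_age=4.25)))
```

The first call shows the scenario in the first bullet above: the later vaccination never enters the record, so the case accrues to the unvaccinated group.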
P-Hacking via Calendar Time Splits
The authors split their analysis into multiple windows by:
- Age at vaccination
- Time since vaccination
- Year of vaccination
This fragmentation of the data (see Table 2) into many subgroups reduces statistical power, introduces noise, and allows potentially hidden cherry-picking of favorable intervals, as was reported to have occurred in the DeStefano et al. (2004) study, when Coleen Boyle directed William Thompson to try adjusting age-group clusters to render the findings insignificant. The reporting of numerous p-values without adjustment for the many comparisons being made also muddies interpretation and, because each fragment is underpowered, opens the door to false negatives (Type II errors).
Conclusion: This is textbook p-hacking—testing multiple windows until nothing reaches statistical significance, then claiming no effect.
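A short simulation shows the power cost of fragmentation. The data and parameters are synthetic and illustrative (a modest true risk ratio of 1.4, a 0.4% baseline risk, twelve equal windows); the test is an ordinary two-proportion z-test, not the study's Poisson model.

```python
# Synthetic illustration: an effect detectable in pooled data disappears
# when the same data are carved into many small windows tested separately.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_exp, n_unexp = 50_000, 50_000
p0, rr = 0.004, 1.4               # assumed baseline risk and true risk ratio

def two_prop_p(x1, n1, x2, n2):
    p = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 2 * stats.norm.sf(abs((x1 / n1 - x2 / n2) / se))

pooled = two_prop_p(rng.binomial(n_exp, p0 * rr), n_exp,
                    rng.binomial(n_unexp, p0), n_unexp)
print(f"pooled p-value: {pooled:.4f}")

k = 12                            # split into 12 "windows" (age, year, ...)
hits = sum(
    two_prop_p(rng.binomial(n_exp // k, p0 * rr), n_exp // k,
               rng.binomial(n_unexp // k, p0), n_unexp // k) < 0.05
    for _ in range(k))
print(f"windows reaching p < 0.05: {hits} of {k}")
```

Pooled, the effect is unambiguous; carved into twelve windows, most individual tests fail to reach significance, and a report that presents only the per-window p-values can then read as a null result.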
Lack of Regression Subtype Analysis
They admit they could not determine whether children had regressive autism or early-onset autism, nor could they identify autistic children with gastrointestinal symptoms—both central to the Wakefield hypothesis.
Conclusion: Their negative findings cannot address the specific autism phenotype most plausibly linked to MMR. This is a misdirection by scope.
CDC Funding, CDC Coauthor, and Missing Conflict of Interest Disclosures
The study was co-authored by Diana Schendel of the CDC, and the funding came in part from the CDC and the National Vaccine Program Office. Given the CDC’s vested interest in vaccine policy, this represents a serious undisclosed conflict of interest, especially since coauthor Poul Thorsen was later indicted for fraud involving CDC grant funds.
Conclusion: This constitutes an institutional conflict of interest that should have been disclosed—and taints the independence of the study’s design, conduct, and reporting.
Unbalanced Group Sizes and Power Claims
Despite claiming strong statistical power, the groups are vastly unequal:
- 440,655 vaccinated vs. 96,648 unvaccinated children
- Only 53 autistic children in the unvaccinated group
This makes any estimate of relative risk for the unvaccinated subgroup unstable, especially given diagnostic underreporting in this younger population, which was less likely to present for diagnosis.
Conclusion: The study lacked adequate power to detect risk increases in subpopulations, despite claims to the contrary.
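A back-of-the-envelope confidence interval shows how little the 53 unvaccinated cases can exclude. The case counts and group sizes below come from the figures quoted in this piece (263 vaccinated cases is simply the remainder of the 316 total); using children as a stand-in for person-time is a simplification, and the variance of the log rate ratio uses the standard Poisson approximation 1/a + 1/b.

```python
# Crude rate-ratio CI from the counts quoted above; children stand in
# for person-time, which is a simplification of the study's analysis.
import math

cases_vax, cases_unvax = 263, 53          # 316 total autistic-disorder cases
n_vax, n_unvax = 440_655, 96_648

rr = (cases_vax / n_vax) / (cases_unvax / n_unvax)
se_log_rr = math.sqrt(1 / cases_vax + 1 / cases_unvax)  # Poisson approx.
lo, hi = (rr * math.exp(s * 1.96 * se_log_rr) for s in (-1, 1))
print(f"crude RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Even before any subgroup split, the crude interval stretches to roughly 1.46; any stratum holding only a handful of those 53 unvaccinated cases widens it much further, which is the instability the section above describes.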
Final Verdict: A Horrendously Conducted Study
This study is often cited as definitive, yet it is methodologically unsound and statistically compromised. It should be treated not as definitive evidence, but as a case study in institutional bias, poorly constructed regression modeling, and the dangers of unbalanced, underpowered comparisons paired with predetermined outcomes.
IPAK-EDU is grateful to Popular Rationalism, where this piece was originally published; it is included in this news feed by mutual agreement.