IPAK 2026 PRELIMINARY REPORT ON GLYPHOSATE: Balance of Evidence on “Harm Found” vs. “Failure to Find Evidence”

Glyphosate, the active ingredient in Roundup and other glyphosate-based herbicides (GBHs), is the world’s most widely used weed killer. Its pervasive use in agriculture and landscaping means that trace residues are frequently detected in food, water, and even human urine. As public exposure increased, so did scientific scrutiny of glyphosate’s safety. Decades of studies have yielded a complex body of evidence – some research reports evidence of harm (adverse effects) while others report null findings (no detected harm). This IPAK Preliminary Report analyzes 1,071 studies on glyphosate, comprising 712 studies showing evidence of harm and 359 studies failing to detect harm, to assess the balance of evidence and evaluate patterns in research and regulation.

A full-text analysis is warranted.


Hazard vs. Risk: A key source of confusion in glyphosate’s public narrative is the distinction between hazard and risk. A hazard is an agent’s potential to cause harm, whereas risk is the likelihood of harm occurring under real-world exposure levels[1]. In 2015 the International Agency for Research on Cancer (IARC) classified glyphosate as “probably carcinogenic to humans” (a hazard identification indicating strong evidence it can cause cancer under some conditions)[2]. In contrast, regulators like the U.S. Environmental Protection Agency (EPA) and European Food Safety Authority (EFSA) focus on risk assessment – they concluded that glyphosate is “not likely to be carcinogenic to humans” when used as labeled[3][4]. In other words, IARC asked can glyphosate cause cancer (and found yes, under certain exposures), while EPA/EFSA asked will it cause cancer at typical human exposure levels (and they answered likely not). This hazard-versus-risk dichotomy underpins much of the policy debate. Throughout this report, we will see how hazard evidence of various health effects has accumulated, even as risk-based regulators maintain that glyphosate is “safe” at permitted exposure levels – a position at odds with the balance of evidence from science.

Scope of This Report: We synthesize findings from the two streams of evidence – the 712 “harm” studies and 359 “no-harm” studies – using curated metadata from their abstracts. On this basis, we conclude that an objective, independent full-study literature analysis is warranted. We will highlight key publication trends over time, summarize study designs and endpoints, and examine systemic patterns such as dose relevance, endpoint selection, and conflicts of interest. We also critically assess regulatory coherence in light of this evidence, questioning whether official safety assurances adequately account for the full scope of independent research. All technical terms are defined at first use for clarity. (For example, Acceptable Daily Intake (ADI) refers to the estimated amount of a chemical a person can ingest daily over a lifetime with no appreciable health risk, as determined by regulators. We will note later that none of these studies reported their tested doses in relation to any ADI, highlighting a communication gap. Similarly, omics refers to high-dimensional biomolecular analyses – e.g. genomics, proteomics – used to detect subtle biological changes.)
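As an illustration of the dose-contextualization gap just noted, expressing a tested dose as a multiple of a regulatory limit is simple arithmetic. The sketch below is purely illustrative: the study dose is hypothetical, while the two limits are the U.S. EPA chronic reference dose and the EU ADI discussed later in this report.

```python
def adi_multiples(dose_mg_per_kg_day, limits):
    """Express a tested dose as a multiple of each regulatory limit (mg/kg bw/day)."""
    return {name: dose_mg_per_kg_day / limit for name, limit in limits.items()}

# Regulatory limits discussed later in this report
limits = {"EPA chronic RfD": 1.0, "EU ADI": 0.5}  # mg/kg body weight/day

dose = 50.0  # hypothetical rodent-study dose (mg/kg/day), for illustration only
for name, mult in adi_multiples(dose, limits).items():
    print(f"{dose} mg/kg/day = {mult:.0f}x the {name}")
```

Routinely reporting such multiples in abstracts would close the communication gap noted above at essentially no cost to authors.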

Overview of the 1,071 Glyphosate Studies

Over the last 30 years, the scientific literature on glyphosate has grown from a trickle in the early 1990s to a flood in the 2010s. Of the 1,071 studies analyzed, 712 (66%) reported some evidence of harm or adverse effect from glyphosate or glyphosate-based formulations, while 359 (34%) reported no significant harm detected. It is important to note that “no harm detected” is not equivalent to proof of safety – it often means the study did not find an effect under the specific conditions tested, which could be due to truly no effect or due to limitations of the study (more on that later).
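The headline split can be sanity-checked with an exact binomial confidence interval. This is an illustrative sketch using SciPy; the counts are those reported above.

```python
from scipy.stats import binomtest

harm, total = 712, 1071  # adjudicated corpus counts reported above
ci = binomtest(harm, total).proportion_ci(confidence_level=0.95)

print(f"harm share: {harm / total:.1%}")         # ~66.5%
print(f"95% CI: {ci.low:.1%} to {ci.high:.1%}")  # roughly 64% to 69%
```

The interval excludes an even 50/50 split, so the harm-reporting majority is not a small-sample artifact; whether it reflects true hazard, study-design choices, or publication patterns is taken up in the sections that follow.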

Trends Over Time: The volume of glyphosate research has increased dramatically in the past decade. In the 1990s, only a handful of relevant studies were published per year. By contrast, in 2023 alone, 86 studies in our dataset reported evidence of harm, while 34 studies reported null findings – the highest annual counts on record. The surge began after 2015, likely spurred by heightened scientific and public interest following IARC’s classification and subsequent high-profile controversies. Before 2015, the number of “harm” versus “no-harm” papers each year was relatively low and sometimes comparable. After 2015, studies showing harm consistently outnumbered null-results studies each year, often by 2-to-1 or more in recent years. This suggests that as methodologies diversified and scrutiny intensified (or as journals relaxed their editorial restrictions), more adverse effects were being detected and reported. At the same time, a steady stream of studies (many funded or co-authored by industry-affiliated scientists) continued to report no significant effects.

From 1991 through the early 2000s, annual publication volume was low and the number of studies reporting harm and no harm was often comparable. Beginning around 2014–2016, total publication volume increased sharply. After 2015—the year IARC classified glyphosate as “probably carcinogenic to humans”—studies reporting evidence of harm consistently outnumbered studies reporting no detected harm in every complete year. In most post-2015 years, harm-reporting studies exceeded null findings by roughly two-to-one or more.

The figure illustrates a sustained divergence between the two evidence streams in the past decade, with peak publication volume occurring in 2023 (86 harm vs. 34 no-harm studies).

Figure 1. Annual publication counts of glyphosate studies reporting evidence of harm versus studies failing to detect harm (1991–2025).

Each point represents the verified number of studies published in a given year within the adjudicated corpus (N = 1,071 total; 712 reporting evidence of harm; 359 reporting no detected harm). Counts are derived directly from abstract-level metadata; 55 records with missing year are excluded.

Types of Harms Detected and Reported

Of the 712 studies in this corpus that reported evidence of harm, 630 studies (88.5%) documented functional toxic effects rather than tumors or purely molecular biomarkers. Within the full harm corpus, the most frequently detected systems were endocrine/metabolic disruption (124 studies; 17.4% of 712), neurotoxicity and behavioral effects (109 studies; 15.3%), and developmental/teratogenic outcomes (107 studies; 15.0%). Liver toxicity appeared in 88 studies (12.4%), while reproductive and fertility impairment was identified in 66 studies (9.3%). Additional domains included gut/microbiome disruption (49 studies; 6.9%), immune or inflammatory effects (47 studies; 6.6%), kidney toxicity (30 studies; 4.2%), and cardiovascular findings (21 studies; 2.9%). Separately, 82 studies (11.5% of 712) explicitly examined or identified harms associated with glyphosate-based formulations (GBHs), rather than glyphosate technical alone.

Key Observations: Both evidence streams draw heavily on animal experiments and in vitro cell studies; relatively few are human studies (e.g. epidemiological investigations). This is understandable – controlled toxicology experiments on humans would be unethical, so human evidence comes mostly from observational studies of exposed populations, which are limited in number. The vast majority of entries had “unclear” model in the abstract metadata, often because the abstract didn’t explicitly state if the work was in animals or cells. Among those that did specify: rodent studies dominate, with a smaller subset of cell culture studies, and only a handful of human observational studies.

Both streams also include studies on pure glyphosate (“technical grade”) as well as formulated herbicides (commercial mixtures of glyphosate with adjuvants, often referred to by brand names like Roundup). Around 49% of harm studies vs. 39% of no-harm studies explicitly examined technical glyphosate, while 12% vs. 14% examined full formulations (the rest were unclear). Formulations tend to be more toxic than glyphosate alone due to added surfactants like POEA (polyethoxylated tallow amine) that enhance herbicidal action but also have their own toxicity[5]. Yet regulatory assessments often consider only the active ingredient. The inclusion of formulation studies in both evidence streams is important – some “no harm” findings apply to glyphosate in isolation, which might understate risks of the real-world product, whereas many “harm” findings come from testing the formulations or comparing them to glyphosate alone.

In terms of endpoints, an overwhelming majority of studies (≈88%) fell into “other functional” effects – meaning non-cancer health outcomes such as endocrine disruption, neurotoxicity, liver/kidney damage, reproductive/developmental effects, microbiome alterations, etc. A modest subset (~11% of harm studies, 10% of null studies) focused on biomarkers or omics endpoints, reflecting modern molecular approaches to detect changes in gene expression, DNA damage, oxidative stress markers, etc. Only a very small fraction (~1%) of studies were classified as direct tumor/carcinogenicity studies. This highlights that carcinogenicity has not been the primary focus of most academic glyphosate research, likely because full two-year cancer bioassays are costly and usually conducted by industry for regulatory submissions. (Indeed, much of the evidence for glyphosate’s carcinogenic hazard comes from industry studies and epidemiological data considered by IARC, rather than from academic labs publishing new rodent cancer trials.) The literature instead is rich with studies showing sub-chronic toxic effects that could plausibly contribute to long-term disease (for example, endocrine system changes, organ toxicity, and biochemical disruptions that might over time elevate cancer or other disease risk).

Exposure duration was rarely clearly stated in abstracts, but where noted, chronic (long-term or repeat dose) studies showing harm (32 studies) outnumbered chronic studies finding no effect (10 studies). This hints that longer-term exposures are more likely to reveal problems that acute studies might miss. Many null findings come from acute or short-term tests (e.g. a single high dose) that may not capture chronic or latent effects. We will revisit why dose and timing (“dose realism”) matter when evaluating these outcomes.

Finally, it’s worth noting the low rate of declared conflicts of interest (COI) or funding sources in the abstracts. Only ~5–8% of studies explicitly declared “no conflicts of interest” in the available abstract metadata, meaning the vast majority did not state their funding or COI status in the abstract. Many journals include COI statements elsewhere (full text or footnotes), so “unclear” here doesn’t automatically mean there was a conflict – it means it wasn’t plainly stated in the abstract data. However, this also reflects a broader issue: potential industry influence is not easy to discern from abstracts alone. This practice should change. Historically, industry-funded research in this arena has been associated with outcomes favoring safety[6][7], whereas independent academic studies more often find hazards. We will present evidence of this pattern in the regulatory context below.

In summary, the weight of published evidence tilts toward finding harm: about two-thirds of these studies documented some adverse effect of glyphosate or its formulations. The remaining one-third with null results serve as an important reminder that context and study design matter – not every experiment will show an effect, depending on what endpoints were measured, at what dose, for how long, and with what statistical power. Next, we delve into the patterns that explain why some studies find harm and others don’t, and what “no effect” really signifies (or doesn’t signify) for declaring glyphosate’s safety.

Why Some Studies Find Harm and Others Don’t

Divergent outcomes in glyphosate research often trace back to differences in study design, dose selection, endpoints, and analytical approach. Here we examine critical factors that influence whether a study detected harm or not, and we expose several systemic patterns that suggest “null results” can sometimes be by-products of study choices rather than proof of innocuousness. Key issues include dose realism, endpoint bundling, conflict of interest and suppression, and multiplicity of comparisons. Understanding these factors is crucial for interpreting the evidence balance.

  • Dose Realism: The phrase “the dose makes the poison” is a bedrock of toxicology, but deciding what dose levels are realistic for humans is non-trivial. Many industry toxicology studies use very high doses to probe worst-case hazard (sometimes up to the MTD, or Maximum Tolerated Dose, which is the highest dose animals can survive). Paradoxically, effects seen only at extremely high doses are often dismissed by regulators as not relevant to real-world exposure. For example, EFSA noted that some tumor increases occurred “only at dose levels at or above the limit dose/MTD” and thus discounted them[8]. On the other hand, some critical effects (like endocrine disruption) may occur only at lower doses and get missed if a study only examined one or two high doses. “Dose realism” also relates to whether studies mimic actual human exposure patterns – e.g. chronic low-dose dietary exposure versus a single bolus dose. The Buonsante et al. (2014) review highlighted that standard regulatory tests often lack realistic low-dose and long-term exposure scenarios, and thus miss important effects with latency (diseases that take time to develop)[9][10]. Academic studies not bound by guideline protocols have explored a wider dose range, but even so, our analysis found no abstracts that contextualized their doses to an ADI or regulatory safety limit. This means an informed reader or policymaker must dig into full papers to judge whether an effect occurred at a fraction of the human acceptable exposure or at a massive dose. In short, dose matters: A “no effect” study that only tested very low doses might be accurate but not reassuring if those doses were far below real human exposures; conversely, a “no effect” at a very high dose may not indicate safety either – it could mean the wrong endpoint was measured (the effect might not be acute lethality but something subtler at lower doses). 
The take-home point is that without dose context, null results cannot automatically be generalized as “proof of safety” – dose realism and relevance must be scrutinized case by case.

  • Endpoint Bundling: This term refers to the practice of combining or selecting endpoints in analysis in a way that might obscure specific adverse outcomes. Regulatory evaluations have been criticized for “bundling” multiple tumor types or outcomes together and then concluding overall there’s no consistent effect. For instance, regulators often require that an increase in a specific tumor type appear in both sexes of animals and across studies before they acknowledge a carcinogenic hazard – effectively bundling evidence and dismissing isolated signals as “not reproducible”. In the EU assessment, it was noted that “no consistent positive association was observed” across studies, and thus even credible positive studies were labeled “not reliable” by the German Federal Institute for Risk Assessment (BfR)[11][12]. The IARC working group explicitly criticized this approach, arguing that each study should be weighted by quality, not just counted, and that seeing the same rare tumor in two different studies isn’t the only way to infer hazard[13]. This is especially true when the studies differ in population or design. Another form of endpoint bundling is when multiple related measures are averaged or combined. For example, an experiment might measure a battery of cognitive tests in animals – if one or two show deficits from glyphosate but others do not, a paper might report “overall no cognitive effect” (bundling the endpoints) even though certain functions were impaired. Similarly, grouping different health outcomes together (e.g. reporting only an aggregate “overall toxicity score”) can mask a significant impact on a specific organ or system. Many of the 712 harm studies focused on specific endpoints (e.g. sperm quality, gut microbiome composition, hormone levels, etc.), whereas some of the null studies were broad guideline-style studies that looked at general toxicity and reported “no adverse findings overall,” which can obscure narrower effects that weren’t deemed biologically significant or were lost in the noise of aggregate analysis.

  • Conflict of Interest & Suppression: Scientific objectivity can be compromised by financial or institutional interests. A systemic pattern of suppression and bias has been documented in the glyphosate saga. Regulators and courts have scrutinized the “Monsanto Papers”, internal documents revealing how Monsanto (glyphosate’s original manufacturer) orchestrated ghost-written articles and exerted influence to shape the scientific discourse. While our dataset doesn’t label which studies were industry-funded (due to lack of abstract COI statements), external investigations show a clear split: industry-commissioned studies almost invariably conclude no harm, whereas independent academic studies often do find harm. For example, an analysis by Charles Benbrook found that EPA relied on 95 industry toxicology studies, 94 of which (99%) reported no genotoxic effects, whereas of the publicly available studies on glyphosate genotoxicity, 74% did find evidence of DNA damage or related harm[7][14]. This astonishing imbalance (99% vs 74%) reflects the well-known phenomenon of funding bias – research funded by those with a vested interest in a “safe” outcome tends to produce results aligning with that interest[6]. Indeed, a commentary in Journal of Epidemiology & Community Health notes that regulators often “dismiss most peer-reviewed studies in favor of industry-sponsored studies” during risk assessments[15]. This means that hundreds of published academic findings (including many in our 712 harm set) were effectively sidelined, while greater weight was given to a smaller number of industry studies (which are typically not published openly) that found no issues. There have even been instances of outright fraud: contract laboratories hired by industry have been caught falsifying results. A historical example is the Industrial Bio-Test Labs scandal in the 1970s, where fraudulent practices invalidated many pesticide safety studies. 
More recently in 2020, a major German lab (LPT) that did glyphosate studies for regulatory submissions was investigated for data fabrication and animal abuse; consumer advocates revealed at least 20 glyphosate safety studies submitted to EU regulators came from this lab and must now be considered suspect[16][17]. Whistleblowers described routine faking of results at LPT, which had been certified as adhering to Good Laboratory Practices[18]. Such cases underscore that regulatory science is not immune to misconduct, and heavy reliance on industry-provided data without transparent peer review is a recipe for potential suppression of harms. In our context, “suppression” can also mean subtler forms – like not publishing unfavorable data (publication bias) or downplaying significant effects in reports. The net effect is that the public literature (especially independent research) is where evidence of harm largely surfaces, whereas industry studies used in risk assessment may paint a rosier picture. This is why full disclosure and independent replication are critical. When reading a “no effect” study, one should consider who conducted it and whether any conflicts might have influenced the endpoints chosen or conclusions drawn.

  • Multiplicity and Statistical Power: Multiplicity refers to the challenge of making sense of many comparisons or tests. In toxicology, a single study might measure dozens of endpoints (multiple organ weights, blood markers, behavioral tests, etc.). If no adjustment is made for multiple comparisons, there’s a higher chance of false-positive results (finding an effect that’s actually due to chance). Conversely, if one applies very stringent corrections or expects consistency across all endpoints, one risks false negatives – making real effects go away because they don’t appear across the board. None of the studies in our dataset explicitly mention adjusting for multiple comparisons in their abstract, which is not surprising (methods details are often in full text). But regulators often implicitly apply a multiple comparisons lens at the evidence review level: they might dismiss a tumor increase in one study as a fluke because other studies didn’t show it, essentially treating each study as a “trial” in a multiple testing sense. The EFSA review of glyphosate did exactly this, citing “lack of consistency in multiple animal studies” and that some tumor findings lacked statistical significance in pairwise tests[19]. Statistically, requiring that each individual comparison meet p<0.05 can lead to a high false-negative rate when many endpoints are truly affected modestly. IARC’s approach of looking at trend tests (which increase power by considering dose-response across groups) found significant trends for certain rare tumors that were not significant in any single dose group comparison[19]. By contrast, the EU assessment focused only on pairwise comparisons and saw no “statistically significant” tumors in those particular tests, hence declaring evidence of carcinogenicity absent[19]. 
This is essentially a statistical interpretation issue: one approach aimed to reduce false negatives (IARC, looking at broader patterns and trends), the other aimed to reduce false positives (EFSA/EPA, requiring consistency and conventional significance in each test). The consequence of an overly strict multiplicity control is what some critics call “risk assessment’s insensitive toxicity testing” – you fail to detect real hazards because your methods were not sensitive enough[20][10]. On the flip side, a study that finds nothing might simply have been underpowered – e.g. a small sample size or high variability can produce a null result even if a real effect exists. Many academic studies in our set have limited sample sizes due to resource constraints, whereas industry studies often use larger group sizes for regulatory tests. Thus, a null result could mean no effect, or it could mean an effect that couldn’t be confidently detected under the study conditions. It’s important to keep this in mind whenever we weigh one study against another. No single study is definitive; patterns across studies, analyzed with appropriate statistical caution, give the more reliable signal. And the pattern here is that a sizable majority of studies that are capable of detecting something do detect some form of harm.
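The trend-versus-pairwise point can be made concrete with synthetic data (illustrative numbers only, not taken from any glyphosate study): a monotone dose-response that no pairwise comparison flags as significant can still yield a significant Cochran-Armitage trend, which is essentially the divergence between the IARC and EFSA approaches described above.

```python
import numpy as np
from scipy.stats import fisher_exact, norm

# Synthetic tumor counts: four dose groups of 50 animals each (illustration only)
doses  = np.array([0.0, 1.0, 2.0, 3.0])   # ordinal dose scores
tumors = np.array([1, 2, 4, 6])           # tumor-bearing animals per group
n      = np.array([50, 50, 50, 50])       # group sizes

# Pairwise Fisher exact test: top dose group vs. control
_, p_pairwise = fisher_exact([[tumors[0], n[0] - tumors[0]],
                              [tumors[3], n[3] - tumors[3]]])

# Cochran-Armitage trend test (normal approximation)
p_hat  = tumors.sum() / n.sum()
t_obs  = (tumors * doses).sum()
t_exp  = p_hat * (n * doses).sum()
t_var  = p_hat * (1 - p_hat) * ((n * doses**2).sum() - (n * doses).sum()**2 / n.sum())
z      = (t_obs - t_exp) / np.sqrt(t_var)
p_trend = 2 * norm.sf(abs(z))  # two-sided

print(f"pairwise (top dose vs. control): p = {p_pairwise:.3f}")  # > 0.05
print(f"dose-response trend:             p = {p_trend:.3f}")     # < 0.05
```

With these numbers the trend test rejects at the 5% level while the pairwise comparison does not; requiring every individual pairwise test to reach significance reduces false positives at the direct cost of sensitivity to real dose-dependent effects.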

Null Is Not Proof of Harmlessness: In light of the above, it bears repeating: “absence of evidence is not evidence of absence.” A study that reports no significant effect might simply not have examined the right endpoint, or not for long enough, or not with a realistic exposure scenario, or it may have been one of the few industry-funded studies where the design or reporting biases towards finding no issue. Therefore, policymakers and the public should be cautious in interpreting null findings. Multiple null studies in different contexts are relevant – for example, it is somewhat reassuring that in a few cases, chronic high-dose animal studies did not show certain effects, or that epidemiological studies have not found certain disease links. But those have to be weighed against the many positive findings (including multiple that do show such links or effects).

It’s worth pointing out that, historically, treating industry-supported studies – which are systematically biased toward not finding evidence of harm – and independent studies as epistemic equals leads to stalemate; and if the product or ingredient is already generating massive profits, stalemate leads to no change. In the case of glyphosate, the balance of evidence in the literature breaks that stalemate: only by arbitrarily up-weighting the studies that failed to find harm and down-weighting those that found it can the balance be made to appear a stalemate.

In the realm of glyphosate, several high-quality independent reviews have concluded that glyphosate and its formulations do have biologically concerning effects across a range of systems even at doses below regulatory limits[21]. None of the null studies individually refute that; at best they address specific questions (e.g. “Did glyphosate at X dose affect parameter Y in species Z? No, it did not in that experiment.”). To declare something truly “safe,” evidence of absence of harm is needed, which is inherently hard to prove. Regulators instead look for a lack of credible evidence of harm at realistic exposures – a judgment call that can be influenced by which evidence is considered “credible” or relevant. As we’ve seen, that process has its own inconsistencies.

Regulatory Incoherence in the Face of Evidence

Despite the mounting evidence of various hazards associated with glyphosate, regulatory agencies in the U.S. and EU have largely maintained a posture that can best be described as regulatory incoherence. On one hand, they acknowledge certain hazards in principle (for example, glyphosate is acknowledged to cause toxicity at some level, otherwise there would be no need for any exposure limits at all), but on the other hand, their official assessments keep concluding that glyphosate poses “no unreasonable risk” to human health under current usage. This section examines how regulatory logic has contorted itself in light of the evidence – through selective reliance on industry data, procedural maneuvers, and outdated assumptions – and why this has prompted criticism from scientists and even legal rebuke from courts.

Selective Science and Weight of Evidence: A glaring example of incoherence is how differently IARC vs. EPA/EFSA weighted the evidence, as discussed earlier. IARC considered all publicly available studies (including many of the 712 harm studies) and found sufficient evidence in animals and strong mechanistic evidence (genotoxicity, oxidative stress) supporting classification as a probable carcinogen[22][23]. EPA and EFSA, by contrast, gave primacy to “secret” industry studies that are not available for public scrutiny – some of which showed tumors or other effects but were downplayed – and they discounted most academic studies as not guideline-compliant. The result was diametrically opposed conclusions from ostensibly the same corpus of science. As the GMWatch analysis of Dr. Benbrook’s paper put it: EPA’s conclusion that glyphosate is not genotoxic or carcinogenic was only possible by framing the assessment in a “highly selective and biased way”[14], excluding or minimizing the very studies that IARC found informative. EFSA’s 2015 review similarly was critiqued for “serious flaws” and incorrectly characterizing glyphosate’s hazard by not properly weighing high-quality independent studies[23][12]. In plainer terms, regulators had an end in mind (finding glyphosate acceptable) and sifted the evidence accordingly. This has led to accusations that regulatory agencies, instead of being neutral arbiters, have been influenced – even “infiltrated” – by industry interests to maintain the status quo of approval[24]. While “infiltration” is a strong word, evidence like the Monsanto-EPA collusion allegations (e.g. an EPA official reportedly bragging about “killing” an ATSDR review of glyphosate to help Monsanto) and heavy lobbying during EU re-approval votes lend credence to concerns about regulatory capture.

Policy Logic Under Scrutiny: Several specific policy logic issues have drawn criticism:

  • Acceptable Daily Intake and Chronic Risk: Regulators set an ADI (for glyphosate, the U.S. EPA’s chronic Reference Dose is 1.0 mg/kg body weight/day, and the EU’s ADI is 0.5 mg/kg/day). They then estimate human exposure – which for diet is often far below that on average – and conclude risk is low. However, this approach assumes the ADI is truly protective. The ADI was derived from older toxicology studies (primarily industry 90-day or 2-year studies in the 1980s) and incorporates uncertainty factors. What if some of the newer evidence of harm (e.g. endocrine effects, reproductive harm in animals, etc.) occurs at doses below the current ADI? In that case, the ADI may be too high. Indeed, some independent scientists argue that glyphosate’s ADI does not adequately account for endocrine-disrupting effects that have no clear threshold and can occur at very low doses. Regulators have been slow to incorporate such data, often citing “lack of consensus” or methodological issues. This results in an incoherent stance where regulators acknowledge certain findings (say, endocrine effects in some studies) but declare them not relevant to humans because the exposures in those studies were “above the ADI” – even though, circularly, the ADI itself might need re-evaluation because of those findings. Our analysis noted that none of the study abstracts tied their findings to ADI, making it harder for regulators to use them in risk calculations without deep review. A coherent approach would be to update safety thresholds when credible evidence shows harm at lower doses or via mechanisms not previously considered, yet as of now, EPA and EFSA have not appreciably adjusted glyphosate’s ADI in response to the deluge of new data (some of which show effects in the parts-per-billion range on cells or animals).

  • Hormesis (Low-Dose, Non-Linear Toxicity):

    While no study demonstrates a biphasic dose curve in animals or humans, the corpus includes many studies that show low-dose sublethal toxicity involving chronic cumulative stress, dose- and time-dependent injury, and partial tolerance without recovery. Studies across the corpus report reproductive toxicity at environmentally relevant doses, oxidative stress induction, neurochemical alterations, developmental cardiac defects, behavioral and memory impairment, multigenerational reproductive suppression (noted earlier), and chronic organ-level molecular alterations. We will be publishing a second report, “Evidence on Low-Dose Chronic Toxicity Associated with Glyphosate Exposure,” soon.


  • Multiplicity and Consistency Requirements: As discussed, the regulatory insistence on consistency (e.g. the same effect in multiple studies) before acting can be seen as prudent or as a recipe for inaction. With hundreds of endpoints across studies, inconsistency is guaranteed – not every study will find every effect, due to natural variability and differing methods. Expecting consistent replication of each specific outcome (especially non-cancer outcomes) sets the bar so high that virtually no new hazard could ever be “proven.” It allows regulators to always say, “the evidence is mixed or inconclusive,” regardless of how much of it points toward harm. This failure of multiplicity logic means that even as more evidence accumulates, regulators can claim “no new surprises.” For example, even though dozens of studies now link glyphosate to oxidative stress and gut microbiome disturbances, a regulator might note that not all studies show this or that the degree varies, and thereby avoid drawing firm conclusions. In effect, by focusing on the noise (the inevitable scatter in a large body of studies), one can ignore the signal. Such incoherence has been criticized by scientists who say the “weight-of-evidence” approach has been misused to actually down-weigh valid evidence. A truly coherent approach would transparently codify how evidence is weighed (for instance, using systematic review methods). Unfortunately, EFSA’s 2023 review again concluded “no critical areas of concern”[25], suggesting that despite hundreds of new studies, the official stance remains that nothing decisively actionable has been observed.

  • Legal and Public Health Implications: The disconnect between the scientific evidence and regulatory pronouncements has not gone unnoticed. In 2018–2019, multiple U.S. juries in liability trials (Johnson v. Monsanto, Hardeman v. Monsanto, etc.) found that Roundup was a substantial factor in causing plaintiffs’ cancers (non-Hodgkin lymphoma), awarding large damages. The verdicts were influenced by internal company documents (indicating undue influence on science) and expert testimony that weighed IARC’s findings over EPA’s. In 2022, the U.S. 9th Circuit Court of Appeals vacated EPA’s interim re-registration of glyphosate, calling EPA’s evaluation of cancer evidence and endocrine disruption “arbitrary and capricious” – essentially saying the agency didn’t properly engage with the evidence and had internal contradictions in its assessment[26]. When a court tells a regulator to revisit the science, it’s a strong sign of incoherence. EPA is now re-reviewing glyphosate with a deadline (as of this writing, likely 2026 for a revised decision). Meanwhile, public health bodies like California’s OEHHA have listed glyphosate as a chemical “known to the state to cause cancer” (following the IARC Group 2A classification), despite legal challenges from industry. This duality – one arm of government effectively warning the public about a cancer hazard, while the federal regulator says “no risk if used properly” – epitomizes regulatory incoherence. It confuses the public and policymakers alike.
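The multiplicity point above can be made concrete with a short simulation. The numbers here are illustrative assumptions, not glyphosate data: suppose every study in a literature probes a real effect, but each individual study has only modest statistical power. A sizable fraction of null results is then guaranteed by chance alone, so “mixed evidence” is the expected face of a true hazard, not proof of its absence.

```python
# Illustrative simulation (assumed parameters, not glyphosate data):
# if every study tests a real effect but single-study power is modest,
# the literature is guaranteed to look "mixed".
import random

random.seed(1)

POWER = 0.6        # assumed probability a single study detects the true effect
N_STUDIES = 1071   # size of the literature (matches the report's corpus count)

detected = sum(1 for _ in range(N_STUDIES) if random.random() < POWER)
null = N_STUDIES - detected

print(f"{detected} studies find the effect, {null} are null")
# With 60% power, roughly 40% of studies come up null even though the
# effect is real in every one of them.
```

Under these assumptions, demanding that “all studies agree” before acting is a criterion no real literature can satisfy.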

Moving Forward – Toward Coherence: The evidence compiled in this report suggests that glyphosate is not the benign substance once assumed. It may not be acutely toxic in the way some pesticides are (hence its long-standing reputation for safety), but chronic low-level effects are accumulating in the literature: endocrine disruption, liver damage, reproductive effects, developmental neurotoxicity, gut dysbiosis, and yes, carcinogenicity (supported by animal studies and some human epidemiology of farm workers).

The literature does not currently demonstrate clear, reproducible, high-confidence evidence of marked differential toxicity at environmental exposure levels in defined susceptible subgroups. However, gaps remain in gene–environment interaction studies, long-term low-dose developmental follow-up, and mixture-specific exposures (formulated products versus glyphosate alone). At present, the only subgroup with substantial epidemiologic evaluation is occupationally exposed agricultural workers. For other potentially susceptible groups – children, genetically predisposed individuals, autoimmune patients, or microbiome-sensitive populations – the evidence remains limited. These groups should therefore be research priorities: a rational, coherent policy stance would be updated as pending studies of glyphosate’s effects on susceptible subgroups are completed.

Negotiated positions range from an outright ban (made much less likely by the Trump administration’s recent Executive Order indemnifying glyphosate producers and users from liability if they acted on directions from the U.S. Government related to national security) to an alternative track that recognizes glyphosate as a hazard – something capable of contributing to disease – and then openly debates the risk-management question: to what extent can we limit exposures or find safer alternatives? Might individuals at highest risk be informed of the risk and provided equal protection?

Instead, the current posture often appears to be denial or minimization of the hazard to avoid difficult regulatory decisions (such as banning or restricting uses of an economically important chemical), while administrative get-out-of-jail-free cards take science off the table entirely.

To be clear, glyphosate’s risk in practical terms is a function of dose and exposure – most people are not exposed to the high levels that cause tumors in mice, for example. But some populations (farmers, heavy herbicide users) might be; also, even lower exposures might carry some risk that accumulates. Regulators typically employ a large safety factor (100x or more below animal no-effect levels) to set limits, which is supposed to account for uncertainties. The question is whether the safety factors in place truly account for all the uncertainties we now know (such as potential synergistic effects of formulants, or non-monotonic dose responses where low doses might do things high doses don’t, or vulnerable life stages like prenatal exposure potentially leading to adult disease).
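The safety-factor arithmetic mentioned above can be sketched in a few lines. This is a generic illustration of the standard approach – a composite 10× interspecies and 10× intraspecies uncertainty factor applied to an animal no-observed-adverse-effect level (NOAEL) – using an assumed, hypothetical NOAEL rather than any specific glyphosate assessment value:

```python
# Sketch of standard regulatory safety-factor arithmetic (illustrative
# numbers; not an actual glyphosate assessment).
INTERSPECIES_UF = 10   # animal-to-human extrapolation
INTRASPECIES_UF = 10   # variability among humans

def acceptable_daily_intake(noael_mg_per_kg: float, extra_uf: float = 1.0) -> float:
    """ADI (mg/kg bw/day) = NOAEL / (10 x 10 x any additional factor)."""
    return noael_mg_per_kg / (INTERSPECIES_UF * INTRASPECIES_UF * extra_uf)

# Hypothetical NOAEL of 50 mg/kg bw/day -> ADI of 0.5 mg/kg bw/day
print(acceptable_daily_intake(50.0))                  # 0.5
# An additional 10x factor (e.g., for database gaps or susceptible
# subgroups) would lower the permitted exposure tenfold:
print(acceptable_daily_intake(50.0, extra_uf=10.0))   # 0.05
```

The policy question raised in the text is whether the default 100× divisor genuinely covers formulation synergy, non-monotonic responses, and vulnerable life stages, or whether additional factors like the one sketched here are warranted.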

Given the systemic issues discussed – dose realism, endpoint selection, COI biases, susceptible subgroup awareness, and evidentiary standards – a precautionary approach would err on the side of safety: if in doubt, assume harm until proven otherwise, rather than the opposite. Yet historically, regulators have done the opposite for glyphosate, giving it the benefit of the doubt time and again. As public and scientific pressure mounts, this stance is increasingly untenable. In any case, full transparency and independent review of all studies (including industry ones) are essential. Our summary of 1,071 studies is a step towards that transparency, but ultimately regulators must integrate this vast evidence base coherently.

Conclusion

The balance of evidence from 1,071 studies tilts clearly toward glyphosate being capable of harm and in some ways shown to cause harm. The specific findings range from biochemical and cellular disruptions to organ toxicity, developmental and reproductive harm, and signs of carcinogenic potential. The 359 studies that found no significant harm must be interpreted with context – they do not erase the positive findings, but rather highlight how outcomes can differ based on study conditions or biases. Null results cannot be taken as a clean bill of health, especially in light of the methodological factors we reviewed, including industry bias effects.

For U.S. lawmakers, regulatory officials, and public health professionals, the glyphosate case is instructive. It demonstrates the need for better triangulation of policy with science. Right now, we have a disjointed situation: a chemical labeled “probably carcinogenic” by international experts, yet declared “not a risk” by domestic regulators; thousands of plaintiffs alleging it caused their cancers on one side, and government agencies assuring the public it’s safe on the other. Bridging this divide will require acknowledging that both perspectives hold truth: glyphosate is a hazard (that’s essentially indisputable scientifically at this point – it can cause cancer and other harms under certain conditions[2][22]), and the task is to manage the risk (minimize those conditions of exposure). A coherent policy would neither demonize glyphosate as an absolute evil nor rubber-stamp it as innocuous. Instead, it would rigorously evaluate where and how glyphosate’s use can be reduced or made safer (for example, precision application such as robotic microdosing combined with laser weeding), invest in research on alternatives, and continuously update safety standards as new evidence emerges.

Finally, this analysis comes with an important caveat: abstract-level data is not a substitute for full-text review. For truly robust conclusions – such as those needed in litigation or regulatory rule-making – one must delve into the details of dose, statistics, and peer review of each study. We have highlighted broad patterns and concerns, which can inform decision-makers and individuals who perform systematic reviews and evidence weighting on where to look more closely. Any critical decisions (bans, restrictions, etc.) should be based on comprehensive risk assessments that include all available data (published and unpublished), evaluated by independent experts free of conflicts of interest. Secret studies must be made public and brought into the clarifying light of day. In that sense, our report is a starting point for inquiry, not the final word.

The evidence in aggregate sends a clear message: glyphosate is not a one-in-a-million ultra-safe chemical. It is a substance with real hazards that manifest under certain exposure scenarios – scenarios that are not implausible in the real world (considering farm workers mixing and spraying it, or residents in agricultural areas chronically exposed to drift, or children consuming residues daily in food). Pretending otherwise, or clinging to outdated assurances, is increasingly incoherent in the face of evidence. The challenge ahead is to align regulation with the science – to protect public health while ensuring that scientific integrity and transparency form the foundation of policy.

You can help fund the full-text literature analysis by the Institute for Pure and Applied Knowledge here:

DONATE MONTHLY OR ONE TIME TO IPAK

Sources:

· IARC Monograph 112 (2015) – Glyphosate classified 2A (probable human carcinogen)[2]

· EFSA Renewal Assessment Report (2015) – Glyphosate “unlikely to pose a carcinogenic hazard”[4]

· U.S. EPA (2017 draft, reaffirmed 2020) – Glyphosate “not likely to be carcinogenic to humans”[3]

· SciMoms explainer on hazard vs. risk (2025)[1]

· Journal of Epidemiology & Community Health commentary on IARC vs EFSA differences (Portier et al. 2016)[27][23]

· Environmental Sciences Europe (Benbrook 2019) on genotoxicity evidence disparity[7][14]

· Journal of Epidemiology & Community Health summary: regulators dismissing peer-reviewed studies in favor of industry data[15]

· Food Packaging Forum summary of Buonsante et al. (2014) – lack of realistic dosing in toxicity tests[9][10]

· The Guardian (Carey Gillam, 2020) on industry-funded lab fraud in glyphosate studies[16][28]

· Glyphosate, Roundup and the Failures of Regulatory Assessment (2022, open-access review) – discussion of regulatory shortcomings[21][24]


[1] Risk In Perspective: Hazard And Risk Are Different | by SciMoms | Medium

https://medium.com/@SciMoms/risk-in-perspective-hazard-and-risk-are-different-5c4b3fd607c2

[2] [4] Differences in the carcinogenic evaluation of glyphosate between the International Agency for Research on Cancer (IARC) and the European Food Safety Authority (EFSA) – Occupational Cancer Research Centre

https://www.occupationalcancer.ca/resources/evaluation-of-glyphosate-iarc-efsa/

[3] U.S. EPA says glyphosate not likely to be carcinogenic to people | Reuters

https://www.reuters.com/article/business/healthcare-pharmaceuticals/us-epa-says-glyphosate-not-likely-to-be-carcinogenic-to-people-idUSKBN1EE2XG/

[5] [15] [21] [24] Glyphosate, Roundup and the Failures of Regulatory Assessment – PMC

https://pmc.ncbi.nlm.nih.gov/articles/PMC9229215/

[6] What the Monsanto Papers tell us about corporate science

https://corporateeurope.org/en/food-and-agriculture/2018/03/what-monsanto-papers-tell-us-about-corporate-science

[7] [14] How did the US EPA and IARC reach opposite conclusions about glyphosate’s genotoxicity?

https://www.gmwatch.org/en/106-news/latest-news/18699-how-did-the-us-epa-and-iarc-reach-opposite-conclusions-about-glyphosate-s-genotoxicity

[8] [11] [12] [13] [19] [22] [23] [27] Differences in the carcinogenic evaluation of glyphosate between the International Agency for Research on Cancer (IARC) and the European Food Safety Authority (EFSA) – PMC

https://pmc.ncbi.nlm.nih.gov/articles/PMC4975799/

[9] [10] [20] Risk assessment uses insensitive toxicity testing | Food Packaging Forum

https://foodpackagingforum.org/news/risk-assessment-uses-insensitive-toxicity-testing

[16] [17] [18] [28] Science shouldn’t be for sale – we need reform to industry-funded studies to keep people safe | Carey Gillam | The Guardian

https://www.theguardian.com/commentisfree/2020/feb/18/science-shouldnt-be-for-sale-we-need-reform-industry-funded-studies-monsanto

[25] EFSA completes its conclusions about glyphosate

https://www.glyphosate.eu/grg/whatsnew/efsa-completes-its-conclusions-about-glyphosate/

[26] [PDF] NRDC v. EPA – Court of Appeals for the Ninth Circuit

https://cdn.ca9.uscourts.gov/datastore/opinions/2022/06/17/20-70787.pdf

 

IPAK-EDU is grateful to Popular Rationalism as this piece was originally published there and is included in this news feed with mutual agreement. Read More

