Why You Can't Just Trust the Headline
News headlines about research are frequently misleading — not always intentionally, but because summarizing complex science in eight words is nearly impossible. "Coffee causes cancer" and "Coffee prevents cancer" can both be based on real studies. Understanding why requires actually reading what the research says.
You don't need a PhD to evaluate a study. You need a checklist and a healthy skepticism.
Step 1: Find the Actual Study
Before evaluating anything, find the original paper — not just the press release or news article. Search PubMed (pubmed.ncbi.nlm.nih.gov) for medical research, or Google Scholar for broader academic work. Many studies are freely available; others require access through a library or a preprint server like arXiv or bioRxiv.
Step 2: Check the Study Type
Not all research is equal. The type of study largely determines how strong the conclusions can be.
| Study Type | What It Can Show | Strength |
|---|---|---|
| Randomized Controlled Trial (RCT) | Causation | High |
| Cohort study | Association over time | Moderate |
| Cross-sectional study | Snapshot associations | Lower |
| Case study / case series | Individual observations | Very low |
| Animal study | Biological mechanisms | Preliminary |
| Meta-analysis / systematic review | Synthesized evidence | Very high |
A single observational study showing an association does not prove causation. "X is linked to Y" is very different from "X causes Y."
Step 3: Look at Sample Size
Small sample sizes produce unreliable results. A study with 40 participants may be interesting, but its findings often fail to hold up at scale. Look for studies with large, diverse participant pools, and be skeptical of dramatic conclusions drawn from tiny samples.
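To see why small samples are shaky, it helps to look at the margin of error. A minimal sketch (using the standard normal approximation for a proportion, with hypothetical sample sizes) shows how uncertainty shrinks as participant counts grow:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a measured proportion.

    Uses the normal approximation: z * sqrt(p * (1 - p) / n).
    p=0.5 gives the worst-case (widest) margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes, not from any specific study
for n in (40, 400, 4000):
    print(f"n={n}: margin of error about +/- {margin_of_error(n):.1%}")
```

With 40 participants the margin is roughly ±15 percentage points, so a reported "60% improved" is statistically compatible with anything from under half to three quarters. At 4,000 participants the same measurement is pinned down to within about ±1.5 points.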
Step 4: Check Who Funded It
Funding source doesn't automatically invalidate a study, but it's relevant context. Industry-funded research has a documented tendency to produce results favorable to the funder. Look for conflicts of interest in the disclosure section — reputable journals require authors to declare them.
Step 5: Understand Effect Size, Not Just Significance
Statistical significance (the famous p < 0.05) tells you the result would be unlikely to occur by chance alone if there were no real effect. It does not tell you the effect is large, meaningful, or practically important.
A drug that reduces the risk of a disease from 0.2% to 0.1% halved the risk, which sounds dramatic, and with a large enough trial the result may well be statistically significant. But in practical terms, you're moving from a very small risk to a slightly smaller one. Always look for absolute numbers alongside relative ones.
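The gap between relative and absolute framing is easy to compute. A minimal sketch using the article's example numbers (0.2% risk in the control group, 0.1% with the drug; the function name is illustrative):

```python
def risk_summary(control_risk, treated_risk):
    """Compare absolute and relative views of the same risk reduction."""
    arr = control_risk - treated_risk  # absolute risk reduction
    rrr = arr / control_risk           # relative risk reduction
    nnt = 1 / arr                      # number needed to treat for one person to benefit
    return arr, rrr, nnt

arr, rrr, nnt = risk_summary(0.002, 0.001)
print(f"Relative risk reduction: {rrr:.0%}")    # the headline number: 50%
print(f"Absolute risk reduction: {arr:.2%}")    # the practical number: 0.10%
print(f"Number needed to treat:  {nnt:.0f}")    # 1000 people treated per case prevented
```

The same trial honestly supports both "halves the risk" and "you'd have to treat a thousand people to prevent one case." Headlines almost always pick the relative number; the absolute one is the better guide to real-world impact.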
Step 6: Was It Peer-Reviewed?
Peer review means other experts in the field evaluated the methodology before publication. It's not perfect, but it's a meaningful quality filter. Preprints (studies not yet peer-reviewed) may be legitimate early findings — but treat them with more caution, especially when media covers them as settled science.
Red Flags at a Glance
- Very small sample size (under 50 participants)
- Only one study — findings not replicated
- Correlation presented as causation
- Undisclosed or obvious conflicts of interest
- Extreme effect sizes that contradict existing evidence
- Only released as a press release, no peer-reviewed paper
The Bigger Picture
Science advances through accumulation, not single studies. One study is a data point. A replicated finding across multiple independent studies is evidence. A systematic review of many high-quality studies is about as close to certainty as science gets.
When you see a bold headline about new research, ask: Is this one study, or is this a pattern? That question alone will save you from a lot of misinformation.