Bayes' Theorem Calculator

Update probabilities with Bayes' theorem instantly

Bayes' theorem calculates the probability that a hypothesis is true, given new evidence. It combines your prior belief (how probable you thought the hypothesis was before seeing the evidence) with the reliability of the test that generated the evidence. The result — the posterior probability — is how confident you should be in the hypothesis now that you have the evidence.

Bayes' theorem is used in medical screening, spam detection, legal reasoning, and machine learning; this calculator lets you apply it to update beliefs based on data.

How to Use This Calculator

  1. Prior probability — the base rate of the condition or event, before seeing new evidence. Example: 1% of the population has the disease → P(Disease) = 0.01.
  2. True positive rate (sensitivity) — the probability that the test is positive when the condition is truly present. Example: 95% sensitivity → P(+ | Disease) = 0.95.
  3. False positive rate — the probability that the test is positive when the condition is absent. Example: 5% false positive rate → P(+ | No Disease) = 0.05.
  4. Evidence type — whether the observed test result is positive or negative.
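The four inputs above fully determine the posterior. Here is a minimal Python sketch of the calculation (the function name is illustrative, not part of the calculator itself):

```python
def bayes_posterior(prior, sensitivity, false_positive_rate, evidence="positive"):
    """Posterior P(condition | test result) from the four calculator inputs."""
    if evidence == "positive":
        # P(+) = P(+|D)P(D) + P(+|no D)P(no D)
        likelihood = sensitivity
        p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    else:
        # Negative result: likelihood is the false negative rate (1 - sensitivity)
        likelihood = 1 - sensitivity
        p_evidence = (1 - sensitivity) * prior + (1 - false_positive_rate) * (1 - prior)
    return likelihood * prior / p_evidence

# 1% prevalence, 95% sensitivity, 5% false positive rate, positive test:
print(round(bayes_posterior(0.01, 0.95, 0.05), 3))  # 0.161
```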

Bayes' Theorem Formula

P(H | E) = P(E | H) × P(H) / P(E)
where P(E) = P(E|H) × P(H) + P(E|¬H) × P(¬H).

In plain terms: posterior = (likelihood × prior) / (total probability of evidence).

The Base Rate Fallacy

The most surprising insight from Bayes' theorem is how the prior dramatically affects the posterior. Consider a disease that affects 1% of the population and a test with 95% sensitivity and 5% false positive rate. If the test is positive, most people assume the diagnosis is almost certain. But Bayes' theorem shows P(Disease | +) ≈ 16%. Why? Because when only 1 in 100 people has the disease, most positive tests come from the 99% who are healthy.
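You can verify the 16% figure directly from the formula, as in this short sketch:

```python
prior, sens, fpr = 0.01, 0.95, 0.05
numerator = sens * prior                      # true positives: 0.95 × 0.01 = 0.0095
denominator = numerator + fpr * (1 - prior)   # all positives: 0.0095 + 0.0495 = 0.059
print(round(numerator / denominator, 3))      # 0.161 — only ~16%, not 95%
```

The denominator shows why: of every 0.059 positive tests, 0.0495 come from healthy people.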

This counterintuitive result is called the base rate fallacy, and it is why population-level screening for rare conditions requires careful statistical interpretation.

Real-World Examples

Medical screening: A rare disease has 0.1% prevalence. A test has 99% sensitivity and 1% false positive rate. After a positive test: posterior ≈ 9%. Not 99%.

Spam filtering: 40% of emails are spam a priori. The word "lottery" appears in 80% of spam and 5% of legitimate email. After seeing "lottery": P(spam | lottery) = (0.8 × 0.4) / (0.8×0.4 + 0.05×0.6) ≈ 91.4%.

Drug testing: 2% of athletes use a banned substance. The test has 99% sensitivity and 2% false positive rate. A positive test yields a posterior of about 50% — making false accusations a real concern.
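All three examples use the same positive-test formula; a quick sketch reproducing them:

```python
def posterior(prior, sens, fpr):
    # P(condition | +) = P(+|D)P(D) / [P(+|D)P(D) + P(+|no D)P(no D)]
    return sens * prior / (sens * prior + fpr * (1 - prior))

print(round(posterior(0.001, 0.99, 0.01), 3))  # medical screening: 0.090
print(round(posterior(0.40, 0.80, 0.05), 3))   # spam filtering: 0.914
print(round(posterior(0.02, 0.99, 0.02), 3))   # drug testing: 0.503
```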

Frequently Asked Questions

What is Bayes' theorem in simple terms?
Bayes' theorem updates your belief (prior probability) in light of new evidence to give a revised belief (posterior probability). It weights the prior against how likely the evidence is under each hypothesis.
What is the base rate fallacy?
The base rate fallacy is the mistake of ignoring the prior probability when evaluating test results. A positive test from a highly sensitive test can still be mostly false positives if the underlying condition is rare enough. Bayes' theorem quantifies this effect.
What is the difference between sensitivity and specificity?
Sensitivity = P(positive test | condition present) = true positive rate. Specificity = P(negative test | condition absent) = 1 - false positive rate. This calculator uses sensitivity and false positive rate (1 - specificity) as inputs.
What does positive predictive value (PPV) mean?
Positive predictive value is the probability that the condition is truly present given a positive test — exactly what Bayes' theorem computes as the posterior. PPV depends on both test accuracy and the prevalence (prior probability) of the condition.
Can I use Bayes' theorem for a negative test result?
Yes. Select "negative" for the evidence type. The calculator computes P(condition | negative test) using the false negative rate (1 − sensitivity) and the true negative rate (specificity). A negative result from a sensitive test provides strong evidence against the condition.
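A sketch of the negative-test branch, using the disease example from earlier (function name illustrative):

```python
def posterior_negative(prior, sensitivity, false_positive_rate):
    # P(D | -) = P(-|D)P(D) / [P(-|D)P(D) + P(-|no D)P(no D)]
    fn = 1 - sensitivity               # false negative rate
    tn = 1 - false_positive_rate       # true negative rate (specificity)
    return fn * prior / (fn * prior + tn * (1 - prior))

# 1% prevalence, 95% sensitivity, 5% false positive rate, negative test:
print(round(posterior_negative(0.01, 0.95, 0.05), 4))  # 0.0005
```

A negative result drops the 1% prior to roughly 0.05%, which is why sensitive tests are good at ruling conditions out.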
What is a Bayesian update?
A Bayesian update is the process of running Bayes' theorem repeatedly as new evidence arrives. The posterior from one calculation becomes the prior for the next. This is how Bayesian reasoning accumulates evidence over time.
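Chaining updates is just a loop in which each posterior feeds back in as the prior. A sketch with two independent positive tests (assuming the test results are conditionally independent):

```python
def update(prior, sens, fpr):
    return sens * prior / (sens * prior + fpr * (1 - prior))

p = 0.01  # 1% base rate
for _ in range(2):  # two independent positive tests in a row
    p = update(p, 0.95, 0.05)
print(round(p, 3))  # 0.785 — one positive gives ~16%, a second lifts it to ~78%
```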
Why is my posterior so low even with a positive test?
If the prior (base rate) is very small — say 0.1% — even a test with 95% sensitivity and a low false positive rate will yield a low posterior, because most positives come from the much larger unaffected group. Most people find this counterintuitive. Use this calculator to see exactly how prior probability and test reliability interact.
How is Bayes' theorem used in machine learning?
Bayes' theorem underlies Naive Bayes classifiers, Bayesian neural networks, and Bayesian optimization. In classification, it frames prediction as: P(class | features) ∝ P(features | class) × P(class). Prior probabilities are learned from training data.
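A toy Naive Bayes sketch showing the proportionality in practice — the word likelihoods below are illustrative numbers (the "lottery" values match the spam example earlier), not learned from real data:

```python
# Naive Bayes: P(class | words) ∝ P(class) × Π P(word | class)
priors = {"spam": 0.40, "ham": 0.60}
word_given = {
    "spam": {"lottery": 0.80, "winner": 0.30},
    "ham":  {"lottery": 0.05, "winner": 0.02},
}

def classify(words):
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for w in words:
            score *= word_given[cls].get(w, 1e-6)  # tiny floor for unseen words
        scores[cls] = score
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}  # normalize to probabilities

print(classify(["lottery"]))  # P(spam | lottery) ≈ 0.914
```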