P-Value Calculator
Convert test statistics to p-values for any distribution
The p-value is one of the most important concepts in inferential statistics. It measures the probability of obtaining a test statistic at least as extreme as the one you observed, assuming the null hypothesis is true. A small p-value (typically below 0.05) means the data would be unlikely if the null hypothesis were true, which counts as evidence against it.
This calculator converts any test statistic (z, t, chi-square, or F) into a precise p-value, saving you from looking up statistical tables or writing code. Whether you are a researcher, data scientist, or student, this tool makes hypothesis testing fast and reliable.
How to Use the P-Value Calculator
- Select distribution type — Choose Z for large samples or known variance, T for small samples with unknown variance, chi-square for goodness-of-fit or independence tests, or F for ANOVA and regression F-tests.
- Choose the tail type — Two-tailed tests check for any difference; left-tailed tests check if the parameter is smaller than hypothesized; right-tailed tests check if it is larger.
- Enter the test statistic — This is the value computed from your sample (z-score, t-statistic, chi-square statistic, or F-ratio).
- Set degrees of freedom — Required for T, chi-square, and F distributions. For F, enter both numerator and denominator df.
- Set significance level (alpha) — Usually 0.05 (5%) or 0.01 (1%). The calculator compares your p-value to this threshold automatically.
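The steps above can be sketched in code. This is a minimal illustration assuming SciPy is available; the function name `p_value_from_stat` and its parameters are illustrative, not the calculator's actual API.

```python
from scipy import stats

def p_value_from_stat(stat, dist="z", tail="two", df=None, dfn=None, dfd=None):
    """Convert a test statistic to a p-value.

    dist: "z", "t", "chi2", or "f"
    tail: "two", "left", or "right"
    """
    if dist == "z":
        d = stats.norm()
    elif dist == "t":
        d = stats.t(df)
    elif dist == "chi2":
        d = stats.chi2(df)
    elif dist == "f":
        d = stats.f(dfn, dfd)
    else:
        raise ValueError(f"unknown distribution: {dist}")

    if tail == "right":
        return d.sf(stat)       # P(X >= stat), the survival function
    if tail == "left":
        return d.cdf(stat)      # P(X <= stat)
    # Two-tailed: double the smaller tail. Exact for the symmetric Z and T;
    # a common convention for the skewed chi-square and F.
    return 2 * min(d.sf(stat), d.cdf(stat))
```

For example, `p_value_from_stat(1.96, "z", "two")` returns approximately 0.05, matching the familiar critical value for a two-tailed z-test.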
Understanding the Results
P-value is the main output: the probability of seeing a test statistic this extreme (or more) if H0 is true. The left-tail and right-tail probabilities break this down by direction.
Significance decision: When p ≤ alpha, the result is statistically significant — you reject the null hypothesis. When p > alpha, you fail to reject it (you do not "accept" the null; absence of evidence is not evidence of absence).
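For a symmetric distribution such as Z, the two-tailed p-value is simply the sum of the left-tail and right-tail probabilities at ±|statistic|. A short sketch of the tail breakdown and the decision rule, assuming SciPy (the variable names are illustrative):

```python
from scipy import stats

z = 1.84        # an observed z-statistic (right-tailed test)
alpha = 0.05

left = stats.norm.cdf(-abs(z))   # P(Z <= -|z|)
right = stats.norm.sf(abs(z))    # P(Z >= |z|)
p_two = left + right             # two-tailed p-value (symmetric distribution)

# Decision rule: reject H0 when p <= alpha; otherwise fail to reject.
significant = right <= alpha     # right-tailed test, so compare the right tail
```

Note that `significant` being False would mean "fail to reject H0", never "H0 is true".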
Distribution Formulas
Z-distribution: Uses the standard normal CDF. Appropriate when the population standard deviation is known or the sample is large (n > 30 is a common rule of thumb).
T-distribution: Uses Student's t-CDF with df = n - 1. Wider tails than Z, accounting for uncertainty in estimating population variance.
Chi-square: Right-skewed distribution used for count data. The p-value is conventionally right-tailed for both goodness-of-fit and independence tests; a left-tail probability is occasionally used only to flag a suspiciously good fit.
F-distribution: Ratio of two chi-square variables scaled by their df. Used in ANOVA and linear regression F-tests.
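The tail behavior described above can be checked numerically (SciPy assumed): for the same cutoff, Student's t puts more probability in the tail than the standard normal, and the gap shrinks as the degrees of freedom grow.

```python
from scipy import stats

cutoff = 2.0
z_tail = stats.norm.sf(cutoff)           # standard normal right tail
t_tail_small = stats.t.sf(cutoff, df=5)  # heavy tails at low df
t_tail_large = stats.t.sf(cutoff, df=100)  # nearly normal at high df

# Wider tails: t exceeds z at any finite df and converges to z as df grows.
assert t_tail_small > t_tail_large > z_tail
```

This is why a t-test demands a larger statistic than a z-test to reach the same p-value, especially with small samples.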
Real-World Examples
Example 1 — Drug trial (two-tailed T-test): You compare blood pressure before and after treatment in 20 patients. The paired t-statistic is 2.36 with df = 19. This calculator gives p = 0.0295, which is less than 0.05 — you reject the null hypothesis that the treatment has no effect.
Example 2 — Website A/B test (Z-test): Two landing pages are compared. The z-statistic is 1.84 (right-tailed), giving a p-value of 0.033: at the 5% level, the new page's conversion rate is significantly higher.
Example 3 — Chi-square independence test: A chi-square statistic of 9.49 with 2 df gives p = 0.0087, indicating a significant association between two categorical variables.
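The three examples above can be reproduced in a few lines, assuming SciPy:

```python
from scipy import stats

# Example 1: paired t-test, t = 2.36, df = 19, two-tailed
p1 = 2 * stats.t.sf(2.36, df=19)

# Example 2: z-test, z = 1.84, right-tailed
p2 = stats.norm.sf(1.84)

# Example 3: chi-square test, statistic = 9.49, df = 2, right-tailed
# (with df = 2 this is exactly exp(-9.49 / 2))
p3 = stats.chi2.sf(9.49, df=2)
```

Each value agrees with the example text to the precision shown there.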
Common Mistakes to Avoid
- The p-value is NOT the probability that H0 is true. It assumes H0 is true and measures how surprising the data are.
- Statistical significance does not imply practical significance. A tiny effect can be significant with a large sample size.
- Reporting exact p-values (e.g., p = 0.023) is more informative than "p < 0.05".
- Choose your significance threshold before running the test to avoid p-hacking.