Confusion Matrix Calculator
Turn TP, FP, FN, and TN counts into core ML metrics
Accuracy: 91.60%
Precision: 87.88%
Recall: 90.63%
F1 score: 89.23%
Specificity: 92.21%
Negative predictive value: 94.04%
Balanced accuracy: 91.42%
False positive rate: 7.79%
False negative rate: 9.38%
MCC: 0.8237
Total evaluated observations: 250
Why A Confusion Matrix Matters
Accuracy alone can hide weak classifiers, especially on imbalanced datasets. A confusion matrix exposes exactly where the model is making mistakes so you can judge tradeoffs between false alarms and missed positives.
Core Metrics From Four Counts
With TP, FP, FN, and TN you can derive accuracy, precision, recall, F1 score, specificity, negative predictive value, balanced accuracy, the false positive and false negative rates, and the Matthews correlation coefficient (MCC). Together these metrics give a much fuller picture than accuracy alone.
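As a sketch, these derivations can be written out in a few lines of Python. The counts TP = 87, FP = 12, FN = 9, TN = 142 are inferred rather than stated on the page, but they total 250 observations and reproduce every figure shown above:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive the standard binary-classification metrics from the four counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity, true positive rate
    specificity = tn / (tn + fp)     # true negative rate
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": specificity,
        "npv": tn / (tn + fn),       # negative predictive value
        "balanced_accuracy": (recall + specificity) / 2,
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
        "mcc": (tp * tn - fp * fn)
               / ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5,
    }

# Counts inferred from the figures above (250 observations in total).
m = confusion_metrics(tp=87, fp=12, fn=9, tn=142)
print(f"accuracy={m['accuracy']:.2%}  f1={m['f1']:.2%}  mcc={m['mcc']:.4f}")
# accuracy=91.60%  f1=89.23%  mcc=0.8237
```

Every value agrees with the result panel, which is a quick way to sanity-check any implementation of these formulas.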
Common Use Cases
This calculator is useful for evaluating binary classifiers in spam detection, credit-risk scoring, medical testing, moderation systems, and quality-control workflows.
Frequently Asked Questions
What is a confusion matrix?
A confusion matrix is a 2x2 summary of classification outcomes. It counts true positives, false positives, false negatives, and true negatives so you can compute quality metrics from the same prediction set.
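The four counts themselves come from tallying paired labels and predictions. A minimal sketch (assuming 1 = positive, 0 = negative, with made-up example data):

```python
def confusion_counts(y_true, y_pred):
    """Tally (TP, FP, FN, TN) for binary labels, 1 = positive, 0 = negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Illustrative data: 8 observations, 4 of them truly positive.
y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
print(confusion_counts(y_true, y_pred))  # (3, 1, 1, 3)
```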
When should I focus on precision versus recall?
Precision matters when false positives are expensive, such as spam moderation or fraud reviews. Recall matters when false negatives are expensive, such as disease screening or safety detection.
What does MCC tell me?
Matthews correlation coefficient summarizes the whole confusion matrix in one value between -1 and 1. It is especially useful when the classes are imbalanced because it considers all four counts together.
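A hypothetical imbalanced example makes this concrete: a classifier evaluated on 100 observations with only 10 positives that finds just one of them still reports 91% accuracy, yet its MCC is only about 0.30.

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient; 0 by convention when undefined."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts: 10 positives out of 100, only 1 detected.
tp, fp, fn, tn = 1, 0, 9, 90
accuracy = (tp + tn) / 100   # 0.91 — looks strong
score = mcc(tp, fp, fn, tn)  # ~0.30 — exposes the weak positive-class performance
```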
What is balanced accuracy?
Balanced accuracy is the average of recall and specificity. It helps when the positive and negative classes are uneven because it gives equal weight to both sides.
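A degenerate example (assumed data) shows the effect of that equal weighting: a model that predicts negative for everything on a 90/10 split scores 90% accuracy but only 50% balanced accuracy, because its recall is zero.

```python
def balanced_accuracy(tp, fp, fn, tn):
    """Average of recall and specificity, guarding against empty classes."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return (recall + specificity) / 2

# Hypothetical all-negative predictor on 100 cases with 10 true positives.
tp, fp, fn, tn = 0, 0, 10, 90
acc = (tp + tn) / 100                    # 0.90
bal = balanced_accuracy(tp, fp, fn, tn)  # 0.50
```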