
Min-Max Normalization Calculator

Scale any dataset to a custom range instantly


Min-max normalization (also called min-max scaling or feature rescaling) transforms a dataset so that all values fall within a chosen target range, most commonly [0, 1]. It is one of the most widely used preprocessing steps in machine learning, ensuring that features with larger numerical ranges do not dominate those with smaller ones during model training.

This calculator applies the min-max formula to your dataset instantly, showing a preview of the normalized values alongside the original statistics.

How to Use This Calculator

  1. Enter your dataset: comma-separated or newline-separated numbers.
  2. Set the target minimum and maximum: the desired output range (defaults are 0 and 1).
  3. Set the round digits: how many decimal places to show in the output (default 4).
  4. The calculator shows the original min/max, the number of values normalized, and a preview of all transformed values.

The Min-Max Formula

For each value x in a dataset with minimum x_min and maximum x_max, the normalized value is:
x' = (x - x_min) / (x_max - x_min) × (target_max - target_min) + target_min

For the default [0, 1] range, this simplifies to:
x' = (x - x_min) / (x_max - x_min)

The minimum of the original data always maps to target_min, and the maximum maps to target_max.
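The formula can be sketched in a few lines of Python (the function name min_max_normalize is illustrative, not part of the calculator):

```python
def min_max_normalize(values, target_min=0.0, target_max=1.0):
    """Rescale values so min(values) maps to target_min and max(values) to target_max."""
    x_min, x_max = min(values), max(values)
    if x_min == x_max:
        # x_max - x_min is zero: the formula is undefined (see FAQ below)
        raise ValueError("all values are identical; min-max scaling is undefined")
    scale = (target_max - target_min) / (x_max - x_min)
    return [(x - x_min) * scale + target_min for x in values]
```

For example, min_max_normalize([2, 4, 6]) returns [0.0, 0.5, 1.0], and passing target_min=-1, target_max=1 yields [-1.0, 0.0, 1.0].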

When to Use Min-Max Normalization

  • Neural networks: Gradient descent converges faster when inputs are scaled to [0, 1] or [-1, 1].
  • k-NN and k-means: Distance-based algorithms are dominated by features with large ranges if not normalized.
  • Image processing: Pixel values are normalized from [0, 255] to [0, 1].
  • Custom ranges: Sometimes features need to be in [-1, 1] or [0, 100] for a specific model or visualization.
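To illustrate the image-processing case with a quick sketch (the pixel values are made up): for 8-bit images the bounds 0 and 255 are known in advance, so the division uses those fixed bounds rather than the data's own min and max.

```python
# Hypothetical 8-bit grayscale pixel values; the bounds [0, 255] are
# known from the format, so we scale by the fixed range instead of
# computing min/max from the data itself.
pixels = [0, 64, 128, 255]
scaled = [p / 255 for p in pixels]  # values now in [0, 1]
```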

Min-Max vs. Z-Score Normalization

Min-max normalization fits data into a bounded range but is sensitive to outliers: one extreme value shifts all others toward 0 (or target_min). Z-score normalization (standardization) rescales data to have mean 0 and standard deviation 1; it has no fixed bounds, is less distorted by a single extreme value, and suits approximately normally distributed data. For most deep learning applications, min-max scaling to [0, 1] is preferred for inputs with known bounds.
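The two transformations are easy to compare side by side; a minimal sketch using only the standard library (the sample data is made up):

```python
import statistics

data = [2.0, 4.0, 6.0, 8.0]  # made-up sample

# Min-max: output is bounded, endpoints hit exactly 0 and 1
lo, hi = min(data), max(data)
min_max = [(x - lo) / (hi - lo) for x in data]

# Z-score: mean 0, (sample) standard deviation 1, no fixed bounds
mu = statistics.mean(data)
sigma = statistics.stdev(data)
z_scores = [(x - mu) / sigma for x in data]
```

The min-max output stays inside [0, 1] by construction, while the z-scores extend past 1 in both directions whenever a value is more than one standard deviation from the mean.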

Real-World Examples

House prices: Prices range from $120,000 to $2,000,000. Normalizing to [0, 1] allows a regression model to treat price on equal footing with other features like square footage.

Sensor readings: Temperature readings from -10°C to 40°C are normalized to [0, 1] for input into a classification model.

Survey responses: A Likert scale from 1–5 is normalized to [0, 1] to combine with other features in a recommendation engine.

Frequently Asked Questions

What is min-max normalization?
Min-max normalization rescales every value in a dataset proportionally so the minimum becomes target_min and the maximum becomes target_max. The formula preserves relative distances between values.
What is the difference between normalization and standardization?
Normalization (min-max) rescales data to a fixed range like [0, 1]. Standardization (z-score) rescales to have mean 0 and standard deviation 1, with no fixed bounds. Use normalization when you need bounded output; use standardization when the distribution should be centered at zero.
Is min-max normalization sensitive to outliers?
Yes. Because the formula uses the global minimum and maximum, a single extreme outlier can compress all other values into a narrow band near 0. If your data has extreme outliers, consider clipping them first or using robust scaling (based on IQR) instead.
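A small sketch of this compression effect (the data is made up):

```python
data = [1, 2, 3, 4, 1000]  # one extreme outlier
lo, hi = min(data), max(data)
scaled = [(x - lo) / (hi - lo) for x in data]
# roughly [0.0, 0.001, 0.002, 0.003, 1.0] -- the four ordinary
# values are crushed into a band of width about 0.003 near 0
```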
Can I normalize to a range other than [0, 1]?
Yes. Set target_min and target_max to any values you need, such as [-1, 1] for tanh activations or [0, 255] to reverse-normalize image data. The formula handles any target range.
How do I reverse (invert) the normalization?
To recover the original value from x' normalized to [0, 1]: x = x' × (x_max - x_min) + x_min. Store the original min and max before normalizing — you will need them to undo the transformation at inference time.
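For an arbitrary target range, the inversion is just the min-max formula solved for x; a sketch (the function name is illustrative):

```python
def denormalize(x_prime, x_min, x_max, target_min=0.0, target_max=1.0):
    """Invert min-max scaling: map a normalized value back to the original scale."""
    return (x_prime - target_min) / (target_max - target_min) * (x_max - x_min) + x_min
```

For example, denormalize(0.5, x_min=2, x_max=6) recovers 4.0.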
Does min-max normalization change the shape of the distribution?
No. It is a linear transformation, so the shape (histogram) of the distribution remains the same. Skewness, kurtosis, and the relative distances between all pairs of values are preserved.
What happens if all values in my dataset are the same?
If x_min equals x_max, the denominator is zero and the formula is undefined. This calculator detects this case and returns a warning. When it occurs, a common convention is to map every value to the midpoint of the target range.
Should I normalize before or after splitting into train/test sets?
Always compute the min and max from the training set only, then apply those values to normalize both the training and test sets. Normalizing the full dataset before splitting leaks information about the test set into the training fit.
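A sketch of the correct order of operations in plain Python (the data and function names are illustrative; scikit-learn users follow the same pattern by calling MinMaxScaler's fit on the training set and transform on both sets):

```python
def fit_min_max(train):
    """Learn the scaling parameters from the training set ONLY."""
    return min(train), max(train)

def transform(values, x_min, x_max):
    """Apply training-set parameters; test values may land outside [0, 1]."""
    return [(x - x_min) / (x_max - x_min) for x in values]

train = [10.0, 20.0, 30.0, 40.0]
test = [15.0, 50.0]  # 50 lies outside the training range

x_min, x_max = fit_min_max(train)
train_scaled = transform(train, x_min, x_max)
test_scaled = transform(test, x_min, x_max)  # second value exceeds 1.0
```

Note that a test value larger than the training maximum scales to more than 1; that is expected and correct, not a bug, because the parameters were deliberately fit on the training data alone.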