In metrology, precision indicates the repeatability or reproducibility of an instrument (but does not indicate accuracy). In other words, it is the degree of repetitiveness of the measuring process for a quantity, made using the same method under similar conditions. See also Accuracy vs Precision »

The ability of a measuring instrument to repeat the same result when measuring the same quantity is known as repeatability. Repeatability is random in nature and, by itself, does not assure accuracy, though it is a desirable characteristic. Precision refers to the consistent reproducibility of a measurement.

Reproducibility is normally specified in terms of a scale reading over a given period of time. If an instrument is not precise, it will give different results for the same dimension on repeated readings. In most measurements, precision assumes more significance than accuracy. It is important to note that the scale used for the measurement must be appropriate and conform to an internationally accepted standard.

If an instrument is used to measure the same input at different instants, spread over the whole day, successive measurements may vary randomly; precision therefore represents a static characteristic of an instrument. These random fluctuations of the readings (mostly following a Gaussian distribution) are often due to random variations in several other factors that have not been taken into account while measuring the variable. A precision instrument gives successive readings that are very close to one another; in other words, the standard deviation \(\sigma_e\) of the set of measurements is very small. Quantitatively, precision can be expressed as:

\[\textrm{Precision}=\dfrac{\textrm{measured range}}{\sigma_e}\]
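The ratio above can be illustrated with a short sketch. This is a minimal example, not a standard metrology routine: the readings and the 0–50 mm instrument range are hypothetical values chosen for illustration, and the sample standard deviation stands in for \(\sigma_e\).

```python
import statistics

# Hypothetical repeated readings (in mm) of the same dimension,
# taken with the same instrument under similar conditions.
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.02, 9.97, 10.01]

# Sample standard deviation sigma_e of the set of measurements:
# a small sigma_e means successive readings are close together.
sigma_e = statistics.stdev(readings)

# Precision as the ratio of the instrument's measured range to sigma_e,
# following the formula above. The 0-50 mm range is an assumption.
measured_range = 50.0
precision = measured_range / sigma_e

print(f"sigma_e   = {sigma_e:.4f} mm")
print(f"precision = {precision:.1f}")
```

A tight cluster of readings (small \(\sigma_e\)) yields a large precision value, matching the qualitative description: the closer the successive readings, the more precise the instrument.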