Tutorial: Internal Standards  

The method of internal standards is used to improve the precision of quantitative analysis. An internal standard is a known concentration of a substance that is present in every sample that is analyzed. Internal standards can be used with either the calibration curve or standard addition methods, although the former is probably more common.

The purpose of the internal standard is to behave similarly to the analyte but to provide a signal that can be distinguished from that of the analyte. Ideally, any factor that affects the analyte signal will also affect the signal of the internal standard to the same degree. Thus, the ratio of the two signals will exhibit less variability than the analyte signal.

Internal standards are often used in chromatography, mass spectrometry and atomic emission spectroscopy. They can also be used to correct for variability due to analyte loss during sample storage and treatment.

In the analysis of sodium by flame atomic emission spectroscopy, lithium may be used as an internal standard. Using the data below, calculate the concentration of sodium in the sample. Compare the precision of the result from the internal standard method with that achieved with the calibration curve method (i.e., if the lithium emission signals were ignored).

solution                  Na emission   Li emission
0.2 ppm Na, 500 ppm Li    0.22          48
0.5 ppm Na, 500 ppm Li    0.53
2.0 ppm Na, 500 ppm Li    2.30          51
5.0 ppm Na, 500 ppm Li    5.00          46
sample, 500 ppm Li        0.88          48

Answer: 0.81 +/- 0.20 ppm (95% CI)
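The answer can be reproduced with a least-squares fit of the Na/Li signal ratio against Na concentration. The sketch below (plain Python, no external libraries) uses only the three standards with both readings, since the Li emission for the 0.5 ppm standard is not given in the table:

```python
# Least-squares calibration of the Na/Li emission ratio vs. Na concentration.
# The 0.5 ppm standard is omitted because its Li reading is missing above.
conc = [0.2, 2.0, 5.0]                   # ppm Na in the standards
ratio = [0.22/48, 2.30/51, 5.00/46]      # Na emission / Li emission

n = len(conc)
mx = sum(conc) / n
my = sum(ratio) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, ratio))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

sample_ratio = 0.88 / 48                 # sample Na/Li signal ratio
na_ppm = (sample_ratio - intercept) / slope
print(f"{na_ppm:.2f} ppm Na")            # ~0.81 ppm, matching the answer
```

The same fit applied to the raw Na emission values (ignoring lithium) gives a similar central value but a wider confidence interval, which is the point of the comparison asked for above.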

In the above example, fluctuations in the flame temperature will affect the analyte signal by changing the degree of thermal excitation and ionization of the analyte. Presumably, temperature fluctuations will affect the lithium atoms, which are also easily excited and ionized in the atomizer, in a similar manner. The analytical technique must be capable of yielding multichannel data in order for the method to work, and the concentration of internal standard must be the same in all solutions.

Note: Sometimes the analyte concentration is calculated by using the response factor, which is defined as the ratio of sensitivities of the analyte and the internal standard. This calculation is less elegant than the solution given above.
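As a rough illustration of the response-factor route, the factor can be estimated from a single standard and then applied to the sample. The choice of the 2.0 ppm standard below is arbitrary (any standard with both readings would serve), so this is a sketch rather than the full calculation:

```python
# Response factor F = (S_Na / C_Na) / (S_Li / C_Li), estimated here from
# the 2.0 ppm Na standard alone, purely for illustration.
F = (2.30 / 2.0) / (51 / 500)

# Rearranging S_Na/S_Li = F * (C_Na/C_Li) for the sample's Na concentration:
c_na = (0.88 / 48) / F * 500
print(f"{c_na:.2f} ppm Na")   # ~0.81 ppm, consistent with the ratio method
```

Because it leans on one standard instead of the whole calibration set, this version is more sensitive to error in that single measurement, which is one reason the full-fit solution above is preferable.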