The first oximeter was developed by Millikan in the 1940s. It monitors the ratio of oxygen-carrying (oxygenated) hemoglobin to oxygen-free (deoxygenated) hemoglobin in arterial blood. A typical oximeter is equipped with two light-emitting diodes that face the site to be measured, usually the fingertip or earlobe. One diode emits a beam at a wavelength of 660 nanometers; the other emits at 905, 910, or 940 nanometers. Oxygenated and deoxygenated hemoglobin absorb light very differently at these two wavelengths, and this difference allows the ratio of the two hemoglobin species to be calculated. The measurement normally requires no blood to be drawn from the patient, and ordinary oximeters can also display the patient's pulse. According to the Beer-Lambert law, the relationship between the ratio R/IR and arterial oxygen saturation (SaO2) should be linear. However, biological tissue is a complex optical system characterized by strong scattering, weak absorption, and anisotropy [2-4], and therefore does not fully obey the classic Beer-Lambert law. This makes it difficult to establish a mathematical model that expresses the relationship between the relative changes in absorbance of red and infrared light (the R/IR value) and arterial oxygen saturation (SaO2). The correspondence between R/IR and SaO2 can therefore only be determined experimentally, in the form of a calibration curve. Most pulse oximeter manufacturers obtain empirical calibration curves experimentally and use them to pre-calibrate the product before it leaves the factory.
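The calibration step described above can be sketched in code. The snippet below is a minimal illustration, not any manufacturer's method: it computes the ratio-of-ratios R from the pulsatile (AC) and baseline (DC) signal components at the red and infrared wavelengths, then maps R to an oxygen saturation estimate through a commonly cited linear approximation, SpO2 ≈ 110 − 25·R. The function names, example amplitudes, and the linear coefficients are all illustrative assumptions; real devices use empirically fitted, device-specific calibration curves, as the text explains.

```python
def ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    """R = (AC_red / DC_red) / (AC_ir / DC_ir), the R/IR value from the text."""
    return (red_ac / red_dc) / (ir_ac / ir_dc)

def spo2_from_r(r):
    """Map R to SpO2 (%) via an illustrative linear calibration curve.

    The coefficients 110 and 25 are a widely quoted textbook approximation,
    not a manufacturer calibration; real curves are fitted experimentally.
    """
    return 110.0 - 25.0 * r

if __name__ == "__main__":
    # Hypothetical pulsatile (AC) and baseline (DC) amplitudes
    # at 660 nm (red) and 940 nm (infrared).
    r = ratio_of_ratios(red_ac=0.012, red_dc=1.80, ir_ac=0.020, ir_dc=2.00)
    print(f"R = {r:.3f}, SpO2 = {spo2_from_r(r):.1f}%")
```

Because tissue optics deviate from the Beer-Lambert law, a production device would replace `spo2_from_r` with a lookup into the empirically measured calibration curve rather than a fixed linear formula.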