When dealing with low-cost, home-made thermometers, the choice often falls on NTCs. These sensors are resistors whose resistance varies as a function of temperature, with a negative trend: hence the name Negative Temperature Coefficient, also known as thermistor. That means the higher the temperature, the lower the resistance:
We observe that this cheap sensor comes at the price of being strongly non-linear. There are various methods to derive the correct temperature from the measured resistance. Here I will go through the ones that can achieve quite reasonable precision without having (or paying for) a calibration laboratory.
Look-Up Table approach
Usually the datasheet provides a set of values, sampled from a sensor, which are guaranteed to lie inside the stated tolerance:
In this picture you can see the stated resistance at a certain temperature, and its tolerance. An immediate drawback, if no other tables are available, is that this table alone can give a precision of 5°C at best. The parameter “B” is normally called the Beta value (β).
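As a quick sketch of the look-up table approach, one can linearly interpolate between the datasheet points. The (temperature, resistance) pairs below are illustrative values for a generic 10 kΩ NTC, not taken from any specific datasheet:

```python
# Look-up table sketch: linear interpolation between datasheet points.
# The values below are illustrative for a generic 10 kOhm NTC.
LUT = [  # (T in degC, R in ohm), resistance falls as temperature rises
    (0, 32650.0),
    (25, 10000.0),
    (50, 3603.0),
    (75, 1481.0),
    (100, 678.0),
]

def temp_from_resistance(r):
    """Interpolate temperature from a measured resistance."""
    for (t_lo, r_lo), (t_hi, r_hi) in zip(LUT, LUT[1:]):
        if r_hi <= r <= r_lo:
            frac = (r_lo - r) / (r_lo - r_hi)  # position inside the segment
            return t_lo + frac * (t_hi - t_lo)
    raise ValueError("resistance outside table range")
```

With a coarse table like this one, the interpolation error between points adds to the precision limit discussed above.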
Moreover, the stated 5% tolerance of the sensor refers to 25°C only. To this resistance error we must add the “ΔR due to β tolerance” factor in the table above, from now on called ΔR/β for simplicity, which is given by the datasheet. Associated with it there is the temperature coefficient, TCR (also known as α), which describes how steep the curve is. With a very steep curve, i.e. a high TCR, a small deviation of temperature on the X axis corresponds to a large variation of resistance on the Y axis, therefore we have a high sensitivity. The situation is the opposite when reading high temperatures: with a low TCR, the sensitivity drops rapidly.
The thermistor error of a spot reading is then derived as follows:

$$\left(\frac{\Delta R}{R}\right)_{tot} = \left(\frac{\Delta R}{R}\right)_{T_{ref}} + \left(\frac{\Delta R}{R}\right)_{\Delta\beta} \qquad \text{(eq. 1)}$$

where $(\Delta R/R)_{T_{ref}}$ is the resistance tolerance at the reference temperature (specified in the datasheet, here 25°C), and β is a coefficient which characterizes the NTC material, derived by measuring at two different temperatures (and specified in the datasheet). If we want to keep the stated tolerance after replacing the sensor with another of the same model without calibration, we must observe these tolerances. Combining them with the TCR, we obtain the temperature error of a spot reading:

$$\Delta T = \left(\frac{\Delta R}{R}\right)_{tot} \Big/ \; |TCR| \qquad \text{(eq. 2)}$$
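The tolerance combination above can be sketched in a few lines. The 5%, 1.5% and 4.4%/°C numbers are illustrative values in the right order of magnitude for a generic NTC, not taken from a specific datasheet:

```python
# Sketch: worst-case temperature error from datasheet tolerances.
def temp_error(r_tol, beta_term, tcr):
    """
    r_tol     : relative resistance tolerance at Tref (e.g. 0.05 for 5 %)
    beta_term : extra relative resistance error due to beta tolerance
    tcr       : temperature coefficient magnitude in 1/degC (e.g. 0.044)
    Returns the worst-case temperature error in degC.
    """
    total_r_err = r_tol + beta_term  # worst case: the errors add up
    return total_r_err / tcr         # divide by the curve steepness

# Illustrative numbers: 5 % tolerance, 1.5 % from beta, TCR = 4.4 %/degC
err = temp_error(0.05, 0.015, 0.044)  # about 1.5 degC
```

Note how the same resistance error maps to a much larger temperature error where the TCR is low.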
There are many methods used to linearize this behaviour by refining the model. The most famous is the Steinhart-Hart equation, which uses a set of coefficients obtained in one of two ways: provided by the manufacturer, or derived by measuring 3 different temperatures and solving 3 equations in 3 unknowns. If these coefficients are not provided by the manufacturer, one can use a more precise thermometer, measure 3 temperatures and the corresponding resistances, and solve the resulting system for the 3rd-order S-H equation:

$$\frac{1}{T} = A + B\,\ln R + C\,(\ln R)^3$$
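As a sketch of this calibration step, the 3×3 linear system in A, B, C can be solved directly. The calibration points used below are illustrative (T in kelvin, R in ohm), not real measurements:

```python
import math

def steinhart_hart_coeffs(points):
    """
    Solve A, B, C of 1/T = A + B*ln(R) + C*ln(R)^3 from three
    (T_kelvin, R_ohm) calibration points, via Gaussian elimination.
    """
    M, y = [], []
    for t, r in points:
        L = math.log(r)
        M.append([1.0, L, L ** 3])
        y.append(1.0 / t)
    # Plain Gaussian elimination with partial pivoting (3x3 system)
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        y[col], y[piv] = y[piv], y[col]
        for row in range(col + 1, 3):
            f = M[row][col] / M[col][col]
            for k in range(col, 3):
                M[row][k] -= f * M[col][k]
            y[row] -= f * y[col]
    x = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):  # back substitution
        x[row] = (y[row] - sum(M[row][k] * x[k] for k in range(row + 1, 3))) / M[row][row]
    return x  # [A, B, C]
```

With the coefficients in hand, the same formula gives T from any measured R inside the calibrated range.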
The manufacturer of the sensor adopted for this experiment provides the coefficients up to the 4th order, both for the reversed and the direct measurement:
With a sensor stated to have 5% tolerance, one can actually use the coefficients without the full decimal precision (ideally) used by the manufacturer, because the error introduced by this formula is in the order of mK, while the final reading error, due to the various contributions, is higher than 1.5K. I therefore neglect the S-H error.
Assuming a negligible error from the S-H calculation with the parameters given by the manufacturer, we usually use a microcontroller with an ADC and a voltage divider. The error of a common ADC is half LSB, so that:

$$\Delta V_{ADC} = \frac{LSB}{2} = \frac{V_{ref}}{2^{N+1}}$$
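The half-LSB bound is a one-liner; the 3 V reference and 10-bit resolution below are the illustrative values used later in this article:

```python
# Sketch: worst-case quantization error of an N-bit ADC (half LSB).
def adc_quantization_error(vref, n_bits):
    lsb = vref / (2 ** n_bits)  # volts per ADC step
    return lsb / 2              # worst-case error in volts

e = adc_quantization_error(3.0, 10)  # ~1.46 mV with a 3 V reference, 10 bits
```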
The error of the voltage divider tends to the sum of the errors of the two resistors when their values are very different, otherwise it tends to the mean of the two relative errors, i.e. $(\epsilon_1+\epsilon_2)/2$. See the graph, where 100% is the mean value between the errors of the 2 resistors, and 200% represents the sum of the 2 errors:
These two resistors are arranged in this way, where one of them is the thermistor:
where at the reference temperature (provided by the manufacturer, usually 25°C) Rref and Rtherm have the same value (i.e. we bought a thermistor matched to a certain resistance to reduce errors). Depending on your temperature range, you can see how the ratio varies and how large the total error has to be considered.
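To see how the ratio moves across the range, here is a sketch assuming the thermistor on the low side of the divider (the topology is an assumption here, since the schematic is not reproduced) and the illustrative 10 kΩ values used earlier:

```python
# Sketch: divider ratio Vout/Vref with the NTC on the low side,
# matched to a 10 kOhm fixed resistor at 25 degC (illustrative values).
def divider_ratio(r_therm, r_ref=10000.0):
    return r_therm / (r_therm + r_ref)

ratios = {t: divider_ratio(r) for t, r in
          [(0, 32650.0), (25, 10000.0), (100, 678.0)]}
# At 25 degC the ratio is exactly 0.5; near 100 degC it flattens out,
# which is where the sensitivity (and the TCR) is lowest.
```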
How large is the error in my temperature range? The previous total resistance error and its TCR will lead to a temperature error like this:
The ADC and conditioning circuitry errors are NOT considered here, but this gives an idea of the performance when no calibration is performed (see later). If the resistance at the extremes of my range is not so different from the reference resistor (a normal fixed resistor in the schematic above), then it is not mandatory to sum up both errors of the two resistors; the total can be a little less.
If I need to measure between 0°C and 100°C, the datasheet provides the additional resistance error due to the tolerance of the β parameter, called Δβ/β. We will find that at 100°C there is a low TCR and a high relative error. ONLY NOW can we apply the worst-case total error $\epsilon_{tot} = \epsilon_{NTC} + \epsilon_{ref}$, where $\epsilon_{NTC}$ is the (eq. 1) error at a temperature of 100°C using the table from the manufacturer, while $\epsilon_{ref}$ is the relative error of Rref, the fixed resistor in the schematic above.
The TCR is chosen to give the highest relative error, so it will be the TCR at 100°C (as said before, the highest temperature of the range), along with the estimated resistance value at that temperature (of course). You may see how the error is greatly reduced when reading values with a higher TCR at lower temperatures, and how small it is at 25°C. But the boundaries must contain the largest error tolerance, to allow the user to change the sensor in the field without recalibration.
With the calibration using the set of 3 equations above, all these errors are compensated, voltage divider included. The remaining ones will be the truncation error of the S-H coefficients due to finite machine precision (whether a PC or an MCU is used for the calibration), the errors of the reference thermometer, the intrinsic errors of the S-H model, the quantization error of the ADC (half LSB), and surely others that I have missed. It is not trivial to quantify everything. And quantification, when talking about measurement, is almost everything.
Where is the ADC?
We have found how large the error of the analog quantities is. Now, where is the least sensitive part of the NTC curve? The one at the highest temperature, as said before (lowest TCR). Until now I have estimated a certain error of the total voltage divider resistance.
Now we need to find how an ADC error maps onto the resistance. Let's go to 100°C, using an S-H estimation or the look-up table, then calculate a sort of manual derivative, say the value ΔR, where ΔR is the resistance step needed to move the temperature by one step of the required precision (if I want a precision of 1°C, that is the resistance at 99°C if I have the S-H equation, or the resistance at 95°C if I only have a rough look-up table like the one in this article).
From the circuit of the voltage divider we have $V_{out} = V_{ref}\,\frac{R_{therm}}{R_{therm}+R_{ref}}$ (assuming the thermistor on the low side) and the corresponding $\Delta V$ produced by a ΔR change. The ratio $\Delta V / LSB$ tells how many discrete steps can be sampled inside a ΔR variation. E.g., if 1 LSB = 3mV (the ADC resolves 3mV per step) and from 100°C to 99°C the variation read by the ADC is 6mV, I can't have a resolution better than 2 LSB per °C, meaning 0.5°C (2 LSB to represent 1°C). If I am below 1 LSB, I can't discern my chosen step of 1°C.
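The 2-LSB example above can be checked numerically. The 678 Ω and 700 Ω values are illustrative resistances around 100°C and 99°C for a generic 10 kΩ NTC, with the 3 V reference and 10-bit ADC assumed earlier:

```python
# Sketch: how many ADC steps fit inside a 1 degC resistance change
# at the worst (hottest, flattest) point of the range.
def lsb_per_degree(vref, n_bits, r_ref, r_hot, r_cooler):
    lsb = vref / (2 ** n_bits)
    v_hot = vref * r_hot / (r_hot + r_ref)          # divider output at T
    v_cooler = vref * r_cooler / (r_cooler + r_ref) # divider output at T - 1 degC
    return abs(v_cooler - v_hot) / lsb              # ADC steps per degC

# Illustrative: 3 V reference, 10-bit ADC, 10 kOhm fixed resistor,
# NTC around 678 ohm at 100 degC and 700 ohm at 99 degC
steps = lsb_per_degree(3.0, 10, 10000.0, 678.0, 700.0)  # roughly 2 LSB
```

Below 1 LSB per degree, the required 1°C step is simply not resolvable.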
Saying the same more mathematically: consider the reference voltage applied to the voltage divider to be 3V, and the datasheet provides a certain TCR at 100°C. Then the resistance at 99°C will be, to first order:

$$R_{99°C} \approx R_{100°C}\,(1 + |TCR| \cdot 1°C)$$

Finally, the additional temperature error from the ADC is, in the worst case:

$$\Delta T_{ADC} = \frac{LSB/2}{\Delta V / \Delta T} \qquad \text{(eq. 4)}$$

The final precision, from (eq. 2) and (eq. 4), is:

$$\Delta T_{tot} = \Delta T_{(eq.\,2)} + \Delta T_{ADC}$$
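This worst-case ADC contribution can be sketched with the same illustrative numbers (3 V reference, 10-bit ADC, roughly 6 mV of divider output change per °C near 100°C):

```python
# Sketch: extra temperature error caused by ADC quantization (half LSB),
# divided by the local slope of the divider output in volts per degC.
def adc_temp_error(vref, n_bits, dv_per_degc):
    lsb = vref / (2 ** n_bits)
    return (lsb / 2) / dv_per_degc  # degC of error from half an LSB

e = adc_temp_error(3.0, 10, 0.006)  # ~0.24 degC with the values above
```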
One can try this and find out that with an ADC of N = 10 bits and components at 5% tolerance, NTC included, over the range 0°C-100°C, a precision better than 3°C-4°C can hardly be achieved without calibration, although the resolution can be around 0.5°C-1°C. But note that this low precision comes from considering the range up to its most imprecise extreme: reducing the range to, say, 60°C, the precision improves a lot. Just keep that in mind when you read 25°C, or 150°C, using an NTC.