Part 1: Introduction
Precision Electronics
This is the first article in a series covering the basics of precision electronics
design. The practical series will cover a range of topics, including precision op amps,
instrumentation amplifiers, signal switching and noise. I will use real examples and real
components to demonstrate the concepts.
By Andrew Levido
While I aim to cover this topic from
a practical perspective rather
than a theoretical one, some
theory is unavoidable. Along with
explaining the concepts, I hope to give
a few tips and tricks along the way.
Since most devices built today include
a microcontroller, we will also look at
analog-to-digital and digital-to-analog
conversion.
What is precision?
We should start by defining ‘precision’ in the context of precision
circuits. We should also distinguish
between precision and accuracy, two
often confused terms.
Both precision and accuracy are
ways of looking at the error in the
measurement of a physical or electrical quantity. Accuracy describes
how closely a measurement or series
of measurements matches the ‘true’
value. In practice, that more likely
means how closely it matches an
accepted proxy for the quantity, probably traceable to some international
standard.
Precision describes how closely a
series of measurements match each
other. It relates to the repeatability of
a measurement – how confident we
can be that another measurement in
one minute, tomorrow, or next year
will be the same as the one taken now.
Alternatively, it could indicate how
confident we are that the measurement
taken by the second, 100th or 10,000th
unit off the production line will perform identically to the first one.
Fig.1 illustrates this nicely. It is a
histogram of 16 different measurements of a nominal 10.0V source taken
over time. The mean of the samples is 9.9V, with a spread of ±0.2V around this (from 9.7V to 10.1V). The mean differs from the ‘true’ 10V value by 0.1V.
Therefore, we can say that our accuracy is within ±0.1V of 10.0V or ±1%. The precision of our measurement is ±0.2V around the 9.9V mean, so within ±2%.

Fig.1: 16 samples of a nominally 10V source. The measurement accuracy is the difference between the sample mean and the ‘true’ value of 10V, while the precision is the spread of samples about the mean. Here, the precision is ±0.2V (absolute) or ±2% (relative).
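To make the arithmetic concrete, here is a minimal Python sketch of the same calculation. The 16 sample values are invented for illustration; only their 9.9V mean and ±0.2V spread are chosen to match the Fig.1 example.

```python
# Hypothetical samples (volts) of a nominally 10.0V source, chosen so the
# mean is 9.9V and the spread is ±0.2V, as in the Fig.1 example.
samples = [9.7, 9.8, 9.8, 9.8, 9.8, 9.9, 9.9, 9.9,
           9.9, 9.9, 9.9, 10.0, 10.0, 10.0, 10.0, 10.1]
true_value = 10.0                       # the 'true' (traceable) value

mean = sum(samples) / len(samples)      # sample mean: 9.9V
accuracy = mean - true_value            # how far the mean is from the true value
precision = (max(samples) - min(samples)) / 2   # half the spread about the mean

print(f"Mean: {mean:.2f}V")
print(f"Accuracy: {accuracy:+.2f}V ({accuracy / true_value:+.1%})")
print(f"Precision: ±{precision:.2f}V (±{precision / mean:.1%})")
```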
Precision and accuracy are related
but independent quantities. We can
have precision without accuracy and
accuracy without precision (although
the latter would be of limited value).
Note that in the example above, an
accuracy of ±1% does not mean that
every measurement will be within
±1% of the actual value since the measurement precision is not good enough
to allow that.
Accuracy is all about traceability
and calibration, whereas precision
is all about understanding and controlling the sources of uncertainty or
error in our circuits. It is not always
about achieving the highest levels of
precision – it is about getting ‘good
enough’ results for the application,
which requires us to know what the
precision of our circuit is.
From the example above, you will
have seen that we talk about precision
in both absolute terms, such as ±0.2V,
or in relative terms using percentages
(±2%). We also use parts per million
(ppm) for relative precision when the
numbers get very small; for example,
0.01% equals 100ppm. If we have
extremely good precision, we might
even talk about parts per billion (ppb)!
We can always measure the precision of a circuit after it is built, but we
have just seen that one sample isn’t
enough. Also, we usually want to be
sure our design will meet the precision targets before we commit to mass
manufacture. Precision circuit design
is the process of keeping careful track
of errors and uncertainties and how
they accumulate to impact the overall precision of the circuit of interest.
Sources of uncertainty
Before we get into a practical example, it might help to understand where
these errors and uncertainties come
from. Many errors result from complex interactions of various causes,
but it helps to think of them in some
broad categories:
Limitations of physics
Real-world limitations introduce
errors. For example, there is no such
thing as a perfect insulator, so leakage currents occur. It is impossible
to source or sink infinite current, so
devices must have some finite output
impedance, which means outputs will
change with load.
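As a simple illustration of the finite output impedance point, the Python sketch below models a source as an ideal voltage behind a small series resistance and shows the output sagging as the load gets heavier. The values are arbitrary, chosen only to show the effect.

```python
# An ideal 5V source behind a non-zero output impedance: the heavier the load,
# the further the delivered voltage falls from the open-circuit value.
v_open = 5.0    # open-circuit output voltage, volts
r_out = 0.5     # output impedance, ohms (never zero in a real device)

for r_load in (1000.0, 100.0, 10.0):                  # load resistance, ohms
    v_loaded = v_open * r_load / (r_load + r_out)     # simple voltage divider
    error = (v_loaded - v_open) / v_open
    print(f"R_load = {r_load:6.1f} ohms -> Vout = {v_loaded:.4f}V ({error:+.2%})")
```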
Noise
Another inescapable result of physics is the electrical noise caused by
the random movement of electrical
charges in certain materials. This can
significantly impact measurements
involving small quantities (microvolts
and microamps, or even nanovolts and
nanoamps!). Noise is a whole topic in
itself that we will cover later in this
series.
Temperature
Sadly, almost everything in electronics changes with temperature, and
usually not for the better. Resistor values change, noise increases and offsets drift. The wider the temperature range your device will be subjected to, the more of a problem this becomes.
Frequency and time
Like temperature, frequency changes
almost everything. A parameter specified at DC may vary considerably as
frequency increases. Some things get
worse over time, too. MLCC capacitors
lose capacitance with age, and even
the frequency of crystals can drift over
time. It’s not the biggest problem you
will likely encounter, but it is worth
being aware of.
Manufacturing variation
Even a well-designed component,
using the best materials and a good
manufacturing process, will have
some degree of variation between
parts. It is impossible to make them all
absolutely identical. Common examples include resistor tolerances and
op amp input offset voltages. There
will be a natural spread of these values around a mean (the nominal resistance for resistors or 0mV for op amp
offset voltages).
Understanding component
limitations
There are no perfect components,
just as there are no perfect circuits.
Optimising for one parameter may
have a detrimental effect on another.
One example that springs to mind
is the common multi-layer ceramic
capacitor (MLCC). Many of these use
a dielectric material that allows the
manufacturers to cram a huge amount
of capacitance into a tiny volume for a
ridiculously low price. The downside
is that the capacitance is highly sensitive to temperature, applied voltage
and ageing.
The component variation with
these conditions can easily be two or
three times the nominal tolerance of
the capacitance. That is the price you
pay for 10¢ 10µF 0402-size capacitors.
Sticking with the example of
ceramic capacitors, do you know what
it means when a capacitor is labelled
X7R, X5R, Y5V, C0G, NP0 etc? It is
related to the temperature range and
how much the capacitance varies over
it, but it is actually much more than
that. For example, these codes also
affect how capacitance changes with
voltage. This shows why it pays to do
your homework!
Manufacturers are not always as
forthcoming about a part’s limitations as they are about its features
(especially on the front page of the
data sheet). Be wary of typical values
compared to worst-case values. You
must read the data sheets carefully
and thoroughly.
Don’t just read the data tables –
often, the graphs give useful information about how a device will perform
that is quite different from the flattering conditions under which the nominal values are derived.
A practical example
Despite all this, it is, of course, possible to design high-precision circuits,
and there are a few handy tricks that
can help us get there. To get started,
we will use a simple example that we
can build upon in subsequent articles.
Imagine we are designing a DC power supply to power a microcontroller-based circuit. We want to measure the current consumed by our device over the range of 0A to 1A. We would ultimately like to measure currents down to the microamp level (or lower) if possible, since our device may go into sleep mode.

Fig.2: our first attempt at a current-measuring circuit. The 0-1A current to be measured (IL) flows through Rs and the resulting voltage is amplified by IC1 to produce a 0-2.5V output. It uses regular 1% resistors and a low-cost rail-to-rail op amp.
This isn’t easy to achieve. We will
develop the idea over the next few
articles, but let’s start by working out
what sort of performance is possible
with some very basic components and
a straightforward circuit. Fig.2 shows
the circuit we will begin with.
On the left is a 0.1Ω resistor used as
a current shunt. For the time being, we
will assume it is ground-referenced.
This shunt will drop 100mV across it
at the full 1A load. We need to amplify
this signal to get it into the range of
an analog-to-digital converter, say to
around 2.5V, which means we need
an amplifier gain of 25.
I have used a low-cost general-
purpose rail-to-rail input and output
(RRIO) op amp, the LM7301, to start
with since its inputs and outputs can
swing to the rails. We’ll also use standard 1% tolerance resistors to set the
gain. Initially, we will power this part
of the circuit with a single 5V supply.
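Before analysing the errors, it is worth sanity-checking the nominal numbers behind Fig.2. The short Python sketch below just restates the article's figures: Ohm's law gives the shunt voltage, and the ratio to the ADC target gives the required gain; the 1kΩ/24kΩ pair mentioned in the comment is the combination used in the error budget later on.

```python
# Nominal design figures for the Fig.2 current-measuring circuit.
r_shunt = 0.1          # shunt resistance, ohms (100 milliohms)
i_full_scale = 1.0     # full-scale load current, amps
v_adc_target = 2.5     # desired full-scale voltage at the ADC, volts

v_shunt = i_full_scale * r_shunt     # 100mV across the shunt at full load
gain = v_adc_target / v_shunt        # required amplifier gain = 25

print(f"Shunt voltage at full scale: {v_shunt * 1000:.0f}mV")
print(f"Required amplifier gain: {gain:.0f}")
# A non-inverting stage with R1 = 1k and R2 = 24k gives (R1 + R2)/R1 = 25.
```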
To estimate the precision that we
can expect from this circuit, we need to
move through the circuit one element
at a time, find its contribution to the
overall error and sum them somehow.
We will take this very slowly initially
to illustrate the process.
At node A, we will see a voltage proportional to the load current but with
some uncertainty due to the resistor
tolerance. The resistor tolerance is 1%, so it will have an absolute resistance value of 100±1mΩ. We will therefore see a voltage across it of 100±1mV at full load.

Fig.3: this extract from the LM7301 data sheet shows the expected input offset voltage. At 25°C, it is specified to be ±30µV (typical) and ±6mV (maximum) – quite a range! I suggest using the latter in your designs.

Parameter                                    Test Conditions   TYP       MAX
Vos – input offset voltage                   Ta = 25°C         0.03mV    6mV
                                             Ta = Tj           N/A       8mV
TCVos – input offset voltage average drift   Ta = Tj           2μV/°C    N/A

Fig.4: when adding or subtracting quantities with uncertainties, the uncertainty of the result is the sum of the absolute uncertainties, shown at the top. When multiplying or dividing, the uncertainty of the result is approximated by the sum of relative uncertainties, shown below.

Adding two quantities with errors:
(z + Δz) = (x + Δx) + (y + Δy) = (x + y) + (Δx + Δy)
→ z = x + y, Δz = Δx + Δy

Multiplying two quantities with errors:
(z + Δz) = (x + Δx)•(y + Δy) = x•y + x•Δy + y•Δx + Δx•Δy
→ z = x•y, Δz ≈ x•Δy + y•Δx and Δz/z ≈ Δx/x + Δy/y

Table 1 – measured results from the Fig.2 circuit using a single supply (+5V).

Current (mA)   Vout (mV)   Abs. Error (mV)   Rel. Error (%)
0.0            25.0        25.0              1.0
99.7           251.9       2.7               0.1
199.8          515.2       15.7              0.6
299.7          769.6       20.4              0.8
399.9          1021.3      21.6              0.9
499.9          1272.5      22.8              0.9
599.9          1523.9      24.2              1.0
699.9          1777.0      27.3              1.1
800.0          2030.1      30.1              1.2
900.0          2282.1      32.1              1.3
1000.0         2533.3      33.3              1.3
We will also see the op amp’s input
offset voltage appearing at node A.
Fig.3 shows the relevant extract from
the LM7301 data sheet. The input
offset voltage at 25°C is specified to
be ±30µV (typical) and ±6mV (maximum). The maximum offset is more
than 100 times the typical figure! We
will use the worst-case value for reasons I will discuss below.
We now have two quantities (voltage across the resistor and the op amp offset voltage), each with its own uncertainty, that we need to sum. The error in the total value will simply be the sum of the absolute errors of each part. This probably seems obvious, but you can see the maths that proves it in Fig.4.
That figure also shows the less obvious result: that the total error when two quantities are multiplied is approximated by the sum of the relative errors of each quantity. The approximation works because we can ignore the Δx•Δy term if the errors are small.
This leads to an important rule for precision circuit design: if adding or subtracting quantities, sum the absolute errors; if multiplying or dividing, sum the relative errors.
So, back to our circuit. Summing the absolute errors at node A gives a total error of ±7mV. You can probably already see this is a potential problem (no pun intended), but let's keep going.
At node B, we will see the voltage at node A multiplied by the gain of the op amp stage. The gain with two 1% gain-setting resistors will be 25±2%, or 25±0.5 in absolute terms.
The total error at the circuit output (node B) will therefore be the sum of the relative errors of the node A voltage (±7%) and the gain (±2%), or ±9%. This corresponds to about ±225mV absolute error in the 2.5V full-scale signal. Clearly, that is not acceptable. The op amp offset voltage is the biggest contributor by far and is pretty easy to deal with. But how will this circuit perform in real life?
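A minimal Python sketch of this worst-case accumulation, using the same figures as above (1% shunt, ±6mV worst-case offset, ±2% gain), is shown below. Absolute errors are added for the sum at node A, and relative errors are added for the multiplication by the gain.

```python
# Worst-case error accumulation for the Fig.2 circuit at full scale (1A).
v_shunt, v_shunt_err = 0.100, 0.001    # 100mV ±1mV (1% shunt resistor)
v_os, v_os_err = 0.0, 0.006            # op amp offset: 0mV nominal, ±6mV worst case
gain, gain_rel_err = 25.0, 0.02        # gain of 25, ±2% (two 1% resistors)

# Node A: quantities add, so absolute errors add.
v_a = v_shunt + v_os
v_a_err = v_shunt_err + v_os_err       # ±7mV
v_a_rel = v_a_err / v_a                # ±7%

# Node B: multiplication by the gain, so relative errors add.
v_b = v_a * gain
v_b_rel = v_a_rel + gain_rel_err       # ±9%
v_b_err = v_b * v_b_rel                # ±225mV on a 2.5V signal

print(f"Node A: {v_a * 1000:.0f}mV ±{v_a_err * 1000:.0f}mV (±{v_a_rel:.0%})")
print(f"Node B: {v_b:.1f}V ±{v_b_err * 1000:.0f}mV (±{v_b_rel:.0%})")
```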
Practical results
I built this circuit and measured the
results shown in Table 1. You won’t
be surprised that they are much better
than the worst-case estimate of ±9%.
This is because the errors result from
statistical variation, and there is a
much higher probability that any given
sample will be near the mean or nominal value than an outlier.
The full-scale error was 33mV, or 1.3%, and the errors reduce at lower currents except at the bottom of the range, where there seems to be some kind of anomaly.

Fig.5: at left is a plot of the measured results from the Fig.2 circuit; note the subtle kink in the curve near zero. The close-up on the right clearly shows that the output is too high at 0A due to the op amp's limited output swing.

Table 2 – raw results from the Fig.6 circuit with a dual supply (±5V).

Current (mA)   Vout (mV)   Abs. Error (mV)   Rel. Error (%)
0.0            -41.5       -41.5             -1.7
97.9           203.7       -41.1             -1.6
198.2          454.6       -41.9             -1.6
298.3          693.3       -52.5             -2.1
398.3          944.1       -51.7             -2.1
498.3          1197.2      -48.6             -1.9
598.3          1447.5      -48.3             -1.9
698.0          1728.3      -16.7             -0.7
798.0          1982.2      -12.8             -0.5
898.0          2235.2      -9.8              -0.4
998.0          2488.8      -6.2              -0.2

Table 3 – the Table 2 data after applying fixed offset and gain corrections.

Current (mA)   Vout (mV)   Abs. Error (mV)   Rel. Error (%)
0.0            12.8        12.8              0.5
97.9           253.9       9.2               0.4
198.2          500.7       5.2               0.2
298.3          735.5       -10.3             -0.4
398.3          982.1       -13.6             -0.5
498.3          1231.1      -14.7             -0.6
598.3          1477.2      -18.5             -0.7
698.0          1753.4      8.4               0.3
798.0          2003.1      8.1               0.3
898.0          2252.0      7.0               0.3
998.0          2501.4      6.4               0.3
You can see this also in the plot of
the results in Fig.5, on the left. The full
set of results looks OK except for the
zero-current reading, which is slightly
off. The first three readings, along with
the ideal response, are shown on the
‘zoomed in’ plot on the right of Fig.5.
There is clearly a problem at or near
zero current.
We know the op amp offset voltage
is not causing this, because that would
appear as a consistent vertical shift
of the measurements above or below
the ideal line. It is not caused by gain
error, because that would appear as a
variation in the slope compared to the
ideal line. Something else is going on
– there is a small but definite ‘bend’ in
the measured results at the bottom end.
The culprit is the op amp’s output
swing. While the LM7301 claims to be
a “rail-to-rail” output op amp, a close
look at the data reveals that with a 5V
supply and a 10kΩ load, the output
typically won’t go below 70mV (and
isn’t guaranteed to go below 120mV).
We are measuring 25mV, which is better than claimed. This is a very good
swing, better than most op amps, but
it isn’t rail-to-rail as advertised!
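The effect of that output floor is easy to model. The Python sketch below clamps an otherwise ideal response at an assumed 25mV floor (roughly the zero-current reading measured above); every reading below about 10mA piles up at the floor, which is the kink visible in Fig.5.

```python
# Simplified model of the single-supply stage: an ideal gain-of-25 response
# clamped at the lowest voltage the op amp output can actually reach.
# The 25mV floor is an illustrative assumption based on the measured zero reading.
gain = 25.0
r_shunt = 0.1      # ohms
v_floor = 0.025    # assumed lowest achievable output, volts

for i_load in (0.0, 0.005, 0.01, 0.02, 0.05):          # load current, amps
    v_ideal = gain * i_load * r_shunt                  # ideal response
    v_out = max(v_ideal, v_floor)                      # clamped at the floor
    print(f"I = {i_load * 1000:4.0f}mA: ideal {v_ideal * 1000:5.1f}mV, "
          f"model {v_out * 1000:5.1f}mV")
```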
We would rather avoid non-linearities like this because they are harder
to deal with than purely linear errors
such as fixed offsets or gain errors, as
we shall see. I refined my circuit by
adding a negative supply rail (Fig.6).
Running the tests again produced the
data shown in Table 2 and plotted in
Fig.7.
In some ways, this looks worse than
our first test! The most significant error
is just over -52mV or 2.1% of full scale.
This error occurred mid-scale, with the
absolute error at zero being -42mV; at
full scale, it is only -6mV (0.2%).
The good news is that the points are
fairly linear. The dotted line in Fig.7
is a line of best fit, using the equation
shown on the graph. This line suggests there is a fixed offset error of
-54.5mV and a gain error (the difference between the slope of the line and
the ideal slope of 2.5) of about 1.7%.
The fixed error comes mainly from
the op amp’s offset voltage, which
must be around -2.2mV (taking the
gain of 25 into account). The gain error
comes largely from the resistor tolerances. Happily, there is also no longer a bend in the plot.
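For readers who want to reproduce the fit, a short sketch using numpy's polyfit on the Table 2 data is shown below. An ordinary least-squares straight-line fit gives an offset of about -54.5mV and a slope of about 2.54mV/mA, i.e. a gain error of roughly +1.7%, matching the figures quoted above.

```python
import numpy as np

# Table 2 data: dual-supply circuit, raw measurements.
current = np.array([0.0, 97.9, 198.2, 298.3, 398.3, 498.3,
                    598.3, 698.0, 798.0, 898.0, 998.0])       # mA
vout = np.array([-41.5, 203.7, 454.6, 693.3, 944.1, 1197.2,
                 1447.5, 1728.3, 1982.2, 2235.2, 2488.8])     # mV

slope, offset = np.polyfit(current, vout, 1)   # vout ~ slope*current + offset
gain_error = (slope - 2.5) / 2.5               # ideal slope is 2.5mV/mA

print(f"Offset error: {offset:.1f}mV")
print(f"Slope: {slope:.4f}mV/mA (gain error {gain_error:+.2%})")
```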
Note that the op amp offset is less
than the quoted worst-case figure
(±6mV), but by no means does it fall
within the typical figure of ±30µV.
This is just one sample, but it does
illustrate the danger of assuming your
results will match the ‘typical’ figures
in the data sheet.
We will improve this result next
time by selecting a ‘better’ op amp and
tighter tolerance resistors. But just for a
moment, let’s look at another solution.
We could compensate for both of these
errors (offset & gain) by adding a fixed
correction – either through analog
trimming or, more likely these days,
in software on the microcontroller.
Just because we can, let’s look at
how much we could improve these
readings by applying gain and offset
correction using the values from the
line of best fit. Table 3 shows the corrected results. Now the absolute error is never worse than about ±20mV, or 0.75% of full scale. Not bad, given the parts we have chosen.

Fig.6: powering the op amp from dual supply rails (±5V) fixes its output swing problem. Otherwise, this circuit is identical to Fig.2.

Fig.7: the measured result of the Fig.6 circuit, along with a calculated line of best fit (dotted). There is now a fixed offset and gain error that can be trimmed out in either the analog or digital domains.
This is one of the big secrets of precision design. You can usually trim out
fixed offset or gain errors to some significant degree. The emphasis should
be on the word “fixed”. It’s way more
difficult to trim out non-linearities or
errors that change over time, such as
temperature drift.
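As a concrete illustration of trimming in software, the sketch below applies the fixed offset and gain corrections derived from the line of best fit to a raw reading. The slope constant is the value the least-squares fit above produces (the article quotes the equivalent gain error as about 1.7%); applying the same correction to every row of Table 2 reproduces the Table 3 figures to within rounding.

```python
# Correction constants from the line of best fit (Fig.7).
OFFSET_MV = -54.5         # fitted offset error, millivolts
SLOPE_MV_PER_MA = 2.5419  # fitted slope; the ideal slope is 2.5mV/mA

def correct(raw_mv: float) -> float:
    """Remove the fitted offset, then rescale to the ideal 2.5mV/mA slope."""
    return (raw_mv - OFFSET_MV) * (2.5 / SLOPE_MV_PER_MA)

# Example: the raw full-scale reading from Table 2 (998.0mA, 2488.8mV).
print(f"{correct(2488.8):.1f}mV")   # about 2501mV, as in Table 3
```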
Temperature effects
To examine the effect of temperature, I want to introduce the idea of
the error budget table. This is just a
way of capturing the uncertainties
we discussed above in a neat tabular
form. Table 4 shows an example. You
can use any format you like, but this
is how I generally do it.
Under the “At Nominal 25°C” section, you will see each step we went
through in the above example, capturing the nominal value and relative
and/or absolute uncertainty.
For example, Line 1 is the shunt
resistor and Line 3 is the op amp offset. Lines 2 and 4 are calculated values
and are shown in bold text. I always
show both the absolute and relative
errors on calculated lines. At Line 8,
we get to the ±225mV and ±9% error
figures calculated above.
The second part of the table brings
the temperature-dependent errors
into the picture. We obviously have to
know the temperature range of interest
to calculate these uncertainties. I have
chosen a range of 0°C to 50°C (±25°C
either side of the nominal 25°C) in
this example.
The data sheet for the shunt resistor I
used (Stackpole CSR1225) tells me that
its temperature coefficient (tempco) is
100ppm/°C. This means we will see a
resistance change of up to ±2500ppm
or ±0.25% over the range of interest
on top of the 1% tolerance.
Similarly, the op amp’s offset voltage has a drift of ±2µV/°C, corresponding to ±50µV. This is already more
than the ±30µV ‘typical’ offset at 25°C
claimed in the data – another reason to
take ‘typical’ values with a grain of salt.
If we continue with the rest of the
analysis in the same way, we arrive
at a variation of about ±0.8% over
the proposed operating temperature
range. Even if we could trim out all
of the 25°C error in software, we are
left with a temperature-dependent
error approaching 1%. We will look
at how we can reduce this in further
instalments.
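The temperature half of the error budget follows exactly the same add-absolute/add-relative rules. The Python sketch below reproduces the Table 4 temperature figures from the data sheet drift numbers quoted above, over a ±25°C span, assuming worst-case accumulation.

```python
# Temperature-dependent error contributions over 0-50°C (±25°C about nominal).
delta_t = 25.0                             # °C either side of 25°C

shunt_drift_rel = 100e-6 * delta_t         # 100ppm/°C shunt -> ±0.25%
offset_drift_v = 2e-6 * delta_t            # 2µV/°C offset drift -> ±50µV
gain_drift_rel = 2 * 100e-6 * delta_t      # two 100ppm/°C gain resistors -> ±0.5%

# Node A (100mV at full load): absolute drifts add.
node_a_drift_v = 0.100 * shunt_drift_rel + offset_drift_v    # ±0.3mV
node_a_drift_rel = node_a_drift_v / 0.100                    # ±0.3%

# Output: relative drifts add when multiplying by the gain.
vout_drift_rel = node_a_drift_rel + gain_drift_rel           # ±0.8%
print(f"Temperature-dependent output error: ±{vout_drift_rel:.2%}")
```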
Optimist or pessimist?
One objection that frequently comes
up when we are summing worst-case
errors in this way is that we are being
overly pessimistic in our design. We
are assuming that errors will accumulate in the worst possible way. For
example, we have assumed that our
gain error is 2%, which would only
be the case when both gain-setting
resistors are at the extremes of their
tolerances and in opposite directions.
If they were both high or low by the
same percentage, this would cancel
out, and the gain would be unaffected.
Is it reasonable to take this pessimistic view? What if our circuit had
10 gain-setting resistors instead of
two? Would it be reasonable to assume
they would all be at their tolerance
extremes in the worst way? There is
no correct answer to the question, but
I can suggest some guidelines.
Uncertainty is a statistical game –
it’s all about probabilities and consequences. If the likelihood of the worst
case occurring is low and its consequences are not severe, it is probably
OK to make some concessions.
But if the probability of an error
occurring is high (eg, if you are making a lot of something), or the consequences of any errors are significant
(dangerous, expensive or embarrassing), a cautious approach is better.
One concession you might choose to
make is to assume that the sources of
error are uncorrelated. In such cases,
it is possible to add errors (absolute or
relative) as the root sum of squares. In
our example of 10 gain-setting resistors, each with a 1% tolerance, we
would come up with a gain error of
±3.1% instead of 10%.
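A quick comparison of the two summation methods for that ten-resistor example, assuming the tolerances really are uncorrelated, looks like this (the square root of ten times 1% is about 3.16%, the ±3.1% figure quoted above):

```python
import math

tolerances = [0.01] * 10                            # ten resistors, ±1% each
worst_case = sum(tolerances)                        # ±10% worst case
rss = math.sqrt(sum(t ** 2 for t in tolerances))    # root sum of squares

print(f"Worst case: ±{worst_case:.1%}   RSS: ±{rss:.2%}")
```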
But I urge caution. The root sum
of squares is just another statistical
tool – it works best when there are a
great many samples in a truly random
and uncorrelated distribution. We do
use this type of summation for noise,
which fits these criteria, as we shall
see in a later article.
Remember that if some resistors
have the same value, they will likely
come from the same batch. In fact, they
will probably have been manufactured
sequentially. So they will very likely
be off by roughly the same amount and
in the same direction. In other words,
the errors won’t be uncorrelated at all!
In some cases, that can help you; eg,
if you’re relying on matched resistor
values. Still, you must examine the
specific circuit to determine whether
correlated errors will help or hurt your
precision.
Summary
At this stage, it has become clear that
our simple circuit is probably not up
to the job of monitoring the current in
our supply if we want anything better
than a couple of percent resolution.
We can trim out the worst of the ±9%
error down to a little better than 1%,
but we will have another 1% or so of
error over the temperature range. This
2% error means a ±20mA uncertainty.
We’ll have to do better next time! SC
Table 4: Error Budget Table for our Application

Line  Item                                                    Nominal   25°C Abs.  25°C Rel.  0-50°C Abs.  0-50°C Rel.
1     Shunt Resistor: Stackpole CSR1225 (1% 100ppm/°C)        100mΩ     –          1.00%      –            0.25%
2     Node A Voltage due to I × R shunt                       100mV     1mV        1.00%      0.25mV       0.25%
3     Op Amp: LM7301 (Vos ±6mV, 2μV/°C)                       0mV       6mV        –          0.05mV       –
4     Node A Voltage total (Line 2 + Line 3)                  100mV     7mV        7.00%      0.3mV        0.30%
5     Op Amp Gain Resistor R1: Yageo RC0805 (1% 100ppm/°C)    1kΩ       –          1.00%      –            0.25%
6     Op Amp Gain Resistor R2: Yageo RC0805 (1% 100ppm/°C)    24kΩ      –          1.00%      –            0.25%
7     Op Amp Gain (R1 + R2) ÷ R1                              25        0.5        2.00%      0.125        0.50%
8     Vout (Line 4 × Line 7)                                  2.5V      0.225V     9.00%      0.02V        0.80%

The "25°C" columns give the absolute and relative errors at the nominal 25°C; the "0-50°C" columns give the additional drift over the 0-50°C range (±25°C about nominal).