Accuracy Study

A study of accuracy characteristics 
of the Antenna Analyst


February 2007 ...

Something I'd like everyone to keep in mind is that the Micro908 is a $200 hobbyist instrument with an 8-bit processor for the computing engine ... it's not a $10,000 lab-grade measurement device from HP that is spec'd to 0.1% accuracy with several DSPs and a Pentium-grade processor. Nor is it a vector-based instrument such as the TAPR VNA (which also requires a $1500 PC for it to work). It is, however, a scalar-based instrument in the same class as the MFJ and Autek products, which also have issues at higher frequencies (some different, some the same, and *lots* less flexibility compared to the Micro908).

Overall, performance below 30 MHz has been fairly good ("outstanding" would be my actual description, but I'm trying to be objective), both in the days when we were using the DDS-30 signal generator card within, and even more recently (while keeping the operating frequency below 30 MHz). One just cannot beat the features and performance of the Micro908 in the HF frequency range.

However, as many of us have noted, the measurement accuracy falls off when you start getting into the >30 MHz VHF range. In a nutshell, Joe and I think this is due to the layout of the reflectometer circuit, whereby the trace runs and component layout introduce some unexpected impedances as we approach VHF frequencies. When the Micro908 was initially designed, we didn't think that we'd be seeing operation that high, but with the advent of the DDS-60 Daughtercard we saw that we could indeed reach higher than 30 MHz ... but with some unexpected caveats, as we are now seeing.

We thought that the improved, multi-point calibration techniques would work out well across the entire frequency range. They indeed worked well for improving accuracy below 30 MHz, but curiously not so above that mark. It's still a mystery to me how the system can be calibrated and work very well below 30 MHz, yet that same software cannot calibrate as well above it. Which again leads to the belief that the layout introduces various "internal reactances" at varying areas of the reflectometer circuitry at VHF. As someone recently noted, it's the same software and algorithm being used throughout the frequency range, so it's not a software bug that is specific to VHF. Further, I've carefully studied the internal registers and intermediate numbers being generated from the computations, and recently increased the precision of the registers to properly handle the extreme cases of numeric computation. So as one can see from the Analysis/Calculator worksheets with the raw data produced in Debug Mode, the numbers are well within the limitations of the computing engine.

In the past we have had protracted discussions with some of the same guys still here on the reflector concerning the approach to the reflectometer design and computations. There have been a few good ideas that would've required some redesign (such as using a log amp or frequency-dependent Wheatstone bridge equations), but we decided to stay the course and try to flesh out our original design approach for the reflectometer. There have also been a few off-mark approaches (e.g., ones that neglect the phase relationship on the reflected channel) that we let pass by.

That all said, we are indeed actively pursuing design changes to improve the performance above 30 MHz. Fortunately, because of the modular approach of the overall product design, this will be a plug-replacement for the DDS card. I've described this before at a high level, but it involves a careful layout of the reflectometer components, oriented very close to the BNC connector; automatic level control; a 12-bit A/D converter (instead of the 8-bit ADC used today); an onboard math coprocessor; and an option of using log amps in the buffer chain. It'll likely provide a kick-butt improvement, and the prototype card for this is currently being evaluated.

Okay, on to the data analysis that Paul Wilton has been spearheading ...

I've supplied him a spreadsheet containing my specific comparative data sets. It's clear from this comparison that the performance seen by the units on my bench is slightly better than his, but in general the trend is as described thus far: performance falls off above 30 MHz. I'm not sure of the value in displaying raw and corrected component values (Vf, Vr, Va and Vz) and how the levels decrease with frequency, but they generally show the calibration system working. It also shows how the system has greater difficulty calibrating when the load is far away from the nominal 50-ohm bridge impedance. That is, the 274-ohm load is not being calibrated as closely as the 50-ohm load. This is understandable, and in general it indicates the main problematic area, which is a combination of the larger numbers being computed at the measurement extremes (i.e., when far away from the nominal 50-ohm point) and hence the greater effect of round-off and 8-bit computing limitations.

I think a simpler way of showing performance is via the graphical plotting capability of the AA908 Plotter program of Al Gerbens K7SBK. I've taken those same four load conditions represented in Paul's spreadsheet (short, open, 50-ohm, 274-ohm) and plotted the instrument response 1-60 MHz. 

Case 1: 50-ohm Load -- Dead-on accuracy across the entire frequency range. As expected, for the reasons stated above (the load is identical to the inherent characteristic impedance of the reflectometer bridge).


Case 2: Shorted Load -- Dead-on accuracy across the entire frequency range. Minor increase in reactance as frequency increases, but this can actually be calibrated out with a minor addition to the cal process (coming in next version.)


Case 3: Open Load -- This case is fairly useless, as there is no relevant measurement when one is measuring nothing(!) One could argue that antenna impedances approaching infinity would be applicable, but I haven't encountered any of these. Other analyzers essentially block this condition, as we do too for most of the range, and display a relatively innocuous message (e.g., "Z>1K") when the values cannot be computed. However, it is interesting to see how the AA908 responds with an open load, with the reactance decreasing from infinity down to mid-scale, starting (mysteriously) at 30 MHz.


Case 4: 274-ohm Load -- This is the case of most interest for the accuracy discussion. It shows accuracy better than 10% throughout most of the 0-30 MHz range. But above 30 MHz we see the effects of internal reactance kicking in (as described in the discussion above). It's very important to note that one should NEVER expect a perfect line across the frequency spectrum at 274 ohms (at least for a low-cost instrument such as the Micro908). There will always be frequency effects from the loads, BNC connectors, and perhaps even RF fields present in the local area (i.e., BCB interference with an unshielded unit), and the higher one goes in frequency, the more pronounced the effects become. This is natural.

The data used to create these charts are shown in the AA908 Calculator data set that I prepared.

Paul Wilton put together another view and analysis of the data in his own spreadsheet, to which I added data from my Micro908 for comparison. You can download this at the M1CNK Raw Analysis Spreadsheet.

Overall, I am happy with the performance of the AA908 at HF frequencies (below 30 MHz), but I too am not satisfied with performance above 30 MHz. My iterations with many of you here on the reflector, pointing out possible areas for you all to check in your units, are not meant to disavow that this troublesome area above 30 MHz exists, but merely to state that if you do *not* have a properly-adjusted DDS, a properly-constructed Micro908 and dummy loads, a properly-performed calibration, et al., these conditions will be present even more so, and will likely extend down even below 30 MHz. But if you have taken care in the construction, adjustment, and operation of the unit, your performance will be comparable to what I have shown and to what Paul has described.

If you are looking to measure resistors down to the 1-ohm level, this is not the instrument for you. Likewise, if you are looking for absolute value readings that are 1% accurate, this too is not the unit for you.

However, if you are looking for a flexible, reprogrammable unit with numerous antenna measurement features that show good results below 30 MHz and acceptable comparative results above that (i.e., showing SWR plots, general complex impedance trends, and day-to-day changes in your antenna), this would serve the purpose well.

We are actively working to get past this low-VHF measurement problem, as we next intend to take the design up to 2m use. With the approach described (a new reflectometer module layout, greater bit depth, greater computing capability, and ALC), we feel we're on the right path. Time will tell, and it shouldn't be too much longer now before we see results.

I'm very eager to work with each of you who have ideas or questions. As you can see I'm being entirely open with the project, which has always been the case with this open source / open hardware / not-for-profit product. You do not see much (any?) of this kind of product on the market these days, but Joe and I feel this would be of more interest and value to the technical community.


November 2005 ...

Some Micro908 Antenna Analyst owners have been studying the slight measurement drifting and accuracy characteristics of the instrument.  The drift is currently attributed to the heat generated by the DDS card affecting the reflectometer diodes, and the accuracy is somewhat affected by the limitations of 8-bit math computations and the inherent "uncertainty" of needing to simultaneously measure phase and magnitude of the reflectometer signals.  

Fred DeRoos, WA0GMH, describes the typical scenario experienced by a number of AA-908 users ...

I've completed the assembly of my Micro908 and it works, but I have a couple of questions about its calibration. When I calibrate and then check the resistors used for calibration, the 50-ohm resistor is within 1-3% but the 274-ohm resistor can be off by as much as 10-30%. For example, it sometimes reads in the 305-330 ohm range, with those two values toggling back and forth. If I repeat the calibration multiple times, I can get it close to an SWR of 5.4 and a resistance reading in the 270 range. Also, during calibration it is sometimes not possible to get an SWR of 5.4 during the adjustment using the 274-ohm resistor; it goes from 5.3 to 5.5 without stopping at 5.4. Usually if I repeat the calibration enough times, I can get the 5.4 value.

If I turn it off and then try it again after a couple of hours, the 50-ohm resistor is within 1 ohm, but the 274-ohm resistor reads an SWR of 4.9 and the resistance may read as high as 340 ohms. As the unit warms up, the indicated SWR and resistance get closer to what they read immediately after calibration, but usually no closer than 305 ohms, with some jumps down to the 270-ohm range. The SWR also jumps 0.1 units as the resistance toggles between the two values. When it does this, there are no intermediate values, just 305, 277, 305, 277, etc.

I built a 150-ohm resistor (1%) into a BNC and it reads close most of the time. The SWR may change between 3.0 and 3.1 and the resistance may vary from 164 to 166 ohms (the DC resistance measures 150 with a Fluke 87). And the SWR and resistance don't seem to change as the Micro908 warms up.

After adjusting the RF level (2k pot on the DDS granddaughter board), the first three values read during calibration are in the Cx range. The fourth value seems a little high at 09 or 0A, as the manual suggests a typical value of 04.

Any ideas of where I should start looking to make the calibration more stable and reproducible? 

From George N2APB ...

Here are the equations used to compute the instrument results. They are all directly related to the static DC measurements of the four main voltages coming from the reflectometer: Vf, Vr, Vz and Va. These signals correspond to the four "legs" of the classic Wheatstone bridge reflectometer and are what is necessary to compute SWR, R and X without performing heavy-duty exponentiated math or trigonometry.

1) SWR calculation:  SWR = (Vf+Vr) / (Vf-Vr)

2) Impedance calculation:  Z = 50 (Vz_acc/Va_acc)

3) Resistance calculation:
           (2500 + Z^2) * SWR
     R = ----------------------
            50 * (SWR^2 + 1)
4) Reactance calculation:  X = SQRT ( Z_squared - R_squared )

5) Inductance calculation: L= ((X*10)*10000)/((63)*F)

6) Capacitance calculation: C= 1/(6.28*F*X)
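As a sketch only, equations 1 through 4 can be written out in floating point (the firmware itself does this in scaled 32-bit fixed-point math in calcs.asm; the function name and the bench voltages below are hypothetical, chosen to represent a purely resistive 274-ohm load):

```python
import math

def analyze(vf, vr, vz, va):
    """Compute SWR, |Z|, R and X from the four reflectometer voltages,
    per the scalar bridge equations above (floating point for clarity;
    the Micro908 firmware uses scaled 32-bit fixed-point integer math)."""
    swr = (vf + vr) / (vf - vr)                           # 1) SWR
    z = 50.0 * (vz / va)                                  # 2) impedance magnitude
    r = (2500.0 + z**2) * swr / (50.0 * (swr**2 + 1.0))   # 3) resistance
    x = math.sqrt(max(z**2 - r**2, 0.0))                  # 4) reactance (guarded against round-off)
    return swr, z, r, x

# Purely resistive 274-ohm load: Vr/Vf = (274-50)/(274+50), Vz/Va = 274/50
swr, z, r, x = analyze(vf=324.0, vr=224.0, vz=274.0, va=50.0)
print(round(swr, 2), round(r, 1), round(x, 1))   # 5.48 274.0 0.0
```

Note that for a purely resistive load the equations close the loop exactly: SWR comes out at 274/50 = 5.48, R returns the 274-ohm value, and the reactance is zero.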

In essence, my conclusion on the source of the errors is that they stem entirely from the limitations of the 8-bit original data values for Vf, Vr, Vz and Va (which are immediately turned into 32-bit fixed-point numbers; you can see in the software how I use a 4-byte word for each). If you look at the execution of the equations in the software (see the calcs.asm file), some of the denominators get large compared to the numerators, and I need to do all sorts of scaling so that the result of any given operation retains an appropriate number of significant digits, thus making the given fixed-point computation actually work while allowing the result to be used in the *next* computation. This "scaling to accommodate the bit lengths" of the numbers is at the root of the instrument's limitations, when coupled with the naturally occurring small numbers at resonance and other math extremes.
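To illustrate the 8-bit limitation numerically, here is a small sketch (hypothetical counts, assuming a full scale of 255) showing how a single ADC count moves the computed SWR far more at a 274-ohm load than at a 50-ohm load:

```python
def swr_from_counts(vf, vr):
    """SWR from 8-bit forward/reflected ADC counts (0-255)."""
    return (vf + vr) / (vf - vr)

vf = 255  # forward voltage pinned at full scale

# Near a 50-ohm load the reflected count is ~0; effect of one LSB of error:
d_low = swr_from_counts(vf, 1) - swr_from_counts(vf, 0)

# Near a 274-ohm load, Vr/Vf = 224/324, so Vr quantizes to ~176 counts:
d_high = swr_from_counts(vf, 177) - swr_from_counts(vf, 176)

print(round(d_low, 3), round(d_high, 3))  # one LSB moves SWR ~10x more at 274 ohms
```

This is one concrete face of the "measurement extremes" problem: the farther the load sits from 50 ohms, the more each quantization step is magnified in the result.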

Can something be done about these limitations? I'm sure this is possible ... perhaps by going to trig/expo math as Tuck has suggested, or by using a floating-point math library, or by using a 10- or 12-bit A/D converter for reflectometer signal measurement. Any or all of these might well be my next step, but currently I'm combing through the equations to see if there is a better way to order the internal steps and/or scale the small numbers so as to reduce the occurrence of gross remainders.

Tuck Choy, M0TCC describes the problem in terms of complex reflection & transmission coefficients ...

First, I am not surprised at the struggle you have had with the numerical problems in the software to get any accuracy at all, because the problem is a fundamental one and has nothing to do with the math; it's the physics. Your impedance equations are identical in essence with mine, though for reasons I have mentioned previously I prefer to use rho and tau, the complex reflection and transmission coefficients respectively.

It is important to go back to the basics and just take voltmeter DMM readings and that is what I have done. We can massage things later within the software when we have understood what is happening.

The reflection coefficient I shall define as: rho = |rho| exp(j phase angle).  Analogously the transmission coefficient I shall define as tau where  tau=1+rho.  The maths can relate these to R and X if necessary, so I stay with these as they are more directly related to the measured voltages.

The AA908 effectively measures two numbers, |rho| = 2VR/VF and |tau| = 2VZ/VF; these are indeed DC measurements, as you said, hence the magnitude symbols I have used. The measurement of VA in my opinion is redundant; nevertheless it can provide a good check on diode drift, op-amp offset, etc., using VF = VZ + VA (for non-reactive loads only). If you just check the voltages in the box you will find that VF = VZ + VA holds to within 1 or 2%, as it should, hence in principle measuring VA is redundant.
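That VF = VZ + VA consistency check is easy to automate. A minimal sketch, with hypothetical bench voltages (the function name and the 2% tolerance are illustrative, not from the firmware):

```python
def va_redundancy_check(vf, vz, va, tol=0.02):
    """For a non-reactive load, VF = VZ + VA should hold to within 1-2%;
    a larger residual hints at diode drift or op-amp offset."""
    residual = abs(vf - (vz + va)) / vf
    return residual, residual <= tol

# Hypothetical DMM readings (volts) taken in-box with a resistive load:
residual, ok = va_redundancy_check(vf=2.00, vz=1.69, va=0.32)
print(round(residual * 100, 1), ok)  # residual in percent, pass/fail
```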

I have mentioned before that the Wheatstone bridge balance gives a good 1% value for |rho|. This is to be expected, as its accuracy should only depend on the bridge resistances. Measured errors in |tau| are also not bad, about 2-3%; I think they come from DMM voltmeter inaccuracies. The maths for determining impedance is essentially based on calculating the phase angle from |tau|:

Cos (phase angle) = (|tau|^2 - 1 - |rho|^2) / (2 |rho|)                                                     (1)

Once |rho| and its phase angle are determined the job is done, everything else can be converted from these data.

To give a concrete example, here are my in-box measurements for the 274 ohm load.

|rho| = 0.691 and |tau| = 1.663 on one occasion, and |rho| = 0.699 and |tau| = 1.661 on another, versus the theoretical values of |rho| = 0.691 and |tau| = 1.691. In both cases we get a phase error from Eq (1) above: Cos (phase angle) = 0.932 for |tau| = 1.663, and Cos (phase angle) = 0.909 for |tau| = 1.661. The exact value for a resistive load is of course Cos (phase angle) = 1, or phase angle = zero. In other words, about a 10% error. Why do such small measurement errors in |rho| and |tau| become magnified into large errors in the phase angle? This is the essence of what I have called the uncertainty principle, and it is a general problem of all electrical measurements that try to determine both the amplitude and phase simultaneously in the same circuit configuration.
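Plugging the bench numbers above straight into Eq (1) reproduces the quoted figures (a sketch only; the exact value for a resistive load would be cos = 1):

```python
def cos_phase(rho, tau):
    """Eq (1): cos(phase angle) = (|tau|^2 - 1 - |rho|^2) / (2 |rho|)."""
    return (tau**2 - 1.0 - rho**2) / (2.0 * rho)

# Measured 274-ohm load values from the two bench runs quoted above:
print(round(cos_phase(0.691, 1.663), 3))   # 0.932
print(round(cos_phase(0.699, 1.661), 3))   # 0.909
```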

For the more technically minded, the error analysis of (1) proceeds easily by a high-school technique whose details I shall not go into here. Let y = Cos (phase angle). If you take the log of Eq (1) and then differentiate, you will find that

Fractional error in y = ( |tau|^2 / (|rho| * y) ) * fractional error in |tau|

For large loads, as long as they are not too reactive (i.e. y stays close to 1), the maximum error in y could be up to 8 times 2%, i.e. 16% (the maximum value of |tau| is 2 and that of |rho| is 1 for large imbalances). This is at the heart of why resistive measurements of large R give large errors.
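As a numeric sanity check on that error formula, the 274-ohm figures from earlier can be run through it (a sketch; the variable names are illustrative, and the theoretical values tau = 1.691, rho = 0.691, y = 1 are taken from the text):

```python
def cos_phase(rho, tau):
    """Eq (1): cos(phase angle) = (|tau|^2 - 1 - |rho|^2) / (2 |rho|)."""
    return (tau**2 - 1.0 - rho**2) / (2.0 * rho)

rho, tau_true, y_true = 0.691, 1.691, 1.0   # theoretical 274-ohm load values
tau_meas = 1.663                            # one of the measured values quoted above

magnification = tau_true**2 / (rho * y_true)     # |tau|^2 / (|rho| y), about 4.1 here
frac_err_tau = (tau_true - tau_meas) / tau_true  # ~1.7% measurement error in |tau|
predicted = magnification * frac_err_tau         # predicted fractional error in y
observed = (y_true - cos_phase(rho, tau_meas)) / y_true

print(round(predicted, 3), round(observed, 3))   # the two agree closely
```

So for this load a ~1.7% error in |tau| is magnified roughly four-fold into a ~7% error in y, matching what Eq (1) actually produces.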

However, there is going to be an even bigger problem when the reactance component becomes large, i.e. y --> 0, or a phase angle close to 90 degrees. The correction factor diverges. It is here that even more care needs to be exercised in impedance measurements. The challenge is either to devise an accurate calibration procedure within the software to compensate for the inherent errors I mentioned, or to devise cleverer measurement schemes subject to the limitations of the present setup. Mind you, even the most sophisticated schemes are not exempt from the above uncertainty principle! I think you have all done a great job, but the above fundamental limitations are against you with the present setup.

Joe N2CX points out detector linearity in our simple (inexpensive) instrument ...

I'm not certain that the issue of detector linearity has been discussed. The compensation scheme is relatively simple and thus imperfect, with a detection transfer function that has ripple at various amplitudes. To some extent this may be the reason that slight adjustment of the DDS output level results in easier calibration. In fact, the clever multipoint compensation that George worked out is an attempt to smooth out some of the linearity errors.

As far as the perceived temperature drift, it can indeed be caused by differential temperature variation between the diodes in a given detector and compensation circuit. We did not go to the effort of using multiple-diode arrays to circumvent this issue, as that would have required a more complicated pc layout and relatively uncommon components.

Certainly more work can be done in the detector/compensation area to gain improved performance. At the moment it is more or less on the back burner but could certainly improve measurement accuracy. 

George, N2APB presents an interesting approach to gain better stability ...

Fred DeRoos has been doing some informal temperature experiments and he surmised that the heat generated by the DDS Daughtercard affects the reflectometer diodes and thus is a major contributor to the calibration/measurement drift condition that we've been discussing. This hot DDS card exacerbates the condition because it is located directly over the reflectometer components, and when used in normal operating position, the heat from it rises to directly encounter those diodes.

I thought it would be an interesting test to locate the DDS card *outside* of the box and pump the signal in, thus eliminating the heat generating module from the enclosure altogether. Curiously, this is the approach I'm also taking with the SDR-908 project, whereby I extend the DDS card connector upwards through a thin rectangular hole in the back plastic case in order to plug in a dual-DDS card, thus gaining control of it from the HC908 connections on pins 1-3 but using the DDS signal externally with the IQ subsystem. 

So in the case of the AA-908, I mounted the DDS Daughtercard in the same fashion and just pumped the RF back into the same connector. Now the only difference in operation is that the heat-generating module is on the *outside* of the unit and quite isolated from the temperature-sensitive reflectometer diodes and/or the temperature-sensitive op amp (offsets). It works the same with the DDS plugged in in this manner, and if I wanted to keep it as such, placing a small black plastic "cover" over the exposed DDS card would be quite easy. (Which is just what we do in the SDR-908 version of the instrument with the external dual-DDS card and IQ sampling circuitry.)
The photos below show how the external mounting was done.


I indeed observed more stable and drift-free operation, as compared to the condition when the DDS card was mounted inside the enclosure. Is this enough improvement to warrant keeping it on the outside permanently? No, because I can live quite nicely with keeping the unit on for 15 minutes to let it warm up inside if I want accurate, drift-free measurements. Frankly, most of the time I just need a quick indication of where the SWR dip is, or where the resonant point is in my tribander. But if I do need decimal SWR accuracy, or resistance and reactance accuracy to within 9 ohms, letting the unit warm up before use is certainly an easy and more-than-acceptable alternative.




Page last updated:  February 26, 2007