- Hello Wolf,

I think we are talking past one another. Granted, you don't need anything to heterodyne an incoming I&Q stream -- if you already have one. What I was talking about was a real input stream, converted to complex form by multiplication with a complex exponential. That generates spurious images of the original spectrum.

Sorry for any misunderstanding...

Dr. David McClain
Chief Technical Officer
Refined Audiometrics Laboratory
4391 N. Camino Ferreo
Tucson, AZ 85750
email: dbm@...
phone: 1.520.390.3995

On Oct 19, 2010, at 14:48, wolf_dl4yhf wrote:
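[For what it's worth, David's distinction can be illustrated numerically. The following is a small numpy sketch, not from either author; the tone and LO frequencies are arbitrary picks, chosen to land on exact FFT bins so the spectra are clean.]

```python
import numpy as np

fs, N = 48_000, 4096
t = np.arange(N) / fs
f0, f_lo = 3000.0, 1500.0               # tone and shift; both land on exact FFT bins

real_in = np.cos(2 * np.pi * f0 * t)    # real input: lines at +/- 3000 Hz
iq_in = np.exp(2j * np.pi * f0 * t)     # true I/Q input: a single line at +3000 Hz
lo = np.exp(-2j * np.pi * f_lo * t)     # complex LO, shifting everything down 1500 Hz

def lines(x, thresh=1e-6):
    """Return the frequencies where the spectrum has significant energy."""
    s = np.abs(np.fft.fft(x)) / len(x)
    f = np.fft.fftfreq(len(x), 1 / fs)
    return sorted(f[s > thresh])

print(lines(real_in * lo))   # [-4500.0, 1500.0] -> wanted line plus spurious image
print(lines(iq_in * lo))     # [1500.0]          -> no image
```

Multiplying the real tone by the LO shifts both of its spectral lines (at +3000 and -3000 Hz), so a spurious image appears at -4500 Hz; the genuine I/Q tone has only one line to shift.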
Hello David,

Ok - well, I am still very convinced you don't need a Hilbert transformer to decimate an I / Q stream or baseband signal. Doing the complex multiplication / complex decimation / complex FFT without a Hilbert transform is not a compromise; the process simply doesn't require one.

Now if you just did a poor man's Hilbert transform in the frequency domain, by chopping off the negative frequencies entirely, that still wouldn't completely take care of the bleed-over from the negative image of a signal near DC.

IMO there is no problem with the negative image, because in an I / Q signal the image (aka unwanted sideband) doesn't fold back into the same frequency range as the 'wanted' sideband, which it would do if the decimator chain used real numbers only.

You can easily see the effect in Spectrum Lab: Use the test signal generator to sweep a frequency-modulated sinewave slowly from DC to f_sample/2, and use the spectrum analyser / FFT with complex frequency shift and decimation. You will not see any signals which shouldn't be there -- no images, artefacts, etc., not even 100 dB below the level of the wanted signal. And since the system is linear, this doesn't depend on the waveform, sidebands, etc.

Of course (I think you asked this in a previous message) the anti-aliasing lowpasses in the decimator stages do have a transition (slope) between passband and stopband -- not too steep, because they are FIR filters of limited length. But, and this is the only "trick": if the spectrum analyser uses the decimation to increase the effective FFT length / frequency resolution, only half of the frequency bins (from the complex FFT) are displayed, so you will notice neither the filter slope nor the artefacts folded back at the (positive and negative) Nyquist frequencies.
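[One such decimator stage can be sketched as follows -- a hypothetical numpy example; the tap count and windowed-sinc design are assumptions for illustration, not Spectrum Lab's actual coefficients.]

```python
import numpy as np

def halfband_decimate(z, taps=63):
    """One decimate-by-2 stage for a complex (I/Q) stream:
    windowed-sinc lowpass with cutoff at fs/4, then keep every 2nd sample."""
    k = np.arange(taps) - (taps - 1) / 2
    h = 0.5 * np.sinc(k / 2) * np.hamming(taps)   # ideal fs/4 lowpass, windowed
    h /= h.sum()                                  # normalise DC gain to 1
    return np.convolve(z, h, mode='same')[::2]

# A tone well inside the passband survives; a tone near the old Nyquist
# frequency (which would otherwise alias) lands deep in the stopband:
n = np.arange(4000)
low = np.exp(2j * np.pi * 0.05 * n)    # at 0.05 * fs
high = np.exp(2j * np.pi * 0.45 * n)   # at 0.45 * fs
print(np.abs(halfband_decimate(low))[100:-100].mean())   # ~1.0
print(np.abs(halfband_decimate(high))[100:-100].max())   # small (stopband)
```

The transition band of this short FIR filter is exactly the "slope" discussed above; it sits in the half of the spectrum that the analyser never displays.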

So much about this topic from my side - back to work.

All the best,

Wolfgang .

- Hello Wolf,

Very interesting background you have -- very similar to mine: embedded systems. I use mostly Lisp and other FPLs here, even on the embedded platforms. Very interactive and dynamically extensible, so you get to try ad-hoc things as needs come up. Which they always do...

I've been thinking hard about how to make precise RF frequency measurements -- down to micro-Hz when possible. I have been looking at alternatives. SpectrumLab's approach is one of those alternatives. Correct me if I'm wrong, but you do the lowpass filtering in the frequency domain by just chopping the high-frequency bins? Then you have to convert back to the time domain by inverse FFT for the downsampling, so that you can accumulate the number of samples needed for the actual analysis FFT?

BTW: I found this paper, with some good points on the boundary conditions for doing this...

The other alternative, which involves no heterodyning (in the conventional sense) and needs no sound card calibration, is as follows:

1. Take in the raw 48 kHz audio stream coming out of a receiver. The receiver is operated in AM detection mode so that we are completely insensitive to LO wander and inaccuracies. Inside the passband, at the low end, inject a precise carrier from a Rb oscillator or such, to heterodyne with the unknown carrier. I like 1500 Hz as a working region since it is in the middle of the passband of most radios. Also other reasons, below... So I set my precise carrier at 1510 Hz below the signal of interest.

I also AM-modulate my reference carrier with a 1512.5 Hz tone to insert a pilot line right next to the signal of interest in the waterfall. That modulation tone is also locked to the Rb source.

Now I just record the sounds coming through the receiver bandpass, and watch on SpectrumLab's waterfall display. Very nice!!!

2. For post-processing of the recorded sounds, I send the audio through an IIR bandpass filter whose Q increases with the desired decimation ratio.
For example, to decimate by 1024, use a Q of 100. This produces a noise field in what follows whose floor in the final passband is equal to or less than that achieved by the SpectrumLab approach. It requires about 6 multiply+adds per incoming sample.

3. Then I decimate by tossing out samples as they come along: for 1024 decimation, take the first sample and toss the following 1023 of them. Repeat until finished. These samples are fed into the analysis FFT. This is aliasing by design, as mentioned previously.

For a 1500 Hz center and a 48 kHz sample rate, any multiple of 32 as the decimation ratio aliases the 1500 Hz band down to DC. For the example of 1024 decimation, we end up with a bandwidth a bit over 20 Hz wide. My 1510 Hz signal and my 1512.5 Hz reference tone are now down at 10 Hz and 12.5 Hz.

4. I absolutely know the reference tone is at 12.500 000 Hz or better. And I can find which FFT bin holds its peak. It has superb SNR, so peak interpolation in the complex FFT works very well here. I really don't care what the sound card sample rate actually was; in fact they all drift, some rather badly.

Use a Kalman Filter to track that reference interpolated peak, and that gives us the scale in the frequency domain at each FFT instant. And now I can find the peak of the poorer-SNR signal of interest near 10 Hz and, using that scale, determine its more-or-less exact offset around 10 Hz.

Since that 10 Hz offset was originally 1510 Hz above my precise injected carrier, I now know its frequency very precisely, having performed only one FFT on the incoming data per scan line in the waterfall.

My old SIGINT buddies would have a cow hearing that I'm using an IIR bandpass filter to limit the aliased noise components. In that world, it is absolute gospel to always use a Hilbert transform and phase-linear FIR filters.
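[The aliasing-by-design arithmetic in step 3 is easy to check numerically. A sketch using the numbers above -- the IIR bandpass that limits the aliased noise is deliberately omitted here, since the two clean test tones don't need it:]

```python
import numpy as np

fs = 48_000
N = 1 << 20                          # ~21.8 s of input -> 1024 output samples
t = np.arange(N) / fs
# the "unknown" beat at 1510 Hz plus the Rb-locked pilot at 1512.5 Hz
x = np.cos(2 * np.pi * 1510.0 * t) + np.cos(2 * np.pi * 1512.5 * t)

D = 1024                             # decimate: keep 1 sample, toss the next 1023
y = x[::D]
fs_d = fs / D                        # 46.875 Hz; 1500 Hz is an exact multiple
                                     # (32x), so that band aliases down to DC
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / fs_d)
peaks = freqs[np.argsort(spec)[-2:]]           # the two strongest bins
print(sorted(round(float(p), 1) for p in peaks))   # [10.0, 12.5]
```

Both tones land where the text says: 1510 Hz at 10 Hz, and the 1512.5 Hz pilot at 12.5 Hz, in a roughly 23 Hz wide baseband.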
(Hence my predisposition toward HTs, sorry...)

By my count, a decimation ratio of 1024 requires on the order of 23 million operations if you take incoming samples in batches of 2^20, perform an FFT to filter and decimate, and produce a collection of analytic-signal output samples with another 1024-point inverse FFT, yielding 1024 output samples for the analysis FFT. I would expect a loss of around 2 bits due to roundoff errors -- not significant here.

Alternatively, you could do 1024 batches of 1024-point FFTs to produce 1 output value per FFT. That costs around 10 million operations.

If you forgo the heterodyning by multiplication and simply shift FFT bins, you can save an additional 2 million multiplies. By using the pilot tone, we don't really need to care about exact heterodyning frequencies and exact sample rates.

For the IIR filtering and decimation, that same 1 million input points requires about 6 million multiply+adds, plus the time-domain decimation.

But despite this apparent economy, FFTs have been honed to perfection lately, so you might well find them to be faster overall.

I plan on doing both approaches so one can be compared against the other as a cross-check and quality assessment.

[ BTW... a Kalman Filter tracking a waterfall line would be really nice to have in SpectrumLab -- for when a line encounters some wild Doppler shifting and seems to disappear in a cloud, only to reappear later on in the waterfall. ]

Cheers,

Dr. David McClain
Chief Technical Officer
Refined Audiometrics Laboratory
4391 N. Camino Ferreo
Tucson, AZ 85750
email: dbm@...
phone: 1.520.390.3995

On Oct 20, 2010, at 00:02, wolf_dl4yhf wrote:
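[Incidentally, the way the pilot tone removes the dependence on the exact sample rate reduces to simple arithmetic. A sketch with a made-up sample-rate error -- all numbers hypothetical except the 1500 / 1510 / 1512.5 Hz values from the scheme above:]

```python
# Assume the sound card's true rate is 48 kHz * (1 + eps), so every
# apparent frequency reads low by the same factor 1 / (1 + eps).
eps = 2e-5                  # hypothetical +20 ppm sample-rate error
f_sig_true = 1510.003       # the unknown tone (audio Hz) we want to recover
f_pilot_true = 1512.5       # Rb-locked pilot, known exactly

# Apparent offsets after the 1500 Hz band is aliased to DC (step 3).
# The tone and the 32 * fs/1024 alias shift scale by the same rate error:
m_sig = f_sig_true / (1 + eps) - 1500
m_pilot = f_pilot_true / (1 + eps) - 1500

# The pilot's known frequency gives the scale, which un-distorts the unknown:
scale = f_pilot_true / (m_pilot + 1500)     # equals exactly (1 + eps)
f_sig_est = (m_sig + 1500) * scale
print(round(f_sig_est, 6))                  # 1510.003
```

The sound card error cancels completely, which is why neither the exact heterodyne frequency nor the exact sample rate needs to be known.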
Hello David,

It was a bit late at night on this side of the pond, so I didn't find a more convincing example... but now I did, see below. Though I just saw you are already convinced.

you wrote:

You are making the assumption that the user will decimate to the extent that the spurious image will be thrown away. But, for example, a decimation of only 4 on a 1500 Hz signal sampled at 48 kHz and shifted by 1500 Hz will not discard that spurious image. Images exist at DC and at 3 kHz. The Nyquist limit in this case will be 6 kHz, thereby including the spurious image.

The user doesn't have to care about this, because the spectrum analyser will limit the displayed frequency range automatically, as dictated by the decimation ratio. And each decimator chooses its own FIR filter coefficients, depending on whether it decimates by two or by three.

So, now that we understand your working assumptions, we can proceed with proper knowledge for how the system is to be used for real signals.

BTW, I don't know your background, (..)

I am not a DSP guru myself, just an engineer in electronics and software development. But that software is mostly for automation, 'deeply' embedded stuff (microcontrollers etc), no digital signal processing. And the university maths has been rusting for over 15 years now ;-)

I'd like to see more data analysis tools provided, especially for the chart recorder -- running averages, variances, additional filters, etc.

Averages are possible already, even though in a very limited way. I don't have too much spare time to invest in this program (I mostly implement what I need for myself, plus a few extras for other hams, scientists, and researchers every now and then).

Finally, here is one more example for the sin / cos - multiply, and complex decimating principle:

It's the digital downconverter (DDC), which uses a frontend very similar to the direct conversion receiver already mentioned earlier. The rest of the circuit is usually a chain of complex decimators (at least in DDCs implemented in silicon, with CIC filters in the first stages -- something I haven't tried in Spectrum Lab yet, but CIC filters in the first decimator stages might reduce the CPU load):

http://en.wikipedia.org/wiki/Digital_down_converter
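[For what it's worth, the CIC idea can be sketched in a few lines -- a hypothetical example, not Spectrum Lab or silicon code; real hardware uses fixed-point integrators with modular wraparound rather than floats:]

```python
import numpy as np

def cic_decimate(x, R, stages=3):
    """Multiplier-free CIC decimator: 'stages' integrators at the input
    rate, downsample by R, then 'stages' combs (differentiators) at the
    output rate. Equivalent to cascaded length-R moving sums, DC gain R**stages."""
    y = np.asarray(x, dtype=np.float64)
    for _ in range(stages):
        y = np.cumsum(y)               # integrator (runs at the high rate)
    y = y[R - 1::R]                    # decimate by R
    for _ in range(stages):
        y = np.diff(y, prepend=0.0)    # comb (runs at the low rate)
    return y

# DC gain check: a constant input of 1.0 settles to R**stages
out = cic_decimate(np.ones(64), R=8, stages=2)
print(out[-1])   # 64.0
```

The appeal for the first decimator stages is that there are no multiplies at all at the high input rate -- only additions and subtractions.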

All the best,

Wolfgang .