- Feb 17
For some reason I did not see it – it must have scrolled off the screen due to the added line feeds I receive. That was me being lazy and not scrolling down – mea culpa.
I agree that the waterfall settings have no influence on the decoding; however, the processing for Flatten will take up CPU, and on marginal systems it may drain CPU resources. Your system, though, should cope OK, I would have thought. I have noticed at times that I get (for example) 4 decodes appearing in JTAlert-X, then a break of a few seconds, after which several others appear – a total elapsed time of around 5 seconds for 12 or so decodes. I assume this is due to a delay in writing the information to the file from which JTAlert gets this data; my logical assumption is that, should this delay stretch into the next TX period, decoding may be affected.
Are you using JT9+JT65 mode or just JT9? And have you tried both?
Is it OK on the sample audio files?
Is the audio source set to DVD quality?
Again, clutching at straws perhaps, but these are possible lines of investigation.
I signed my post with my name and call, so I don't understand the "no name to address you" comment.
I have no shortage of CPU power (2.3 GHz quad-core A10). In any event, to the best of my knowledge, CPU power only influences the speed of the decode, not the probability of a successful decode.
I started with WSJT modes using JT65-HF and then moved to WSJT-X to be able to use JT9. It has always been my understanding that the waterfall display parameters have no influence on decoding, and that the signal received by the decoder is completely independent of any waterfall tailoring. Did something change?