
Re: Memory ARQ implemented?

  • Robert
    Message 1 of 4, Aug 6, 2011
      Rick,

      Thanks for the good explanation. I am also a WINMOR and PACTOR III user and have been impressed at how many times the memory ARQ in those protocols (also using Viterbi encoding, I believe) seems to have saved the QSO when all else failed. It will eventually be interesting to see whether memory ARQ adds value as V4 evolves to be robust in other areas.

      Bob N7ZO
      Just upgraded to 0.4.4.0


      --- In V4Protocol@yahoogroups.com, "Rick Muething" <rmuething@...> wrote:
      >
      > Bob,
      >
      > At one time I had that in there and it wasn’t that effective (almost always it would get a good decode on just the Viterbi FEC decode before scoring a good decode on the memory ARQ). I ran lots of tests (typically 1000 frames at –5 dB S/N, multipath poor). There have been a lot of changes since then, so I should go back and revisit that again. The memory ARQ is “soft”, using the soft symbol values (before binary quantization) prior to the Viterbi decoder, which gives the best performance, but it showed no significant improvement during my earlier tests. It may be due in part to the interleaving used in the Viterbi processing, which tends to spread errors widely through the frame prior to decoding with the Viterbi decoder. Viterbi works best with randomly spread errors vs. blocks of errors.
      >
      > Rick KN6KB
      >
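
      A minimal sketch of the soft memory ARQ Rick describes (illustrative Python, not the V4 source; the function name and data layout are assumptions): soft symbol values from repeated transmissions of the same frame are averaged before hard quantization and soft-decision Viterbi decoding, so noise that is uncorrelated between repeats tends to cancel while the transmitted symbols reinforce.

          import numpy as np

          def memory_arq_average(history, new_soft_symbols):
              # Accumulate soft symbols from repeated transmissions of one frame.
              # history: list of complex numpy arrays, one entry per failed attempt.
              history.append(np.asarray(new_soft_symbols))
              # Uncorrelated noise tends toward zero across repeats, while the
              # transmitted symbol values add coherently.
              return np.mean(np.stack(history), axis=0)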
  • Rick Muething
    Message 2 of 4, Aug 7, 2011
        Bob,
        Your question got me to thinking and looking back at my previous MEMARQ tests on V4.  I was using a fairly basic averaging algorithm that simply averaged the soft symbols upon each failed CRC check.  In hindsight I think there is a much better approach.
         
        With each received frame a “score” of 0-100 is calculated.  That score basically measures the average symbol quality (how closely the symbols “fit” onto the 4 perfect “corners”).  This score is shown on the constellation diagram with each received data frame (whether correctly decoded or not).
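
        As a rough illustration of such a score (the exact V4 formula is not given here; the unit-energy constellation and the 0-100 mapping below are assumptions), each received soft symbol can be measured against the nearest of the four ideal corners:

            import numpy as np

            # Assumed unit-energy 4-point constellation ("corners").
            IDEAL = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

            def frame_score(soft_symbols):
                sym = np.asarray(soft_symbols)
                # Distance from each received symbol to its closest corner.
                d = np.min(np.abs(sym[:, None] - IDEAL[None, :]), axis=1)
                # Map the mean distance into 0-100 (100 = perfect fit).
                return float(np.clip(100.0 * (1.0 - np.mean(d)), 0.0, 100.0))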
         
        The idea is to use that score to determine when/if to average the received (but failed-CRC) soft symbols into the memory ARQ.  Using the score will ensure the average is always improving in score and will discard any received frame that won’t improve the memory ARQ average score; a sketch of that gating follows.
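
        A minimal sketch of that gating logic (hypothetical names, reusing the frame_score helper above): a failed-CRC frame is folded into the running average only when the blended result scores higher than the current average.

            def try_memory_arq(avg_soft, avg_score, new_soft, crc_ok):
                # Per received frame:
                #   avg_soft, avg_score = try_memory_arq(avg_soft, avg_score,
                #                                        soft_symbols, crc_passed)
                if crc_ok:
                    return None, None            # frame decoded; reset accumulator
                if avg_soft is None:
                    return new_soft, frame_score(new_soft)
                # Simple pairwise blend; a weighted running mean is another option.
                candidate = (avg_soft + np.asarray(new_soft)) / 2.0
                cand_score = frame_score(candidate)
                if cand_score > avg_score:
                    return candidate, cand_score  # averaging improved quality
                return avg_soft, avg_score        # discard frames that would not help

        Gating on the score keeps one very noisy repeat from degrading an accumulated average that is already close to decoding.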
         
        It will take a while and some extensive testing (long sessions over the HF Simulator on poor channels) to see how effective it is.  It certainly won’t hurt and I should be able to gather some statistics to determine how useful it is.
         
        Thanks again for the feedback.
         
        Rick KN6KB
         