Re: [VLF_Group] Re: Large Radio

  • JimNLori
    Message 1 of 24 , Apr 1 7:39 AM
      In my original post, I was thinking it would be advantageous to bring the
      VLF signals into phase from a number of separated receivers. (Is it true
      that the 60 Hz signal will be in phase everywhere on the power grid? - I am
      assuming it is.)

      The local 60 Hz harmonics should be different in the different
      locations, so those would be diminished by adding the signals
      together. Any other local noise would likewise be diminished. If the
      VLF signals are brought into phase, then due to the distance
      separation of the receivers, the 60 Hz signal will be out of phase.
      In fact, the further the separation, the greater the diminution of
      the 60 Hz signal. If 60 Hz is filtered at each receiver, then the
      signal should get a good boost relative to the noise.

      jd


      ----- Original Message -----
      From: <pan@...>
      To: <VLF_Group@yahoogroups.com>
      Sent: Friday, March 24, 2006 11:53 AM
      Subject: [VLF_Group] Re: Large Radio


      > Michael Neverdosky, N6CHV, wrote:
      >
      > > I just did a very quick, rough, (top of the head)
      > > calculation and find that a separation of about 91
      > > miles between VLF antennas should give similar
      > > stereo separation to our ears listening to sound.
      >
      > I did a similar estimate once, and got a similar
      > answer.
      >
      > The data timestamping wouldn't have to be mega accurate,
      > because the software that combined the signals could be
      > made to latch onto, say, alpha signals, or the mains hum
      > signal, and use these to do a fine adjustment of the time
      > bases. It would make an interesting software project and
      > I imagine the results would be pretty awesome to listen to.
      >
      > --
      > Paul Nicholson
      > Manchester, UK.
      > --
      >
      >
      >
      > Post message: VLF_Group@yahoogroups.com
      > Subscribe: VLF_Group-subscribe@yahoogroups.com
      >
      > Members may request the option of receiving just one e-mail per day
      > which contains all of the day's comments. Simply send an e-mail to the
      > list owner (VLF_Group-owner@yahoogroups.com) requesting digest mode.
      > Yahoo! Groups Links
    • Peter Schmalkoke
      Message 2 of 24 , Apr 1 10:34 AM
        I'm afraid it's not that simple. If there is a 3-phase power
        distribution system, then the 50/60 Hz noise can be at any phase
        locally. Moreover it is unlikely that the VLF signals from different
        locations would match well enough in the time domain to allow for
        useful averaging, even after an estimated "correcting" phase shift,
        without knowledge of the actual differences between them.

        I rather imagine a DSP based system like this: At every receiver
        location, derive a power spectrogram of the received signal (thus
        ignoring all phase information) and another one derived from the
        local power grid (with appropriate previous filtering to match the
        spectral image with the received signal), and subtract the latter
        from the first with an appropriately adjusted amplitude ratio. The
        remaining power spectrograms from the individual stations could then
        be fed to and concentrated at an arbitrary location and there be
        averaged. In addition to the resulting power spectrogram, this could
        also be transformed to an artificially re-generated "pseudo" signal
        representing the actual overall VLF reception in an audible form,
        which could be distributed as a continuous audio data stream to
        everyone interested.

        This scheme should
        1. circumvent all the phase problems,
        2. clear the individual signals of the power grid related noise,
        3. reduce the effect of local interference in the resulting signal,
        4. reduce the effect of artificially created dropouts in the
        signals from the individual stations,
        5. be scalable, and thereby result in increasing accuracy with
        every station contributing from within a limited area.
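[Editor's note: a minimal sketch of the per-station scheme described above, assuming NumPy; the function names, frame sizes, and the clamp-at-zero choice are illustrative, not part of the original proposal.]

```python
import numpy as np

def power_spectrogram(x, nfft=1024, hop=512):
    """Short-time power spectrogram; all phase information is discarded."""
    window = np.hanning(nfft)
    frames = [x[i:i + nfft] * window
              for i in range(0, len(x) - nfft + 1, hop)]
    return np.array([np.abs(np.fft.rfft(f)) ** 2 for f in frames]).T

def subtract_mains(rx, mains_ref, ratio=1.0):
    """Subtract a scaled mains-reference spectrogram from the received
    one.  Clamping at zero means an imperfect amplitude ratio only
    removes power, it never inverts it."""
    return np.clip(power_spectrogram(rx)
                   - ratio * power_spectrogram(mains_ref), 0.0, None)

def combine_stations(cleaned):
    """Average the non-negative spectrograms from all stations; since
    phases were dropped, no mutual cancellation can occur."""
    return np.mean(cleaned, axis=0)
```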

        Of course this would require that all the stations perform the spectral
        analysis based on the same sampling rate. Some kind of time stamp with
        an accuracy in the ms range would also be helpful. Matching receiver
        bandwidths and amplitude resolutions would not be necessary, however,
        thus still allowing for different individual analog sub-system designs.

        Peter



      • pan@abelian.demon.co.uk
        Message 3 of 24 , Apr 1 11:12 AM
          JD wrote:

          > In my original post, I was thinking it would be advantageous
          > to bring the VLF signals into phase from a number of separated
          > receivers.

          Not possible: From some source there will be a propagation delay
          to each receiver - which you could adjust away by some added delay
          compensation. But then from any other source, the delays (and
          therefore the relative phases) would be completely different.
          So you can only bring every frequency into phase from all receivers
          for just one source location.

          > Is it true that the 60 Hz signal will be in phase everywhere on
          > the power grid?

          I don't know about 'in phase' everywhere, but in the UK, as far as
          I know, the frequency is the same everywhere, so every mains outlet
          has some fixed phase relationship to every other.

          As far as I know, the ear does not use phase information to discern
          arrival direction, so all that would be needed would be to bring
          the signals from pairs of receivers into some kind of reasonable
          time domain unison and then keep things locked by counting the
          mains cycles from each receiver. The sample rates, even if nominally
          the same, will not be identical, so the received data streams will
          need to be resampled anyway.

          Thinking more about this, little or no timing information would need
          to be inserted. It would not be unreasonable to get the software to
          scan the two data sets at various sub-samplings, produce a correlation
          function, and home in on the spike. Then one might use the steady
          stream of sferics to obtain a tight lock somewhere near the start of
          the data, and from there, maintain the lock by line cycle counting.
          The line cycle counters would nudge the resampling clock frequencies
          to maintain the counts in step.
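[Editor's note: the coarse alignment step could be sketched as below, assuming NumPy; the sub-sampled scan and the mains-cycle counting that maintains the lock are omitted.]

```python
import numpy as np

def coarse_align(a, b):
    """Return the delay d (in samples) of stream b relative to stream a,
    i.e. b[n] ~ a[n - d], by homing in on the spike in the
    cross-correlation of the two data sets."""
    xc = np.correlate(b, a, mode='full')
    return int(np.argmax(xc)) - (len(a) - 1)
```

In practice one would run this at various sub-samplings to keep the correlation cheap, then use the steady stream of sferics for a tight lock near the start of the data.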
          --
          Paul Nicholson
          Manchester, UK.
          --
        • pan@abelian.demon.co.uk
          Message 4 of 24 , Apr 1 12:01 PM
            Peter Schmalkoke wrote:

            > derive a power spectrogram of the received signal ...
            > another one derived one from the local power grid (with
            > appropriate previous filtering to match the spectral image
            > with the received signal), and subtract ...

            I don't think this would work; the difficult bit is surely the
            'appropriate previous filtering' bit. The mains spectra as
            received through the antenna and as taken more directly from the
            mains, would not maintain a stable amplitude ratio, so you would,
            as you say, have to match the two spectra before subtracting,
            which is equivalent to just striking out those unwanted
            frequencies directly from the received signal (as for example
            one of the filters Wolf has implemented in Spectrum Lab).

            I like the idea of a scheme in which, as more receivers are
            plumbed in, the quality improves. But I don't know how to do
            this for anything other than either a spot frequency or a single
            source location.

            Earlier in the thread, Shawn wrote of the impressive stereo effect
            obtained by manual (tape speed adjustment!!) alignment of two
            recordings. I don't see why something similar but more precise
            can't be implemented in software. This sounds like something
            realistically achievable. All the software has to achieve is a
            close and stable time domain alignment of a pair of signals, and
            we can rely on the human DSP to do the most wonderful non-coherent
            directional and spectral joint analysis - to the pleasure of the
            listener.
            --
            Paul Nicholson
            Manchester, UK.
            --
          • JimNLori
            Message 5 of 24 , Apr 1 5:33 PM
              That is a very good point: only one source could be resolved at a
              given phase relationship of the signals. But that reminds me of
              the project where 3-D
              lightning tracks were recovered by use of three microphones. Even one VLF
              source could be expected to move through space, I suppose. I wonder if the
              3D structure of VLF sources could be re-constructed if three receivers were
              located far enough apart?

              jd

            • Peter Schmalkoke
              Message 6 of 24 , Apr 1 7:09 PM
                Paul Nicholson wrote:

                >> derive a power spectrogram of the received signal ...
                >> another one derived one from the local power grid (with
                >> appropriate previous filtering to match the spectral image
                >> with the received signal), and subtract ...
                >
                > ... The mains spectra as
                > received through the antenna and as taken more directly from the
                > mains, would not maintain a stable amplitude ratio,

                Agreed. But a perfect amplitude match would not strictly be
                necessary. Too little subtracted "noise" would still yield an
                improvement, while too much subtracted "noise" would only result
                in removal of a greater portion of the healthy signal components
                than necessary, which still results in only clean signal
                components at the output.

                > so you would,
                > as you say, have to match the two spectra before subtracting,
                > which is equivalent to just striking out those unwanted
                > frequencies directly from the received signal (as for example
                > one of the filters Wolf has implemented in Spectrum Lab).

                That's not equivalent. I am thinking not only of the harmonics at
                fixed frequencies, but also of the broadband pulses from the power
                grid, which cannot be removed using a frequency comb filter.

                > I like the idea of a scheme in which, as more receivers are
                > plumbed in, the quality improves. But I don't know how to do
                > this for anything other than either a spot frequency or a single
                > source location.

                Paul, you seem to be stuck with thinking of phase and amplitude
                match. My approach does not require phase match at all, since all
                phase information is completely ignored with the spectrograms and
                all the arithmetic is done with the spectrograms only. In the
                spectrograms all amplitudes are positive and thus the outputs of
                the individual receivers can be added (averaged) without occurrence
                of mutual cancellations. Time shifts between the received signals
                from different locations (due to different propagation delays),
                would only result in a nearly inaudible "smearing out" of the
                averaged signal in time (according to distances between the
                receiver locations), while such events like whistlers would still
                remain articulate. With an appropriate definition of the maximum
                allowable inaccuracy in the time domain it would also be possible
                to increase the maximum distance between the receiver locations,
                which may contribute to the same common output stream.

                Peter
              • pan@abelian.demon.co.uk
                Message 7 of 24 , Apr 1 7:44 PM
                  Peter Schmalkoke wrote:

                  > Paul, you seem to be stuck with thinking of phase and amplitude
                  > match.

                  Yes, I don't know how to do a non-coherent summation of the
                  signals and turn the result back into something audible.
                  Having discarded the phase information, you no longer have enough
                  information to reproduce an audio output. You might suggest
                  simply setting the phase of each frequency component to zero
                  before converting back to the time domain, but then the resulting
                  audio would surely be very poor. For example, each 'click'
                  of a sferic will end up smeared out in time since its frequency
                  components will no longer sum to a sharp point in the time domain.
                  --
                  Paul Nicholson
                  Manchester, UK.
                  --
                • Wolf DL4YHF
                  Message 8 of 24 , Apr 2 12:57 AM
                    Greetings all,

                    An interesting subject - wish I had more time to contribute
                    something sensible ;-)

                    Just a few points / hints for now:
                    - The sampling rate / timestamp issue:
                    Peter Martinez (G3PLX) is experimenting with a system for coherent
                    (phase-sensitive) analysis of signals in the time domain, using a GPS
                    receiver with 1-pps output on one channel of the soundcard to
                    A) resample the audio stream to an ultra-precise sampling rate, using
                    a PLL to detect the momentary soundcard sample rate, and a simple
                    first-order interpolation for resampling the data, then do the rest
                    of the processing at exactly 8000 samples/second (in his case; here
                    48000 samples per second may be appropriate). Peter told me the loss
                    due to this simple resampling process is small, because the sampling
                    rates (input and output) are very close to each other.

                    B) add a precise timestamp to the collected data for later
                    post-processing (time-of-arrival, coherent integration, etc).

                    With this, one could circumvent many of the soundcard- and real-time-
                    related problems. Of course, some additional hardware (a GPS receiver
                    with 1-pps output) would be required at every receiving site.
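[Editor's note: step A could be sketched as below, assuming NumPy; the PLL measurement of the momentary soundcard rate is taken as given (`fs_measured` here is illustrative), and only the first-order interpolation is shown.]

```python
import numpy as np

def resample_linear(x, fs_measured, fs_nominal=48000.0):
    """Resample x from the PLL-measured soundcard rate to the nominal
    rate with first-order (linear) interpolation.  The loss is small
    because the two rates are very close to each other."""
    n_out = int(len(x) / fs_measured * fs_nominal)
    t_out = np.arange(n_out) / fs_nominal      # target sample instants
    t_in = np.arange(len(x)) / fs_measured     # actual sample instants
    return np.interp(t_out, t_in, x)
```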

                    About the mains harmonics and other annoyances we have to deal with on
                    some receiving sites: Not all of them are connected to the AC mains
                    frequency. Some are totally independent of the 50 / 60 Hz frequency
                    (like asynchronous motors, CRT deflection signals, etc.), others are
                    "sidebands" of the mains harmonics (caused by a kind of amplitude
                    modulation). I tried different approaches besides Paul's excellent
                    multi-stage comb filter to get rid of most of them, but there really is
                    no simple solution to cope with all. Especially, if the first step would
                    be converting the RX stream into overlapping power spectra, you would
                    lose a lot of information - imagine this: A tweek, caught in a single
                    FFT block, cannot be converted back from a power spectrum into the time
                    domain: looking at the spectrum, you cannot see if it goes up or down
                    in frequency (this is the reason why the FFT-based filter in SpecLab
                    processes amplitudes *and* phases). You could make the FFTs so short
                    that you have 20 spectra per second, but the frequency bins would be so
                    wide that it's almost useless to remove narrow-spaced harmonics or other
                    carriers.

                    Have a nice Sunday all,

                    Wolfgang "Wolf" Büscher
                    (DL4YHF)
                  • Peter Schmalkoke
                    Message 9 of 24 , Apr 2 5:55 AM
                      Paul Nicholson wrote:

                      > ... setting the phase of each frequency component to zero
                      > before converting back to the time domain, but then the resulting
                      > audio would surely be very poor. For example, each 'click'
                      > of a sferic will end up smeared out in time since its frequency
                      > components will no longer sum to a sharp point in the time domain.

                      Sure, absolutely.

                      But our starting point was the wish to somehow combine the signals
                      from separated receiving locations with the goal being listening to
                      that combination, not precise analysis in the time domain.

                      I am also comparing the imagined result with the audio output
                      streams of the online DAN receivers as a reference, and I still
                      see a potential quality boost on the audio side of the coin.

                      The necessarily existing time shifts between the individual signals
                      (depending on the distances between the receiver locations and the
                      actual locations of the VLF signal source) must result in uncertainties
                      in the time domain anyway.

                      This could perhaps be solved by deriving the actual source locations
                      from all the receivers' output signals and then reconstructing
                      virtual signals that represent the source locations. Surely a
                      difficult task. Thereafter the associated information on source
                      location could be omitted and the virtual signals added based on a
                      single common clock. The information on source location could also
                      be used to reconstruct an arbitrary multi-channel audio image (with
                      stereo being the simplest choice)!

                      Such a scheme would require that
                      - all the individual receivers' signals be obtained with largely
                      identical hardware (excluding much of the fun factor),
                      - very precise timing information (in the µs range) be maintained
                      among all the participating receiving units and added to the data
                      streams,
                      - the data streams be transferred to a central signal processing
                      unit without a significant portion of the signal processing taking
                      place at the receiver locations. The entire DSP must then be
                      conducted at that CPU location. The workload there would be
                      tremendous and most probably require some pretty expensive hardware.

                      So this would surely lie beyond the scope of the initial idea.

                      Peter
                    • Peter Schmalkoke
                      Message 10 of 24 , Apr 2 6:51 AM
                        Wolf DL4YHF wrote:

                        > About the mains harmonics and other annoyances we have to deal with on
                        > some receiving sites: Not all of them are connected to the AC mains
                        > frequency. Some are totally independent of the 50 / 60 Hz frequency
                        > (like asynchronous motors, CRT deflection signals, etc.), others are
                        > "sidebands" of the mains harmonics ...

                        Yes, a well known and regrettable fact. The approach of combining a
                        multitude of spatially separated receivers could diminish some of
                        the more local annoyances like motors and CRTs. Furthermore, if a
                        multitude of receivers within a restricted area is combined, then
                        perhaps some designated stations could be used to pick up a maximum
                        relative amplitude of some identified disturbing sources typical of
                        that area (like high voltage lines for both mains and railway power
                        supply), for subtraction purposes only. Since railway power lines
                        are themselves the most prominent radiating sources of the
                        disturbances from that net, this in particular should help quite
                        well.

                        Peter
                      • pan@abelian.demon.co.uk
                        Message 11 of 24 , Apr 2 11:24 AM
                          Wolf wrote:

                          > Peter Martinez (G3PLX) is experimenting with a system for
                          > coherent (phase-sensitive) analysis of signals in the time
                          > domain, using a GPS receiver with 1-pps-output on one
                          > channel

                          Such a scheme is much more general purpose in that it doesn't
                          exploit any particular characteristic of the received signal.
                          I've used a similar process, using a PPS signal derived from
                          60kHz MSF, to timestamp soundcard data.

                          I tried this morning an experiment, removing the phase information
                          from an FT block, rotating each complex bin vector into alignment
                          with the real axis. The resulting output was barely recognisable
                          as a VLF signal. The crisp patter of sferics was smeared out into
                          a noisy hiss with only a small fraction of sferics emerging vaguely
                          sferic-like. The input and output signals have identical amplitude
                          spectra, but sound completely different. I'm not surprised - I just
                          wondered what it would sound like!
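[Editor's note: the experiment is easy to reproduce; a sketch assuming NumPy.]

```python
import numpy as np

def zero_phase(x):
    """Keep each FFT bin's magnitude, rotate its phase to zero, and
    transform back.  The output has an amplitude spectrum identical to
    the input but a completely different waveform: a sharp 'click' is
    smeared out in time because its frequency components no longer sum
    to a sharp point."""
    mags = np.abs(np.fft.rfft(x))
    return np.fft.irfft(mags, len(x))
```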

                          I think we're lucky with the VLF signals, there are sufficient
                          'features' within the signal to provide information to bring a
                          left and right signal into some realistic stereo alignment, so
                          we don't require the receivers to go to the trouble of injecting
                          timing marks. They don't even have to filter the hum (nice isn't
                          it to find a good use for that tiresome hum!)

                          But when it comes to actually combining signals, eg to make say
                          the left channel the sum of two or more VLF data streams, that
                          becomes much more tricky, and possibly undesirable. Undesirable
                          because the combined signal would weaken the stereo image.
                          Each 'ear' would be listening to the sum of two or more receivers
                          so the apparent position of a source in the stereo image would
                          become smeared or ambiguous.

                          Peter wrote:

                          > But our starting point was the wish to somehow combine the
                          > signals from separated receiving locations with the goal being
                          > listening to that combination, not precise analysis in the time
                          > domain.

                          > This could perhaps be solved by deriving the actual source
                          > locations from all the receivers' output signals and then
                          > reconstruct virtual signals that represent the source locations.

                          Yes, we are aiming at quite different things then, as I have
                          digressed away on a tangent, inspired by Johan's and Shawn's
                          suggestion of combining pairs of receivers to make a stereo
                          recording. Or rather, not combining, but simply aligning. We let
                          the 'ear' do the combining so no mega CPU required apart from the
                          brain. I'm afraid my comments so far have been limited to that
                          which is realistically achievable, and all I can contribute with
                          that limitation is to suggest that no extra timing information
                          need be inserted by the receivers.

                          Launching now into speculation, and following Peter's direction,
                          I wonder how much can be achieved towards rendering a virtual
                          image of the VLF source distribution, using perhaps only
                          non-coherent signal processing. A certain amount is done already
                          by those networks of VLF receivers that are used to plot lightning
                          distribution. Accurate arrival timing of prominent sferics is used
                          to triangulate the sources and so produce a map of thunderstorm
                          locations. Perhaps something similar could be, or is being, done
                          for things like whistlers and auroral noises? I wouldn't know
                          where to begin with the signal processing. But I would be tempted
                          to throw in the same suggestion: that the VLF signal itself contains
                          enough markers to allow multiple received signals to be organised
                          properly for processing. With some thought, it might not be necessary
                          to demand µs timestamping. An interesting question is:- what is the
                          minimum number of receivers necessary to pin down, say, sferic source
                          locations, given that you have no embedded timing information in
                          the signals? Say, you can use only the sferics for alignment, and
                          assume perfect sample rates. So the degrees of freedom you have
                          are to translate all but one of the received signals along the time
                          axis, and you are looking for the one correct set of time displacements
                          which bring all the received sferics into their correct place on
                          a virtual source map. The question is, is this possible, and if so,
                          how many receivers would be needed to eliminate the timing ambiguity?
                          And a follow-up question:- if you are allowed to compare amplitudes
                          and not just relative arrival times (ie you demand calibrated
                          receivers), does that reduce the number? I don't know if that's
                          an 'interesting' question - I pose it before considering the answer.
                          Maybe the lightning mapping networks already use some algorithm along
                          those lines?
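
                          As a toy illustration of the timing question (not any lightning
                          network's actual algorithm): assuming straight-line propagation on
                          a flat plane, N receivers give N-1 independent arrival-time
                          differences, and a source position can be searched for directly.
                          The grid search, the function name, and the flat-plane geometry are
                          all simplifying assumptions of mine.

```python
import numpy as np

C = 3.0e8  # straight-line propagation speed, m/s (flat-plane toy model)

def locate_by_tdoa(rx_positions, arrival_times, grid):
    """Brute-force source location from arrival-time differences.
    rx_positions: (N, 2) receiver coordinates in metres.
    arrival_times: N arrival times in seconds; only their differences
    are used, so no absolute (embedded) timestamp is required.
    grid: (M, 2) candidate source positions to test.
    Returns the candidate whose predicted time differences best match
    the measured ones in the least-squares sense."""
    meas = arrival_times - arrival_times[0]      # unknown emission time cancels
    best, best_err = None, np.inf
    for p in grid:
        t = np.linalg.norm(rx_positions - p, axis=1) / C
        err = float(np.sum(((t - t[0]) - meas) ** 2))
        if err < best_err:
            best, best_err = p, err
    return best
```

                          With three receivers the two hyperbolae can intersect in two
                          points, so a fourth receiver is generally what removes the last
                          ambiguity.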

                          For whistlers and tweeks, you could in principle de-chirp them and
                          then proceed with a timing comparison as for sferic locating.

                          Oh well, hopefully there's some bones to pick at in there.
                          --
                          Paul Nicholson
                          Manchester, UK.
                          --
                        • Peter Schmalkoke
                          Message 12 of 24 , Apr 2 1:24 PM
                            Wolf wrote:

                            >> Peter Martinez (G3PLX) is experimenting with a system for
                            >> coherent (phase-sensitive) analysis of signals in the time
                            >> domain, using a GPS receiver with 1-pps-output on one
                            >> channel
                            >
                            What does "pps" stand for?
                            Wikipedia gives at least 17 results - none of them applicable.

                            Peter
                          • Rob
                            Message 13 of 24 , Apr 2 7:36 PM
                              Hi Peter
                              PPS = pulse per second
                              Cheers
                              Rob

                              --- In VLF_Group@yahoogroups.com, Peter Schmalkoke
                              <peter.schmalkoke@...> wrote:
                              >
                              > Wolf wrote:
                              >
                              > >> Peter Martinez (G3PLX) is experimenting with a system for
                              > >> coherent (phase-sensitive) analysis of signals in the time
                              > >> domain, using a GPS receiver with 1-pps-output on one
                              > >> channel
                              > >
                              > What does "pps" stand for?
                              > Wikipedia gives at least 17 results - none of them applicable.
                              >
                              > Peter
                              >
                            • Jean-L. RAULT
                              Message 14 of 24 , Apr 2 8:35 PM
                                Peter Schmalkoke a écrit :
                                > Wolf wrote:
                                >
                                >
                                >>> Peter Martinez (G3PLX) is experimenting with a system for
                                >>> coherent (phase-sensitive) analysis of signals in the time
                                >>> domain, using a GPS receiver with 1-pps-output on one
                                >>> channel
                                >>>
                                 > What does "pps" stand for?
                                 > Wikipedia gives at least 17 results - none of them applicable.
                                 >
                                 > Peter

                                 It's an ultrastable one "pulse per second" (1 Hz) output signal delivered by the GPS receiver.

                                 Jean-Louis F6AGR
                              • Wolf DL4YHF
                                Message 15 of 24 , Apr 3 8:51 AM
                                  Hi Peter,

                                  1-pps = one pulse per second.
                                  Produces "exactly" one pulse per second. In certain GPS receivers, the accuracy of the rising edge of this pulse is in the XX nanosecond range, and it's almost jitter-free, making it possible to lock a 10 MHz OCXO (oven-controlled crystal oscillator) reference frequency to the GPS's master clock.
                                  Example: google for "The G4JNT GPSDO".
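
                                  One simple use of the 1-pps signal, as a sketch (a hypothetical
                                  helper, not the G4JNT design): record the pps edges alongside the
                                  VLF audio and count samples between consecutive edges. The edges
                                  are exactly one second apart, so the mean inter-edge count is the
                                  soundcard's true sample rate, usable to correct timestamps.

```python
def estimate_sample_rate(pps_sample_indices):
    """Estimate a soundcard's true sample rate from GPS 1-pps marks.
    pps_sample_indices: the sample index at which each pps rising edge
    was recorded.  Consecutive edges are exactly one second apart, so
    the mean inter-edge sample count is samples-per-second."""
    gaps = [b - a for a, b in zip(pps_sample_indices, pps_sample_indices[1:])]
    return sum(gaps) / len(gaps)
```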

                                  Cheers, Wolf.


                                  >>>> Peter Martinez (G3PLX) is experimenting with a system for
                                  >>>> coherent (phase-sensitive) analysis of signals in the time
                                  >>>> domain, using a GPS receiver with 1-pps-output on one
                                  >>>> channel
                                  >
                                  > What does "pps" stand for?
                                  > Wikipedia gives at least 17 results - none of them applicable.
                                  >
                                  > Peter