
Re: apodising filters

  • ofer_r.geo
    Message 1 of 22, Sep 1, 2009
      Hi All

      What Benchmark do to correct jitter is to use an Asynchronous Sample Rate Converter (ASRC).
      Jittered samples are samples moved in time because of clock problems. The ASRC treats these samples as if they had been taken at the correct times, which is not true, and reconstructs the signal from those incorrectly timed samples; the reconstruction is done against the DAC's own clock.
      This is the wrong way to handle jitter.
      Several companies do this right, like MSB, which ignores the sending side's clock, puts the data in a buffer, and then reads the data out with the DAC clock.
      Of course, this method can cause the buffer to overflow or underflow, so they use silent gaps in the music to resync the buffer.
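The buffer-and-reclock approach described here can be sketched in a few lines of Python. This is a toy model of the general technique, not MSB's actual implementation: sample values arrive tagged with jittery source timing, only the data is queued, and readout is paced solely by the steady DAC clock, so incoming timing errors never reach the output.

```python
from collections import deque

# Toy model of FIFO reclocking: incoming samples carry jittery
# arrival times, but output timing comes only from the DAC clock.
incoming = [(n * 1.0 + jitter, n) for n, jitter in
            enumerate([0.00, 0.03, -0.02, 0.01, -0.04, 0.02])]

fifo = deque()
for _arrival_time, sample in incoming:
    fifo.append(sample)        # arrival time is ignored: only data is kept

dac_period = 1.0               # steady DAC clock, one sample per period
output = [(i * dac_period, fifo.popleft()) for i in range(len(fifo))]

# Output timestamps are perfectly uniform regardless of input jitter.
print(output)
```

The silence-gap resync mentioned above would then amount to resetting the queue depth whenever the stream goes quiet, so slow clock drift never accumulates into an overflow or underflow.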
      There are other ways to handle jitter.
      In the professional audio industry, there is a master clock which controls both the sending and the receiving sides.

      Ofer


      --- In regsaudioforum@yahoogroups.com, "regtas43" <regonaudio@...> wrote:
      >
      >
      > Maybe I can offer a mathematical view (without formulas) of how I see the situation at present -- no claims of being definitive here; I am too busy working on my book to have gone through this inch by inch as yet.
      >
      > The theorem (Whittaker's theorem, though others got their names on it later) is this:
      > If a signal has no energy content above frequency f, then knowing its samples, taken 2f times per unit time, completely determines the signal.
      >
      > What this means in practice is that if one linear-phase filters the signal to have no content above, say, 20 kHz and samples at 44.1 kHz, then one can reconstruct the part of the original signal that was below 20 kHz from the samples.
      >
      > Now think about an impulse in the original signal. An impulse has infinite bandwidth. What does it look like if one removes the energy above 20 kHz but does not shift phase below 20 kHz? It looks like what is known as a sinc function -- or, to mathematicians, a Dirichlet kernel. This looks like a central pulse with little ripples before and after.
      >
      > In no sense is this "wrong". This really is what that part of an impulse looks like if you take out its content above 20 kHz. And the "pre-ringing" part is above 20 kHz. This is just what an impulse looks like if you bandlimit it without phase alteration.
      >
      > Now how could you do the filtering to get this thing from an impulse? The good way would be to sample the signal at a very high sampling rate, so that the analogue filter needed to get rid of the "aliasing energy" (that is, to get rid of content above half the sampling rate) could be placed at a very high frequency, and this analogue filtering would not be doing any phase shifting to speak of down below 20 kHz.
      >
      > Then you could use a linear-phase digital filter to get rid of the energy above 20 kHz and you would be ready for CD.
      >
      > But if you do analogue brickwall filtering at 20 kHz, you are typically going to have heavy phase shifting at the top, just below 20 kHz and down into the audible range. This was common in times gone by. It is also possible to screw up the digital filtering even if you do the initial filtering high up and the initial A-to-D at a high sampling rate.
      >
      > The Benchmark DAC does linear-phase reconstruction, as I understand what is going on: if the original signal was linear-phase converted into digits, which turns an impulse into the Dirichlet kernel, then the Benchmark will produce the correct reconstruction of the linear-phase-filtered original signal. What comes out will be a literal copy of the under-20 kHz part of the signal that came into the original A-to-D device, except for dither noise.
      >
      > But of course if the recording's A-to-D messed the signal up and did not filter it so as to preserve the sub-20 kHz part of the signal correctly (correctly as to phase -- most of them are flat in frequency response), then of course the output from a linear-phase converter will not put out the original signal's sub-20 kHz part correctly. A D-to-A can only work with what it has.
      >
      > So what do apodising filters do, de facto? They phase-shift the reconstruction. The result is not what was there originally as the sub-20 kHz part of the input to the A-to-D, if the A-to-D did its job correctly! The signal has been messed with as to phase -- this is assuming the A-to-D did the right thing. But of course if the A-to-D did the wrong thing, then the apodising filter's changing of it might in fact be closer to the original signal. But one is effectively guessing at what went wrong.
      >
      > Why is this even sensible? Why is this different from EQing your system's response, de facto guessing at what microphones do wrong? (Which most people do not admit to doing, though most of them do it in effect, since they "review" things by listening to commercial recordings.)
      >
      > The answer to that seems to be this:
      > Phase shifting that causes audible-range pre-ringing is annoying, but phase shifting that causes post-ringing is not so annoying, because the sound itself masks the post-ringing. Having the after-sound is not as bad as having the too-early sound.
      >
      > So one can get away with the phase EQ in that, if the phase was right on the CD to begin with (and sometimes it is), it will be all right to make it post-ring -- one is altering the signal, but not in some annoying way. The alteration might not even be really audible, because of masking. But what would be audible, and presumably good, would be to get rid of the pre-ringing from wrongly made CDs -- because pre-ringing is hyper-annoying.
      >
      > I agree with Victor that well-made CDs can sound extremely good. There is quite a lot of evidence that correctly done CD digital is remarkably close to transparent. But the problem is that all too often it has not been done correctly!
      >
      > So what we have, it seems, with apodising filters is a system that changes the signal but is harmless (or largely so) when it is not needed and very helpful when it is needed.
      >
      > I would suppose that having it switchable would be sensible!
      >
      > Anyway, that is the way I see the mathematics.
      >
      > I have not heard the results yet. (I am hoping to get my hands on a Meridian player soon.)
      >
      > If I have interpreted this wrong, UB should feel free to say so.
      > (I have not studied this in microdetail as yet -- I am busy finishing my book.)
      >
      > REG
      >
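REG's description of the bandlimited impulse can be checked numerically. The sketch below (my illustration, not from the thread) bandlimits a digital impulse with a linear-phase windowed-sinc low-pass filter and confirms the symmetric ripples before and after the central peak that he describes:

```python
import numpy as np

fs = 44100.0                     # sampling rate, Hz
fc = 20000.0                     # cutoff, Hz
taps = 201                       # filter length (odd -> exact linear phase)
n = np.arange(taps) - (taps - 1) // 2

# Linear-phase windowed-sinc low-pass filter (Hamming window).
h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(taps)

# Band-limit an impulse; the output is the filter's impulse response,
# delayed by the filter's group delay of (taps - 1) / 2 samples.
impulse = np.zeros(512)
impulse[256] = 1.0
out = np.convolve(impulse, h)

peak = np.argmax(out)
pre = out[peak - 40:peak]        # ripples before the peak
post = out[peak + 1:peak + 41]   # ripples after the peak

# Linear phase -> the ringing is symmetric about the peak.
print(np.allclose(pre[::-1], post, atol=1e-9))
```

The nonzero values in `pre` are exactly the "pre-ringing" under discussion: they are not an error, just what a phase-preserving bandlimit does to an impulse.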
    • listentwice2002
      Message 2 of 22, Sep 1, 2009
        Hi Ofer
        Last weekend I had the opportunity to listen to the new Naim DAC. Naim claim to achieve zero jitter. As I understood it, they separate the 2-channel music data from the other data transmitted. Then only the music data bits are stored in a first-in-first-out (FIFO) buffer, and a circuit controls buffer over- and underrun. Ten internal clocks are present, and the right one is switched in as master clock to match the incoming sample rate.
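The clock-bank scheme described here (as I understand it; the rates below are a hypothetical bank, not Naim's published design) amounts to selecting, from a set of fixed oscillators, the one nearest the measured incoming rate:

```python
# Toy sketch: pick the internal master clock that matches the
# incoming sample rate (rates in Hz; this bank is hypothetical).
CLOCK_BANK = [44100, 48000, 88200, 96000, 176400, 192000,
              352800, 384000, 705600, 768000]

def select_master_clock(incoming_rate, bank=CLOCK_BANK):
    """Return the bank clock closest to the measured incoming rate."""
    return min(bank, key=lambda c: abs(c - incoming_rate))

print(select_master_clock(44099.7))   # slightly off-nominal S/PDIF stream
```

Once selected, that fixed oscillator paces the FIFO readout, so the output timing no longer depends on the source's clock at all.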

        One thing we discussed was why digital cables make a difference with reclocking DACs, and Naim's answer was: currently people are working on how to separate the noise from the data. If they succeed, there will be a way of analyzing and reducing specific noise, or of understanding how to compensate for it.
        Audio streaming via network shows some audible differences between cables despite data buffering. Noise is the culprit, they say: serial data brings jitter and noise; bit-correct buffered data has no jitter, but still noise.
        The noise problem is still to be solved, but first it must be isolated from the data for analysis.
        The sound was very forward, with good depth and little "tss" (only CDs were played, no high-resolution material, even though the DAC was said to be capable of 768 kHz). Bass is very powerful with the internal 210 VA power supply; still, it was outperformed by an external power supply with a 500 VA transformer (or maybe even more) driving the DAC's analog section and following stages. Such an external supply relieves the internal supply of load, so the digital section benefits from more reserve.

        I think the word-clock idea will not work at home with numerous digital sources with sample rates from 44.1 to 192 kHz.
        It might work with clock outputs from the DAC to the source, assigned to certain inputs. But as soon as you run a DVD player with a CD instead of a DVD, the whole thing will stop working.
        Regards, Hans-Martin

      • yvl222
        Message 3 of 22, Sep 1, 2009
          The time and frequency domains give us equivalent technical information. Unfortunately, the frequency-domain information we have been using for consumer audio often uses only the amplitude part; the phase response is left out. If phase had been an integral part of audio measurements, the group-delay issue in steep filters might be better understood, and perhaps would have been resolved already.

          Victor
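Victor's group-delay point can be made concrete: group delay is the negative derivative of the unwrapped phase response, something amplitude-only measurements cannot show. A minimal NumPy sketch (mine, with an arbitrary example filter, not a measurement from the thread):

```python
import numpy as np

taps = 101
n = np.arange(taps) - (taps - 1) // 2
h = 0.4 * np.sinc(0.4 * n) * np.hamming(taps)    # linear-phase low-pass, cutoff 0.2

nfft = 8192
H = np.fft.rfft(h, nfft)
freqs = np.arange(len(H)) / nfft                 # cycles/sample
phase = np.unwrap(np.angle(H))

# Group delay in samples: -d(phase)/d(omega), with omega = 2*pi*f.
gd = -np.diff(phase) / (2 * np.pi * np.diff(freqs))

# In the passband (well below the 0.2 cutoff) the delay is the
# constant (taps - 1) / 2 = 50 samples expected of linear phase.
passband = freqs[:-1] < 0.15
print(gd[passband].mean())
```

A filter with nonconstant group delay across the band would show up immediately in `gd`, while its magnitude response could still measure perfectly flat.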

          --- In regsaudioforum@yahoogroups.com, Uli Brueggemann <uli.brueggemann@...> wrote:
          >
          > IMHO this is a key point about how we listen.
          > If even the maths cannot solve the problem of getting exact information about frequency and time behaviour at the same time, then the ear/brain system cannot do it either.
          > This means we cannot perceive a signal with perfect information in both the time and the frequency domain. We hear a mix. That's why we do not notice all the sharp peaks and dips in the frequency response, and why the usual 1/n-octave averaging or ERB ... works. We also cannot experience micro-detailed timing, as then we wouldn't know about the frequency. MP3 is thus possible.
          >
          > Of course the main question is to get a proper approach to how we listen and detect both time and frequency information from a signal.
          >
          > Uli
          >
          >
          >
          > On Tue, Sep 1, 2009 at 1:19 AM, regtas43 <regonaudio@...> wrote:
          >
          > >
          > >
          > > This is just the mathematics of the situation.
          > >
          > > It considerably predates Heisenberg as mathematics, but nowadays
          > > it is usually called the Heisenberg Uncertainty Principle,
          > > because the physics applications are so spectacular.
          > >
          > >
          >
        • yvl222
            Message 4 of 22, Sep 1, 2009
            Hi HM,

            My observations are similar to yours. This is probably one reason why the Big Ben produces a cleaner sound, even though its measured jitter is not exceptional.

            Although part of the effect of noise can be translated into jitter, the spectrum of the noise can affect the audio quality in different ways. Recently I had the chance to look at the jitter, and the jitter spectra, of several digital audio devices. I was surprised to observe that one output device with jitter just below 1 ns sounded distinctly better and cleaner than another device with jitter in the 150 ps range. The jitter spectra revealed that the high-jitter device had a spectrum like random noise, while the low-jitter device had a spectrum with many single-frequency spikes. These spikes may upset the behavior of some PLLs. So a low jitter number alone does not guarantee good sound, and a high jitter number with a well-behaved spectrum can sound good.

            Many of the sonic differences we hear and attribute to jitter may be caused by the behavior of the PLLs under different jitter conditions.
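Victor's spectrum observation can be illustrated with synthetic data. The sketch below uses made-up jitter records at the RMS levels he quotes (not his actual measurements): one is white noise, the other a single sinusoidal spur, and the spur towers above its spectral floor even though its total RMS is nearly seven times lower:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100          # sample clock, Hz
N = 1 << 14

# Case 1: ~1 ns RMS random (white) jitter.
jitter_white = rng.normal(0.0, 1e-9, N)

# Case 2: 150 ps RMS jitter concentrated in one sinusoidal spur.
t = np.arange(N) / fs
jitter_spur = 150e-12 * np.sqrt(2) * np.sin(2 * np.pi * 3000 * t)

def crest_factor(x):
    """Peak-to-mean ratio of the amplitude spectrum (DC bin excluded)."""
    spec = np.abs(np.fft.rfft(x))[1:]
    return spec.max() / spec.mean()

# The spur-dominated spectrum has one towering line; the white
# jitter spectrum is flat, despite its much larger total RMS.
print(crest_factor(jitter_white), crest_factor(jitter_spur))
```

A single-number jitter spec collapses these two very different spectra into "1 ns" and "150 ps", which is exactly why the number alone predicted the wrong winner in Victor's comparison.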

            Victor

          • regtas43
              Message 5 of 22, Sep 1, 2009
              People do this because long ago it was discovered that the frequency response is a much larger factor than the phase response in what one hears.

              This of course does not mean that phase is inaudible.

              We covered this earlier, and I explained how, in particular, large phase nonlinearities that occur within a single "critical band" can in fact induce amplitude-response errors in the ear -- because of the ear's spread frequency discrimination -- and this surely leads to audible effects.

              Please look up those old posts--I do not want to go through this again.

              On the other hand, it is easy to exaggerate the extent to which broader, slower phase shifts are audible. For one thing, looking at pictures of signals is very deceptive.

              This is old stuff!

              REG
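The pre-ringing versus post-ringing trade-off REG describes in this thread can be made concrete. The sketch below is my own illustration using the standard real-cepstrum construction, not Meridian's or anyone's actual apodising filter: it builds a minimum-phase filter with the same magnitude response as a linear-phase low-pass and shows that the ringing energy moves from before the peak to after it.

```python
import numpy as np

taps = 101
n = np.arange(taps) - (taps - 1) // 2
# Linear-phase windowed-sinc low-pass (normalized cutoff 0.25).
h_lin = 0.5 * np.sinc(0.5 * n) * np.hamming(taps)

# Minimum-phase filter with the same magnitude response,
# via the real-cepstrum (homomorphic) construction.
nfft = 4096
mag = np.abs(np.fft.fft(h_lin, nfft))
cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
fold = np.zeros(nfft)
fold[0] = cep[0]
fold[1:nfft // 2] = 2 * cep[1:nfft // 2]
fold[nfft // 2] = cep[nfft // 2]
h_min = np.fft.ifft(np.exp(np.fft.fft(fold))).real[:taps]

def pre_ring_energy(h):
    """Fraction of the filter's energy arriving before its peak."""
    peak = np.argmax(np.abs(h))
    return np.sum(h[:peak] ** 2) / np.sum(h ** 2)

# Linear phase: a large share of the ringing precedes the peak.
# Minimum phase: the energy is pushed after the peak (post-ringing).
print(pre_ring_energy(h_lin), pre_ring_energy(h_min))
```

Both filters remove the same out-of-band energy; only the timing of the ringing differs, which is the phase trade the thread is debating.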

              -- In regsaudioforum@yahoogroups.com, "yvl222" <yvl222@...> wrote:
              >
              > Time and frequency domains give us equivalent technical information. Unfortunately the frequency domain info we have been using for consumer audio often uses only the amplitude part of the information. The phase response is left out. If phase has been an integral part of the audio measurements, the group delay issue in steep filters may be better understood and perhaps have been resolved already.
              >
              > Victor
              >
              > --- In regsaudioforum@yahoogroups.com, Uli Brueggemann <uli.brueggemann@> wrote:
              > >
              > > IMHO this is a key point how we are listening.
              > > If even the maths cannot solve the problem to get information about exact
              > > frequency and time behaviour at the same time then the ear/brain system also
              > > cannot do this.
              > > This means we cannot perceive a signal with both perfect information for
              > > time and frequency domain. We hear a mix. That's why we do not recognize all
              > > the sharp peaks and dips in the frequency response and why the usual 1/n
              > > octave averaging or ERB ... works. We also cannot experience the
              > > micro-detailed timing as then we wouldn't know about the frequency then. MP3
              > > is thus possible.
              > >
              > > Of course the main question is to get a proper approach to how we listen and
              > > detect both time and frequency information from a signal.
              > >
              > > Uli
              > >
              > >
              > >
              > > On Tue, Sep 1, 2009 at 1:19 AM, regtas43 <regonaudio@> wrote:
              > >
              > > >
              > > >
              > > > This is just the mathematics of the situation.
              > > >
              > > > It considerably predates Heisenberg as mathematics, but nowadays
              > > > it is usually called the Heisenberg Uncertainty Principle,
              > > > because the physics applications are so spectacular.
              > > >
              > > >
              > >
              >
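The time/frequency trade-off Uli describes above can be seen numerically. Here is a toy sketch (my own illustration, assuming NumPy; the sample rate and window lengths are arbitrary choices): windowing a pure tone with a short window pins it down in time but smears its spectrum, while a long window does the reverse.

```python
import numpy as np

fs = 48000                              # sample rate in Hz (arbitrary choice)
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)     # one second of a 1 kHz sine

def spectral_width(signal, win_len):
    """Count FFT bins within 6 dB of the peak after applying a Hann window."""
    seg = signal[:win_len] * np.hanning(win_len)
    mag = np.abs(np.fft.rfft(seg, n=fs))    # zero-pad to a common 1 Hz bin grid
    return int(np.sum(mag > mag.max() / 2))

short = spectral_width(tone, 128)       # ~2.7 ms window: good time resolution
long_ = spectral_width(tone, 4096)      # ~85 ms window: good frequency resolution
print(short, long_)                     # the short window smears the tone's spectrum
```

The short window spreads the tone's energy over far more FFT bins than the long one, which is the uncertainty relation in discrete form.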
            • ofer_r.geo
              Message 6 of 22, Sep 2, 2009
                Hi Hans-Martin

                What you wrote is very interesting.
                I think jitter is one of the biggest problems in DACs; in my experience it spoils the attacks of the signal, and maybe that is a major reason why some people prefer analog over digital.

                For DVD playback, there are other solutions besides a FIFO which reduce the delay, but they don't handle jitter as well.

                Best Regards
                Ofer
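Ofer's point that jitter spoils the attacks of the signal can be illustrated with a small numerical sketch (my own toy example, assuming NumPy; the 2 ns jitter figure is an arbitrary assumption, not a measured value for any DAC): the error made by sampling at a slightly wrong instant is proportional to the signal's slope, so steep, attack-like content suffers most.

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for repeatability
fs = 44100                              # CD-like sample rate
ideal_times = np.arange(4096) / fs
jitter = rng.normal(0.0, 2e-9, size=ideal_times.size)  # 2 ns RMS clock jitter (assumed)

def rms_error(freq):
    """RMS difference between a jitter-sampled and an ideally sampled sine."""
    ideal = np.sin(2 * np.pi * freq * ideal_times)
    jittered = np.sin(2 * np.pi * freq * (ideal_times + jitter))
    return float(np.sqrt(np.mean((jittered - ideal) ** 2)))

slow = rms_error(100.0)      # gentle, low-slope content
fast = rms_error(10000.0)    # steep, attack-like content
print(slow, fast)            # error grows roughly in proportion to frequency
```

The same clock jitter produces roughly a hundred times more error on the 10 kHz content than on the 100 Hz content, which is why transients are the first casualty.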



                --- In regsaudioforum@yahoogroups.com, "listentwice2002" <hmartinburm@...> wrote:
                >
                > Hi Ofer
                > Last weekend I had the opportunity to listen to the new Naim DAC. Naim claim to achieve zero jitter. As I understood it, they separate 2-channel music data from the other data transmitted. Then only the music data bits are stored in a first-in-first-out (FIFO) buffer, and a circuit controls buffer over- and underrun. Ten internal clocks are present, and the right one is switched in as master clock to match the incoming sample rate.
                >
                > One thing we discussed was why digital cables make a difference with reclocking DACs, and Naim's answer was: currently people are working on how to separate noise from the data. If they succeed, there will be a way of analyzing and reducing specific noise, or of understanding how to compensate for it.
                > Audio streaming over a network shows some audible differences between cables, despite data buffering. Noise is the culprit, they say: serial data brings jitter and noise; bit-correct data has no jitter, but still noise.
                > The noise problem is still to be solved, but first it must be isolated from the data for analysis.
                > The sound was very forward, with good depth and little tss (only CDs were played, no HD material, even though the DAC was said to be capable of 768 kHz). Bass is very powerful with the internal 210 VA power supply; still, it was outperformed when using an external power supply with a 500 VA transformer (or maybe even more) driving the DAC's analog section and following stages. Such an external supply relieves the internal supply of load, so the digital section benefits from more reserve.
                >
                > I think the word clock idea will not work at home with numerous digital sources with sample rates from 44.1 to 192 kHz.
                > It might work with clock outputs from the DAC to the source, assigned to certain inputs. But as soon as you run a DVD player with a CD instead of a DVD, the whole thing will stop working.
                > Regards Hans-Martin
                >
                > >
                > > Hi All
                > >
                > > What the Benchmark does to correct jitter is to use an Asynchronous Sample Rate Converter (ASRC).
                > > Jittered samples are samples moved in time because of clock problems. The ASRC regards these samples as correctly timed, which is not true, and reconstructs the signal from the incorrectly timed samples; the reconstruction is done to the clock of the DAC.
                > > This is a wrong way to handle jitter.
                > > There are several companies who do this right, like MSB, which ignores the sending part's clock, puts the data in a buffer, and then reads the data out with the DAC clock.
                > > Of course, this method can cause the buffer to overflow or underflow, so they use silence gaps in the music to resync the buffer.
                > > There are other ways to handle jitter.
                > > In the professional audio industry there is a master clock, which controls both the sender and the receiver.
                > >
                > > Ofer
                > >
                > >
                > > --- In regsaudioforum@yahoogroups.com, "regtas43" <regonaudio@> wrote:
                > > >
                > > >
                > > > Maybe I can offer a mathematical view (without formulas) of the situation as I see it at present--no claims of being definitive here; I am too busy working on my book to have gone through this inch by inch as yet.
                > > >
                > > > The theorem (Whittaker's theorem, though others got their names on it later) is this:
                > > > If a signal has no energy content above frequency f, then knowing its samples taken 2f times per unit time interval completely determines the signal.
                > > >
                > > > What this means in practice is that if one linear-phase filters the signal to have no content above, say, 20 kHz and samples at 44.1 kHz, then one can reconstruct the part of the original signal that was below 20 kHz from the samples.
                > > >
                > > > Now think about an impulse in the original signal. An impulse
                > > > has infinite bandwidth. What does it look like if one removes the energy above 20 kHz but does not shift phase below 20 kHz? It looks like what is known as a sinc function--or, to mathematicians, a Dirichlet kernel.
                > > > This looks like a central pulse with little ripples before and after.
                > > >
                > > > In no sense is this "wrong". This really is what the part of an impulse looks like if you take out its content above 20 kHz. And the "pre-ringing"
                > > > part is up at 20 kHz. This is just what an impulse looks like if you bandlimit it without phase alteration.
                > > >
                > > > Now how could you do the filtering to get this thing from an impulse? The good way would be to sample the signal at a very high sampling rate, so that the analogue filter needed to get rid of the "aliasing energy" (that is, to get rid of content above half the sampling rate) could operate at a very high frequency, and this analogue filtering would not be doing any phase shifting to speak of down below 20 kHz.
                > > >
                > > > Then you could use a linear phase digital filter to get rid of the energy above 20 kHz and you would be ready for CD.
                > > >
                > > > But if you do analogue brickwall filtering at 20 kHz, you are typically going to have heavy phase shifting at the top, just below 20 kHz, and down into the audible range. This was common in times gone by. It is also possible to screw up the digital filtering even if you do the initial filtering high and the initial A to D at a high sampling rate.
                > > >
                > > > The Benchmark DAC does linear-phase reconstruction, as I understand what is going on: if the original signal was linear-phase converted into
                > > > digits, which makes an impulse into the Dirichlet kernel, then the Benchmark will produce the correct reconstruction of the linear-phase filtered original signal. What will come out will be a literal copy
                > > > of the under-20kHz part of the signal that came into the original A to D device, except for dither noise.
                > > >
                > > > But of course if the recording's A to D messed the signal up and did not filter it so as to preserve the sub-20kHz part of the signal correctly (correctly as to phase--most of them are flat in frequency response), then of course the output from a linear-phase converter will not put out the original signal's sub-20kHz part correctly. A D to A can only work with what it has.
                > > >
                > > > So what de facto do apodising filters do? They phase-shift the reconstruction. The result is not what was there originally as the sub-20kHz part of the input to the A to D, if the A to D did its job correctly! The signal has been messed with as to phase--this is assuming the A to D did the right thing. But of course if the A to D did the wrong thing, then the apodising filter's changing of it might in fact be closer to the original signal. But one is effectively guessing at what went wrong.
                > > >
                > > > Why is this even sensible? Why is this different from EQing your system as to response, de facto guessing what microphones do wrong?
                > > > (Which most people do not admit to doing, though most of them do it in effect, since they "review" things by listening to commercial recordings.)
                > > >
                > > > The answer to that seems to be this:
                > > > Phase shifting that causes audible-range pre-ringing is annoying, but phase shifting that causes post-ringing is not so annoying, because the sound itself masks the post-ringing. Having the after-sound is not as bad as having the too-early sound.
                > > >
                > > > So one can get away with the phase EQ in that, if the phase was right on the CD to begin with (and sometimes it is), it will be all right to make it post-ring--one is altering the signal, but not in some annoying way. The alteration might not even be really audible, because of masking. But what would be audible, and presumably good, would be to get rid of the pre-ringing from wrongly made CDs--because pre-ringing is hyper-annoying.
                > > >
                > > > I agree with Victor that well-made CDs can sound extremely good. There is quite a lot of evidence that correctly done CD digital is remarkably close to transparent. But the problem is that all too often it has not been correctly done!
                > > >
                > > > So what we have, it seems, with apodising filters is a system that changes the signal but is harmless (or largely so) when it is not needed and very helpful when it is needed.
                > > >
                > > > I would suppose that having it switchable would be sensible!
                > > >
                > > > Anyway, that is the way I see the mathematics.
                > > >
                > > > I have not heard the results yet. (I am hoping to get my hands on a Meridian player soon).
                > > >
                > > > If I have interpreted this wrong, UB should feel free to say so.
                > > > (I have not studied this in microdetail as yet--I am busy finishing my book.)
                > > >
                > > > REG
                > > >
                > >
                >
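The impulse picture in REG's quoted message can be checked numerically. A minimal sketch (my own, assuming NumPy; the tap count and Hamming window are arbitrary choices, not any manufacturer's filter): band-limiting an impulse with a linear-phase (symmetric) lowpass gives a sinc-like pulse with ripples both before and after the main peak--the pre-ringing that an apodising filter is meant to trade for post-ringing.

```python
import numpy as np

fs = 44100
cutoff = 20000.0                          # band edge in Hz
n = np.arange(-200, 201)                  # filter taps, centred on the impulse
h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)  # ideal lowpass impulse response
h *= np.hamming(n.size)                   # truncate gracefully to finite length

peak = int(np.argmax(h))                  # main lobe sits at n = 0
pre = float(np.max(np.abs(h[:peak - 2])))   # largest ripple before the peak
post = float(np.max(np.abs(h[peak + 3:])))  # matching ripple after the peak
print(pre, post)    # a symmetric (linear-phase) filter rings equally before and after
```

Because the filter is symmetric, the pre- and post-ringing are identical in size; a minimum-phase (apodising-style) design would move that ringing entirely behind the peak, where the sound itself masks it.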