Klarinet Archive - Posting 000733.txt from 1997/11

From: Jonathan Cohler <cohler@-----.net>
Subj: Re: Nyquist, web page ok
Date: Thu, 20 Nov 1997 11:18:27 -0500

Jerry Korten writes:

>Well, I understand now why I am in disagreement with the web page. In the
>case of a sampled waveform, in a non A/D world, you can in fact represent
>the waveform provided you use an interpolating function
>sin(pi*t/T)/(pi*t/T) to filter (interpolate) the sampled signal in order
>to reconstruct the sampled signal. I guess the standards page is correct
>;^)
>

This is precisely what Jordan Selburn pointed out, what I have pointed out
and what is pointed out in all detailed mathematical descriptions of the
Nyquist theorem.
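For the curious, that sinc-kernel reconstruction can be sketched in a few lines of Python (a hypothetical illustration, not anything from the original posts; it uses NumPy's np.sinc, which is defined as sin(pi*x)/(pi*x)):

```python
import numpy as np

# Illustrative sketch: sample a 5 Hz sine at fs = 100 Hz (well above the
# 10 Hz Nyquist rate), then reconstruct a point *between* samples with
# the interpolating kernel sin(pi*t/T)/(pi*t/T).
fs = 100.0            # sample rate, Hz
T = 1.0 / fs          # sample period, s
f = 5.0               # signal frequency, Hz

n = np.arange(100)                       # sample indices
samples = np.sin(2 * np.pi * f * n * T)  # exact (unquantized) samples

def reconstruct(t):
    """Whittaker-Shannon interpolation at time t from the exact samples."""
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc((t - n*T)/T) is the kernel
    return np.sum(samples * np.sinc((t - n * T) / T))

t = 0.123  # a time that falls between two sample instants
# The only error comes from truncating the (infinite) sum to 100 terms.
print(abs(reconstruct(t) - np.sin(2 * np.pi * f * t)))
```

Note that the theorem assumes the samples are exact; nothing in the sketch above quantizes them.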

>This issue remains however, that a signal sampled by an A/D cannot be
>"exactly" reconstructed from its samples.

Here we go again! This is wrong. Wrong. Wrong. What you talk about
below is called quantization error. The Nyquist theorem does not deal with
quantization. It assumes and states that the samples are exact samples
taken from the signal. See below.

> I thought this over last night
>and have decided that an example is the best way to describe this:
>
>An A/D is a discrete quantizing device. It takes a varying analog signal (a
>voltage) and converts it into one of several levels. For a 16-bit A/D that
>is 65,536 levels. But for the sake of example I would like to refer to an
>imaginary A/D converter that converts our music signal into 10 levels. Say
>we have things set up so that a voltage varying from 0 to 10 volts is
>digitized such that anything between 0 and .99 volts gets a value of 0,
>anything from 1 to 1.99 gets a value of 1, a voltage from 2 to 2.99 a value
>of 2 and so on up until 9 A/D units (0 - 9 or 10 different possibilities).
>The A/D converter will "look" at this voltage once every hundredth of a
>second. This is called the sample period.
>
>Now we can visualize the problem with the sampling theorem when applied to
>A/D signals for the following reason:
>
>Suppose the signal was ramping up so that when we first sample it the level
>was 1 volt, this would correspond to a 1. The next period the waveform is
>at 2.9 volts, corresponding to a reading of 2. The next is 3 volts,
>corresponding to an A/D reading of 3, and so on. Our sampled sequence is 1,
>2, 3
>
>Now we consider a voltage that starts at 1.1 volts (A/D value = 1), goes to
>2.0 volts (A/D value @-----. Our sample sequence is
>now also 1, 2, 3.
>
>The interpolation required to reconstruct the sampled signal when applied
>to these two equal data streams will produce the same "replica" of a
>waveform. And only one waveform is possible from the interpolation. But we
>clearly saw two different waveforms being measured.

Again, you are demonstrating that you do not understand the basic
underlying principles and mathematics involved in the Nyquist theorem.

The example you give above illustrates what is called quantization error,
and is purely a matter of the smallest quantum unit of your measuring
device (which is not a part of the Nyquist theorem).
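To make this concrete, the ten-level quantizer from the example above is easy to sketch (hypothetical Python; the third voltage in the quoted message is garbled in the archive, so 3.5 V is assumed here, since any value between 3 and 4 V yields the code 3):

```python
import numpy as np

# The imaginary 10-level A/D from the example: each voltage in [0, 10) V
# maps to the integer code floor(v), i.e. 0-0.99 V -> 0, 1-1.99 V -> 1, ...
def quantize(volts):
    return np.floor(volts).astype(int)

ramp_a = np.array([1.0, 2.9, 3.0])  # first ramp from the example
ramp_b = np.array([1.1, 2.0, 3.5])  # a different ramp (3.5 V is assumed)

print(quantize(ramp_a))  # [1 2 3]
print(quantize(ramp_b))  # [1 2 3] -- same codes from different signals
```

The two ramps produce identical codes, which is exactly the quantization (measurement-resolution) ambiguity, not a failure of the sampling theorem.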

The accuracy with which we can measure an audio signal is expressed by the
number of bits in the samples. Sixteen-bit samples allow one to discern
65,536 different levels. This is what CDs use.

Therefore, the quantization error that you are discussing is down from the
maximum signal level by 96dB in the case of 16-bit recordings. In the case
of a 20-bit recording, this would be down by 120dB from the maximum signal
level.
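Those dB figures follow from the standard rule that an ideal n-bit quantizer gives 20*log10(2^n) dB of dynamic range, roughly 6 dB per bit. A quick check (illustrative Python, not from the original post):

```python
import math

# Ideal n-bit dynamic range: 20 * log10(2**n) dB, about 6.02 dB per bit.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3  (CD, the "96dB" above)
print(round(dynamic_range_db(20), 1))  # 120.4 (the "120dB" above)
print(round(dynamic_range_db(24), 1))  # 144.5
```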

Think of it this way. In an ideal situation, the difference between the
samples taken and the actual points on the original wave is at most 1/2 of
the least-significant bit of the sample size. In other words, if we were
to represent the reconstructed sampled signal as the original signal plus a
"quantization-error waveform", then the maximum peak-to-peak amplitude of
that QE waveform would be one quantum step, i.e. 1 divided by the number of
quantum steps as a fraction of full scale.

In the case of CDs (which store 16-bit samples) that is 1/65,536.

This amount of error is virtually inaudible; it becomes marginally audible
only in the most extreme situations, such as listening to virtually silent
material with your stereo cranked up to the max.

Nonetheless, this is why there is a movement afoot to raise the sample size
to 24 bits, at which level the quantization error can be considered totally
inaudible.

And once again, this quantization error is much smaller than the kinds of
distortions that are introduced in analog playback systems (such as
cassette tapes and LPs).

>
>This quantizing affects both the magnitude and phase of a signal.

Wrong.

Ideal quantization does not affect magnitude and phase. It does introduce
so-called "quantization noise", which is correspondingly removed in the
output filtering stage of a D/A converter. And it does introduce the small
QE waveform described above.

By the way, most commercial recordings these days use sophisticated
"dithering" algorithms to deal with this least-significant-bit
quantization problem. There are different algorithms, such as Sony's
Super Bit Mapping (TM) and other so-called "noise-shaping" algorithms,
that in essence smooth out the least significant bit of the signal so that
we don't hear anything anomalous even in very soft passages.
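Super Bit Mapping itself is proprietary, but the basic idea can be sketched with plain TPDF (triangular) dither, a standard textbook technique; the Python below is a hypothetical illustration, not Sony's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_16bit(x, dither=False):
    """Map a signal in [-1, 1] to 16-bit codes, optionally with TPDF dither.

    The dither (sum of two uniform +/-0.5 LSB values) randomizes the
    rounding decision so the error becomes noise-like instead of being
    correlated with the signal -- the "smoothing" described above.
    """
    scaled = x * 32767.0
    if dither:
        scaled = scaled + rng.uniform(-0.5, 0.5, x.shape) \
                        + rng.uniform(-0.5, 0.5, x.shape)
    return np.round(scaled).astype(np.int16)

# A very soft passage: a tone only a few LSBs in amplitude, where plain
# rounding would produce distortion correlated with the signal.
t = np.arange(1000) / 44100.0
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)
plain = quantize_16bit(quiet)                  # error tracks the signal
dithered = quantize_16bit(quiet, dither=True)  # error becomes benign noise
```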

I suppose this is the digital equivalent of tape hiss. Only it is at a
level that is roughly 20dB lower, i.e. virtually inaudible.

-------------------
Jonathan Cohler
cohler@-----.net

     Copyright © Woodwind.Org, Inc. All Rights Reserved