Klarinet Archive - Posting 000681.txt from 1997/11

From: Jonathan Cohler <cohler@-----.net>
Subj: Re: Nyquist and analog
Date: Wed, 19 Nov 1997 18:35:00 -0500

Jerry Korten writes:

>By definition, a sampling process that cannot represent a continuous
>process. This is a fact.

Not only is this wrong, but the opposite is true. In fact, the Nyquist
theorem's importance and central position in information theory and the
music business is precisely that one can mathematically PERFECTLY
reconstruct a signal whose highest frequency component is F from a set of
samples taken from the original signal periodically at a rate of at least 2F.

This is all mathematical, incontrovertible fact.
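
To make this concrete, here is a minimal Python sketch of the
Whittaker-Shannon interpolation formula that performs the reconstruction.
All the numbers are purely illustrative (a made-up signal with highest
component F = 3 Hz, sampled at fs = 8 Hz > 2F), not from the post:

```python
import numpy as np

fs = 8.0            # sampling rate (Hz), chosen greater than 2 * F
F = 3.0             # highest frequency component in the signal (Hz)

def signal(t):
    # a band-limited signal: components at 1 Hz and F = 3 Hz only
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * F * t)

n = np.arange(-200, 200)      # sample indices (wide window to limit truncation)
samples = signal(n / fs)      # the periodic samples, taken at rate fs

def reconstruct(t):
    # Whittaker-Shannon: sum of sinc kernels weighted by the samples
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.123                     # an arbitrary instant *between* samples
print(abs(reconstruct(t) - signal(t)))   # ~0, up to window truncation
```

The infinite interpolation sum has to be truncated to a finite window in
any numerical demo, so the tiny residual error here comes from that
truncation, not from any limitation of the theorem itself.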

>
> It is why the process is called sampling. The high
>frequency portion of the signal (which is filtered out to avoid aliasing)
>is not present on the recorded material when digitization occurs.
>

The "high frequency portion" that you refer to is all above 22KHz, which is
indeed filtered out in all modern-day, high-end A/D converters to avoid
aliasing as you point out. But as modern science has shown, human beings
cannot hear above 20KHz. Therefore this portion of the signal that is
removed is inaudible according to innumerable studies.
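
The aliasing that this filter prevents is easy to demonstrate in a few
lines of Python (illustrative numbers only: a hypothetical 30KHz tone and
the CD rate of 44.1KHz):

```python
import numpy as np

# An unfiltered component above fs/2 "folds down" into the audible band.
fs = 44100.0                       # CD sampling rate (Hz)
f_ultra = 30000.0                  # hypothetical ultrasonic tone above fs/2

n = np.arange(64)                  # sample indices
samples = np.cos(2 * np.pi * f_ultra * n / fs)

# Exactly the same samples are produced by an audible 14.1 kHz tone:
alias = np.cos(2 * np.pi * (fs - f_ultra) * n / fs)
print(np.max(np.abs(samples - alias)))   # ~0: the two are indistinguishable
```

Without the anti-aliasing filter, the 30KHz tone would be recorded as a
spurious, very audible 14.1KHz tone; removing it first is what keeps the
audible band clean.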

>This is
>the main reason that the harmonic structure on a CD does not match what the
>ear hears from a live performance or an analog recording.

This is utterly false. The big differences between recordings and live
performances have to do with things like speakers, microphones (and their
placement), and NOT to do with the removal of inaudible parts of the
spectrum.

>> http://www.its.bldrdoc.gov/fs-1037/dir-025/_3621.htm
>>Here is the theorem in summary:
>> An analog signal waveform may be uniquely reconstructed, without
>> error from samples taken at equal time intervals. The sampling
>> rate must be equal to, or greater than, twice the highest
>> frequency component in the analog signal.
>>
>And, when we are talking about digitizing music, filled with transients
>(piano, attacks etc.) this is where the application of this type of
>reasoning fails. You cannot reconstruct the actual waveform.

This is wrong once again. Saying things over and over doesn't make them
so. By the way, the site referenced above is one of the international
standards sites maintained by the U.S. government, and it has been peer
reviewed (as Mark C. already pointed out).

>
>The definition you provide above is not in agreement with my reference. In
>fact it is wrong. You should refer to page 29 of Oppenheim and Schafer
>"Digital Signal Processing" (Prentice hall), in which they describe how
>interpolation must be used to reconstruct the original waveform when using
>a discrete sampling system. And interpolation is an approximation, not
>actual reconstruction.

Again, you are confusing the two terms "interpolation" and "approximation".
While interpolation is often used in approximation methods, the two words
are not synonymous. For example, if I tell you that I am thinking of a
straight line that goes through points A and B, then you can mathematically
PERFECTLY reconstruct that line. And yes, it requires interpolation to
determine all the missing points. But no, they are not approximate, they
are exact.
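
In Python terms (with made-up points A and B just for illustration):

```python
# Two samples of a straight line determine it completely, so
# interpolating the missing points is exact, not approximate.
A = (0.0, 1.0)   # (x, y) of point A -- illustrative values
B = (4.0, 9.0)   # (x, y) of point B

def line_through(p, q):
    # reconstruct the unique line passing through p and q
    slope = (q[1] - p[1]) / (q[0] - p[0])
    return lambda x: p[1] + slope * (x - p[0])

f = line_through(A, B)
# The "missing" point at x = 2.5 is recovered exactly:
print(f(2.5))   # 6.0, exact
```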

This is analogous to what goes on in the D/A process. As Jordan already
pointed out, the digital samples are run through a filter which
mathematically PRECISELY interpolates the points of the original signal.

>
>This reference to a definition on the web is a big problem with the
>internet, there is no peer review involved with posting facts in this
>media. I always recommend a text with rigorous academic peer review.

Again, it would be nice if you checked your facts before responding.

>The Fourier theorem as described above deals with continuous waveforms.
>This means that in order to represent a waveform, the series of sinewaves
>used must go up in frequency to infinity. In fact in a digital system, the
>number of sinewaves used is limited. And therefore the waveform CANNOT be
>PRECISELY reconstructed. This is also a fact as represented in Oppenheim
>and Schafer page 15.

No, it must only go up to F, the highest frequency component of the
original signal. A band-limited signal contains no components above F, so
no infinite series of sinewaves is needed to represent it.

>
>There is nobody in the field of DSP who will agree that one can "precisely"
>reconstruct a digital waveform from its FFT (or DFT). The DFT samples at a
>discrete, finite number of frequencies and as a result distorts the
>waveform through a sampling process of its own. I can send you data that
>show this.

Once again, your "nobody will agree" arguments are irrelevant, untrue and
big space wasters. Let's deal in facts, not in who said what to whom.
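
One fact that is easy to check for yourself: the DFT of a finite block of
samples is exactly invertible, so the transform itself introduces no
distortion. A small NumPy sketch (arbitrary random samples, purely
illustrative):

```python
import numpy as np

# Round-trip any finite set of samples through the DFT and back.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # arbitrary sampled waveform

x_back = np.fft.ifft(np.fft.fft(x)).real # forward DFT, then inverse DFT

print(np.max(np.abs(x - x_back)))        # exact up to floating-point rounding
```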

>Please tell me exactly what
>type of DSP work do you do?

This again has nothing to do with the discussion at hand, but to satisfy
your curiosity, I have designed software and hardware systems, including
detailed algorithm design for numerous areas of the signal processing
industry. This has included areas such as image processing, speech
recognition, anti-submarine warfare, radar, and others.

>>I have yet to see a relevant fact from you. Again who concedes what to
>>whom has nothing to do with facts.
>Now you do!

Sorry, but I still haven't seen any facts.

>>The person or group of people who say something is true, does not make it
>>any more true or false. People's opinions have no bearing on facts.
>>People's feelings have no bearing on facts.
>>Again, bring on your double blind studies!
>
>
>The audio engineers are through with that stage (they have been convinced),
>they are currently trying to find a way to establish a higher fidelity
>recording medium than the current CD standard. So it is out of our hands
>already.

In other words, you don't have any facts to present so you are again
reverting to the omniscient "they".

To summarize,

1. The Nyquist theorem is a fact, and it states that a signal
with maximum frequency component F can be mathematically
perfectly reconstructed from a set of samples taken from the
signal at a rate of at least 2F. And the reference provided
above is absolutely correct (and yes, peer reviewed).

2. The human ear does not hear above 20KHz. That's why 44.1KHz
was chosen as the sampling frequency on CDs. The extra 4.1KHz
was added to accommodate the practical realities of anti-aliasing
filters.

3. Certainly, there were and are cheapo A/D and D/A converters
on the market that introduced audible problems with sound,
because they did/do not achieve the mathematical perfection
of the Nyquist theorem. But I am not and have at no point
been discussing these. There is no question that bad digital
equipment produces bad sound, as does bad analog equipment. This
discussion is about the comparison of the two fundamental
processes and their inherent limitations (or lack thereof).

4. *High-end* A/D, D/A converters today are so good, however, that
no double-blind, level-balanced study has been done that isolates
these components and shows any human being able to differentiate
between them. (If someone knows of one I'd love to hear about it.)

5. Digital recordings made with high-end equipment and played back
on suitably high-end equipment are certainly far superior in all
objective respects to analog recordings, including recordings
of the clarinet (which is where this all started).

6. People are still entitled to subjectively prefer analog recordings.
But that does not entitle the zealots to pretend that science
or objective reasoning is on their side.

Please do not inundate the list with unsubstantiated opinions about matters
of fact. It is an entirely unproductive and annoying use of this otherwise
very informative and fun list.

Cheers.

---------------------
Jonathan Cohler
cohler@-----.net

   