# Nyquist theorem - Computer Definition

The theorem developed by Harry Nyquist and published in his 1928 paper entitled "Certain Topics in Telegraph Transmission Theory." The Nyquist theorem states that an analog signal waveform can be converted to digital form and reconstructed without error from samples taken at equal time intervals, provided the sampling rate is equal to or greater than twice the highest frequency component in the analog signal. The theorem forms the basis for pulse code modulation (PCM), the fundamental method for converting analog voice to digital form. See also Nyquist, Harry and PCM.

The concept behind digitizing sound. Working at Bell Labs, Harry Nyquist discovered that it was not necessary to capture the entire analog waveform; taking samples of the wave at regular intervals was sufficient. He also found that, for the sample pool to contain enough information to reconstruct the original waveform, the sampling rate must be at least twice the signal bandwidth.
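
The "at least twice" requirement can be illustrated with a short sketch (plain Python, not from the original text; the 3 kHz/5 kHz/8 kHz figures are chosen here purely for illustration). A tone above half the sampling rate produces exactly the same sample values as a lower-frequency tone, so the two cannot be told apart once sampled, a phenomenon known as aliasing:

```python
import math

FS = 8_000  # sampling rate in Hz (illustrative; below 2 x 5 kHz, so too slow for a 5 kHz tone)

def sample_cosine(freq_hz, fs_hz, n_samples):
    """Return n_samples of a unit-amplitude cosine at freq_hz, sampled at fs_hz."""
    return [math.cos(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

in_band = sample_cosine(3_000, FS, 16)   # 3 kHz < FS/2: within the Nyquist limit
aliased = sample_cosine(5_000, FS, 16)   # 5 kHz > FS/2: violates the Nyquist criterion

# The undersampled 5 kHz tone is indistinguishable from the 3 kHz tone:
# it aliases down to FS - 5 kHz = 3 kHz, sample for sample.
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(in_band, aliased))
```

Sampled at 10 kHz or faster, the same 5 kHz tone would no longer collide with any lower frequency, which is exactly what the theorem guarantees.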

**The Basis for PCM**

These realizations became the foundation for using pulse code modulation (PCM) to convert analog sound to digital in North America and Japan. A typical 4 kHz voice signal is sampled 8,000 times a second, with each sample converted into an 8-bit number, resulting in a 64 Kbps data stream (a single DS0 channel). See sampling and PCM.
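
The arithmetic behind the 64 Kbps figure can be checked directly. The quantities below come straight from the paragraph above (4 kHz voice bandwidth, the Nyquist factor of two, 8 bits per sample):

```python
VOICE_BANDWIDTH_HZ = 4_000   # highest frequency component in a telephone voice channel
BITS_PER_SAMPLE = 8          # each PCM sample is quantized to an 8-bit number

# Nyquist: sample at a rate at least twice the highest frequency component.
sampling_rate = 2 * VOICE_BANDWIDTH_HZ       # 8,000 samples per second
bit_rate = sampling_rate * BITS_PER_SAMPLE   # bits per second for one voice channel

print(sampling_rate)  # 8000
print(bit_rate)       # 64000 bps, i.e. 64 Kbps: a single DS0 channel
```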

**The Human Ear**

It was long believed that people cannot hear frequencies above 20 kHz (20,000 cycles per second). However, that figure was always an approximation, and it has been challenged since the first music CDs came on the market in the early 1980s. To deliver frequencies up to 20 kHz to the human ear, the analog wave is sampled at 44.1 kHz for a standard music CD. Next-generation audio formats take samples more frequently (see SACD and DVD-Audio).
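
The CD rate follows the same rule: half the sampling rate is the highest frequency that can be represented without aliasing. A quick check, using only the figures cited above:

```python
CD_SAMPLING_RATE_HZ = 44_100   # standard music CD sampling rate
HEARING_LIMIT_HZ = 20_000      # commonly cited upper limit of human hearing

# Highest frequency a 44.1 kHz stream can represent (the Nyquist frequency).
nyquist_limit = CD_SAMPLING_RATE_HZ / 2   # 22,050 Hz

# 44.1 kHz exceeds the 40 kHz minimum required for a 20 kHz signal; the
# margin above 40 kHz leaves room for the anti-aliasing filter to roll off.
assert CD_SAMPLING_RATE_HZ >= 2 * HEARING_LIMIT_HZ
print(nyquist_limit)  # 22050.0
```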