# Hamming code - Computer Definition

A family of linear error-correcting codes used for forward error correction (FEC), Hamming codes can detect and correct single-bit errors by adding multiple parity bits to a data set. As an example, one of the simplest Hamming codes is the 7,4 code, which uses each group of four data bits to compute three parity bits, appended to the original four bits prior to transmission. If any of the seven bits is altered in transit, the receiving device can easily identify, isolate, and correct the bit in error.

The 7,4 code is generally considered impractical, as it involves a non-standard character length. More complex Hamming codes based on standard character lengths (e.g., 11,7 for ASCII and 12,8 for EBCDIC) can also detect and distinguish two-bit and three-bit errors, but cannot correct them. Hamming code was invented in the late 1940s by Richard W. Hamming of Bell Labs. See also ASCII, data set, EBCDIC, error control, FEC, and parity bit.
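The 7,4 scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not a production FEC implementation; the function names and the bit layout (parity bits at positions 1, 2, and 4, the classic Hamming arrangement) are choices made for this example, not taken from the source text.

```python
def hamming74_encode(d):
    """Encode four data bits [d1, d2, d3, d4] into a 7-bit codeword.
    Parity bits occupy positions 1, 2, 4 (1-indexed); data bits 3, 5, 6, 7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Return (corrected data bits, error position), where position 0
    means no error was detected."""
    c = c[:]  # work on a copy
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1  # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome
```

The syndrome bits, read as a binary number, point directly at the position of a single flipped bit, which is why the receiver can both locate and correct it.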

An error-correction method that adds three check bits to each group of four data bits. At the receiving station, the check bits are used to detect and correct one-bit errors automatically.