Analog to Digital Conversion

Larry D. Paarmann

Wichita State University, Department of Electrical and Computer Engineering, Wichita, Kansas

First published: 14 April 2006

Abstract

The process of converting an analog, or continuous-time, signal into a digital signal is known as analog-to-digital conversion (ADC). Many signals are inherently analog. Examples of analog signals obtained from transducers abound. Other analog signals are inherently electrical, such as those obtained from biomedical electromyography or electroencephalography electrodes. In many cases it is desirable to process a signal with a microprocessor or computer to extract information, or to store the signal in a digital medium such as a hard drive, a floppy disk, or a CD-ROM. In such cases, if the original signal is in analog form, it must be converted into an equivalent digital signal.
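As a concrete illustration of the two operations involved, sampling and quantization, the short C program below samples a sine wave at a fixed rate and maps each sample to a B-bit code. It is a minimal sketch only; the sampling rate, test-tone frequency, and bit depth are illustrative assumptions and are not taken from the article.

    /* Minimal sketch of ideal ADC: sample a continuous-time signal,   */
    /* then quantize each sample to a B-bit code.  All parameters are  */
    /* assumed values chosen for illustration.                         */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define B   8         /* quantizer resolution in bits (assumed)    */
    #define FS  8000.0    /* sampling rate in Hz (assumed)             */
    #define F0  440.0     /* analog test-tone frequency in Hz (assumed)*/

    int main(void)
    {
        const int    levels = 1 << B;           /* 2^B output codes    */
        const double step   = 2.0 / levels;     /* step size for a     */
                                                /* [-1, +1) input range*/
        for (int n = 0; n < 16; n++) {
            double t    = n / FS;                   /* sampling instant */
            double x    = sin(2.0 * M_PI * F0 * t); /* "analog" value   */
            int    code = (int)floor(x / step);     /* quantize         */
            if (code >  levels / 2 - 1) code =  levels / 2 - 1; /* clamp*/
            if (code < -levels / 2)     code = -levels / 2;
            double xq = (code + 0.5) * step;        /* quantized level  */
            printf("n=%2d  x=% .5f  code=%4d  xq=% .5f\n", n, x, code, xq);
        }
        return 0;
    }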

This article describes the process of ADC. It introduces the topic and describes the operation of two of the most common methods of accomplishing ADC: the successive approximation ADC and the flash ADC. The sampling theorem is stated and proved, aliasing is described, and quantization noise is discussed. Oversampling is described, and the resultant improvements in ADC performance are documented.
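For orientation, the standard relations behind these results are summarized below in LaTeX notation; this is a generic summary and does not reproduce the article's own notation or derivations. For a signal bandlimited to f_max, sampling at f_s > 2 f_max avoids aliasing; a B-bit uniform quantizer with step size Δ contributes noise of power Δ²/12; and oversampling by a factor M, followed by filtering to the signal band, removes the out-of-band portion of that noise.

\[
f_s > 2 f_{\max}, \qquad
\sigma_q^{2} = \frac{\Delta^{2}}{12}, \qquad
\mathrm{SQNR}_{\mathrm{peak}} \approx 6.02\,B + 1.76\ \mathrm{dB},
\]
\[
\text{oversampling by } M: \quad
\mathrm{SQNR} \approx 6.02\,B + 1.76 + 10\log_{10} M\ \mathrm{dB},
\]

that is, roughly 3 dB, or one half bit of effective resolution, per octave of oversampling.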

Delta-sigma ADCs are then introduced and described. Conventional first-order and second-order delta-sigma ADCs are evaluated. Performance comparisons among simple oversampling, first-order delta-sigma, and second-order delta-sigma ADCs are documented. The article concludes with a performance evaluation of an example higher-order delta-sigma ADC.
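A minimal sketch of a first-order delta-sigma modulator, paired with a crude averaging decimator, is given below in C; the oversampling ratio, the input signal, and the decimation filter are assumptions made for illustration and do not reproduce the article's designs or results. The loop integrates the difference between the input and the fed-back 1-bit quantizer output, which pushes quantization noise toward high frequencies where the decimation filter can remove it.

    /* Minimal sketch of a first-order delta-sigma modulator with a    */
    /* simple averaging decimator.  All parameters are assumed values. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define OSR   64     /* oversampling ratio (assumed)               */
    #define NOUT  8      /* decimated output samples to print          */

    int main(void)
    {
        double integrator = 0.0;  /* single integrator state           */
        double feedback   = 0.0;  /* previous 1-bit output (+1 or -1)  */

        for (int m = 0; m < NOUT; m++) {
            double acc = 0.0;                    /* sum of 1-bit outputs */
            for (int k = 0; k < OSR; k++) {
                int    n = m * OSR + k;
                /* slowly varying input, well below the oversampled rate */
                double x = 0.5 * sin(2.0 * M_PI * n / (double)(OSR * NOUT));
                integrator += x - feedback;      /* integrate the error  */
                feedback = (integrator >= 0.0) ? 1.0 : -1.0; /* 1-bit ADC */
                acc += feedback;
            }
            /* averaging OSR one-bit samples crudely recovers the input */
            printf("decimated sample %d: %+.4f\n", m, acc / OSR);
        }
        return 0;
    }

In a practical converter the boxcar average above would be replaced by a proper decimation filter, and higher-order loops shape the quantization noise more aggressively, which is the source of the performance differences the article documents.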
