Multirate Digital Systems
Multirate digital systems process signals at multiple sample rates, enabling efficient implementation of signal processing operations that would be computationally prohibitive at a single rate. By strategically changing sample rates throughout a processing chain, designers can minimize computational load, reduce memory requirements, and achieve sophisticated spectral manipulations impossible with single-rate approaches. From telecommunications and audio processing to radar systems and medical imaging, multirate techniques underpin many of the signal processing advances that define modern technology.
The fundamental insight behind multirate processing is that different parts of a signal processing system may not require the same sample rate. High sample rates are necessary only where the signal contains high-frequency content that must be preserved. Once filtering removes high-frequency components, the sample rate can be reduced without losing information. Conversely, when signals must be combined with others at higher rates or prepared for conversion to analog form, sample rates must be increased through interpolation. This flexibility in sample rate enables dramatic improvements in implementation efficiency.
Foundations of Multirate Processing
Multirate signal processing rests on the mathematical framework describing how sample rate changes affect both time-domain and frequency-domain signal representations. Understanding these effects is essential for designing systems that change rates without introducing artifacts or losing information. The interplay between sampling rate and spectral content determines what rate changes are permissible and what filtering is required to execute them properly.
Sample Rate and the Nyquist Theorem
The Nyquist-Shannon sampling theorem establishes that a bandlimited signal can be perfectly reconstructed from its samples if the sampling rate exceeds twice the signal's highest frequency component. This critical frequency, the Nyquist frequency, equals half the sampling rate. Any frequency content above the Nyquist frequency cannot be distinguished from lower frequencies, a phenomenon called aliasing that irreversibly corrupts the signal.
When a signal's bandwidth is significantly less than the Nyquist frequency, it is oversampled relative to its information content. This excess sample rate represents wasted computational resources when processing the signal. Multirate techniques exploit this observation by reducing the sample rate after filtering removes high-frequency components, computing subsequent operations at the minimum rate necessary to preserve signal fidelity.
Conversely, signals sometimes must be represented at higher sample rates than their bandwidth strictly requires. Interfacing with digital-to-analog converters, combining signals sampled at different rates, or preparing signals for further processing may all require increased sample rates. Interpolation increases rates by computing new samples between existing ones, effectively filling in the signal waveform at finer time resolution.
Spectral Effects of Rate Changes
Downsampling by a factor M, which retains every Mth sample while discarding the rest, stretches the signal's spectrum by the factor M in normalized frequency. Content that occupied only one-Mth of the original Nyquist range now fills the entire range up to the new Nyquist frequency, which is one-Mth of the original. If the original signal had energy above the new Nyquist frequency, this energy aliases into the baseband, corrupting the signal. Anti-aliasing filters must remove this high-frequency content before downsampling.
Upsampling by a factor L, which inserts L-1 zeros between each original sample, creates L-1 spectral images of the original signal spectrum at multiples of the original sampling frequency. These images appear because zero insertion compresses the spectrum by the factor L in normalized frequency without altering its content, so replicas of the baseband spectrum fall within the new Nyquist range, which is L times wider than the original. Interpolation filtering removes these images, leaving only the baseband spectrum while filling in the time-domain signal values at the new sample positions.
The frequency scaling associated with rate changes requires careful attention when designing multirate systems. Filter specifications must account for the relationship between continuous frequency and normalized digital frequency at each sample rate in the system. A filter specified in terms of continuous frequency (Hertz) will have different normalized cutoff frequencies at different sample rates, a fact that must be considered when cascading rate changes with filtering operations.
Basic Building Blocks
The downsampler, symbolized by a down-arrow with the decimation factor M, reduces sample rate by keeping every Mth sample. Mathematically, if the input sequence is x[n], the output is y[m] = x[mM]. This operation is memoryless in the sense that each output depends only on one input, but it is not reversible because information is discarded.
The upsampler, symbolized by an up-arrow with the interpolation factor L, increases sample rate by inserting L-1 zeros between each input sample. If the input is x[n], the output is y[m] = x[m/L] when m is a multiple of L, and zero otherwise. Like downsampling, upsampling is memoryless, but unlike downsampling, no information is lost because the original samples can be recovered by downsampling the result.
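As a concrete illustration, the two building blocks amount to a few lines of array manipulation. The following sketch uses NumPy; the function names are illustrative.

```python
import numpy as np

def downsample(x, M):
    """Keep every Mth sample: y[m] = x[m*M]."""
    return x[::M]

def upsample(x, L):
    """Insert L-1 zeros between samples: y[m] = x[m/L] when L divides m, else 0."""
    y = np.zeros(len(x) * L, dtype=float)
    y[::L] = x
    return y

x = np.arange(4.0)            # [0, 1, 2, 3]
print(downsample(x, 2))       # [0., 2.]
print(upsample(x, 2))         # [0., 0., 1., 0., 2., 0., 3., 0.]
```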
Filters in multirate systems serve different purposes depending on their location relative to rate changes. Anti-aliasing filters before downsamplers remove frequency content that would alias. Interpolation filters after upsamplers remove spectral images created by zero insertion. The design specifications for these filters directly impact the quality of the rate conversion and the overall system performance.
Decimation
Decimation reduces the sample rate of a digital signal while preserving its essential information content. The operation consists of lowpass filtering to remove high-frequency components that would alias, followed by downsampling to reduce the rate. Efficient decimation implementations avoid computing filter outputs that will subsequently be discarded, dramatically reducing computational requirements compared to naive approaches.
Decimation Process
The complete decimation process by factor M consists of lowpass filtering followed by downsampling. The anti-aliasing filter must have cutoff frequency at or below pi/M (normalized to the input Nyquist frequency) to prevent aliasing when the rate is reduced. Any frequency content between pi/M and pi in the input signal will fold back into the baseband after downsampling unless removed by the filter.
The filter's passband should extend to the highest frequency of interest in the signal, while the stopband should attenuate potential aliasing components to acceptable levels. The transition bandwidth represents a compromise between filter complexity and spectral efficiency. Sharper transitions require higher-order filters with more coefficients and greater computational cost.
Stopband attenuation requirements depend on the application's sensitivity to aliasing. Audio applications typically require 80 dB or more of attenuation to keep aliasing below audible levels. Communications systems may specify stopband attenuation based on adjacent channel interference requirements. Scientific and measurement applications may need even greater attenuation to preserve data integrity.
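A straightforward (not yet computation-optimized) decimator combining these elements might look like the following sketch, which assumes SciPy and uses a Kaiser-window FIR design sized from the requested stopband attenuation; the 10% transition band is an illustrative choice.

```python
import numpy as np
from scipy import signal

def decimate_fir(x, M, atten_db=80.0):
    """Decimate by M: anti-alias lowpass below pi/M, then keep every Mth sample."""
    width = 0.1 / M                       # transition width, as a fraction of input Nyquist
    numtaps, beta = signal.kaiserord(atten_db, width)
    # Cutoff placed so the stopband begins at the new Nyquist frequency (1/M).
    h = signal.firwin(numtaps, 1.0 / M - width / 2, window=("kaiser", beta))
    return signal.lfilter(h, 1.0, x)[::M]
```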
Efficient Decimation Implementation
Direct implementation of decimation wastes computation by evaluating filter outputs that will be discarded. For a decimation factor of M, M-1 out of every M filter outputs need not be computed. Rearranging the computation to avoid this waste reduces the operations per output sample by factor M.
The efficient approach computes only the retained outputs by stepping through the input signal M samples at a time. At each step, the filter's convolution sum is computed using only the input samples that contribute to that particular output. This restructuring requires tracking which input samples and filter coefficients participate in each output computation.
For an FIR filter with N coefficients operating at decimation factor M, the efficient implementation requires N multiplications and N-1 additions per output sample. Since the output rate is M times lower than the input rate, this translates to N/M multiplications per input sample, compared to N multiplications per input sample for the naive approach. The savings become substantial for large decimation factors.
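The restructuring can be sketched directly: step through the input M samples at a time and evaluate the convolution sum only at the retained indices. This NumPy sketch assumes an FIR filter h and ignores start-up transients.

```python
import numpy as np

def fir_decimate_direct(x, h, M):
    """FIR decimation that evaluates the convolution only at the retained outputs.

    Cost: len(h) multiplies per output sample, i.e. len(h)/M per input sample,
    versus len(h) per input sample if every filter output were computed.
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    N = len(h)
    num_out = (len(x) - N) // M + 1
    y = np.empty(num_out)
    for m in range(num_out):
        segment = x[m * M : m * M + N]
        y[m] = np.dot(h, segment[::-1])   # convolution sum at the retained index
    return y

# Equivalent to the wasteful approach: np.convolve(x, h, mode='valid')[::M]
```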
Multistage Decimation
Large decimation factors are most efficiently implemented as cascades of smaller factors. A 100:1 decimation, for example, might be realized as cascaded stages of 10:1, 5:1, and 2:1 decimation. This approach reduces total computation because each stage operates at a different sample rate, with the most expensive filtering occurring at the lowest rate.
The anti-aliasing filter at each stage need only prevent aliasing from that stage's decimation. Later stages operate on signals with narrower bandwidth relative to the sample rate, allowing simpler filters with wider transition bands. The accumulated filtering across all stages provides the overall required frequency response.
Optimal factoring of the decimation ratio depends on the specific filter requirements, computational costs, and memory constraints. For power-of-two decimation factors, cascades of halfband filters are particularly efficient because halfband filters have roughly half their coefficients equal to zero. Non-power-of-two factors require more general filter designs but can still benefit from cascade implementation.
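As a sketch of the cascade idea, a 100:1 decimator can be assembled from stock single-stage decimators (here SciPy's FIR-based decimate); a production design would tailor each stage's anti-aliasing filter to its own, progressively relaxed transition-band requirement as described above.

```python
from scipy import signal

def decimate_multistage(x, factors=(10, 5, 2)):
    """Decimate by 100 as cascaded 10:1, 5:1, and 2:1 stages."""
    for M in factors:
        # Each stage filters and downsamples at a successively lower rate.
        x = signal.decimate(x, M, ftype="fir", zero_phase=False)
    return x
```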
CIC Decimators
Cascaded integrator-comb (CIC) filters provide extremely efficient decimation by large factors using only additions and subtractions, completely avoiding multiplication. The structure consists of N integrator stages operating at the input rate, a downsampler, and N comb stages operating at the output rate. The integrators accumulate running sums while the combs compute differences between delayed samples.
The CIC decimator's frequency response approximates a sinc function raised to the Nth power, with nulls at multiples of the output sample rate. This response provides inherent anti-aliasing for the decimation operation, though with significant passband droop and limited stopband attenuation. Applications requiring better frequency response typically follow a CIC stage with compensation filtering.
CIC filters excel as the first decimation stage in high-rate systems where multiplications would be prohibitively expensive. A CIC stage might reduce the rate by a factor of 100 or more, after which conventional FIR filtering at the reduced rate provides the required frequency response. This combination achieves both efficiency and performance.
Register growth in CIC filters requires attention to prevent overflow. Each integrator accumulates values that can grow to large magnitudes before the comb stages produce the differences. The required register width equals the input width plus N times log2(M) bits, where N is the number of stages and M is the decimation factor. Proper sizing ensures correct operation despite the large intermediate values.
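The structure reduces to running sums, a downsampler, and running differences, as in this integer-only NumPy sketch (N stages, differential delay of one). A wide accumulator plays the role of the extra register bits discussed above, and the output carries the CIC's DC gain of M^N.

```python
import numpy as np

def cic_decimate(x, M, N=3):
    """N-stage CIC decimator: integrators at the input rate, downsample by M,
    combs at the output rate. Additions and subtractions only."""
    acc = np.asarray(x, dtype=np.int64)      # wide accumulator absorbs register growth
    for _ in range(N):                       # integrator stages (running sums)
        acc = np.cumsum(acc)
    acc = acc[::M]                           # rate reduction
    for _ in range(N):                       # comb stages (first differences)
        acc = np.diff(acc, prepend=0)
    return acc                               # divide by M**N to restore unity DC gain
```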
Interpolation
Interpolation increases the sample rate of a digital signal by computing new sample values between the existing ones. The process involves upsampling by inserting zeros, then filtering to remove the spectral images created by the upsampling operation. The interpolation filter effectively reconstructs the continuous-time signal at the new sample instants, producing smooth output waveforms from the sparse input samples.
Interpolation Process
Upsampling by factor L inserts L-1 zeros between each input sample, creating a signal at the higher rate with the same spectral content as the original but accompanied by L-1 spectral images. These images are replicas of the baseband spectrum centered at multiples of the original sampling frequency. The interpolation filter removes these images, passing only the baseband component.
The interpolation filter's cutoff frequency should be pi/L (normalized to the output Nyquist frequency) to suppress the images while passing the signal. The filter's gain must be L to compensate for the energy reduction caused by zero insertion, which spreads the signal energy across L times as many samples. This gain normalization ensures that the interpolated signal has the same amplitude as the original.
The quality of interpolation depends on how well the filter approximates the ideal lowpass response. Perfect reconstruction would require an ideal brick-wall filter with infinite duration, which is impractical. Real interpolation filters trade off complexity against image suppression and passband flatness, with the acceptable compromise depending on the application.
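A direct (unoptimized) interpolator follows the same pattern in reverse, as in this SciPy sketch: zero insertion, then an image-rejection lowpass with cutoff pi/L and gain L. The filter length is an illustrative choice.

```python
import numpy as np
from scipy import signal

def interpolate_fir(x, L, num_taps=96):
    """Interpolate by L: insert L-1 zeros, then lowpass at pi/L with gain L."""
    up = np.zeros(len(x) * L)
    up[::L] = x                               # zero insertion creates L-1 images
    # Image-rejection filter; the factor L restores the original amplitude.
    h = L * signal.firwin(num_taps, 1.0 / L)
    return signal.lfilter(h, 1.0, up)
```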
Efficient Interpolation Implementation
Direct implementation of interpolation multiplies each zero-inserted sample by filter coefficients, wasting computation on multiplications by zero. Since L-1 out of every L input samples to the filter are zero, L-1 out of every L terms in the filter's convolution sum contribute nothing to the output. Eliminating these unnecessary operations reduces computation by factor L.
The efficient implementation recognizes that only every Lth filter coefficient multiplies a non-zero input sample at any given output time. As the output index advances, a different subset of L coefficients participates in the computation. The filter can be decomposed into L subfilters, each containing every Lth coefficient, with each subfilter producing one of the L output samples associated with each input sample.
This polyphase decomposition reduces the number of multiplications per output sample to N/L for an N-coefficient filter. Since the output rate is L times higher than the input rate, the total operations per input sample equals N, the same as filtering at the input rate without interpolation. The computational load grows with the filter order but not with the interpolation factor.
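A sketch of this decomposition in NumPy: the prototype filter is split into L subfilters, each of which runs at the input rate and supplies one output phase. The result matches filtering the zero-stuffed signal with the full prototype, without ever multiplying by an inserted zero.

```python
import numpy as np

def polyphase_interpolate(x, h, L):
    """Interpolate by L using L polyphase subfilters of the prototype h."""
    h = np.pad(np.asarray(h, dtype=float), (0, (-len(h)) % L))
    y = np.zeros(len(x) * L)
    for k in range(L):
        # Subfilter k holds h[k], h[k+L], h[k+2L], ... and runs at the input rate.
        yk = np.convolve(x, h[k::L])[:len(x)]
        # Output commutator: subfilter k supplies every Lth output, offset by k.
        y[k::L] = yk
    return y
```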
Interpolation Quality
Linear interpolation, the simplest approach, draws straight lines between adjacent samples. This corresponds to filtering with a triangular impulse response, which has poor frequency-domain characteristics including significant image content and passband droop. Linear interpolation is acceptable for visualization and some audio applications but inadequate for high-quality signal processing.
Sinc interpolation theoretically achieves perfect reconstruction by convolving with the sinc function, the impulse response of the ideal lowpass filter. However, the sinc function has infinite duration, requiring truncation for practical implementation. Windowed sinc interpolators achieve excellent quality with finite-length filters, with the window function controlling the trade-off between filter length and frequency response.
Polynomial interpolators use polynomials fitted to nearby samples to estimate intermediate values. Lagrange interpolators guarantee that the polynomial passes through the sample points, while spline interpolators optimize smoothness at the cost of not exactly matching sample values. These methods can be implemented efficiently using Farrow structures that separate the time-invariant and time-varying parts of the computation.
The choice of interpolation method depends on quality requirements, computational constraints, and the specific application. Audio applications typically require high-quality interpolation to avoid audible artifacts. Communications systems may tolerate more image content if it falls outside the channel bandwidth. Real-time embedded systems may be limited to simple interpolators by processing power constraints.
Multistage Interpolation
Large interpolation factors benefit from multistage implementation, paralleling the efficiency gains of multistage decimation. Cascading smaller interpolation factors allows simpler filters at each stage, with the most complex filtering occurring at the lowest rate where computation is cheapest.
The ordering of stages matters for efficiency. Unlike decimation, where filtering precedes downsampling, interpolation requires upsampling before filtering, and later stages in a cascade operate at progressively higher rates. Placing the smaller interpolation factors early and the largest factor last keeps the sharpest, most expensive filtering at the lowest rates, leaving the relaxed filters for the high-rate end of the chain.
CIC interpolators provide multiplication-free rate increase for large factors, complementing CIC decimators. The structure reverses the decimator, with comb stages at the input rate followed by upsampling and integrator stages at the output rate. Compensation filtering at the lower rate addresses the passband droop and limited stopband rejection of the CIC response.
Polyphase Filters
Polyphase decomposition restructures filters into parallel subfilters operating at reduced rates, providing the foundation for efficient multirate implementation. By partitioning a filter's coefficients into groups that can be processed independently, polyphase structures eliminate redundant computation associated with samples that will be discarded or were inserted as zeros. This elegant mathematical framework unifies the efficient implementation of decimation, interpolation, and filter banks.
Polyphase Decomposition Concept
A filter with N coefficients can be decomposed into M polyphase components, each containing approximately N/M coefficients. The kth polyphase component contains coefficients h[k], h[k+M], h[k+2M], and so on, effectively subsampling the original impulse response. These components, when properly combined, reproduce the original filter's operation.
The decomposition exploits the structure of rate conversion operations. In decimation, only every Mth output is retained, meaning that each output depends on a specific pattern of input samples and coefficients. The polyphase view separates these patterns, computing only the needed outputs. In interpolation, zero-valued input samples contribute nothing to outputs, and the polyphase view eliminates the corresponding multiplications.
Mathematically, the z-transform of a filter H(z) can be expressed as the sum of M terms, each involving a polyphase component Ek(z) evaluated at z^M and multiplied by z^(-k). This decomposition reveals the structure that enables efficient implementation and provides the algebraic framework for manipulating multirate systems.
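Written out, the decomposition described in the preceding paragraph is:

```latex
H(z) = \sum_{k=0}^{M-1} z^{-k}\, E_k\!\left(z^{M}\right),
\qquad
E_k(z) = \sum_{n} h[nM + k]\, z^{-n}.
```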
Polyphase Decimation Structure
In polyphase decimation, the input signal is distributed to M branches, with each branch receiving every Mth sample starting from a different offset. Each branch filters its samples with one polyphase component of the anti-aliasing filter, producing a partial result at the reduced output rate. The partial results combine to form the decimated output.
This structure computes exactly the samples that will be retained, avoiding the wasted computation of naive implementations. Each polyphase filter operates at the output rate rather than the input rate, processing only the samples that affect the retained outputs. The total computation equals that of filtering at the output rate with a filter of the original length.
The input commutator that distributes samples to branches can be implemented as a circular buffer with multiple read pointers or as explicit demultiplexing logic. The choice depends on the implementation platform and the specific rate change factor. Hardware implementations often favor the commutator view, while software implementations may use pointer manipulation within a single buffer.
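The branch-and-commutator arrangement can be sketched as follows (NumPy, illustrative names): each branch convolves its decimated input stream with one polyphase component, and the branch outputs are summed at the output rate.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate by M using M polyphase branches running at the output rate.

    Branch k filters every Mth input sample (offset by the commutator) with the
    polyphase component E_k(z) = h[k] + h[k+M] z^-1 + ...; the branch outputs
    sum to the anti-alias-filtered, downsampled signal.
    """
    x = np.asarray(x, dtype=float)
    h = np.pad(np.asarray(h, dtype=float), (0, (-len(h)) % M))
    num_out = len(x) // M
    y = np.zeros(num_out)
    for k in range(M):
        e_k = h[k::M]                             # polyphase component k
        if k == 0:
            x_k = x[::M]                          # commutator: offset-0 samples
        else:
            x_k = np.concatenate(([0.0], x[M - k::M]))
        y += np.convolve(x_k, e_k)[:num_out]
    # Equivalent to np.convolve(x, h)[::M][:num_out], at 1/M the arithmetic rate.
    return y
```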
Polyphase Interpolation Structure
Polyphase interpolation reverses the decimation structure. Each polyphase component processes the input samples at the input rate, producing one of the L output samples associated with each input. An output commutator cycles through the branch outputs, assembling the full-rate interpolated signal from the partial contributions.
This structure eliminates multiplication by the zero-valued samples inserted during upsampling. Each polyphase filter contains only the coefficients that would multiply non-zero inputs at its corresponding output phase. The total computation is independent of the interpolation factor, depending only on the filter length and input rate.
The output commutator can be viewed as a time-division multiplexer selecting among the polyphase outputs. In hardware, this might be a literal multiplexer switching between branch outputs. In software, it might be a loop iterating through stored branch results or interleaved computation and output in a single loop.
Polyphase Filter Design
Designing polyphase structures starts with designing the prototype filter meeting the overall frequency response requirements. The prototype filter's order determines the complexity of each polyphase component, with longer filters providing better frequency response at the cost of more computation per component.
The prototype filter is then decomposed into polyphase components by distributing its coefficients across the branches. For decimation by M, the prototype has cutoff frequency pi/M, and the polyphase components are the M interleaved subsets of its coefficients. For interpolation by L, the prototype has the same cutoff and is similarly decomposed into L branches.
Linear phase prototype filters yield polyphase components with favorable symmetry properties that can reduce computation further. If the prototype has symmetric coefficients, pairs of polyphase components are related by time reversal, and their inputs can be combined before filtering. This additional optimization approaches a factor of two reduction in multiplications.
Sample Rate Conversion
Sample rate conversion changes signals between arbitrary rates, generalizing the integer-factor operations of decimation and interpolation. Rational rate conversion combines interpolation and decimation to convert between rates related by a ratio L/M. Arbitrary rate conversion handles irrational ratios using continuously variable interpolation techniques. These capabilities enable interoperability between systems operating at different rates and precise synchronization between unsynchronized sources.
Rational Rate Conversion
Converting between sample rates related by the rational factor L/M requires interpolation by L followed by decimation by M. The signal is first upsampled by L, filling in L-1 samples between each original sample. The result is then downsampled by M, retaining every Mth sample of the interpolated signal. The output rate equals (L/M) times the input rate.
The interpolation and decimation filters can be combined into a single filter operating between the input and output rates. This combined filter must satisfy both the interpolation requirement (removing images from upsampling) and the decimation requirement (preventing aliasing from downsampling). The cutoff frequency is the minimum of pi/L and pi/M, normalized to the intermediate rate.
Polyphase implementation of rational rate conversion achieves efficiency by computing only the output samples actually needed. The polyphase structure cycles through L phases to produce the L intermediate samples per input sample, but only the phases corresponding to retained outputs need evaluation. The effective computation depends on the output rate, not the intermediate rate.
The greatest common divisor of L and M should be factored out before implementation to minimize the number of polyphase branches. Converting 44100 Hz to 48000 Hz involves the ratio 160/147, not 48000/44100, because the GCD of 48000 and 44100 is 300. This reduction significantly impacts implementation complexity for rates with common factors.
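A sketch using SciPy's polyphase resampler, with the common-factor reduction made explicit:

```python
import numpy as np
from math import gcd
from scipy import signal

def resample_rational(x, fs_in, fs_out):
    """Convert between sample rates related by a rational factor L/M."""
    g = gcd(int(fs_out), int(fs_in))
    L, M = int(fs_out) // g, int(fs_in) // g    # 48000/44100 reduces to 160/147
    # resample_poly upsamples by L, applies a single combined polyphase filter
    # with cutoff min(pi/L, pi/M), and downsamples by M.
    return signal.resample_poly(x, L, M)

y = resample_rational(np.random.randn(44100), 44100, 48000)
print(len(y))                                   # 48000: one second stays one second
```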
Arbitrary Rate Conversion
When the ratio of input to output rate is irrational or unknown, arbitrary rate conversion techniques compute output samples at precisely specified time offsets from the input grid. Rather than predetermining which input samples and filter coefficients combine for each output, the system calculates these combinations dynamically based on the required output timing.
Polynomial-based interpolation computes output samples using weighted combinations of nearby input samples, with weights determined by the fractional time offset from the input grid. Lagrange interpolators, cubic interpolators, and higher-order polynomials provide increasingly accurate approximations of the ideal sinc interpolation at increasing computational cost.
The Farrow structure implements polynomial interpolation efficiently by separating time-invariant and time-varying computations. A bank of FIR filters computes polynomial basis functions of the input samples, and these results are combined using the fractional delay as a parameter. This structure enables efficient implementation of continuously variable fractional delay with fixed filter coefficients.
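A minimal Farrow-structure sketch using cubic Lagrange interpolation over four neighboring samples: the fixed FIR part computes the polynomial coefficients, and a Horner evaluation in the fractional offset mu produces each output. The coefficient matrix below follows from the Lagrange basis polynomials; the names and test signal are illustrative.

```python
import numpy as np

# Fixed FIR part of a cubic-Lagrange Farrow interpolator.  Row p produces the
# polynomial coefficient c_p from the samples x[n-1], x[n], x[n+1], x[n+2].
FARROW_TAPS = np.array([
    [ 0.0,    1.0,  0.0,   0.0  ],   # c0
    [-1/3.0, -0.5,  1.0,  -1/6.0],   # c1
    [ 0.5,   -1.0,  0.5,   0.0  ],   # c2
    [-1/6.0,  0.5, -0.5,   1/6.0],   # c3
])

def farrow_sample(x, n, mu):
    """Estimate x(n + mu) for 0 <= mu < 1 from the four samples around n."""
    c = FARROW_TAPS @ x[n - 1 : n + 3]                    # time-invariant FIR part
    return ((c[3] * mu + c[2]) * mu + c[1]) * mu + c[0]   # time-varying Horner step

# Resample a sine at an irrational rate ratio as a quick check.
x = np.sin(2 * np.pi * 0.03 * np.arange(400))
ratio = np.pi / 3.0                              # output rate / input rate
t_out = np.arange(1.0, 396.0, 1.0 / ratio)       # output instants on the input time grid
y = np.array([farrow_sample(x, int(t), t - int(t)) for t in t_out])
```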
Asynchronous sample rate conversion uses arbitrary rate conversion to synchronize signals from unsynchronized sources. The conversion ratio may vary slowly as the source clocks drift relative to each other. A timing recovery system estimates the instantaneous rate ratio and adjusts the interpolator accordingly, maintaining synchronization despite clock imperfections.
Quality Considerations
Sample rate conversion quality is measured by how closely the converted signal matches ideal band-limited interpolation or decimation. Passband droop, image rejection, and aliasing rejection characterize the frequency-domain performance. Time-domain measures include step response overshoot, ringing, and group delay variation.
Audio sample rate conversion demands high quality to avoid audible artifacts. Professional audio converters achieve 140 dB or better rejection of images and aliases, with passband response flat within fractions of a decibel. Consumer audio accepts somewhat lower specifications but still requires careful design to avoid audible degradation.
Communications systems balance quality against latency and complexity. Real-time systems may accept some degradation to meet timing constraints. Burst-mode systems converting between packet rates and continuous rates face additional challenges at packet boundaries that require special handling.
Testing sample rate converters requires appropriate metrics and test signals. Swept-sine testing reveals frequency response and image/alias levels. Multitone testing reveals intermodulation products that swept-sine testing might miss. Time-domain testing with impulses and steps reveals transient behavior and potential instabilities.
Filter Banks
Filter banks decompose signals into multiple frequency bands for separate processing, then reconstruct the output by combining the processed bands. This spectral decomposition enables frequency-dependent processing operations, efficient coding through subband quantization, and spectral analysis with controllable resolution. Filter bank theory provides the mathematical framework for understanding and designing these systems.
Analysis and Synthesis Structure
An M-channel analysis filter bank consists of M bandpass filters covering the frequency range of interest, each followed by downsampling by factor M. The analysis filters divide the spectrum into M bands, and the downsamplers reduce each band to the minimum rate necessary for its bandwidth. The resulting subband signals collectively represent the original signal at the same total sample rate.
The synthesis filter bank reverses this process. Each subband signal is upsampled by M, filtered to remove images, and summed with the other bands to reconstruct the full-band signal. The synthesis filters must complement the analysis filters to achieve accurate reconstruction of signals that pass through the system unprocessed.
The analysis-synthesis cascade can process each subband independently before reconstruction. Different processing can be applied to different frequency bands, enabling operations like frequency-dependent gain adjustment, selective noise reduction, or subband coding. The filter bank structure isolates these operations from each other, simplifying design and implementation.
Perfect Reconstruction
A filter bank achieves perfect reconstruction if the analysis-synthesis cascade reproduces the input signal exactly, possibly with a fixed delay. This ideal behavior requires precise relationships between the analysis and synthesis filters that cancel all distortions introduced by the subband processing, including aliasing from the downsamplers.
Perfect reconstruction conditions can be expressed in terms of the filter transfer functions. The overall transfer function must be a pure delay (or a scaled delay for systems with gain), and the aliasing terms introduced by downsampling must cancel exactly. These conditions impose constraints that link the analysis and synthesis filters together.
For critically sampled filter banks, where the decimation factor equals the number of channels, perfect reconstruction requires specific filter designs. The constraints are stringent, limiting the achievable frequency selectivity for a given filter order. Oversampled filter banks relax these constraints by using smaller decimation factors, trading rate efficiency for design flexibility.
Near-perfect reconstruction accepts small reconstruction errors in exchange for simpler filter designs or better frequency selectivity. Audio coding applications often use near-perfect reconstruction filter banks where the errors are below audible thresholds. The acceptable error level depends on the application's perceptual or measurement requirements.
Polyphase Filter Bank Implementation
Polyphase structures provide efficient filter bank implementation by exploiting the common elements across channels. The analysis filter bank can be restructured as a polyphase network followed by a transform operation, typically the discrete Fourier transform (DFT). This structure reduces computation by sharing the polyphase filtering across all channels.
The DFT modulated filter bank uses a single prototype lowpass filter, modulated to different frequencies to create the analysis filters. The polyphase components of the prototype filter appear in the polyphase network, and the DFT provides the modulation. The fast Fourier transform (FFT) computes the DFT efficiently, making this structure attractive for large numbers of channels.
Synthesis filter banks have dual polyphase structures, with the inverse DFT followed by polyphase filtering. The synthesis prototype filter may differ from the analysis prototype, depending on the perfect reconstruction requirements. The overall structure achieves efficient implementation of both analysis and synthesis with shared computational resources.
Quadrature Mirror Filters
Quadrature mirror filters (QMF) form a special class of two-channel filter banks with specific symmetry properties. The QMF structure uses a lowpass filter and its frequency-shifted mirror image as the highpass filter, creating complementary filters that together cover the full frequency range. Originally developed for subband coding, QMF concepts extend to multi-channel filter banks and provide the foundation for many wavelet transforms.
Two-Channel QMF Structure
The two-channel QMF bank splits a signal into low-frequency and high-frequency bands using complementary filters. The lowpass filter H0(z) passes frequencies below pi/2, while the highpass filter H1(z) passes frequencies above pi/2. Both bands are downsampled by 2, and the synthesis bank upsamples and filters to reconstruct the original signal.
The quadrature mirror relationship requires H1(z) = H0(-z), making the highpass response a frequency-shifted version of the lowpass. This relationship ensures that the two filters together cover the full spectrum without gaps. The frequency responses are mirror images about pi/2, hence the quadrature mirror name.
Aliasing cancellation in QMF banks requires specific relationships between analysis and synthesis filters. For the standard QMF structure, the synthesis filters are time-reversed versions of the analysis filters, with the synthesis highpass negated. This configuration cancels the aliasing that would otherwise corrupt the reconstructed signal.
True QMF designs cannot achieve both perfect reconstruction and linear phase with FIR filters of finite length. The aliasing cancellation introduces amplitude distortion unless the filters satisfy constraints that preclude exact linear phase. Practical designs choose between nearly perfect reconstruction with linear phase or exact reconstruction with nonlinear phase.
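A two-channel sketch in NumPy makes these relationships concrete. With the two-tap Haar prototype used below, reconstruction happens to be exact (with a one-sample delay); longer QMF prototypes satisfy the aliasing cancellation but only approximate a flat overall response, as noted above.

```python
import numpy as np

def qmf_analysis(x, h0):
    """Split into low and high bands and downsample each by 2."""
    h1 = h0 * (-1.0) ** np.arange(len(h0))        # H1(z) = H0(-z)
    return np.convolve(x, h0)[::2], np.convolve(x, h1)[::2]

def qmf_synthesis(low, high, h0):
    """Upsample, filter with F0(z) = H0(z) and F1(z) = -H0(-z), and sum.

    This choice of synthesis filters cancels the aliasing from the downsamplers."""
    h1 = h0 * (-1.0) ** np.arange(len(h0))
    up_low = np.zeros(2 * len(low));   up_low[::2] = low
    up_high = np.zeros(2 * len(high)); up_high[::2] = high
    return np.convolve(up_low, h0) + np.convolve(up_high, -h1)

h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)          # Haar lowpass prototype
x = np.random.randn(64)
low, high = qmf_analysis(x, h0)
y = qmf_synthesis(low, high, h0)
print(np.allclose(y[1:1 + len(x)], x))            # True: input recovered, delayed by one sample
```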
Conjugate Quadrature Filters
Conjugate quadrature filters (CQF), also called power-symmetric or orthogonal filters, achieve perfect reconstruction in two-channel filter banks. The CQF relationship requires that the sum of squared magnitude responses of lowpass and highpass filters equals a constant. This power complementarity ensures that signal energy is preserved through the analysis-synthesis process.
CQF banks use the relationship H1(z) = z^(-N) * H0(-z^(-1)) for the highpass filter, where N is the filter order. The synthesis filters equal the time-reversed analysis filters. This configuration achieves exact reconstruction with FIR filters but requires nonlinear phase for the individual filters.
Orthogonal wavelets, including the Daubechies family, arise from iterated CQF filter banks. The perfect reconstruction property of the filter bank guarantees that the wavelet transform is invertible. The orthogonality property ensures that transform coefficients have unit energy and are uncorrelated, desirable properties for signal analysis and compression.
Biorthogonal Filter Banks
Biorthogonal filter banks use different filters for analysis and synthesis, relaxing the orthogonality constraint to enable linear phase and symmetric filters. The biorthogonality condition requires that the analysis filters be orthogonal to shifted versions of the synthesis filters, but the analysis and synthesis sets need not be orthogonal within themselves.
Linear phase biorthogonal filters are achievable with symmetric or antisymmetric coefficients. These filters produce symmetric wavelets that avoid the phase distortion of orthogonal designs. The popular 5/3 and 9/7 wavelet filters used in JPEG 2000 are biorthogonal designs with excellent coding performance and linear phase.
Biorthogonal filter design offers more degrees of freedom than orthogonal design, enabling optimization for specific applications. Regularity, which controls the smoothness of the associated wavelet, can be maximized independently for analysis and synthesis. Frequency response can be optimized separately for the lowpass and highpass filters within constraints required for perfect reconstruction.
Perfect Reconstruction Systems
Perfect reconstruction guarantees that a signal passing through a multirate system emerges unchanged except for a known delay. Achieving this property requires careful coordination between all system components, from filter design through implementation. Perfect reconstruction systems find application in subband coding, where any processing artifacts should arise from intentional quantization rather than filter bank imperfections.
Conditions for Perfect Reconstruction
Perfect reconstruction in an M-channel filter bank requires that the cascade of analysis and synthesis operations produces a pure delay or scaled delay. Mathematically, if X(z) is the input and Y(z) is the output, perfect reconstruction means Y(z) = c * z^(-d) * X(z) for some constant c and delay d. All other terms, including aliasing, must cancel exactly.
The aliasing cancellation conditions constrain the relationship between analysis and synthesis filters. In a critically sampled M-channel bank, M-1 aliasing terms arise from the downsampling operation. Each must be eliminated by appropriate choice of synthesis filters given the analysis filters, or by joint design of both sets.
The distortion function, representing the gain and phase through the system in the absence of aliasing, must be a pure delay for perfect reconstruction. This function depends on both analysis and synthesis filters and equals a delay only for specific filter combinations. Deviation from a pure delay indicates amplitude distortion or phase distortion in the reconstructed signal.
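For the two-channel case, with analysis filters H0 and H1 and synthesis filters F0 and F1, these two requirements take the form:

```latex
\tfrac{1}{2}\bigl[F_0(z)H_0(z) + F_1(z)H_1(z)\bigr] = c\,z^{-d}
\quad\text{(pure-delay distortion function)},
\qquad
\tfrac{1}{2}\bigl[F_0(z)H_0(-z) + F_1(z)H_1(-z)\bigr] = 0
\quad\text{(aliasing cancellation)}.
```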
Paraunitary Filter Banks
Paraunitary filter banks represent the analysis filter bank as a paraunitary matrix, meaning that the matrix times its conjugate transpose (with z replaced by 1/z*) equals the identity. This algebraic structure guarantees perfect reconstruction and ensures that the filter bank preserves signal energy. Orthogonal filter banks are paraunitary.
The paraunitary condition provides a framework for designing and analyzing perfect reconstruction systems. Filter banks can be constructed from cascades of simple paraunitary building blocks, with each block guaranteed to preserve the paraunitary property. This modularity simplifies both design and implementation.
Lattice structures implement paraunitary filter banks using cascades of rotation matrices, each parameterized by a single angle. The angles provide independent design parameters that can be optimized for frequency response or other criteria. The resulting filters automatically satisfy the perfect reconstruction conditions regardless of the angle values.
Design Methods for Perfect Reconstruction
Direct design methods specify the desired filter frequency responses and solve for coefficients satisfying the perfect reconstruction constraints. This approach works for short filters but becomes computationally challenging as filter length increases. The nonlinear constraints make the optimization problem difficult, and multiple solutions may exist with different frequency response properties.
Lattice-based design parameterizes the filter bank by rotation angles and optimizes these parameters for the desired response. Since any angle values yield perfect reconstruction, the optimization focuses purely on frequency response without constraints. This approach scales well to long filters and automatically satisfies reconstruction requirements.
Lifting-based design constructs filter banks through sequences of prediction and update steps. Starting from a simple filter bank like the Haar, lifting steps incrementally modify the frequency response while preserving perfect reconstruction. This approach provides intuition about the design process and enables integer-to-integer transforms useful for lossless coding.
Applications of Multirate Systems
Multirate techniques enable applications spanning telecommunications, audio processing, instrumentation, and beyond. The efficiency gains from operating at minimum necessary rates, combined with the spectral manipulation capabilities of filter banks, make multirate processing indispensable in modern signal processing systems.
Digital Audio Processing
Audio sample rate conversion enables interoperability between equipment operating at different standard rates. Professional audio uses 48 kHz and 96 kHz, while consumer formats use 44.1 kHz (CD) and various compressed formats. High-quality sample rate converters maintain audio fidelity when transferring between systems or combining sources at different rates.
Oversampling in digital-to-analog conversion uses interpolation to shift quantization noise to higher frequencies where simple analog filters can remove it. Rather than requiring sharp analog filters at the audio Nyquist frequency, oversampling allows gentle analog filtering that introduces less phase distortion. This technique, combined with noise shaping, enables high-quality audio from relatively simple analog output stages.
Subband coding underlies perceptual audio compression standards including MP3 and AAC. Filter banks decompose audio into frequency bands that can be quantized according to psychoacoustic models of human hearing. Bands where masking reduces audibility receive coarser quantization, achieving dramatic compression while maintaining perceptual quality.
Communications Systems
Digital down-conversion in software-defined radios uses decimation to translate signals from high intermediate frequencies to baseband. A complex mixer shifts the desired signal to zero frequency, and cascaded decimation stages reduce the sample rate while filtering out adjacent channels. This digital approach replaces analog mixing and filtering with flexible software implementations.
Channelization in multi-channel receivers uses filter banks to separate signals occupying adjacent frequency bands. A DFT-modulated filter bank efficiently extracts many channels from a wideband digitized input, enabling simultaneous reception of multiple signals with a single analog front end. This approach finds application in cellular base stations, spectrum monitoring, and satellite communications.
Symbol timing recovery in digital receivers uses interpolation to sample the received signal at optimal points within each symbol period. The timing recovery loop estimates the optimal sampling instant and adjusts the interpolator to sample there, even when the transmitter and receiver clocks are not synchronized. Polyphase interpolators provide the continuously variable delay needed for this application.
Image and Video Processing
Image scaling requires two-dimensional sample rate conversion to change image resolution. Separable processing applies one-dimensional interpolation or decimation first horizontally, then vertically. The filter design must consider the specific artifacts visible in images, including aliasing that appears as jagged edges and ringing around sharp transitions.
Video format conversion between different frame rates and resolutions uses sophisticated multirate techniques. Converting between film (24 fps) and video (30 fps) rates requires interpolating or dropping frames. Resolution conversion for display on different devices involves resampling to match the display pixel count. Motion-compensated rate conversion improves quality by accounting for movement between frames.
Wavelet-based image compression in JPEG 2000 uses two-channel filter banks applied iteratively to decompose images into multiple resolution levels. The decomposition concentrates image energy in a small number of coefficients, enabling efficient compression. The multiresolution structure supports progressive transmission where image quality improves as more data arrives.
Instrumentation and Measurement
Digital oscilloscopes use decimation to match the stored sample rate to the display resolution and measurement needs. High-speed acquisition captures signals at rates sufficient to resolve the fastest features, while decimation reduces the data to manageable amounts for display and analysis. Variable decimation enables zoom functions that show different time scales.
Spectrum analyzers use filter banks to compute frequency-domain representations of signals. The FFT-based approaches common in lower-cost instruments trade off frequency resolution against time resolution. Filter bank approaches can provide better control over this trade-off, with potential for non-uniform frequency resolution matched to the analysis requirements.
Biomedical signal processing often operates on signals with very low bandwidths, such as EEG (below 100 Hz) and ECG (below 150 Hz). Decimation reduces high sample rate inputs to rates appropriate for these bandwidths, reducing storage and processing requirements. Multi-rate processing enables real-time analysis on resource-constrained portable devices.
Implementation Considerations
Implementing multirate systems requires attention to practical issues beyond the theoretical framework. Finite precision arithmetic, memory management, real-time scheduling, and hardware constraints all affect system performance. Understanding these considerations enables designs that achieve theoretical performance limits within practical constraints.
Fixed-Point Implementation
Fixed-point arithmetic requires careful management of word lengths throughout the multirate system. Filter coefficients must be quantized to the available precision, affecting frequency response accuracy. Signal samples experience quantization at each processing stage, accumulating quantization noise that degrades signal-to-noise ratio.
Scaling at rate changes affects the dynamic range utilization. Decimation can increase signal levels as energy concentrates into fewer samples, potentially causing overflow. Interpolation can decrease levels as energy spreads across more samples, wasting dynamic range. Proper scaling maintains optimal signal levels throughout the processing chain.
CIC filters require special attention to register sizing due to their recursive structure. The integrators accumulate values that can grow very large, requiring wide registers to prevent overflow. The comb stages produce differences that reduce the word length requirements. Proper sizing throughout the structure ensures correct operation with minimum hardware cost.
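A small helper applying the sizing rule quoted earlier (input width plus N times log2 of the decimation factor); the numbers are illustrative.

```python
from math import ceil, log2

def cic_register_bits(input_bits, n_stages, decim_factor):
    """Register width needed to hold CIC integrator growth without overflow."""
    return input_bits + ceil(n_stages * log2(decim_factor))

print(cic_register_bits(16, 4, 64))   # 16 + 4*6 = 40 bits
```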
Memory Organization
Circular buffers efficiently manage the delay lines required for filtering and rate conversion. Rather than shifting data at each sample, pointer arithmetic tracks the current position in a fixed memory region. This approach eliminates data movement overhead and is particularly important for long filters or high sample rates.
Polyphase structures partition memory according to the processing phases. Input samples can be distributed to separate buffers for each polyphase branch, or a single buffer can be accessed with phase-dependent indexing. The choice depends on the memory architecture and whether parallel or sequential processing is used.
Buffer management between processing stages with different rates requires careful design. The rate mismatch means that buffer fill and drain rates differ, requiring sizing and flow control to prevent overflow or underflow. Double-buffering or more complex schemes may be needed to decouple stages with different timing requirements.
Real-Time Operation
Real-time multirate systems must complete all processing within the time available at the highest sample rate. Although multirate techniques reduce average computational load, worst-case timing must still meet deadlines. Cascaded rate changes create variable computational loads that complicate scheduling analysis.
Latency in multirate systems includes contributions from filter delays at each rate and buffering between stages. The group delay of anti-aliasing and interpolation filters translates to signal delay in the output. Applications sensitive to latency must account for these delays, potentially accepting reduced filter quality to minimize delay.
Interrupt-driven processing matches well to multirate systems, with different processing triggered at different rates. High-rate processing handles simple operations like CIC filtering, while lower-rate interrupts perform more complex operations with more time available per sample. Priority schemes ensure that higher-rate deadlines are never missed.
Summary
Multirate digital systems provide the theoretical foundation and practical techniques for processing signals at multiple sample rates. Decimation and interpolation change rates with appropriate filtering to prevent aliasing and remove images. Polyphase decomposition restructures these operations for efficient implementation, computing only the outputs actually needed and avoiding multiplication by zero-valued samples.
Sample rate conversion between arbitrary rates combines these building blocks with polynomial interpolation for continuously variable timing. Filter banks extend multirate concepts to spectral decomposition, enabling frequency-dependent processing and efficient coding. Perfect reconstruction systems guarantee that properly designed filter banks introduce no artifacts beyond intentional processing.
Quadrature mirror filters and their generalizations provide the specific filter relationships needed for perfect reconstruction two-channel systems, with extensions to wavelets and multi-channel banks. Applications spanning audio, communications, video, and instrumentation demonstrate the broad utility of multirate techniques in modern signal processing systems.
Implementation considerations including fixed-point arithmetic, memory organization, and real-time operation translate theoretical designs into working systems. Understanding both the mathematical framework and practical constraints enables designers to create multirate systems that achieve excellent performance within the limitations of real hardware and software platforms.