Measurement Techniques
Accurate electromagnetic compatibility measurements require more than just quality test equipment. The methodology applied during testing determines whether results are valid, repeatable, and truly representative of equipment performance under real-world conditions. From initial equipment calibration through final data analysis, each step in the measurement process introduces potential sources of error that must be understood and controlled. Proper measurement techniques transform raw instrument readings into meaningful emission and immunity characterizations.
The complexity of modern electronic systems and the stringent requirements of EMC standards demand rigorous attention to measurement methodology. Factors such as antenna calibration accuracy, cable losses, ambient noise levels, and detector function selection can individually cause measurement errors of several decibels, and their combined effects can lead to incorrect compliance determinations. This article examines the essential measurement techniques that ensure accurate, reproducible EMC test results and enable meaningful comparisons between different test sessions, laboratories, and products.
Antenna Calibration Methods
Antennas are fundamental transducers in radiated emission and immunity measurements, converting electromagnetic fields to voltages and vice versa. The accuracy of any radiated measurement depends directly on knowing the antenna's performance characteristics across the frequency range of interest. Antenna calibration establishes the relationship between the electric field strength at the antenna location and the voltage delivered to the measurement receiver, expressed as the antenna factor. Without accurate calibration data, radiated emission measurements cannot be traced to field strength values required for compliance determination.
Antenna factor is defined as the ratio of the electric field strength (in volts per meter) to the voltage (in volts) at the antenna output terminals, typically expressed in decibels as dB per meter. A calibrated antenna with known antenna factor allows conversion of measured voltages to field strengths through simple addition in the logarithmic domain. The calibration must cover the full frequency range of intended use with sufficient resolution to capture any resonances or variations in antenna performance. Temperature, humidity, and physical condition can affect antenna factor, so calibration conditions should be documented and regular recalibration performed.
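As a minimal illustration of this logarithmic-domain conversion, the sketch below adds antenna factor and cable loss to a receiver reading; the numeric values are placeholders rather than data from any calibration certificate.

```python
# Minimal sketch: converting a receiver reading to field strength using the
# antenna factor and cable loss, all in logarithmic units. Values are illustrative.

def field_strength_dbuv_per_m(receiver_dbuv: float,
                              antenna_factor_db_per_m: float,
                              cable_loss_db: float) -> float:
    """E (dBuV/m) = receiver reading (dBuV) + antenna factor (dB/m) + cable loss (dB)."""
    return receiver_dbuv + antenna_factor_db_per_m + cable_loss_db

# Example: 42.0 dBuV at the receiver, 13.5 dB/m antenna factor, 2.8 dB cable loss
print(field_strength_dbuv_per_m(42.0, 13.5, 2.8))  # -> 58.3 dBuV/m
```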
Standard Site Method
The standard site method (SSM) is the primary reference technique for antenna calibration below 1 GHz. This method uses a calibrated open-area test site (OATS) or equivalent facility meeting the normalized site attenuation (NSA) requirements of CISPR 16-1-4. A signal is transmitted from one antenna to another at known separation distance, typically 10 meters, and the received signal is measured. By using three antennas in pairs and performing measurements with height scanning to find the maximum response, the individual antenna factors of all three antennas can be determined from the set of measurements without requiring any pre-calibrated reference antenna.
The three-antenna technique relies on measuring the site attenuation between each pair of antennas drawn from a set of three. Mathematical manipulation of the three site attenuation values yields the individual antenna factors. Site attenuation is defined as the ratio of the power delivered to the transmitting antenna to the power received from the receiving antenna under specified geometric conditions. Height scanning over a range typically from 1 to 4 meters finds the maximum coupling condition, accounting for ground reflection effects. The calculations use the theoretical site attenuation of an ideal site, including both the direct and ground-reflected rays, as the reference.
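The algebraic step can be sketched as follows. Assuming each pairwise measurement has already been reduced, per the applicable calibration standard, to the sum of two antenna factors, the individual factors fall out of three linear equations; the pair sums in the example are hypothetical.

```python
# Sketch of the linear-algebra step in the three-antenna method. Each reduced
# pairwise measurement gives a sum of two antenna factors (in dB/m); solving
# the three equations yields the individual antenna factors.

def solve_three_antenna(sum_12: float, sum_13: float, sum_23: float):
    """Given AF1+AF2, AF1+AF3 and AF2+AF3 (dB/m), return AF1, AF2, AF3."""
    af1 = (sum_12 + sum_13 - sum_23) / 2.0
    af2 = (sum_12 + sum_23 - sum_13) / 2.0
    af3 = (sum_13 + sum_23 - sum_12) / 2.0
    return af1, af2, af3

# Hypothetical reduced pair sums at one frequency point
print(solve_three_antenna(27.0, 28.4, 29.6))  # -> approximately (12.9, 14.1, 15.5)
```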
Accuracy of the standard site method depends critically on the quality of the test site. Ground reflections from a perfectly flat, uniform ground plane combine with the direct ray to create the measurement geometry assumed in the calculations. Real sites deviate from ideal conditions, introducing errors. Site validation through NSA measurements confirms that the site performs adequately for calibration purposes. Environmental conditions during calibration should be recorded, including temperature, humidity, and any anomalous conditions that might affect propagation.
Reference Antenna Method
The reference antenna method offers a simpler alternative to the three-antenna technique by comparing the unknown antenna directly against a reference antenna with known, traceable calibration. A transmitted signal is received first by the reference antenna, then by the antenna under test, at the same location and orientation. The difference in received levels, corrected for any differences in cable losses and receiver settings, gives the antenna factor difference. Adding this to the known reference antenna factor yields the antenna factor of the unknown antenna.
This substitution technique requires a reference antenna calibrated by a national metrology institute or a secondary laboratory with demonstrated traceability. The reference antenna need not be identical in type to the antenna being calibrated, but both should have similar directional characteristics to ensure they respond to the same field. Positioning accuracy is critical since the field strength may vary spatially, particularly in the presence of reflections. The reference antenna should be a quality instrument maintained specifically for calibration purposes and protected from damage that might alter its calibration.
Advantages of the reference antenna method include simplicity of the measurement procedure and the requirement for only two antennas rather than three. Uncertainties can be well characterized when using high-quality reference antennas with traceable calibrations. The method is particularly suitable for calibrating specialty antennas that might not be available in matched sets of three required for the standard site method. Disadvantages include the need to purchase and maintain calibrated reference antennas and the propagation of any errors in the reference calibration to all antennas calibrated against it.
Calibration Verification and Maintenance
Antennas should be recalibrated periodically to ensure continued accuracy. The appropriate interval depends on the antenna type, usage intensity, environmental exposure, and required accuracy. Annual recalibration is common for antennas used in compliance testing, with verification checks performed more frequently. Any antenna that has been dropped, physically damaged, or exposed to excessive power should be recalibrated before further use regardless of the scheduled interval.
Verification checks between full calibrations can detect gross changes in antenna performance. Comparing measurements of a stable reference source against historical values identifies drift or damage. Some laboratories maintain reference emission sources specifically for this purpose. Visual inspection of antenna elements, baluns, connectors, and cables identifies physical deterioration. Mechanical dimensions of tuned antennas such as biconicals and log-periodics should be checked against specifications, as element deformation affects performance.
Calibration records should document the full antenna factor versus frequency data, the calibration method used, traceability information, measurement uncertainty, calibration date, and due date for recalibration. The environmental conditions during calibration and any observations about antenna condition should be recorded. This documentation supports quality management system requirements and enables investigation of any anomalous measurement results that might be attributed to antenna performance.
Cable Loss Compensation
The cables connecting antennas, probes, and transducers to measurement receivers introduce signal attenuation that must be accounted for in final results. Cable losses increase with frequency and cable length, reaching substantial values at the higher frequencies used in EMC testing. A 10-meter run of typical coaxial cable might exhibit 3 dB of loss at 100 MHz, increasing to 10 dB or more at 1 GHz. Failure to compensate for cable losses causes measured emission levels to appear lower than actual values, potentially leading to false compliance determinations.
Cable loss compensation involves measuring the attenuation of each cable in the measurement path and adding this value to measured receiver readings to obtain the true signal level at the input to the cable. The measurement is straightforward using a network analyzer or signal generator and power meter, comparing the output signal to the input. The measurement should cover the full frequency range of intended use with resolution adequate to capture any frequency-dependent variations. Temperature affects cable loss, so measurements at extreme temperatures may be necessary for outdoor or environmental chamber testing.
Frequency-Dependent Loss Characteristics
Cable attenuation increases with frequency primarily due to skin effect losses in the conductors and dielectric losses in the insulation. At low frequencies, current flows through the full cross-section of the conductor, but at higher frequencies, current concentrates near the conductor surface, increasing effective resistance. Dielectric losses result from molecular friction as the insulation material polarizes in response to the alternating field. Both effects are frequency-dependent, with attenuation typically proportional to the square root of frequency for skin effect and linearly proportional for dielectric losses.
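A rough first-order model captures this behavior with one term proportional to the square root of frequency and one proportional to frequency. The coefficients in the sketch below are illustrative placeholders chosen to mimic the example losses quoted earlier, not data for any particular cable type.

```python
import math

# First-order cable attenuation model: a skin-effect term proportional to
# sqrt(f) plus a dielectric term proportional to f. The coefficients are
# illustrative, giving roughly 3 dB at 100 MHz and 10 dB at 1 GHz for a
# 10-meter run; real values come from the cable datasheet or measurement.

def cable_loss_db(freq_hz: float, length_m: float,
                  k_skin: float = 2.5e-5, k_diel: float = 2.0e-10) -> float:
    """Estimated cable loss in dB for the given length and frequency."""
    loss_per_m = k_skin * math.sqrt(freq_hz) + k_diel * freq_hz
    return loss_per_m * length_m

for f in (100e6, 500e6, 1e9):
    print(f"{f / 1e6:6.0f} MHz: {cable_loss_db(f, 10.0):.1f} dB")
```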
Different cable types exhibit different loss characteristics based on their construction. Flexible cables with stranded conductors show higher losses than semi-rigid cables with solid conductors. Foam dielectric cables have lower loss than solid dielectric types. Low-loss cable types should be specified for long runs and high-frequency measurements. The trade-off between flexibility, durability, and loss performance must be considered when selecting cables for specific applications.
Connectors also contribute loss and should be included in the cable calibration. Poor connector conditions such as damaged center pins, worn contacts, or contamination increase losses unpredictably and degrade measurement accuracy. Regular inspection and cleaning of connectors, along with proper mating techniques, minimize connector-related losses. Using quality connectors rated for the frequency range and torquing to specification ensures consistent, repeatable connections.
Measurement and Correction Procedures
Cable loss measurements should be performed with the cables in representative physical configurations. Bending and flexing affect loss, particularly at higher frequencies, so cables should be arranged during measurement as they will be during testing. For cables that will be repeatedly reconfigured, measurements in multiple configurations establish the range of expected variation. Some laboratories measure cables in their installed positions by injecting a known signal at one end and measuring at the other.
Cable loss corrections are applied differently depending on whether the test software handles them automatically or manually. Automated systems typically store cable loss tables and apply corrections internally so that displayed values represent levels at the antenna. Manual testing requires the operator to add cable loss values to measured readings when recording results. Either approach requires accurate cable loss data and correct identification of which cable is in use. Labeling cables with identification numbers and maintaining a database of measured losses supports consistent, error-free correction.
Periodic verification of cable losses detects degradation from wear, damage, or connector deterioration. Cables subjected to repeated flexing, crushing, or connector cycling should be verified more frequently. Any cable exhibiting significantly changed loss characteristics should be replaced or repaired. Documentation of cable loss history supports trend analysis and prediction of when cables will need replacement.
System Loss Budgets
A complete measurement system includes multiple cables, adapters, switches, and other components that each contribute loss. The total system loss is the sum of all individual component losses in the signal path. A comprehensive loss budget identifies each component, documents its loss versus frequency, and calculates the total correction required. This systematic approach ensures nothing is overlooked and enables assessment of whether the total loss is acceptable for the measurement sensitivity required.
Excessive system loss degrades measurement sensitivity by reducing the signal level reaching the receiver. When total losses approach or exceed the receiver's sensitivity limit, the ability to measure low-level emissions is compromised. Loss budgets should be reviewed when configuring test systems to ensure adequate sensitivity. Options to reduce total loss include using shorter cables, selecting lower-loss cable types, minimizing adapters and switches, or adding a low-noise preamplifier.
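A loss budget can be as simple as a table of component losses summed at each frequency and compared against the sensitivity needed. The sketch below uses hypothetical component losses, an assumed receiver noise floor, and an assumed antenna factor.

```python
# Minimal loss-budget sketch: sum each component's loss at one frequency and
# check the resulting measurement floor. All values are illustrative.

signal_path_db = {
    "antenna cable (10 m)": 2.7,
    "bulkhead adapter": 0.2,
    "coax switch": 0.5,
    "receiver patch cable": 0.4,
}

total_loss_db = sum(signal_path_db.values())
receiver_noise_floor_dbuv = 2.0   # assumed receiver floor at the chosen bandwidth
antenna_factor_db_per_m = 13.5    # assumed antenna factor at this frequency

# Lowest field strength that can be resolved at the receiver noise floor
measurement_floor_dbuv_per_m = (receiver_noise_floor_dbuv
                                + total_loss_db
                                + antenna_factor_db_per_m)

print(f"Total path loss:   {total_loss_db:.1f} dB")
print(f"Measurement floor: {measurement_floor_dbuv_per_m:.1f} dBuV/m")
# If the floor is within a few dB of the limit line, consider shorter cables,
# lower-loss cable types, fewer adapters, or a low-noise preamplifier.
```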
Measurement Uncertainty
Every measurement result includes some degree of uncertainty arising from limitations of the equipment, methodology, and environmental conditions. Understanding and quantifying measurement uncertainty is essential for meaningful interpretation of results and proper compliance determination. A measured emission level very close to the limit cannot be definitively declared compliant or non-compliant without considering the uncertainty range. Standards such as CISPR 16-4-2 provide guidance on calculating and applying measurement uncertainty for EMC testing.
Measurement uncertainty is expressed as an interval around the measured value within which the true value is expected to lie with a specified level of confidence. A result might be stated as 48.3 dB with an expanded uncertainty of plus or minus 3.2 dB at 95 percent confidence, meaning there is 95 percent probability that the true value lies between 45.1 and 51.5 dB. This information is critical when the measured value is within the uncertainty interval of the limit, where the compliance status is genuinely uncertain.
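The decision logic implied by an expanded uncertainty interval can be sketched as follows, using the illustrative numbers above and a hypothetical 50 dB limit.

```python
# Sketch of how an expanded uncertainty interval frames a compliance decision.
# The measured value, uncertainty, and limit are illustrative numbers only.

def compliance_band(measured_db: float, expanded_u_db: float, limit_db: float) -> str:
    """Classify a result relative to a limit given its expanded uncertainty."""
    low, high = measured_db - expanded_u_db, measured_db + expanded_u_db
    if high < limit_db:
        return "complies (entire interval below limit)"
    if low > limit_db:
        return "does not comply (entire interval above limit)"
    return "indeterminate (limit lies inside the uncertainty interval)"

print(compliance_band(48.3, 3.2, 50.0))  # limit falls inside 45.1..51.5 dB
```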
Sources of Uncertainty
Multiple factors contribute to the total measurement uncertainty. Receiver uncertainty includes the amplitude accuracy of the instrument, which varies with frequency, level, and bandwidth settings. Antenna factor uncertainty comes from the calibration certificate and any drift since calibration. Cable loss uncertainty arises from the calibration measurement accuracy plus any variation from bending and temperature. Site imperfections contribute uncertainty in radiated measurements through variations in ground reflection characteristics.
Mismatch uncertainty results from impedance variations at connections in the measurement path. When source and load impedances are not perfectly matched to the characteristic impedance, reflections cause the measured power to differ from the actual power. The effect depends on the reflection coefficients at both ends of each cable and cannot be fully determined without detailed impedance measurements. Statistical estimates based on typical reflection coefficients provide reasonable uncertainty contributions when specific data is unavailable.
Operator-related uncertainties include positioning accuracy of antennas and equipment under test, judgment in identifying maximum emission conditions, and interpretation of measurement displays. Training and experience reduce operator uncertainty but cannot eliminate it entirely. Automated systems reduce some operator dependencies but introduce their own uncertainties from positioning mechanisms and software algorithms. Repeatability studies where multiple operators or multiple runs measure the same quantity help quantify these contributions.
Uncertainty Calculation Methods
The Guide to the Expression of Uncertainty in Measurement (GUM) provides the internationally accepted framework for calculating and expressing measurement uncertainty. Individual uncertainty contributions are first expressed as standard uncertainties, representing one standard deviation of the assumed probability distribution. These are then combined using the root-sum-of-squares method to obtain a combined standard uncertainty. Finally, the combined uncertainty is multiplied by a coverage factor, typically 2 for 95 percent confidence, to obtain the expanded uncertainty reported with results.
Each contributing factor must be characterized by its probability distribution. Contributions with normal distributions, such as random measurement variations, are characterized by their standard deviation. Contributions with rectangular distributions, such as specifications that state a maximum error without further information, are divided by the square root of 3 to obtain the standard uncertainty. Triangular distributions, where values near the center are more likely, use a divisor of the square root of 6. Correct assignment of distributions affects the combined uncertainty calculation.
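A minimal uncertainty budget combining these rules might look like the sketch below; the contributions and their values are illustrative rather than a complete EMC budget.

```python
import math

# GUM-style combination sketch: convert each contribution to a standard
# uncertainty using the divisor for its assumed distribution, combine by
# root-sum-of-squares, and apply a coverage factor of 2 for roughly 95 percent
# confidence. Contributions and values are illustrative.

DIVISORS = {"normal": 1.0, "rectangular": math.sqrt(3), "triangular": math.sqrt(6)}

budget = [
    # (contribution, value in dB, assumed distribution)
    # "normal" entries are taken as standard uncertainties already, for example
    # a certificate value divided by its stated coverage factor.
    ("receiver amplitude accuracy", 1.0, "rectangular"),
    ("antenna factor calibration", 0.6, "normal"),
    ("cable loss calibration", 0.3, "rectangular"),
    ("site imperfections", 1.6, "triangular"),
    ("repeatability", 0.5, "normal"),
]

u_combined = math.sqrt(sum((value / DIVISORS[dist]) ** 2 for _, value, dist in budget))
expanded_u = 2.0 * u_combined

print(f"Combined standard uncertainty: {u_combined:.2f} dB")
print(f"Expanded uncertainty (k=2):    {expanded_u:.2f} dB")
```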
Some uncertainty contributions are correlated and should not be combined by simple root-sum-of-squares. For example, if the same signal generator calibrates both the receiver and the antenna, its uncertainty contribution appears in both and the correlation must be accounted for. In practice, many EMC uncertainty budgets use simplified approaches that assume all contributions are independent; ignoring correlations can bias the combined uncertainty in either direction, so more rigorous analyses are warranted when uncertainty margins are critical.
Uncertainty Budgets and Documentation
An uncertainty budget is a systematic tabulation of all contributing factors, their individual uncertainty values, the basis for each value, the probability distribution assumed, and the resulting standard uncertainty. The budget document shows the calculation combining individual contributions into the total uncertainty. Preparing uncertainty budgets requires gathering calibration certificates, equipment specifications, site validation data, and repeatability study results. The resulting document provides traceability for the reported uncertainty and supports review and improvement.
Uncertainty budgets should be prepared for each distinct measurement type, as the contributing factors and their magnitudes differ. Conducted emission uncertainty includes LISN calibration but not antenna factor, while radiated emission uncertainty shows the opposite. Different frequency ranges may require separate budgets if the contributing factors differ significantly. Immunity test uncertainties require different analyses focused on field uniformity and generator calibration.
Uncertainty values reported on test reports should be consistent with the documented budgets. When equipment is changed or calibration data is updated, the budgets should be revised accordingly. Accreditation bodies require laboratories to maintain current uncertainty documentation and to demonstrate understanding of the underlying methodology during assessments. Regular review of uncertainty budgets identifies opportunities for improvement by addressing the largest contributors.
Repeatability and Reproducibility
Repeatability refers to the variation in results when the same item is measured multiple times by the same operator using the same equipment under identical conditions. Reproducibility refers to the variation when the same item is measured under different conditions, such as different operators, different equipment, or different laboratories. Both properties are essential for meaningful measurements. Poor repeatability indicates unstable measurement conditions or equipment, while poor reproducibility suggests that results depend excessively on specific setup details or personnel.
Quantifying repeatability requires performing the complete measurement procedure multiple times without changing anything between repetitions. The standard deviation of the results characterizes the measurement system's inherent variability. This should be performed for representative samples across the frequency and amplitude ranges of interest. Repeatability better than 1 dB is achievable with well-maintained equipment and careful procedures, while values exceeding 2 dB suggest problems requiring investigation.
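Quantifying the spread of repeated readings is straightforward, as in the sketch below with illustrative values at a single frequency.

```python
import statistics

# Repeatability sketch: repeat the same measurement several times without
# changing the setup and examine the spread. Readings are illustrative
# emission levels in dBuV/m at one frequency.

readings_dbuv_per_m = [41.8, 42.1, 41.6, 42.0, 41.9, 42.3, 41.7, 42.0]

mean = statistics.mean(readings_dbuv_per_m)
stdev = statistics.stdev(readings_dbuv_per_m)  # sample standard deviation

print(f"Mean:    {mean:.2f} dBuV/m")
print(f"Std dev: {stdev:.2f} dB")
if stdev > 2.0:
    print("Repeatability worse than 2 dB: investigate setup or equipment stability.")
```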
Factors Affecting Repeatability
Equipment stability is the primary determinant of repeatability. Spectrum analyzers and receivers can exhibit drift in calibration, particularly during warm-up or with temperature changes. Allowing adequate warm-up time, typically 30 minutes to several hours depending on the instrument, improves stability. Temperature-controlled laboratories provide more stable conditions than environments with fluctuating temperatures. Battery-powered portable instruments may show variation as battery voltage changes.
Setup repeatability includes the ability to position equipment under test, cables, antennas, and other items identically between measurements. Fixtured setups with mechanical positioning aids improve setup repeatability compared to manual placement. Cable routing affects both conducted and radiated measurements, so consistent arrangement is important. For radiated measurements, antenna positioning in height scans should be verified, as the maximum may be sharply peaked and small positioning errors cause large amplitude variations.
The equipment under test may itself exhibit variation in emissions. Thermal effects change component values and operating points. Software states and processing loads affect digital emissions. Power line voltage variations change power supply switching behavior. These equipment-related variations appear as measurement non-repeatability but are actually real emission variations. Controlling these factors or documenting them as part of the measurement conditions distinguishes equipment variation from measurement system variation.
Interlaboratory Reproducibility
Different laboratories measuring the same equipment can obtain significantly different results despite each laboratory operating correctly within its own quality system. Site differences, equipment differences, setup interpretation differences, and environmental differences all contribute. Interlaboratory comparison studies, where a stable artifact is circulated and measured by multiple laboratories, quantify real-world reproducibility and help identify systematic differences between facilities.
Correlation between pre-compliance testing in development facilities and final compliance testing at accredited laboratories is a practical concern. Significant differences can result in products that appear compliant during development failing formal testing, or vice versa. Understanding the factors causing correlation problems helps development laboratories improve their predictions of formal test results. Common issues include site quality differences, antenna calibration differences, and different interpretations of setup requirements.
Harmonization efforts through standards organizations and accreditation bodies work to improve interlaboratory reproducibility. Standardized equipment specifications, mandatory site validation, proficiency testing programs, and detailed measurement procedures all contribute to better agreement between laboratories. Despite these efforts, measurement uncertainty remains a reality that must be acknowledged when interpreting results near limits.
Improving Measurement Consistency
Systematic attention to measurement procedures improves both repeatability and reproducibility. Written procedures that detail every step of the setup and measurement process ensure consistency between operators and over time. Checklists capture important details that might otherwise be overlooked. Photographs of standard setups provide visual references for positioning and cable routing. Training programs ensure all operators understand the procedures and their rationale.
Equipment maintenance contributes to consistent performance. Regular calibration verifies continued accuracy. Preventive maintenance addresses connectors, cables, and mechanical components before they cause problems. Calibration verification checks between formal calibrations detect drift promptly. Keeping equipment in a controlled environment minimizes stress from temperature cycling and humidity extremes that accelerate degradation.
Process controls from quality management systems support consistent measurements. Configuration management ensures the correct versions of software and procedures are in use. Change control documents modifications and their effects. Internal audits verify compliance with procedures. Management review identifies systemic issues and drives improvement. These elements of ISO 17025 accreditation contribute to measurement quality beyond the minimum requirements.
Ambient Noise Cancellation
The electromagnetic environment in which measurements are performed inevitably contains signals from sources other than the equipment under test. Broadcast transmitters, communications systems, computing equipment, power systems, and countless other sources contribute to the ambient electromagnetic noise. This ambient noise can mask the emissions being measured, add to them causing falsely high readings, or be mistaken for equipment emissions. Effective ambient noise management is essential for accurate EMC measurements.
Shielded enclosures provide the most effective ambient noise control by blocking external electromagnetic fields from reaching the measurement volume. A well-constructed shielded room can provide 80 dB or more of attenuation, reducing ambient signals to negligible levels. Semi-anechoic chambers combine shielding with absorber-lined walls to control both ambient noise and internal reflections. The substantial cost of shielded facilities is justified for laboratories performing frequent compliance testing where ambient noise would otherwise compromise measurement validity.
Ambient Characterization
Before testing equipment, characterizing the ambient noise level establishes the measurement floor and identifies frequencies where ambient signals are present. An ambient scan with the equipment under test powered off but all other measurement equipment operating reveals the background noise. This measurement should cover the full frequency range of subsequent tests using the same receiver settings to ensure direct comparability. Frequencies where ambient levels approach or exceed limits require special attention.
Ambient noise varies with time as external sources change their operating states. Broadcasting schedules, communication system activity, and nearby equipment operation all cause temporal variation. Measurements at different times of day may reveal different ambient conditions. For critical measurements, monitoring the ambient during testing identifies any changes that might affect results. Some test systems include separate ambient monitoring receivers to flag contamination in real time.
The character of ambient signals provides clues about their sources. Narrowband signals at specific frequencies suggest intentional transmissions from communications or broadcasting. Broadband noise with harmonically related components indicates switching power supplies or motor drives. Impulsive noise may come from ignition systems, electrical switching, or arc welding. Identifying sources enables evaluation of whether they might affect particular measurements and suggests mitigation approaches.
Mitigation Techniques
When shielded facilities are not available, several techniques can reduce ambient noise impact. Scheduling measurements during periods of lower ambient activity, such as nights and weekends when broadcast power may be reduced and industrial sources are not operating, can improve conditions. Orienting directional antennas to minimize response to ambient sources while maintaining sensitivity to the equipment under test exploits antenna directivity. Physical separation from obvious noise sources reduces coupling.
Signal processing techniques can distinguish equipment emissions from ambient noise when their characteristics differ. Gated measurements triggered by equipment operation capture emissions only during specific intervals, rejecting ambient that is uncorrelated with the trigger. Averaging multiple sweeps reduces random noise while preserving coherent emissions. Maximum hold over many sweeps captures intermittent emissions that might be missed in single sweeps. These techniques require that equipment emissions and ambient noise have distinguishable temporal characteristics.
Post-measurement correction can estimate equipment emissions when the ambient level is known but cannot be eliminated. If the combined level (equipment plus ambient) and the ambient alone are measured, logarithmic subtraction yields an estimate of the equipment contribution. This calculation becomes increasingly uncertain as the equipment level approaches the ambient level, with practical accuracy requiring the equipment emission to be at least 6 dB above ambient. Results corrected for significant ambient contribution should be noted in reports.
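The logarithmic (power) subtraction can be sketched as follows; the combined and ambient levels are illustrative.

```python
import math

# Sketch of logarithmic subtraction to estimate the EUT contribution when a
# measurement includes ambient noise. The estimate becomes unstable as the
# combined level approaches the ambient level.

def subtract_ambient_db(combined_db: float, ambient_db: float) -> float:
    """Return the estimated EUT-only level in dB, given combined and ambient levels."""
    if combined_db - ambient_db < 0.5:
        raise ValueError("Combined level too close to ambient for a meaningful estimate")
    combined_lin = 10 ** (combined_db / 10)
    ambient_lin = 10 ** (ambient_db / 10)
    return 10 * math.log10(combined_lin - ambient_lin)

# Example: combined 46.0 dBuV/m with 40.0 dBuV/m ambient
print(f"{subtract_ambient_db(46.0, 40.0):.1f} dBuV/m")  # roughly 44.7 dBuV/m
```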
Documentation and Validity
Ambient conditions must be documented as part of measurement records. The ambient scan, the time and date of measurement, and any notable ambient sources or conditions should be recorded. When ambient levels are significant relative to measured emissions, this must be noted and the validity of measurements at affected frequencies assessed. Frequencies where valid measurements could not be obtained due to ambient should be identified, with recommendations for retesting under better conditions if necessary.
Test reports should describe the ambient conditions and their effect on measurement validity. Accreditation requirements mandate documentation of environmental conditions including electromagnetic ambient. When measurements are performed in unshielded environments, the limitations should be acknowledged. Compliance determinations at frequencies where ambient approaches the limit may require additional evidence such as measurements in a shielded facility or ambient-free time windows.
Overload and Compression
Measurement receivers can be driven into nonlinear operation by strong signals, causing errors in the measurement of both the strong signal and other signals present. Overload occurs when signal levels exceed the receiver's linear operating range, causing distortion and erroneous readings. Compression is a specific form of nonlinearity where the output fails to increase proportionally with input, typically at high signal levels. Recognizing and avoiding these conditions is essential for accurate EMC measurements.
Strong signals can overload receivers even when they are outside the measurement bandwidth. A high-level broadcast signal at a frequency far from the emission being measured can drive the receiver front end into compression, affecting all measurements. Wideband noise or impulsive signals with high peak power can cause overload despite modest average power. The receiver's input stages see the total signal, not just the portion within the measurement bandwidth, making wideband overload a concern even in narrowband measurements.
Detecting Overload Conditions
Modern receivers typically include overload indicators that illuminate when input levels exceed safe limits. These indicators should be monitored throughout measurements and any overload conditions investigated. However, overload indicators may not respond to all overload conditions, particularly from signals outside the tuned frequency. Additional vigilance is required when measuring in environments with known strong signals or when unusual reading patterns suggest nonlinearity.
Changing the input attenuator setting provides a simple test for overload. Increasing attenuation by 10 dB should reduce the level reaching the detector by exactly 10 dB if the receiver is operating linearly; on instruments that fold the attenuator setting back into the displayed value, the displayed level should not change at all. A smaller change indicates compression was occurring at the lower attenuation setting. Checking several attenuation settings identifies the minimum attenuation required for linear operation. This test should be performed at frequencies with the highest signal levels and whenever overload is suspected.
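A sketch of that linearity check, with illustrative readings:

```python
# Attenuator linearity check sketch: with linear operation, adding 10 dB of
# input attenuation reduces the level at the detector by exactly 10 dB.
# Readings are illustrative; this example shows a compressed result.

level_low_atten_db = 72.4   # detector level with the lower attenuation setting
level_high_atten_db = 64.9  # detector level after adding 10 dB of attenuation

observed_drop = level_low_atten_db - level_high_atten_db
if observed_drop < 10.0 - 0.5:  # allow ~0.5 dB for noise and attenuator step accuracy
    print(f"Level dropped only {observed_drop:.1f} dB: compression at the lower setting")
else:
    print("Level dropped by the full 10 dB: linear operation confirmed")
```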
Spurious responses can indicate intermodulation distortion from overload. When two or more strong signals mix in a nonlinear front end, products appear at frequencies that are sums, differences, and other combinations of the input frequencies. These spurious products can be mistaken for equipment emissions. Checking whether suspected emissions change level with receiver attenuation changes, or whether they disappear when the strong signals causing them are attenuated, identifies spurious responses.
Prevention Strategies
Using adequate input attenuation is the primary defense against overload. The reference level setting on most receivers controls input attenuation automatically, with higher reference levels providing more attenuation. Setting the reference level high enough that the strongest expected signal appears in the upper portion of the display usually ensures adequate headroom. Manually adding attenuation may be necessary in particularly demanding situations.
Preselector filters can protect the receiver from out-of-band signals that might cause overload. A preselector is a tunable bandpass filter that passes only the frequency range of interest while attenuating signals at other frequencies. Modern EMI receivers often include preselectors as standard or optional features. External preselection can be added to receivers lacking this capability. The preselector must track the receiver tuning, automatically or manually, to remain effective as measurements sweep across frequency.
For measurements in environments with known strong signals, planning the measurement strategy to avoid overload is advisable. Identifying the frequencies and levels of strong ambient signals enables selection of appropriate attenuation settings. Selecting measurement times when strong signals are not present, such as avoiding broadcast frequencies during peak programming hours, may be practical. Using directional antennas oriented to minimize response to strong signal sources while maintaining sensitivity to the equipment under test can help.
Effects on Measurement Accuracy
Overload errors can cause both over-reading and under-reading depending on the specific mechanism. Compression causes under-reading of the strong signal that is compressing the receiver. Intermodulation products add energy at spurious frequencies, causing over-reading at those frequencies. Desensitization, where a strong signal reduces the receiver's sensitivity to weaker signals, causes under-reading of everything except the strong signal. The complex interplay of these effects makes overloaded measurements unpredictable and unreliable.
Measurements made during overload conditions are invalid and should not be reported as accurate results. If overload is detected after data has been recorded, the affected data should be discarded and measurements repeated with appropriate precautions. When overload cannot be avoided, such as measuring near an extremely strong transmitter, the limitations should be documented and the affected frequency ranges identified as having potentially compromised accuracy.
Detector Function Selection
EMC measurements use several different detector functions that process the received signal in distinct ways, yielding different results for complex and time-varying signals. The detector function selected significantly affects measured values for pulsed, modulated, and noise-like signals. EMC standards specify which detectors to use for compliance measurements, and using the wrong detector produces invalid results that cannot be compared against limits. Understanding detector characteristics enables correct selection for both compliance and diagnostic measurements.
The peak detector responds to the instantaneous maximum signal level during the measurement interval. For continuous wave signals, peak detection gives the same result as other detectors. For pulsed signals, peak detection captures the pulse amplitude regardless of the pulse repetition rate or duty cycle. Peak detection is fast and never underestimates the maximum level present, making it useful for initial scans and worst-case assessments. However, peak values for low-duty-cycle pulsed signals may significantly exceed their actual interference potential.
Quasi-Peak Detection
The quasi-peak detector was developed specifically for EMC measurements to weight pulsed emissions according to their perceived annoyance factor in analog radio and television reception. The quasi-peak detector includes defined charge and discharge time constants that cause its output to depend on both the amplitude and repetition rate of pulsed signals. Higher repetition rates produce higher quasi-peak readings, reflecting the subjective finding that frequent pulses are more annoying than infrequent ones.
CISPR standards specify the quasi-peak detector time constants for different frequency bands. In Band B (150 kHz to 30 MHz), the charge time constant is 1 millisecond and the discharge time constant is 160 milliseconds. In Band C/D (30 MHz to 1 GHz), these become 1 millisecond and 550 milliseconds. The ratio of time constants, along with the meter response time, determines how the detector weights different pulse patterns. Only receivers with calibrated quasi-peak detectors meeting these specifications should be used for compliance measurements.
Quasi-peak measurements require sufficient dwell time at each frequency for the detector to respond properly. A minimum dwell time related to the discharge time constant, typically several hundred milliseconds per frequency point, is necessary. Full quasi-peak scans are therefore time-consuming, often requiring many minutes to cover the conducted emission range. Common practice is to perform a fast peak scan to identify frequencies of interest, then apply quasi-peak measurement only where peak values approach the limits. Since quasi-peak values never exceed peak values, this approach is efficient while ensuring all potential issues are identified.
Average and RMS Detection
The average detector calculates the mean value of the signal envelope over the measurement period. For continuous wave signals, average detection gives the same result as peak detection. For pulsed signals, the average value is proportional to the duty cycle and can be substantially below the peak value. Some EMC standards specify average limits in addition to quasi-peak limits, reflecting that sustained power delivery, not just peak levels, affects interference potential for certain receivers.
The RMS (root mean square) detector measures the power content of the signal by computing the square root of the mean of the squared amplitude. RMS detection responds to the actual power regardless of signal waveform and is appropriate when heating effects or power delivery are the concern. For sinusoidal signals, the RMS value is 3 dB below the peak value. For pulsed signals, the relationship between RMS and peak depends on the duty cycle and pulse shape.
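The relationships among detectors for an idealized rectangular pulsed envelope can be sketched as follows; the duty cycle is illustrative, and the quasi-peak response, which depends on the charge and discharge time constants, is not modeled.

```python
import math

# Detector comparison sketch for an idealized pulsed envelope of constant
# amplitude and fixed duty cycle, ignoring the receiver's IF filter and the
# quasi-peak weighting. Numbers are illustrative.

peak_amplitude = 1.0  # envelope amplitude during the pulse (linear units)
duty_cycle = 0.05     # pulse on-time as a fraction of the period

peak = peak_amplitude
average = peak_amplitude * duty_cycle
rms = peak_amplitude * math.sqrt(duty_cycle)

def to_db(ratio: float) -> float:
    return 20 * math.log10(ratio)

print(f"Average is {to_db(average / peak):.1f} dB below peak")  # about -26 dB
print(f"RMS is     {to_db(rms / peak):.1f} dB below peak")      # about -13 dB
# A quasi-peak reading would fall between the average and peak values,
# moving closer to peak as the repetition rate increases.
```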
Average and RMS detection are faster than quasi-peak detection because they do not require the long settling times dictated by quasi-peak time constants. When standards allow average detection, full scans can be completed much faster than quasi-peak scans. Some measurement strategies use average detection for compliance where limits are specified in average terms, while using quasi-peak only for the frequencies where quasi-peak limits apply. Understanding which detector is appropriate for each limit line prevents incorrect comparisons.
CISPR Average Detection
The CISPR average detector is a specific form of average detection with defined characteristics for amplitude modulated signals. It differs from simple averaging in its response to signals with varying envelope. The CISPR average detector is specified in CISPR 16-1-1 and is required for certain measurements, particularly in standards with average limits. Not all spectrum analyzers include true CISPR average detection, though many EMI receivers do. Using an incorrect approximation of CISPR average can produce erroneous results for modulated signals.
Understanding the differences between detector types enables efficient measurement strategies. Peak detection is fast and provides upper bounds on emission levels. Average detection is faster than quasi-peak while providing insight into the integrated emission level. Quasi-peak detection, though slow, gives the official compliance value for most CISPR-based limits. Combining these strategically minimizes total measurement time while ensuring valid compliance determinations.
Bandwidth and Sweep Settings
The measurement bandwidth and sweep parameters significantly affect both the accuracy and efficiency of EMC measurements. Resolution bandwidth determines the frequency selectivity and affects measured levels of broadband signals. Sweep time and number of points affect the ability to capture intermittent emissions and the overall measurement duration. Correct settings ensure valid measurements that can be compared against limits and between test sessions.
Resolution bandwidth (RBW) is the frequency width of the measurement filter. Narrower bandwidths provide finer frequency resolution and lower noise floors but require longer sweep times. Wider bandwidths sweep faster but may fail to resolve closely spaced signals. For EMC measurements, standards specify the bandwidths to ensure consistent, comparable results. CISPR 16 specifies 9 kHz for Band B and 120 kHz for Bands C/D. Using non-standard bandwidths invalidates compliance measurements.
CISPR Bandwidth Requirements
CISPR 16-1-1 specifies measurement bandwidths and filter shapes for each frequency band. Band A (9 kHz to 150 kHz) uses a 200 Hz bandwidth. Band B (150 kHz to 30 MHz) uses 9 kHz. Bands C and D (30 MHz to 1 GHz) use 120 kHz. These bandwidths are implemented as Gaussian or near-Gaussian filters with specified shape factors and impulse response characteristics. The filter bandwidth affects measured levels of broadband emissions and must be exactly as specified for compliance measurements.
The 6 dB and 3 dB bandwidth specifications refer to the filter's frequency response characteristics. CISPR specifies the 6 dB bandwidth, meaning the frequency range over which the filter response is within 6 dB of its peak. The shape factor, relating the 6 dB bandwidth to the 60 dB bandwidth, must also meet specifications to ensure consistent selectivity. Commercial spectrum analyzers often use different filter shapes than EMI receivers, which can cause measurement differences even at the same nominal bandwidth.
Spectrum analyzers typically offer resolution bandwidths in a standard sequence (1-3-10-30-100 Hz, etc.) that does not include the CISPR values of 200 Hz, 9 kHz, or 120 kHz. EMC-specific analyzers and EMI receivers include the exact CISPR bandwidths. Using approximations such as 10 kHz instead of 9 kHz introduces measurement errors, particularly for broadband emissions where the error is proportional to the bandwidth ratio. For compliance measurements, only equipment with exact CISPR bandwidths should be used.
Sweep Time Considerations
Minimum sweep time depends on the resolution bandwidth and frequency span. Sweeping too fast causes the filter to not fully respond to signals, reducing measured amplitudes. Most analyzers indicate when sweep time is too fast for the selected settings. Auto-coupled sweep time, where the analyzer automatically selects an appropriate sweep time based on RBW and span, ensures adequate response for most situations. Manual override may be necessary for specialized measurements requiring specific timing relationships.
Longer sweep times may be necessary to capture intermittent emissions. A single sweep may occur during a quiet period in the equipment's operation, missing emissions that occur only occasionally. Maximum hold over multiple sweeps accumulates the highest reading at each frequency, eventually capturing all emissions regardless of their timing. The number of sweeps or total observation time needed depends on the emission's periodicity; slower variations require more sweeps.
Video bandwidth (VBW) settings affect the visual display and effective averaging of measurements. Video bandwidth lower than resolution bandwidth smooths the display and averages noise, improving signal-to-noise ratio for continuous signals but potentially missing short-duration pulses. Video bandwidth equal to or greater than resolution bandwidth preserves transient response. Standards may specify video bandwidth settings, particularly for average measurements where video filtering affects results.
Frequency Coverage and Step Size
The frequency step size determines how finely the spectrum is sampled. Too coarse a step size may miss narrow spectral peaks falling between measurement points. Too fine a step size extends measurement time unnecessarily. A reasonable guideline is to use step sizes no larger than half the resolution bandwidth to ensure that peaks are not missed. Automated systems typically adjust step size based on RBW and span to balance completeness against speed.
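The half-bandwidth guideline translates directly into a required number of points, as in the sketch below for a CISPR Band C/D radiated scan.

```python
# Step-size sketch: keep the frequency step no larger than half the resolution
# bandwidth so narrow peaks cannot fall between measurement points.

span_hz = 1e9 - 30e6   # 30 MHz to 1 GHz
rbw_hz = 120e3         # CISPR bandwidth for Bands C and D
step_hz = rbw_hz / 2   # half-RBW guideline

num_points = int(span_hz / step_hz) + 1
print(f"Step size: {step_hz / 1e3:.0f} kHz, points required: {num_points}")
# -> 60 kHz steps and roughly 16,000 points; if the analyzer cannot store this
#    many in one sweep, split the range into several narrower sweeps.
```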
For emissions consisting of discrete spectral lines, such as clock harmonics, the step size should be fine enough that multiple measurement points fall on each spectral line. This ensures the peak is captured accurately. For broadband emissions, coarser step sizes are acceptable since the emission level varies smoothly with frequency. Mixed emissions require step sizes appropriate for the discrete components, which then also adequately characterize the broadband portions.
The number of frequency points equals the span divided by the step size. Analyzers have limited memory for storing trace data, which may limit the minimum step size achievable in a single sweep across a wide span. Multiple narrower sweeps may be necessary to achieve fine resolution across a wide range. Some EMI receivers use frequency lists rather than continuous sweeps, measuring only at specific frequencies where limits are defined plus any additional frequencies where emissions exceed thresholds.
Data Recording Methods
Proper data recording ensures that measurement results are preserved accurately, can be retrieved for analysis, and provide the documentation necessary for compliance reports and quality records. Modern EMC test systems generate large volumes of data that must be managed systematically. The recording method affects what information is available for later analysis and whether the data meets requirements for traceability and legal validity.
Measurement data includes not only the emission levels at each frequency but also the configuration information needed to interpret the results. Receiver settings, antenna used, cable losses applied, equipment under test configuration, ambient conditions, and date/time must all be recorded. Missing configuration data can render measurements uninterpretable or prevent valid comparison with other measurements. Comprehensive metadata recording should be integral to the data acquisition process.
Electronic Data Formats
Standard data formats enable exchange of measurement data between different software systems and long-term storage in accessible form. Common formats include CSV (comma-separated values) for tabular data, XML for structured data with metadata, and proprietary formats specific to particular equipment manufacturers. The choice of format affects interoperability with analysis software and long-term accessibility as software evolves. Using open, documented formats improves long-term data utility.
Graphics formats preserve the visual appearance of measurement displays. Screen captures in PNG or JPEG format document what the operator observed during testing. These images support report generation and provide a human-readable record that supplements numerical data. Vector graphics formats such as PDF or SVG scale better for printing and allow extraction of data values. Combining both numerical data files and graphical captures provides complete documentation.
Database systems organize measurement data for efficient storage and retrieval. Fields for equipment identification, test parameters, results, and status enable queries to find specific data among large collections. Relational databases link measurements to equipment records, calibration records, and test reports. Laboratory information management systems (LIMS) integrate data recording with workflow management and quality system documentation requirements.
Traceability and Integrity
Measurement data used for compliance determinations must be traceable to the original measurement event. This traceability requires documentation linking the recorded data to specific equipment, calibration status, test procedures, and personnel. Time stamps establish when measurements were made. Version control tracks changes to recorded data and documents any corrections. Chain of custody records document who has handled the data and for what purpose.
Data integrity measures protect against accidental or intentional alteration of records. Write-once media or write-protected network storage prevents modification of archived data. Digital signatures or hash values detect any changes to files. Access controls limit who can view, modify, or delete records. Audit trails log all data access and modifications with user identification and timestamps. These measures support regulatory requirements and defend against challenges to data validity.
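A hash-based integrity check is straightforward to implement; the sketch below computes a SHA-256 digest of an archived result file with a hypothetical name.

```python
import hashlib

# Integrity-check sketch: compute a SHA-256 hash of a result file when it is
# archived, store the hash with the record, and recompute it later to detect
# any alteration. The file name is hypothetical.

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage (hypothetical file name):
archived_hash = sha256_of_file("radiated_scan_2024-05-14.csv")
print(archived_hash)
```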
Backup and archival procedures protect against data loss. Regular backups to separate storage protect against hardware failures. Off-site or cloud storage protects against facility disasters. Retention policies specify how long data must be kept, which may be decades for compliance records. Migration strategies ensure data remains accessible as storage technologies evolve. Testing recovery procedures verifies that backed-up data can actually be restored when needed.
Report Generation
Test reports document measurement results in a format suitable for review by customers, regulators, and accreditation bodies. Standard report elements include equipment identification, test standards applied, measurement equipment used with calibration status, test configurations and conditions, results with limit comparisons, and conclusions regarding compliance. Graphical presentations of spectra with limit lines provide visual evidence of compliance status.
Report templates ensure consistency and completeness across different tests and operators. Templates capture the required elements and standardized formatting while allowing customization for specific customer needs. Automated report generation from measurement databases reduces transcription errors and preparation time. Review and approval workflows ensure reports are checked before release. Secure electronic signatures provide authentication for electronic report distribution.
Retention of supporting data behind reports enables response to questions or challenges. Original measurement data files, photographs of test setups, calibration certificates in effect at the time of testing, and calculation worksheets should be archived with reports. Index systems allow retrieval of supporting data given a report reference. Periodic audits verify that supporting data is actually accessible and matches the reports it supports.
Summary
Proper measurement techniques are the foundation of valid EMC testing. From antenna calibration ensuring accurate field strength measurements to data recording preserving results for future reference, each element of the measurement process contributes to the overall accuracy and value of test results. Attention to cable losses, measurement uncertainty, ambient noise, detector selection, and instrument settings transforms raw measurements into meaningful compliance determinations.
Mastery of measurement techniques enables engineers to configure effective test systems, obtain accurate and repeatable results, and troubleshoot electromagnetic compatibility problems efficiently. Understanding how each parameter affects measurements allows optimization for different objectives, whether fast screening during development or rigorous characterization for compliance certification. The investment in developing measurement expertise pays continuous dividends in measurement quality and testing efficiency throughout an EMC testing program.