Conducted Emission Measurement
Accurate measurement of conducted emissions is fundamental to electromagnetic compatibility testing and regulatory compliance. The measurement process quantifies the high-frequency electrical noise that equipment injects into power lines and signal cables, comparing these levels against specified limits to determine compliance. Proper measurement technique requires specialized equipment, controlled test conditions, and thorough understanding of the standards that govern both the methodology and the acceptable limits.
Conducted emission measurements present unique challenges compared to other electrical measurements. The frequencies of interest span from 150 kHz to 30 MHz or higher, a range where component parasitics significantly affect behavior. The noise levels being measured are typically in the microvolt to millivolt range while the equipment under test operates from line voltage, requiring careful isolation and signal conditioning. Furthermore, ambient noise from the power mains and nearby equipment can contaminate measurements if not properly managed. Success requires attention to every element of the measurement chain, from power source to test equipment calibration.
Line Impedance Stabilization Networks
The line impedance stabilization network (LISN), also known as an artificial mains network (AMN), is the cornerstone of conducted emission measurement. The LISN serves three essential functions: it provides a defined, repeatable impedance to the equipment under test at radio frequencies, it couples the high-frequency emissions from the power line to the measurement equipment, and it isolates the measurement from noise present on the external power mains. Without a LISN, measurements would be unpredictable and irreproducible because the impedance of the power mains varies widely depending on what other equipment is connected and the characteristics of the building wiring.
Standard LISNs present a 50-ohm impedance to the equipment under test across the measurement frequency range. This impedance specification comes from CISPR 16-1-2, which defines the requirements for measurement equipment used in EMC testing. The LISN achieves this impedance through a carefully designed combination of inductors and capacitors. A series inductor of approximately 50 microhenries blocks high-frequency noise from the mains while allowing 50/60 Hz power to pass. Shunt capacitors provide a low-impedance path for high-frequency currents while blocking the mains voltage from reaching the measurement port.
LISN Circuit Architecture
A typical 50-ohm LISN for each power conductor contains a series inductor connected between the mains input and the equipment output, with a network of capacitors connecting to ground and to the measurement port. The 50 microhenry inductor presents high impedance at frequencies above approximately 150 kHz, effectively isolating the measurement from mains impedance variations. On the mains side, a 1 microfarad capacitor shunts incoming high-frequency noise to ground; on the equipment side, a 0.1 microfarad blocking capacitor couples the noise to the measurement port while blocking the mains voltage, with the port terminated in 50 ohms by the receiver input (or by a 50-ohm load when not in use).
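The impedance this network presents to the equipment under test can be estimated directly from the component values. The following is a minimal sketch, assuming ideal components and a mains branch reduced to the series inductor alone (the mains-side capacitor is taken as an RF short), that evaluates the EUT-side impedance across Band B:

```python
import numpy as np

def lisn_eut_impedance(f, L=50e-6, C_c=0.1e-6, R_meas=50.0):
    """Idealized impedance seen by the EUT looking into a 50 uH LISN.

    The mains branch is modeled as the 50 uH inductor alone (the 1 uF
    mains-side capacitor is assumed to short the mains at RF), in
    parallel with the coupling branch: the 0.1 uF blocking capacitor
    in series with the 50 ohm receiver input.
    """
    w = 2 * np.pi * f
    z_mains = 1j * w * L                    # series inductor to (shorted) mains
    z_meas = R_meas + 1.0 / (1j * w * C_c)  # coupling cap plus receiver input
    return z_mains * z_meas / (z_mains + z_meas)

for f in [150e3, 500e3, 1e6, 10e6, 30e6]:
    z = lisn_eut_impedance(f)
    print(f"{f/1e6:6.2f} MHz: |Z| = {abs(z):5.1f} ohm, "
          f"phase = {np.degrees(np.angle(z)):6.1f} deg")
```

Running this shows the magnitude in the high thirties of ohms at 150 kHz, rising toward 50 ohms above roughly 500 kHz, matching the general shape of the standardized impedance curve.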
For three-phase power systems, three-phase LISNs provide stabilized impedance on each phase conductor and the neutral. The construction is more complex, but the operating principles remain the same. Some LISNs include provisions for direct current (DC) power systems used in automotive and aerospace applications, with different impedance specifications reflecting the characteristics of those power distribution networks. The CISPR 25 standard specifies LISN requirements for automotive conducted emission measurements, using lower inductance values appropriate for the lower source impedances in vehicle electrical systems.
LISN Selection and Specifications
Selecting an appropriate LISN requires matching its specifications to the applicable test standard and the equipment under test. Key specifications include the voltage and current ratings, which must exceed the equipment's power requirements with appropriate margin. The impedance specification defines the RF impedance presented to the equipment and must comply with the applicable standard's tolerance. The isolation specification indicates how well the LISN blocks external mains noise from affecting measurements.
Additional specifications include the voltage division factor, which relates the noise voltage at the measurement port to the noise voltage at the equipment terminals, and the capacitive discharge time, which affects safety because the LISN's internal capacitors can retain a hazardous charge after power is removed. High-quality LISNs include features such as transient protection, ground fault interrupters, and calibration verification ports. The mechanical construction must provide appropriate safety isolation between mains-connected terminals and measurement ports accessible during operation.
LISN Installation and Grounding
Proper LISN installation is critical for accurate measurements. The LISN ground terminal must be bonded to the reference ground plane with a short, wide, low-impedance connection. A ground plane, typically a metal sheet at least 2 meters by 2 meters, provides the reference for the measurement. The LISN should be positioned at the edge of the ground plane, with the equipment under test positioned at a specified distance from the LISN and the ground plane edges.
Multiple LISNs are required for multi-conductor power inputs. Single-phase equipment requires two LISNs (line and neutral), while three-phase equipment requires four (three phases and neutral). Some test standards allow a single LISN to be switched between conductors rather than using simultaneous measurement on all conductors, but this approach is slower and may miss transient emissions. The unused LISN measurement ports must be terminated in 50 ohms to maintain proper impedance on all conductors during measurement.
Artificial Mains Networks
While the terms LISN and artificial mains network (AMN) are often used interchangeably, some standards make distinctions between them. The AMN terminology emphasizes the network's role in simulating the impedance characteristics of actual mains power networks. Different standards specify different AMN configurations to represent various power distribution topologies. Understanding these variations helps ensure that the correct network is used for each test scenario.
The V-network AMN, commonly used for single-phase conducted emission measurements, presents a defined impedance and measures the noise voltage on each conductor relative to ground. The delta-network configuration, used for some industrial equipment testing, measures the voltage between phase conductors. The specific network type affects how differential-mode and common-mode emissions are measured and must be selected according to the applicable standard. Most consumer and commercial product testing uses V-network configurations following CISPR 16 specifications.
AMN Impedance Characteristics
The impedance presented by an AMN varies with frequency according to the component values and circuit topology. At frequencies below the measurement range, the impedance is dominated by the mains connection and appears low. As frequency increases through the measurement range, the series inductor and shunt capacitors establish the designed impedance, typically 50 ohms. At very high frequencies, parasitic effects in the inductor windings and capacitor leads can cause impedance variations.
CISPR 16-1-2 specifies tolerance limits for AMN impedance magnitude and phase across the measurement frequency range. The specification is not a single fixed value but a defined impedance-versus-frequency curve: for the 50-ohm/50-microhenry V-network, the magnitude rises from roughly 35 ohms at 150 kHz toward 50 ohms at the top of Band B. A compliant AMN must match this curve within plus or minus 20 percent in magnitude and plus or minus 11.5 degrees in phase. Calibration verifies that the AMN meets these specifications, and periodic recalibration ensures continued compliance as components age or are stressed by fault events.
Calibration and Verification
AMN calibration verifies that the network meets its specified impedance and voltage division requirements. Calibration requires a network analyzer or impedance analyzer capable of measuring complex impedance at the frequencies of interest. The measurement is typically performed by injecting a signal at the equipment port and measuring the voltage at the measurement port, with the mains port terminated appropriately. The voltage division factor relates these measurements.
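In software, applying the division factor is a simple dB-domain correction. The helper below is a minimal sketch (the function names and the cable-loss term are illustrative) showing how a factor obtained from calibration refers a receiver reading back to the equipment terminals:

```python
def voltage_division_factor_db(v_eut_port_dbuv, v_meas_port_dbuv):
    """Voltage division factor: dB loss from the EUT terminals to the
    measurement port, from a calibration injection measurement."""
    return v_eut_port_dbuv - v_meas_port_dbuv

def corrected_emission_dbuv(receiver_reading_dbuv, vdf_db, cable_loss_db=0.0):
    """Refer a receiver reading (dBuV) back to the EUT terminals by
    adding the division factor and any cable loss between LISN and receiver."""
    return receiver_reading_dbuv + vdf_db + cable_loss_db

# e.g. a 42.0 dBuV reading with a 0.5 dB division factor and 0.3 dB cable loss
print(corrected_emission_dbuv(42.0, 0.5, 0.3))   # 42.8 dBuV at the EUT terminals
```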
Verification can also be performed using a substitution method where a known signal is applied and the AMN output compared to the expected level. Regular verification between full calibrations catches problems before they affect test results. Many laboratories perform verification weekly or before each test session, with full calibration annually or after any event that might affect performance. Calibration certificates document the AMN's performance and traceability to national standards.
Current Probes and Clamps
Current probes provide an alternative method for measuring conducted emissions by sensing the current flowing in conductors rather than the voltage across a standardized impedance. A current probe is a transformer with the conductor under test forming the primary winding (a single turn passing through the probe's aperture) and a multi-turn secondary winding connected to the measurement equipment. Current probes are particularly useful for measuring common-mode currents on cable bundles and for troubleshooting emission problems.
The transfer impedance of a current probe relates the output voltage to the input current. Typical values range from 0.1 ohms at low frequencies to 10 ohms or more at high frequencies, with the frequency response determined by the core material and winding configuration. Low-frequency response is limited by the inductance of the secondary winding, while high-frequency response is limited by parasitic capacitances and core losses. The specification sheet provides the transfer impedance versus frequency, which must be applied to convert measured voltages to currents.
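Because transfer impedance is usually tabulated in dB-ohms, the conversion from measured voltage to current is a subtraction in the dB domain: the current in dBuA equals the voltage in dBuV minus the transfer impedance in dB-ohms. The sketch below assumes a hypothetical calibration table and interpolates it on a log-frequency axis:

```python
import numpy as np

# Hypothetical calibration table: frequency (Hz) vs transfer impedance (dB-ohm)
cal_freq_hz = np.array([150e3, 1e6, 10e6, 30e6])
cal_zt_dbohm = np.array([-10.0, 0.0, 14.0, 15.0])   # 20*log10(Zt / 1 ohm)

def current_dbua(v_dbuv, f_hz):
    """Convert a probe output voltage (dBuV) to conductor current (dBuA):
    I[dBuA] = V[dBuV] - Zt[dB-ohm], with Zt interpolated on a
    log-frequency axis from the probe's calibration data."""
    zt = np.interp(np.log10(f_hz), np.log10(cal_freq_hz), cal_zt_dbohm)
    return v_dbuv - zt

print(current_dbua(46.0, 5e6))   # e.g. a 46 dBuV probe reading at 5 MHz
```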
Current Probe Types
Clamp-on current probes feature a split core that can be opened and closed around a conductor without disconnecting it. This non-invasive characteristic makes them valuable for troubleshooting and diagnostic measurements where modifying the circuit is impractical. The split in the core introduces a small gap that reduces sensitivity and can pick up external magnetic fields if not properly designed. High-quality clamp-on probes use precision machined mating surfaces and shielding to minimize these effects.
Fixed current probes have solid cores through which conductors must be threaded during installation. The continuous core provides better sensitivity and rejection of external fields but requires planning the installation during assembly. Fixed probes are often used in permanent test fixtures and in applications where maximum sensitivity is required. Some probes combine both features with removable core sections for installation flexibility while maintaining the performance of a solid core when assembled.
Common-Mode Current Measurement
When all conductors of a cable or cord pass through a current probe aperture, the probe measures only the common-mode current. Differential-mode currents flow in opposite directions on different conductors and cancel in the transformer core, while common-mode currents flow in the same direction on all conductors and add. This makes current probes ideal for identifying common-mode emission problems, which are often the dominant source of both conducted and radiated emissions.
The relationship between common-mode current and emission levels can be calculated knowing the cable length and configuration. A cable carrying common-mode current acts as a monopole antenna, and the emission intensity is proportional to both the current and the electrical length of the cable. Reducing common-mode current directly reduces both conducted and radiated emissions. Current probes enable engineers to monitor the effectiveness of filtering and grounding changes in real time during troubleshooting.
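A commonly cited first-order model (an electrically short cable over a ground plane, treated as a small monopole) makes this relationship concrete. The constant and formula below follow that textbook approximation and should be treated as an estimate, not a compliance calculation:

```python
def cm_radiated_field_uv_per_m(f_hz, i_cm_ua, cable_len_m, dist_m=3.0):
    """First-order estimate of the maximum E-field from common-mode current
    on an electrically short cable (monopole-like radiation model):
        |E| ~ 1.257e-6 * f * I * L / d   (V/m; f in Hz, I in A, L and d in m)
    """
    e_v_per_m = 1.257e-6 * f_hz * (i_cm_ua * 1e-6) * cable_len_m / dist_m
    return e_v_per_m * 1e6   # convert V/m to uV/m

# e.g. 5 uA of common-mode current at 30 MHz on a 1 m cable, 3 m distance
print(f"{cm_radiated_field_uv_per_m(30e6, 5.0, 1.0):.1f} uV/m")
```

Under this model, just 5 microamps of common-mode current at 30 MHz on a 1 meter cable produces an estimated 63 microvolts per meter at 3 meters, already close to typical Class B radiated limits, which illustrates why a few microamps of common-mode current matter.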
Probe Calibration and Usage
Current probe calibration establishes the transfer impedance versus frequency characteristic required to convert voltage measurements to current values. Calibration is typically performed by passing a known current through the probe and measuring the output voltage at multiple frequencies across the operating range. The calibration data may be provided as a table, a graph, or correction factors to be applied to measurements.
Proper usage requires attention to probe positioning and cable routing. The conductor should pass through the center of the aperture perpendicular to the core face for maximum sensitivity and minimum position sensitivity. Multiple turns of the conductor through the aperture increase sensitivity proportionally but also increase the probe's self-capacitance and cable loading. The probe output must be terminated in the specified load impedance, typically 50 ohms, for accurate measurements.
Voltage Probe Methods
Voltage probes offer direct measurement of conducted emissions by sampling the voltage on power lines with minimal loading. Unlike LISN-based measurements that inherently provide impedance stabilization, voltage probe methods require separate consideration of the source impedance. These methods are particularly useful for measurements at frequencies above the standard LISN range, for measurements on DC power systems, and for diagnostic investigations where flexibility outweighs the need for standardized conditions.
A basic voltage probe consists of a high-impedance divider network that reduces the mains voltage to levels safe for the measurement equipment while maintaining calibrated attenuation at radio frequencies. The probe must withstand the full mains voltage continuously while presenting minimal loading to the circuit and maintaining flat frequency response across the measurement range. Capacitive dividers are commonly used because they provide inherently flat frequency response and high input impedance at low frequencies.
Capacitive Voltage Probes
Capacitive voltage probes use a series capacitor to couple high-frequency signals while blocking the DC and low-frequency mains voltage. A resistive termination at the measurement end provides the load for the capacitive divider and matches the cable impedance. The coupling capacitor value determines both the low-frequency cutoff and the division ratio. Larger capacitors provide lower cutoff frequencies but higher capacitive loading on the source.
The voltage division ratio of a capacitive probe depends on the ratio of the coupling capacitor reactance to the termination resistance at each frequency. At frequencies where the capacitive reactance is much smaller than the termination resistance, the division ratio approaches unity (zero attenuation). At lower frequencies where the capacitive reactance becomes significant, attenuation increases. The frequency response must be characterized through calibration and applied as correction factors to measurements.
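This high-pass behavior follows directly from the divider formed by the coupling capacitor and the termination. The sketch below, using illustrative values of 1 nanofarad and 50 ohms, evaluates the attenuation across the conducted emission range:

```python
import numpy as np

def cap_probe_attenuation_db(f_hz, c_couple=1e-9, r_term=50.0):
    """Attenuation of a capacitive voltage probe: the coupling capacitor
    in series with the termination resistance forms a high-pass divider."""
    h = r_term / (r_term + 1.0 / (1j * 2 * np.pi * f_hz * c_couple))
    return -20 * np.log10(np.abs(h))

for f in [150e3, 1e6, 10e6, 30e6]:
    print(f"{f/1e6:6.2f} MHz: {cap_probe_attenuation_db(f):5.1f} dB")
```

With these values the probe attenuates about 26.5 dB at 150 kHz but is essentially transparent (under 0.1 dB) at 30 MHz, which is why the calibrated frequency response must be applied as a correction across the band.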
Active Voltage Probes
Active voltage probes incorporate amplifier circuits to provide high input impedance, controlled gain, and signal conditioning for optimal measurement performance. The active circuitry can provide frequency response shaping to extend the usable bandwidth and compensate for cable losses. Active probes require power, either from batteries or from the measurement equipment, and introduce their own noise floor that limits sensitivity to very low-level emissions.
Differential active probes measure the voltage difference between two points, rejecting common-mode voltages that might otherwise overload the measurement system or corrupt results. This capability is valuable for measuring differential-mode emissions in the presence of large common-mode voltages. The common-mode rejection ratio (CMRR) specification indicates how well the probe rejects common-mode signals, with higher values indicating better rejection.
Spectrum Analyzer Setup
Spectrum analyzers provide the frequency-selective measurement capability essential for conducted emission testing. The analyzer sweeps across the frequency range of interest, measuring the signal amplitude at each frequency and displaying the result as a spectrum showing amplitude versus frequency. Proper analyzer configuration ensures accurate, repeatable measurements that can be compared against emission limits and between different test sessions.
Modern spectrum analyzers offer numerous configuration options that must be set appropriately for EMC measurements. The frequency range, resolution bandwidth, video bandwidth, sweep time, reference level, and detector type all affect the measurement results. Standards such as CISPR 16-1-1 specify many of these parameters for compliance testing, while diagnostic measurements may use different settings optimized for speed or sensitivity. Understanding the implications of each setting enables effective use of the analyzer for both compliance and troubleshooting purposes.
Frequency Range and Span
The frequency range for conducted emission measurements typically spans from 150 kHz to 30 MHz for power line measurements, though some standards require measurements starting at 9 kHz or extending to higher frequencies. The analyzer's start and stop frequencies should be set to cover the required range with some margin to capture emissions near the band edges. Multiple sweeps at different span settings may be used to provide both overview and detailed examination of problem frequencies.
Narrow span settings provide higher frequency resolution and can reveal closely spaced spectral components that might merge together in a wide span view. However, narrow spans require longer sweep times to maintain measurement accuracy. For initial characterization, a full span sweep identifies the major emission peaks, followed by narrower spans centered on problem frequencies for detailed investigation. Zoom features in modern analyzers allow rapid changes between overview and detail views.
Reference Level and Dynamic Range
The reference level sets the amplitude at the top of the analyzer display and determines the input attenuator setting. Setting the reference level too high reduces sensitivity and may cause small emissions to fall below the noise floor. Setting it too low risks overloading the analyzer with large signals, causing distortion and erroneous measurements. The optimal reference level places the largest expected signal near the top of the display while maintaining adequate headroom to prevent overload.
The dynamic range of the measurement is the ratio between the largest signal that can be measured without distortion and the smallest signal that can be distinguished from the noise floor. Increasing the reference level increases the maximum measurable signal but raises the noise floor proportionally, maintaining approximately constant dynamic range. Preamplifiers can improve sensitivity for low-level measurements, while additional attenuation prevents overload when measuring high-level emissions.
Sweep Time and Triggering
Sweep time determines how quickly the analyzer moves through the frequency range. Faster sweeps complete measurements more quickly but may miss short-duration or intermittent emissions. The minimum sweep time depends on the resolution bandwidth and the span; narrower resolution bandwidths require longer sweep times for accurate measurements. Most analyzers indicate when the sweep time is too fast for accurate results.
Triggering options control when the analyzer begins its sweep. Free-run mode sweeps continuously, appropriate for steady-state emissions. Video triggering starts the sweep when the signal exceeds a threshold, useful for capturing transient emissions. External triggering synchronizes the sweep to events in the equipment under test, enabling correlation between equipment operation and emission signatures. Gated sweeps measure only during specific time windows, useful for isolating emissions from particular operational states.
Receiver Detector Functions
EMI receivers and spectrum analyzers provide multiple detector functions that process the incoming signal in different ways, each suited to different measurement objectives. The detector type significantly affects the measured amplitude of pulsed or modulated emissions, so selecting the appropriate detector is essential for correct results. Standards specify which detectors to use and how to apply their readings against emission limits.
The peak detector captures the maximum signal level during the measurement dwell time at each frequency point. It responds to the instantaneous maximum regardless of signal duration or repetition rate. Peak measurements are fast and never underestimate the true peak level, making them useful for quick assessments. However, peak values may overestimate the actual interference potential of pulsed signals, leading to apparently non-compliant results for equipment that would actually pass with the appropriate detector.
Quasi-Peak Detection
The quasi-peak (QP) detector was developed specifically for EMC measurements to weight emissions according to their perceived annoyance factor in analog radio and television reception. The quasi-peak detector incorporates specific charge and discharge time constants that cause its output to depend on both the amplitude and repetition rate of pulsed signals. Continuous signals read the same on peak and quasi-peak detectors, but pulsed signals read lower on the quasi-peak detector, with greater reduction for lower repetition rates.
CISPR standards define the quasi-peak detector characteristics for different frequency bands. In Band A (9 kHz to 150 kHz), the charge time constant is 45 milliseconds and the discharge time constant is 500 milliseconds. In Band B (150 kHz to 30 MHz), these values are 1 millisecond and 160 milliseconds respectively. These parameters determine how the detector responds to different pulse patterns and are critical for obtaining correct quasi-peak measurements. Only calibrated EMI receivers with certified quasi-peak detectors should be used for compliance measurements.
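The effect of these time constants can be illustrated with a toy simulation. The sketch below applies a simplified charge/discharge model (it omits the meter time constant of a real quasi-peak detector) to one-sample impulses at two repetition rates, using the Band B constants:

```python
import numpy as np

def quasi_peak(envelope, fs_hz, tau_charge=1e-3, tau_discharge=160e-3):
    """Toy quasi-peak detector on a detected envelope, assuming CISPR
    Band B time constants (charge 1 ms, discharge 160 ms). The state
    charges toward the input when the input exceeds it and discharges
    toward zero otherwise."""
    dt = 1.0 / fs_hz
    a_c = dt / tau_charge
    a_d = dt / tau_discharge
    out = np.zeros_like(envelope)
    state = 0.0
    for i, x in enumerate(envelope):
        if x > state:
            state += a_c * (x - state)
        else:
            state -= a_d * state
        out[i] = state
    return out

fs = 1e5                       # envelope sample rate, Hz
n = int(0.5 * fs)              # simulate 0.5 s
for prf in (100.0, 10.0):
    env = np.zeros(n)
    env[:: int(fs / prf)] = 1.0   # unit impulses at the repetition rate
    qp = quasi_peak(env, fs)
    print(f"PRF {prf:5.1f} Hz: peak = {env.max():.2f}, quasi-peak = {qp.max():.2f}")
```

The output shows the quasi-peak reading falling well below the peak value of 1.0, and falling further as the repetition rate drops from 100 Hz to 10 Hz, exactly the weighting behavior the detector was designed to provide.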
Quasi-peak measurements are time-consuming because the detector must dwell at each frequency long enough for the time constants to settle. A full quasi-peak scan of the conducted emission range can take many minutes, compared to seconds for a peak scan. Common practice uses peak detection for initial scans and applies quasi-peak measurement only to frequencies where the peak value approaches or exceeds the limit. Since quasi-peak values are always equal to or less than peak values, this approach identifies all potential problem frequencies while minimizing measurement time.
Average Detection
The average detector computes the mathematical average of the signal envelope over the measurement time. For continuous wave signals, peak, quasi-peak, and average detectors all give the same reading. For pulsed signals, the average value depends on the duty cycle and can be substantially lower than the peak value. Some emission standards specify average limits in addition to or instead of quasi-peak limits, particularly for equipment that produces modulated or pulsed emissions.
Average measurements are faster than quasi-peak measurements while still providing weighting for pulsed signals. The video bandwidth setting affects average measurements by filtering the detected signal before display; narrower video bandwidths provide more averaging but require longer sweep times. Some standards specify video bandwidth settings for average measurements to ensure consistent results. Modern analyzers often provide simultaneous peak and average traces, enabling rapid identification of signals where the average value might provide compliance margin.
RMS and Other Detectors
Root-mean-square (RMS) detectors measure the power content of the signal rather than its envelope. For sinusoidal signals, the RMS value is 3 dB below the peak value. For complex signals with varying crest factors, RMS measurements provide information about the actual power that relates to heating effects and interference potential. Some specialized standards and certain customer requirements specify RMS measurements.
EMI receivers may also include specialized detectors such as the CISPR average detector, which has specific response characteristics for amplitude-modulated and intermittent signals, and the RMS-average detector defined in CISPR 16-1-1 for weighting emissions against digital radio services. Understanding which detector is appropriate for each measurement scenario and how to interpret the results ensures accurate characterization of emission levels and correct compliance determinations.
Measurement Bandwidth Selection
The measurement bandwidth, also called resolution bandwidth (RBW), determines the frequency resolution of the measurement and affects the measured amplitude of broadband noise. CISPR standards specify the bandwidths to be used at different frequencies to ensure consistent, reproducible results. Using incorrect bandwidths leads to erroneous measurements that cannot be compared to limits or to results from other laboratories.
In CISPR Band B (150 kHz to 30 MHz), the specified measurement bandwidth is 9 kHz, defined at the minus 6 dB points of the filter response. This bandwidth is wide enough to capture the energy of typical impulse-type emissions while providing adequate frequency resolution to separate discrete spectral components. EMI receivers implement it with a near-Gaussian filter shape, which provides good selectivity with minimal ringing in response to impulsive signals.
Bandwidth and Noise Measurement
The measured level of broadband noise depends directly on the measurement bandwidth. Noise power increases by 3 dB for each doubling of bandwidth, or equivalently by 10 dB per decade of bandwidth increase. This relationship allows conversion of noise measurements between different bandwidths when necessary for comparison or diagnostic purposes. The bandwidth correction is L_new = L_old + 10 log10(BW_new / BW_old).
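As a minimal sketch, the correction is a one-line function; here it refers a reading taken with a generic 10 kHz analyzer bandwidth to the 9 kHz CISPR bandwidth (the levels are illustrative):

```python
import math

def bandwidth_corrected_level(level_dbuv, bw_old_hz, bw_new_hz):
    """Convert a broadband-noise level measured in one bandwidth to the
    level expected in another: L_new = L_old + 10*log10(BW_new / BW_old).
    Valid for noise-like (broadband) emissions only."""
    return level_dbuv + 10 * math.log10(bw_new_hz / bw_old_hz)

# e.g. noise measured as 40 dBuV in a 10 kHz RBW, referred to the 9 kHz CISPR RBW
print(f"{bandwidth_corrected_level(40.0, 10e3, 9e3):.2f} dBuV")   # about 39.54
```

Note that the correction applies only to the noise-like portion of the spectrum; as the next paragraph explains, narrowband components do not scale with bandwidth.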
Narrowband signals such as clock harmonics and switching frequency harmonics are not affected by bandwidth changes (within reasonable limits where the signal fits within the filter passband). This distinction between narrowband and broadband behavior helps identify the nature of emissions. Narrowband emissions appear as constant-level peaks that do not change with bandwidth, while broadband emissions rise and fall with bandwidth changes. Mixed emissions from equipment with both clock circuits and switching supplies display both characteristics.
CISPR Bandwidth Requirements
CISPR 16-1-1 specifies the measurement bandwidths and filter characteristics for each frequency band. Band A (9 kHz to 150 kHz) uses a 200 Hz bandwidth with specific pulse response characteristics. Band B (150 kHz to 30 MHz) uses the 9 kHz bandwidth. Bands C and D for higher frequencies use different bandwidths appropriate to those frequency ranges. The filter must meet specifications for shape factor and impulse response in addition to the nominal bandwidth.
Commercial spectrum analyzers often provide configurable resolution bandwidths in a standard sequence (1, 3, 10, 30, 100 Hz, etc.). The CISPR bandwidths of 200 Hz and 9 kHz are not in this standard sequence, so EMC-specific analyzers or EMI receivers include these specific filter options. Using a close approximation (such as 10 kHz instead of 9 kHz) introduces measurement error that can cause incorrect compliance determinations. For compliance testing, only equipment with the exact CISPR bandwidths should be used.
Ambient Noise Management
Ambient electromagnetic noise from the power mains, nearby equipment, and broadcast signals can interfere with conducted emission measurements, potentially masking the equipment's true emissions or creating false readings. Managing ambient noise is essential for obtaining accurate measurements. The LISN provides isolation from mains-borne noise, but additional measures are often necessary to achieve a measurement floor low enough to verify compliance margins.
Before measuring the equipment under test, a background noise measurement with the equipment powered off (but the LISN energized) establishes the ambient noise floor. This measurement identifies frequencies where ambient noise might affect results and provides a reference for interpreting measurements. The ambient level should be at least 6 dB below the lowest limit to ensure that equipment emissions can be accurately measured to the limit level.
Shielded Room Measurements
Shielded rooms provide the ultimate solution for ambient noise control by enclosing the measurement in a conductive enclosure that blocks external electromagnetic fields. A properly constructed shielded room can provide 80 to 120 dB of shielding effectiveness, reducing ambient signals to negligible levels. Power entering the room passes through filtered feedthroughs that block conducted noise on the mains while allowing power delivery.
Shielded rooms require proper design and construction to achieve their rated performance. Doors, ventilation penetrations, and cable entries must maintain shield integrity through the use of finger stock contacts, waveguide-below-cutoff tubes, and filtered connectors. Periodic verification of shielding effectiveness ensures that degradation from mechanical wear, corrosion, or modifications is detected and corrected. The investment in a shielded room is justified for laboratories performing frequent compliance testing.
Test Site Selection and Preparation
When shielded rooms are not available, careful site selection and preparation can reduce ambient noise to manageable levels. Industrial environments with heavy electrical equipment, variable frequency drives, and fluorescent lighting tend to have high ambient noise. Residential and commercial office environments are generally quieter. Testing during off-hours when nearby equipment is not operating can further reduce ambient levels.
Additional filtering on the mains supply feeding the measurement area can reduce conducted ambient noise. A line filter or isolation transformer upstream of the LISN attenuates noise from the facility power system. Ferrite clamps on cables entering the measurement area reduce conducted noise on signal connections. Positioning the measurement setup away from obvious noise sources such as computer monitors, switching supplies, and communications equipment minimizes direct coupling.
Ambient Correction Procedures
When ambient noise levels approach or exceed the equipment's emissions at certain frequencies, correction procedures can estimate the equipment's true emission level. If the combined level (equipment plus ambient) and the ambient-only level are both measured, the equipment contribution can be calculated by logarithmic subtraction. However, this procedure becomes increasingly uncertain as the equipment level approaches the ambient level, and most standards consider measurements invalid if the equipment level is not at least 6 dB above ambient.
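The logarithmic (power) subtraction is straightforward to implement; this sketch assumes the ambient and equipment contributions are uncorrelated and add on a power basis:

```python
import math

def eut_level_dbuv(combined_dbuv, ambient_dbuv):
    """Estimate the EUT-only emission by power subtraction of the ambient
    from the combined measurement. Assumes uncorrelated contributions."""
    p_combined = 10 ** (combined_dbuv / 10)
    p_ambient = 10 ** (ambient_dbuv / 10)
    if p_combined <= p_ambient:
        raise ValueError("combined level not above ambient; result undefined")
    return 10 * math.log10(p_combined - p_ambient)

print(f"{eut_level_dbuv(46.0, 40.0):.1f} dBuV")   # combined 6 dB above ambient
```

With the combined level just 6 dB above ambient, the corrected estimate is already about 1.3 dB below the raw reading, and the correction grows rapidly less certain below that margin, which is why standards set the 6 dB validity threshold.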
Documentation of ambient conditions is required for compliance test reports. The report should identify the measurement environment, describe any ambient noise reduction measures employed, and include ambient measurement data. Frequencies where ambient levels might have affected results should be identified. If compliance cannot be determined at certain frequencies due to ambient noise, the report should state this limitation and recommend retesting in a lower-noise environment if necessary.
Automated Test Systems
Automated test systems increase the efficiency and repeatability of conducted emission measurements by controlling measurement equipment, managing test sequences, and processing results according to programmed parameters. Automation is particularly valuable for production testing where many units must be measured, for compliance testing where extensive scans and detector comparisons are required, and for research and development where parameter studies generate large volumes of data.
A typical automated conducted emission test system includes a spectrum analyzer or EMI receiver, one or more LISNs, switching networks for measuring multiple power conductors, a controller computer, and software that orchestrates the measurement sequence. The software configures the measurement equipment, controls frequency sweep parameters, selects detectors and bandwidths, applies calibration corrections, compares results to limits, and generates reports. User interface screens guide the operator through equipment setup and test execution.
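A minimal control loop can be sketched with a generic VISA connection. The SCPI commands and instrument address below are illustrative placeholders; real EMI receivers use vendor-specific command sets that must be taken from the instrument's programming manual:

```python
# Minimal automated pre-scan sketch using pyvisa; commands are illustrative.
import pyvisa

rm = pyvisa.ResourceManager()
rx = rm.open_resource("TCPIP0::192.168.1.50::INSTR")   # hypothetical address

rx.write(":FREQ:START 150kHz")
rx.write(":FREQ:STOP 30MHz")
rx.write(":BAND:RES 9kHz")         # CISPR Band B bandwidth
rx.write(":DET POS")               # peak detector for the fast pre-scan
rx.write(":INIT:IMM;*WAI")         # trigger a sweep and wait for completion

trace = [float(v) for v in rx.query(":TRAC:DATA? TRACE1").split(",")]
print(f"{len(trace)} points, max {max(trace):.1f} dBuV")
```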
Hardware Components
The EMI receiver or spectrum analyzer forms the core of the measurement system and must meet the specifications required by applicable standards. Modern receivers include computer interfaces (GPIB, USB, or Ethernet) for automated control and data transfer. Multiple measurement modes and detectors enable both fast scanning and compliant measurements. Memory depth and sweep speed determine how quickly full measurements can be completed.
LISN switching networks allow a single receiver to measure conducted emissions on multiple power conductors without manual cable changes. Electromechanical switches or solid-state relays under computer control select which LISN output connects to the receiver input. The switching network must maintain 50-ohm impedance and provide adequate isolation between channels to prevent leakage from degrading measurements. Unused LISN ports must be terminated in 50 ohms, often through termination ports built into the switch matrix.
Additional hardware may include programmable power sources that can vary voltage and frequency to test equipment under different power conditions, environmental chambers for temperature-varied testing, and antenna positioning systems for radiated emission measurements. Integration of all these elements under unified software control enables complex test sequences that would be impractical to perform manually.
Software Features
Test automation software provides functions for defining test configurations, executing measurement sequences, processing data, and generating reports. Configuration tools allow users to specify frequency ranges, bandwidths, detectors, limits, and other parameters for each measurement type. Test sequence editors enable creation of automated procedures that perform multiple measurements and comparisons without operator intervention.
Data processing features include calibration correction, limit comparison, margin calculation, and statistical analysis. Graphical displays present results as spectra, tables, and pass/fail indicators. Historical data storage and retrieval support trend analysis and test-to-test comparisons. Limit lines from various standards are included or can be entered by users. Automatic determination of worst-case frequencies and optimization of measurement time through peak-to-quasi-peak correlation are sophisticated features in advanced systems.
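Limit comparison is typically performed by interpolating the limit line on a logarithmic frequency axis, as the standards specify for segments defined between breakpoints. The sketch below uses a limit line shaped like the familiar Class B quasi-peak mains-port limit (66 dBuV at 150 kHz falling with log-frequency to 56 dBuV at 500 kHz, flat to 5 MHz, then 60 dBuV to 30 MHz); verify the values against the current standard before relying on them:

```python
import numpy as np

# Example limit line (quasi-peak, dBuV); the tiny 1 Hz offset creates the
# step at 5 MHz without duplicate interpolation points.
limit_f = np.array([150e3, 500e3, 5e6, 5e6 + 1, 30e6])
limit_dbuv = np.array([66.0, 56.0, 56.0, 60.0, 60.0])

def limit_at(f_hz):
    """Limit interpolated on a log-frequency axis."""
    return np.interp(np.log10(f_hz), np.log10(limit_f), limit_dbuv)

def margins(freqs_hz, levels_dbuv):
    """Positive margin = below the limit; negative = over the limit."""
    return limit_at(np.asarray(freqs_hz)) - np.asarray(levels_dbuv)

print(margins([200e3, 1e6, 12e6], [60.0, 50.0, 63.0]))   # ~[3.6, 6.0, -3.0]
```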
Report generation produces formatted documentation meeting the requirements of test standards and accreditation bodies. Reports include equipment identification, test configuration details, calibration status, ambient conditions, measurement data, limit comparisons, and pass/fail determinations. Export functions enable data transfer to other analysis tools, databases, and customer formats. Secure electronic signatures and audit trails support regulatory requirements for test record integrity.
Calibration and Uncertainty
Automated systems must incorporate calibration data for all elements in the measurement chain. LISN voltage division factors, cable losses, switching network losses, and receiver calibration all contribute to the overall measurement uncertainty. The software applies these corrections automatically so that displayed results represent the actual emission levels at the equipment terminals. Periodic recalibration ensures continued accuracy as components age.
Measurement uncertainty analysis quantifies the confidence in measurement results. Contributions from each element in the measurement chain combine according to statistical rules to give an overall uncertainty figure. Standards such as CISPR 16-4-2 provide guidance on uncertainty calculation for EMC measurements. Compliance determinations must consider measurement uncertainty; a measurement that falls within the uncertainty range of the limit may not clearly demonstrate compliance or non-compliance.
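A combined uncertainty is typically formed by root-sum-square of the standard uncertainties and expanded with a coverage factor of 2 for approximately 95 percent confidence. The budget entries and values below are placeholders for illustration, not a normative budget:

```python
import math

# Illustrative uncertainty budget (standard uncertainties, dB)
budget_db = {
    "receiver accuracy": 1.0,
    "LISN division factor": 0.5,
    "cable/switch loss": 0.3,
    "mismatch": 0.7,
    "repeatability": 0.5,
}

u_c = math.sqrt(sum(u ** 2 for u in budget_db.values()))   # combined (RSS)
U = 2 * u_c                                                # expanded, k=2 (~95 %)
print(f"combined u_c = {u_c:.2f} dB, expanded U = {U:.2f} dB")
```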
Test Setup Procedures
Consistent test setup procedures ensure reproducible measurements that can be compared across test sessions and between laboratories. Standards such as CISPR 16-2-1 specify setup requirements including equipment positioning, cable arrangement, ground plane dimensions, and LISN placement. Following these procedures is mandatory for compliance testing and recommended for pre-compliance measurements to ensure correlation between development and final test results.
The equipment under test (EUT) is placed on a non-conductive table at a specified height (typically 40 cm for tabletop equipment) above a ground reference plane. The ground plane is a metal sheet with minimum dimensions specified by the standard, typically 2 meters by 2 meters, bonded to the LISN ground terminal. The LISN is positioned at the edge of the ground plane with its power output closest to the EUT. Power cables are arranged in a specified bundle configuration with defined length and routing.
Equipment Configuration
The EUT must be configured to represent typical operating conditions likely to produce maximum emissions. For equipment with multiple operating modes, the mode producing highest emissions must be identified and tested. Peripheral devices and accessories normally used with the equipment should be connected during testing. Loading conditions for power supplies and outputs should reflect worst-case scenarios, as load current often affects emission levels.
Documentation of the EUT configuration is essential for report completeness and test reproducibility. Photographs showing the physical arrangement of equipment, cables, and test apparatus supplement written descriptions. Software versions, option settings, and any non-standard configurations should be recorded. Any deviations from standard setup requirements must be documented and justified, as they may affect the validity of results for compliance purposes.
Cable and Interconnect Handling
Cables associated with the EUT significantly affect conducted emission measurements. Power cables should be of specified length (typically 1 meter) or the standard length supplied with the equipment, bundled in a defined configuration. Excess cable length is arranged in a flat bundle at a specified height above the ground plane. Signal cables connecting to auxiliary equipment should be of minimum necessary length to reduce their influence on measurements.
The routing of cables relative to the LISN and ground plane affects measurements through capacitive and inductive coupling. Standards specify cable positions to minimize these effects and ensure reproducibility. Cables should not drape over the edge of the ground plane or approach the LISN too closely except at the power connection. Where multiple cables exit the EUT, they should be separated according to function and positioned consistently for each test.
Troubleshooting Emission Problems
When conducted emission measurements reveal levels exceeding limits or approaching limits too closely for comfort, systematic troubleshooting identifies the sources and mechanisms responsible. The emission spectrum provides the first diagnostic information: discrete peaks at harmonics of a fundamental frequency suggest switching or clock sources, while broadband noise indicates more distributed sources or resonances. The frequency of the strongest emissions narrows the range of potential sources.
Current probes provide invaluable diagnostic information by measuring the actual currents flowing in power cables, ground connections, and internal wiring. Comparing common-mode and differential-mode currents identifies the dominant emission mechanism. Measuring at multiple points traces the current path from source to cable exit. Changes in current distribution when modifications are made verify the effectiveness of filtering and grounding changes before full spectrum measurements confirm improvement.
Source Identification
Emissions at the switching frequency and its harmonics point to the power converter as the source. The severity of these emissions depends on switch edge rates, power level, and input filter effectiveness. Emissions at clock frequencies and their harmonics originate in digital circuits, with strength depending on the number of gates switching simultaneously and the power distribution network impedance. Identifying the fundamental frequency of harmonic-related emissions reveals the source.
Systematically disabling circuit sections while monitoring emissions isolates the source more directly. Removing clock signals, stopping processors, or disconnecting loads eliminates their contribution to emissions. When isolation is not possible, shielding test sections with grounded metal foil helps identify which areas contribute most to emissions. Near-field probes can locate hot spots on circuit boards where high-frequency currents concentrate.
Mitigation Verification
After implementing modifications intended to reduce emissions, measurements verify effectiveness. Compare spectra before and after the change at the same test conditions to isolate the effect of the modification. Improvements at the target frequencies should not come at the cost of degradation at other frequencies. Multiple modifications may interact, so systematic change management with measurements after each step builds understanding of the system's behavior.
When modifications prove effective in reducing emissions, document the changes thoroughly for incorporation into production units. Component values, placement, and assembly procedures all affect EMC performance. Validation testing on multiple units confirms that the fix is robust and reproducible. Margin testing that intentionally stresses the system verifies that adequate performance remains under worst-case conditions likely to be encountered in production variation and customer use.
Best Practices and Common Pitfalls
Successful conducted emission measurement requires attention to details that might seem minor but can significantly affect results. Cable dressing, ground plane bonding, ambient conditions, and equipment warm-up all influence measurements. Developing consistent procedures and checklists ensures that nothing is overlooked and that results are reproducible from test to test.
Common pitfalls include using incorrect bandwidth settings, failing to account for cable and switching losses, overlooking the need for LISN port termination, and not allowing adequate warm-up time for equipment under test. Ambient noise from the laboratory environment can mask true emissions or create false peaks. Ground loops between measurement equipment and the LISN can introduce errors. Awareness of these issues and procedures to avoid them improve measurement quality.
Pre-Measurement Checklist
Before beginning measurements, verify that all equipment is properly connected and configured. Confirm that LISNs are bonded to the ground plane with low-impedance connections. Verify that unused LISN measurement ports are terminated in 50 ohms. Check that the spectrum analyzer or receiver is configured for the correct frequency range, bandwidth, and detector. Ensure that calibration is current and corrections are applied. Power the EUT and allow it to reach thermal equilibrium in its normal operating mode.
Perform an ambient noise measurement with the EUT powered off to establish the measurement floor. Compare ambient levels to the applicable limits and determine whether ambient noise might affect results at any frequencies. Document ambient conditions including temperature, humidity, and any unusual sources of electromagnetic noise present in the test environment. Take photographs of the test setup showing equipment arrangement, cable routing, and LISN positioning.
Data Quality Verification
After measurements, review the data for anomalies that might indicate measurement errors. Sudden jumps in the spectrum, unexplained peaks, or unusually high noise floors warrant investigation. Compare results to previous measurements of similar equipment to identify any unexpected differences. Check that calibration corrections were properly applied and that the frequency and amplitude scales are correct.
Repeated measurements verify consistency. Significant variation between measurements suggests setup instability, intermittent equipment behavior, or changing ambient conditions. Statistical analysis of multiple measurements provides confidence intervals for emission levels and helps distinguish true emissions from measurement artifacts. For compliance testing, multiple scans with different detectors and at multiple operating conditions build a complete picture of the equipment's emission characteristics.
Summary
Accurate conducted emission measurement requires proper equipment, standardized procedures, and careful attention to the many factors that affect measurement results. The LISN provides the defined impedance and coupling essential for reproducible measurements. Spectrum analyzers and EMI receivers with appropriate detectors quantify emission levels for comparison against regulatory limits. Current probes enable diagnostic measurements for troubleshooting. Automated systems improve efficiency and consistency for high-volume testing.
Success in conducted emission measurement comes from understanding the underlying principles, following standardized procedures, and maintaining vigilance against the many sources of measurement error. The investment in proper measurement capability pays dividends through reliable compliance determinations, efficient troubleshooting, and confidence that products will perform in the electromagnetic environment they encounter. As electronic equipment continues to increase in complexity and operating frequencies, the importance of accurate emission measurement grows correspondingly.