Electronics Guide

mm-Wave Measurements

Accurate characterization of millimeter-wave systems presents some of the most challenging measurement problems in electronics. At frequencies from 30 GHz to 300 GHz, wavelengths range from 10 mm down to 1 mm, making physical dimensions, connector repeatability, cable stability, and environmental factors critically important. Measurements that are routine at lower frequencies require sophisticated calibration procedures, specialized test fixtures, and careful attention to sources of uncertainty that can easily dominate the signal of interest.

The fundamental challenge of mm-wave measurements stems from the fact that nearly everything in the measurement system—connectors, cables, probes, fixtures, and even the air gap between surfaces—represents an electrically significant length. A 1 mm dimension that seems negligibly small represents a full wavelength at 300 GHz, creating phase shifts of 360 degrees and potentially introducing resonances, reflections, and coupling effects. This article addresses the critical techniques and considerations for accurate mm-wave system characterization, from VNA calibration and probe station setup through de-embedding, fixturing, and uncertainty analysis.

Vector Network Analyzer Calibration at mm-Wave

Vector network analyzer (VNA) calibration establishes the reference planes for measurements and removes systematic errors introduced by the test equipment itself. At millimeter-wave frequencies, calibration becomes substantially more complex and critical than at lower frequencies. The calibration process defines the measurement reference plane—typically at the end of a coaxial connector, at wafer probe tips, or at the input to a test fixture—and characterizes the systematic errors of everything between the VNA's internal calibration plane and this external reference.

Calibration Standards and Methods

Standard calibration techniques include Short-Open-Load-Thru (SOLT), Thru-Reflect-Line (TRL), and Line-Reflect-Match (LRM). Each method has specific advantages and limitations in the mm-wave regime:

  • SOLT calibration relies on precisely characterized short, open, load, and thru standards. At mm-wave frequencies, opens can couple to nearby structures, shorts may not achieve perfect reflection due to skin effect and surface roughness, and loads must maintain 50-ohm impedance across extremely wide bandwidths. SOLT works well with high-quality coaxial connectors but becomes increasingly challenging as frequency increases.
  • TRL calibration requires a thru connection, a reflective standard (usually a short or open), and one or more precision transmission lines of specific lengths. TRL is particularly effective at mm-wave because it does not require a precisely characterized load standard: the calibration reference impedance is set by the characteristic impedance of the line standard, and the reflect standard need only present the same high-magnitude reflection at both ports. The line standards must differ in electrical length from the thru by approximately 90 degrees at the center of the calibration band, which translates to very small physical differences at high frequencies.
  • LRM calibration uses line, reflect, and match standards and is particularly useful when manufacturing repeatable line standards is difficult. It requires a well-characterized match standard but can accommodate less-than-perfect reflect standards.

Multiline TRL (mTRL) extends TRL by using multiple line standards at different lengths, improving calibration accuracy across wider bandwidths and providing redundancy that helps identify problems with individual standards. This approach is particularly valuable at mm-wave frequencies where a single line standard cannot provide 90-degree phase shift across the entire band of interest.
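As a rough check, the extra line length needed for a 90-degree offset at band center is a quarter of the guided wavelength. A minimal sketch, where the eps_eff value is an assumed effective permittivity rather than a property of any particular calibration substrate:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def trl_line_delta(f_center_hz: float, eps_eff: float) -> float:
    """Extra physical length (vs. the thru) giving a 90-degree phase
    difference at band center: one quarter of the guided wavelength."""
    lambda_guided = C0 / (f_center_hz * math.sqrt(eps_eff))
    return lambda_guided / 4.0

# Example: a line with assumed eps_eff of 5.5 at a 94 GHz band center
delta_m = trl_line_delta(94e9, 5.5)
print(f"line delta = {delta_m * 1e6:.1f} um")  # → line delta = 340.0 um
```

The shrinking delta at higher band centers is exactly why a single line cannot cover a wide mm-wave band and mTRL's multiple lines become necessary.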

Connector and Interface Repeatability

Connector repeatability becomes a dominant source of measurement uncertainty at mm-wave frequencies. A 1-micron variation in connector mating depth at 100 GHz produces approximately 0.12 degrees of one-way phase shift (about 0.24 degrees in a reflection measurement). High-quality precision connectors such as 1.0 mm, 1.85 mm, 2.4 mm, and 2.92 mm types use careful mechanical tolerances to minimize this variation, but even the best connectors show measurable repeatability limits.
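The scale of this effect follows directly from the free-space wavelength; a short sketch:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def phase_deg_per_micron(f_hz: float, round_trip: bool = False) -> float:
    """Air-line phase shift caused by a 1 um length change.
    round_trip=True doubles it for a reflection (S11) measurement."""
    wavelength_m = C0 / f_hz
    deg = 360.0 * 1e-6 / wavelength_m
    return 2.0 * deg if round_trip else deg

print(f"{phase_deg_per_micron(100e9):.2f} deg/um one-way")        # → 0.12 deg/um one-way
print(f"{phase_deg_per_micron(100e9, True):.2f} deg/um in S11")   # → 0.24 deg/um in S11
```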

Best practices for connector care include:

  • Using calibrated torque wrenches to ensure consistent mating force
  • Regular inspection with microscopes or optical comparators to detect wear, damage, or contamination
  • Careful cleaning procedures using appropriate solvents and lint-free materials
  • Tracking connection cycles and retiring connectors before they reach end-of-life limits
  • Never cross-connecting incompatible connector types that can cause permanent damage

On-wafer measurements eliminate many connector-related issues but introduce their own challenges with probe-to-pad contact repeatability. The overtravel distance, contact force, and probe tip condition all affect measurement repeatability.

Calibration Verification and Validation

After completing calibration, verification measurements confirm that the calibration was successful. Common verification checks include:

  • Measuring the thru standard and confirming S21 magnitude near 0 dB and phase near 0 degrees
  • Measuring a precision airline or verification standard with known characteristics
  • Checking that the calibrated short shows S11 magnitude near 0 dB with phase near 180 degrees
  • Verifying that the load shows S11 magnitude well below -20 dB across the band

These verification steps help identify calibration errors before proceeding to DUT measurements. At mm-wave frequencies, seemingly small calibration errors can produce large measurement uncertainties.
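A small helper can automate checks of this kind across the measured band. The pass/fail thresholds below are illustrative choices, not values from any standard:

```python
import numpy as np

def verify_calibration(freq_ghz, s21_thru_db, s11_load_db,
                       thru_tol_db=0.1, load_max_db=-20.0):
    """Return frequency points where verification fails: thru magnitude
    deviating from 0 dB by more than thru_tol_db, or load reflection
    rising above load_max_db. Thresholds are illustrative only."""
    freq = np.asarray(freq_ghz)
    bad_thru = np.abs(np.asarray(s21_thru_db)) > thru_tol_db
    bad_load = np.asarray(s11_load_db) > load_max_db
    return freq[bad_thru | bad_load]

# Hypothetical verification data at three frequency points (GHz)
print(verify_calibration([75, 90, 110], [0.02, -0.05, -0.3], [-35, -28, -18]))
# → [110]
```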

Probe Station Setup and On-Wafer Measurements

On-wafer measurements using probe stations enable direct characterization of devices, structures, and circuits on semiconductor wafers before packaging. This approach eliminates bond wire and package parasitics from the measurement, providing the most direct access to the device under test. However, probe-based measurements introduce their own set of challenges and require careful attention to setup and calibration.

Probe Types and Configurations

Ground-Signal-Ground (GSG) probes are standard for on-wafer mm-wave measurements, providing a well-controlled coplanar waveguide (CPW) interface to the device. The probe pitch (center-to-center spacing between adjacent signal and ground contacts) must match the DUT's pad layout, with common pitches of 50 µm, 75 µm, 100 µm, 125 µm, and 150 µm. Smaller pitches enable denser pad layouts but require more precise probe positioning and smaller probe tips that may wear more quickly.

Probe selection considerations include:

  • Frequency range: Each probe design has a specified bandwidth, typically extending from DC to 110 GHz, 145 GHz, 220 GHz, or higher. The probe's coaxial-to-CPW transition must maintain low reflection and low loss across this entire range.
  • Impedance: Standard 50-ohm probes are most common, but 75-ohm and differential probes are available for specific applications.
  • Contact force: Adequate contact force ensures low-resistance connections but excessive force can damage probe tips or wafer pads. Typical contact forces range from 2-5 grams per tip.
  • Blade material: Tungsten, beryllium-copper, and other materials offer different tradeoffs between durability, electrical performance, and cost.

Calibration Substrate and ISS Standards

On-wafer calibration typically uses Impedance Standard Substrates (ISS) that provide precision short, open, load, and thru structures fabricated on ceramic or semiconductor substrates. These standards are designed to match the DUT's substrate material and pad configuration as closely as possible, minimizing the effects of different dielectric constants, metal thicknesses, and pad geometries.

The substrate choice matters because electromagnetic fields extend into the substrate material, making the transmission line characteristics dependent on dielectric constant and loss tangent. An ISS calibrated on alumina (εr ≈ 9.9) will not accurately characterize devices on silicon (εr ≈ 11.9) or gallium arsenide (εr ≈ 12.9). Custom ISS substrates matching the DUT substrate provide the most accurate calibration.

Probe Positioning and Planarity

Precise probe positioning is essential for repeatable measurements. The probe station's positioning system must achieve micron-level repeatability in X, Y, and Z axes. The chuck (wafer holder) should be adjustable in tip/tilt to ensure the wafer surface is parallel to the probe travel plane. Poor planarity causes non-uniform contact across the probe tips, leading to variable contact resistance and possible open circuits on some probe tips.

Best practices for probe placement include:

  • Using the probe station's microscope or camera system to align probes precisely to pad centers
  • Approaching the wafer gradually to detect initial contact (visible scrubbing or resistance change)
  • Applying consistent overtravel distance (typically 2-5 µm) to ensure stable contact
  • Checking for probe marks on pads after contact to verify that all tips made contact
  • Regularly cleaning probe tips to remove oxide, contaminants, or deposited material

Environmental Control

Temperature and humidity variations affect both the DUT and the calibration standards. Many probe stations include temperature-controlled chucks to maintain the wafer at a specific temperature, enabling characterization of temperature-dependent effects. The probe station enclosure should provide a stable thermal environment and may include active temperature control to maintain consistent conditions during long measurement sessions.

Humidity control prevents condensation, which can cause electrical leakage paths or corrosion. At mm-wave frequencies, atmospheric absorption becomes significant above 60 GHz, with oxygen absorption peaks near 60 GHz and 119 GHz and a strong water-vapor line near 183 GHz. For measurements in these bands, dry nitrogen purging of the probe station enclosure can reduce measurement uncertainty.

Over-Temperature Testing

Many mm-wave devices and systems must operate across wide temperature ranges, from cryogenic conditions in space applications to high temperatures in automotive or industrial environments. Characterizing device performance across temperature reveals activation energies, thermal coefficients, and potential failure mechanisms while ensuring that specifications are met under all operating conditions.

Temperature-Controlled Test Fixtures

Over-temperature measurements require fixtures that can heat or cool the DUT while maintaining electrical performance. Probe stations with thermal chucks provide precise temperature control from -60°C to +300°C or more. The chuck temperature is controlled via resistive heaters and/or liquid nitrogen cooling, with feedback from thermocouples or resistance temperature detectors (RTDs).

Key considerations for thermal measurements include:

  • Thermal settling time: After changing chuck temperature, adequate time must elapse for the DUT to reach thermal equilibrium. Small devices may stabilize in seconds, while larger structures or packages may require minutes to reach steady-state temperature.
  • Temperature uniformity: Temperature gradients across the DUT can cause measurement variations. High-quality thermal chucks use distributed heating elements and good thermal conductivity to minimize gradients.
  • Probe thermal expansion: Probe positioners expand and contract with temperature, potentially moving probe tips relative to pads. Some systems include compensation mechanisms or require re-positioning probes at each temperature point.
  • Calibration standards temperature: The ISS calibration substrate should ideally be at the same temperature as the DUT. Some advanced systems include temperature-controlled calibration substrates, but more commonly, calibration is performed at room temperature and measurements apply correction factors.

Temperature-Dependent Effects

Temperature affects virtually all device parameters. In semiconductors, carrier mobility decreases with increasing temperature, reducing transistor transconductance and maximum frequency. Bandgap energy decreases with temperature, shifting threshold voltages and leakage currents. In passive structures, conductor resistivity increases with temperature due to increased phonon scattering, increasing insertion loss. Dielectric constant and loss tangent also exhibit temperature dependence, shifting transmission line impedance and propagation velocity.

Proper characterization involves measuring S-parameters, noise figure, output power, and other key metrics at multiple temperature points spanning the intended operating range. The data reveals temperature coefficients that inform circuit design, compensation techniques, and specification limits.
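Extracting a first-order temperature coefficient from such data is a simple linear fit. The insertion-loss values below are invented for illustration:

```python
import numpy as np

# Hypothetical measured S21 insertion loss (dB) of a structure at one
# frequency, across chuck temperatures; values are illustrative only.
temps_c = np.array([-40.0, 0.0, 25.0, 85.0, 125.0])
loss_db = np.array([1.10, 1.22, 1.30, 1.48, 1.60])

# First-order temperature coefficient via a least-squares linear fit
slope, intercept = np.polyfit(temps_c, loss_db, 1)
print(f"temp coefficient: {slope * 1e3:.2f} mdB/degC")  # → 3.03 mdB/degC
print(f"loss at 25 degC: {slope * 25 + intercept:.2f} dB")  # → 1.30 dB
```

In practice the fit is repeated per frequency point, and a residual check flags behavior that a first-order coefficient cannot capture.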

De-embedding at mm-Wave Frequencies

De-embedding removes the effects of test fixtures, probe pads, transmission line feeds, and other parasitic structures from measurements, extracting the intrinsic performance of the device under test. At mm-wave frequencies, even microscopically small structures introduce significant phase shifts, reflections, and losses that must be accurately characterized and removed to obtain meaningful device parameters.

De-embedding Structures and Methods

The most common de-embedding approaches include:

  • Open-Short de-embedding: This simple method uses an open structure (pads without the device) and a short structure (pads with a short circuit) to characterize shunt and series parasitics respectively. The method assumes parasitics can be modeled as simple lumped elements and works reasonably well at lower mm-wave frequencies but becomes less accurate as frequency increases.
  • Open-Short-Load (Thru) de-embedding: Adding a thru or load structure improves accuracy by providing additional information about the parasitic network. The thru structure connects the probe pads directly without the DUT, characterizing the combined effect of all parasitics in the measurement path.
  • Cascade de-embedding: This approach uses ABCD (chain) parameters to mathematically remove cascaded networks representing the access structures on each side of the DUT. It requires careful attention to reference plane definitions and can accumulate numerical errors if the de-embedding structures are not accurately characterized.
  • 2x-Thru de-embedding: Also called self-calibration, this method uses a thru structure that is twice the length of the access structures to each side of the DUT. By assuming symmetry and reciprocity, the technique extracts the access network characteristics and removes them from the DUT measurement. It's particularly effective when creating precise short and open standards is difficult.
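The open-short method above reduces to plain matrix algebra on Y-parameters. This is a minimal sketch assuming the textbook lumped model (shunt pad parasitics removed first, then series feed parasitics), verified on synthetic data rather than real measurements:

```python
import numpy as np

def open_short_deembed(y_meas, y_open, y_short):
    """Open-short de-embedding on 2x2 complex Y-parameter matrices
    (one frequency point): subtract shunt parasitics in Y, then
    subtract series parasitics in Z."""
    z_series = np.linalg.inv(y_short - y_open)   # series parasitic as Z
    z_inner = np.linalg.inv(y_meas - y_open)     # measurement minus shunt part
    return np.linalg.inv(z_inner - z_series)     # intrinsic DUT admittance

# Synthetic check: embed a known DUT in known parasitics, then recover it.
y_dut_true = np.array([[0.02 + 0.01j, -0.015], [-0.015, 0.02 + 0.01j]])
y_pad = np.array([[0.002j, 0], [0, 0.002j]])            # shunt pad admittance
z_feed = np.array([[3.0 + 1.0j, 0], [0, 3.0 + 1.0j]])   # series feed impedance

y_embedded = np.linalg.inv(np.linalg.inv(y_dut_true) + z_feed) + y_pad
y_open = y_pad                                   # pads only
y_short = np.linalg.inv(z_feed) + y_pad          # pads plus shorted feed

y_recovered = open_short_deembed(y_embedded, y_open, y_short)
print(np.allclose(y_recovered, y_dut_true))  # → True
```

In a real flow the measured S-parameters are first converted to Y-parameters at each frequency; that conversion step is omitted here for brevity.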

Pad Parasitics and Feed Structures

Probe pads introduce capacitance to ground and mutual capacitance between adjacent pads. Typical GSG probe pads might have 10-30 fF of capacitance per pad, which creates significant impedance at mm-wave frequencies. A 20 fF capacitance presents an impedance of only about 80 ohms at 100 GHz, shunting signal power to ground and creating reflections.

Transmission line feeds connecting probe pads to the DUT introduce propagation delay, loss, and potential impedance discontinuities. These structures must be long enough to accommodate probe placement but should be minimized to reduce their electrical impact. At 100 GHz, a 100-micron length represents approximately 12 degrees of phase shift even at the free-space wavelength, and more on microstrip or CPW where the effective permittivity shortens the guided wavelength; that is enough to significantly affect device characterization if not properly de-embedded.
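The pad-impedance and feed-phase figures above can be checked in a few lines; the eps_eff values are assumed effective permittivities, not properties of a specific process:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s
f = 100e9

# Shunt impedance of a 20 fF pad capacitance at 100 GHz
c_pad = 20e-15
z_pad = 1.0 / (2 * math.pi * f * c_pad)
print(f"|Z| of 20 fF at 100 GHz: {z_pad:.0f} ohm")  # → 80 ohm

# Phase of a 100 um feed line for several assumed effective permittivities
length = 100e-6
for eps_eff in (1.0, 4.0, 6.5):
    phase_deg = 360.0 * length * f * math.sqrt(eps_eff) / C0
    print(f"eps_eff={eps_eff}: {phase_deg:.1f} deg")
# → 12.0, 24.0, 30.6 deg
```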

Accuracy and Validation

De-embedding accuracy depends critically on how well the de-embedding structures match the actual parasitics around the DUT. Ideally, de-embedding structures are fabricated on the same wafer, in the same process, with identical geometry to the DUT's access structures. Even small variations in metal thickness, dielectric thickness, or lateral dimensions can introduce de-embedding errors.

Validation techniques include:

  • Measuring devices with known characteristics (such as precision resistors or transmission lines) through the same access structures and verifying that de-embedded results match expected values
  • Comparing results from different de-embedding methods to identify inconsistencies
  • Checking for physical plausibility (passive devices should not show gain, impedances should be within reasonable ranges)
  • Performing electromagnetic simulations of the complete structure and comparing to measured results

Fixturing Challenges

Test fixtures provide mechanical support and electrical connections to devices under test, but at mm-wave frequencies, even simple fixtures become complex electromagnetic structures. Connectors, transitions, transmission lines, and mounting hardware all interact with the signal in ways that can dominate the measurement if not carefully managed.

Connector Transitions and Launchers

Transitioning from coaxial connectors to planar transmission lines (microstrip, stripline, CPW) requires careful impedance matching across the entire frequency range. Commercial connector launchers are available for many substrate types and frequencies, but custom designs may be necessary for specific applications. The transition region must minimize reflections while maintaining consistent characteristic impedance.

Edge-launch connectors that mate directly to the edge of a PCB or substrate offer compact solutions but require precise control of substrate thickness, metal thickness, and edge preparation. Through-board connectors provide more mechanical stability but introduce via transitions that can cause resonances at mm-wave frequencies.

Shielding and Resonances

Test fixtures often include metal enclosures for mechanical protection and electromagnetic shielding. However, metal enclosures create cavity resonances that can dramatically affect measurements when the cavity dimensions correspond to half-wavelength or full-wavelength resonances. At 100 GHz, a 1.5 mm cavity dimension creates a half-wavelength resonance, causing sharp changes in S-parameters.
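As a quick design check, the low-order resonances of an idealized air-filled rectangular cavity follow the standard formula. The cavity dimensions below are hypothetical:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def cavity_mode_freqs_ghz(a_m, b_m, d_m, max_ghz=140.0):
    """Resonant frequencies (GHz) of an air-filled rectangular cavity,
    f = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2), for low-order modes.
    A resonant mode requires at most one of m, n, p to be zero."""
    freqs = []
    for m in range(3):
        for n in range(3):
            for p in range(3):
                if (m, n, p).count(0) >= 2:
                    continue
                f = 0.5 * C0 * math.sqrt((m / a_m)**2 + (n / b_m)**2 + (p / d_m)**2)
                if f / 1e9 <= max_ghz:
                    freqs.append(round(f / 1e9, 1))
    return sorted(set(freqs))

# Hypothetical 3 mm x 2 mm x 1.5 mm shield cavity
print(cavity_mode_freqs_ghz(3e-3, 2e-3, 1.5e-3))
# → [90.1, 111.7, 124.9, 134.5]
```

A scan like this over the intended measurement band quickly shows whether an enclosure needs resizing or absorber loading.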

Mitigation strategies include:

  • Careful dimensional control to ensure resonances fall outside the measurement band
  • Using absorptive materials to dampen cavity modes
  • Designing enclosures with non-parallel walls to break up standing wave patterns
  • Including venting or cutouts that disrupt cavity modes while maintaining adequate shielding

Substrate Modes and Surface Waves

At mm-wave frequencies, electromagnetic energy can couple into substrate modes that propagate within the dielectric material rather than along the intended transmission line. These modes cause power loss, frequency-dependent behavior, and coupling to other structures on the substrate. Thinner substrates and lower dielectric constants reduce substrate mode coupling but may compromise mechanical strength or other design parameters.

Ground plane vias and fences can suppress substrate modes by creating electromagnetic barriers. Proper placement and spacing of these structures (typically much closer than λ/4) ensures effective mode suppression without introducing new resonances or impedance discontinuities.

Cable Effects and Stability

Coaxial cables connecting test equipment to fixtures or probes introduce loss, phase shift, and impedance variations that must be calibrated out or accounted for in measurements. At mm-wave frequencies, cable performance becomes increasingly challenging, and careful cable selection, handling, and stabilization are essential for accurate measurements.

Cable Loss and Dispersion

Coaxial cable loss increases with frequency, following approximately a square-root relationship where skin-effect conductor loss dominates and approaching a linear relationship at mm-wave frequencies as dielectric loss becomes significant. A high-quality 2.4 mm cable might exhibit 4 dB/meter loss at 40 GHz but 10 dB/meter or more at 110 GHz. This loss reduces measurement dynamic range and can make low-level signal measurements difficult.
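A simple frequency-scaling sketch using the 4 dB/m at 40 GHz figure above, with the loss exponent treated as a model assumption (real cables fall between the two limits):

```python
# Illustrative cable-loss scaling model: exponent 0.5 corresponds to pure
# skin-effect (conductor) loss; exponent 1.0 to dominant dielectric loss.
def scale_cable_loss_db(loss_db_ref, f_ref_hz, f_hz, exponent=0.5):
    return loss_db_ref * (f_hz / f_ref_hz) ** exponent

print(f"sqrt-f model: {scale_cable_loss_db(4.0, 40e9, 110e9, 0.5):.1f} dB/m")  # → 6.6 dB/m
print(f"linear model: {scale_cable_loss_db(4.0, 40e9, 110e9, 1.0):.1f} dB/m")  # → 11.0 dB/m
```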

Phase-stable cables use special construction techniques—including solid or foam dielectrics with low temperature coefficients, mechanically stable center conductors, and reinforced outer conductors—to minimize phase variations with temperature and flexure. These cables are essential for applications requiring precise phase measurements or for systems where cables cannot remain absolutely stationary.

Cable Flexure and Repeatability

Flexing a coaxial cable changes its electrical length and characteristic impedance as the center conductor position shifts slightly within the dielectric. These changes cause phase and amplitude variations in measurements. The effect is more pronounced at higher frequencies where the wavelength is shorter.

Best practices for cable handling include:

  • Minimizing cable movement during measurements
  • Supporting cables to prevent stress on connectors
  • Maintaining consistent cable bend radii (following manufacturer minimum bend radius specifications)
  • Allowing cables to stabilize after movement before performing critical measurements
  • Using cable conditioning (repeated flexing) on new cables to stabilize their electrical properties

Cable Alternatives

For applications where cable loss, phase stability, or repeatability are inadequate, alternatives include:

  • Semi-rigid cables: These cables use solid outer conductors, typically with solid or low-density PTFE dielectrics, providing excellent electrical stability and low loss. However, they cannot be flexed repeatedly and must be carefully formed to the required shape.
  • Waveguide: At lower frequencies waveguide is bulky, but its dimensions shrink with frequency, and above roughly 40 GHz rectangular waveguide offers lower loss than coaxial cable. Waveguide requires careful flange alignment and can be sensitive to mechanical tolerances.
  • Direct probe mounting: Mounting VNA test port modules directly at the probe station eliminates cables entirely, providing the best possible phase stability and lowest loss. This approach requires careful thermal management and may limit accessibility to the DUT.

Measurement Repeatability

Repeatability—the ability to obtain the same measurement result when measuring the same DUT multiple times under the same conditions—is a critical quality metric for any measurement system. At mm-wave frequencies, many factors that are negligible at lower frequencies become significant sources of variation, making repeatability analysis essential for understanding measurement reliability.

Sources of Repeatability Variation

Common sources of repeatability variation include:

  • Connector repeatability: Each connect/disconnect cycle introduces small variations in mating depth, alignment, and contact resistance. High-quality connectors specify repeatability in terms of maximum magnitude and phase deviation, typically on the order of ±0.01 dB and ±1 degree for premium 1 mm connectors.
  • Probe contact variations: Differences in overtravel, contact force, and probe tip position from one touchdown to the next create measurement variations. Automated probe stations with precision positioners achieve better repeatability than manual systems.
  • Thermal drift: Temperature changes affect cable phase, connector dimensions, and DUT characteristics. Measurements made hours apart may show variations due to room temperature cycling or equipment warm-up drift.
  • Calibration stability: VNA calibrations drift over time due to temperature changes, connector wear, and other factors. Re-calibrating periodically improves repeatability for measurements made over extended periods.

Quantifying Repeatability

Standard practice for quantifying repeatability involves making multiple independent measurements of the same DUT, with full disconnect/reconnect cycles between measurements. Statistical analysis of the results provides mean values and standard deviations for each measured parameter. For S-parameters, this typically involves analyzing magnitude (in dB) and phase (in degrees) separately.

A typical repeatability test might include:

  • 10 measurement cycles with complete disconnect/reconnect between each
  • Calculation of mean and standard deviation for S11, S21, S12, and S22 at each frequency point
  • Identification of frequency ranges where repeatability is best and worst
  • Comparison against manufacturer specifications for test equipment
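The statistics step of such a test is straightforward; a minimal sketch using invented S21 readings for one frequency point:

```python
import numpy as np

# Hypothetical S21 magnitude (dB) from 10 connect/disconnect cycles
# at a single frequency point; values are illustrative only.
s21_db = np.array([-3.02, -3.05, -2.98, -3.01, -3.04,
                   -3.00, -2.99, -3.03, -3.06, -2.97])

mean_db = s21_db.mean()
std_db = s21_db.std(ddof=1)  # sample standard deviation (n-1 denominator)
print(f"mean = {mean_db:.3f} dB, std = {std_db:.3f} dB")
print(f"2-sigma repeatability: +/-{2 * std_db:.3f} dB")  # → +/-0.061 dB
```

The same computation is applied per S-parameter and per frequency point, and the phase data (in degrees) is analyzed separately from the magnitude data.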

Improving Repeatability

Strategies to improve measurement repeatability include:

  • Using high-quality connectors and replacing them before they reach end-of-life
  • Implementing strict procedures for connection torque, probe placement, and handling
  • Controlling environmental temperature and allowing equipment to reach thermal equilibrium
  • Minimizing cable movement and using phase-stable cables where needed
  • Calibrating more frequently when highest accuracy is required
  • Using automated systems to eliminate operator variability

Measurement Uncertainty

Measurement uncertainty quantifies the doubt that exists about a measurement result. Unlike a simple repeatability study that examines variation under nominally identical conditions, uncertainty analysis considers all known sources of error—both random and systematic—and combines them to establish confidence intervals around reported values. At mm-wave frequencies, comprehensive uncertainty analysis is essential for comparing results between labs, qualifying measurement systems, and ensuring that devices meet specifications.

Uncertainty Sources and Components

Measurement uncertainty includes contributions from many sources:

  • VNA instrumentation uncertainty: The VNA itself has specified accuracy limits for magnitude and phase measurements, which depend on factors like IF bandwidth, averaging, power level, and frequency. Manufacturer data sheets provide uncertainty specifications, often expressed as a function of measured magnitude.
  • Calibration standard uncertainty: Imperfect knowledge of calibration standard characteristics—such as offset delays in short/open standards, characteristic impedance of line standards, or frequency-dependent loss—introduces systematic errors that propagate through the calibration and into subsequent measurements.
  • Connector and interface repeatability: Random variations in connector mating contribute to measurement uncertainty. This component is typically characterized through repeatability studies as described in the previous section.
  • Drift and stability: Long-term drift in cables, connectors, and VNA electronics creates time-dependent variations. Calibration intervals must be short enough that drift remains within acceptable limits.
  • Mismatch uncertainty: Impedance mismatches between the VNA, cables, connectors, and DUT create multiple reflections that interfere constructively or destructively depending on electrical length. Ripple in measured S-parameters often indicates mismatch interactions.
  • Noise and dynamic range: At mm-wave frequencies with high cable loss and DUT insertion loss, signal levels may approach the VNA's noise floor, increasing measurement noise and uncertainty.

Uncertainty Budgets and Analysis

A complete uncertainty analysis requires identifying all significant uncertainty sources, quantifying each contribution, and combining them according to established statistical methods (typically following the ISO Guide to the Expression of Uncertainty in Measurement, or GUM). The process involves:

  1. Identifying all input quantities that affect the measurement result
  2. Determining the standard uncertainty (one standard deviation) for each input
  3. Calculating sensitivity coefficients that describe how changes in each input affect the output
  4. Combining the individual uncertainty contributions using root-sum-square for uncorrelated sources
  5. Expressing the final result with an expanded uncertainty (typically k=2 for approximately 95% confidence)

For S-parameter measurements, uncertainty analysis often separates magnitude and phase, as they have different uncertainty sources and sensitivities. The result is typically expressed as an uncertainty bound that varies with frequency, such as: S21 magnitude ±0.15 dB (k=2) and S21 phase ±2.5° (k=2) over the frequency range 75-110 GHz.
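The RSS combination and k=2 expansion in steps 4 and 5 can be sketched as follows; the component values are illustrative assumptions, not figures from any instrument data sheet:

```python
import math

# Illustrative standard uncertainties (dB) for an S21 magnitude
# measurement at one frequency point; values are assumed.
components = {
    "vna_instrumentation": 0.04,
    "cal_standards": 0.05,
    "connector_repeatability": 0.03,
    "cable_drift": 0.02,
}

# Root-sum-square combination of uncorrelated standard uncertainties,
# then expansion with coverage factor k=2 (~95% confidence)
u_combined = math.sqrt(sum(u**2 for u in components.values()))
u_expanded = 2.0 * u_combined
print(f"combined standard uncertainty: {u_combined:.3f} dB")   # → 0.073 dB
print(f"expanded uncertainty (k=2): +/-{u_expanded:.3f} dB")   # → +/-0.147 dB
```

Correlated sources (for example, a calibration-standard error shared between ports) must instead be combined with their covariance terms, which generally increases the total.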

Validation Through Inter-laboratory Comparisons

The ultimate validation of measurement uncertainty claims comes from inter-laboratory comparisons, where multiple independent labs measure the same artifacts using their own equipment and methods. If the reported values from different labs agree within their stated uncertainties, this provides confidence that uncertainty analyses are realistic. Significant disagreements indicate that one or more labs have underestimated uncertainty or have unrecognized systematic errors.

Formal round-robin measurement programs, often organized by standards bodies or industry consortia, provide structured frameworks for these comparisons. They establish measurement protocols, provide traveling standards, and analyze results statistically to identify outliers and evaluate overall measurement consistency across the community.

Best Practices for mm-Wave Measurements

Success in mm-wave characterization requires attention to many details that can be overlooked at lower frequencies. Key best practices include:

  • Plan calibration carefully: Select calibration methods and standards appropriate for the measurement frequency, connector type, and required accuracy. Verify calibration quality before proceeding to DUT measurements.
  • Minimize connections: Every connector pair adds uncertainty. Use the shortest practical cable lengths and minimize adapters and other connections in the signal path.
  • Control the environment: Maintain stable temperature and humidity. Allow equipment to reach thermal equilibrium before calibration and measurements.
  • Handle connectors with care: Use proper torque, inspect regularly for damage, keep threads clean, and retire worn connectors before they degrade measurement quality.
  • Verify measurements: Use known standards or devices with expected characteristics to confirm that results are reasonable. Compare different measurement methods when possible.
  • Document everything: Record calibration details, equipment serial numbers, cable identification, measurement conditions, and procedures. This documentation enables troubleshooting and supports uncertainty analysis.
  • Understand limitations: Recognize when measurements approach equipment limits for noise floor, dynamic range, or frequency coverage. Don't over-interpret results near these boundaries.

Conclusion

Millimeter-wave measurements demand rigorous attention to calibration, fixturing, environmental control, and uncertainty analysis. The techniques described in this article—from VNA calibration and probe station setup through de-embedding, cable management, and repeatability assessment—form the foundation for accurate mm-wave system characterization. As frequencies continue to increase and devices become more complex, measurement challenges will intensify, requiring continued refinement of methods and development of new calibration standards, de-embedding techniques, and uncertainty analysis approaches.

Success in mm-wave measurements comes from understanding that nearly every physical detail matters at these frequencies. What appears to be a small imperfection—a worn connector, a slightly bent cable, a few degrees of temperature variation—can significantly impact measurement results. By applying the best practices and techniques outlined here, engineers can develop robust measurement capabilities that support development of next-generation mm-wave systems for communications, radar, sensing, and imaging applications.
