Electronics Guide

Hardware Security in Analog Domains

Hardware security in analog domains addresses the vulnerabilities inherent in physical electronic systems that cannot be mitigated through software or cryptographic algorithms alone. While digital security focuses on computational hardness and algorithmic correctness, analog security confronts the physical reality that electronic circuits leak information through their power consumption, electromagnetic emissions, timing variations, and other observable phenomena. These side channels provide attackers with windows into otherwise secure systems.

From the power fluctuations that reveal cryptographic key bits to the electromagnetic emanations that broadcast internal operations, analog circuits inadvertently communicate their secrets to anyone with the right measurement equipment. Defending against these attacks requires understanding the physical mechanisms of information leakage and implementing countermeasures at the circuit and system level. This discipline combines analog circuit design, physics, and security engineering to create systems that resist physical attacks while maintaining their intended functionality.

Fundamentals of Physical Security

Physical security in electronics extends beyond locked enclosures and tamper-evident seals. At the circuit level, information leaks through multiple channels that arise from fundamental physical phenomena. Understanding these leakage mechanisms is essential for designing effective countermeasures.

The Analog Attack Surface

Every electronic circuit presents an analog attack surface composed of observable physical quantities:

  • Power consumption: Dynamic current draw varies with internal operations, creating a power signature that correlates with processed data
  • Electromagnetic emissions: Switching currents generate electromagnetic fields that propagate beyond the device boundary
  • Timing variations: Data-dependent processing times leak information about internal values
  • Acoustic emanations: Some circuits produce audible or ultrasonic emissions correlated with their operation
  • Thermal signatures: Heat distribution patterns reflect circuit activity and can reveal operational modes
  • Optical emissions: Switching transistors emit faint photons that can be detected and analyzed

These side channels exist because electronic circuits must obey physical laws. Current must flow to perform computation, and that current flow has observable consequences. The challenge is to minimize the correlation between observable quantities and sensitive information.

Attacker Capabilities

Threat modeling for hardware security considers various attacker capabilities:

  • Passive attacks: Observing emissions without modifying the device; includes power analysis, electromagnetic analysis, and timing attacks
  • Active attacks: Injecting faults, manipulating inputs, or altering environmental conditions to induce exploitable behaviors
  • Invasive attacks: Physically probing or modifying the circuit, including depackaging, microprobing, and focused ion beam editing
  • Semi-invasive attacks: Requiring physical access but not direct contact with internal circuitry, such as optical fault injection

The cost and sophistication required for each attack class vary dramatically. Power analysis can be performed with modest equipment, while invasive attacks require specialized laboratories and expertise. Security designs must balance protection levels against realistic threat models.

Defense-in-Depth Principles

Effective hardware security employs multiple layers of protection:

  • Minimize information leakage: Design circuits that inherently leak less information through side channels
  • Add noise and randomness: Mask useful signals with noise that attackers cannot remove
  • Detect and respond: Implement sensors that detect attack attempts and trigger protective responses
  • Bound exploitation: Even if attacks partially succeed, limit the information gained per attack
  • Increase attack cost: Make attacks expensive in time, equipment, or expertise required

No single countermeasure provides complete protection. Layered defenses force attackers to overcome multiple barriers, increasing the difficulty and cost of successful attacks.

Side-Channel Attack Prevention

Side-channel attacks exploit unintended information leakage to extract secrets from cryptographic implementations. Rather than attacking mathematical weaknesses in algorithms, these attacks target the physical implementation. Effective prevention requires understanding how information leaks and designing circuits that minimize or mask this leakage.

Sources of Side-Channel Leakage

Information leakage occurs through multiple mechanisms in digital and analog circuits:

  • Data-dependent switching: CMOS gates consume different amounts of power depending on whether transitions occur on their inputs and outputs
  • Hamming weight correlation: Power consumption often correlates with the number of ones in processed data
  • Hamming distance correlation: The number of bit transitions between successive values affects power consumption
  • Address-dependent timing: Memory access patterns can leak information about accessed data locations
  • Operation-dependent signatures: Different instructions or operations produce distinct power and electromagnetic signatures

These leakages are often subtle, requiring statistical analysis of many measurements to extract useful information. However, with sufficient traces, attackers can recover secret keys bit by bit.
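
As a concrete illustration of how Hamming-weight leakage enables key recovery, the following Python sketch simulates noisy single-sample traces from a hypothetical device and runs a simple correlation attack against them. The S-box is a random permutation standing in for a real cipher table, and the key byte, noise level, and trace count are illustrative.

    # Sketch: correlation attack on simulated Hamming-weight leakage.
    # The 8-bit S-box is a random permutation standing in for a real cipher S-box.
    import numpy as np

    rng = np.random.default_rng(1)
    SBOX = rng.permutation(256)              # stand-in substitution box
    SECRET_KEY = 0x3C                        # key byte the attacker recovers

    def hw(x):
        return bin(int(x)).count("1")        # Hamming weight of a byte

    # Simulate traces: one leaky sample per trace, plus measurement noise
    n_traces = 2000
    plaintexts = rng.integers(0, 256, n_traces)
    leak = np.array([hw(SBOX[p ^ SECRET_KEY]) for p in plaintexts])
    traces = leak + rng.normal(0, 2.0, n_traces)    # noisy "power" samples

    # Correlate measured traces against predictions for every key guess
    best_guess, best_corr = None, 0.0
    for guess in range(256):
        model = np.array([hw(SBOX[p ^ guess]) for p in plaintexts])
        corr = abs(np.corrcoef(model, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = guess, corr

    print(f"recovered key byte: {best_guess:#04x} (corr={best_corr:.2f})")

With a few thousand simulated traces, the correct guess produces a clearly higher correlation than any wrong guess, which is exactly the statistical effect side-channel countermeasures aim to suppress.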

Constant-Power Circuit Design

One fundamental approach to side-channel resistance is designing circuits with constant power consumption regardless of processed data:

  • Dual-rail logic: Each bit is represented by two complementary wires; every operation produces the same number of transitions
  • Wave Dynamic Differential Logic (WDDL): Combines dual-rail encoding with wave-pipelined evaluation for balanced power
  • Sense Amplifier Based Logic (SABL): Uses sense amplifier structures to achieve data-independent power consumption
  • Current-mode logic: Maintains constant current flow regardless of logic state by steering current between paths

These techniques significantly reduce power signature correlation with data but impose substantial area and power overhead. WDDL typically requires 2-3x the area and power of standard CMOS logic.
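
The behavioral sketch below illustrates the core idea of dual-rail precharge logic: each bit is carried on complementary wires and every evaluation starts from a precharged state, so the number of wire transitions per cycle is constant regardless of the data. This is a simplified model of the encoding only, not a circuit-level simulation.

    # Sketch: why dual-rail precharge logic equalizes switching activity.
    # Each bit is encoded as the pair (b, ~b); every evaluation is preceded by a
    # precharge to (0, 0), so exactly one wire of each pair toggles per cycle.
    def dual_rail_transitions(bits):
        transitions = 0
        for b in bits:
            precharge = (0, 0)
            evaluate = (b, 1 - b)            # complementary rails
            transitions += sum(p != e for p, e in zip(precharge, evaluate))
        return transitions

    print(dual_rail_transitions([0, 0, 0, 0]))   # 4 transitions
    print(dual_rail_transitions([1, 0, 1, 1]))   # also 4 transitions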

Masking and Blinding Techniques

Masking adds random values to intermediate computations to decorrelate power consumption from actual data:

  • Boolean masking: XORs secret values with random masks; computation proceeds on masked values
  • Arithmetic masking: Adds random values modulo some number; useful for arithmetic operations
  • Multiplicative masking: Multiplies values by random non-zero factors; often used in RSA implementations
  • Higher-order masking: Splits values into multiple shares, each masked independently; protects against higher-order attacks

The mask must be fresh for each operation and unknown to the attacker. First-order masking protects against attacks that analyze individual points; higher-order attacks combine multiple points and require higher-order masking for protection.

The security order of a masking scheme refers to the number of trace points an attacker must combine to extract information. An n-th order scheme splits each secret into n+1 shares, so an attacker must jointly analyze at least n+1 points; in the presence of measurement noise, the number of traces required grows rapidly (roughly exponentially) with the masking order.
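
A minimal first-order Boolean masking sketch appears below. The identity S-box and the mask handling are illustrative placeholders, but the structure follows the approach described above: only masked values are ever looked up, and the output carries a fresh output mask.

    # Sketch of first-order Boolean masking for a table lookup. The secret x is
    # never processed directly: only x ^ m_in exists, and the table is
    # recomputed so its output carries m_out.
    import secrets

    SBOX = list(range(256))                  # placeholder permutation (identity)

    def masked_sbox_lookup(x_masked, m_in, m_out):
        # Build a masked table T[a] = SBOX[a ^ m_in] ^ m_out for this fresh mask pair
        table = [SBOX[a ^ m_in] ^ m_out for a in range(256)]
        return table[x_masked]               # equals SBOX[x] ^ m_out

    x = 0xA7                                 # secret intermediate value
    m_in, m_out = secrets.randbelow(256), secrets.randbelow(256)
    y_masked = masked_sbox_lookup(x ^ m_in, m_in, m_out)
    assert y_masked ^ m_out == SBOX[x]       # unmasking recovers the true output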

Shuffling and Randomization

Randomizing the execution order of operations spreads side-channel leakage across time:

  • Operation shuffling: Randomly permuting the order of independent operations (e.g., S-box lookups in AES)
  • Dummy operations: Inserting random fake operations that process dummy data
  • Random delays: Adding variable delays between operations to prevent temporal alignment
  • Random execution paths: Taking equivalent but randomly chosen computational paths

Shuffling does not eliminate leakage but requires attackers to perform more sophisticated analysis. Combined with masking, shuffling significantly increases attack complexity.
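
The following sketch shows operation shuffling for a set of independent S-box lookups, using a cryptographically seeded Fisher-Yates shuffle so the processing order changes on every execution. The 16-byte state and identity S-box are placeholders.

    # Sketch: shuffling the order of 16 independent S-box lookups in an
    # AES-like round so the leakage of any given byte appears at a random
    # point in time on each execution.
    import secrets

    def shuffled_sub_bytes(state, sbox):
        order = list(range(len(state)))
        # Fisher-Yates shuffle driven by a cryptographic RNG
        for i in range(len(order) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            order[i], order[j] = order[j], order[i]
        out = [0] * len(state)
        for idx in order:                    # process bytes in a random order
            out[idx] = sbox[state[idx]]
        return out

    state = list(range(16))
    print(shuffled_sub_bytes(state, list(range(256))))  # identity S-box placeholder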

Hiding Techniques

Hiding approaches reduce the signal-to-noise ratio available to attackers:

  • Noise generators: Active circuits that add uncorrelated noise to power and electromagnetic signatures
  • Random clock jitter: Varying clock timing to prevent trace alignment
  • Parallel noise paths: Current paths that switch with random data, masking the actual computation
  • Decoupling and filtering: On-chip capacitors and filters that smooth power signatures

Hiding increases the number of traces required for successful attacks but does not provide information-theoretic security. Given enough measurements, statistical techniques can filter out added noise.

Power Analysis Countermeasures

Power analysis attacks measure the power consumption of a device during cryptographic operations to extract secret keys. Simple Power Analysis (SPA) examines individual traces for visible patterns, while Differential Power Analysis (DPA) statistically processes many traces to extract subtle correlations. Effective countermeasures must address both attack types.

Simple Power Analysis Defenses

SPA attacks identify operations or data values from visible features in power traces. Defenses focus on eliminating distinguishable patterns:

  • Constant-time operations: Ensure all operations take the same time regardless of data values
  • Unified point operations: In elliptic curve cryptography, use formulas where point addition and doubling are indistinguishable
  • Regular execution patterns: Avoid conditional branches based on secret data
  • Balanced algorithm implementations: Process all bits identically regardless of their value

For example, the square-and-multiply algorithm for modular exponentiation reveals key bits through the sequence of squares and multiplies. The Montgomery ladder provides constant operation sequences regardless of key bit values.
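
A Python sketch of the Montgomery ladder for modular exponentiation appears below. Both key-bit cases perform one multiplication and one squaring, so the operation sequence does not depend on the key; a genuinely constant-time implementation would additionally replace the data-dependent branch with a conditional swap.

    # Sketch: Montgomery ladder for modular exponentiation. Each iteration does
    # one multiply and one square regardless of the key bit (unlike naive
    # square-and-multiply, which multiplies only when the bit is 1).
    def montgomery_ladder_pow(base, exponent, modulus):
        r0, r1 = 1, base % modulus
        for i in reversed(range(exponent.bit_length())):
            bit = (exponent >> i) & 1
            if bit == 0:
                r1 = (r0 * r1) % modulus
                r0 = (r0 * r0) % modulus
            else:
                r0 = (r0 * r1) % modulus
                r1 = (r1 * r1) % modulus
        return r0

    assert montgomery_ladder_pow(5, 117, 1009) == pow(5, 117, 1009)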

Differential Power Analysis Defenses

DPA uses statistical correlation between power measurements and hypothetical intermediate values. Countermeasures break this correlation:

  • Data masking: Random masks decorrelate power from actual sensitive values
  • Address masking: Randomizing memory access patterns prevents address-based correlation
  • Time randomization: Variable timing prevents alignment of traces for statistical analysis
  • Amplitude randomization: Variable power supply or load conditions alter signal amplitudes

The effectiveness of DPA defenses is often quantified by the number of traces required for successful key recovery. A well-protected implementation may require millions or billions of traces compared to thousands for an unprotected implementation.

Power Supply Design for Security

The power delivery network significantly affects side-channel leakage:

  • On-chip voltage regulators: Integrated regulators reduce the correlation between on-chip activity and external power measurements
  • Switched-capacitor converters: Provide high isolation between digital core and power supply pins
  • Constant-current sources: Supply constant current regardless of load variations
  • Large on-chip decoupling: High-value on-chip capacitance filters high-frequency power signatures
  • Multiple power domains: Separate supplies for sensitive and non-sensitive circuits reduce leakage paths

On-chip regulators are particularly effective because they present an impedance barrier between internal switching activity and external measurement points. However, they consume area and introduce efficiency losses.

Current Flattening Circuits

Active current flattening circuits attempt to maintain constant supply current:

  • Current-steering compensation: Complementary current paths that sink current when the main circuit does not
  • Feedback regulators: Fast regulation loops that compensate for current variations
  • Charge recycling: Circuits that recirculate charge between power rails to reduce external current flow
  • Asynchronous design: Removing the global clock eliminates synchronized switching that creates strong power signatures

Perfect current flattening is difficult to achieve due to bandwidth limitations and parasitic effects. Practical implementations reduce but do not eliminate power-based leakage.

Correlation Power Analysis Resistance

Correlation Power Analysis (CPA) is a refined DPA technique that uses the correlation coefficient as its statistical distinguisher. Resistance requires:

  • Eliminating intermediate value correlation: Ensure no intermediate computation correlates with both input and key
  • Breaking the leakage model: Violate assumptions about how power relates to data values
  • Multiple countermeasure layers: Combine masking, shuffling, and hiding for robust protection
  • Leakage assessment testing: Verify implementations using Test Vector Leakage Assessment (TVLA)

TVLA uses statistical tests to detect exploitable leakage without requiring successful attacks. It provides a quantitative measure of side-channel resistance during design and verification.
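
The sketch below shows the core of a fixed-vs-random TVLA check at a single trace sample using Welch's t-statistic, with the commonly cited |t| > 4.5 threshold. The trace values are simulated for illustration only.

    # Sketch of a non-specific fixed-vs-random TVLA check at one trace sample.
    import numpy as np

    def welch_t(fixed, random_):
        m1, m2 = fixed.mean(), random_.mean()
        v1, v2 = fixed.var(ddof=1), random_.var(ddof=1)
        return (m1 - m2) / np.sqrt(v1 / len(fixed) + v2 / len(random_))

    rng = np.random.default_rng(0)
    # Simulated samples: the "leaky" device shifts its mean for the fixed class
    fixed_traces = rng.normal(5.3, 1.0, 10_000)    # fixed-input measurements
    random_traces = rng.normal(5.0, 1.0, 10_000)   # random-input measurements

    t = welch_t(fixed_traces, random_traces)
    print(f"|t| = {abs(t):.1f} -> {'leakage detected' if abs(t) > 4.5 else 'pass'}")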

Electromagnetic Emanation Security

Electromagnetic (EM) emanations provide another side channel for extracting information from electronic devices. Unlike power analysis, which requires electrical contact with the power supply, EM attacks can be performed at a distance and can target specific circuit regions. Comprehensive security requires addressing both power and EM leakage.

Sources of EM Emanations

Electronic circuits emit electromagnetic fields through several mechanisms:

  • Switching currents: Rapid current changes in CMOS circuits generate magnetic fields
  • Transmission line effects: Signal traces act as antennas at high frequencies
  • Clock radiation: Periodic clock signals create strong emissions at clock frequency harmonics
  • Ground bounce: Inductance in ground paths creates voltage fluctuations that radiate
  • Substrate coupling: Currents through the semiconductor substrate create near-field emissions

EM measurements can be taken in the near field (within a wavelength) or far field. Near-field measurements provide spatial resolution, allowing targeting of specific circuit regions, while far-field measurements capture aggregate emissions.

EM Shielding Approaches

Physical shielding reduces EM emanations reaching potential attackers:

  • Metal enclosures: Conductive housings attenuate EM fields escaping the device
  • On-chip shields: Metal layers above sensitive circuitry provide localized shielding
  • Active shields: Driven shield layers that track sensitive signal voltages to minimize capacitive coupling
  • Twisted pair routing: Complementary signal routing that cancels magnetic field emissions
  • Ground planes: Continuous ground references that provide low-impedance return paths

Shielding effectiveness is measured in decibels of attenuation. High-security applications may require 60-100 dB of shielding effectiveness, which demands careful attention to seams, apertures, and cable penetrations.

Circuit-Level EM Countermeasures

Beyond physical shielding, circuit design affects EM emissions:

  • Differential signaling: Balanced signal pairs generate canceling magnetic fields
  • Current loop minimization: Small loop areas reduce magnetic dipole moments
  • Spread-spectrum clocking: Distributing clock energy across frequencies reduces peak emissions
  • On-chip noise generators: Adding random EM emissions to mask information-bearing signals
  • Reduced slew rates: Slower signal transitions generate less high-frequency content

Many of these techniques have tradeoffs. Reduced slew rates improve EM security but may impact circuit speed and noise margins. Design optimization must balance security with performance requirements.

TEMPEST and Emanations Security

TEMPEST is the NSA specification for limiting compromising emanations from electronic equipment. While detailed specifications are classified, general principles include:

  • Zoning: Physical separation between equipment processing classified and unclassified information
  • Filtering: EMI filters on all cables leaving secure areas
  • Shielded facilities: Faraday cage construction for secure processing areas
  • Equipment certification: Testing and approval of equipment for use at various classification levels
  • RED/BLACK separation: Strict separation between circuits handling encrypted versus plaintext data

Commercial applications increasingly adopt TEMPEST-derived principles for protecting sensitive financial, medical, and corporate information.

Localized EM Analysis Defenses

Near-field EM analysis can target individual circuit blocks, potentially bypassing system-level countermeasures. Defenses include:

  • Spatial distribution: Spreading sensitive computations across the chip to prevent localized measurement
  • Local shielding: Metal fill and dedicated shield layers over critical circuits
  • Dummy activity: Background switching in non-sensitive circuits to mask sensitive operations
  • Secure floor planning: Placing sensitive circuits in the chip interior, away from accessible edges

The increased spatial resolution of near-field probes compared to power measurements means that countermeasures effective against power analysis may not protect against EM attacks targeting specific circuit regions.

Physically Unclonable Functions

Physically Unclonable Functions (PUFs) exploit manufacturing variations to create unique device identifiers and cryptographic keys. Unlike stored secrets that can be copied or extracted, PUF responses derive from physical characteristics that are impossible to duplicate, even by the original manufacturer. PUFs provide hardware-based root of trust without requiring secure non-volatile memory.

PUF Fundamentals

A PUF is a physical system that maps challenges (inputs) to responses (outputs) based on its unique physical characteristics:

  • Uniqueness: Different devices produce different responses to the same challenge due to manufacturing variations
  • Reproducibility: The same device produces consistent responses across multiple measurements
  • Unpredictability: Responses cannot be predicted without access to the physical device
  • Unclonability: Physical characteristics cannot be duplicated, even with full knowledge of the design
  • Tamper evidence: Invasive analysis alters the physical structure, changing PUF responses

The security of PUFs rests on the practical impossibility of controlling manufacturing variations at the nanometer scale required to duplicate a PUF's response characteristics.

Silicon PUF Architectures

Several silicon PUF types exploit different physical phenomena:

  • Arbiter PUFs: Race two signals through parallel paths and compare arrival times; path delays vary due to transistor variations
  • Ring Oscillator PUFs: Compare frequencies of nominally identical ring oscillators; process variations create frequency differences
  • SRAM PUFs: Read SRAM power-up state; metastable cells settle to preferred states determined by transistor mismatch
  • Butterfly PUFs: Cross-coupled latches with controlled initialization exhibit preference determined by transistor asymmetry
  • Glitch PUFs: Exploit timing glitches in combinational logic that depend on path delay variations

Each architecture offers different tradeoffs in area, number of challenge-response pairs, reliability, and resistance to modeling attacks.
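
The following behavioral sketch models a ring-oscillator PUF: per-device frequency offsets stand in for process variation, and each challenge selects a pair of oscillators whose frequency comparison yields one response bit. All numbers are illustrative.

    # Sketch: ring-oscillator PUF response bits from simulated process variation.
    import random

    def make_device(n_oscillators, seed):
        rng = random.Random(seed)            # seed stands in for silicon variation
        nominal = 500e6                      # nominal oscillator frequency (Hz)
        return [nominal * (1 + rng.gauss(0, 0.01)) for _ in range(n_oscillators)]

    def ro_puf_response(device, challenge_pairs):
        # Each challenge selects an oscillator pair; the faster one decides the bit
        return [1 if device[i] > device[j] else 0 for (i, j) in challenge_pairs]

    pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
    device_a = make_device(8, seed=101)
    device_b = make_device(8, seed=202)
    print(ro_puf_response(device_a, pairs))  # e.g. [1, 0, 0, 1]
    print(ro_puf_response(device_b, pairs))  # typically differs per device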

PUF Quality Metrics

PUF quality is characterized by several metrics:

  • Intra-distance (reliability): Hamming distance between repeated measurements of the same challenge on the same device; should be near zero
  • Inter-distance (uniqueness): Hamming distance between responses from different devices to the same challenge; should be near 50%
  • Uniformity: Distribution of zeros and ones in responses; should be balanced
  • Bit-aliasing: Tendency of specific response bits to have the same value across devices; should be minimal
  • Min-entropy: Information-theoretic measure of unpredictability; should approach 1 bit per response bit

Environmental variations (temperature, voltage, aging) affect reliability. Practical PUF systems require error correction to achieve stable outputs across operating conditions.
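
The metrics above translate directly into code. The sketch below computes reliability (intra-distance), uniqueness (inter-distance), and uniformity, assuming responses are represented as equal-length lists of 0/1 bits.

    # Sketch: basic PUF quality metrics over 0/1 response vectors.
    def hamming_dist(a, b):
        return sum(x != y for x, y in zip(a, b))

    def reliability(repeated_reads):
        # Mean intra-distance (%) across repeated reads of one device; ideal ~0%
        ref = repeated_reads[0]
        n = len(ref)
        dists = [hamming_dist(ref, r) / n for r in repeated_reads[1:]]
        return 100 * sum(dists) / len(dists)

    def uniqueness(per_device_responses):
        # Mean pairwise inter-distance (%) across devices; ideal ~50%
        n = len(per_device_responses[0])
        dists = []
        for i in range(len(per_device_responses)):
            for j in range(i + 1, len(per_device_responses)):
                dists.append(hamming_dist(per_device_responses[i],
                                          per_device_responses[j]) / n)
        return 100 * sum(dists) / len(dists)

    def uniformity(response):
        # Fraction of ones (%); ideal ~50%
        return 100 * sum(response) / len(response)

    reads = [[1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 1, 0, 1, 1, 0]]  # two reads, one device
    print(reliability(reads), uniformity(reads[0]))               # 12.5 50.0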

Fuzzy Extractors and Helper Data

PUF responses contain noise that must be corrected for cryptographic use:

  • Enrollment: During manufacturing, measure PUF responses and generate helper data for error correction
  • Reconstruction: During operation, use helper data to reproduce exact enrolled responses despite noise
  • Secure sketch: Helper data reveals no information about the PUF response (information-theoretically secure)
  • Fuzzy extractor: Combines secure sketch with randomness extraction to produce uniform cryptographic keys

Common constructions use BCH or Reed-Muller error-correcting codes. The helper data must be stored in non-volatile memory but does not require security since it reveals no information about the extracted key.
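
Below is a toy code-offset secure sketch using a 5-bit repetition code: enrollment hides a random codeword in the helper data, and reconstruction majority-decodes the noisy offset. Production fuzzy extractors use stronger codes such as BCH and add a randomness-extraction step, as noted above; this sketch only illustrates the structure.

    # Sketch of the code-offset construction with a repetition code: helper data
    # is the XOR of the PUF response with a random codeword; reconstruction
    # decodes the noisy offset back to that codeword by majority vote.
    import secrets

    REP = 5  # each key bit is protected by a 5-bit repetition codeword

    def enroll(response_bits):
        key_bits = [secrets.randbelow(2) for _ in range(len(response_bits) // REP)]
        codeword = [b for b in key_bits for _ in range(REP)]
        helper = [r ^ c for r, c in zip(response_bits, codeword)]
        return key_bits, helper

    def reconstruct(noisy_response, helper):
        offset = [r ^ h for r, h in zip(noisy_response, helper)]   # noisy codeword
        key_bits = []
        for i in range(0, len(offset), REP):
            block = offset[i:i + REP]
            key_bits.append(1 if sum(block) > REP // 2 else 0)     # majority decode
        return key_bits

    response = [secrets.randbelow(2) for _ in range(40)]
    key, helper = enroll(response)
    noisy = list(response)
    noisy[3] ^= 1                            # one flipped PUF bit due to noise
    assert reconstruct(noisy, helper) == key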

PUF Applications

PUFs enable various security applications:

  • Device authentication: Challenge-response protocols verify device identity without storing secrets
  • Key generation: Derive cryptographic keys from PUF responses, eliminating key storage
  • Anti-counterfeiting: Authenticate products by verifying PUF responses match enrolled values
  • Secure boot: Derive decryption keys for firmware from PUF, binding software to specific hardware
  • IP protection: Lock circuits to specific devices using PUF-derived keys

PUFs are particularly valuable in resource-constrained devices where secure non-volatile memory is expensive or unavailable.

PUF Attacks and Defenses

Several attack classes target PUF implementations:

  • Modeling attacks: Machine learning algorithms that predict responses from observed challenge-response pairs
  • Side-channel attacks: Power or EM analysis during PUF evaluation to extract internal state
  • Environmental manipulation: Extreme temperatures or voltages that alter PUF behavior predictably
  • Invasive attacks: Probing or imaging to directly measure physical characteristics

Strong PUF constructions use XOR networks, feed-forward structures, or other non-linear operations to resist modeling attacks. Implementation countermeasures similar to those for cryptographic circuits protect against side-channel attacks.

True Random Number Generators

True Random Number Generators (TRNGs) extract randomness from physical phenomena to produce unpredictable bits for cryptographic applications. Unlike Pseudo-Random Number Generators (PRNGs) that deterministically expand a seed, TRNGs provide information-theoretic randomness that cannot be predicted even with unlimited computational resources. Analog noise sources provide the physical entropy that makes TRNGs possible.

Physical Entropy Sources

TRNGs exploit various physical phenomena as entropy sources:

  • Thermal noise: Johnson-Nyquist noise in resistors provides fundamental randomness from thermal fluctuations
  • Shot noise: Statistical variation in electron flow across semiconductor junctions
  • Metastability: Circuits balanced at unstable equilibrium points that resolve randomly
  • Oscillator jitter: Phase noise in ring oscillators accumulates to provide timing randomness
  • Radioactive decay: Quantum mechanical randomness from nuclear decay events
  • Avalanche noise: Amplified noise from Zener diode breakdown

The choice of entropy source affects TRNG properties including bit rate, area, power consumption, and resistance to environmental manipulation.

Analog TRNG Architectures

Common TRNG architectures based on analog noise include:

  • Amplified thermal noise: High-gain amplifiers bring thermal noise to digital levels, followed by sampling and digitization
  • Oscillator sampling: A slow oscillator samples a fast, jittery oscillator; relative phase is random
  • Differential amplifier comparison: Compare noise voltages from two resistors to produce random bits
  • Metastable latch: Initialize a latch near its metastable point and observe resolution direction
  • Avalanche diode circuit: Amplify and digitize avalanche noise from reverse-biased Zener diodes

Each architecture presents different tradeoffs. Oscillator-based designs integrate well with digital CMOS processes but may have lower entropy density than true analog noise sources.

TRNG Quality Requirements

Cryptographic TRNGs must satisfy stringent quality requirements:

  • Entropy density: Raw output should contain close to 1 bit of entropy per output bit
  • Independence: Successive outputs must be statistically independent
  • Unpredictability: Future outputs must be unpredictable from past observations
  • Bias elimination: Output distribution must be uniform (equal probability of 0 and 1)
  • Attack resistance: Output quality must be maintained under adversarial conditions

Standards such as NIST SP 800-90B and AIS 31 define requirements and testing procedures for cryptographic entropy sources.

Post-Processing and Conditioning

Raw entropy source output typically requires conditioning to achieve cryptographic quality:

  • Von Neumann debiasing: Compare pairs of bits; output 0 for (0,1), 1 for (1,0), discard (0,0) and (1,1)
  • Hash-based conditioning: Cryptographic hash functions compress many input bits into fewer, higher-quality output bits
  • XOR correction: XOR multiple raw bits to produce one output bit with reduced bias
  • Linear feedback shift registers: LFSR-based whitening removes correlations in raw output
  • AES-based conditioning: Use AES in CBC-MAC mode to condition entropy source output

Conditioning cannot add entropy; it can only redistribute existing entropy. If the raw source provides insufficient entropy, no amount of conditioning produces secure output.
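
The Von Neumann debiaser described in the first bullet above is simple enough to show directly; the sketch below removes the bias of a simulated 70%-ones source at the cost of output rate.

    # Sketch: Von Neumann debiasing of a biased bit stream, as described above:
    # (0,1) -> 0, (1,0) -> 1, equal pairs are discarded.
    import random

    def von_neumann(bits):
        out = []
        for a, b in zip(bits[::2], bits[1::2]):
            if a != b:
                out.append(a)                # first bit of an unequal pair
        return out

    rng = random.Random(7)
    biased = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]  # ~70% ones
    debiased = von_neumann(biased)
    print(sum(biased) / len(biased))         # ~0.70
    print(sum(debiased) / len(debiased))     # ~0.50, at a reduced output rate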

TRNG Health Monitoring

Online health monitoring detects failures or attacks that compromise randomness:

  • Repetition count test: Detects stuck-at faults where output becomes constant
  • Adaptive proportion test: Detects bias that develops over time
  • Total failure test: Detects complete entropy source failure
  • Startup tests: Extended testing during initialization to verify correct operation
  • Environmental monitoring: Temperature and voltage sensors detect manipulation attempts

Health tests must be carefully designed to detect real failures without excessive false alarms. The tests themselves must not leak information about TRNG output.
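
A sketch of the two continuous health tests defined in NIST SP 800-90B follows. The window and cutoff values here are illustrative placeholders; the standard derives the real cutoffs from the claimed per-sample entropy and a target false-alarm probability.

    # Sketch: continuous TRNG health tests in the style of NIST SP 800-90B,
    # with illustrative (not normative) parameters.
    def repetition_count_test(samples, cutoff=35):
        run, prev = 1, None
        for s in samples:
            run = run + 1 if s == prev else 1
            if run >= cutoff:
                return False                 # stuck-at or total failure suspected
            prev = s
        return True

    def adaptive_proportion_test(samples, window=512, cutoff=325):
        for start in range(0, len(samples) - window + 1, window):
            block = samples[start:start + window]
            if block.count(block[0]) >= cutoff:
                return False                 # one value dominates the window
        return True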

TRNG Attacks and Countermeasures

Attackers may attempt to manipulate or predict TRNG output:

  • Environmental attacks: Temperature extremes, voltage manipulation, or electromagnetic interference that reduce entropy
  • Frequency injection: External signals that lock oscillator-based TRNGs to predictable frequencies
  • Power supply manipulation: Controlled power supply noise that correlates with TRNG output
  • Backside attacks: Optical probing of internal signals that reveal TRNG state

Countermeasures include shielding from external interference, multiple independent entropy sources, continuous health monitoring, and environmental sensors that detect abnormal conditions.

Tamper Detection Circuits

Tamper detection circuits sense physical attacks and trigger protective responses before secrets can be extracted. Unlike passive countermeasures that make attacks more difficult, tamper detection provides active defense that can destroy sensitive information when attacks are detected. Effective tamper protection requires sensors that detect diverse attack types while avoiding false triggers during normal operation.

Attack Detection Requirements

Tamper detection systems must sense various attack modalities:

  • Mechanical intrusion: Physical opening, drilling, or cutting of protective enclosures
  • Environmental manipulation: Temperature, voltage, or frequency excursions beyond normal operating ranges
  • Probing attacks: Contact with internal signals for measurement or fault injection
  • Optical attacks: Light exposure for imaging or laser fault injection
  • Chemical attacks: Solvent or etchant exposure during depackaging
  • Radiation attacks: X-ray imaging or ion beam modification

No single sensor detects all attack types. Comprehensive protection requires multiple complementary sensor types.

Environmental Monitoring

Environmental sensors detect conditions outside normal operating ranges:

  • Temperature sensors: Detect freezing attacks (to slow or stop circuits) and heating (from focused energy attacks)
  • Voltage monitors: Sense undervoltage or overvoltage conditions used for fault injection
  • Frequency monitors: Detect clock glitching attacks that manipulate timing
  • Light sensors: Photodiodes detect illumination during optical attacks or decapping
  • Radiation detectors: Sense X-rays or particle radiation during imaging or fault injection

Threshold selection requires balancing sensitivity against false alarm rates. Sensors must respond quickly enough to trigger protection before attacks succeed.

Active Shield Meshes

Active shield meshes detect physical intrusion using dense sensor networks:

  • Metal mesh layers: Continuous conductive paths that break when cut or penetrated
  • Integrity monitoring: Continuous checking of mesh continuity and correct signal propagation
  • Time-domain reflectometry: Detecting changes in transmission line characteristics that indicate probing
  • Random pattern checking: Driving meshes with random patterns and verifying correct reception
  • Multi-layer protection: Multiple independent mesh layers provide redundant protection

Shield meshes must cover all accessible surfaces and cannot have gaps large enough for probe insertion. Typical mesh spacing is 10-20 micrometers for high-security applications.

Tamper Response Mechanisms

When tampering is detected, protective responses must execute before secrets are exposed:

  • Key zeroization: Immediate erasure of stored cryptographic keys
  • Memory clearing: Erasure of all sensitive data in volatile and non-volatile memory
  • State destruction: Resetting all security-critical state machines
  • Permanent lockout: Disabling device functionality permanently
  • Alert generation: Logging events or notifying external systems

Response timing is critical. If attacks can extract secrets faster than tamper response executes, the protection is ineffective. Responses should complete within microseconds of detection.

Battery-Backed Protection

Maintaining tamper detection when external power is absent requires battery backup:

  • Low-power monitoring: Tamper sensors that operate continuously on microamp currents
  • Battery capacity: Sufficient energy for years of standby monitoring
  • Battery tamper detection: Sensors that detect battery removal attempts
  • Potted enclosures: Encapsulation that prevents access to batteries without triggering sensors
  • Dual battery systems: Redundant batteries that prevent momentary power loss during replacement

Hardware Security Modules (HSMs) and cryptographic coprocessors commonly use battery-backed tamper meshes to protect stored keys continuously.

Anti-Probing Measures

Detecting and preventing direct probing of internal signals requires specialized measures:

  • Buried metal layers: Routing sensitive signals in lower metal layers covered by shield meshes
  • Glue logic: Distributed logic that prevents identifying functional blocks
  • Active sensing lines: Dummy signals near sensitive lines that detect probe contact
  • Capacitance monitoring: Detecting added capacitance from probes touching signal lines
  • Voltage clamping: Preventing injection of signals into internal nodes

Modern focused ion beam (FIB) tools can edit circuits at nanometer resolution. Defeating FIB attacks requires multiple redundant protection layers and verification circuits.

Secure Key Storage

Cryptographic keys are the foundation of secure systems, and their protection often determines overall system security. Secure key storage must protect keys at rest, during use, and throughout their lifecycle. Analog techniques complement digital protections to resist physical extraction attacks.

Key Storage Challenges

Storing keys securely presents multiple challenges:

  • Persistence: Keys must survive power cycles while remaining protected
  • Accessibility: Keys must be available for cryptographic operations without exposure
  • Protection from extraction: Physical attacks must not be able to read stored key values
  • Protection from modification: Attackers must not be able to alter keys or inject malicious values
  • Lifecycle management: Keys must be securely generated, distributed, used, and destroyed

Different threat models require different protection levels. Consumer devices may accept lower protection than payment terminals or military systems.

Non-Volatile Memory Protection

Keys stored in non-volatile memory require multiple protection layers:

  • Memory encryption: Keys stored encrypted, with decryption keys derived from PUFs or other protected sources
  • Error detection: Integrity checks that detect memory modification attempts
  • Redundant storage: Multiple copies with voting to detect single-bit attacks
  • Anti-fuse keys: One-time programmable storage that cannot be altered after programming
  • Active destruction: Circuits that erase keys when tampering is detected

Flash memory cells can be read through various physical techniques. Encryption with PUF-derived keys prevents extracted ciphertext from revealing key values.

Volatile Key Storage

Keys in active use must be protected in volatile memory:

  • Encrypted RAM: Memory encryption engines that encrypt all stored values
  • Scattered storage: Distributing key bits across multiple memory locations
  • Temporal masking: Continuously re-masking stored keys with fresh random values
  • Secure caches: Dedicated memory regions with additional access controls
  • Immediate clearing: Zeroing key storage immediately after use

Cold boot attacks can extract keys from DRAM that retains data briefly after power removal. Encrypted memory and rapid clearing mitigate this threat.

Key Derivation from PUFs

PUF-based key derivation eliminates key storage entirely:

  • Key generation: Derive keys from PUF responses during each use
  • Error correction: Use helper data to reconstruct exact keys despite PUF noise
  • Key hierarchy: Derive multiple application keys from a single PUF-based root key
  • Binding: Keys are inherently bound to specific hardware, preventing cloning
  • No static storage: Keys exist only during active use, disappearing when not needed

PUF-derived keys provide strong protection against extraction attacks since there is no stored key to extract. The key exists only transiently during cryptographic operations.
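
The sketch below shows one way a PUF-derived root can feed a key hierarchy: an HKDF-style extract-then-expand construction, approximated here with HMAC-SHA-256, turns the reconstructed (error-corrected) response into per-purpose keys that are never stored. The response bits, salt, and purpose labels are placeholders.

    # Sketch: deriving per-purpose keys from a reconstructed PUF response.
    import hashlib, hmac

    def derive_key(puf_bits, purpose, length=32):
        raw = bytes(puf_bits)                                # reconstructed PUF response
        prk = hmac.new(b"extract-salt", raw, hashlib.sha256).digest()    # extract
        return hmac.new(prk, purpose, hashlib.sha256).digest()[:length]  # expand

    puf_bits = [1, 0, 1, 1, 0, 0, 1, 0] * 16                 # placeholder response
    storage_key = derive_key(puf_bits, b"storage-encryption")
    auth_key = derive_key(puf_bits, b"device-authentication")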

Key Usage Protection

Keys must be protected during cryptographic operations:

  • Secure computation: Process keys only in protected hardware that resists side-channel leakage
  • Key scheduling protection: Protect expanded key material as carefully as original keys
  • Register clearing: Clear intermediate values immediately after use
  • Access control: Hardware mechanisms that prevent unauthorized software from accessing keys
  • Usage counting: Limit the number of operations with each key to bound leakage

Side-channel attacks during key usage can be as dangerous as key extraction. The entire key usage flow must be designed with security in mind.

Analog Obfuscation Techniques

Analog obfuscation conceals circuit functionality and implementation details to prevent reverse engineering and intellectual property theft. Unlike digital obfuscation that focuses on logic obscuring, analog obfuscation must address continuous-valued signals and circuit topologies. Effective obfuscation makes circuit analysis expensive without degrading performance.

Goals of Analog Obfuscation

Analog obfuscation serves multiple security objectives:

  • IP protection: Prevent competitors from copying proprietary circuit designs
  • Counterfeiting prevention: Make circuit cloning difficult even with physical access
  • Trojan hiding: Obscure circuit structure to prevent malicious modification detection
  • Attack resistance: Complicate reverse engineering needed to plan physical attacks
  • Licensing enforcement: Enable different functionality based on legitimate licensing

Unlike digital circuits where gates can be identified and traced, analog circuits have interconnected components with continuous behavior, requiring different obfuscation approaches.

Topology Obfuscation

Concealing circuit topology prevents functional analysis:

  • Dummy components: Additional transistors, resistors, and capacitors that do not affect circuit function
  • Programmable connections: Switches that configure actual topology from a larger apparent structure
  • Multi-function blocks: Circuit sections that serve multiple purposes depending on configuration
  • Non-obvious layout: Physical arrangement that obscures the logical structure
  • Distributed functions: Spreading single functions across multiple physical locations

Effective topology obfuscation significantly increases the effort required to extract a schematic from physical analysis. Automated tools must consider many more potential configurations.

Parameter Obfuscation

Concealing component values prevents circuit recreation:

  • Programmable components: Resistors, capacitors, and current sources with configurable values
  • Post-fabrication trimming: Setting final values through laser trimming or fuse programming
  • Key-dependent biasing: Circuit operating points that depend on secret configuration data
  • Hidden calibration: Performance depends on calibration data not stored on-chip
  • Encrypted configuration: Configuration data stored encrypted and decrypted only during initialization

Even if circuit topology is discovered, incorrect component values prevent functional reproduction. Critical values can be locked to specific devices using PUF-derived keys.

Functional Camouflaging

Making different components appear identical prevents visual analysis:

  • Standard cell camouflage: Digital cells with identical appearance but different functions
  • Transistor-level camouflage: Transistors with appearance that does not reveal type or connection
  • Via camouflage: Vias that appear identical but may or may not provide electrical connection
  • Dopant-level camouflage: Variations visible only through expensive analysis techniques
  • Analog block camouflage: Circuit blocks with similar layout but different functionality

Camouflaging exploits limitations in imaging and analysis tools. Determining actual circuit function requires expensive techniques beyond standard reverse engineering.

Split Manufacturing

Splitting fabrication across multiple foundries prevents any single facility from knowing complete design:

  • Front-end/back-end split: Transistors fabricated separately from interconnect
  • Metal layer splitting: Lower and upper metal layers fabricated at different facilities
  • Interposer integration: Combining separately fabricated chiplets on a common substrate
  • Secure assembly: Final integration performed in trusted facilities
  • Partial design disclosure: Each foundry sees only part of the complete design

Split manufacturing is particularly relevant for protecting designs fabricated at offshore foundries where intellectual property theft or hardware Trojan insertion are concerns.

Active Anti-Reverse-Engineering

Active measures that respond to analysis attempts:

  • Destructive readout: Circuits that destroy themselves when read
  • Analysis detection: Sensors that detect voltage probing, electron beam imaging, or FIB access
  • Decoy circuits: False targets that waste attacker effort
  • Time-limited functionality: Circuits that degrade or fail after initial operation period
  • Environmental triggers: Functionality changes based on detected operating environment

Active measures increase the cost and risk of reverse engineering. Even partial success may destroy the sample, requiring attackers to obtain and sacrifice multiple devices.

Summary

Hardware security in analog domains addresses the physical vulnerabilities that exist in all electronic systems regardless of the strength of their cryptographic algorithms or software security. Side channels through power consumption, electromagnetic emissions, and timing variations provide attackers with windows into otherwise secure implementations. Defending against these attacks requires understanding the physical mechanisms of information leakage and implementing countermeasures at the circuit level.

Power analysis countermeasures employ constant-power logic, masking, shuffling, and careful power supply design to eliminate correlations between power consumption and sensitive data. Electromagnetic security requires both physical shielding and circuit-level techniques to prevent information-bearing emissions from escaping protected boundaries. These complementary approaches must be combined for comprehensive protection.

Physically Unclonable Functions transform manufacturing variations from a nuisance into a security feature, providing unique device identities and enabling key generation without secure storage. True Random Number Generators harvest physical entropy from analog noise sources to provide the unpredictable bits essential for cryptographic security. Both technologies leverage analog phenomena that cannot be replicated or predicted.

Tamper detection circuits provide active defense through sensors that detect physical attacks and trigger protective responses before secrets can be extracted. Secure key storage combines encryption, integrity verification, and PUF-based derivation to protect cryptographic keys throughout their lifecycle. Analog obfuscation techniques complement these protections by making circuit analysis expensive and uncertain.

The field of hardware security continues to evolve as both attack capabilities and defense techniques advance. New threats from machine learning-based attacks, quantum computing, and increasingly sophisticated probing tools drive the development of novel countermeasures. Effective hardware security requires continuous assessment of evolving threats and defense-in-depth strategies that layer multiple protection mechanisms.

Further Reading