Transponder-Aided Joint Calibration and Synchronization Compensation for Distributed Radar Systems

High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are spatially separated. This paper proposes a transponder-aided joint radiometric calibration, motion compensation and synchronization method for distributed radar remote sensing. Because the transponder signal can be separated from the normal radar returns, it is used to radiometrically calibrate the distributed radar. Meanwhile, motion compensation and synchronization compensation algorithms that exploit the transponder signals are presented. The method requires no hardware modifications to either the normal radar transmitter or the receiver, and no change to the operating pulse repetition frequency (PRF). Radiometric calibration and synchronization compensation require only one transponder, but motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.


Introduction
A distributed radar system, operating with separated transmitters and receivers, offers many operational advantages [1][2][3][4] over conventional monostatic and multi-frequency or multi-polarized radars [5][6][7][8], such as the exploitation of additional information contained in the bistatic reflectivity of targets [9], reduced vulnerability [10], and forward-looking imaging [11]. Distributed radar may offer reduced vulnerability to countermeasures such as jamming, as well as improved slow-moving target detection and identification capability via clutter tuning, in which the receiver maneuvers so that its motion compensates for the motion of the illuminator, creating a zero Doppler shift for the area being searched. This can be worthwhile, e.g., for mapping topographic features and drainage and for revealing the relationships between forest, vegetation, and soils. It also provides important information for land classification and land-use management, such as agriculture monitoring, soil mapping, and archaeological investigation. Attracted by these advantages, various spaceborne and airborne distributed radar missions have been suggested or developed [12].
However, in a distributed radar the receiver uses an oscillator that is spatially displaced from that of the transmitter; hence, the phase noise of the two independent oscillators cannot cancel out. This superposed phase noise corrupts the received radar signal over the whole coherent integration time and may significantly degrade subsequent imaging performance. Even when low-frequency or quadratic phase errors as large as 45 degrees in a coherent processing interval can be tolerated, the required frequency stability is achievable only with ultra-high-quality oscillators [13]. In the bistatic spaceborne radar system TanDEM-X [2,14], for example, the relative phase has to be measured with at least 1 Hz sampling frequency in order to follow, unwrap and compensate the oscillator phase drifts within the acquisition [15,16]. The situation is further aggravated for airborne platforms, where differing platform motions degrade the effective frequency stability even more. Thus, frequency synchronization compensation is required for distributed radar systems.
There is a relative lack of practical synchronization techniques for distributed radar systems. Since distributed radar is of great scientific and technological interest, several potential synchronization techniques have been suggested. The use of duplex links for oscillator frequency drift compensation was proposed in [14]; this concept is similar to the microwave ranging technique. However, such two-way operation is too complex to be applied to multistatic radar systems. We investigated a direct-path signal-based phase synchronization technique in [17]; to receive the direct-path signal, the receiver must fly at a sufficient altitude and position to maintain line-of-sight contact with the transmitter/illuminator. In [18], we proposed a time and phase synchronization method via global positioning system (GPS) disciplined oscillators, but GPS signals may not be available in some scenes. In such cases, other synchronization methods must be applied.
In terms of radiometry, the major goal in utilizing radar data is to infer geophysical parameters about target areas within the scene via analysis of the recorded radar signal. The stability and consistency of the relation between the output voltage and the antenna temperature, that is, the system gain, is critical for quantitative remote sensing. Ideally, all radar imagery is absolutely calibrated such that the image pixel intensity is directly expressed in terms of the mean surface backscatter coefficient. This procedure, referred to as radiometric calibration [19], establishes a common basis for all image pixels, such that a given pixel intensity represents a unique value of the backscattered signal power [20][21][22]. Internal calibration mechanisms can indeed be used to determine short-term drifts, but external calibration is often also required to provide a quantitative value for the measured backscatter and to remove the system distortion. Although many calibration techniques have been developed [23][24][25][26], passive calibrators have various disadvantages due to their small radar cross-sections. The most serious challenge is to find an area free of interference from man-made targets, namely buildings and automobiles. It is well known that radar imagery is derived by correlating the raw data with a two-dimensional (2D) reference function. The azimuth component describes the phase history between the radar and the target at a constant range, and the range component describes the phase history of the transmitted signal. An active calibrator provides the opportunity to change the phase history of the radar returns in either range or azimuth. Thus, modifying the retransmitted signal phase can displace the calibrator response in the final radar imagery. Additionally, it is useful to have an active calibrator that can shift its response away from its physical location to a low-backscatter area.
A literature search reveals that little work on distributed radar radiometric calibration has been published, although it is of great scientific and technological interest. A novel transponder for calibrating high-resolution imaging radars was proposed in [27]; it retransmits the original radar signal with two artificial Doppler shifts to the receiver. If the artificial Doppler shifts are chosen to be larger than the Doppler bandwidth of the raw data, the transponder signal can be separated during subsequent radar signal processing; details can be found in [27,28]. In fact, the transponder can also be used in many other applications [29,30].
This paper uses such transponders to jointly calibrate and synchronize an airborne distributed radar for high-resolution remote sensing. Multiple transponders are placed in both the along-track and cross-track dimensions, and each transponder uses a distinct modulation frequency and therefore yields an independent signal. Besides radiometric calibration, the motion and synchronization errors are also compensated by correlating the data collected from the transponders and the distributed radar receivers. Moreover, a maximum likelihood (ML) algorithm is used to estimate the transponder signal parameters.

Problem Formulation
Because distributed radar is a coherent system, completing coherent accumulation in azimuth requires that radar echoes of equal range but different azimuth time have the same phase after range compression and range migration correction. Since the frequency synchronization errors in a bistatic radar system are caused mainly by the phase noise of the local oscillators (LOs), the modulation waveform used for range resolution can be ignored and the distributed radar model can be simplified to an ''azimuth only'' system [31].
Suppose the transmitted signal is sinusoidal with phase argument

φ_T(t) = 2π ∫_0^t f_T(τ) dτ + φ_T0,  (1)

where φ_T0 is the transmitter original phase and f_T(t) is the transmitter carrier frequency, which can be expressed as

f_T(t) = f_0 + δ_T(t),  (2)

where f_0 is the error-free carrier frequency and δ_T(t) is the frequency fluctuation function. The receiver LO phase has the same form:

φ_R(t) = 2π ∫_0^t f_R(τ) dτ + φ_R0,  (3)

where φ_R0 is the receiver original phase and f_R(t) is the receiver carrier frequency, expressed as

f_R(t) = f_0 + δ_R(t).  (4)

Suppose the transmit time is t_0; for a time delay t_d we obtain, by demodulating the received signal phase with the receiver LO phase,

ψ(t_0) = φ_T(t_0) − φ_R(t_0 + t_d)  (5)
       = −2π f_0 t_d − 2π ∫_{t_0}^{t_0+t_d} δ_R(τ) dτ + 2π ∫_0^{t_0} [δ_T(τ) − δ_R(τ)] dτ + Δφ_0.  (6)

Analogously, supposing the transmit time is t_1, we get the similar result

ψ(t_1) = −2π f_0 t_d − 2π ∫_{t_1}^{t_1+t_d} δ_R(τ) dτ + 2π ∫_0^{t_1} [δ_T(τ) − δ_R(τ)] dτ + Δφ_0,  (7)

where

Δφ_0 = φ_T0 − φ_R0.  (8)

From (5) and (7), we can express the phase synchronization error as

φ_e = ψ(t_1) − ψ(t_0) = 2π ∫_{t_0}^{t_1} [δ_T(τ) − δ_R(τ)] dτ − 2π [ ∫_{t_1}^{t_1+t_d} δ_R(τ) dτ − ∫_{t_0}^{t_0+t_d} δ_R(τ) dτ ].  (9)

Since t_d is much smaller than t_1 − t_0, (9) can be approximated as

φ_e ≈ 2π ∫_{t_0}^{t_1} [δ_T(τ) − δ_R(τ)] dτ.  (10)

Image generation with distributed radar requires frequency coherence for at least one aperture time, namely t_1 − t_0 > T_s, with T_s being the synthetic aperture time. For spaceborne radar systems the typical synthetic aperture time is about 1 second, while for airborne radar systems it is 5-15 seconds [32]. Moreover, phase synchronization errors are usually random and too complex for autofocus image formation algorithms to produce a focused distributed radar image. Therefore, some synchronization compensation technique or compensation algorithm must be applied for distributed radar imaging. Fig. 1 shows the system geometry of the transponder-aided joint radiometric calibration and frequency synchronization for distributed radar imaging, in which six transponders are placed in the observed scene. Each transponder consists of a low-noise amplifier followed by a bandpass filter. A voltage controlled attenuator (VCA) is used to modulate the radar signal in such a manner that the retransmitted signal exhibits two additional Doppler shifts. Each transponder uses a distinct sinusoidal modulation frequency.
The modulation frequency ω_m for the mth transponder is controlled by a direct digital synthesizer (DDS). Thereafter, the signal is amplified to an appropriate level and retransmitted towards the receiver. Note that two antennas are used in each transponder, one for transmitting and the other for receiving, to minimize cross-coupling interference.
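To make the effect of unsynchronized oscillators concrete, the growth of the phase error as the integral of the differential frequency fluctuations can be sketched in a toy simulation. The random-walk fluctuation model, update rate and step size below are illustrative assumptions, not the paper's oscillator model:

```python
import math
import random

# Toy sketch (illustrative model): two independent oscillators whose
# frequency fluctuations delta_T(t) and delta_R(t) follow random walks.
# The phase synchronization error is the integral of their difference.
random.seed(0)
fs = 100.0      # update rate of the fluctuation process, Hz (assumed)
T = 10.0        # airborne synthetic aperture time, s
n = int(fs * T)
step = 0.01     # random-walk step of the frequency error, Hz (assumed)

dT = dR = 0.0
phase_err = 0.0
errors = []
for _ in range(n):
    dT += random.gauss(0.0, step)               # transmitter fluctuation
    dR += random.gauss(0.0, step)               # receiver fluctuation
    phase_err += 2.0 * math.pi * (dT - dR) / fs  # phi_e = 2*pi*int(dT-dR)dt
    errors.append(phase_err)

print("final phase error (rad):", errors[-1])
```

Even with small frequency steps, the integrated error typically reaches the order of radians over a 10 s aperture (the exact value depends on the realization), which is why compensation is required.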

System Geometry and Transponder Arrangement
Suppose the radar transmitted signal is a linear frequency modulation (LFM) pulse

s(t) = rect(t/T_p) exp[j(ω_t t + π k_r t²)],  (12)

where k_r is the chirp rate and ω_t = 2π f_t, with f_t the transmitter carrier frequency. As the transponder can be seen as an amplitude modulator, the signal modulated by the mth transponder can be represented by

s'_m(t) = α_m [1 + β_m cos(ω_m t + φ_m)] s_m(t),  (13)

where α_m and β_m are constants determined by the mth transponder, φ_m is the starting phase and s_m(t) is the signal arriving at the mth transponder. The ω_m and φ_m are directly related to the receiver oscillator frequency, from which we can extract the phase synchronization errors (further investigated in Section 5). The signal arriving at the distributed radar receiver is

s_r(t) = s_m^u(t) + α_m [1 + β_m cos(ω_m t + φ_m)] s_m(t − τ_m),  (14)

where s_m^u(t) denotes the normal radar returns unmodulated by the transponder and τ_m is the time delay required for the signal to travel from the transmitter to the receiver. Fourier transforming (14) with respect to t yields

S_r(ω) = S_m^u(ω) + α_m S_m(ω) + (α_m β_m/2) e^{jφ_m} S_m(ω − ω_m) + (α_m β_m/2) e^{−jφ_m} S_m(ω + ω_m),  (15)

where S_m(ω) is the Fourier transform of s_m(t − τ_m). Since the sidebands are offset by ±ω_m from the normal radar spectrum, the upper and lower sidebands of the mth transponder signal can be extracted and are represented, respectively, by [28]

s_m^up(t) = (α_m β_m/2) e^{jφ_m} s_m(t − τ_m) e^{jω_m t},  (16a)
s_m^low(t) = (α_m β_m/2) e^{−jφ_m} s_m(t − τ_m) e^{−jω_m t}.  (16b)

Extracting a sideband at the transponder's range-compressed peak then yields a single complex sinusoid

y_m(t) = a_m e^{j(ω_m t + φ_m)}.  (17)

The estimates of φ_m and ω_m, denoted φ̂_m and ω̂_m, can then be used for subsequent motion and synchronization compensation.
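The sideband structure can be illustrated numerically: multiplying a tone by 1 + β cos(ω_m t + φ) places replicas of amplitude β/2 at ±f_m around the carrier. This is a minimal sketch with illustrative frequencies, not the paper's simulation parameters:

```python
import math
import cmath

# Sketch of the transponder's amplitude modulation: multiplying the
# incoming signal by 1 + beta*cos(w_m*t + phi) creates two replicas
# shifted by +/- f_m. Tone frequencies and beta are illustrative.
fs = 8000.0   # sampling rate, Hz
f0 = 1000.0   # incoming tone, Hz
fm = 500.0    # transponder modulation frequency, Hz
beta, phi = 0.5, 0.3
N = 800

x = [cmath.exp(2j * math.pi * f0 * k / fs) for k in range(N)]
y = [(1 + beta * math.cos(2 * math.pi * fm * k / fs + phi)) * x[k]
     for k in range(N)]

def dft_mag(sig, f):
    """Magnitude of the normalized DFT of `sig` at frequency f."""
    return abs(sum(s * cmath.exp(-2j * math.pi * f * k / fs)
                   for k, s in enumerate(sig))) / len(sig)

# Carrier line plus upper/lower sidebands at f0 +/- fm of amplitude beta/2.
print(dft_mag(y, f0), dft_mag(y, f0 + fm), dft_mag(y, f0 - fm))
```

Since all three frequencies fall on integer DFT bins here, the printed magnitudes are approximately 1.0, 0.25 and 0.25, i.e., a unit carrier and two sidebands of amplitude β/2.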

Transponder Signal Parameter Estimation
When noise is considered, (17) can be represented by the general signal model

y_m(t) = a_m e^{j(ω_m t + φ_m)} + n_m(t),  (18)

where a_m is the signal amplitude and n_m(t) is complex additive white Gaussian noise with zero mean and variance σ². Suppose the sampling interval is T_s and N_s samples of noisy discrete-time observations are available; we then have

y_m[k] = a_m e^{j(ω_m k T_s + φ_m)} + n_m[k],  k = 0, 1, ..., N_s − 1,  (19)

where n_m[k] = n_m(k T_s). We assume the modulo-2π reduced frequency ω_m has a probability density function (PDF) given by [33]

p(ω) = exp(κ_ω cos ω) / [2π I_0(κ_ω)],  (20)

and the modulo-2π reduced phase φ_m also has a PDF given by

p(φ) = exp(κ_φ cos φ) / [2π I_0(κ_φ)],  (21)

where I_0(·) denotes the zeroth-order modified Bessel function of the first kind. This statistical model is a specific Tikhonov distribution, which has been widely used in modeling the statistics of frequency/phase estimation errors [34] and is of sufficient practical importance. We want estimates of ω_m and φ_m based on all the data samples y_m[k], 0 ≤ k ≤ N_s − 1. The values of ω_m and φ_m are estimated by maximizing the a posteriori PDF [33]

p(ω_m, φ_m | {y_m[k]}) = p({y_m[k]} | ω_m, φ_m) p(ω_m) p(φ_m) / p({y_m[k]}).  (22)

Since p({y_m[k]}) does not depend on ω_m and φ_m, it suffices to maximize

p({y_m[k]} | ω_m, φ_m) p(ω_m) p(φ_m) = C exp{ Σ_k (2 a_m |y_m[k]| / σ²) cos(arg{y_m[k]} − ω_m k T_s − φ_m) } p(ω_m) p(φ_m),  (23)

where arg{y_m[k]} denotes the measurement phase of y_m[k] and C is a constant independent of ω_m and φ_m. Taking the natural logarithm of both sides yields

Λ(ω_m, φ_m) = ln C + Σ_k (2 a_m |y_m[k]| / σ²) cos(arg{y_m[k]} − ω_m k T_s − φ_m) + ln p(ω_m) + ln p(φ_m).  (24)

Supposing ω_m and φ_m are statistically independent of each other, taking noninformative priors (κ_ω = κ_φ = 0), and using the high-SNR approximation cos x ≈ 1 − x²/2, maximizing (24) reduces to the weighted least-squares problem

(ω̂_m, φ̂_m) = argmin_{ω, φ} Σ_k w_k (θ_k − ω k T_s − φ)²,  w_k = |y_m[k]|,  θ_k = arg{y_m[k]},  (25)

whose closed-form solutions give the maximum likelihood estimates ω̂_m^(N_s−1) and φ̂_m^(N_s−1) of ω_m and φ_m, respectively [33]:

ω̂_m^(N_s−1) = [ Σ_k w_k · Σ_k w_k k θ_k − Σ_k w_k k · Σ_k w_k θ_k ] / { T_s [ Σ_k w_k · Σ_k w_k k² − (Σ_k w_k k)² ] },  (26)

φ̂_m^(N_s−1) = [ Σ_k w_k k² · Σ_k w_k θ_k − Σ_k w_k k · Σ_k w_k k θ_k ] / [ Σ_k w_k · Σ_k w_k k² − (Σ_k w_k k)² ],  (27)

where all sums run over k = 0, ..., N_s − 1. It can be noticed that this estimation algorithm makes use of both the measurement phase arg{y_m[k]} and the measurement magnitude |y_m[k]| of the received signal samples y_m[k]; however, it requires knowledge of neither the amplitude a_m nor the noise power σ².
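A minimal sketch of the magnitude-weighted phase regression described above (a common high-SNR form of the ML tone estimator; it assumes the measured phases are already unwrapped, and all signal parameters below are illustrative):

```python
import math
import cmath
import random

# Sketch: estimate the frequency (rad/s) and phase of a noisy complex tone
# by weighted least squares on the measured phases, weighted by magnitude.
random.seed(1)
Ts = 1e-3                       # sampling interval, s (assumed)
w_true, phi_true = 10.0, 0.7    # true frequency (rad/s) and phase (rad)
N = 200

y = [5.0 * cmath.exp(1j * (w_true * k * Ts + phi_true))
     + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
     for k in range(N)]

w = [abs(v) for v in y]           # magnitude weights |y[k]|
th = [cmath.phase(v) for v in y]  # measured phases (no wrapping occurs here)

# Weighted least-squares fit of th_k ~ w_hat*k*Ts + phi_hat
S0 = sum(w)
S1 = sum(wk * k for k, wk in enumerate(w))
S2 = sum(wk * k * k for k, wk in enumerate(w))
T0 = sum(wk * t for wk, t in zip(w, th))
T1 = sum(wk * k * t for k, (wk, t) in enumerate(zip(w, th)))
den = S0 * S2 - S1 * S1
w_hat = (S0 * T1 - S1 * T0) / den / Ts
phi_hat = (S2 * T0 - S1 * T1) / den
print(w_hat, phi_hat)
```

Note that the amplitude (5.0) and noise power never enter the estimator explicitly; they influence it only through the measured magnitudes, consistent with the remark above.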
Since the measurement phase arg{y_m[k]} is obtained as the principal argument of the complex phasor y_m[k], phase unwrapping is necessary. Various phase unwrapping algorithms have been proposed in the literature [35][36][37][38]. Considering the continuously updating nature of the phase synchronization error estimation, here we use the Kalman filter-based phase unwrapping algorithm proposed in [33]. This algorithm assumes that at time k the ML estimates ω̂_m^(k) and φ̂_m^(k) have been computed from (26) and (27), respectively, and takes ω̂_m^(k) (k+1) T_s + φ̂_m^(k) as the prediction of the phase at time k+1 given the measurements up to time k. The unwrapped phase at time k+1 is then the one lying in the interval [33]

[ ω̂_m^(k) (k+1) T_s + φ̂_m^(k) − π,  ω̂_m^(k) (k+1) T_s + φ̂_m^(k) + π ).  (28)

This algorithm can be implemented recursively like the Kalman filter [39].
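The prediction-based unwrapping step can be sketched as follows; this is a simplified recursion illustrating the interval test described above, not the full Kalman filter of the cited reference:

```python
import math

# Sketch: shift the raw principal-value phase by a multiple of 2*pi so
# that it falls within +/- pi of the phase predicted from the current
# frequency/phase estimates (simplified prediction-based unwrapping).
def unwrap_next(raw_phase, w_est, phi_est, k1, Ts):
    pred = w_est * k1 * Ts + phi_est            # predicted phase at sample k+1
    m = round((pred - raw_phase) / (2 * math.pi))
    return raw_phase + 2 * math.pi * m          # lies in [pred - pi, pred + pi)

# Example: a true phase of 7.0 rad is measured as its principal value.
true_phase = 7.0
raw = math.atan2(math.sin(true_phase), math.cos(true_phase))  # ~0.717 rad
print(unwrap_next(raw, w_est=70.0, phi_est=0.0, k1=1, Ts=0.1))  # ~7.0
```

With a prediction of 7.0 rad, the wrapped measurement is correctly restored to about 7.0 rad instead of 0.717 rad.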

Radiometric Calibration
According to (14), each transponder yields two artificial Doppler signals in the distributed radar returns. Therefore, the distributed radar received signal corresponding to the mth transponder can be re-expressed as

s_r(t) = s_m^u(t) + α_m s_m(t − τ_m) + (α_m β_m/2) e^{jφ_m} s_m(t − τ_m) e^{jω_m t} + (α_m β_m/2) e^{−jφ_m} s_m(t − τ_m) e^{−jω_m t}.  (29)

After processing by general image formation algorithms, e.g., range-Doppler imaging algorithms [40], the focused imagery will be [4]

z_m(t, τ) = z_0(t', τ') + z_1(t_m, τ_m1) + z_2(t_m, τ_m2),  (30)

where t' and τ' denote the fast time and slow time in the normal radar imagery, respectively, t_m is the fast time for the transponder, and τ_m1 and τ_m2 denote the slow times of the two Doppler signals generated by the transponder. That is to say, each transponder produces two artificial point targets in the imagery. The transponder imagery can be quantitatively measured as γ_m, namely

γ_m = |z_1(t_m, τ_m1)| + |z_2(t_m, τ_m2)|.  (31)

Since β_m is a known variable (it is measurable from the transponder signals), the quantitative radiometric coefficient σ0 can then be determined from the scaling

σ0 ∝ γ_m / β_m.  (32)

Since the transponder signal and the SAR signal experience similar channel effects (the transponders are placed in the SAR imaging scene) and the transponder received and re-transmitted signal amplitudes can be measured on the ground in real time, the quantitative radiometric coefficient σ0 can be determined by comparing the transponder received and re-transmitted signal amplitudes with the SAR received signal amplitude. In doing so, the distributed radar imagery is effectively calibrated, and we can also measure the fading effect, an important phenomenon for a coherent system [41]. As the feasibility of the transponder for radiometric calibration has been fully investigated and validated in [27], this paper mainly discusses the motion compensation and frequency synchronization compensation aspects of the transponder-aided joint calibration and synchronization scheme for distributed radar systems.
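As a toy numerical illustration of the calibration idea (all numbers, and the assumption that the transponder response intensity scales with (β_m/2)², are hypothetical, not values or a model from the paper):

```python
# Hypothetical sketch: the known modulation index beta_m ties the measured
# transponder response to an absolute scale, from which a scene pixel
# intensity is converted to a backscatter coefficient. Illustrative only.
beta_m = 0.5              # known transponder modulation index
gamma_m = 1.2e4           # measured intensity of the transponder response
pixel_intensity = 3.0e3   # intensity of a scene pixel to calibrate

# Assumed model: each artificial point target has amplitude ~ beta_m/2,
# so a unit reference target would respond with gamma_m / (beta_m/2)**2.
scale = gamma_m / (beta_m / 2.0) ** 2
sigma0 = pixel_intensity / scale
print(sigma0)
```

The point is only the mechanics: the known, ground-measured transponder quantities remove the unknown system gain from the pixel-to-backscatter conversion.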

Motion Compensation
In distributed radar imaging, motion compensation problems may arise due to the presence of atmospheric turbulence, which introduces platform trajectory deviations from the nominal position and altitude [42]. To account for such errors, onboard GPS and inertial navigation units are widely employed. If high-precision motion measurement sensors are not available, signal processing-based motion compensation must be applied.
The positions of the transmitter and receiver can be determined from the different pseudoranges and the knowledge of the transponder positions (x_m, y_m, z_m) as follows:

√[(x_t − x_m)² + (y_t − y_m)² + (z_t − z_m)²] + √[(x_r − x_m)² + (y_r − y_m)² + (z_r − z_m)²] = R_m,  m = 1, 2, ..., 6,  (33a)-(33f)

where R_m is the bistatic range measured via the mth transponder. There are six unknown parameters, namely (x_t, y_t, z_t) and (x_r, y_r, z_r), and six independent equations. Certainly, if a basic ranging method is employed, we cannot obtain the positions to within a fraction of the wavelength. To overcome this problem, in this paper the transponder position is obtained by taking each transponder as a specific target and comparing the relative distance between each transponder image and the normal radar image, in a manner similar to strong-target-based inverse radar autofocus techniques. Thus, all the unknown parameters are uniquely determinable. This position information can then be used to compensate the motion errors in subsequent distributed radar image formation processing.
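A sketch of solving the six nonlinear bistatic-range equations (33a)-(33f) with a hand-rolled Newton iteration; the transponder and platform positions and the initial guess are illustrative assumptions:

```python
import math

# Illustrative geometry: six transponders at known ground positions and
# hypothetical transmitter/receiver positions to recover.
P = [(0, 0, 0), (4000, 0, 10), (0, 4000, 5),
     (4000, 4000, 0), (2000, 500, 20), (500, 3000, 15)]
t_true = (1000.0, 2000.0, 3000.0)
r_true = (3000.0, 1000.0, 2500.0)

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# "Measured" bistatic ranges through each transponder.
R = [dist(t_true, p) + dist(r_true, p) for p in P]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r_: abs(M[r_][c]))
        M[c], M[piv] = M[piv], M[c]
        for r_ in range(c + 1, n):
            f = M[r_][c] / M[c][c]
            for k in range(c, n + 1):
                M[r_][k] -= f * M[c][k]
    x = [0.0] * n
    for r_ in range(n - 1, -1, -1):
        x[r_] = (M[r_][n] - sum(M[r_][k] * x[k] for k in range(r_ + 1, n))) / M[r_][r_]
    return x

# Newton iteration on the six residuals; the initial guess breaks the
# transmitter/receiver swap ambiguity of the symmetric equations.
u = [900.0, 2100.0, 2900.0, 2900.0, 1100.0, 2400.0]
for _ in range(20):
    t, r = tuple(u[:3]), tuple(u[3:])
    res = [dist(t, p) + dist(r, p) - Rm for p, Rm in zip(P, R)]
    J = []
    for p in P:
        dt, dr = dist(t, p), dist(r, p)
        J.append([(t[i] - p[i]) / dt for i in range(3)]
                 + [(r[i] - p[i]) / dr for i in range(3)])
    step = solve(J, res)
    u = [ui - si for ui, si in zip(u, step)]

print([round(v, 3) for v in u])
```

With well-spread transponders the 6x6 Jacobian is well conditioned and the iteration converges to the platform positions; note the equations are symmetric under swapping transmitter and receiver, so a reasonable initial guess is needed to select the physical solution.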

Frequency Synchronization Compensation
The transponder signal can also be used to compensate the frequency synchronization errors. As the synchronization errors can be extracted with one transponder, in the following we consider only one transponder. As in the existing literature, physical frequency is used in this section to discuss the frequency synchronization problem. To find a general mathematical model, suppose the nth transmitted pulse is

s_n(t) = rect[t/T_p] exp{j[2π f_tn t + π k_r t² + φ_e(n)]},  (34)

where rect[·] is the window function, f_tn is the transmitter carrier frequency, k_r is the chirp rate, T_p is the pulse duration and φ_e(n) is the original phase to be estimated. Suppose the demodulating signal in the receiver is

s_dn(t) = exp(−j2π f_rn t).  (35)

Hence the received transponder signal (which can be seen as a strong point target) at baseband is

s_bn(t) = rect[(t − τ_n)/T_p] exp{j[2π (f_tn − f_rn + f_dn) t + π k_r (t − τ_n)² + θ_n]},  (36)

where f_rn is the receiver carrier frequency, f_dn is the Doppler shift, τ_n is the time delay corresponding to the time it takes the signal to travel the transmitter-transponder-receiver distance for the nth pulse, and θ_n collects the constant phase terms, which involve φ_e(n), f_rn, f_dn and τ_n.
Let Δf_n = f_tn − f_rn, and suppose the range reference function is

s_ref(t) = rect[t/T_p] exp(jπ k_r t²).  (37)

Range compression then yields

s_on(t) ≈ T_p sinc[k_r T_p (t − τ_n + Δf_n/k_r)] exp[jπ Δf_n (t − τ_n + Δf_n/k_r)] exp[jψ(n)],  (38)

where ψ(n) is the residual phase at the peak. We can notice that the maximum will be at t = τ_n − Δf_n/k_r, where

exp[jπ Δf_n (t − τ_n + Δf_n/k_r)] |_{t = τ_n − Δf_n/k_r} = 1.  (39)

Then the residual phase term in (38) is

ψ(n) = φ_e(n) + 2π (f_rn + f_dn) τ_n − π Δf_n²/k_r.  (40)

As Δf_n and k_r are typically on the orders of 1 kHz and 1×10^13 Hz/s, respectively, the term πΔf_n²/k_r (about 3×10^-7 rad) has negligible effects. If we let

f_dn = f_d0 + δf_dn,   f_rn = f_r0 + δf_rn,  (41)

where f_d0 and f_r0 are the original Doppler shift and the error-free demodulating frequency in the receiver, respectively, and δf_dn and δf_rn are the frequency errors for the nth pulse, then, assuming the frequency errors vary slowly from pulse to pulse, we have

φ_e(n+1) − φ_e(n) = [ψ(n+1) − ψ(n)] − 2π (f_r0 + f_d0)(τ_{n+1} − τ_n) − 2π (δf_dn + δf_rn)(τ_{n+1} − τ_n).  (42)

Generally speaking, δf_dn + δf_rn and τ_{n+1} − τ_n are typically on the orders of 10 Hz and 10^-9 s, respectively, so 2π(δf_dn + δf_rn)(τ_{n+1} − τ_n) is smaller than about 2π×10^-8 rad, which has negligible effects. Thus, (42) can be further simplified to

φ_e(n+1) − φ_e(n) = [ψ(n+1) − ψ(n)] − 2π (f_r0 + f_d0)(τ_{n+1} − τ_n).  (43)

At this step, the synchronization errors φ_e(n) can be estimated according to (43).
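The pulse-to-pulse recursion (43) can be sketched directly; the residual phases, delays and frequencies below are illustrative placeholders, not measured values:

```python
import math

# Sketch of accumulating the pulse-to-pulse phase error via
# phi_e(n+1) - phi_e(n) = [psi(n+1) - psi(n)]
#                         - 2*pi*(f_r0 + f_d0)*(tau_{n+1} - tau_n),
# where psi(n) is the residual phase at the range-compressed transponder
# peak. All numbers are illustrative.
f_r0, f_d0 = 1.2e9, 100.0   # error-free demodulating frequency, Doppler (assumed)
psi = [0.10, 0.35, 0.52, 0.81]                       # measured residual phases
tau = [20e-6, 20e-6 + 1e-9, 20e-6 + 2e-9, 20e-6 + 3e-9]  # path delays, s

phi_e = [0.0]   # phase error, known only up to an additive constant
for n in range(len(psi) - 1):
    dphi = (psi[n + 1] - psi[n]) - 2 * math.pi * (f_r0 + f_d0) * (tau[n + 1] - tau[n])
    phi_e.append(phi_e[-1] + dphi)
print(phi_e)
```

Only phase differences are recoverable, so the sequence is anchored at an arbitrary φ_e(0) = 0; a constant offset is harmless for focusing.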
In conventional synchronization links, the transmitter must transmit an additional synchronization signal to the receiver. Moreover, the synchronization signal must be sufficiently decoupled from the normal radar signal so that the two can be effectively separated in the subsequent synchronization compensation algorithm. In contrast, the proposed approach requires neither an additional synchronization signal from the transmitter to the receiver nor a duplex hardware system. Thus, compared to conventional synchronization links, the proposed synchronization approach has lower hardware complexity.
Moreover, the synchronization signal in conventional synchronization methods may be corrupted by the normal radar signal or may interfere with it. The transponder-based approach, in contrast, is not impacted by the normal radar signal, provided the designed transponder modulation frequency ω_m is larger than the maximum Doppler shift of the normal returns, which is determined by the relative velocity v_a between the radar platform and the observed target. The transponder signal can then be easily filtered out at the receiver through (15)-(16).

Simulation Parameters
In all the simulations, we assume the transmitted radar signal is an LFM signal with the following parameters: carrier frequency f_c = 1.2 GHz, bandwidth B = 50 MHz, pulse duration T_p = 5 μs and PRF 1500 Hz.
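A sketch of generating the baseband LFM pulse with the stated parameters (the complex sampling rate of 2B is an assumption, not specified in the text):

```python
import math
import cmath

# Generate the simulated LFM pulse with the stated parameters, with the
# carrier handled at baseband (i.e., shifted to zero frequency).
B, Tp = 50e6, 5e-6       # bandwidth 50 MHz, pulse duration 5 us
kr = B / Tp              # chirp rate, Hz/s (= 1e13, as quoted in the text)
fs = 2 * B               # complex sampling rate (assumed oversampling)
N = round(fs * Tp)       # number of samples in one pulse

pulse = [cmath.exp(1j * math.pi * kr * (n / fs - Tp / 2) ** 2) for n in range(N)]
print(N, kr)
```

Note the resulting chirp rate of 1×10^13 Hz/s matches the order of magnitude used in the synchronization error analysis above.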

Transponder Modulation Results
The corresponding transmitted signal spectrum is shown in Fig. 2. Note that the carrier frequency is shifted to zero frequency. We further assume there are three point targets at slant ranges of −700 m, 0 m and 700 m, respectively. Figs. 3 and 4 compare the normalized spectra of the signal received and retransmitted by the transponder. Note that f_m = 500 Hz is assumed in the simulation. It can be noticed that, after modulation by the transponder, there is only a small bandwidth extension compared to the unmodulated radar signal. Therefore, it is not necessary to make any hardware modification to the radar receiver. Next, after matched filtering and range pulse compression, as in general radar signal processing, the transponder signal can be easily extracted thanks to the amplitude modulation, which results in a small ''Doppler'' shift. The phase synchronization errors can then be estimated by comparing the frequency distance between the original azimuth signal and the additional frequency-shifted signals.

Calibration Errors Estimation Results
We suppose the transponder signal is y_m(t) = 5e^{j(ω_m t + θ_m)} + 2.5n(t) (see (18)), where n(t) is Gaussian noise with zero mean and unit variance. Note that the normalized ω_m is used in the simulation. Using the analytical oscillator frequency instability model we developed in [43], we simulated the possible frequency synchronization errors. Fig. 5 shows the estimation performance of the maximum likelihood estimator. Note that the frequency synchronization errors shown in Fig. 5 have been normalized and θ_m = 0 is assumed in this case. It can be noticed that the estimator achieves satisfactory performance.
Although a wideband radar signal is used in the simulation example, the signal we use to estimate the carrier synchronization errors is narrow-band, because the transponder modulation signal is monochromatic. Fig. 6 shows the Cramér-Rao lower bound (CRLB) and the root-mean-square error (RMSE) of the frequency estimate versus the signal-to-noise ratio (SNR). Note that the estimation RMSE is computed from 500 independent runs. It can be seen that the estimator gives satisfactory estimation performance.
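For reference, the CRLB for the frequency of a single complex tone in white Gaussian noise takes the standard Rife-Boorstyn form; the sketch below evaluates its square root (the RMSE bound) at a few SNRs (the sample count and interval are illustrative, not the paper's Fig. 6 settings):

```python
import math

# Standard CRLB for the frequency of one complex tone in white noise:
#   var(w_hat) >= 12 * sigma^2 / (a^2 * N * (N^2 - 1) * Ts^2)   [rad/s]^2
def crlb_freq(snr_db, N, Ts):
    snr = 10 ** (snr_db / 10.0)   # a^2 / sigma^2
    return 12.0 / (snr * N * (N * N - 1) * Ts * Ts)

N, Ts = 200, 1e-3   # illustrative sample count and sampling interval
for snr_db in (0, 10, 20):
    print(snr_db, math.sqrt(crlb_freq(snr_db, N, Ts)))  # RMSE bound, rad/s
```

The bound falls by a factor of sqrt(10) per 10 dB of SNR and by roughly N^{3/2} with the number of samples, which is why even a short monochromatic transponder signal yields accurate frequency estimates.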

Synchronization Compensation Results
To evaluate the motion compensation method, we performed statistical simulation investigations using actual GPS data obtained from the IGS website (http://igscb.jpl.nasa.gov/). Fig. 7 compares the actual motion errors with our estimated motion errors. From these positive results we conclude that the motion errors can be compensated by jointly exploiting the six transponder signals.
Finally, to evaluate the performance of the transponder-aided phase synchronization method, we consider a bistatic synthetic aperture radar system with the phase synchronization errors shown in Fig. 8. Using the proposed phase synchronization method, Fig. 9 shows the residual phase synchronization errors. It can be noticed that the residual phase synchronization errors fall within −0.2 to −0.05 degrees, which has a negligible impact on most distributed radar systems. This implies that the phase synchronization errors can be effectively compensated by the transponder-aided estimation and compensation method.

Discussion
High-precision radiometric calibration and motion and synchronization compensation must be ensured for distributed radar, which operates with separate transmitters and receivers. This article proposes a transponder-based joint radiometric calibration and synchronization compensation scheme for distributed radar imaging. The method requires no hardware modifications to either the normal radar transmitter or the receiver (the PRF is also unchanged), and it does not change the range ambiguity characteristics of the normal radar. All the proposed methods are verified by simulation results. Although high synchronization and motion compensation accuracy can be obtained if a transponder is present in each acquired image, this is not necessary: the transponder can be used to synchronize and calibrate the radar system intermittently, at some time interval, with the remaining synchronization errors compensated by autofocus processing algorithms. Note also that only one transponder is required for bistatic radar frequency synchronization compensation. However, since six variables are required to locate the bistatic radar transmitter and receiver, six transponders are needed to form six independent equations, so that their positions can be determined and, equivalently, the motion errors can be compensated.