1. Background
Optical interferometry is an important technique in precision metrology, capable of discerning target displacements with nanometer or sub-nanometer resolution [1]. Thanks to its high accuracy and non-contact nature, optical interferometry has found widespread use in application domains such as sensing, manufacturing, medicine, and the automotive industry [2]. However, the signal retrieval process of optical interferometry is prone to system deviations such as spurious reflections, beam/polarization crosstalk, flaws in optical/electrical components [3], and double reflection from the vibrating target [4]. These factors can introduce periodic nonlinearities in the output signals. Strong nonlinearities can generate spurious sidebands, thereby complicating signal analysis and leading to data misinterpretation.
To mitigate these nonlinearity issues, various correction strategies have been proposed, typically falling into two categories. The first category focuses on hardware compensation techniques, e.g., using additional compensation beams [5], introducing a rotated polarizer [6], spatially separating or misaligning beams to avoid frequency/polarization mixing [7,8], adding a tunable attenuator for laser beam vector adjustment [9], and introducing phase-encoding electronics to compensate for nonlinear errors [10]. The second category employs software-based compensation methods. One prominent numerical method is the Heydemann correction [11], which retrieves the parameters of the ellipse formed by the distorted in-phase-and-quadrature (I/Q) signal and corrects the data based on the shape of the ellipse. This method can be used for both homodyne [12] and heterodyne [13] systems. Other methods include a frequency-domain approach that separates the first- and second-harmonic nonlinearities when the target moves at a constant velocity [14], a combination of recursive weighted least-squares methods and Kalman filters [15], and a digital lock-in phase demodulator with an iterative algorithm [13]. These methods generally work well. However, most of them require access to the raw measurement data, i.e., the aforementioned I/Q data. In many instances, users only have access to the demodulated data, meaning that some raw-data information has been lost. As a result, the aforementioned error correction algorithms become ineffective.
In this paper, we propose and discuss a novel numerical method to compensate for the aforementioned nonlinearities. We use laser Doppler vibrometry (LDV) [16], a typical form of optical interferometry, as the example for examining the nonlinearity compensation methods. The discussed system can represent discrete-optics-based, fiber-based, or photonic integrated circuit (PIC)-based interferometry.
The paper is organized as follows. Section 2 outlines the working principle of the interferometer system and the associated nonlinearity problems. Section 3 introduces our proposed correction methods. The detailed properties of the new method, along with a comparison to conventional methods, particularly the Heydemann method, are elaborated in Section 4. The same section also discusses the effectiveness of these methods on data collected with our developed LDV system. The final section concludes the paper.
2. Periodic Nonlinearity in Optical Interferometry
Generally, nonlinearities are discussed separately for heterodyne and homodyne interferometry systems due to their distinct structures and demodulation methods.
Figure 1 illustrates typical schematic diagrams for both heterodyne and homodyne vibrometry systems and highlights the major sources of nonlinearities in each case.
Figure 1a depicts the structure of a typical heterodyne LDV based on optical fibers. The system comprises a laser diode (LD) source, a Mach–Zehnder interferometer (MZI), a photodiode (PD), and a pre-decoder. The laser source emits an optical beam with a wavelength λ. An acousto-optic modulator (AOM) is used as the splitter of the incoming beam. The 0th-order diffraction of the AOM (the sensing beam, with phasor b) is coupled into a fiber. After passing through a circulator (CIRC), the measurement light is coupled to free space and focused on the vibrating target by a lens. The reflected light is collected by the same lens and re-coupled into the fiber system. Due to the Doppler effect, the reflected light carries a phase shift θ(t) that is proportional to the instantaneous displacement of the target d(t), with θ(t) = 4π d(t)/λ [17]. The 1st-order diffraction of the AOM (the reference beam, with phasor a) is coupled into the same fiber system, where it combines with the reflected light and is detected by the PD. Thanks to the AOM, a constant optical frequency shift f_c is created in the 1st-order diffraction; the AOM therefore acts as an optical frequency shifter (OFS) in this system. A polarization controller (PC) is used in the reference arm to ensure that the polarizations of the reference and measurement signals are well aligned. The frequency shift is converted to a carrier frequency in the photocurrent of the mixed signal. Thanks to this non-zero carrier frequency, the impact of low-frequency noise, e.g., 1/f noise, can be effectively reduced.
Given the complex amplitude reflection coefficient of the measurement light, the photocurrent can be expressed as [1]
i(t) = I_dc + I_ac cos(2π f_c t + θ(t) + φ0),
where R is the responsivity of the PD (which, together with the reflection coefficient and the beam amplitudes, sets I_dc and I_ac), φ0 is the constant phase difference between the two arms due to their path-length difference, I_dc is the dc part of the photocurrent signal, and I_ac is the amplitude of the ac part. Many methods can be used to retrieve θ(t); here, a commonly used method is explained [18]. In the pre-decoder, the photocurrent signal is multiplied with a pair of quadrature signals, cos(2π f_c t) and −sin(2π f_c t), generating two signals
v_I(t) = i(t) cos(2π f_c t),  v_Q(t) = −i(t) sin(2π f_c t).
Two low-pass filters with a cut-off frequency below f_c are used to remove the high-frequency components in v_I(t) and v_Q(t), leading to two signals:
I(t) = (I_ac/2) cos(θ(t) + φ0),  Q(t) = (I_ac/2) sin(θ(t) + φ0).
Then an arc-tangent function can be used to derive θ(t) + φ0, and the displacement of the target can be derived as d(t) = θ(t) λ/(4π).
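For concreteness, the demodulation chain described above can be sketched numerically. The following Python listing is an illustrative sketch rather than the implementation used in this work: it simulates a photocurrent with an assumed carrier f_c, mixes it with a quadrature pair, low-pass filters, and applies arc-tangent demodulation; all numerical values and variable names are chosen for illustration only.

import numpy as np
from scipy.signal import butter, filtfilt

fs, f_c, lam = 1.0e6, 40.0e3, 1550e-9      # sampling rate, carrier, wavelength (assumed values)
t = np.arange(0, 50e-3, 1/fs)
d = 2e-6*np.sin(2*np.pi*24*t)              # target displacement: 2 um at 24 Hz
theta = 4*np.pi*d/lam                      # Doppler phase, theta(t) = 4*pi*d(t)/lambda
phi0 = 0.3                                 # constant phase difference between the arms
i_pd = 1.0 + 0.8*np.cos(2*np.pi*f_c*t + theta + phi0)   # I_dc + I_ac*cos(...)

# pre-decoder: mix with a quadrature pair, then low-pass well below the carrier
b, a = butter(5, 0.2*f_c/(fs/2))
I = filtfilt(b, a,  i_pd*np.cos(2*np.pi*f_c*t))
Q = filtfilt(b, a, -i_pd*np.sin(2*np.pi*f_c*t))

theta_m = np.unwrap(np.arctan2(Q, I))      # arc-tangent demodulation
theta_m -= theta_m[0]                      # remove the constant offset phi0 (d(0) = 0 here)
d_m = theta_m*lam/(4*np.pi)                # demodulated displacement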
Several issues may induce nonlinearities in such a heterodyne interferometer. The first is the unwanted harmonics generated by the OFS. The purpose of the OFS is to create a constant frequency shift, and the AOM is the most typical OFS; however, it generates several harmonics at the same time. The unwanted harmonics are usually suppressed by sending only the requested diffraction order to the output port of the AOM, but a small amount of the other harmonics may still find their way to the output port through scattering or reflection. One can also generate a frequency shift with other phase modulators (e.g., a PN-junction-based phase modulator in a silicon photonic integrated circuit [19]) using single-sideband modulation techniques, but these methods usually suffer more from other harmonic orders. Most of these residual high-order harmonics can be removed by applying extra filters. However, some harmonics, e.g., the −1st and +2nd orders, cannot be completely removed by the filters and will distort the output signals. Cross-talk between the two arms may also introduce residual harmonics into the measurement arm. Additionally, spurious reflections in the measurement arm, usually caused by reflections at optical interfaces, also contribute to these imperfections; their combined effect can be described by a complex amplitude reflectivity. Accounting for these factors and omitting small terms, the I/Q signals consist of the ideal quadrature components plus additional terms introduced by the residual harmonics and cross-talk. Since only weak distortions are considered, these additional terms can be assumed to be small. The resulting Lissajous curve turns out to be an ellipse, whose center offset, long and short axes, and rotation angle are determined by these distortion terms.
Figure 1b illustrates a typical homodyne system. Unlike heterodyne systems, homodyne systems do not employ an OFS. Instead, a special optical mixer is often used to obtain the quadrature signals. A typical mixer is a 90° optical hybrid [20], which has two optical inputs and two electrical outputs. Some 90° hybrids provide four electrical outputs, but these can be combined and reduced to two signals. Ideally, these two outputs form a quadrature pair, proportional to the cosine and sine of the interference phase θ(t) + φ0. Since there is no OFS in a homodyne system, high-order harmonics do not exist and therefore cannot be a source of nonlinearities. However, other impairments are present in the homodyne system, such as an imperfect phase relation in the 90° hybrid and unequal responsivities of the PDs. Taking these impairments and the spurious reflections into account (see Figure 1b), the I/Q data again trace out a distorted Lissajous curve. The responsivities of the two PDs at the two output ports of the hybrid and the constant phase error of the 90° hybrid determine the center, the rotation angle, and the semi-major and semi-minor axes of the resulting ellipse. When the two responsivities are equal, the rotation angle becomes ±π/4.
These results indicate that the effects of nonlinearities bear close resemblance in both homodyne and heterodyne systems: they both distort the Lissajous curves into ellipses, whereas ideally these curves should be circular. This is the reason why Heydemann’s correction method can be used to correct the nonlinearities in both homodyne and heterodyne systems.
To understand our proposed methods, it is important to understand how the nonlinearities distort the signals. The distortions can be divided into two categories: 1st-order and 2nd-order periodic nonlinearities [3]. The 1st-order nonlinearity arises when the center of the I/Q circle is not positioned at the system’s origin, while the 2nd-order nonlinearity occurs when the I/Q circle becomes an ellipse rather than a circle. For example, in a fiber-based homodyne system (see Figure 1b), the 1st-order nonlinearity usually occurs when the PD responsivities are not equal, while the 2nd-order nonlinearity occurs when the output phase relation of the 90° hybrid is not perfect. A schematic representation of the I/Q Lissajous curves for these two types of errors is given in Figure 2.
The deviated phase of the 1st-order nonlinearity can be expressed as
θ_d = arctan((sin θ + q)/(cos θ + p)),  (12)
where θ is the actual phase to be measured, and (p, q) denote the shift of the I/Q circle’s origin relative to the coordinate system’s origin (normalized to the circle’s radius). The phase of the 2nd-order nonlinearity can be described as
θ_d = α + arctan(tan(θ − α)/r),  (13)
where α is the rotation of the I/Q ellipse and r is the ratio of the semi-major axis (a) to the semi-minor axis (b) (see Figure 2).
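As a quadrant-safe numerical illustration of these two distortion models (the arctan forms above written with atan2), the following Python sketch generates the deviated phases; the helper names, the wrapping step, and the example parameter values are illustrative assumptions rather than part of the original work.

import numpy as np

def wrap(x):
    """Wrap a phase difference into (-pi, pi]."""
    return (x + np.pi) % (2*np.pi) - np.pi

def distort_1st(theta, p, q):
    """1st-order nonlinearity: I/Q circle centre shifted to (p, q), in units of the circle radius."""
    theta_d = np.arctan2(np.sin(theta) + q, np.cos(theta) + p)
    return theta + wrap(theta_d - theta)       # keep the result unwrapped

def distort_2nd(theta, r, alpha):
    """2nd-order nonlinearity: I/Q circle becomes an ellipse (axis ratio r = a/b, rotation alpha)."""
    theta_d = alpha + np.arctan2(np.sin(theta - alpha)/r, np.cos(theta - alpha))
    return theta + wrap(theta_d - theta)

lam = 1550e-9
t = np.linspace(0, 1/24, 4096)
theta = 4*np.pi*1e-6*np.sin(2*np.pi*24*t)/lam  # 1 um, 24 Hz vibration
err1 = distort_1st(theta, 0.2, 0.1) - theta    # periodic with lambda/2 in displacement
err2 = distort_2nd(theta, 1.5, 0.3) - theta    # periodic with lambda/4 in displacement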
These nonlinearities introduce an additive periodic distortion in the demodulated signal, with the periodicity defined in the demodulated phase (or, equivalently, in the displacement). The 1st-order errors have a period of λ/2 in displacement, while the 2nd-order errors have a period of λ/4. Some examples of simulated distorted signals are shown in Figure 3. These plots represent extreme cases so that the periodic nonlinear errors in the demodulated phases can be easily observed. For the 1st-order deviation (Figure 3a), the deviation is caused by a shift of the origin of the I/Q circle. Here, the distance between the origins of the I/Q circle and the coordinate system is 75% of the circle’s radius. The distortions show a period of λ/2 in the displacement domain. Because of this, the distortion of a signal with a larger amplitude has a higher frequency in the time domain, as can be seen in Figure 3a. For the 2nd-order nonlinear error (Figure 3b), the deviation is caused solely by the elliptical shape of the I/Q curve. In this special case, the eccentricity of the ellipse (e) is 0.9657 and the ellipse is also rotated. The deviations have a displacement periodicity of λ/4, and their shapes differ from those of the 1st-order deviations. The vibrations in these plots have the same frequency (24 Hz) but different amplitudes (1 μm and 2 μm).
These additive deviations lead to spurious signal content with a large bandwidth, which is difficult to remove with a simple filter. The frequency spectra of two sinusoidal signals with different amplitudes and the same 2nd-order nonlinear deviation are shown in Figure 4a. Both signals exhibit a large number of harmonics ranging from 24 Hz up to the kilohertz range, so useful signal content in this frequency range can be affected by the nonlinearities. Most harmonics in the spectrum of the larger-amplitude signal are stronger than those of the smaller one, but this is not the case for all harmonics. The frequency spectrum of the neck-skin displacement induced by carotid pulses, measured by an on-chip homodyne LDV [21], is plotted in Figure 4b (red spectrum). The measurement was made on the neck of one of the authors, and the measured signal is the displacement of the neck skin over a duration of 1 s. A moving-average filter is first applied to the measured signal to remove the existing nonlinear errors and speckle noise. Note that the moving-average filter removes not only the nonlinearity but also useful higher-frequency content. After this procedure, a 2nd-order phase error is purposely added to the pulse signal numerically. The spectrum of the signal with the added nonlinearity (blue curve in Figure 4b) is considerably elevated in the higher-frequency part. This indicates that the nonlinearity distortion in the spectrum may cause significant errors in signal analysis.
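The broadband nature of these deviations can be reproduced with a few lines of Python. The sketch below reuses distort_2nd from the previous listing; the amplitudes, frequency, and ellipse parameters are arbitrary illustrative choices.

import numpy as np

fs, lam = 100e3, 1550e-9
t = np.arange(0, 1.0, 1/fs)
for amp in (1e-6, 2e-6):                                # two amplitudes, both at 24 Hz
    theta = 4*np.pi*amp*np.sin(2*np.pi*24*t)/lam
    d_dist = distort_2nd(theta, r=2.0, alpha=0.3)*lam/(4*np.pi)
    spec = np.abs(np.fft.rfft(d_dist))/len(d_dist)
    freq = np.fft.rfftfreq(len(d_dist), 1/fs)
    # harmonics of 24 Hz extend into the kHz range; their strength grows with the distortion
    print(amp, freq[(spec > 1e-10) & (freq > 0)].max())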
3. Proposed Methods
In this section, we describe the proposed methods that can be used to compensate for the nonlinear effects directly on the demodulated data. We introduce two methods, but the main discussion will focus on the second one.
3.1. Inverse Function Method
Based on the periodic features of the demodulated data, it is possible to retrieve the strength of the nonlinearity for certain datasets. This approach requires the data to be strictly monotonically increasing or decreasing throughout the entire section under analysis. Once this criterion is met, the data can undergo an inverse mapping from the displacement d(t) to the time t(d). Applying a fast Fourier transform (FFT) to t(d) allows us to obtain the strength of the periodic nonlinearities. Since the deviation is not sinusoidal in the inverse function, it will produce a number of sidebands in the FFT spectrum.
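The core of the inverse-function method can be sketched as follows in Python, using distort_1st and wrap from the earlier listing; the signal parameters and the detrending step are illustrative assumptions, not taken from the original implementation.

import numpy as np

lam, fs = 1550e-9, 200e3
t = np.arange(0, 1/(4*24), 1/fs)                       # one monotonically rising quarter period
d_true = 10e-6*np.sin(2*np.pi*24*t)                    # 10 um amplitude at 24 Hz
theta_meas = distort_1st(4*np.pi*d_true/lam, 0.3, 0.0) # distorted phase (see earlier sketch)
d_meas = theta_meas*lam/(4*np.pi)

# inverse mapping: time as a function of the (monotonic) measured displacement
d_grid = np.linspace(d_meas[0], d_meas[-1], len(d_meas))
t_of_d = np.interp(d_grid, d_meas, t)                  # requires strictly increasing d_meas

# FFT of the detrended inverse function: peaks at multiples of 2/lambda for a 1st-order error
t_of_d -= np.polyval(np.polyfit(d_grid, t_of_d, 1), d_grid)
spec = np.abs(np.fft.rfft(t_of_d))
nu = np.fft.rfftfreq(len(d_grid), d_grid[1] - d_grid[0])   # spatial frequency (cycles per metre)
peak_1st = spec[np.argmin(np.abs(nu - 2/lam))]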
Signals with several different 1st-order deviations are shown in Figure 5a. Here, we employed a signal with an amplitude of 10 μm, which ensures enough periods in the rising slope (we assume an optical wavelength of 1550 nm). Given the monotonic nature of these signals, we can extract their inverse functions (see Figure 5b). The spectra of these inverse functions, shown in Figure 5c, reveal periodic features at multiples of 2/λ. With a larger deviation, the spectra exhibit stronger and more numerous fringes. Similarly, signals with different 2nd-order deviations (eccentricities e = 0.5, 0.7, and 0.95), their inverse functions, and the spectra of their inverse functions are shown in Figure 6a, Figure 6b, and Figure 6c, respectively. The periodic features are located at multiples of 4/λ. Simulation shows that a higher eccentricity corresponds to a stronger peak at 4/λ.
Since the peak at 2/λ is caused primarily by the 1st-order error, it can be used as an indicator of the 1st-order error’s strength. To realize the compensation of this error, one can scan the two parameters (p, q) applied to the measured phase θ_d based on the following phase compensation algorithm:
θ_c = arctan((sin θ_d − q)/(cos θ_d − p)).
As an example, the strengths of the 2/λ peak for a signal with a known 1st-order distortion under the compensation scan are shown in Figure 7. The lowest 2/λ peak occurs exactly at the predetermined distortion parameters. However, the scan demands substantial computational resources. To improve the speed, a more efficient algorithm for identifying the global minimum would be beneficial.
Similarly, the 2nd-order error can be compensated by conducting the following scan:
θ_c = α + arctan(r·tan(θ_d − α)),
where r and α are the two scan parameters. For the 2nd-order compensation, the figure of merit (FOM) should be the fringe strength at 4/λ. Since the 1st-order errors can also generate a harmonic at 4/λ, it is better to compensate for the 1st-order error before the 2nd-order error when both nonlinear errors exist.
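A brute-force version of this scan can be sketched as follows in Python; compensate_1st is a plain inversion of the compensation relation above, theta_meas and wrap come from the previous listings, and the grid limits are illustrative assumptions rather than the authors' implementation.

import numpy as np

def compensate_1st(theta_d, p, q):
    """Remove a trial centre shift (p, q) from the demodulated phase."""
    theta_c = np.arctan2(np.sin(theta_d) - q, np.cos(theta_d) - p)
    return theta_d + wrap(theta_c - theta_d)           # wrap() from the earlier sketch

def fom_1st(theta, lam):
    """Strength of the 2/lambda peak in the spectrum of the inverse function t(d)."""
    d = theta*lam/(4*np.pi)
    d_grid = np.linspace(d[0], d[-1], len(d))
    t_of_d = np.interp(d_grid, d, np.arange(len(d), dtype=float))
    t_of_d -= np.polyval(np.polyfit(d_grid, t_of_d, 1), d_grid)
    nu = np.fft.rfftfreq(len(d_grid), d_grid[1] - d_grid[0])
    return np.abs(np.fft.rfft(t_of_d))[np.argmin(np.abs(nu - 2/lam))]

# scan the two parameters; the minimum of the FOM marks the actual distortion parameters
lam = 1550e-9
grid = np.linspace(-0.5, 0.5, 41)
fom = [[fom_1st(compensate_1st(theta_meas, p, q), lam) for q in grid] for p in grid]
ip, iq = np.unravel_index(np.argmin(fom), (len(grid), len(grid)))
p_hat, q_hat = grid[ip], grid[iq]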
However, this method has several strict prerequisites. (1) The signal should be strictly monotonic and span a displacement of more than several wavelengths; for a general signal, an additional, potentially complex, procedure is needed to find a section that meets these criteria. (2) Strong noise further disrupts the signal’s monotonicity, so the noise level should be kept low. (3) Scanning the parameters is a relatively time-intensive operation and, as discussed, a faster search algorithm has yet to be developed. In practical applications, the inverse-function method is therefore of limited use.
3.2. Piece-Wise-Fitting Method
To handle more general data, we propose a second method with the following steps (a consolidated code sketch of the whole procedure is given after the list):
(1) Segmentation: Segment the entire displacement signal into intervals of λ/2. There are two types of data segments. Segments that start at 0 and end at λ/2, or start at λ/2 and end at 0, are called “complete sections”; segments that do not reach either 0 or λ/2 are called “incomplete sections” and are discarded in the following steps. In this step, we usually convert the displacement signal into the corresponding phase change θ(t). After segmentation, the phase of each section can be expressed as θ_k(t) = θ(t) − 2πk, where k is the index of the section.
(2) Baseline fitting: For each section, the target’s movement is assumed to be rapid enough that it can be approximated by either a linear function of time, θ_b,k(t) = c1·t + c0, or a quadratic function, θ_b,k(t) = c2·t² + c1·t + c0. The baseline function for each complete section can be determined using numerical fitting routines (e.g., polyfit in Matlab). Note that only strictly monotonic baseline functions are considered in the subsequent steps.
(3) Deviation averaging: Calculate the deviation of the phase in each section as
Δθ_k(t) = θ_k(t) − θ_b,k(t).
This separates the measured signal into a baseline part and a deviation part. Thanks to the simple shape of the baseline, the nonlinear distortions can only be present in the deviation part. Given the strict monotonicity of θ_b,k(t), its inverse function t = θ_b,k⁻¹(θ_b) exists. Therefore, the deviation can be written as a function of the baseline phase:
Δθ_k(θ_b) = θ_k(θ_b,k⁻¹(θ_b)) − θ_b.
If the baseline θ_b,k were the undistorted signal and Δθ_k the nonlinear distortion, the function Δθ_k(θ_b) would follow directly from Equation (12) for the 1st-order nonlinearity and Equation (13) for the 2nd-order nonlinearity, as the difference between the deviated and the actual phase. However, this is not the case most of the time; only in some special cases, e.g., when the target is moving at a constant speed, does it hold exactly. Nevertheless, if we average over a large number of sections, the averaged relation approaches the theoretical relations of the nonlinearities. To facilitate this averaging, we interpolate Δθ_k(θ_b) onto Δθ_k(θ_i), where θ_i is an array of predefined phases ranging evenly from 0 to 2π. After interpolation, the samples from all complete sections share the same independent variable θ_i, and the averaging is realized by calculating the mean over all N complete sections,
Δθ̄(θ_i) = (1/N) Σ_k Δθ_k(θ_i),
which yields the averaged deviation used in the next step.
(4) Deviation fitting: Based on the assumption that the averaged deviation converges to the theoretical nonlinearity error, the key deviation parameters can be retrieved by fitting the averaged deviation to the theoretical relations. For the 1st-order nonlinearity, the averaged deviation is fitted to
Δθ̄(θ_b) = arctan((sin θ_b + q)/(cos θ_b + p)) − θ_b,  (18)
and the best values of p and q are obtained as the estimates p̂ and q̂, respectively. For the 2nd-order nonlinearity, the averaged deviation is fitted to
Δθ̄(θ_b) = α + arctan(tan(θ_b − α)/r) − θ_b,
and the best values of r and α are obtained as r̂ and α̂, respectively. Several methods can be used for the deviation fitting; this paper demonstrates the method using Matlab’s built-in fit function.
(5) Compensation: Compensate the distorted signal with the retrieved parameters (p̂ and q̂, or r̂ and α̂). For the 1st-order nonlinearity, the compensated phase is calculated as
θ_c = arctan((sin θ_d − q̂)/(cos θ_d − p̂)).
For the 2nd-order nonlinearity, the compensated phase is
θ_c = α̂ + arctan(r̂·tan(θ_d − α̂)).
In this step, we can calculate a figure of merit defined as the root-mean-square (RMS) deviation of the compensated phase θ_c from the actual signal θ. In this paper, the RMS values are expressed in units of the half-wavelength (λ/2), allowing these results to be extended to sensors with other sensing wavelengths.
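The five steps above can be condensed into the following self-contained Python sketch. It follows the procedure described in this section, but the function names, the numerical tolerances (e.g., the completeness threshold), the use of SciPy's curve_fit instead of Matlab's fit function, and the example parameter values are illustrative choices rather than the original implementation.

import numpy as np
from scipy.optimize import curve_fit

lam = 1550e-9                                             # sensing wavelength (assumed 1550 nm)

def wrap(x):
    """Wrap a phase difference into (-pi, pi]."""
    return (x + np.pi) % (2*np.pi) - np.pi

def dev_1st(theta_b, p, q):
    """Theoretical 1st-order deviation (Equation (18)): shifted I/Q circle centre (p, q)."""
    return wrap(np.arctan2(np.sin(theta_b) + q, np.cos(theta_b) + p) - theta_b)

def dev_2nd(theta_b, r, alpha):
    """Theoretical 2nd-order deviation: elliptical I/Q curve (axis ratio r, rotation alpha)."""
    return wrap(alpha + np.arctan2(np.sin(theta_b - alpha)/r, np.cos(theta_b - alpha)) - theta_b)

def piecewise_fit(theta, order=1, baseline_deg=1, n_grid=256):
    """Steps (1)-(4): segment into 2*pi intervals, fit baselines, average and fit the deviation."""
    theta_grid = np.linspace(0.0, 2*np.pi, n_grid)        # predefined phases theta_i
    bins = np.floor(theta/(2*np.pi))
    edges = np.r_[0, np.where(np.diff(bins) != 0)[0] + 1, len(theta)]
    devs = []
    for a, b in zip(edges[:-1], edges[1:]):
        sec = theta[a:b]
        if np.ptp(sec) < 0.95*2*np.pi:                    # (1) discard incomplete sections
            continue
        t_loc = np.arange(len(sec), dtype=float)
        base = np.polyval(np.polyfit(t_loc, sec, baseline_deg), t_loc)   # (2) baseline fit
        dbase = np.diff(base)
        if not (np.all(dbase > 0) or np.all(dbase < 0)):  # keep strictly monotonic baselines only
            continue
        dev = sec - base                                  # (3) deviation vs. baseline phase
        x = base - 2*np.pi*bins[a]                        # fold the baseline into [0, 2*pi]
        if dbase[0] < 0:                                  # make the abscissa increasing
            x, dev = x[::-1], dev[::-1]
        devs.append(np.interp(theta_grid, x, dev))
    dev_avg = np.mean(devs, axis=0)                       # average over all complete sections
    model, p0 = (dev_1st, (0.0, 0.0)) if order == 1 else (dev_2nd, (1.2, 0.1))
    popt, _ = curve_fit(model, theta_grid, dev_avg, p0=p0)   # (4) deviation fitting
    return popt

def compensate_1st(theta_d, p, q):
    """Step (5), 1st order: remove the fitted centre shift."""
    return theta_d + wrap(np.arctan2(np.sin(theta_d) - q, np.cos(theta_d) - p) - theta_d)

def compensate_2nd(theta_d, r, alpha):
    """Step (5), 2nd order: undo the fitted elliptical distortion."""
    return theta_d + wrap(alpha + np.arctan2(r*np.sin(theta_d - alpha),
                                             np.cos(theta_d - alpha)) - theta_d)

# example: distort a 6 um, 24 Hz sine with a 1st-order error, then estimate and remove it
fs = 200e3
t = np.arange(0, 0.25, 1/fs)
theta_true = 4*np.pi*6e-6*np.sin(2*np.pi*24*t)/lam
theta_meas = theta_true + dev_1st(theta_true, 0.10, 0.15)           # synthetic distortion
p_hat, q_hat = piecewise_fit(theta_meas, order=1)
theta_corr = compensate_1st(theta_meas, p_hat, q_hat)
rms = np.sqrt(np.mean((theta_corr - theta_true)**2))*lam/(4*np.pi)  # residual RMS in metres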
Figure 8 illustrates an example of the steps involved in compensating for the 1st-order nonlinearity. In this example, a sinusoidal signal with an amplitude of 6 μm and a frequency of 24 Hz is used as the original signal. This signal is then distorted by shifting the center of the I/Q circle away from the origin, which produces a clear RMS deviation. Figure 8a and Figure 8b depict the decomposition process. The entire signal in Figure 8a is segmented into seven sections: the first six are complete, while the final one is incomplete and hence discarded. Figure 8c shows the linear baseline fitting of one section, and Figure 8d displays the obtained deviations for all complete sections. The deviations are averaged and fitted to Equation (18), as shown in Figure 8e. With the parameters of the fitted function (p̂ and q̂), the distorted signal can be corrected, as illustrated in Figure 8f. After this compensation, the RMS deviation is strongly reduced, and applying the same algorithm once more to the corrected signal reduces the RMS deviation further, to about one seventh of its original value.
This method can also compensate for the 2nd-order nonlinearity, with the retrieved parameters being r̂ and α̂. An example of this compensation is shown in Figure 9. The original signal is the same as that in the 1st-order compensation, and the deviation is due to an elliptical alteration of the I/Q curve, which again produces a clear RMS deviation. Fitting the averaged deviation yields r̂ and α̂, and the subsequent compensation reduces the RMS deviation by a factor of about 25. In this case, since the first compensation is already very effective, additional compensation steps do not further improve the results. Note that a quadratic fitting is used for the baseline in this case, instead of a linear fitting; a detailed discussion of the differences between linear and quadratic baseline fitting follows in the next section.
In cases where both types of deviations are present, they are not additive. Instead, the total deviation is obtained by applying the two distortion relations, Equations (12) and (13), one after the other, in either order. Because of this, compensation for a general signal can be achieved by first applying the 1st-order (2nd-order) nonlinearity compensation to the distorted data, followed by the 2nd-order (1st-order) compensation. Note that the two orderings may correspond to different deviation parameters, and the effectiveness of the compensation can differ between them, as discussed in the next section.
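Using the helpers from the sketch above, a sequential compensation for data containing both error types might look as follows; the chosen order and the parameter estimates are illustrative and, in practice, come from two successive runs of the piece-wise fit.

# theta_meas: demodulated phase assumed to contain both error types
r_hat, alpha_hat = piecewise_fit(theta_meas, order=2, baseline_deg=2)
theta_step1 = compensate_2nd(theta_meas, r_hat, alpha_hat)   # remove the 2nd-order error first
p_hat, q_hat = piecewise_fit(theta_step1, order=1)
theta_corr = compensate_1st(theta_step1, p_hat, q_hat)       # then remove the 1st-order error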
This piecewise fitting method has an advantage over the inverse-function method in that it does not require a large monotonic region of the signal. As such, it is applicable to a wider range of data. Due to the challenges in implementing the inverse-function compensation methods, the subsequent discussion will focus solely on the performance of the piecewise fitting method.
4. Discussion
We first compare compensations for signals with varying frequencies and amplitudes. The 1st-order deviations were introduced by adding a fixed shift of the I/Q circle’s center to signals with variable amplitudes and frequencies. The RMS deviations after the 1st-order compensation are illustrated in Figure 10, where linear baseline fitting is applied in Figure 10a and quadratic fitting is used in Figure 10b.
From Figure 10, we can see that the compensation works well only when the vibration amplitude (half of the peak-to-peak amplitude) is sufficiently large. We also identified spikes at vibration amplitudes equal to integer multiples of the half-wavelength. Apart from these spikes, linear fitting provides marginally better RMS deviation values than quadratic fitting. The spikes are connected to the vibration signal being purely sinusoidal: for a more random signal, such periodic features do not appear, as evidenced by a signal with two frequency components shown in Figure 11, for which the spikes of Figure 10a disappear. It is also found that linear baseline fitting is slightly better than quadratic baseline fitting, in the sense of lower RMS deviations, for higher vibration amplitudes (see Figure 11b).
For the 2nd-order nonlinearity, a similar study is conducted using sinusoidal signals with various amplitudes and frequencies; the results are shown in Figure 12. When the 2nd-order compensation is applied to a complex signal, in contrast to the 1st-order compensation, the quadratic baseline fitting is found to outperform the linear baseline fitting (see Figure 13).
These compensation methods behave differently for different nonlinearity strengths. For the 1st-order nonlinearity, a larger initial deviation corresponds to a higher RMS deviation after compensation, as demonstrated in Figure 14a. For the 2nd-order nonlinearity, the relation between the original distortion and the compensation result is shown in Figure 14b. In both cases, the vibration signal is the one described in Figure 11. The difference in the final compensation results for different original errors can be clearly seen in these figures.
For data containing both nonlinearities, compensation can be achieved by applying the two algorithms sequentially. The compensation results for the two possible orderings of the 1st- and 2nd-order compensation are shown in Figure 15a; the vibration signal is again the one described in Figure 11, and the compensated signals for both orderings are shown in this figure. The results validate our claim that the 2nd-order nonlinearity compensation should be applied before the 1st-order one. The effect of cascaded compensation is also studied and shown in Figure 15b. Two cycles of compensation with the same method can further improve the results; however, applying the compensation more than twice no longer provides an effective improvement.
In practice, the signal also contains noise. The impact of noise on the compensation results is shown in Figure 16. The vibration signal is the same as that described in Figure 11, and a two-cycle compensation is used in this case. The final compensation results depend on the strength of the noise: higher noise leads to worse compensation. When the noise is weak, the residual deviations for different 1st-order errors differ, but they converge to the same value as the noise increases. The 2nd-order nonlinearities behave differently: even after a two-cycle compensation, the deviations caused by different 2nd-order nonlinearities remain different when the noise level is high.
Finally, we examine typical measurement data retrieved with a silicon-photonics-based homodyne LDV [21] at a sampling rate of 5.5 Msps. The detected vibration is from a silicone block fixed on a stage. During the measurement, the optical table on which the silicone block is installed is tapped by the operator’s hand, and the block consequently shakes at its resonance frequency (about 800 Hz), which is detected by the LDV. Since the surface of the silicone is flat and provides good specular reflection, the sensing light is directly reflected by the silicone surface. The on-chip LDV also exhibits a typical nonlinearity, as can be seen in Figure 17a. We used both the Heydemann method and our proposed method to compensate for the errors in the data; the compensation results are also shown in Figure 17a. Both Heydemann’s method and ours remove the clear nonlinear signal components, with similar residual RMS values (0.043 λ/2 for Heydemann’s correction and 0.044 λ/2 for our method). In the spectra, the deviation introduced by the nonlinearity is mainly located around 30 kHz; with both compensation methods, these nonlinear errors are suppressed.