Review

Interferometric Synthetic Aperture Microscopy: Computed Imaging for Scanned Coherent Microscopy

The Beckman Institute for Advanced Science and Technology, and The Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Sensors 2008, 8(6), 3903-3931; https://doi.org/10.3390/s8063903
Submission received: 5 June 2008 / Revised: 9 June 2008 / Accepted: 9 June 2008 / Published: 11 June 2008
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR))

Abstract:
Three-dimensional image formation in microscopy is greatly enhanced by the use of computed imaging techniques. In particular, Interferometric Synthetic Aperture Microscopy (ISAM) allows the removal of out-of-focus blur in broadband, coherent microscopy. Earlier methods, such as optical coherence tomography (OCT), utilize interferometric ranging, but do not apply computed imaging methods and therefore must scan the focal depth to acquire extended volumetric images. ISAM removes the need to scan the focus by allowing volumetric image reconstruction from data collected at a single focal depth. ISAM signal processing techniques are similar to the Fourier migration methods of seismology and the Fourier reconstruction methods of Synthetic Aperture Radar (SAR). In this article ISAM is described and the close ties between ISAM and SAR are explored. ISAM and a simple strip-map SAR system are placed in a common mathematical framework and compared to OCT and radar respectively. This article is intended to serve as a review of ISAM, and will be especially useful to readers with a background in SAR.

1. Introduction

Traditional sensing modalities such as X-ray projection imaging [1], nuclear magnetic resonance (NMR) spectroscopy [2, 3], radar [4] and focused optical imaging [5] rely primarily on physical instrumentation to form an image. That is, the instrument is constructed so that the resulting relation between the object of interest and the collected data is sufficiently simple so as to allow data interpretation with little or no data processing. However, for more than 40 years the performance of microelectronic devices has improved exponentially, as famously quantified, in part, by Moore's law [6]. The resulting abundance of powerful computing resources has been a great boon to almost every area of science and technology, and has transformed sensing and imaging. When significant computational data processing is added to an imaging system, the effect of the physical sensing may be mathematically inverted, allowing the use of instruments with more complicated, multiplex object-to-data relations. The resulting sensing systems provide new imaging modalities, improved image quality and/or increased flexibility in instrument design. This coupling of sensing instrumentation and physically-based inference is often known as computed imaging.
The application of computed imaging techniques to the imaging and non-imaging sensor systems listed above has been revolutionary: X-ray projection imaging has evolved into computed tomography [7, 8]; the contrast mechanisms of NMR spectroscopy form the basis of magnetic resonance imaging [9]; radar has led to synthetic aperture radar (SAR) [10-12]; while the subject of this article, ISAM [13-20], is an example of computed imaging in focused optical systems. Computed imaging techniques also appear in nature. Perhaps the most ubiquitous example of what can arguably be described as computed imaging is the stereoptic human visual system, where a pair of two-dimensional images (one collected by each eye) is processed in the brain to give depth perception [21]. These examples of computed imaging are far from forming an exhaustive list; the field is large and growing. Other examples include array-based radio astronomy [22], diffusion tomography [23-25] and positron emission tomography [26]. New applications and contrast mechanisms are still being discovered, and the escalation of available computational power is allowing increasingly difficult inverse problems to be solved. For example, the recent explosion of activity in compressive sampling has already brought certain problems in analysis, inference and reconstruction, once thought to be intractable, into the realm of tractable problems [27, 28]. Instruments employing compressive sensing not only draw inferences from data using a physical model, they also exploit statistical redundancy in the description of the object to significantly decrease the amount of data required, e.g. [29].
This article is focused specifically on ISAM imaging technologies. In addition to the broad commonality ISAM has with other computed imaging techniques, it has strong physical and mathematical connections to a family of instruments including SAR, synthetic aperture sonar [3032], seismic migration imaging [33, 34] and certain modalities in ultrasound imaging [35, 36]. All of these systems apply computed imaging to multi-dimensional data collected using both spatial diversity and a time-of-flight measure from a spectrally-broad temporal signal. In this article ISAM and SAR are cast in the same mathematical framework, with similarities and differences between the two systems discussed throughout.
In the following section, OCT, the forerunner of ISAM, is described. In Sec. 3 a general framework for ISAM, OCT, SAR and radar is developed. The distinctions between the ISAM/SAR and OCT/radar models are discussed within this framework in Sec. 4. In Sec. 5 it is shown how the models used lead to a simple Fourier-domain resampling scheme to reconstruct the imaged object from the collected data. Simulated and experimental results are shown in Sec. 6, while alternative ISAM instrument geometries are briefly discussed in Sec. 7. Conclusions and references appear at the end of this article.

2. Optical Coherence Tomography

An obvious distinction between ISAM and SAR is the spectrum of the electromagnetic field used to probe the sample: ISAM operates in the near infrared (IR), while most SAR systems operate in the radio spectrum. Probing in the near-IR allows the formation of an image with resolution on the order of microns. Additionally, in many biological tissues the near-IR spectral band is primarily scattered rather than absorbed [37], allowing greater depth of penetration than at other wavelengths. Near-IR light backscattered from an object can be used to form a three-dimensional image using OCT [38-41]. Since the image is formed from the natural scattering properties of the object, OCT and related methods are non-invasive and non-perturbing, in contrast to methods such as histology (which requires destruction of the sample) or fluorescence microscopy (which requires staining of the object).
OCT combines interferometry, optical imaging, and ranging. Due to its sensitivity to wavelength-scale distance changes, interferometry has been an important tool in physics (e.g., Young's experiment [42] and the Michelson-Morley experiment [43]) and is now widely applied using many techniques [44]. OCT can be implemented in a Michelson interferometer arrangement as shown in Fig. 1. The focusing optics localize the illumination and collection operations around a transverse focal point. This focal point is scanned in two (transverse) dimensions across the sample. Interferometry with a broadband source is used to image the sample along the third, axial, dimension. The coherently backscattered light and the reference light only interfere for backscattering from a narrow axial (depth) region, of length Lc, determined by the statistical coherence length of the source (see [Ref. 45], Sec. 4.2.1). That is, Lc is inversely proportional to the source bandwidth. The interferometric signal is then obtained as a function of axial position by altering the length of the reference arm for each point in the transverse scan. In this manner optical coherence ranging is used to construct a three-dimensional image.
As described above, depth discrimination in OCT is achieved via coherence gating, while transverse resolution is achieved using focusing optics. Ideal focusing optics would produce a thin collimated beam in the sample, described as a pencil beam in Fig. 2. These ideal optics cannot be physically realized, as the propagation laws of electromagnetic radiation prohibit beams that are both perfectly collimated and localized. For focusing systems, the beam is often quantified using a scalar Gaussian beam model [46], within which the depth of focus b (i.e., the axial depth over which the beam is approximately collimated) is proportional to the square of the minimum width ω0 of the beam. As illustrated in Fig. 2, this relationship between ω0 and b implies that the resolution, which improves with decreasing ω0, and the depth of focus are competing constraints in OCT. When the coherence gate is set to image planes outside of the depth of focus, the transverse resolution suffers as the beam becomes wider.
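The quadratic scaling between waist size and depth of focus can be made concrete with a short numerical sketch of the scalar Gaussian beam model. The wavelength and waist values below are illustrative assumptions, not parameters from the text; the point is only that the depth of focus b grows with the square of the minimum beam width.

```python
import math

def gaussian_beam(w0, wavelength):
    """Return (depth_of_focus, width_at) for a scalar Gaussian beam.

    w0:         minimum (waist) radius of the beam
    wavelength: wavelength in the medium
    """
    z_r = math.pi * w0**2 / wavelength   # Rayleigh range
    b = 2.0 * z_r                        # depth of focus (confocal parameter), b ∝ w0²

    def width_at(z):
        # Beam radius a distance z from the waist; the beam widens away from focus.
        return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

    return b, width_at

# Doubling the waist radius quadruples the depth of focus (illustrative values:
# 800 nm light, 1 µm and 2 µm waists).
b_fine, _ = gaussian_beam(w0=1e-6, wavelength=0.8e-6)
b_coarse, _ = gaussian_beam(w0=2e-6, wavelength=0.8e-6)
```

This is exactly the trade-off described above: improving resolution (smaller w0) shrinks the region over which the coherence gate returns a well-focused image.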
ISAM uses computational imaging to overcome the trade-off between depth of focus and resolution. By accurately modeling the scattering processes and the data collection system, including the defocus ignored in OCT image formation, the scattering properties of the object can be quantitatively estimated from the collected data. As in SAR, diffraction-limited resolution is achieved throughout the final image. For both ISAM and SAR the key to this capability is the coherent collection of a complex data set.
Interferometric microscopes [47], such as OCT systems, give holographic data, i.e., the phase of the backscattered light can be recovered from the raw data. This is a substantial advantage over standard non-interferometric systems where the phase information is lost at detection. This holographic data collection is analogous to the coherent data collection used in SAR systems. Indeed, parallels between SAR and holographic microscopy were recognized and discussed in a series of papers [48-50]. In both ISAM and SAR, the collection of complex coherent data allows the numerical implementation of advantageous operations that would be prohibitively difficult to implement physically. In SAR the multiple along-track range profiles collected from a small aperture can be used to synthesize an aperture corresponding to the whole along-track path. In ISAM, multiple complex OCT range profiles can be computationally reconstructed so that all planes appear simultaneously in-focus, i.e., the blurred out-of-focus regions seen in OCT can be brought into focus numerically.

3. General Framework

In both SAR and ISAM an electromagnetic wave is used to probe the object, the detection apparatus is scanned in space, and a time-of-flight measurement is used to image an additional spatial dimension. Thus, in a fundamental sense, the connection between the data and the object is determined by the same physical laws in either case. This analogy can also be extended to other wave-based techniques such as ultrasound and seismic migration imaging. In this section a general model for radar, SAR, OCT and ISAM techniques is presented. While there are significant differences in system scale and operation, see Fig. 3, the analogy between SAR and ISAM is sufficiently strong to allow a common mathematical description.
As shown in Fig. 3, both SAR and ISAM systems involve a translation of the aperture. This aperture position will be described by a vector ρ, while the vector r describes the position in the imaged object. In the SAR case, a linear-track strip-map system is considered so that the detector is moved along points ρ = [x, 0, 0]T (superscript T indicates a transpose) and the object may be imaged at points in a plane r = [x, 0, z]T. In OCT and ISAM the data are collected as a function of two spatial variables in order to image a three-dimensional volume, so that the detector ranges over ρ = [x, y, 0]T and the object may be imaged for r = [x, y, z]T. Throughout this work a vector will be denoted by bold type, while the corresponding scalar magnitude is given in plain type, e.g., r is the magnitude of the vector r. The Fourier transform kernel is exp(iωt) for time domain signals and exp(−ik · r) for spatial domain signals, so that the complex plane wave exp [i(k0 ·rω0t)] is a delta function centered on (k0, ω0) in the Fourier domain.

3.1. The Back-Scattered Field

Consider the scattered field returned to the aperture when the aperture is offset from the origin by ρ, the object consists of a point scatterer at the position r, and an ideal temporal impulse is used as input to the aperture. This returned scattered field will be denoted by ĥ(r − ρ, t), where the dependence on r − ρ is indicative of the transverse spatial invariance of the system. Under the assumptions of linearity and temporal invariance, the response to an arbitrary transmitted waveform Êr(t) is then,
\[ \hat{E}_s(\boldsymbol{\rho}, t) = \int d^3r \int dt'\, \hat{E}_r(t')\, \hat{h}(\mathbf{r} - \boldsymbol{\rho},\, t - t')\, \eta(\mathbf{r}). \tag{1} \]
The linearity of the system is predicated on the assumption that multiple scattering effects are negligible— this is often known as the first Born approximation (see [Ref. 45], Sec. 7.6.2). The system input Êr(t) is the transmitted radar pulse for SAR systems and the temporal dependence of the optical plane wave incident on the objective lens in ISAM. The object is described by the reflectivity function η(r) which, in terms of Maxwell's equations, can be identified as the susceptibility (see [Ref. 52], Sec. 2.3). Note that in Eq. (1) the integration over r has been written in three dimensions, while it is a two-dimensional integration for the SAR system.
It is often convenient to represent the temporal convolution seen in Eq. (1) in the Fourier domain so that,
\[ E_s(\boldsymbol{\rho}, \omega) = \int d^3r\, E_r(\omega)\, h(\mathbf{r} - \boldsymbol{\rho}, \omega)\, \eta(\mathbf{r}). \tag{2} \]
A caret (ˆ) above a function denotes that the function is represented in the space-time domain, while the absence of a caret denotes a function represented in the space-frequency domain. The fact that η(r) is not a function of ω in Eq. (2) is indicative of an implicit assumption made in Eq. (1). The assumption is that the imaged susceptibility is a constant function of the probing signal frequency, i.e., that the object is not dispersive. This assumption is adequate over sufficiently narrow regions of the spectrum or when the object does not have significant absorbing resonant peaks over the imaging band. This is often true to a good approximation in the biological samples imaged using ISAM.
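The passage from the temporal convolution of Eq. (1) to the frequency-domain product of Eq. (2) is an instance of the convolution theorem, which can be checked numerically. The waveforms below are random stand-ins for the transmitted pulse and the impulse response, not models of actual ISAM signals.

```python
import numpy as np

rng = np.random.default_rng(0)
E_r = rng.standard_normal(64)   # stand-in transmitted waveform
h = rng.standard_normal(80)     # stand-in impulse response at one (r - rho)

# Temporal convolution evaluated directly in the time domain ...
direct = np.convolve(E_r, h)

# ... equals a pointwise product in the Fourier domain (with zero-padding
# to the full linear-convolution length to avoid wrap-around).
n = len(E_r) + len(h) - 1
fft_conv = np.real(np.fft.ifft(np.fft.fft(E_r, n) * np.fft.fft(h, n)))
```

The same diagonalization is what makes the frequency-domain forward models of Eq. (2), and later Eq. (8), convenient starting points for inversion.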

3.2. Signal Detection in Radar

The backscattered field incident on the detecting aperture is represented in Eq. (1). Rather than being used directly, this field is typically processed in radar systems, in a technique known as pulse compression. The most common processing used is a matched filter [53], which can be expressed as,
\[ \hat{I}_R(\boldsymbol{\rho}, \tau) = \int \hat{E}_s(\boldsymbol{\rho}, t)\, \hat{E}_r^*(t - \tau)\, dt, \tag{3} \]
where ÎR represents the processed radar data. In Eq. (3), the detected field is filtered with a function matched to the broadcast pulse Êr (t). Note that, following standard practice, a complex analytic representation of the signals has been employed (see [Ref. 45], Sec. 3.1), so that a one-sided Fourier analysis can be used. Implicit in Eq. (3) is a coherent radar detection system sensitive to both the amplitude and phase of the detected oscillating field Ês(ρ, t).
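Pulse compression by matched filtering can be illustrated with a small simulation. The chirp parameters, scatterer delay and amplitude below are arbitrary illustrative choices; the matched filter, implemented as a correlation of the echo against the conjugated broadcast pulse as in Eq. (3), compresses the long echo into a sharp peak at the scatterer's delay.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
# A long linear-frequency chirp as the broadcast pulse (complex analytic form).
pulse = np.exp(1j * 2 * np.pi * (50.0 * t + 200.0 * t**2))

delay = 500                                    # scatterer delay, in samples
echo = np.zeros(3000, dtype=complex)
echo[delay:delay + len(pulse)] = 0.3 * pulse   # attenuated, delayed copy

# Matched filter: np.correlate conjugates its second argument, so this
# computes the integral of Eq. (3) at each trial delay tau.
compressed = np.correlate(echo, pulse, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
```

Although the broadcast pulse occupies 2000 samples, the compressed output is sharply peaked at the true delay, which is the ranging resolution gain that pulse compression provides.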
Expressing Eq. (3) in the Fourier domain and using the description of the scattered field given by Eq. (2), the Fourier-domain SAR data may be written
\[ S(\boldsymbol{\rho}, \omega) = E_s(\boldsymbol{\rho}, \omega)\, E_r^*(\omega) = A(\omega) \int d^3r\, h(\mathbf{r} - \boldsymbol{\rho}, \omega)\, \eta(\mathbf{r}), \tag{4} \]
where
\[ A(\omega) = |E_r(\omega)|^2, \tag{5} \]
represents the spectral power distribution of the source.

3.3. Signal Detection in Time-Domain OCT and ISAM

While coherent detection of Ês(ρ, t) is possible at the frequencies used in radar, there exist no detectors capable of directly measuring the amplitude and phase of an optical field. However, the phase is indirectly captured through the use of the interferometer. Accurate control of the amplitude and phase of the probing optical signal presents a further complication. These obstacles are surmounted by using a broadband stochastic source and by relying on the coherence-gating effect to measure the time of flight. As shown below, broadband interferometry in OCT and ISAM essentially produces the same effects as coherent detection and pulse compression in radar.
The response times of optical detectors are generally of such a scale that the measured data can be considered a long-time average over optical time scales. Assuming that the fields in the system are statistically stationary and ergodic (see [Ref. 45], Sec. 2.2), these long time averages can be written as
\[ \hat{I}_T(\boldsymbol{\rho}, \tau) = \left\langle \left| \hat{E}_r(t - \tau) + \hat{E}_s(\boldsymbol{\rho}, t) \right|^2 \right\rangle = \Gamma_{rr}(0) + 2\,\mathrm{Re}\{\Gamma_{sr}(\boldsymbol{\rho}, \tau)\} + \Gamma_{ss}(\boldsymbol{\rho}, 0), \tag{6} \]
where τ is the temporal delay on the reference arm, Γαβ(ρ, τ) = ⟨Êα(ρ, t) Êβ*(ρ, t − τ)⟩ and the brackets ⟨ ⟩ represent an ensemble average. That ÎT(ρ, τ) does not depend on t is ensured by the assumption of stationarity.
Because Γrr(0) and Γss(ρ, 0) do not depend on τ in Eq. (6), they may be removed from the data. It can be seen that the data ÎT(ρ, τ) depend only on the real part of Γsr(ρ, τ), but by taking multiple measurements that include a phase shift in the reference arm (introduced by, for example, a very small translation of the reference arm) it is possible to recover the full complex function Γsr(ρ, τ) [44].
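The phase-stepping recovery of the complex Γsr can be sketched numerically. The block below assumes an idealized noise-free detector and a controllable reference-arm phase shift; the four-step scheme is one common choice used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # unknown complex Γ_sr
background = 5.0                                              # τ-independent terms

def measure(phase):
    # Real-valued detector reading with an extra phase shift in the reference arm:
    # background + 2 Re{Γ_sr e^{-i phase}}.
    return background + 2.0 * np.real(gamma * np.exp(-1j * phase))

# Four measurements with reference phases 0, π/2, π, 3π/2.
i0, i90, i180, i270 = (measure(p) for p in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))

# Differencing cancels the background; the quadrature pair gives Re and Im.
recovered = ((i0 - i180) + 1j * (i90 - i270)) / 4.0
```

The differences eliminate the τ-independent terms automatically, so no separate background subtraction is needed in this scheme.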
Using the definition of Γsr(ρ, τ) and Eq. (1),
\[ \Gamma_{sr}(\boldsymbol{\rho}, \tau) = \left\langle \hat{E}_s(\boldsymbol{\rho}, t)\, \hat{E}_r^*(t - \tau) \right\rangle = \int d^3r \int dt'\, \left\langle \hat{E}_r(t')\, \hat{E}_r^*(t - \tau) \right\rangle \hat{h}(\mathbf{r} - \boldsymbol{\rho},\, t - t')\, \eta(\mathbf{r}) = \int d^3r \int dt'\, \Gamma_{rr}(\tau - t')\, \hat{h}(\mathbf{r} - \boldsymbol{\rho}, t')\, \eta(\mathbf{r}). \tag{7} \]
As in Eq. (4), these data can be written in the Fourier domain. The Fourier domain data will again be denoted by S(ρ, ω) so that,
\[ S(\boldsymbol{\rho}, \omega) = A(\omega) \int d^3r\, h(\mathbf{r} - \boldsymbol{\rho}, \omega)\, \eta(\mathbf{r}), \tag{8} \]
where, in this case, A(ω) is the power spectral density of the reference beam, which is found, via the Wiener-Khintchine theorem (see [Ref. 45], Sec. 2.4), as the Fourier transform of Γrr(τ). This power spectral density and the Fourier intensity of Eq. (5) are both real, nonnegative functions which, for the purposes of the data processing examined here, play the same role describing the ω bandwidth of the data.
The identical forms of Eq. (4) and Eq. (8) illustrate the commonalities between radar and OCT. Both can be regarded as linear systems collecting data in N − 1 spatial and 1 spectral dimension, in order to estimate a spatial-domain object of N dimensions. Note that these data collection models have different integral kernels h(r − ρ, ω) and that simplifying assumptions are made to get to the forms of Eq. (4) and Eq. (8). For example, both multiple scattering and nonlinear object responses have been neglected, and it is assumed that a stable phase relation exists between points collected at different scan locations ρ. This last assumption can become problematic in both SAR and ISAM, as small unknown variations in the scan path can disturb the assumed relation between data collected at different locations. In both instruments it is usually necessary to introduce some data preprocessing to address this problem. In SAR systems autofocus algorithms [54] are employed, while in current implementations of ISAM, a known structure (e.g., a coverslip boundary) is placed in the object and used as a phase reference point [18, 55]. Such techniques are not necessary in OCT and radar, where computed imaging is not applied, phase stability is not required, and only the magnitude of the data is typically displayed.

3.4. Signal Detection in Fourier-Domain OCT and ISAM

The instrument modality described in the section above and illustrated in Fig. 1 is known as time-domain OCT or time-domain ISAM. It is, however, possible to collect the data directly in the frequency domain, in a methodology known as Fourier-domain OCT [56]. In this system the reference mirror seen in Fig. 1 is fixed and the detector is replaced with a spectrometer. Fourier-domain OCT eliminates the need for scanning the reference mirror position and has significant advantages in terms of image acquisition time and/or signal-to-noise ratio (SNR) [57, 58]. A complementary principle is applied in Fourier transform infrared spectroscopy [59, 60], where spectral information is measured using interferometric time-domain measurements.
In Fourier-domain OCT or ISAM the collected data are,
\[ I_F(\boldsymbol{\rho}, \omega) = \left| e^{i\omega\tau_0} E_r(\omega) + E_s(\boldsymbol{\rho}, \omega) \right|^2 = |E_r(\omega)|^2 + 2\,\mathrm{Re}\left\{ e^{-i\omega\tau_0} E_s(\boldsymbol{\rho}, \omega)\, E_r^*(\omega) \right\} + |E_s(\boldsymbol{\rho}, \omega)|^2, \tag{9} \]
where τ0 represents the fixed delay on the reference arm. Note that the Fourier-domain reference and sample fields appearing above are spectral domain representations of random processes. These Fourier domain representations are assumed to exist, at least in the sense of mean-square stochastic convergence of the Fourier integral [61].
The first term of Eq. (9) is the power spectral density A(ω) used above. This term is constant in ρ and typically slowly varying with ω, and thus can be removed. The last term is known as the autocorrelation artifact and is often small in comparison to the other terms. For this reason it will be assumed negligible here. Note that there are scenarios in which the autocorrelation term may be significant, and in these cases ISAM processing has been shown to mitigate this artifact via a blurring effect [13].
The Fourier spectrum S(ρ, ω) appearing in Eq. (4) in the deterministic-field context is analogous to the cross-spectral density of the stochastic field,
\[ S(\boldsymbol{\rho}, \omega) = \left\langle E_s(\boldsymbol{\rho}, \omega)\, E_r^*(\omega) \right\rangle. \tag{10} \]
This suggests that the remaining term in Eq. (9) be written 2Re{exp(−iωτ0)S(ρ, ω)}, with S(ρ, ω) being the desired complex data. While it is possible to determine the complex value of S(ρ, ω) through multiple measurements with different reference phases (as in the time-domain case), a simpler method may be employed if the reference mirror position is set appropriately. Since the sample generally has a well-defined boundary, it is possible to set the reference arm delay τ0 to be shorter than the least time of flight in the sample arm by at least the coherence time Lc/c. When this condition is met, the real and imaginary parts of S(ρ, ω) are related via a Hilbert transform. Using simple Fourier transform computations, it is thus possible to recover the imaginary part of S(ρ, ω) from the real observation given in a single measurement [13, 62].
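This Hilbert-transform recovery can be demonstrated with a discrete sketch. The construction below assumes an idealized noise-free measurement whose depth profile is strictly one-sided, mirroring the reference-mirror condition described above: all scatterers sit at positive delays, so suppressing the negative-delay half of the inverse transform of the real part restores the full complex spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256

# One-sided "depth profile": support only at positive delay indices 1..n/2-1,
# as enforced by the reference-mirror placement described in the text.
profile = np.zeros(n, dtype=complex)
profile[1:n // 2] = rng.standard_normal(n // 2 - 1) + 1j * rng.standard_normal(n // 2 - 1)

spectrum = np.fft.fft(profile)    # the complex S(rho, omega) we wish to know
measured = spectrum.real          # the detector only yields the real part

# Hilbert-transform style recovery: go to the delay domain, double the
# positive-delay half, zero the negative-delay half, and transform back.
q = np.fft.ifft(measured)
q[1:n // 2] *= 2.0
q[n // 2 + 1:] = 0.0
recovered = np.fft.fft(q)
```

If the one-sidedness condition is violated (scatterer delays straddling the reference delay), the positive- and negative-delay halves overlap and this simple recovery fails, which is why the reference-arm placement matters.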
In this section equivalent detection models have been posed for radar and OCT, as represented by Eq. (4) and Eq. (8) respectively. To understand image formation, the integral kernel h(r − ρ, ω) must be examined. This is done in the following section.

4. System Modeling

As shown by Eq. (4) and Eq. (8), the relationship between the object η(r) and the data S(ρ, ω) can be described by the same linear integral equation in both radar/SAR and OCT/ISAM. The modalities differ only in the specific kernels h(r − ρ, ω) which are determined from physics-based models. This section examines the models used for each modality.

4.1. Radar and OCT

As described in Sec. 3.1, the time-domain kernel ĥ(r − ρ, t) is the signal returned from a temporal impulse reflected from a scatterer at position r when the beam scan position is ρ. In OCT and strip-map radar, the transmitted and received beams are limited in the transverse directions by focusing, while the range is determined by the signal time of flight. This leads to the kernel,
\[ \hat{h}(\mathbf{r} - \boldsymbol{\rho}, t) = u(\mathbf{r}_\perp - \boldsymbol{\rho})\, \upsilon(\mathbf{r}_\perp - \boldsymbol{\rho})\, \delta[t - t_d(z)], \tag{11} \]
where r⊥ is the transverse component of r, u(r⊥) describes the width of the illuminating beam, υ(r⊥) describes the width of the detection sensitivity, and td(z) is the time of flight. The kernel described by Eq. (11) is separable in the transverse and axial coordinates and is therefore consistent with the pencil-beam approximation illustrated in Fig. 2.
The temporal delay td(z) is proportional to twice the depth of the scatterer, as a round-trip time of flight is measured. Thus
\[ t_d(z) = \frac{2z}{c}, \tag{12} \]
where c is the speed of light. If the same aperture is used in both transmission and detection, reciprocity [63] requires that u(r⊥) = υ(r⊥). Appealing to Eq. (11) and Eq. (12), Eq. (7) becomes,
\[ \Gamma_{sr}(\boldsymbol{\rho}, \tau) = \int d^3r\, \Gamma_{rr}\!\left(\tau - \frac{2z}{c}\right) u^2(\mathbf{r}_\perp - \boldsymbol{\rho})\, \eta(\mathbf{r}_\perp, z). \tag{13} \]
This expression relates time-domain OCT data to the imaged object. The object is convolved with a point spread function (PSF) whose transverse extent is governed by u²(r⊥) and whose axial extent is determined by Γrr(τ). This relation is similar to one encountered in radar, where the beam width also determines the transverse resolution and the axial resolution is proportional to the length of the compressed broadcast pulse. As illustrated in Fig. 2, Eq. (13) is only valid within the focal region. Beyond this range, blurring and interference artifacts are observed because of beam spread.
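A minimal simulation of the separable pencil-beam model of Eq. (13) may help fix ideas. The beam and coherence profiles below are arbitrary Gaussians and the convolutions are circular for simplicity; the object is a few point scatterers, each of which is smeared transversely by u² and axially by the coherence envelope.

```python
import numpy as np

nx, nz = 64, 128
eta = np.zeros((nx, nz))
eta[[10, 40, 25], [20, 90, 60]] = 1.0       # three point scatterers

x = np.arange(nx) - nx // 2
z = np.arange(nz) - nz // 2
u2 = np.exp(-(x / 3.0) ** 2)                # transverse beam intensity profile u²
gamma_rr = np.exp(-(z / 2.0) ** 2)          # source coherence envelope Γ_rr

# Separable blur of Eq. (13): convolve transversely with u² and axially
# with the coherence function, via FFTs (kernels re-centered to index 0).
U2 = np.fft.fft(np.fft.ifftshift(u2))
G = np.fft.fft(np.fft.ifftshift(gamma_rr))
image = np.real(np.fft.ifft(np.fft.fft(eta, axis=0) * U2[:, None], axis=0))
image = np.real(np.fft.ifft(np.fft.fft(image, axis=1) * G[None, :], axis=1))
```

Each scatterer appears in the image as a transverse-by-axial blob; in a real system the transverse width of that blob grows away from focus, which is precisely what this separable model cannot capture.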
It is convenient to take Eq. (13) into the temporal Fourier domain:
\[ S(\boldsymbol{\rho}, \omega) = A(\omega) \int d^3r\, u^2(\mathbf{r}_\perp - \boldsymbol{\rho})\, e^{i 2 k(\omega) z}\, \eta(\mathbf{r}_\perp, z), \tag{14} \]
where k(ω) is the wavenumber given by the dispersion relation,
\[ k(\omega) = \frac{\omega}{c}. \tag{15} \]
This expression is an alternative representation of the time-domain data and directly describes the information bearing term in Fourier-domain OCT. Comparing Eq. (14) to Eq. (8) reveals that the kernel used for the OCT forward model is given by the expression
\[ h(\mathbf{r} - \boldsymbol{\rho}, \omega) = u^2(\mathbf{r}_\perp - \boldsymbol{\rho})\, e^{i 2 k(\omega) z}. \tag{16} \]
Reliance on this approximate model limits OCT and radar imaging systems—OCT images are of increasingly poor quality away from the depth of focus, and transverse radar resolution is limited by the beam width and hence the maximum aperture size.

4.2. SAR and ISAM

The computed imaging approaches of SAR and ISAM are based on models that more closely approximate solutions of Maxwell's equations. Contrary to the assumptions made in OCT, the transverse and axial system responses cannot be accurately decoupled, due to the beam spreading illustrated in both Fig. 2 and Fig. 3. The changes in the model are reflected by changes in the kernel h(r − ρ, ω) that appears in Eq. (8). Below, this kernel is analyzed at each temporal harmonic, i.e., the form of the kernel is found at each fixed value of ω.
The kernel h(r − ρ, ω) is again separable into the product of illumination and detection patterns as,
\[ h(\mathbf{r} - \boldsymbol{\rho}, \omega) = k^2(\omega)\, g(\mathbf{r} - \boldsymbol{\rho}, \omega)\, f(\mathbf{r} - \boldsymbol{\rho}, \omega). \tag{17} \]
Here the objective lens (ISAM) or transmitting aperture (SAR) produces a field g(r − ρ, ω) in the sample, the detection sensitivity varies with f(r − ρ, ω) and the factor of k²(ω) describes the frequency dependence of scattering. A detailed discussion of this form for h(r − ρ, ω) can be found in [Ref. 14].
When the same aperture is used for both illumination and detection (as is typically the case), reciprocity can again be invoked to show that the illumination and detection patterns are equal. Furthermore, the illumination field g(r − ρ, ω) must obey propagation laws. This means that in a homogeneous background medium, the illuminating field can be represented by a spectrum of plane waves (see [Ref. 52], Sec. 11.4.2),
\[ g(\mathbf{r}, \omega) = \int d^2q\, G(\mathbf{q}, \omega) \exp\left\{ i \left[ \mathbf{q} \cdot \mathbf{r}_\perp + k_z(q, \omega)\, z \right] \right\}, \tag{18} \]
where,
\[ k_z(q, \omega) = \sqrt{k^2(\omega) - q^2}. \tag{19} \]
In free space k(ω) is given by Eq. (15), however more complicated dispersion relations can also be used for dispersive materials [18, 64, 65], where the speed of light depends on ω.
The angular spectrum of Eq. (18) must be modified for the two-dimensional SAR system. In SAR a two-dimensional (x, z) object is imaged, meaning that r⊥ and q are each one-dimensional. However, the electromagnetic fields present in the system spread in three dimensions. In the simple strip-map SAR system considered here, the SAR aperture track and the object are both assumed to lie in the x–z plane, i.e., the aperture altitude is neglected. In this geometry the spreading in y can be modeled as a [k(ω)z]^{−1/2} decay so that, for the SAR system, Eq. (18) becomes,
\[ g_s(\mathbf{r}, \omega) = \frac{1}{\sqrt{k(\omega) z}} \int dq_x\, G_s(q_x, \omega) \exp\left\{ i \left[ q_x x + k_z(q_x, \omega)\, z \right] \right\}. \tag{20} \]
As will be seen subsequently, this difference in dimensionality between SAR and ISAM does not change the nature of the data processing required, only the dimensionality of the processing.
The angular spectra G(q, ω) and Gs(qx, ω) seen in Eq. (18) and Eq. (20) can be related to the aperture shapes used in ISAM and SAR. In ISAM the focal plane is defined to be at z = 0, resulting in the function G(q, ω) being directly related to the lens aperture. For high numerical aperture lenses the function G(q, ω) is broad and the beam width at the focus narrow. Aberrations on the lens can be included in the phase of the angular spectrum. A simple model for g(r, ω) is a Gaussian beam [46], where G(q, ω) is a Gaussian function. More thorough models, e.g., [66], can also be used within the developed framework. In SAR, the z = 0 plane is chosen to coincide with the track of the radar aperture. In this case the spectrum Gs(qx, ω) corresponds to the Fourier transform of the aperture profile. A small aperture gives a highly divergent beam.
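The angular spectrum propagation of Eq. (18) can be sketched numerically in one transverse dimension. The grid, wavelength and waist below are illustrative assumptions; the computation propagates a focal-plane Gaussian field away from z = 0 by multiplying its transverse spectrum by exp[i k_z(q, ω) z], and shows the beam spreading that the ISAM model retains and the OCT model neglects.

```python
import numpy as np

n, dx = 1024, 0.1e-6                 # transverse grid (illustrative values)
x = (np.arange(n) - n // 2) * dx
wavelength = 0.8e-6
k = 2 * np.pi / wavelength           # k(omega) in a homogeneous medium

w0 = 1.0e-6
field_focus = np.exp(-(x / w0) ** 2)           # Gaussian waist at z = 0

q = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # transverse spatial frequencies
kz = np.sqrt((k**2 - q**2).astype(complex))    # Eq. (19); evanescent part imaginary

def propagate(z):
    """Angular-spectrum propagation of the focal field by a distance z."""
    G = np.fft.fft(np.fft.ifftshift(field_focus))
    return np.fft.fftshift(np.fft.ifft(G * np.exp(1j * kz * z)))

def rms_width(field):
    w = np.abs(field) ** 2
    return np.sqrt(np.sum(w * x**2) / np.sum(w))

# The beam is markedly wider 20 µm from focus than at the waist.
w_focus = rms_width(propagate(0.0))
w_far = rms_width(propagate(20e-6))
```

Because np.sqrt of a negative complex argument returns a positive imaginary root, the |q| > k components decay exponentially rather than propagate, consistent with the plane-wave spectrum picture.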
The forward model used in SAR and ISAM (Eq. (17)) is more accurate than that assumed in radar and OCT (Eq. (16)). In the simple OCT model each point in the data set is associated with a point in the object, as described in Eq. (13). To correct for the out-of-focus blurring described by the more accurate kernel of Eq. (17), mathematical processing must be applied. The appropriate computed imaging algorithm is described in the next section.

5. The Inverse Problem for SAR and ISAM

The linear integral equation of Eq. (8) and the expression for the kernel, given in Eq. (17), form the forward model used in SAR and ISAM. This relation describes the dependence of the data on the object. Estimating the object from the data, using the forward model, requires solving the inverse problem. In general this problem may be ill-posed, but with the use of regularization techniques [67-69], an estimate of the object may be found. The quality of this estimate will depend on how much information is passed by the instrument.
Since the ISAM forward model is well defined, the inverse problem can, in principle, be solved using numerical techniques. However, an approximation to the forward model allows a more elegant, and significantly more efficient [20], solution to the inverse problem. This solution is explained in this section.

5.1. Transverse Spatial Fourier Representation of the Model

The angular spectrum representations seen in Eq. (18) and Eq. (20) give the transverse spatial Fourier transform of the illuminating field g(r, ω). The model kernel can then be taken to the transverse spatial Fourier domain, denoted by a tilde, by noting that the product seen in Eq. (17) becomes a convolution,
\[ \tilde{h}(\mathbf{q}, z, \omega) = k^2(\omega) \int d^2q'\, G(\mathbf{q}', \omega)\, G(\mathbf{q} - \mathbf{q}', \omega) \exp\left\{ i \left[ k_z(q', \omega) + k_z(|\mathbf{q} - \mathbf{q}'|, \omega) \right] z \right\}. \tag{21} \]
Comparing Eq. (18) and Eq. (20), it can be seen that the SAR result is similar to the expression above but in one fewer dimension and with a prefactor of [k(ω)z]^{−1}.
As a first step towards the solution of the inverse problem, it is useful to recognize that the transverse part of the integral appearing in Eq. (8) is in the form of a two-dimensional convolution. Thus, by taking the two-dimensional (transverse) spatial Fourier transform of the data, the inverse problem may be reduced from a problem involving a three-dimensional integral equation to one of a series of one-dimensional integral equations, i.e.,
\tilde{S}(q, \omega) = A(\omega) \int dz \, \tilde{h}(q, z, \omega) \, \tilde{\eta}(q, z).
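The reduction from a three-dimensional integral equation to a family of one-dimensional equations rests on the Fourier convolution theorem: a transverse convolution becomes a pointwise product under the two-dimensional Fourier transform. A minimal numerical check of this property (a generic illustration, not part of the original analysis) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary transverse "fields" on a periodic 2D grid.
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

# Circular convolution computed through the 2D FFT...
conv_via_fft = np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

# ...and by explicit summation at a single test point (x0, y0).
x0, y0 = 5, 9
direct = sum(
    a[i, j] * b[(x0 - i) % 64, (y0 - j) % 64]
    for i in range(64) for j in range(64)
)

# A transverse convolution is a pointwise product in the Fourier domain,
# which is what decouples the transverse frequencies q in Eq. (22).
assert np.isclose(conv_via_fft[x0, y0], direct)
```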

5.2. Model Approximation in Diverging Regions

As illustrated in Fig. 3, the fields used in SAR and ISAM are divergent away from the z = 0 plane. In Eq. (21), this implies that the complex exponential factor in the integrand is rapidly oscillating. Such oscillatory integrals can be approximated using the method of stationary phase (see [Ref. 45], Sec. 3.3). The stationary point occurs where the argument of the exponential has zero gradient, which in this case is at the point q′ = q/2.
Applying the method of stationary phase in two dimensions gives the ISAM result,
\tilde{h}(q, z, \omega) \approx \frac{H_D(q, \omega)}{k(\omega) z} \exp\{ i 2 k_z(q/2, \omega) z \},
where H_D(q, ω) describes the bandwidth of the data (see [Ref. 14] for an exact description of this function). The factor of [k(ω)z]^{-1} appearing above describes the signal decay away from focus. In SAR, the method of stationary phase in one dimension is applied to a kernel based on the angular spectrum of Eq. (20). The result is of the same form as Eq. (23) but with a decay of [k(ω)z]^{-3/2}.
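The accuracy of a stationary-phase estimate of this kind is easy to check numerically. The sketch below uses an arbitrary Gaussian envelope and a quadratic phase (chosen purely for illustration, not the ISAM kernel itself): for ∫ g(x) exp(iλx²) dx the only stationary point is x = 0, and the method gives g(0)·√(2π/(2λ))·exp(iπ/4):

```python
import numpy as np

lam = 200.0                            # large phase parameter (rapid oscillation)
x = np.linspace(-20.0, 20.0, 1_000_001)
g = np.exp(-x**2 / 8.0)                # slowly varying envelope

# Brute-force evaluation of the oscillatory integral ∫ g(x) exp(i*lam*x^2) dx.
integrand = g * np.exp(1j * lam * x**2)
numeric = np.sum(integrand) * (x[1] - x[0])   # endpoint terms are negligible

# Stationary-phase estimate: phi(x) = x^2 has its stationary point at x = 0,
# with phi'' = 2, giving g(0) * sqrt(2*pi/(2*lam)) * exp(i*pi/4).
approx = np.sqrt(np.pi / lam) * np.exp(1j * np.pi / 4.0)

# For large lam the estimate is accurate to a fraction of a percent.
assert abs(numeric - approx) / abs(approx) < 1e-2
```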

5.3. Model Approximation in Focused Regions

As seen in Fig. 3, the object in ISAM, unlike in SAR, contains the focused z = 0 plane. Around this region the exponential seen in the integrand of Eq. (21) is not highly oscillatory, meaning the method of stationary phase cannot be accurately applied. However, it is still possible to approximate the function h̃(q, z, ω) to obtain an elegant inversion [19].
In the focal region, the integrand of Eq. (21) is dominated by the product G(q′, ω) G(q − q′, ω). For symmetric apertures, this product will be peaked around the point q′ = q/2. The exponential factor may be expanded in a Taylor series about this point and, since it is slowly varying for small k(ω)z, all but the leading term discarded. The consequent analysis, given in detail in [Ref. 14], then results in an approximation of the form,
\tilde{h}(q, z, \omega) \approx H_F(q, \omega) \exp\{ i 2 k_z(q/2, \omega) z \}.
The exponential factor above is the same as for the diverging region.

5.4. Reduction to Resampling

The approximated models described above can be substituted into the data model of Eq. (22) to give,
\tilde{S}(q, \omega) \approx A(\omega) H(-q, \omega) \int dz \, \frac{\tilde{\eta}(q, z)}{R(z)} \exp\{ i 2 k_z(q/2, \omega) z \},
where H(q, ω) = H_F(q, ω) and R(z) = 1 when considering z in the focused region, and H(q, ω) = H_D(q, ω) and R(z) = k(ω)z (or R(z) = [k(ω)z]^{3/2} in the SAR case) for z in the diverging region. The transition point between these two regimes is discussed in [Ref. 14].
In Eq. (25), the product A(ω) H(−q, ω) acts as a linear filter on the data. The effect of this filter can be compensated by standard means, such as the Wiener filter [70]. For systems without aberrations, the function H(q, ω) is slowly varying, as is A(ω), meaning that it may be acceptable to neglect the effects of A(ω)H(−q, ω) in many situations.
In either case, the remaining integral in Eq. (25) can be seen to be of the form of a Fourier transform. Consequently,
\tilde{S}(q, \omega) \propto \tilde{\tilde{\eta}}'[ q, q_z(q, \omega) ],
where \tilde{\tilde{\eta}}' is the three-dimensional Fourier transform of η(r)/R(z), the object with an attenuation away from focus, and
q_z(q, \omega) = 2 k_z(q/2, \omega) = \sqrt{4 k^2(\omega) - q^2}.
This equation describes a Fourier domain warping relating the data and the object. This warping is known as the Stolt mapping and is illustrated in Fig. 4. The Stolt mapping was originally developed in the field of geophysical imaging [71, 72] and is used in Fourier migration techniques. In ultrasonic imaging, Eq. (27) forms the basis of the Synthetic Aperture Focusing Technique (SAFT) [73-76]. The Stolt mapping was also recognized as applicable in SAR [77], where it is typically known as the ω-k algorithm or the wavenumber algorithm. This work shows the utility of the Stolt mapping in the field of interferometric broadband microscopy.
The equivalent Fourier mapping for OCT, found from the kernel of Eq. (16) and valid only within the focal region, is
q_z(q, \omega) = 2 k(\omega).
This OCT model describes only a rescaling of the axial coordinate, while the Stolt mapping of Eq. (26) describes the physical effects of out-of-focus beam spreading.
The relation given in Eq. (26) gives a clear indication of how to estimate the object from the collected data S(ρ, ω). This procedure can be summarized as
  • Starting with the complex data S(ρ, ω), collected as described in Sec. 3, take the transverse spatial Fourier transform to get S̃(q, ω).
  • Implement a linear filtering, i.e., a Fourier-domain multiplication of a transfer function with S̃(q, ω), to compensate for the bandpass shape given by A(ω) H(−q, ω) in Eq. (25). This step may often be omitted without significant detriment to the resulting image.
  • Warp the coordinate space of S̃(q, ω) so as to account for the Stolt mapping illustrated in Fig. 4. Resample the result back to a regular grid to facilitate numerical processing.
  • Take the inverse three-dimensional Fourier transform to get an estimate of η(r)/R(z), the object with an attenuation away from focus.
  • If required, multiply the resulting estimate by R(z) to compensate for decay of the signal away from focus.
The operations described above are computationally inexpensive and allow a fast implementation of ISAM processing [20].
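As an illustrative sketch of these steps (not the published implementation), consider a two-dimensional toy problem with one transverse coordinate x, depth z, unit wave speed, and a single hypothetical point scatterer at (x0, z0) = (10, 40). The Fourier-domain data are generated directly from the forward relation S̃(q, ω) ∝ exp[−i q x0 − i q_z(q, ω) z0] with the Stolt mapping q_z = √(4k² − q²); the filtering step is omitted and all grid choices are arbitrary:

```python
import numpy as np

x0, z0 = 10.0, 40.0                      # hypothetical point-scatterer location
q = np.linspace(-0.9, 0.9, 81)           # transverse spatial frequencies
k = np.linspace(0.8, 1.2, 161)           # wavenumbers (c = 1, so k = omega)

# Fourier-domain data for a single point scatterer, per the forward model:
# S(q, omega) ~ exp(-i q x0 - i q_z z0), with q_z = sqrt(4 k^2 - q^2).
Q, K = np.meshgrid(q, k, indexing="ij")
S = np.exp(-1j * (Q * x0 + np.sqrt(4 * K**2 - Q**2) * z0))

# Stolt step: resample S from the (q, omega) grid onto a uniform (q, q_z)
# grid by evaluating at k(q_z, q) = sqrt(q_z^2 + q^2) / 2; out-of-band
# points are zero-filled.
qz = np.linspace(1.3, 2.35, 161)
S_stolt = np.zeros((q.size, qz.size), dtype=complex)
for i, qi in enumerate(q):
    k_needed = np.sqrt(qz**2 + qi**2) / 2.0
    S_stolt[i] = (np.interp(k_needed, k, S[i].real, left=0, right=0)
                  + 1j * np.interp(k_needed, k, S[i].imag, left=0, right=0))

# Inverse transform back to the image domain (explicit DFT for clarity).
xg = np.linspace(0.0, 20.0, 201)
zg = np.linspace(30.0, 50.0, 201)
image = np.abs(np.exp(1j * np.outer(xg, q)) @ S_stolt
               @ np.exp(1j * np.outer(qz, zg)))

# The reconstruction peaks at the scatterer location.
ix, iz = np.unravel_index(np.argmax(image), image.shape)
assert abs(xg[ix] - x0) < 1.0 and abs(zg[iz] - z0) < 1.0
```

The interpolate-then-inverse-transform structure above, with an additional transverse dimension, is broadly the structure that fast ISAM implementations optimize [20].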

6. Results

In this section ISAM images are compared to those obtained using standard OCT methods. The high quality of the results obtained validates the calculations made above, while also showing that the approximations made to the forward model in Sec. 5 do not introduce significant error into the solution of the inverse problem.

6.1. Simulations

Numerical simulations of the ISAM system are useful for providing a theoretical corroboration of the proposed methods in a tightly controlled and well understood environment. In Fig. 5, simulation results are shown for the imaging of an isotropic point scatterer located out of focus on the z axis.
The data were produced using the focused vector beam formulation given in [66]. The electromagnetic field defined in that paper is an exact solution to Maxwell's equations, and obeys geometrical-optics boundary conditions on the lens aperture. An objective lens with 0.75 numerical aperture was simulated and light between the wavelengths of 660 nm and 1000 nm was collected. Further details of this type of data simulation can be found in [Ref. 14].
The magnitude of the spatial-domain OCT data gives a broadly spread and low-amplitude response. Ideally the image would be point-like, corresponding to the point scatterer. The blurring observed is due to the scatterer being in the out-of-focus region. When the OCT image is examined in the Fourier domain, curved phase fronts can be seen. For the offset point scatterer imaged, the Fourier spectrum should have flat phase fronts parallel to the q_x-q_y plane.
The Fourier resampling of ISAM can be seen to take the curved OCT phase fronts to the expected straight lines. When the ISAM image is represented in the spatial domain, the desired high-amplitude, point-like image is seen. These simulations lend strong support to ISAM, as the detailed, vectorial forward model is inverted accurately by a simple Fourier-domain resampling only.

6.2. Imaging a Phantom

Beyond simulations, the next step in ISAM validation is to image an engineered object (i.e. a phantom) with known structure. Here the phantom was constructed by embedding titanium dioxide scatterers, with a mean diameter of 1 μm, in silicone. This phantom was imaged with a spectral-domain ISAM system employing an objective lens with a numerical aperture of 0.05. A femtosecond laser (Kapteyn-Murnane Laboratories, Boulder, Colorado) was used as a source, to give a central wavelength of 800 nm and a bandwidth of 100 nm. The resulting focused pattern g(r, ω) can be approximated as a Gaussian beam with a spot size of 5.6 μm and a depth of focus of approximately 240 μm. Further details of the ISAM instrument and the phantom can be found in [Ref. 18].
ISAM processing, including dispersion compensation [64], was applied to the collected data to produce an image. Specific details of the computational implementation can be found in [Ref. 18], while [Ref. 20] gives a thorough general description of ISAM algorithms and computational demands. The raw data and the ISAM reconstruction are shown in Fig. 6 and Fig. 7 (transverse-axial and transverse-transverse planes respectively), with corresponding renderings in Fig. 8.
Out-of-focus blurring is clearly visible in the collected data. This blurring limits the depth of field in OCT. The ISAM reconstruction can be seen to bring the out-of-focus regions back into focus, as evidenced by the point-like features in the image, which correspond to individual titanium dioxide scatterers. It should be noted that the point-like reconstructions observed are produced by the physics-based computational imaging, not by the use of any assumed prior knowledge of the sample, e.g., [78]. The xy details of Fig. 7 provide further insight into the action of the ISAM resampling algorithm. In Fig. 7(b) and Fig. 7(c) interference fringes can be clearly seen. These result from the simultaneous illumination of two (or more) point scatterers and the consequent interference of the light scattered from each. The reconstructions of Fig. 7(g) and Fig. 7(h) show that these interference fringes are correctly interpreted as multiple point scatterers in the ISAM reconstruction.
To further illustrate the SAR-ISAM analogy, ISAM and SAR images are compared below. Strip-map radar and SAR images from a linear rail SAR imaging system [79,80] are shown in Fig. 9. This imaging system consists of a small radar sensor mounted on a linear rail 225 cm in length. The radar sensor is moved down the rail in 2.5 cm increments, acquiring a range profile of the target scene at each location along the rail. The radar sensor is a linear FM radar system with 5 GHz of chirp bandwidth spanning approximately 7.5 GHz to 12.5 GHz. The chirp time is 10 ms, the transmit power is approximately 10 dBm, the receiver dynamic range is better than 120 dB, and the digitizer dynamic range is 96 dB. Range profile data from each increment across the rail are fed into a range-migration SAR algorithm [12], a Stolt Fourier resampling, to yield a high-resolution SAR image of the target scene. Raw radar range profile data are similar to out-of-focus data in coherence microscopy, as seen in Fig. 9(a) for radar and Fig. 6(b) for OCT. The SAR image, after Stolt Fourier resampling, is shown in Fig. 9(b), which is analogous to the ISAM image of Fig. 6(c).

6.3. Imaging Tissue

OCT and ISAM are primarily biological imaging methods. As such, the most important capability of ISAM is the imaging of tissue. As described in [Ref. 18], human breast tissue was acquired and imaged with the same ISAM system used to image the titanium dioxide scatterers. Examples of the resulting images can be seen in Fig. 10. Once again, it can be seen that ISAM successfully removes blur and resolves interference artifacts in otherwise out-of-focus regions.
The improvement observed in the ISAM reconstructions has significant consequences in terms of the diagnostic utility of the images. In the out-of-focus OCT images, the cellular structure is almost entirely lost, while in the ISAM reconstructions, significant features can be seen on the micrometer scale. For example, cell membranes can be recognized, and the boundary between the adipose and fibrous tissue can be clearly seen. There is also a strong correspondence to the histological sections, although embedding, sectioning and staining of the tissue disrupt the sample to some extent. ISAM, unlike OCT, can be seen to allow diffraction-limited imaging at all planes within the sample, rather than just at the physical focus. As a result, significantly more information regarding the tissue can be extracted without increasing the measurement duration or scanning the focal plane. In contrast to the histological images, the structure visible in the ISAM images is observed without destruction of the sample. This suggests ISAM may be particularly useful in applications where in vivo imaging over a large tissue volume is preferable to biopsy.

7. Alternate ISAM Modalities

ISAM is a microscopic imaging technique and is implemented on a bench-top scale. This provides significant flexibility in the design of alternative ISAM modalities. In this section some alternative ISAM instruments are briefly discussed.

7.1. Vector ISAM

To achieve a maximum-resolution image it is necessary to use the highest possible numerical aperture objective lens (high-numerical-aperture OCT is often known as optical coherence microscopy [81]). For such high-angle lenses the electromagnetic fields present in the system cannot be accurately approximated as scalar fields. Furthermore, it has been shown that the vectorial nature of the high-aperture focused field can be explicitly exploited to probe anisotropic properties of the object, e.g., [82-87]. ISAM can be generalized to vectorial fields [14].
In the vectorial system, scattering from the object is recognized as being dependent on the polarization state of the relevant fields. As a result, the object is not a scalar function η (r), but a rank-two tensor function of position η̄ (r). The illumination and detection patterns, g(r, ω) and f (r, ω), are vectorial, which results in six independent ISAM kernels—one for each possible pair of field directions in illumination and detection. That is,
h(r - \rho, \omega, \alpha, \beta) = k^2(\omega) \, g(r - \rho, \omega, \alpha) \, f(r - \rho, \omega, \beta),
where g(r − ρ, ω, α) is an element of the field g(r, ω), and α takes on values of x, y and z. The scalar kernel of Eq. (17) is a special case of this expression.
It can be shown that the data then depend on the scattering tensor as [14],
S(\rho, \omega) = \sum_{\alpha \in \{x, y, z\}} \sum_{\beta \in \{x, y, z\}} \int d^3r \, h(r - \rho, \omega, \alpha, \beta) \, \bar{\eta}(r, \alpha, \beta),
where η̄(r, α, β) is an element of the tensor η̄(r). The scalar case of Eq. (8) is a special case of this expression.
It can be seen from Eq. (29) that h(r − ρ, ω, α, β) = h(r − ρ, ω, β, α). Symmetry arguments [88] can be used to show the equivalent property, η̄(r, α, β) = η̄(r, β, α), for the scattering tensor. The effect of each independent element of the scattering potential on the data is therefore described by a distinct kernel.
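The counting implied by this symmetry can be spelled out: of the nine ordered (α, β) combinations, only the six unordered pairs are independent. A trivial enumeration (illustration only):

```python
from itertools import combinations_with_replacement, product

directions = ("x", "y", "z")

# All ordered (alpha, beta) combinations: 3 x 3 = 9 kernels before symmetry.
assert len(list(product(directions, repeat=2))) == 9

# The symmetry h(..., alpha, beta) = h(..., beta, alpha) leaves only the
# unordered pairs, i.e. the six independent ISAM kernels.
independent = list(combinations_with_replacement(directions, 2))
assert len(independent) == 6
assert independent == [("x", "x"), ("x", "y"), ("x", "z"),
                       ("y", "y"), ("y", "z"), ("z", "z")]
```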

7.2. Full-Field ISAM

Full-field OCT systems [89-93] involve capturing xy images sequentially, one frequency ω, or delay τ, at a time. A similar system has been analyzed for ISAM [15]. In this system the object is illuminated with a z-propagating plane wave, so that the illumination pattern is
g(r, \omega) = \exp[ i k(\omega) z ].
The angular spectrum of the illuminating light is then,
G(q, \omega) = \delta(q).
The scattered light is collected by an objective lens, so that the detection pattern f(r, ω) is of the same form as g(r, ω) in Eq. (18).
The spatial-domain kernel of Eq. (17) can then be taken into the Fourier domain using the same process as that used to find Eq. (21). That is,
\tilde{h}(q, z, \omega) = k^2(\omega) \int d^2q' \, \delta(q') \, F(q - q', \omega) \exp\{ i [ k_z(q', \omega) + k_z(q - q', \omega) ] z \} = k^2(\omega) \, F(q, \omega) \exp\{ i [ k(\omega) + k_z(q, \omega) ] z \}.
This exact kernel is of the same form as the approximated forward models of Eq. (23) and Eq. (24). As a result, the relationship between the Fourier-domain data and the Fourier-domain object for the full-field ISAM system is of the same form as Eq. (26), but with the new mapping,
q_z(q, \omega) = k(\omega) + k_z(q, \omega).
Thus the inverse problem in full-field ISAM is also solved by a Fourier-domain resampling, albeit on a different grid.
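A small numerical comparison of the three Fourier mappings encountered so far (the OCT rescaling of Eq. (28), the confocal ISAM/SAR Stolt mapping of Eq. (27), and the full-field mapping of Eq. (34)) makes the distinction concrete; the wavenumber and frequency grids below are arbitrary choices:

```python
import numpy as np

k = 1.0                                   # wavenumber (arbitrary units)
q = np.linspace(-0.8, 0.8, 101)           # transverse spatial frequencies

qz_oct = np.full_like(q, 2 * k)           # OCT: axial rescaling only
qz_isam = np.sqrt(4 * k**2 - q**2)        # confocal ISAM / SAR: Stolt mapping
qz_full = k + np.sqrt(k**2 - q**2)        # full-field ISAM mapping

# All three mappings coincide on axis (q = 0), where each reduces to 2k...
i0 = np.argmin(np.abs(q))
assert np.allclose([qz_oct[i0], qz_isam[i0], qz_full[i0]], 2 * k)

# ...but off axis only the ISAM mappings account for beam spreading,
# bending the surface q_z(q, omega) away from the flat OCT plane q_z = 2k.
assert np.all(qz_isam[q != 0] < 2 * k) and np.all(qz_full[q != 0] < 2 * k)
```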
Confocal ISAM is analogous to SAR, and both techniques share the Stolt mapping. The full-field ISAM mapping of Eq. (34) also appears in diffraction tomography [94-96], a technique applied in ultrasound, optical and microwave imaging.

7.3. Rotationally-Scanned ISAM

Rotationally-scanned ISAM [16] is a sensing system compatible with catheter-based imaging, as illustrated in Fig. 11. Rather than scanning the aperture in two coplanar dimensions, the aperture is scanned in one linear dimension (along the catheter) and one rotational dimension (along the azimuthal angle).
The complex analytic signal, S(p, θ, ω), is sampled at points given by the displacement p along the y axis and the azimuthal coordinate θ, as well as the usual frequency ω. Taking the Fourier transform with respect to both p (argument ξ) and θ (argument η_θ) results in the data S̃(ξ, η_θ, ω). After some calculation an approximate expression is obtained for the transformed data:
\tilde{S}(\xi, \eta_\theta, \omega) = K(\xi, \eta_\theta, \omega) \, \tilde{\kappa}(\xi, \eta_\theta, \omega),
where K(ξ, η_θ, ω) describes the bandpass function of the rotationally-scanned ISAM system. The function κ̃(ξ, η_θ, ω) is a Fourier transform of a resampled version of the Fourier transform of the object being sought. That is,
\tilde{\kappa}(\xi, \eta_\theta, \omega) = \int_{-\pi}^{\pi} d\theta \, \exp(i \theta \eta_\theta) \, \breve{\eta}\left[ \hat{x} \cos\theta \sqrt{4 k^2(\omega) - \xi^2} + \hat{y} \, \xi + \hat{z} \sin\theta \sqrt{4 k^2(\omega) - \xi^2} \right],
where x̂, ŷ and ẑ are unit vectors and
\breve{\eta}(q) = \int d^3r \, \exp(i \, r \cdot q) \, \eta(r) \, P(r),
is a weighted Fourier transform of the object η(r), with P(r) a function of the radial distance to focus [16]. It is thus seen that the solution of the inverse problem may again be reduced to a filtering and resampling of the data between appropriate Fourier transforms and inverse Fourier transforms.

7.4. Partially-Coherent ISAM

In a recent analysis [97], it has been shown that a spatially-extended, statistically partially-coherent source can be incorporated in full-field ISAM to produce differing illumination and detection patterns g(r, ω) and f (r, ω). Varying the source coherence length allows considerable control of g(r, ω) and also results in a changing resampling scheme in the inverse problem. The multiple scattering artifacts that can be problematic in full-field ISAM can be effectively mitigated using partially-coherent ISAM.

8. Conclusions

ISAM is a computed imaging technique that quantitatively estimates a three-dimensional scattering object in broadband coherent microscopy. The solution of the inverse problem allows the reconstruction of areas typically regarded as out of focus. The result obviates the perceived trade-off between resolution and depth of focus in OCT.
ISAM, like OCT, is a tomographic method, i.e., the images produced are truly three-dimensional. While ISAM addresses an inherent weakness in OCT, namely the need to scan the focus axially to obtain images outside of the original focal plane, ISAM is not merely a method to refocus the field computationally. Refocusing may be achieved from a single interferometric image at a fixed frequency, but the resulting image is still inherently two-dimensional, failing to unambiguously distinguish contributions to the image from various depths. As in other ranging technologies, the broadband nature of ISAM allows a true three-dimensional reconstruction.
ISAM and SAR are closely related technologies, to the point where they can be cast in the same mathematical framework. Both techniques employ a Fourier domain resampling, based on the Stolt mapping, in the inverse processing. While the mathematics of the two systems are closely related, each uses a significantly different region of the electromagnetic spectrum and images objects of commensurately different scales. In SAR, translation of the aperture and computational imaging allow the synthesis of a virtual aperture of dimension dependent on the along-track path length, rather than the physical aperture size. This larger synthetic aperture produces an image of higher resolution than would otherwise be achievable. In OCT the limitations on the size of the physical aperture (i.e., the objective lens) are not the limiting factor; rather, the image acquisition time becomes prohibitively long if the focal plane must be scanned through an object of extended depth. The computational imaging in ISAM gives diffraction-limited resolution in all planes, not just at the physical focus, and hence eliminates the need for focal-plane scanning.
ISAM and SAR are examples in the broad class of modalities known as computed imaging. Like almost all computed imaging modalities in common practice today, they are based on the solution of linear inverse problems. Linear inverse problems offer advantages such as the option to pre-compute and store the elements of an inversion kernel for rapid computation of images from data. Moreover, error and stability may be well understood and there exists a wealth of well-studied methods for regularization (stabilization) of the inversion algorithms. ISAM and SAR are also members of the more restrictive class of problems that may be cast as data resampling. To arrive at the resampling view of these problems, the data must be Fourier transformed and the resampled data Fourier transformed again. Thus the methods take advantage of (and are even reliant on) one of the greatest advances in applied mathematics in the last half-century, the fast Fourier transform [98]. They may be made to run very fast and are amenable to parallelization.

Acknowledgments

The authors would like to thank Gregory L. Charvat for providing the SAR data seen in Fig. 9. This research was supported in part by grants from the National Science Foundation (CAREER Award 0239265 to P.S.C.; BES 03-47747, BES 05-19920, BES 06-19257 to S.A.B.), the National Institutes of Health (1 R01 EB005221 to S.A.B.), and the Grainger Foundation (to S.A.B. and P.S.C).

References

  1. Röntgen, W. C. On a new kind of rays. Nature 1896, 1369, 274–276. [Google Scholar]
  2. Bloch, F.; Hansen, W. W.; Packard, M. The nuclear induction experiment. Phys. Rev. 1946, 70, 474–485. [Google Scholar]
  3. Carr, H. Y.; Purcell, E. M. Effects of diffusion on free precession in nuclear magnetic resonance experiments. Phys. Rev. 1954, 94, 630–638. [Google Scholar]
  4. Buderi, R. The Invention That Changed the World, 2nd ed.; Abacus, 1998. [Google Scholar]
  5. Sabra, A. I. The Optics of Ibn Al-Haytham.; The Warburg Institute, 1989. [Google Scholar]
  6. Schaller, R. R. Moore's law: past, present and future. IEEE Spectrum 1997, 34(6), 52–59. [Google Scholar]
  7. Cormack, A. M. Representation of a function by its line integrals, with some radiological applications. J. Appl. Phys. 1963, 34, 2722–2727. [Google Scholar]
  8. Hounsfield, G. N. Computerized transverse axial scanning (tomography): part I. Description of system. Br. J. Radiol. 1973, 46, 1016–1022. [Google Scholar]
  9. Lauterbur, P. C. Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature 1973, 242, 190–191. [Google Scholar]
  10. Curlander, J. C.; McDonough, R. N. Synthetic Aperture Radar: Systems and Signal Processing; Wiley-Interscience, 1991. [Google Scholar]
  11. Gough, P. T.; Hawkins, D. W. Unified framework for modern synthetic aperture imaging algorithms. Int. J. Imag. Syst. Technol. 1997, 8, 343–358. [Google Scholar]
  12. Carrara, W. G.; Goodman, R. S.; Majewski, R. M. Spotlight Synthetic Aperture Radar Signal Processing Algorithms.; Artech House, 1995. [Google Scholar]
  13. Davis, B. J.; Ralston, T. S.; Marks, D. L.; Boppart, S. A.; Carney, P. S. Autocorrelation artifacts in optical coherence tomography and interferometric synthetic aperture microscopy. Opt. Lett. 2007, 32, 1441–1443. [Google Scholar]
  14. Davis, B. J.; Schlachter, S. C.; Marks, D. L.; Ralston, T. S.; Boppart, S. A.; Carney, P. S. Nonparaxial vector-field modeling of optical coherence tomography and interferometric synthetic aperture microscopy. J. Opt. Soc. Am. A 2007, 24, 2527–2542. [Google Scholar]
  15. Marks, D. L.; Ralston, T. S.; Boppart, S. A.; Carney, P. S. Inverse scattering for frequency-scanned full-field optical coherence tomography. J. Opt. Soc. Am. A 2007, 24, 1034–1041. [Google Scholar]
  16. Marks, D. L.; Ralston, T. S.; Carney, P. S.; Boppart, S. A. Inverse scattering for rotationally scanned optical coherence tomography. J. Opt. Soc. Am. A 2006, 23, 2433–2439. [Google Scholar]
  17. Ralston, T. S.; Marks, D. L.; Carney, P. S.; Boppart, S. A. Inverse scattering for optical coherence tomography. J. Opt. Soc. Am. A 2006, 23, 1027–1037. [Google Scholar]
  18. Ralston, T. S.; Marks, D. L.; Carney, P. S.; Boppart, S. A. Interferometric synthetic aperture microscopy. Nat. Phys. 2007, 3, 129–134. [Google Scholar]
  19. Ralston, T. S.; Marks, D. L.; Boppart, S. A.; Carney, P. S. Inverse scattering for high-resolution interferometric microscopy. Opt. Lett. 2006, 31, 3585–3587. [Google Scholar]
  20. Ralston, T. S.; Marks, D. L.; Carney, P. S.; Boppart, S. A. Real-time interferometric synthetic aperture microscopy. Opt. Express 2008, 16, 2555–2569. [Google Scholar]
  21. Wheatstone, C. Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos. T. Roy. Soc. 1838, 128, 371–394. [Google Scholar]
  22. Kellermann, K. I.; Moran, J. M. The development of high-resolution imaging in radio astronomy. Annu. Rev. Astrophys. 2001, 39, 457–509. [Google Scholar]
  23. Markel, V. A.; Schotland, J. C. Inverse problem in optical diffusion tomography. I. Fourier-Laplace inversion formulas. J. Opt. Soc. Am. A 2001, 18, 1336–1347. [Google Scholar]
  24. Milstein, A. B.; Oh, S.; Webb, K. J.; Bouman, C. A.; Zhang, Q.; Boas, D. A.; MIllane, R. P. Fluorescence optical diffusion tomography. Appl. Opt. 2003, 42, 3081–3094. [Google Scholar]
  25. Boas, D. A.; Brooks, D. H.; Miller, E. L.; DiMarzio, C. A.; Kilmer, M.; Gaudette, R. J.; Zhang, Q. Imaging the body with diffuse optical tomography. IEEE Signal Proc. Mag. 2001, 18(6), 57–75. [Google Scholar]
  26. Ollinger, J. M.; Fessler, J. A. Positron-emission tomography. IEEE Signal Proc. Mag. 1997, 14(1), 43–55. [Google Scholar]
  27. Donoho, D. L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar]
  28. Candès, E. J.; Romberg, J.; Tao, T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar]
  29. Takhar, D.; Laska, J. N.; Wakin, M. B.; Duarte, M. F.; Baron, D.; Sarvotham, S.; Kelly, K. F.; Baraniuk, R. G. A new compressive imaging camera architecture using optical-domain compression. In Proc. Computational Imaging IV; volume 6065, SPIE, 2006. [Google Scholar]
  30. Sato, T.; Ueda, M.; Fukuda, S. Synthetic aperture sonar. J. Acoust. Soc. Am. 1973, 54, 799–802. [Google Scholar]
  31. Williams, R. E. Creating an acoustic synthetic aperture in the ocean. J. Acoust. Soc. Am. 1976, 60, 60–73. [Google Scholar]
  32. Hayes, M. P.; Gough, P. T. Broad-band synthetic aperture sonar. IEEE J. Oceanic Eng. 1992, 17, 80–94. [Google Scholar]
  33. Gray, S. H.; Etgen, J.; Dellinger, J.; Whitmore, D. Seismic migration problems and solutions. Geophysics 2001, 66, 1622–1640. [Google Scholar]
  34. Bleistein, N.; Cohen, J. K.; Stockwell, J. W. Mathematics of Multidimensional Seismic Imaging, Migration and Inversion; Springer, 2001. [Google Scholar]
  35. Angelsen, B. A. J. Ultrasound Imaging: Waves, Signals and Signal Processing.; Emantec, 2000. [Google Scholar]
  36. Nelson, T. R.; Pretorius, D. H. Three-dimensional ultrasound imaging. Ultrasound Med. Biol. 1998, 24, 1243–1270. [Google Scholar]
  37. Profio, A. E.; Doiron, D. R. Transport of light in tissue in photodynamic therapy. Photochem. Photobiol. 1987, 46, 591–599. [Google Scholar]
  38. Huang, D.; Swanson, E. A.; Lin, C. P.; Schuman, J. S.; Stinson, W. G.; Chang, W.; Hee, M. R.; Flotte, T.; Gregory, K.; Puliafito, C. A.; Fujimoto, J. G. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar]
  39. Schmitt, J. M. Optical coherence tomography (OCT): a review. IEEE J. Select. Topics Quantum Electron. 1999, 5, 1205–1215. [Google Scholar]
  40. Brezinski, M. E. Optical Coherence Tomography: Principles and Applications.; Academic Press, 2006. [Google Scholar]
  41. Zysk, A. M.; Nguyen, F. T.; Oldenburg, A. L.; Marks, D. L.; Boppart, S. A. Optical coherence tomography: a review of clinical development from bench to bedside. J. Biomed. Opt. 2007, 12, 051403. [Google Scholar]
  42. Young, T. A Course of Lectures on Natural Philosophy and the Mechanical Arts.; Joseph Johnson: London, 1807. [Google Scholar]
  43. Michelson, A. A.; Morley, E. W. On the relative motion of the Earth and the luminiferous ether. Am. J. Sci. 1887, 34, 333–345. [Google Scholar]
  44. Hariharan, P. Optical Interferometry.; Academic Press, 2003. [Google Scholar]
  45. Mandel, L.; Wolf, E. Optical Coherence and Quantum Optics; Cambridge University, 1995. [Google Scholar]
  46. Saleh, B. E. A.; Teich, M. C. Fundamentals of Photonics; chapter 3; pp. 80–107. John Wiley and Sons, 1991. [Google Scholar]
  47. Gabor, D.; Goss, W. P. Interference microscope with total wavefront reconstruction. J. Opt. Soc. Am. 1966, 56, 849–858. [Google Scholar]
  48. Arons, E.; Leith, E. Coherence confocal-imaging system for enhanced depth discrimination in transmitted light. Appl. Opt. 1996, 35, 2499–2506. [Google Scholar]
  49. Leith, E. N.; Mills, K. D.; Naulleau, P. P.; Dilworth, D. S.; Iglesias, I.; Chen, H. S. Generalized confocal imaging and synthetic aperture imaging. J. Opt. Soc. Am. A 1999, 16, 2880–2886. [Google Scholar]
  50. Chien, W.-C.; Dilworth, D. S.; Elson, L.; Leith, E. N. Synthetic-aperture chirp confocal imaging. Appl. Opt. 2006, 45, 501–510. [Google Scholar]
  51. Davis, B. J.; Ralston, T. S.; Marks, D. L.; Boppart, S. A.; Carney, P. S. Interferometric synthetic aperture microscopy: physics-based image reconstruction from optical coherence tomography data. In International Conference on Image Processing; volume 4, pp. 145–148. IEEE, 2007. [Google Scholar]
  52. Born, M.; Wolf, E. Principles of Optics, 6th ed.; Cambridge University, 1980. [Google Scholar]
  53. Turin, G. An introduction to matched filters. IEEE Trans. Inf. Theory 1960, 6, 311–329. [Google Scholar]
  54. Wahl, D. E.; Eichel, P. H.; Ghiglia, D. C.; Jakowatz, C. V., Jr. Phase gradient autofocus—a robust tool for high resolution SAR phase correction. IEEE Trans. Aero. Elec. Sys. 1994, 30, 827–835. [Google Scholar]
  55. Ralston, T. S.; Marks, D. L.; Carney, P. S.; Boppart, S. A. Phase stability technique for inverse scattering in optical coherence tomography. 3rd International Symposium on Biomedical Imaging; 2006; pp. 578–581. [Google Scholar]
  56. Fercher, A. F.; Hitzenberger, C. K.; Kamp, G.; El-Zaiat, S. Y. Measurement of intraocular distances by backscattering spectral interferometry. Opt. Commun. 1996, 117, 43–48. [Google Scholar]
  57. Choma, M. A.; Sarunic, M. V.; Yang, C.; Izatt, J. A. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Opt. Express 2003, 11, 2183–2189. [Google Scholar]
  58. Leitgeb, R.; Hitzenberger, C. K.; Fercher, A. F. Performance of Fourier domain vs. time domain optical coherence tomography. Opt. Express 2003, 11, 889–894. [Google Scholar]
  59. Bhargava, R.; Wang, S.-Q.; Koenig, J. L. FTIR microspectroscopy of polymeric systems. Adv. Polym. Sci. 2003, 163, 137–191. [Google Scholar]
  60. Griffiths, P. R.; De Haseth, J. A. Fourier Transform Infrared Spectrometry, 2nd ed.; Wiley-Interscience, 2007. [Google Scholar]
  61. Papoulis, A.; Pillai, S. U. Probability, Random Variables and Stochastic Processes; chapter 11.4; pp. 513–522. McGraw-Hill, 2002. [Google Scholar]
  62. Zhao, Y.; Chen, Z.; Saxer, C.; Xiang, S.; de Boer, J. F.; Nelson, J. S. Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity. Opt. Lett. 2000, 25, 114–116. [Google Scholar]
  63. Potton, R. J. Reciprocity in optics. Rep. Prog. Phys. 2004, 67, 717–754. [Google Scholar]
  64. Marks, D. L.; Oldenburg, A. L.; Reynolds, J. J.; Boppart, S. A. A digital algorithm for dispersion correction in optical coherence tomography. Appl. Opt. 2003, 42, 204–217. [Google Scholar]
  65. Marks, D. L.; Oldenburg, A. L.; Reynolds, J. J.; Boppart, S. A. Autofocus algorithm for dispersion correction in optical coherence tomography. Appl. Opt. 2003, 42, 3038–3046. [Google Scholar]
  66. Richards, B.; Wolf, E. Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system. Proc. R. Soc. London A 1959, 253, 358–379. [Google Scholar]
  67. Hansen, P. C. Numerical tools for analysis and solution of Fredholm integral equations of the first kind. Inverse Prob. 1992, 8, 849–872. [Google Scholar]
  68. Karl, W. C. Handbook of Image and Video Processing; chapter Regularization in Image Restoration and Reconstruction; pp. 141–161. Academic, 2000. [Google Scholar]
  69. Vogel, C. R. Computational Methods for Inverse Problems; SIAM, 2002. [Google Scholar]
  70. Wiener, N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series; The MIT Press, 1964. [Google Scholar]
  71. Gazdag, J.; Sguazzero, P. Migration of seismic data. Proc. IEEE 1984, 72, 1302–1315. [Google Scholar]
  72. Stolt, R. H. Migration by Fourier transform. Geophysics 1978, 43, 23–48. [Google Scholar]
  73. Langenberg, K. J.; Berger, M.; Kreutter, T.; Mayer, K.; Schmitz, V. Synthetic aperture focusing technique signal processing. NDT Int. 1986, 19, 177–189. [Google Scholar]
  74. Mayer, K.; Marklein, R.; Langenberg, K. J.; Kreutter, T. Three-dimensional imaging system based on Fourier transform synthetic aperture focusing technique. Ultrasonics 1990, 28, 241–255. [Google Scholar]
  75. Schmitz, V.; Chakhlov, S.; Müller. Experiences with synthetic aperture focusing technique in the field. Ultrasonics 2000, 38, 731–738. [Google Scholar]
  76. Passmann, C.; Ermert, H. A 100-MHz ultrasound imaging system for dermatologic and ophthalmologic diagnostics. IEEE Trans. Ultrason. Ferr. 1996, 43, 545–552. [Google Scholar]
  77. Cafforio, C.; Prati, C.; Rocca, F. SAR data focusing using seismic migration techniques. IEEE Trans. Aero. Elec. Sys. 1991, 27, 194–207. [Google Scholar]
  78. Ralston, T. S.; Marks, D. L.; Kamalabadi, F.; Boppart, S. A. Deconvolution methods for mitigation of transverse blurring in optical coherence tomography. IEEE Trans. Image Proc. 2005, 14, 1254–1264. [Google Scholar]
  79. Charvat, G. L. A Low-Power Radar Imaging System. PhD thesis, Michigan State University, 2007. [Google Scholar]
  80. Charvat, G. L.; Kempel, L. C.; Coleman, C. A low-power, high sensitivity, X-band rail SAR imaging system. (to be published). IEEE Antenn. Propag. Mag. 2008, 50. [Google Scholar]
  81. Izatt, J. A.; Hee, M. R.; Owen, G. M.; Swanson, E. A.; Fujimoto, J. G. Optical coherence microscopy in scattering media. Opt. Lett. 1994, 19, 590–592. [Google Scholar]
  82. Abouraddy, A. F.; Toussaint, K. C., Jr. Three-dimensional polarization control in microscopy. Phys. Rev. Lett. 2006, 96, 153901. [Google Scholar]
  83. Beversluis, M. R.; Novotny, L.; Stranick, S. J. Programmable vector point-spread function engineering. Opt. Express 2006, 14, 2650–2656. [Google Scholar]
  84. Novotny, L.; Beversluis, M. R.; Youngworth, K. S.; Brown, T. G. Longitudinal field modes probed by single molecules. Phys. Rev. Lett. 2001, 86, 5251–5254. [Google Scholar]
  85. Quabis, S.; Dorn, R.; Leuchs, G. Generation of a radially polarized doughnut mode of high quality. Appl. Phys. B 2005, 81, 597–600. [Google Scholar]
  86. Sick, B.; Hecht, B.; Novotny, L. Orientational imaging of single molecules by annular illumination. Phys. Rev. Lett. 2000, 85, 4482–4485. [Google Scholar]
  87. Toprak, E.; Enderlein, J.; Syed, S.; McKinney, S. A.; Petschek, R. G.; Ha, T.; Goldman, Y. E.; Selvin, P. R. Defocused orientation and position imaging (DOPI) of myosin V. PNAS 2006, 103, 6495–6499. [Google Scholar]
  88. Butcher, P. N.; Cotter, D. The Elements of Nonlinear Optics; chapter 5.2; pp. 131–134. Cambridge University, 1990. [Google Scholar]
  89. Akiba, M.; Chan, K. P.; Tanno, N. Full-field optical coherence tomography by two-dimensional heterodyne detection with a pair of CCD cameras. Opt. Lett. 2003, 28, 816–818. [Google Scholar]
  90. Dubois, A.; Moneron, G.; Grieve, K.; Boccara, A. C. Three-dimensional cellular-level imaging using full-field optical coherence tomography. Phys. Med. Biol. 2004, 49, 1227–1234. [Google Scholar]
  91. Dubois, A.; Vabre, L.; Boccara, A.-C.; Beaurepaire, E. High-resolution full-field optical coherence tomography with a Linnik microscope. Appl. Opt. 2002, 41, 805–812. [Google Scholar]
  92. Laude, B.; De Martino, A.; Drévillon, B.; Benattar, L.; Schwartz, L. Full-field optical coherence tomography with thermal light. Appl. Opt. 2002, 41, 6637–6645. [Google Scholar]
  93. Považay, B.; Unterhuber, A.; Hermann, B.; Sattmann, H.; Arthaber, H.; Drexler, W. Full-field time-encoded frequency-domain optical coherence tomography. Opt. Express 2006, 14, 7661–7669. [Google Scholar]
  94. Devaney, A. J. Reconstructive tomography with diffracting wavefields. Inverse Prob. 1986, 2, 161–183. [Google Scholar]
  95. Pan, S. X.; Kak, A. C. A computational study of reconstruction algorithms for diffraction tomography: interpolations versus filtered backpropagation. IEEE Trans. Acoust. Speech Signal Proc. 1983, ASSP-31, 1262–1275. [Google Scholar]
  96. Wolf, E. Three-dimensional structure determination of semi-transparent objects from holographic data. Opt. Commun. 1969, 1, 153–156. [Google Scholar]
  97. Marks, D. L.; Davis, B. J.; Boppart, S. A.; Carney, P. S. Partially coherent illumination in full-field interferometric synthetic aperture microscopy (submitted). J. Opt. Soc. Am. A 2008. [Google Scholar]
  98. Brigham, E. The Fast Fourier Transform and Its Applications; Prentice-Hall, 1988. [Google Scholar]
Figure 1. A basic illustration of an OCT system. Light traveling in one arm of a Michelson interferometer is focused into the sample. The length of the reference arm can be adjusted using a moveable mirror. The reference light and the light backscattered from the sample interfere at the detector.
Figure 2. Illustration of focusing in OCT and the trade-off between depth of focus and resolution (figure adapted from [Ref. 17]). In OCT the light is implicitly assumed to be perfectly collimated in a pencil beam. In reality the light must diverge away from the focus. In low numerical aperture systems the beam width ω0 and the depth of focus b are both large. In high numerical aperture systems a tight focal width implies a small depth of field. Axial resolution depends on the coherence length, Lc, of the broadband source.
Figure 3. An illustration of the differences between the data acquisition geometries in SAR and ISAM. SAR involves a one-dimensional scan track, while ISAM scans over a plane. Unlike SAR beams, ISAM fields include a region within the object that is in focus. Note that the same aperture is assumed for both transmission and reflection in SAR; similarly the source is imaged onto the detector by the reference arm in ISAM (see Fig. 1). This figure is adapted from [Ref. 51].
Figure 4. A geometric illustration of the Stolt mapping relating a point [q, k(ω)] in the Fourier-domain data to a point [q, −2kz (q/2, ω)] in the Fourier-domain object. Note that the ω dependence of the displayed quantities has been dropped for convenience. This figure is adapted from [Ref. 14].
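The Stolt mapping in the Figure 4 caption can be made concrete with a short numerical sketch. The snippet below is an illustrative, simplified implementation (scalar fields, a hypothetical `stolt_resample` helper, and pre-gridded data are all assumptions, not part of the original article): it resamples Fourier-domain data S(q, k) onto the object spectrum at [q, β] with β = −2 kz(q/2) = −2 √(k² − q²/4), by inverting the mapping to find which measured wavenumber contributes to each requested β.

```python
import numpy as np

def stolt_resample(S, q, k, beta):
    """Resample Fourier-domain data S(q, k) onto the object grid via the
    ISAM Stolt mapping beta = -2*sqrt(k^2 - q^2/4).

    S    : 2-D complex array; row i holds the spectrum measured at q[i]
    q    : 1-D array of transverse spatial frequencies
    k    : 1-D ascending array of illumination wavenumbers
    beta : 1-D array of axial spatial frequencies to interpolate onto
    """
    eta = np.zeros((len(q), len(beta)), dtype=complex)
    for i, qi in enumerate(q):
        # Invert the mapping: the requested beta was produced by the
        # wavenumber k = sqrt(beta^2/4 + q^2/4).
        k_needed = np.sqrt(beta**2 / 4.0 + qi**2 / 4.0)
        # Linearly interpolate the measured spectrum at those wavenumbers;
        # points outside the measured band are zeroed (no extrapolation).
        eta[i] = np.interp(k_needed, k, S[i].real, left=0.0, right=0.0) \
               + 1j * np.interp(k_needed, k, S[i].imag, left=0.0, right=0.0)
    return eta
```

In practice this one-dimensional interpolation along k, applied independently for each transverse frequency q, is the computational core of the ISAM (and frequency-domain SAR) reconstruction; an inverse FFT of the resampled grid then yields the spatial-domain image.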
Figure 5. Simulated OCT image from a point scatterer located at (0, 0, 10)μm (a) and the real part of the corresponding Fourier representation (c). The ISAM Fourier resampling takes the data shown in (c) to the reconstruction of (d). The corresponding spatial-domain ISAM reconstruction is shown in (b). The ISAM reconstruction describes the point scatterer accurately, while defocus is clearly observed in the OCT image. Note that the two-dimensional images shown represent one plane of three-dimensional functions. This figure is adapted from [Ref. 51].
Figure 6. Images of titanium dioxide scatterers—OCT image before dispersion compensation (a), OCT image after dispersion compensation (b), and ISAM reconstruction (c). This figure is adapted from [Ref. 18].
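The dispersion compensation applied between panels (a) and (b) of Figure 6 can be performed numerically on the measured spectrum [Refs. 64, 65]. The sketch below is a minimal illustration, not the authors' implementation: it assumes the dispersion mismatch is well modeled by a quadratic-plus-cubic phase in (k − k0), with coefficients `a2`, `a3` taken as known (in practice they can be found by maximizing image sharpness, in the spirit of autofocus methods).

```python
import numpy as np

def compensate_dispersion(spectrum, k, a2=0.0, a3=0.0, k0=None):
    """Cancel material dispersion by multiplying the measured spectral
    interferogram by the conjugate of a polynomial phase model.

    spectrum : 1-D complex spectral interferogram sampled at wavenumbers k
    a2, a3   : assumed quadratic and cubic dispersion coefficients
    k0       : expansion wavenumber (defaults to the band center)
    """
    if k0 is None:
        k0 = k.mean()  # expand the phase about the center wavenumber
    phase = a2 * (k - k0)**2 + a3 * (k - k0)**3
    return spectrum * np.exp(-1j * phase)  # dispersion-corrected spectrum
```

Because the correction is a pure phase multiplication in the spectral domain, it is exactly invertible and introduces no amplitude distortion.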
Figure 7. Planar xy slices of the OCT image volume (d) and the ISAM reconstruction volume (e). Three planes are shown, with details of extent 80μm×80μm for each. The planes are located at z = − 1100μm (c,h), z = − 475μm (b,g) and z = 240μm (a,f), where z = 0 is the focal plane. This figure is adapted from [Ref. 18].
Figure 8. Three-dimensional renderings of the OCT (a) and ISAM (b) images of titanium dioxide scatterers. Out of focus blur can be seen in the OCT image, while the ISAM reconstruction has isotropic resolution. Note that the axial axis has been scaled by a factor of 0.25 for display purposes. This figure is adapted from [Ref. 18].
Figure 9. Raw strip-map radar image of a 1:32 scale model of an F-14 fighter aircraft before Stolt Fourier resampling (a), and after Stolt Fourier resampling (b).
Figure 10. Breast tissue is imaged according to the geometry illustrated in the rendering in the upper left. Data are shown in the xy plane for two different values of z. Plane A is at z = − 643 μm, while plane B is at z = − 591 μm. ISAM image reconstruction can be seen to produce a significant improvement in image quality over the unprocessed OCT data in both planes. The ISAM reconstructions exhibit comparable features to histological sections. This figure is adapted from [Ref. 18].
Figure 11. An illustration of the rotationally-scanned ISAM system. A single-mode fiber delivers light to focusing optics which project the beam into the object. The beam is scanned linearly inside a catheter sheath and is rotated about the long catheter axis. This figure is adapted from [Ref. 16].