Article

Rotation Invariant Parallel Signal Processing Using a Diffractive Phase Element for Image Compression

1 Faculty of Engineering, Université de Moncton, Moncton, NB E1A 3E9, Canada
2 Spectrum of Knowledge Production & Skills Development, Sfax 3027, Tunisia
3 Department of Electrical and Electronic Engineering Science, University of Johannesburg, Johannesburg 2006, South Africa
Appl. Sci. 2022, 12(1), 439; https://doi.org/10.3390/app12010439
Submission received: 22 November 2021 / Revised: 13 December 2021 / Accepted: 18 December 2021 / Published: 3 January 2022
(This article belongs to the Special Issue Optics in Information and Communication Technologies)

Abstract

We propose a new rotation invariant correlator based on dimensionality reduction. A diffractive phase element is used to focus the image data into a line, which serves as input for a conventional correlator. The diffractive element sums the information over each radius of the scene image and projects the result onto one point of a line located at a certain distance behind the image. The method is flexible to a large extent and can accommodate parallel pattern recognition and classification as well as further geometrical invariance. Although the new technique is inspired by circular harmonic decomposition, it does not suffer from energy loss. A theoretical analysis, as well as examples, are given.

1. Introduction

Many methods have been used to achieve rotation invariant pattern recognition [1]. Recent methods are based on deep learning techniques [2,3]. The wedge-ring detector method is one approach that also exhibits scale invariance properties [4,5]. An analysis of the different techniques for recognizing and detecting objects under extreme scale variations was presented in [6]. One of the most frequently used filters is the circular harmonic filter (CHF). The correlation using such a filter is invariant under rotation of the scene image but suffers from the fact that only one circular harmonic component (CHC) is used. As a result, the correlation peak is not sufficiently sharp.
To relax this limitation, various methods have been proposed, and these attempts are still the object of active research. One should emphasize, however, that in terms of the critical parameters of optical efficiency, sharpness of the correlation peaks, peak-to-sidelobe ratio, and resistance to noise, circular harmonic techniques yield results as good as or better than competing methods. This paper presents a rotation invariant approach based on the lossless optical implementation of one circular harmonic component by means of a diffractive optical element. Both the scene image and the reference are projected onto one CHC. For clarity, we will use the zero-order CHC, but other CHCs can be used as well.
The diffractive element sums information over each radius of the scene image and projects the result onto one point of a line located at a certain distance behind the image. The method is, to a large extent, flexible and might include parallel pattern recognition and classification as well as further geometrical invariance. The database of references (of fingerprints of subscribers in a bank, for example) is compressed.
The contributions of this paper may be summarized as follows.
  • Compression is performed in a way that the rotated images give the same compressed data.
  • In contrast to CHC filters, rotation invariance is ensured without any significant energy loss.
  • By maintaining rotation invariance, the image compression technique allows parallel data processing.
  • The proposed method might be used to further add geometrical invariance, such as scale invariance, given that during the compressing task a scale factor can be easily included by means of the diffractive compressing element.
The remainder of the paper is organized as follows. Section 2 provides an overview on related works. Section 3 presents a mathematical analysis of the issue of rotation invariance. Section 4 proposes a possible optical implementation. An extension of the proposed architecture is given in Section 5. Results are presented in Section 6. Finally, Section 7 presents concluding remarks.

2. Related Works

Full rotation invariance can be ensured by using a matched filter built from a CHC of the reference [7]. In other words, the reference is replaced by one of its CHCs. To improve the peak sharpness and/or the discrimination ability of the classical CHF, several designs based on CHCs were proposed, such as the CH covariance filter [8], the phase-only CHF [9], and the phase-derived CHF [9]. These improvements, however, came at the cost of a decrease in the signal-to-noise ratio (SNR). It is fair to say that the oldest design, the classical CHF, yields the highest SNR among the CHF family. In addition, the classical CHF maximizes the SNR while maintaining in-plane rotation invariance [10].
The SNR decreases as the order of the CHC is increased. The zero-order CHF is thus the best choice for pattern recognition under noisy conditions [10]. Unfortunately, low-order CHFs tend to produce broad correlation peaks. This limitation can be overcome: we can achieve, on the one hand, strong noise resistance and, on the other hand, sharp correlation peaks in addition to high optical efficiency. The technique consists of projecting both the scene image and the reference onto the zero-order CHC. The discrimination ability remains unchanged; the low discrimination ability of CHFs is a direct consequence of image compression. The proposed approach can be combined with various design techniques to improve this criterion; however, this is not the objective of this work. The resulting image compression into one dimension can be extended to include parallel image processing and further geometrical invariance by using the second spatial dimension. An optical setup, including a bank of one-dimensional patterns, is proposed for parallel classification, where a dataset of scene images is entered simultaneously.

3. Analysis

A circular harmonic filter is one component from the circular harmonic expansion [7]:
$$ f(r,\theta) = \sum_{m=-\infty}^{+\infty} f_m(r)\, \exp(i m \theta) \qquad (1) $$
with
$$ f_m(r) = \int_0^{2\pi} f(r,\theta)\, \exp(-i m \theta)\, d\theta \qquad (2) $$
The zero-order CHC is merely the sum of information over each ring with radius r:
$$ \tilde{f}(r) = f_0(r) = \int_0^{2\pi} f(r,\theta)\, d\theta \qquad (3) $$
Rotated images have the same zero-order CHC. When we apply the correlation operator between the zero-order CHCs of the image g and the reference f, a correlation peak is obtained if the input scene image is a rotated version of the reference, including rotation by zero degrees. We thus produce a new input scene image g̃ and a new reference f̃, which are the zero-order CHCs of g and f, respectively. The expected correlation peak should be sharp because, in contrast to the conventional method, we do not select one component from an infinite expansion. Apart from the fact that both the input scene image and the reference are substituted by two derived images, conventional correlation is applied, which should give a sharp correlation peak. Moreover, various correlation methods can be applied in conjunction with this image compression.
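As an illustration, the discrete analogue of Equation (3) can be sketched by radial binning. This is only a numerical sketch (the function name and bin layout are our own choices); it sums the pixels over each ring and checks that a rotated image yields the same one-dimensional profile:

```python
import numpy as np

def zero_order_chc(img, n_bins=None):
    """Sum image values over rings of equal radius (discrete Eq. (3)).

    Returns a 1-D profile f~(r): the zero-order circular harmonic component.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx)
    n_bins = n_bins or int(r.max()) + 1
    idx = np.minimum(r.astype(int), n_bins - 1)
    # np.bincount sums all pixels falling into the same radial bin
    return np.bincount(idx.ravel(), weights=img.ravel(), minlength=n_bins)

# A centered image and its 90-degree rotation give the same profile
img = np.zeros((65, 65))
img[20:45, 30:35] = 1.0                      # a bar through the center
profile = zero_order_chc(img)
profile_rot = zero_order_chc(np.rot90(img))  # rotated scene
assert np.allclose(profile, profile_rot)
```

Note that the compression is lossless in energy: the sum of the profile equals the total energy of the image, in contrast to selecting a single CHC of the original expansion.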
What is really performed is the operation g̃ ⊗ f̃, where g and f are the original input scene image and reference, and ⊗ denotes the correlation operator. The new reference f̃ is calculated numerically, whereas g̃ is provided optically. The expected correlation peak arises on the optical axis because both g and f are projected onto the zero-order CHC. In practice, this peak is likely to be hidden by the zero order of diffraction, a useless bright spot on the optical axis. This diffraction order generally results from uncertainty in the fabrication process. To overcome this problem, we can laterally shift f̃ by a certain amount (x0, y0). The correlation peak is then laterally shifted by the same distance (x0, y0), and the detector can be fixed in one place for all inputs, because its position is known a priori.
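The shifted-reference trick can be sketched in one dimension, assuming the compressed patterns are plain numpy vectors (all names below are illustrative, not from the paper). Shifting the numerically computed reference by x0 moves the correlation peak to a position known a priori:

```python
import numpy as np

rng = np.random.default_rng(0)
f_t = rng.random(64)         # compressed reference f~ (illustrative data)
g_t = f_t.copy()             # scene = rotated reference -> same compressed data
x0 = 10
f_shift = np.roll(f_t, x0)   # laterally shifted reference

# Circular cross-correlation via FFT; with the shifted reference, the
# autocorrelation peak moves from the origin to the known offset x0
corr = np.fft.ifft(np.conj(np.fft.fft(g_t)) * np.fft.fft(f_shift)).real
peak = np.argmax(corr)       # lands at the a-priori known offset x0
```

A fixed detector can then be placed at that offset, away from the zero diffraction order on the axis.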
In the most general case, the zero-order CHC of the reference is numerically calculated. The lateral shift is also integrated in the numerical procedure. The main difficulty lies in the optical implementation of the projection of the scene image onto its zero-order CHC.
It is worth noting that the image data is squeezed, or collapsed, into a line that contains all the information of the scene image. This differs from the conventional concept of image compression and results from the fact that rotation invariance is combined with image compression.

4. Optical Implementation

Inspired by Equations (1)–(3), we intend to produce the ring-to-point transformation illustrated in Figure 1 (ignore the dashed lines for now). The objective is to optically implement the integral of Equation (3). All information over each ring of radius r is summed, and the result is projected onto a point shifted by r from the optical axis, in a response plane located at a distance z behind the object. The result of the transformation is thus a line segment, which can be freely oriented in the response plane. For a given radius r, the rays covering the optical paths s(r,θ) must arrive at the collecting point with the same phase, that is, the phase corresponding to s(r,0). To satisfy this constraint, we introduce an additional phase distribution p(r,θ) in the plane of the object fulfilling the following condition:
$$ p(r,\theta) + \frac{2\pi}{\lambda}\, s(r,\theta) = p(r,0) + \frac{2\pi}{\lambda}\, s(r,0) + 2k\pi \qquad (4) $$
where k is an integer and s(r,0) = z.
By choosing, for instance, p(r,0) = 0, the required phase distribution is expressed as follows (see Figure 1):
$$ p(r,\theta) = \frac{2\pi}{\lambda}\, z - \frac{2\pi}{\lambda} \sqrt{z^2 + 4 r^2 \sin^2\frac{\theta}{2}} + 2k\pi \qquad (5) $$
In the framework of the paraxial assumption, we can use the approximation $\sqrt{1+x} \approx 1 + \frac{x}{2}$ for $x \ll 1$ (this approximation also underlies the Fresnel transform [11,12]):
$$ \sqrt{1 + \frac{4 r^2}{z^2} \sin^2\frac{\theta}{2}} \approx 1 + \frac{2 r^2}{z^2} \sin^2\frac{\theta}{2} $$
and the required transmittance t(r,θ) is obtained (for k = 0):
$$ t(r,\theta) = \exp\big(i\, p(r,\theta)\big) = \exp\left(-\frac{4 i \pi r^2}{\lambda z} \sin^2\frac{\theta}{2}\right) \qquad (6) $$
We note that if the implementation of the m-order CHC is required, we need only multiply expression (6) by the term exp(−imθ). For instance, one can use the second-order CHF (m = 2), which is often a good compromise between an acceptable SNR and the sharpness of the correlation peak [10]. It is also possible to combine several CHCs. This, however, goes beyond the scope of this work.
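A sampled version of the transmittance (6) might look as follows. The wavelength, distance z, and pixel pitch below are arbitrary illustrative values, not design figures from the paper; a real element needs a pitch fine enough for the dense outer rings discussed later in this section:

```python
import numpy as np

lam, z, N = 0.5e-6, 0.1, 512          # wavelength (m), distance (m), grid size
pitch = 10e-6                         # sampling pitch (m), illustrative
y, x = (np.indices((N, N)) - N // 2) * pitch
r = np.hypot(x, y)
theta = np.arctan2(y, x)
# Eq. (6): phase delay compensating the path difference of each ring
phase = -4 * np.pi * r**2 / (lam * z) * np.sin(theta / 2) ** 2
t = np.exp(1j * phase)                # pure phase transmittance, |t| = 1
```

Along the positive half of the horizontal axis (θ = 0) the phase is zero, consistent with the choice p(r,0) = 0 above.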
We need a diffractive phase element with the transmittance given by relation (6), which we refer to as the "diffractive compressing element" (DCE). This Fresnel diffractive element has a continuous phase profile (kinoform) [13]. Figure 2 shows a quantized version of such an element, where only four phase levels are used: the phase profile is projected onto the closest level in the set {0, π/2, π, 3π/2}. In the neighborhood of the bottom half of the vertical axis, the profile is similar to that of a one-dimensional Fresnel lens. Over each ring r of the diffractive element, the phase corresponds to a two-dimensional off-axis Fresnel lens, where the off-axis translation corresponds to r. To decrease the resolution needed by the Fresnel diffractive element, we can place a spherical lens beside the diffractive element [14,15]; the phase distribution of the DCE then becomes the difference between that of Figure 2 and that of the inserted lens. It is worth noting that the phase pattern becomes increasingly dense towards the boundary areas. The smallest feature should be large compared to the wavelength (say, more than five times larger) so that we remain within the scope of scalar diffraction.
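The four-level quantization of Figure 2 amounts to projecting each phase value onto the closest level; a minimal sketch of that projection (function name is our own):

```python
import numpy as np

def quantize_phase(phase, n_levels=4):
    """Project a continuous phase profile onto the closest of n_levels
    equally spaced levels, e.g. {0, pi/2, pi, 3*pi/2} for n_levels = 4."""
    step = 2 * np.pi / n_levels
    wrapped = np.mod(phase, 2 * np.pi)
    # round to the nearest level; level n_levels wraps back to 0
    return (np.round(wrapped / step) % n_levels) * step

phi = np.array([0.1, np.pi / 3, np.pi, 5.9])
levels = quantize_phase(phi)          # -> {0, pi/2, pi, 0}
```

As noted in Section 6, this simple nearest-level projection costs some diffraction efficiency; optimization methods can redistribute the quantization error.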
Figure 3 shows the setup of the correlator providing g̃ ⊗ f̃. The projection of the input scene image is implemented by a diffractive phase element placed just behind the input image. The result of the ring-to-point projection is observed at a distance z in the plane PC. The Fourier plane PF is, as usual, the filter plane. Using the same diffractive phase element, the filter can be optically implemented as well.
The DCE is a Fresnel diffractive phase element. It must have a resolution at least as high as the resolution of the input image. The smallest feature of the input image must correspond to at least one phase value of the DCE so that the projection into the line segment is correctly performed (Figure 1). To obtain better results, the resolution of the DCE must be higher than that of the input image.

5. Extensions of the Architecture

5.1. Parallel Pattern Recognition

Scale considerations can also be taken into account. For instance, we can sum the information over each ring of radius r and project the result onto a point shifted by the distance αr (instead of r) from the optical axis, yielding:
$$ t(r,\theta) = \exp\left(-\frac{i \pi r^2}{\lambda z} \left(1 + \alpha^2 - 2\alpha \cos\theta\right)\right) \qquad (7) $$
Here we used the Law of Cosines, also known as Al-Kashi’s theorem (see dashed lines in Figure 1). This technique can be used, for instance, to adapt the compressed data to the features of the spatial light modulator placed in the Fourier plane.
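As a quick consistency check (a sketch under the same paraxial assumptions, with illustrative numerical values), setting α = 1 in Equation (7) recovers Equation (6), since 1 + 1 − 2 cos θ = 4 sin²(θ/2):

```python
import numpy as np

def dce_phase(r, theta, lam, z, alpha=1.0):
    """Phase of Eq. (7): ring r is collected at distance alpha*r from the axis."""
    return -np.pi * r**2 / (lam * z) * (1 + alpha**2 - 2 * alpha * np.cos(theta))

r, theta, lam, z = 1e-3, 0.7, 0.5e-6, 0.1   # illustrative values
p_eq6 = -4 * np.pi * r**2 / (lam * z) * np.sin(theta / 2) ** 2
assert np.isclose(dce_phase(r, theta, lam, z, alpha=1.0), p_eq6)
```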
Owing to the image information compression, it is also possible to provide parallel pattern recognition. For each input scene image gm, we add, in the response plane of the diffractive phase element, i.e., the plane of the compressed data PC, a linear phase distribution with a certain slope, $\exp\left(i\,\frac{2\pi x_m r}{\lambda f_l}\right)$, where fl is the focal length of the lens used for the correlation. The Fourier transform of the resulting line segment, containing compressed image data, is laterally shifted by the amount xm in the filter (Fourier) plane PF. The different slopes can be generated by an array of mini prisms placed in the plane PC. An alternative consists of integrating the slopes into the diffractive element. Therefore, for each input image gm, we need a diffractive phase element with the transmittance:
$$ t_m(r,\theta) = \exp\left(\frac{i \pi r}{\lambda} \left[-\frac{r}{z}\left(1 + \alpha^2 - 2\alpha \cos\theta\right) + \frac{2 x_m}{f_l}\right]\right) \qquad (8) $$
We need a bank of references, where each elementary reference fm is compared to one input image gm (Figure 4a). The two-dimensional input images are arranged in matrix form. After compression by means of the diffractive compressing element (DCE), we obtain, in the plane PC, an array of line segments g̃m (m = 1, …, M). In the Fourier plane, each one-dimensional structure G̃m, i.e., the Fourier transform of the compressed image g̃m, is multiplied by F̃m*, the conjugate of the Fourier transform of the compressed pattern f̃m of the two-dimensional reference fm. The Fourier transform, as well as the compression of each reference fm, are performed numerically.
Instead of the output lens L2, we can use the subsystem of Figure 4b, which provides a one-dimensional Fourier transform, denoted FTy. The focal length of the lens placed in the middle of the subsystem of Figure 4b is twice the focal length fl of the two other identical lenses. The incident wavefront is imaged with respect to one dimension and Fourier-transformed with respect to the other. We note that the various input scene images can be illuminated by spatially incoherent monochromatic sources, such as a matrix of vertical-cavity surface-emitting lasers (VCSELs). Mutual spatial coherence of the sources is not necessary because the correlation products are performed independently.
Using the same technique as in Figure 4, scale and rotation invariance can be combined. In this case, several replicas of the input image g are entered simultaneously, and to each replica we attribute a scale factor αm and a translation factor xm. Therefore, for each replica, we need a diffractive phase element with the transmittance:
$$ t_m(r,\theta) = \exp\left(\frac{i \pi r}{\lambda} \left[-\frac{r}{z}\left(1 + \alpha_m^2 - 2\alpha_m \cos\theta\right) + \frac{2 x_m}{f_l}\right]\right) \qquad (9) $$
The filter bank, which is rotation invariant, possesses a two-dimensional structure (Figure 4), where each line segment F̃m*, the conjugate of the Fourier transform of f̃m, corresponds to a certain scale αm. The position of the correlation peak is determined by the positions of g̃m and f̃m.
Shift invariance is added to the system by sampling the intensity of the Fourier transforms of the input objects gm, for instance, in combination with a liquid crystal light valve [16]. In other words, what will be calculated is the correlation product of the CHCs of |Gm(u,v)|² and |Fm(u,v)|² instead of the CHCs of gm(x,y) and fm(x,y).
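The reason the Fourier intensity works as input is that |G(u,v)|² is unchanged by translation of the object. A minimal numerical sketch (translation modeled as a circular shift, with random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.random((32, 32))
g_shifted = np.roll(g, shift=(5, -3), axis=(0, 1))   # translated scene
# the Fourier intensity is identical for the image and its translation,
# so it can feed the rotation-invariant stage unchanged
spec = np.abs(np.fft.fft2(g)) ** 2
spec_shifted = np.abs(np.fft.fft2(g_shifted)) ** 2
assert np.allclose(spec, spec_shifted)
```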
An alternative for providing scale and rotation invariance consists of entering the input scene image (which is not replicated) and changing the transmittance of the diffractive compressing element. This element must possess the sum of all transmittances (9):
$$ t(r,\theta) = \frac{1}{M} \sum_{m=1}^{M} \exp\left(\frac{i \pi r}{\lambda} \left[-\frac{r}{z}\left(1 + \alpha_m^2 - 2\alpha_m \cos\theta\right) + \frac{2 x_m}{f_l}\right]\right) \qquad (10) $$
where M is the number of the required scale factors.
In general, the distribution of Equation (10) is not a phase distribution. A projection onto the set of phase distributions must therefore be undertaken, and the lateral shifts xm must be correspondingly optimized.
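A sketch of this multi-scale element (the function name, grid, and all numerical values are illustrative assumptions): the M single-scale transmittances of Equation (9) are summed, and the result, which is generally not phase-only, is projected back onto a pure phase distribution by keeping only its argument.

```python
import numpy as np

def multi_scale_dce(r, theta, lam, z, fl, alphas, xs):
    """Sum of Eq.-(9) transmittances (Eq. (10)), then phase-only projection."""
    t = np.zeros_like(r, dtype=complex)
    for a, x0 in zip(alphas, xs):
        t += np.exp(1j * np.pi * r / lam
                    * (-(r / z) * (1 + a**2 - 2 * a * np.cos(theta))
                       + 2 * x0 / fl))
    t /= len(alphas)
    # simple phase-only projection: keep the argument, drop the modulus
    return np.exp(1j * np.angle(t))

N = 64
y, x = (np.indices((N, N)) - N // 2) * 10e-6
r, theta = np.hypot(x, y), np.arctan2(y, x)
t = multi_scale_dce(r, theta, 0.5e-6, 0.1, 0.05,
                    alphas=[0.8, 1.0, 1.2], xs=[1e-3, 2e-3, 3e-3])
```

This naive argument-only projection is the simplest choice; as the text notes, the lateral shifts xm should be optimized jointly with the projection in practice.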

5.2. Parallel Pattern Classification

According to Figure 4, each input image is correlated with one filter of the bank. The setup can be modified so as to perform parallel classification of the input images. Each input image gm must be compared to all filters of the reference bank. For this purpose, the Fourier transform of each one-dimensional structure g̃m must be replicated in the filter plane. In Figure 5, this operation is performed by a one-dimensional Fourier transform denoted FTx. Each one-dimensional distribution G̃m, extended over the y-axis, is Fourier-transformed with respect to the x-axis. The distribution is then replicated to form a two-dimensional structure.
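The replication effect of FTx can be sketched discretely: if a compressed pattern occupies a single column, a 1-D Fourier transform along x spreads its y-profile over every x position (up to a unit phase factor), so that it faces every filter of the bank. All array sizes below are illustrative:

```python
import numpy as np

M, N = 4, 64
rng = np.random.default_rng(2)
bank_input = np.zeros((N, M))
bank_input[:, 1] = rng.random(N)             # one line segment in column 1
replicated = np.fft.fft(bank_input, axis=1)  # FTx: 1-D FFT along x
# every column of the result now carries the same y-profile magnitude
assert np.allclose(np.abs(replicated[:, 0]), np.abs(replicated[:, 3]))
```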
After compressing the input images by means of a multi-faceted diffractive compressing element (DCE), a two-dimensional Fourier transform is performed by means of a spherical lens L1 (Figure 5). Then, the resulting spectra are replicated by a subsystem similar to that of Figure 4. Each replica of one spectrum is multiplied by one one-dimensional pattern from the filter bank F ˜ 1 * to F ˜ N * . The correlation product, observed in the focal plane of the spherical lens L2, contains the Fourier transforms of all the products: G ˜ 1   F ˜ 1 * ,   G ˜ 1   F ˜ 2 * , …, G ˜ M   F ˜ N * .
The setup of Figure 5 allows for the parallel classification of M input images gm, where a bank of N references, fm, is used.
Rotation invariance is ensured, and scale invariance can be added by enlarging the reference bank. To add shift invariance, the intensity distribution of the Fourier transforms of the objects is taken as the input of the classification system.

6. Results

We focus our attention on rotation invariance. The input scene images are presented in Figure 6A. The reference image is shown in Figure 6A-a. Figure 6A-b,A-c are slightly laterally shifted and rotated versions of the reference image, whereas Figure 6A-d is a false image. The energy is normalized for the four images. To test the approach under noisy conditions, significant noise is deliberately added to the input image; the noise energy is four times that of the useful signal. We designed a binary phase-only filter by projecting the phase profile onto the closest phase level among {0, π}. Such filters can be implemented by a spatial light modulator if a programmable pattern recognition system is required.
The Matlab simulation results show that rotation invariance is ensured. Figure 6B also shows a high optical efficiency and good correlation peak sharpness. The optical efficiency is defined as the amount of input light that is detected for the determination of the correlation function; it is quantitatively measured by the Horner efficiency [17,18]. The energies of the peaks associated with the rotated images are lower than that corresponding to the reference. The approach is sensitive to lateral shift and degrades drastically if the lateral shift is in the range of the image size. It is therefore necessary, for instance, to use the Fourier transform of the input image to ensure shift invariance.
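The high optical efficiency of phase-only filtering can be illustrated with a small numerical sketch (this is our own illustration, not the paper's simulation): a lossless filter with |H| = 1 delivers all the input energy to the correlation plane, so the plane-wide Horner ratio is 1; how much of it concentrates in the peak window is what determines the measured Horner efficiency.

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.random((32, 32))                         # illustrative input
F = np.fft.fft2(f)
H = np.conj(F) / np.maximum(np.abs(F), 1e-12)    # phase-only filter, |H| = 1
corr = np.fft.ifft2(np.fft.fft2(f) * H)          # autocorrelation with POF
# ratio of correlation-plane energy to input energy (Parseval): equals 1
ratio = (np.abs(corr) ** 2).sum() / (f ** 2).sum()
```

A binary {0, π} filter, as used in the experiment, would quantize H and lose part of this energy into spurious diffraction orders.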
The diffractive compressing element was quantized onto four phase levels, {0, π/2, π, 3π/2}, by a simple projection onto the closest phase level. Because of this quantization, we noticed a non-negligible energy loss; it is therefore worth using optimization methods [15].

7. Conclusions

The method described here is based on dimensionality reduction by means of image compression. This compression is performed in such a way that rotated images give the same compressed data. In practice, this operation is implemented by a diffractive phase element referred to as a diffractive compressing element. Diffractive phase elements are becoming increasingly attractive, mainly because of technological progress in the fabrication of high resolution diffractive optical elements [19,20]. In our case, we can use a high resolution DCE to improve the discrimination ability. Indeed, the input image is then divided into rings of smaller width, and therefore less compression is performed.
In contrast to CHC filters, rotation invariance is ensured without any significant energy loss. The method makes the optical implementation of the filter possible as well. By maintaining rotation invariance, the image compression technique allows parallel data processing. This parallelism might be used for simultaneous pattern classification. Moreover, it might be used to further add geometrical invariance, such as scale invariance, given that during the compressing task a scale factor can be easily included by means of the diffractive compressing element. For technological reasons, this scale factor can be also used to fit the practical features of the filter, especially when spatial light modulators are used.
However, because of image compression, we obtain a relatively low discrimination ability in practice. Different images might have a similar zero-order circular harmonic component. An alternative might be the extension of the approach into the use of the Fresnel transform-based correlator [21].

Funding

The author thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) and the New Brunswick Innovation Foundation (NBIF) for the financial support of the global project. These granting agencies did not contribute to the design of the study or to the collection, analysis, and interpretation of data.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gualdron, O.; Arsenault, H. Optimum rotation-invariant filter for disjoint-noise scenes. Appl. Opt. 1996, 35, 2507–2513. [Google Scholar] [CrossRef] [PubMed]
  2. Salas, R.R.; Dokladal, P.; Dokladalova, E. Rotation Invariant Networks for Image Classification for HPC and Embedded Systems. Electronics 2021, 10, 139. [Google Scholar] [CrossRef]
  3. Jabra, B.M.; Ammar, A.; Koubaa, A.; Cheikhrouhou, O.; Hamam, H. AI-based Pilgrim Detection using Convolutional Neural Networks. In Proceedings of the 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 2–5 September 2020. [Google Scholar] [CrossRef]
  4. Stark, H. Applications of Optical Fourier Transforms; Academic Press: New York, NY, USA, 1982; pp. 167–170. [Google Scholar]
  5. George, N.; Wang, S.; Venable, D.L. Optical pattern recognition. In Proceedings of the 1989 International Congress On Optical Science And Engineering, Paris, France, 24–28 April 1989; Volume 1134. [Google Scholar] [CrossRef]
  6. Singh, B.; Davis, L.S. An Analysis of Scale Invariance in Object Detection-SNIP. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3578–3587. Available online: https://arxiv.org/pdf/1711.08189.pdf (accessed on 20 December 2021).
  7. Hsu, Y.N.; Arsenault, H. Optical pattern recognition using Circular harmonic expansion. Appl. Opt. 1982, 21, 4016–4019. [Google Scholar] [CrossRef] [PubMed]
  8. Leclerc, L.Y.; Sheng, Y.; Arsenault, H. Circular harmonic covariance filters for rotation invariant object recognition and discrimination. Opt. Commun. 1991, 85, 299–305. [Google Scholar] [CrossRef]
  9. Rosen, J.; Shamir, J. Circular harmonic phase filters for efficient rotation-invariant pattern recognition. Appl. Opt. 1988, 27, 2895–2899. [Google Scholar] [CrossRef] [PubMed]
  10. Gualdron, O.; Arsenault, H. Phase-derived circular harmonic filter. Opt. Commun. 1993, 1004, 32–34. [Google Scholar] [CrossRef]
  11. Goodman, J.W. Introduction to Fourier Optics; McGraw-Hill: New York, NY, USA, 1968. [Google Scholar]
  12. Hamam, H.; de la Tocnaye, J.L.d.B. Programmable Joint Fractional Talbot Computer Generated Holograms. J. Opt. Soc. Am. A 1995, 12, 314–324. [Google Scholar] [CrossRef]
  13. Lesem, L.B.; Hirsch, P.M.; Jordan, J.A. The kinoform: A new wavefront reconstruction device. IBM J. Res. Develop. 1969, 13, 150–155. [Google Scholar] [CrossRef]
  14. Asselin, D.; Arsenault, H.H.; Prévost, D. Optical circular sampling system for translation and rotation invariant pattern recognition. Opt. Commun. 1994, 110, 507–513. [Google Scholar] [CrossRef]
  15. Hamam, H. Digital holography-based steganography. Opt. Lett. 2010, 35, 4175–4177. [Google Scholar] [CrossRef] [PubMed]
  16. de la Tocnaye, J.L.d.B.; Hamam, H.; Moignard, R. Light Diffraction Device Using Reconfigurable Spatial Light Modulators and the Fractional Talbot Effect. US Patent US5617227, 1 April 1997. [Google Scholar]
  17. Horner, J.L. Light utilization in optical correlators. Appl. Opt. 1982, 21, 4511–4514. [Google Scholar] [CrossRef] [PubMed]
  18. Moreno, I.; Davis, J.A.; Gutierrez, B.K.; Sánchez-López, M.; Cottrell, D.M. A new method for obtaining higher-order optical correlation/convolution using a spatial light modulator with extended phase modulation. In Proceedings of the Optics and Photonics for Information Processing XV, San Diego, CA, USA, 1–5 August 2021; Volume 1184104. [Google Scholar] [CrossRef]
  19. Hamam, H. Hartley holograms. Appl. Opt. 1996, 35, 5286–5292. [Google Scholar] [CrossRef] [PubMed]
  20. Hamam, H. Design of Talbot array illuminators. Opt. Commun. 1996, 131, 359–370. [Google Scholar] [CrossRef]
  21. Hamam, H.; Arsenault, H.H. Fresnel transform based correlator. Appl. Opt. 1997, 36, 7408–7414. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Rotation invariant image compression: Ring-to-point transformation.
Figure 2. Diffractive compressing element (DCE) with four phase levels.
Figure 3. Setup of the correlator using a diffractive phase element for image compression. DCE: diffractive compressing element, g: input image.
Figure 4. Parallel pattern recognition using a bank of one-dimensional patterns. (a) Setup; (b) subsystem which provides a one-dimensional Fourier transform along the y-axis. The focal length of the lens placed in the middle of the subsystem is twice the focal length of the two other identical lenses.
Figure 5. Parallel image classification using a bank of one-dimensional patterns. FTx: one-dimensional Fourier transform with respect to the x-axis.
Figure 6. (A) Input images: (a) reference; (b,c) rotated versions of the correct image; (d) false image. (B) Response of the correlator for the four images in (A).

Hamam, H. Rotation Invariant Parallel Signal Processing Using a Diffractive Phase Element for Image Compression. Appl. Sci. 2022, 12, 439. https://doi.org/10.3390/app12010439