Review

The Development of Snapshot Multispectral Imaging Technology Based on Artificial Compound Eyes

1 Bionic Robot Key Laboratory of Ministry of Education, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2 Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing 314003, China
3 School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun 130013, China
4 Xi’an Modern Control Technology Research Institute, Xi’an 710018, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(4), 812; https://doi.org/10.3390/electronics12040812
Submission received: 14 December 2022 / Revised: 29 January 2023 / Accepted: 3 February 2023 / Published: 6 February 2023

Abstract

In this review, the advantages of multispectral imaging (MSI) over hyperspectral imaging for real-time spectral imaging are first analyzed, and the strengths and weaknesses of snapshot spectral imaging relative to other spectral imaging technologies are outlined. The technical characteristics of artificial compound eyes (ACEs) and multi-aperture imaging, together with the research significance of snapshot ACE multispectral imaging, are then introduced, and the classification and working principle of snapshot ACE multispectral imaging systems are described. According to how the optical imaging system is realized, ACE snapshot multi-aperture multispectral imaging systems are divided into planar and curved types. For planar compound eye spectral imaging, progress in multispectral imaging systems based on the thin observation module by bound optics (TOMBO) architecture and on the linear variable spectral filter is reviewed; three curved multispectral imaging systems are also introduced. Finally, snapshot ACE multispectral imaging technologies are briefly analyzed and compared. The results are intended to provide a comprehensive picture of the research status of snapshot multispectral multi-aperture imaging based on artificial compound eyes and to lay a foundation for further improving its overall performance.

1. Introduction

In comparison to hyperspectral imaging (HSI), multispectral imaging (MSI) uses a smaller number of bands in exchange for greater spatial resolution. It largely retains the high spectral resolution and strong recognition ability of an HSI system while offering the high spatial resolution and low distortion of a single-band array imaging system. In practice, the band combination is chosen according to the specific application scenario, the spectral characteristics, and the actual target, so that the coupling between bands is low and a good balance is achieved between the amount of information and its usability. This effectively overcomes the shortcomings of traditional hyperspectral data, namely the high data dimensionality, the large data volume and significant uncertainty involved in real-time processing, and the difficulty of sample selection. MSI is therefore more economical and convenient, and offers a high signal-to-noise ratio and simple data processing. As a result, MSI plays an increasingly important role in spectral imaging applications with demanding real-time requirements [1,2].
A spectral imaging system obtains a three-dimensional (3D) dataset of the target scene, called a data cube (x, y, λ), by collecting two-dimensional (2D) spatial information (x, y) together with one-dimensional spectral information (λ). As shown in Figure 1, spectral imaging can be divided into scanning and snapshot types according to how the complete data cube is acquired. Scanning methods fall into three categories: point, line, and wavelength scanning [3,4]. Point scanning (also known as whisk-broom scanning) and line scanning (also known as push-broom scanning) are spatial scanning methods, which build up the cube from 1D or 2D slices of the target scene acquired sequentially with the help of complex optical paths and scanning mechanisms; such systems therefore have a complex structure and large volume, and easily introduce motion and registration errors. Wavelength scanning (also known as the staring or frame-imaging type) requires exchanging filters, or an internal beam-splitting system, to obtain a 2D image of one spectral band at a time. In principle, scanning spectral imaging systems are therefore unable to image moving targets or rapidly changing scenes [5,6,7].
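As a rough illustration of why the acquisition mode matters for dynamic scenes, the sketch below counts the exposures needed to fill a data cube under each mode; the cube dimensions are hypothetical and not taken from any system discussed in this review.

```python
# Illustrative exposure counts for filling a data cube (x, y, lambda).
# The cube size is hypothetical and chosen only to make the arithmetic concrete.
nx, ny, n_bands = 512, 512, 8

exposures = {
    "point scan (whisk-broom)":  nx * ny,   # full spectrum at one spatial point per exposure
    "line scan (push-broom)":    nx,        # full spectrum along one spatial line per exposure
    "wavelength scan (staring)": n_bands,   # one full 2D frame per spectral band
    "snapshot":                  1,         # the entire cube in a single exposure
}

for mode, count in exposures.items():
    print(f"{mode:27s}: {count} exposures")
```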
A snapshot MSI system usually consists of multiple independent imaging units with different spectral characteristics, using one or more focal-plane array (FPA) detectors. Without any scanning mechanism, it can obtain multiple 2D spectral images simultaneously in a single exposure, and the 3D data cube is then assembled from these 2D data in post-processing. Able to capture dynamic or static objects located in dynamic environments, it is a powerful tool for accurately acquiring 3D data cubes. At the same time, a snapshot MSI system offers rapid acquisition, small size, compact structure, light weight, and portability. It has therefore become a current research hotspot, and a variety of snapshot MSI technologies have been developed and widely applied in military reconnaissance, public security and criminal investigation, biomedicine, life science, food safety, ecological monitoring, precision agriculture, and other military and civilian fields [3,4,7,8,9].
In nature, the compound eyes of insects are considered an ideal miniaturized multi-aperture, large field-of-view (FOV) optical imaging system with good, intelligent detection abilities, including high-sensitivity detection of moving targets and high-resolution detection of light intensity, wavelength (color), and polarization [10,11,12,13]. The artificial compound eye (ACE) is a new type of multi-aperture vision system that mimics the structure and function of natural insect compound eyes. It is a synthesis of photoelectric and micro-optoelectronic technologies, forming a highly integrated, miniaturized, lightweight, and intelligent imaging system. In comparison to single-aperture vision systems, ACEs break the constrained relationship between a large field of view and high resolution. They possess optical characteristics such as a large FOV, low aberration, and infinite depth of field, as well as outstanding properties such as good target-motion-detection ability, high light-intensity sensitivity, and a compact structure, and they show good application potential. Thanks to the rapid development of micro/nano-materials, fabrication processes, and image-sensor technology, ACE visual imaging has become a popular and fast-growing research field, and it is expected to be applied to compact multi-aperture MSI in the near future [14,15,16,17].
The snapshot compound eye MSI system is a special compact multi-aperture photoelectric imaging system with multiple independent microlens imaging units. According to the task at hand, different imaging units can be assigned different optical configurations and imaging conditions, allowing multiple sets of spectral images of the target to be obtained at a high frame rate and the complete data cube to be accessed and checked rapidly. It is characterized by compact hardware, powerful functionality, flexible design, and diverse integration options, and it has good universality and scalability. Its working mode is similar to that of an ordinary camera, making it easy to implement in engineering. In recent years, it has attracted considerable attention from researchers at home and abroad and has become an important topic in image-science research [18,19,20].
In Section 2, the main classifications and working principles of snapshot ACE MSI technology are briefly described; Section 3 reviews progress in planar ACE multispectral snapshot imaging technology; Section 4 introduces progress in curved compound eye multispectral snapshot imaging technology; finally, snapshot compound eye MSI technology is briefly summarized in the Conclusions.

2. System Classification and Basic Principles

The compound eyes of insects have attracted the attention of numerous experts and scholars at home and abroad because of their compact structure, multimodal imaging, and unique optical properties. After some twenty years of research, considerable progress has been made on planar ACEs, which have a relatively simple structure, and various types of planar ACE MSI systems have emerged. However, because natural insect compound eyes are very complex, existing materials, processes, and technologies impose several constraints and limitations. At present, research on curved ACEs remains at the level of simulating and realizing curved microlens arrays (MLAs); several issues still stand in the way of a complete curved ACE system, which therefore remains under active investigation [21,22,23,24].

2.1. Planar ACE Multispectral Snapshot Imaging System

As a novel photoelectric imaging system integrated with an ACE, the planar ACE multispectral snapshot imaging system consists of four parts: a planar MLA, a spectral interference filter array, a signal isolation structure, and an image sensor. At the same time, the planar ACE multispectral snapshot imaging system is also a multichannel spectral imaging array. Each interference filter of the filter array corresponds to one microlens; together they form a spectral imaging unit, and each unit yields a sub-image in its corresponding band. From the acquired sequence of multispectral sub-images, the data cube of the target scene is obtained, as presented in Figure 2. It should be noted that there are two possible installation positions for the interference filter array: one is in front of the MLA, as presented in Figure 3a; the other is behind the MLA and in front of the FPA, as presented in Figure 3b [3,4].
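To make the sub-image geometry concrete, the following minimal sketch slices a single raw frame from a planar ACE into per-band sub-images and stacks them into a data cube; the grid size and pixel counts are assumptions for illustration, not parameters of any specific system reviewed here.

```python
import numpy as np

# Hypothetical planar-ACE frame: a 4 x 4 grid of spectral imaging units,
# each unit imaged onto 200 x 200 detector pixels (illustrative values).
units_y, units_x = 4, 4
sub_h, sub_w = 200, 200
frame = np.random.rand(units_y * sub_h, units_x * sub_w)  # stand-in for one raw exposure

# Each (row, col) position in the grid is assumed to carry one filter band.
cube = np.empty((units_y * units_x, sub_h, sub_w))
for r in range(units_y):
    for c in range(units_x):
        band = r * units_x + c
        cube[band] = frame[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]

print(cube.shape)  # (16, 200, 200): 16 spectral sub-images from one exposure
```

The same arithmetic also shows the basic tradeoff: dividing a fixed FPA among more spectral channels leaves fewer pixels, and hence lower spatial resolution, per channel.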
In general, the microlenses used in a planar ACE multispectral snapshot imaging system are identical, and the FOVs of the imaging units are essentially the same. This not only reduces the overall cost of the system, but also makes it relatively easy to implement in engineering, and the image reconstruction algorithm is relatively simple. In addition, because interference filter arrays and relatively mature commercial focal-plane image sensors can be used, more spectral channels and higher spectral resolution can be obtained for a given sensor resolution; however, the smaller the aperture of each microlens, the lower the optical throughput and the spatial resolution.
Because microlenses are used, an ACE usually operates at relatively close range. To achieve MSI of distant targets, different methods of extending the focal length have been proposed. In the first method, a telescopic objective lens is installed in front of the MLA; the system first images the target through the front telescopic objective, and the MLA then forms a secondary image on the detector, as presented in Figure 4a. In the second method, a telescopic objective is added in front of the MLA to compress the FOV of the incident beam, and the rear MLA performs secondary imaging at the exit pupil of the telescope system, as presented in Figure 4b [8].
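As a hedged first-order illustration of the second configuration (an afocal front telescope), the focal length of each channel is multiplied by the angular magnification of the telescope while the FOV is compressed by the same factor; the symbols below are illustrative and no values are implied for the cited systems.

```latex
% First-order effect of an afocal front telescope with angular magnification \Gamma
% (\Gamma, f_{\mathrm{MLA}}, and the FOV symbols are illustrative, not values from the cited work):
f_{\mathrm{eff}} = \Gamma \, f_{\mathrm{MLA}}, \qquad
\mathrm{FOV}_{\mathrm{eff}} \approx \frac{\mathrm{FOV}_{\mathrm{MLA}}}{\Gamma}, \qquad
\Gamma = \frac{f_{\mathrm{objective}}}{f_{\mathrm{eyepiece}}}
```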

2.2. Curved Compound Eye Multispectral Snapshot Imaging System

The curved compound eye multispectral snapshot imaging system usually adopts a curved MLA integrated with interference filters to mimic the multi-aperture spectral imaging function of a natural insect apposition (parallel-type) compound eye. As presented in Figure 5, the apposition compound eye optical system consists of multiple independent sub-eye imaging units; the sub-eyes are distributed over a curved surface, and the photosensitive elements of all sub-eyes are likewise distributed over a curved surface. At present, however, all mature image sensors are planar, so it is difficult to couple a curved MLA directly to them and to obtain clear target images on a planar detector [15,24,25].
There are two schemes to solve the problem of image plane coupling that occurs between a curved MLA and FPA.
Scheme 1: Direct imaging method. Each microlens images directly onto the focal-plane image sensor. To adapt to the planar image sensor, each imaging unit on the curved surface must be designed independently, and the MLA must be fabricated with special structures and processes to ensure that every microlens can focus on the FPA, as presented in Figure 6. The advantage of this method is that the imaging system is relatively simple and compact; however, the design and fabrication are difficult, the image quality is not easy to guarantee, and both the number of microlenses and the spectral resolution are low.
Scheme 2: Secondary imaging method based on relay image transfer. In this method, a relay image transfer system is introduced between the curved lens array and image sensor to convert the image information of the curved MLA to ensure that clear sub-images of each imaging unit can be obtained on the image sensor, as presented in Figure 7. The advantages of this method are that it reduces the difficulty of the design and fabrication of the MLA, the number of microlenses is high, and the system FOV is relatively large. However, due to the introduction of the relay image transfer system, the structure and volume of the system will increase [24].

3. Planar Compound Eye Multispectral Snapshot Imaging System

3.1. Compound Eye MSI System Based on TOMBO

In the year 2000, Tanida’s team at Osaka University in Japan pioneered TOMBO (thin observation module by bound optics), a new multi-aperture optoelectronic imaging system inspired by arthropod compound eyes. It consists of multiple independent optoelectronic imaging units, and a series of low-resolution sub-images is obtained on a single CMOS focal-plane sensor, as presented in Figure 8; a high-resolution image can then be reconstructed by subsequent digital image processing [26,27,28,29,30,31]. TOMBO is not a simple imitation of the insect compound eye, but rather an efficient way of adapting the concept to the focal-plane sensors available at present. It is also regarded as an effective computational-imaging platform based on multi-aperture imaging and has been studied thoroughly by numerous scholars [19,32,33].
In the year 2003, based on this platform, Tanida’s team proposed a color TOMBO compound eye imaging system, whose imaging results are shown in Figure 9. By integrating an RGB wideband filter array with the compound eye lens array, color images were reconstructed, which laid the technical foundation for the TOMBO multispectral compound eye imaging system [34,35,36,37,38].

3.1.1. Compound Eye MSI System Based on TOMBO Architecture

In the year 2003, Shogenji et al. from Osaka University in Japan proposed a compact compound eye MSI system based on the TOMBO architecture, which is the earliest reported compound eye snapshot multispectral camera [8]. Compared with the traditional TOMBO system, this MSI system integrates a narrow-band interference filter for each lens element at the front end of the TOMBO MLA. The system can therefore be divided into four parts: the MLA, an optical signal isolation device, a color interference filter array, and an image sensor, as presented in Figure 10. The team constructed an experimental prototype of the system and conducted a feasibility verification on static scenes [39,40].
The MLA used as the main imaging element was an orthogonally arranged quartz-glass MLA (APO series, model APO-Q-P500-AF1.3), with a lens pitch of 500 μm, a focal length of 1.3 mm, and a lens diameter of 500 μm. Figure 11 presents a structural diagram of this MLA series [41].
The CMOS image sensor had a resolution of 1040 × 960 pixels, a pixel size of 6.25 μm, and a bit depth of 12 bits. The number of pixels corresponding to a single imaging unit was 80 × 80; therefore, a maximum of 12 × 12 imaging units could be accommodated. The effective area was 960 × 960 pixels, with 40 pixels on the left and right sides reserved for the alignment and mounting of the lens array.
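The following arithmetic sketch simply restates the sensor geometry quoted above, showing how the unit count follows from the pixel budget (all numbers are taken from the text).

```python
# Sensor geometry of the TOMBO-based prototype, as quoted in the text.
pixels_h, pixels_v = 1040, 960      # full CMOS resolution
pixel_pitch_um = 6.25               # pixel size
unit_px = 80                        # pixels allotted to one imaging unit (80 x 80)
margin_px = 40                      # alignment margin on the left and on the right

effective_h = pixels_h - 2 * margin_px       # 960 effective columns
units_h = effective_h // unit_px             # 12 units horizontally
units_v = pixels_v // unit_px                # 12 units vertically
unit_size_um = unit_px * pixel_pitch_um      # 500 um, matching the 500 um lens pitch

print(units_h, units_v, unit_size_um)        # 12 12 500.0
```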
It should be noted that the experimental validation did not use an interference filter array; instead, seven FS40-VIS-2.00 narrow-band interference filters with different center wavelengths (CVI Laser Corp., New York, NY, USA) were used, whose characteristics are shown in Figure 12. The center wavelengths were 400, 450, 500, 550, 600, 650, and 700 nm. By exchanging the interference filters in turn, array images at the different wavelengths were obtained. The 960 × 960-pixel images were then cropped and recombined according to the designed spectral-filter-array template to synthesize the compound eye image that such a filter array would produce. Although this method does not involve fabricating the filter array, it allows experimental verification. From the resulting sub-image array, further processing reconstructed images at the different wavelengths with 480 × 480 pixels and a bit depth of 12 bits.
The optical signal isolation device consisted of 21 stacked stainless-steel plates (each 50 μm thick) containing a square-hole array fabricated by etching; the inner surfaces of the holes were coated with an antireflective film. The wall thickness and overall height were approximately 50 μm and 1050 μm, respectively, and the hole pitch was 500 μm, matching the microlens pitch.
The experimental prototype of the compact compound eye MSI system is presented in Figure 13, and the main parameters are summarized in Table 1.
The authors proposed an improved pixel rearrangement method that rearranged the pixels in the captured image geometrically onto a multi-channel virtual image plane. The processing flow is presented in Figure 14. Processing for the reconstruction consists of pre-processing, pixel-remapping, and post-processing stages.
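The sketch below illustrates the general idea of pixel remapping onto a virtual image plane; it is a minimal nearest-neighbor placement with hypothetical sub-image shifts, not the authors' exact algorithm or their pre- and post-processing stages.

```python
import numpy as np

def remap_to_virtual_plane(sub_images, shifts, scale):
    """Minimal sketch of pixel remapping (not the authors' exact algorithm):
    place each sub-image's pixels onto a 'scale'-times finer virtual plane,
    offset by that unit's (dy, dx) shift expressed in virtual-plane pixels."""
    h, w = sub_images[0].shape
    virtual = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(virtual)
    for img, (dy, dx) in zip(sub_images, shifts):
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        virtual[np.ix_(ys, xs)] += img
        weight[np.ix_(ys, xs)] += 1.0
    return virtual / np.maximum(weight, 1)   # average where samples overlap

# Illustrative use: four 80 x 80 sub-images of one band with hypothetical offsets.
subs = [np.random.rand(80, 80) for _ in range(4)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]    # in virtual-plane pixels (assumed)
plane = remap_to_virtual_plane(subs, shifts, scale=2)
print(plane.shape)  # (160, 160)
```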
To study the performance of the proposed method, a colorful scene was captured using the prototype multispectral TOMBO system. Figure 15 presents images of fruits and vegetables captured with the prototype, together with the reconstructed multispectral images at different wavelengths; the image size was 480 × 480 pixels and the bit depth was 12 bits. Figure 16 presents the RGB color image obtained by applying a spectral-to-RGB conversion to the seven spectral images. Only a brief introduction to the specific methods is given here; please refer to the references for further details.

3.1.2. Compound Eye High-Speed Multispectral 3D-Imaging System Based on TOMBO

In the year 2010, Kagawa et al. from Osaka University in Japan proposed a compound eye high-speed multispectral 3D-imaging system based on the TOMBO architecture [42]. The authors used a narrow-band filter array and the rolling-shutter mode of a CMOS image sensor to achieve a 2D decomposition of imaging wavelength and time, as presented in Figure 17. The imaging system consisted of three parts: an orthogonal array of Nx × Ny lenses, an optical crosstalk isolation array, and a CMOS image sensor. The CMOS image sensor operated in rolling-shutter mode, and the Ny imaging units sharing the same waveband in the vertical direction were exposed sequentially in time. Therefore, Ny sub-images of the same waveband with different exposure times could be obtained within one frame period, and the effective frame rate was Ny times that of the original image sensor. At the same time, Nx groups of narrow-band interference filters with different wavebands were arranged in the horizontal direction, yielding sub-images in Nx different wavebands.
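A small arithmetic sketch of this wavelength-time decomposition, using the Nx × Ny notation from the text and the frame-rate figures quoted below for the prototype (5 × 5 array, 20 fps base rate), is given here.

```python
# Wavelength-time decomposition with a rolling-shutter sensor (notation from the text).
Nx, Ny = 5, 5            # lens-array columns (wavebands) and rows (time slots); prototype was 5 x 5
base_fps = 20            # native frame rate of the CMOS sensor

bands = Nx                          # Nx distinct filter wavebands across the horizontal direction
effective_fps = Ny * base_fps       # the Ny vertically stacked units are exposed sequentially
time_step_ms = 1000 / effective_fps # interval between successive sub-exposures of a waveband

print(bands, effective_fps, time_step_ms)   # 5 wavebands, 100 fps effective, 10 ms steps
```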
Based on the abovementioned principles, the authors used an SXGA monochromatic CMOS image sensor, optical crosstalk isolation device, commercial 5 × 5 element MLA, and commercial color interference filter to build the experimental prototype and conduct the corresponding experimental verification. Figure 18 shows the workflow of the compound eye high-speed multispectral 3D-imaging system based on TOMBO architecture.
Experimental prototype parameters of the high-speed multispectral 3D-imaging system are shown in Table 2. The filter array used Fujifilm gelatin band-pass filters to obtain wavelength values of 500, 550, and 600 nm, and Kodak deep tricolor filters and Fujifilm sharp-end filters to obtain wavelengths of 440 nm and longer than 640 nm, respectively.
Figure 19 presents a 5 × 5 array filter image.
Based on the experimental prototype, the authors performed multispectral snapshot imaging of a rotating fan and verified its MSI characteristics for a high-speed moving target. Figure 20a presents the fan target, with white letters marked on blades of different colors; Figure 20b presents the captured compound eye multispectral image. The original frame rate of the CMOS image sensor was 20 fps, while the effective frame rate was approximately 100 fps, i.e., 5 times the original frame rate (Ny = 5).

3.1.3. Multispectral Motion Imaging System Based on TOMBO

In the year 2020, Nakanishi et al. from Osaka University in Japan studied a TOMBO-architecture multispectral motion imaging system for field environments [43]. The authors developed a prototype by connecting an embedded computer to the multispectral TOMBO imaging system. The prototype was mounted on an unmanned aerial vehicle (UAV), and an airborne observation experiment was conducted to verify the feasibility and potential of the mobile TOMBO MSI system in field applications. Table 3 summarizes the characteristic parameters of the experimental prototype. There are 9 imaging channels and 8 spectral channels in total, and the spectral channel with a wavelength of 450 nm was used to obtain a stereo view.
Figure 21a shows the UAV used in the field flight experiment; Figure 21b presents the multispectral TOMBO payload on the UAV.
Figure 22 presents the multispectral images of a ground oil furnace and vehicle target obtained from the field flight experiment of the TOMBO multispectral motion imaging system. The flight altitude was between 3 and 7 m.
Finally, the authors used the normalized difference vegetation index (NDVI) to analyze and evaluate the obtained images, as presented in Figure 23. The experimental results show that the system can broadly accomplish target classification, which demonstrates the effectiveness of TOMBO multispectral motion imaging for target recognition.
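NDVI itself is a standard index; the following minimal sketch computes it pixel-wise from two co-registered spectral channels. Which of the prototype's eight bands would serve as the red and near-infrared inputs is an assumption made only for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Standard normalized difference vegetation index, computed pixel-wise."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Illustrative use with two co-registered single-band images (stand-in data).
red_band = np.random.rand(480, 480)    # e.g., a red channel around 650 nm (assumed)
nir_band = np.random.rand(480, 480)    # e.g., a near-infrared channel around 800 nm (assumed)
vegetation_map = ndvi(nir_band, red_band)
print(vegetation_map.min(), vegetation_map.max())
```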

3.1.4. Low-Cost MSI System Based on TOMBO

According to Scott A. Mathews of the Catholic University of America, MSI systems based on interference bandpass filters have great potential advantages in terms of cost, volume, and imaging format, and with the rapid development of large-array image sensors, the system cost will continue to fall. A spectral imaging system based on the TOMBO architecture can effectively handle moving targets and is an effective, low-cost route to snapshot spectral imaging. Based on these considerations, the author proposed a low-cost multi-aperture MSI system based on a large-array commercial CCD image sensor in the year 2008 and built a system prototype, whose main parameters are presented in Table 4 [44]. The system can be regarded as a development of the TOMBO MSI system, with a total of 18 commercial lens imaging units mounted on a lens-array plate, as presented in Figure 24. Under active illumination, with the help of two 75 mm diameter, 15 mm focal-length plano-convex lenses (as presented in Figure 25), the target scene was imaged onto the 18 sub-units, ensuring that the system could obtain 18 high-quality target sub-images within an exposure time short enough to suppress motion artifacts and blur (as exhibited in Figure 26; the area marked in black in Figure 26a is 400 × 400 pixels). The reconstructed data cube (x, y, λ) has dimensions of 400 × 400 × 17. The authors believe that the system can serve as a general platform for other forms of multi-aperture imaging.

3.1.5. Application of Compound Eye Multispectral Endoscopy Based on TOMBO

In the year 2006, Yamada et al. from the Hiroshima Institute of Technology in Japan first conducted a preliminary experimental verification of TOMBO for 3D endoscopy [45]. In the year 2011, Kagawa et al. from Shizuoka University in Japan applied TOMBO compound eye MSI technology to a 3D multifunctional compound eye endoscopic system and combined it with wavefront coding, developing a proof-of-principle TOMBO endoscope based on a 3 × 2 aspheric lens array. The prototype is presented in Figure 27a, and Figure 27b shows an image captured when observing the interior of a textured pipe with a diameter of 20 mm. The main parameters are presented in Table 5 [20,46].
In the year 2012, Kagawa et al. proposed TOMBO-based compound eye variable FOV visible-light and near-infrared polarization endoscopy technology. They introduced fixed and moving mirrors to control the FOV, polarization, and wavelength of the system, and realized a variety of observation modes, such as 3D-shape measurements, wide FOV, and the close observation of tissue structures under the skin [47]. Based on this form of technology, in the year 2014, Kagawa et al. proposed the TOMBO compound eye compact endoscope technology, which can obtain a wide FOV, close range, considerable depth of field, and 3D-snapshot MSI by adding a narrow-band filter to the MLA. Figure 28 presents the multispectral image obtained using the experimental prototype.
In the year 2013, Yoshimoto et al. in Japan used a TOMBO compound eye endoscope to estimate the stiffness of a target and measure its 3D deformation by observing the deformation of a projection pattern on transparent silicone rubber [48]. The endoscopic detection process is shown in Figure 29.

3.2. Multi-Aperture MSI System Based on LVF

3.2.1. Ultra-Compact Multi-Aperture Snapshot MSI System

In the year 2018, Hubold et al. from the Fraunhofer Institute for Applied Optics and Precision Engineering in Germany studied an ultra-compact multispectral snapshot imaging system [49,50,51,52] (see Figure 30). The authors designed and manufactured an MLA using state-of-the-art micro-optical manufacturing technology, combined with a conventional full-frame image sensor and a commercial linear variable spectral filter (LVF). A multispectral experimental prototype with a wide FOV and high spatial resolution was constructed, realizing snapshot acquisition of 11 × 6 spectral channels over the 450~850 nm wavelength range. The prototype measured only 60 × 60 × 28 mm3 and weighed 200 g; the maximum FOV reached 68°, the linear spectral sampling was approximately 6 nm, and the spatial sampling of a single channel was 400 × 400 pixels.
The imaging system mainly consisted of three micro-optical structure layers, as presented in Figure 31. The customized MLA was used as the main layer, which enabled the parallel imaging of the object using a single image sensor. In order to avoid optical crosstalk from occurring between adjacent imaging channels, a 3D aperture structure was designed as an optical isolation array as the second layer. A circular aperture was placed above the optical isolation structure, which matched the size and position of a single aperture of the MLA. The circular aperture below the optical isolation layer was required to maximize the single-path FOV and effectively utilize the available area of the sensor. The LVF was set as the third layer above the MLA, and the LVF was set at a certain inclination angle relative to the MLA, to ensure that each channel had a corresponding specific wavelength range. The micro-optical imaging system was integrated on a full-frame CCD image sensor of the same size as the LVF to realize the multi-aperture multispectral snapshot imaging. A total of 66 spectral sub-images were obtained via the detector.
The key to the system design is the positional relationship among the LVF, the MLA, and the FPA detector. As presented in Figure 32, if there is no tilt between the LVF and the MLA, the spectral images obtained from all imaging units within a vertical column (−2~3) are identical, and only the different columns (−5~5) in the horizontal direction yield distinct spectral images. The number of effective spectral channels is then low, which wastes imaging resources. When a certain tilt angle is set between the LVF and the MLA, the spectral images obtained by the units within a column differ from one another, which considerably increases the number of spectral channels in the system.
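A simple counting sketch of this effect is given below, assuming the 11 × 6 channel grid and roughly 6 nm spectral sampling quoted for the prototype; the untilted case merely illustrates the column degeneracy and is not a specification of the actual untilted layout.

```python
# Counting distinct spectral channels on an 11 x 6 lens grid behind a linear
# variable filter (grid and ~6 nm sampling taken from the prototype figures;
# the untilted layout below is an illustrative assumption).
cols, rows = 11, 6
sampling_nm = 6.0

channels_untilted = cols          # every unit in a column sees the same wavelength
channels_tilted = cols * rows     # the tilt staggers the columns so all 66 bands differ

print(channels_untilted, channels_tilted)   # 11 vs 66 channels
print(channels_tilted * sampling_nm)        # ~396 nm of covered spectrum (~450-850 nm)
```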
The total length of the system was 7.2 mm, the focal length was 3.65 mm, the F number was 7, the FOV was 68°, the spatial angular resolution was 0.12°, each channel was approximately 400 × 400 pixels, and the optical dispersion spot was the same size as a pixel. The main parameters of the system are summarized in Table 6.
Figure 33 presents the MTF curves of the optical system with three spectral channels and two fields of view. The pixel size of the image sensor was 7.4 μm and the corresponding cutoff frequency was 67 LP/mm. In the entire FOV and spectral range, the MTF value at the half-cutoff frequency was a little higher than the expected value of 0.5.
The LVF was placed close to the MLA aperture’s surface at an inclination of approximately 9.5° between the LVF and MLA, as presented in Figure 34.
The image sensor was a KAI-16000 full-frame CCD, the target surface size was 24 × 36 mm2, the pixel size was 7.4 μm × 7.4 μm, the resolution was 4872 (H) × 3248 (V), and the number of pixels was 16 M. The image sensor (including mechanical structure), optical isolation array, and system prototype are presented in Figure 35.
Figure 36a shows the original spectral sub-images obtained using the prototype. Each image channel has a FOV of 68° and a spatial resolution of up to 400 × 400 pixels, spectral sampling at 6 nm steps, and a spectral range of 450 nm~850 nm. Figure 36b depicts the target scenario.
The experimental prototype and commercial spectrometer (1 nm spectral sampling) were used to image green leaves at different stages, and the obtained images were compared to verify that the system had both high spatial and spectral resolutions. Specifications of the system are shown in Table 7.
The advantages of this method are as follows: by tilting the LVF, the number of spectral channels is considerably increased, and the LVF is considerably easier to manufacture than a segmented filter array; the use of a commercial LVF greatly reduces the difficulty of building the spectrometer; and the system is small and light. The disadvantage is that, although the spectrum of the LVF is assumed to be approximately constant over short distances along its variable direction, the spectral perturbation becomes noticeable when the LVF region covered by a single microlens is large, and the larger the covered region, the larger the perturbation, which is unfavorable for large-aperture spectral imaging.

3.2.2. Compact, Miniature Snapshot Optically Replicating and Remapping Imaging Spectrometer

In the year 2019, Mu et al. from Xi’an Jiaotong University in China, building on the ultra-compact multi-aperture snapshot MSI system proposed by Hubold et al. in Germany, proposed a compact, miniature snapshot optically replicating and remapping imaging spectrometer (ORRIS) [53,54,55]. Its principle is based on shifting the sub-images replicated by a specially organized lenslet array and filtering each sub-image with a continuously variable filter (CVF). The 3D data cube is recovered by a simple image-remapping process. The use of the lenslet array and CVF makes the system very compact and miniature. It covers a wavelength region of 360 to 860 nm with 80 spectral channels at a spatial resolution of 400 × 400 pixels. The prototype measures approximately 230 mm (length) × 70 mm (width) × 70 mm (height) and weighs approximately 1.0 kg for finite-distance imaging; for imaging at infinity, these figures shrink to 50 mm (length) × 70 mm (width) × 70 mm (height) and 0.5 kg. The prototype was verified by measuring outdoor static and dynamic scenes.
The compact ORRIS system mainly comprises an M × N lenslet array, a rectangular CVF, and an FPA sensor, as presented in Figure 37a. In contrast to the ultra-compact multi-aperture snapshot MSI system of Hubold et al., the ORRIS system swaps the positions of the lens array and the filter: the CVF sits on top of the FPA sensor, behind the lenslet array. The lenslet array is arranged so that its rows are laterally shifted with respect to the waveband direction of the CVF; the shift angle θ between the row direction of the lenslet array and the waveband direction of the CVF is presented in Figure 37b. As a result, the replicated row-direction sub-images are laterally shifted and cover gradually varying wavebands of the CVF, so that different slices of the sub-images are continuously filtered by narrow wavebands with central wavelengths λi, as shown in Figure 37c. The spatio-spectral sub-images are acquired on the FPA during a single exposure, and the spectral image at each central wavelength λi is then reconstructed by simple image remapping; no complex reconstruction algorithm is required.
Assume that the spectral interval covered by each sub-image is Δλ, the effective minimum bandwidth is δλ, and the number of lenslets per row is N; then δλ = Δλ/N. Let Δx and Δy be the pitches of the sub-images along the x and y directions, respectively; then θ = arctan[Δy/(N × Δx)], and if Δx = Δy, θ = arctan(1/N). The lenslet array has M rows in total, so the detectable spectral range is δλ′ = (M − 1) × N × δλ; that is, the number of spectral channels is C = (M − 1) × N, each with a bandwidth of δλ. The dimensions of the obtained 3D data cube depend on the size of the FPA and the available CVF, and there is a tradeoff between spatial and spectral resolution: the spectral resolution increases with the number of lenslets, while the spatial resolution decreases.
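Plugging the prototype's 9 × 10 lenslet array into these relations reproduces the 80 channels quoted earlier; the sketch below restates that arithmetic. The spectral interval Δλ per sub-image is inferred here from the 360~860 nm filter range and is therefore an assumption.

```python
import math

# ORRIS geometry check using the relations from the text (M x N = 9 x 10 lenslets).
M, N = 9, 10
full_range_nm = 860 - 360                # CVF coverage quoted for the prototype

channels = (M - 1) * N                   # C = (M - 1) * N
delta_lambda = full_range_nm / (M - 1)   # spectral interval per sub-image (inferred, assumption)
band_nm = delta_lambda / N               # effective minimum bandwidth, delta-lambda / N
theta_deg = math.degrees(math.atan(1 / N))   # shift angle for equal pitches, theta = arctan(1/N)

print(channels, band_nm, round(theta_deg, 2))   # 80 channels, 6.25 nm bands, ~5.71 degrees
```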
During the image-remapping process, the regions within a spectral range δλ from the N sub-images of a row are spliced sequentially to form the complete scene. Because δλ is very narrow and satisfies the assumption of a locally constant CVF spectrum, the spectral bandwidth of the reconstructed mosaic image is very narrow and the spectral perturbation is minor.
Figure 38a presents a schematic layout of the ORRIS system, and Figure 38b presents the proof-of-principle prototype. The combination of an objective lens (EF 50 mm) and a collimating lens (EF 50 mm) forms an afocal telescope used to collimate the incident light from the object. The lenslet array re-images the intermediate image onto the CVF and FPA. The size of the intermediate image is limited by a field stop located at the common focal plane of the objective and collimating lenses. The CVF is a commercial, non-customized continuously variable bandpass filter with a wavelength range of 360 to 860 nm. The FPA is a CCD array with a pixel size of 7.4 µm × 7.4 µm; its sensor size of 24 × 36 mm can maximally cover the effective area of the CVF. The 9 × 10 lenslet array comprises 90 off-the-shelf plano-convex lenslets mounted in a black anodized aluminum plate. Each lenslet has a focal length of 12 mm, and the lenslet pitch is 3 mm along both the row and column directions. The spatial resolution of each sub-image is 400 × 400 pixels. An aperture array with a diameter of 1.5 mm is placed before the lenslet array to improve imaging quality, giving the lenslets an F-number of 8.
When the imaging target is at infinity, the collimating telescope is no longer required; only the baffle array is retained at the front, so the system becomes even more compact and miniature. When imaging a finite-distance object, adding a diffuser screen at the field-stop position relaxes the imaging requirements. The main characteristic parameters of the experimental prototype are presented in Table 8.
Static and dynamic scenes were used to verify the proof-of-principle prototype. Figure 39 depicts the 25-channel imaging results for the static scene. The original gray sub-images shown in Figure 39a exhibit a certain shift along the row direction; after the simple image-remapping process, the continuously remapped spectral images are depicted in Figure 39b. Figure 39c shows the color-fusion images corresponding to each channel, and Figure 39d presents the recovered 3D spectral data cube. Figure 39e shows a single gray sub-image after magnification, and Figure 39f shows an RGB image of a similar scene captured with a cell phone. The objects in this complex experimental scene can be clearly recognized, indicating that the spatial resolution of the system is sufficient.
Figure 40 presents partial results recovered from the dynamic-scene video obtained with the prototype. The experiment demonstrates that automobiles moving through a crossroads can be captured by the ORRIS as a video recording, which can be used for real-time surveillance.

4. Curved Compound Eye Multispectral Snapshot Imaging System

4.1. Multi-Layer Compound Eye for MSI

In the year 2017, Chen et al. from the Chinese University of Hong Kong proposed the design, fabrication process, and verification results of a multi-layer artificial compound eye (ACE) for MSI [56]. A high-precision 3 × 4 MLA was designed and fabricated by vacuum hybrid imprinting on the convex surface of a plano-convex lens with a diameter of 20 mm, a focal length of 50 mm, and a refractive index of 1.47. The curvature and diameter of each microlens were designed and optimized independently to ensure that each one could focus on the FPA. The microlenses stood 24 μm above the surface of the large lens, their refractive index was 1.65, and the center-to-center distance between neighboring microlenses was 2.54 mm; the diameters of the four groups were 900, 896, 891, and 887 μm. A light-absorption layer was coated on the upper surface of the large lens to suppress stray light during imaging, and a multi-channel color filter array was fabricated by lithography at the bottom of the lens to perform spectral separation. A high-resolution CMOS sensor was coupled beneath the multilayer ACE. The overall structure of the multi-layer ACE is presented in Figure 41.
The multi-channel filters are blue (400–500 nm), green (500–600 nm), red (600–800 nm), and near-infrared (800–1200 nm), respectively. The spectral transmittance curves of the four filters are presented in Figure 42.
Figure 43 depicts the processing technology and flow of the multi-channel filter.
The minimum bandwidth of the color filter is 100 nm, which is relatively wide. In order to prevent the transmission of infrared light, an IR-CUT filter was added after the color filter. Figure 44 presents the working and field diagrams of the illumination source, target, and MSI system.
Several experiments were conducted to verify the spectral separation ability and application feasibility of the system. Figure 45 depicts the test image of the color blindness test card of the system. The number “86” can be observed via the red band and the number “9” can be observed via the green and blue bands, indicating that the frequency separation is good.

4.2. Multispectral Curved Compound Eye Camera

In the year 2020, Yu et al. from the Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, proposed a novel multispectral curved compound eye camera (MCCEC) [57]. The MCCEC mainly consists of a curved MLA integrated with selected narrow-band optical filters, an optical transmission subsystem, and a data-processing unit with an image sensor. The MCCEC can achieve MSI over an ultra-large FOV and obtain information in multiple spectral bands in real time. Moreover, the system has the advantages of small size, light weight, and high sensitivity in comparison with conventional multispectral cameras.
The optical layout of the curved MLA, narrow-band filter array, and optical relay system for image plane transformations was carefully designed and optimized. The system structure is presented in Figure 46. The FOV of the MCCEC was determined by the front bionic curved compound eye. Narrow-band optical filters were fixed directly behind the MLA, and there were seven wavebands in total for MSI.
The main purpose of introducing the optical relay system is to solve the coupling problem between curved MLA imaging and the FPA and to improve the image resolution and quality of the system. The curved MLA has 117 microlenses with a focal length of 0.4 mm. The MCCEC can achieve seven-band MSI with center wavelengths of 480, 550, 591, 676, 704, 740, and 767 nm over a large FOV of 120°, with a spectral bandwidth of 10 nm. The overall size of the optical system is 93 mm × 42 mm × 42 mm.
Figure 47 shows the overall layout of the multispectral channels of the MCCEC system. As shown in Figure 47a, a cluster of seven microlenses, one central microlens surrounded by six others, is used as an MSI unit, and these microlenses carry filters for different wavebands. Similarly, each microlens can itself be surrounded by six microlenses so that a new cluster of seven microlenses forms a new MSI unit; in this way, the whole working FOV of the MCCEC is covered by many MSI units. As a result, the images obtained by microlenses with the same spectral filter can be stitched together to form an image of that spectral channel over the whole FOV, as shown in Figure 47b. The angle between adjacent lenses of the compound eye was designed to be 8°. With these design parameters, the curved compound eye achieves an MSI function over the entire FOV, yielding an MCCEC capable of real-time MSI over an ultra-wide FOV.
The optical transmission subsystem has a FOV of 120°, an F-number of 4, and an effective focal length of 3 mm. The schematic diagram of the optical design of the MCCEC system is shown in Figure 48.

4.3. Biomimetic Multispectral Curved Compound Eye Camera

In 2021, Zhang et al., from the Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, proposed a novel biomimetic multispectral curved compound eye camera (BMCCEC) [58,59]. The BMCCEC has 104 effective multispectral ommatidia and a FOV of 98° × 98°, and it realizes 7-band MSI with center wavelengths of 500, 560, 600, 650, 700, 750, and 800 nm and a spectral resolution of 10 nm. The proof-of-principle BMCCEC prototype is shown in Figure 49.
The main performance parameters of the BMCCEC prototype are shown in Table 9. Compared with the earlier MCCEC system, the focal length is increased from 0.4 mm to 5 mm so that the system can work at long distances with high spatial resolution, while the maximum FOV is reduced from 120° to 98°.
Following spectral calibration, the authors tested the MSI capability of the BMCCEC prototype. A green-leaf plant and plant model were selected as the targets in the experiment. Following MSI, the image in the whole FOV containing all ommatidia units was reconstructed and is presented in Figure 50a. Based on the reconstructed image in the whole FOV, it is easy for one to locate the interesting targets. To retrieve a sampled spectral image for every spectrum channel, the following steps were taken. First, the reconstructed multispectral image in the whole FOV was differentiated and divided into seven spectral groups. Then, the multispectral images were reconstructed by using the projection method. Figure 50b shows the reconstructed images of all spectrum channels. In the seven spectral images, the target shows different gray levels.
In order to test the uniformity of the MSI over the whole FOV, the same object was placed at the center and at the edge of the FOV in the experiment. The target was again a green plant, which has high absorption at around 600 nm and high reflection in the 750~800 nm band. Figure 51a,b show the multispectral images obtained at the two field positions, and Figure 51c shows the retrieved reflection spectrum curves. The two curves coincide well, with an average relative error of less than 10%; therefore, the BMCCEC proof-of-principle prototype performs well over the whole 98° × 98° FOV.
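The uniformity figure quoted above (an average relative error below 10%) can be evaluated with a metric such as the one sketched below; this is a generic definition referenced to the center-of-FOV spectrum, not necessarily the exact formula used by the authors, and the sample values are stand-ins.

```python
import numpy as np

def mean_relative_error(spectrum_ref, spectrum_test, eps=1e-9):
    """Mean relative deviation between two sampled reflectance spectra,
    referenced to spectrum_ref (a generic definition, assumed)."""
    ref = np.asarray(spectrum_ref, dtype=float)
    test = np.asarray(spectrum_test, dtype=float)
    return np.mean(np.abs(ref - test) / (np.abs(ref) + eps))

# Illustrative use: reflectance sampled at the seven BMCCEC center wavelengths.
center_spectrum = [0.12, 0.10, 0.08, 0.09, 0.35, 0.52, 0.55]   # stand-in values
edge_spectrum   = [0.13, 0.11, 0.08, 0.10, 0.33, 0.49, 0.52]   # stand-in values
print(mean_relative_error(center_spectrum, edge_spectrum))      # ~0.066, i.e., < 10%
```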

5. Conclusions

As an application-oriented development of compound eye imaging technology, compound eye MSI has received wide attention from researchers at home and abroad in recent years and has become an important research topic in the field of image science. Numerous achievements have been made on planar compound eyes and their MSI systems. Among them, compound eye MSI systems based on the TOMBO architecture have gradually developed the characteristics of high spatial resolution, low cost, high frame rate, and field-motion imaging; the technology is relatively mature and has seen initial applications. However, owing to the limitations of narrow-band interference filter arrays, the numbers of imaging channels and spectral bands remain relatively low, at 18 and 17, respectively. With LVF- or CVF-based MSI technology, the numbers of imaging channels and spectral bands increase significantly, and the number of imaging channels can currently reach 80. Because the detectors available at present are mainly planar, most research on curved ACEs still focuses on simulating and implementing curved MLAs; several problems remain in the implementation of complete curved compound eye systems, and curved compound eye MSI systems are still being explored. Although curved compound eyes contribute the advantage of a wide FOV and the number of imaging channels has increased significantly, shortcomings remain in imaging quality and in the number of spectral bands. In the future, more work on image processing will be needed, and several problems must still be solved before real-time MSI can be fully applied.

Author Contributions

Y.S. was responsible for the analysis of the data and the conception of the article; H.L. and Q.L. (Qianghui Liu) collected the data; J.L. and Q.L. (Qiang Luo) wrote the first draft of the manuscript; the corresponding authors, Q.H. and J.C., were responsible for the guidance and revision of the work; Y.C., H.C. and L.L. participated in the revision of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (62275022, 62275017).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors without undue reservations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Y. Research on the Key Technologies of Image Processing Based on Multis-Pectral Imaging System. Ph.D. Thesis, University of Chinese Academy of Sciences, Changchun, China, 2015. [Google Scholar]
  2. Xu, H.; Wang, X. A applications of multispectral/hyperspectral imaging technologies in military. Infrared Laser Eng. 2007, 36, 13–17. [Google Scholar]
  3. Hagen, N.A.; Kudenov, M.W. Review of snapshot spectral imaging technologies. Opt. Eng. 2013, 52, 090901. [Google Scholar] [CrossRef]
  4. Gao, Z.; Gao, H.; Zhu, Y.; Li, J.; Hao, Q.; Liu, Y.; Chen, C.; Cheng, G.; Cao, J.; Meng, H. Review of snapshot spectral imaging technologies. Opt. Precis. Eng. 2020, 28, 1323. (In Chinese) [Google Scholar]
  5. Yi, D.; Kong, L.; Zhao, Y.; Yang, Z. Color recovery method for snapshot narrow band spectral imaging technology. Spectrosc. Spectr. Anal. 2021, 41, 183–187. [Google Scholar]
  6. Cao, B. The Study of Multispectral Image Acquisition and Reflectance Reconstruction of Transient Target. Ph.D. Thesis, Beijing Institute of Technology, Beijing, China, 2017. [Google Scholar]
  7. Kudenov, M.W.; Dereniak, E.L. Compact snapshot real-time imaging spectrometer. Proc. SPIE 2021, 8186, 81860W. [Google Scholar]
  8. Li, Y. Research on Compact Division-Aperture Snapshot Spectral Imaging System. Ph.D. Thesis, University of Chinese Academy of Sciences, Xi’an, China, 2018. [Google Scholar]
  9. Zhu, S. Research on Key Technologies of Snapshot Multidimensional Imaging. Ph.D. Thesis, Harbin Institute of Technology, Harbin, China, 2018. [Google Scholar]
  10. Labhart, T.; Meyer, E.P. Detectors for polarized skylight in insects: A survey of ommatidial specializations in the dorsal rim area of the compound eye. Microsc. Res. Tech. 1999, 47, 368–379. [Google Scholar] [CrossRef]
  11. Xue, J.; Qiu, S.; Wang, X.; Jin, W. A compact visible bionic compound eyes system based on micro-surface fiber faceplate. In Proceedings of the 2019 International Conference on Optical Instruments and Technology: Optoelectronic Imaging/Spectroscopy and Signal Processing Technology, Beijing, China, 26 October 2019; Volume 114380. [Google Scholar]
  12. Cao, A.; Pang, H.; Zhang, M.; Shi, L.; Deng, Q.; Hu, S. Design and Fabrication of an Artificial Compound Eye for Multi-Spectral Imaging. Micromachines 2019, 10, 208. [Google Scholar] [CrossRef] [PubMed]
  13. Xiao, J.; Song, Y.M.; Xie, Y.; Malyarchuk, V.; Jung, I.; Choi, K.-J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.-H.; et al. Bio-inspired hemispherical compound eye camera. Proc. SPIE 2014, 8958, 89580A. [Google Scholar]
  14. Phan, H.L.; Yi, J.; Bae, J.; Ko, H.; Lee, S.; Cho, D.; Seo, J.-M.; Koo, K.-I. Artificial compound eye systems and their application: A review. Micromachines 2021, 12, 847. [Google Scholar] [CrossRef]
  15. Cheng, Y.; Cao, J.; Zhang, Y.; Hao, Q. Review of state-of-the-art artificial compound eye imaging systems. Bioinspiration Biomim. 2019, 14, 031002. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, Y.; Shi, C.; Xu, H.; Zhang, Y.; Yu, W. A compact bionic compound eye camera for imaging in a large field of view. Opt. Laser Technol. 2021, 135, 106705. [Google Scholar] [CrossRef]
  17. Fu, Y.; Zhao, Y.; Liu, Z.; Zhang, K.; Zhu, Q.; Li, Y. Design of compact bionic compound eye optical system used for target identification. Infrared Laser Eng. 2017, 46, 0602001. [Google Scholar]
  18. Shen, Y.; Li, J.; Lin, W.; Chen, L.; Huang, F.; Wang, S. Camouflaged Target Detection Based on Snapshot Multispectral Imaging. Remote Sens. 2021, 13, 3949. [Google Scholar] [CrossRef]
  19. Tanida, J. Multi-aperture imaging and its application. In Proceedings of the 2015 14th Workshop on Information Optics (WIO), Kyoto, Japan, 1–5 June 2015; pp. 1–2. [Google Scholar]
  20. Kagawa, K.; Yamada, K.; Tanaka, E.; Tanida, J. Endoscopic compound-eye camera with extended depth of focus. In Proceedings of the 2011 ICO International Conference on Information Photonics, Ottawa, ON, Canada, 18–20 May 2011; pp. 1–2. [Google Scholar] [CrossRef]
  21. Jian, H. Research on Key Technology of Artificial Compound Eye System for 3D Target Detection. Ph.D. Thesis, University of Science and Technology of China, Hefei, China, 2019. [Google Scholar]
  22. Yang, T.; Liu, Y.; Mu, Q.; Zhu, M.; Pu, D.; Chen, L.; Huang, W. Compact compound-eye imaging module based on the phase diffractive microlens array for biometric fingerprint capturing. Opt. Express 2019, 27, 7513–7522. [Google Scholar] [CrossRef] [PubMed]
  23. Xiao, J.; Song, Y.M.; Xie, Y.; Malyarchuk, V.; Jung, I.; Choi, K.-J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.-H.; et al. Arthropod eye-inspired digital camera with unique imaging characteristics. Proc. SPIE 2014, 9083, 90831L. [Google Scholar]
  24. Shi, C. Research on the Design and Image Process of Bioinspired Spherical Compound Eye Imaging System. Ph.D. Thesis, University of Chinese Academy of Sciences, Changchun, China, 2017. [Google Scholar]
  25. Song, Y.M.; Park, H.G.; Lee, G.J.; Ju, S.P. Artificially Engineered Compound Eye Sensing Systems Smart Sensors and Systems; Springer: Berlin, Germany, 2017; pp. 157–174. [Google Scholar]
  26. Tanida, J.; Kumagai, T.; Yamada, K.; Miyatake, S.; Ishida, K.; Morimoto, T.; Kondou, N.; Miyazaki, D.; Ichioka, Y. Thin observation module by bound optics (TOMBO): An optoelectronic image capturing system. Proc. SPIE 2000, 89, 1030–1036. [Google Scholar]
  27. Tanida, J.; Kumagai, T.; Yamada, K.; Miyatake, S.; Ishida, K.; Morimoto, T.; Kondou, N.; Miyazaki, D.; Ichioka, Y. Thin observation module by bound optics (TOMBO): Concept and experimental verification. Appl. Opt. 2001, 40, 1806–1813. [Google Scholar] [CrossRef] [Green Version]
  28. Kitamura, Y.; Shogenji, R.; Yamada, K.; Miyatake, S.; Miyamoto, M.; Morimoto, T.; Masaki, Y.; Kondou, N.; Miyazaki, D.; Tanida, J.; et al. Reconstruction of a high-resolution image on a compound-eye image-capturing system. Appl. Opt. 2004, 43, 1719–1727. [Google Scholar] [CrossRef]
  29. Tanida, J.; Kumagai, T.; Yamada, K.; Miyatake, S.; Miyamoto, M.; Morimoto, T.; Masaki, Y.; Kondou, N.; Miyazaki, D.; Ichioka, Y. Compact image capturing system based on compound imaging and digital reconstruction. Proc. SPIE 2001, 4455, 34–41. [Google Scholar]
  30. Tanida, J.; Yamada, K. TOMBO: Thin observation module by bound optics. In Proceedings of the 15th Annual Meeting of the IEEE Lasers and Electro-Optics Society, Glasgow, UK, 10–14 November 2002; Volume 1, pp. 233–234. [Google Scholar]
  31. Yamada, K.; Tanida, J.; Kitamura, Y.; Ichioka, Y. An opto-electronic image capturing system using multiple-imaging CMOS sensor. In Conference on Lasers and Electro-Optics/Pacific Rim; Optica Publishing Group: Washington, DC, USA, 2001; p. ThI2_4. [Google Scholar]
32. El-Sallam, A.A.; Boussaid, F. Spectral-Based Blind Image Restoration Method for Thin TOMBO Imagers. Sensors 2008, 8, 6108–6124.
33. Kennedy, G.T.; Kagawa, K.; Rowland, R.A.; Ponticorvo, A.; Tanida, J.; Durkin, A. Spatial frequency domain imager based on a compact multiaperture camera: Testing and feasibility for noninvasive burn severity assessment. J. Biomed. Opt. 2021, 26, 086001.
34. El-Sallam, A.A.; Boussaid, F. A High Resolution Color Image Restoration Algorithm for Thin TOMBO Imaging Systems. Sensors 2009, 9, 4649–4668.
35. Horisaki, R.; Tanida, J. Multidimensional TOMBO imaging and its applications. In Unconventional Imaging, Wavefront Sensing, and Adaptive Coded Aperture Imaging and Non-Imaging Sensor Systems; SPIE: Bellingham, WA, USA, 2011; Volume 8165, p. 816516.
36. Tanida, J.; Shogenji, R. Versatile application of a compact compound imaging system. In Photonic Devices and Algorithms for Computing VII; SPIE: Bellingham, WA, USA, 2005; Volume 5907, pp. 59–66.
37. Tanida, J.; Shogenji, R.; Kitamura, Y.; Yamada, K.; Miyamoto, M.; Miyatake, S. Color imaging with an integrated compound imaging system. Opt. Express 2003, 11, 2109–2117.
38. Miyatake, S.; Shogenji, R.; Miyamoto, M.; Nitta, K.; Tanida, J. Thin observation module by bound optics (TOMBO) with color filters. In Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V; SPIE: Bellingham, WA, USA, 2004; Volume 5301, pp. 7–12.
39. Shogenji, R.; Kitamura, Y.; Yamada, K. Multispectral imaging system by compact compound optics. In Nano- and Micro-Optics for Information Systems; SPIE: Bellingham, WA, USA, 2003; Volume 5225, pp. 93–100.
40. Shogenji, R.; Kitamura, Y.; Yamada, K.; Miyatake, S.; Tanida, J. Multispectral imaging using compact compound optics. Opt. Express 2004, 12, 1643–1655.
41. Available online: https://www.amus.de/apo.html (accessed on 1 October 2022).
42. Kagawa, K.; Fukata, N.; Tanida, J. High-speed multispectral three-dimensional imaging with a compound-eye camera TOMBO. In Optics and Photonics for Information Processing IV; SPIE: Bellingham, WA, USA, 2010; Volume 7797, p. 77970N.
43. Nakanishi, T.; Kagawa, K.; Masaki, Y.; Tanida, J. Development of a mobile TOMBO system for multi-spectral imaging. In Proceedings of the Fourth International Conference on Photonics Solutions (ICPS2019), Chiang Mai, Thailand, 20–22 November 2019; SPIE: Bellingham, WA, USA, 2020; Volume 11331, p. 1133102.
44. Mathews, S.A. Design and fabrication of a low-cost, multispectral imaging system. Appl. Opt. 2008, 47, F71–F76.
45. Yamada, K.; Ishida, S.; Shougenji, R. Development of three dimensional endoscope by Thin Observation by Bound Optics (TOMBO). In Proceedings of the 2006 World Automation Congress, Budapest, Hungary, 24–26 July 2006; IEEE: Budapest, Hungary, 2006; pp. 1–4.
46. Kagawa, K.; Yamada, K.; Tanaka, E.; Tanida, J. A three-dimensional multifunctional compound-eye endoscopic system with extended depth of field. Electron. Commun. Jpn. 2012, 95, 120–130.
47. Kagawa, K.; Shogenji, R.; Tanaka, E.; Yamada, K.; Kawahito, S.; Tanida, J. Variable field-of-view visible and near-infrared polarization compound-eye endoscope. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 3720–3723.
48. Yoshimoto, K.; Yamada, K.; Sasaki, N.; Takeda, M.; Shimizu, S.; Nagakura, T.; Takahashi, H.; Ohno, Y. Evaluation of a Compound Eye Type Tactile Endoscope. In Endoscopic Microscopy VIII; SPIE: Bellingham, WA, USA, 2013; Volume 8575, p. 85750Z.
49. Hubold, M.; Berlich, R.; Gassner, C.; Brüning, R.; Brunner, R. Ultra-compact micro-optical system for multispectral imaging. In MOEMS and Miniaturized Systems XVII; SPIE: Bellingham, WA, USA, 2018; Volume 10545, pp. 206–213.
50. Hubold, M.; Berlich, R.; Brüning, R.; Brunner, R. System calibration and characterization of an ultra-compact multispectral snapshot imaging system. In Photonics and Education in Measurement Science 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11144, pp. 213–218.
51. Hubold, M.; Montag, E.; Berlich, R.; Brunner, R.; Brüning, R. Multi-aperture system approach for snapshot multispectral imaging applications. Opt. Express 2021, 29, 7361–7378.
52. Oliver, P.; Martin, H. Snapshot Multispectral Imaging. PhotonicsViews 2019, 16, 34–36.
53. Mu, T.; Han, F.; Bao, D. Compact miniature snapshot imaging spectrometry using continuous variable filter. In Proceedings of the Fifth Symposium on Novel Optoelectronic Detection Technology and Application, Xi'an, China, 24–26 October 2018; SPIE: Bellingham, WA, USA, 2019; Volume 11023, pp. 648–654.
54. Mu, T.; Han, F.; Bao, D.; Zhang, C.; Liang, R. Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter. Opt. Lett. 2019, 44, 1281–1284.
55. Mu, T.; Han, F.; Li, H.; Tuniyazi, A.; Li, Q.; Gong, H.; Wang, W.; Liang, R. Snapshot hyperspectral imaging polarimetry with full spectropolarimetric resolution. Opt. Lasers Eng. 2022, 148, 106767.
56. Chen, J.; Lee, H.H.; Wang, D.; Di, S.; Chen, S.-C. Hybrid imprinting process to fabricate a multi-layer compound eye for multispectral imaging. Opt. Express 2017, 25, 4180–4189.
57. Yu, X.; Liu, C.; Zhang, Y.; Xu, H.; Wang, Y.; Yu, W. Multispectral curved compound eye camera. Opt. Express 2020, 28, 9216–9231.
58. Xu, H.; Zhang, Y.; Wu, D.; Zhang, G.; Wang, Z.; Feng, X.; Hu, B.; Yu, W. Biomimetic curved compound-eye camera with a high resolution for the detection of distant moving objects. Opt. Lett. 2020, 45, 6863–6866.
59. Zhang, Y.; Xu, H.; Guo, Q.; Wu, D.; Yu, W. Biomimetic multispectral curved compound eye camera for real-time multispectral imaging in an ultra-large field of view. Opt. Express 2021, 29, 33346–33356.
Figure 1. Schematic of the way data cubes are acquired by different spectral imaging techniques. (a) Spectral scanning imaging. (b) Snapshot spectral imaging.
Figure 2. Working principle of multispectral compound eye snapshot imaging system.
Figure 3. Diagram of planar snapshot MSI system’s structure. (a) The filter array placed in front of the MLA. (b) The filter array placed behind the MLA.
Figure 4. Schematic diagram of the planar snapshot MSI system with a front optical system. (a) Type with a front telephoto objective lens. (b) Type with a front telescopic system.
Figure 5. The schematic of an apposition compound eye.
Figure 6. Surface microlens direct imaging scheme.
Figure 7. Surface compound eye imaging scheme based on relay imaging system.
Figure 8. Hardware configuration of the TOMBO system.
Figure 9. Color TOMBO compound eye imaging system.
Figure 10. TOMBO system with different filters on each unit.
Figure 11. Array positive orthogonal (APO) microlens array.
Figure 12. Filter characteristics.
Figure 13. Photograph of the prototype multispectral TOMBO system.
Figure 14. Flow of the reconstruction processing.
Figure 15. Experimental results obtained from MSI using the prototype TOMBO system.
Figure 16. Spectrum to RGB converted image.
Figure 17. Wavelength and temporal mapping of the high-speed multispectral 3D-imaging system based on TOMBO.
Figure 18. Flowchart of the proposed system.
Figure 19. A close-up of the wavelength filter array.
Figure 20. High-speed MSI: (a) a fan and (b) captured compound-eye image with the fan rotating in a clockwise direction. The effective frame rate is approximately 100 fps.
Figure 21. UAV used for the aerial observation: (a) overview and (b) multispectral TOMBO mounted on the UAV.
Figure 22. Aerial images obtained with the mobile TOMBO system: (a) oil stove and (b) car.
Figure 23. Evaluation of aerial images using the NDVI.
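For context on how the evaluation in Figure 23 is typically computed: the NDVI is a per-pixel ratio of near-infrared and red reflectances, NDVI = (NIR − Red)/(NIR + Red). The sketch below is a minimal illustration only; which of the eight channels listed in Table 3 were actually paired for the NDVI is not stated here, so the 650 nm/850 nm choice in the comment is an assumption.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel normalized difference vegetation index."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero in dark pixels

# Hypothetical band pairing drawn from the Table 3 channel set (an assumption):
#   red = sub-image captured through the 650 nm filter
#   nir = sub-image captured through the 850 nm filter
```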
Figure 24. Photograph of a lens plate containing 18 lens assemblies in a hexagonally close-packed array.
Figure 25. The complete multispectral camera, including the objective assembly and integrated ring illuminator.
Figure 26. Target images taken using different cameras. (a) A complete image of a target obtained using a multispectral camera. (b) An image of the object taken using a conventional camera.
Figure 27. Photograph of the TOMBO endoscope prototype ((a,b) show the image captured when looking inside a textured pipe with a diameter of 20 mm).
Figure 28. Multispectral images obtained from the experiments.
Figure 29. Overview of compound eye-type tactile endoscope.
Figure 30. Working principle of the proposed MSI system capturing an object via remote sensing. Each channel captures only a certain spectral part of the object.
Figure 31. Schematic optical design of the ultra-compact micro-optical system for MSI with a linear variable filter (LVF), microlens array (MLA), and customized baffle array.
Figure 32. Position relationship between LVF and MLA. ((a,b) are the cases without and with included angles, respectively).
Figure 33. MTF diagrams for three different wavelengths (450, 665, and 880 nm) and two different field positions.
Figure 34. (a) Front view of the MSI concept based on a multi-aperture system approach with a slanted LVF. (b) Spectral sampling for tilted (red) and non-tilted (blue) orientations of the LVF.
Figure 35. (a) Photograph of the KAI-16000 CCD image sensor included in the mechanical housing, (b) multispectral micro-optical unit from the bottom view showing the square-shaped holes of the baffle array, and (c) size comparison of the complete multispectral camera with a EUR 2 coin.
Figure 36. (a) Raw image of the multispectral camera with an array of 11 × 6 sub-images. (b) RGB image of the object scene using a conventional camera for comparison purposes. (c) Magnified image of one separated channel.
Figure 37. (a) Schematic of the ORRIS system. (b) An angle θ exists between the lenslet row and waveband directions and (c) the filtering process of the replicated sub-images and reconstruction of the spectral images.
Figure 38. (a) Optical schematic of the ORRIS system. (b) The proof-of-principle prototype.
Figure 39. Image results of a static scene. (a) Original, shifted, gray sub-images; (b) remapped spectral images; (c) remapped spectral images with color fusion; (d) the 3D data cube; (e) a single, gray sub-image; (f) an RGB image of a similar scene captured using a cell phone.
Figure 40. Image results of a dynamic scene. (a) The first-, (b) second-, and (c) third-snapshot spectral images extracted from video 1.
Figure 41. Schematic of the multi-layer artificial compound eye.
Figure 42. Transmission spectra of (a) red, green, and blue filters, and (b) near-infrared (NIR) filters.
Figure 43. Illustration of the repeated lithographic processes for fabricating the multi-channel filters and the optical images of the color filter.
Figure 44. Optical configuration of the MSI system.
Figure 45. MSI: (a) color blindness test card and (b) imaging results.
Figure 46. The cross-sectional schematic of the MCCEC.
Figure 47. Multispectral channel layout for MCCEC. (a) Layout for a cluster of microlenses with seven spectrum channels. (b) Layout for a whole curved compound eye.
Figure 48. Optical design for the MCCEC system.
Figure 49. The prototype of BMCCEC: (a) its mechanical structure, (b) a photograph of BMCCEC, (c) a photograph of the multispectral curved compound eye.
Figure 50. MSI experiment results: (a) reconstructed image in the whole FOV, (b) reconstructed image for each spectrum channel, (c) reconstructed multispectral 3D cube.
Figure 51. MSI of the BMCCEC prototype in the whole FOV: (a) reconstructed multispectral image in the central FOV, (b) reconstructed multispectral image in the edge FOV, (c) retrieved reflection spectrum curves for multispectral images in different FOVs.
Table 1. Characteristic parameters of the compact TOMBO multispectral compound eye imaging system.
System Unit | Key Parameters | Parameter Values
Microlens array | Model | APO-Q-P500-AF1.3
Microlens array | Material | Quartz glass
Microlens array | Lens-to-image distance (μm) | 500
Microlens array | Focal length of lens (mm) | 1.3
Microlens array | Lens diameter (μm) | 500
Imaging sensor | Type | CMOS
Imaging sensor | Pixel number | 1040 × 960
Imaging sensor | Size of pixel (μm) | 6.25 × 6.25
Imaging sensor | Number of cell pixels | 80 × 80
Imaging sensor | Imaging units | 12 × 12
Imaging sensor | Bit depth of the image | 12 bits
Imaging sensor | Effective pixels | 960 × 960
Interference filter | Central wavelength (nm) | 400, 450, 500, 550, 600, 650, 700
Interference filter | Number of spectral channels | 7
Interference filter | Distance to packing glass (μm) | 0.3
Optical isolation device | Thickness (μm) | 50
Optical isolation device | Height (μm) | 1050
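As a rough illustration of how the raw frame of a unit-array system with the Table 1 geometry can be rearranged before spectral reconstruction, the sketch below cuts a 960 × 960 pixel region (12 × 12 imaging units of 80 × 80 pixels each) into a stack of sub-images. This is a minimal sketch under those assumptions, not the reconstruction pipeline of the original system, and the array names are hypothetical.

```python
import numpy as np

def split_units(raw: np.ndarray, units: int = 12, unit_px: int = 80) -> np.ndarray:
    """Rearrange a raw frame into a (units, units, unit_px, unit_px) stack of unit sub-images."""
    size = units * unit_px                 # 960, matching the effective pixel count in Table 1
    crop = raw[:size, :size]
    return crop.reshape(units, unit_px, units, unit_px).transpose(0, 2, 1, 3)

# Hypothetical usage with synthetic 12-bit data (sensor format 1040 x 960 as in Table 1):
raw = np.random.randint(0, 4096, size=(1040, 960), dtype=np.uint16)
subimages = split_units(raw)               # shape (12, 12, 80, 80)
```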
Table 2. Specifications of the TOMBO prototype.
Imaging Unit | Lens Size (mm) | Lens Focal Length (mm) | Pixel Size (μm) | Number of Cell Pixels | Field Range | Central Wavelength (nm)
5 × 5 | 0.85 × 0.85 | 2.35 | 3.2 × 3.2 | 220 × 220 | 17° × 17° | 440, 500, 550, 600, 640
Table 3. Specifications of multispectral TOMBO.
Number of imaging units | 3 × 3
Lens type | Achromatic lens
Lens focal length | 1.5 mm
Lens F-number | 6
Lens diameter | 1 mm
Prototype mass | 45 g
Image sensor | UI-1482LE-M
Size of pixel | 2.2 μm × 2.2 μm
FOV of the prototype | 50° × 50°
Sub-image pixels | 550 × 550
Power consumption of the prototype | 2.4 W
Spectral channels | 8
Central wavelengths of the narrow-bandpass filters | 450, 520, 650, 740, 770, 850, 920, and 970 nm
Table 4. Specifications of the low-cost MSI system.
System Unit | Key Parameters | Parameter Values
Camera (LW11059) | CCD model | KAI-11002
Camera (LW11059) | Target surface size | 36.1 mm × 24 mm
Camera (LW11059) | Pixel count | 4008 × 2672
Camera (LW11059) | Pixel size | 9 μm × 9 μm
Lens | Quantity | 18
Lens | Focal length | 5.9 mm
Lens | F# | 3
Filter array | Number of narrow-bandpass filters | 17
Filter array | FWHM | 8–10 nm
Filter array | Number of neutral-density filters | 1 (OD3)
Table 5. Parameters of the TOMBO endoscope prototype.
System Unit | Key Parameters | Parameter Values
Image sensor | Pixel number | 640 × 480
Image sensor | Size of pixel | 3.6 μm × 3.6 μm
Lens | Quantity | 3 × 2
Lens | Focal length of lens | 1.1 mm
Lens | Lens F# | 3.5
Lens | Lens diameter | 0.8 mm
Lens | Full field of view | 40°
Lens | Image height | About 0.56 mm
Lens | Lens spacing | 0.85 mm × 0.85 mm
Filter array | Color filter | RGB Bayer
Table 6. Overview of the design parameters of the ultra-compact micro-optical system for MSI.
System Unit | Key Parameters | Parameter Values
LVF (LF103245) | Size (L × W × H) | 50 × 25 × 1 mm³
LVF (LF103245) | Mean transmittance | 60–90%
LVF (LF103245) | Central wavelength | λcenter: 450–880 nm
LVF (LF103245) | Bandwidth | 2% of λcenter
LVF (LF103245) | Distance to the MLA | 0.3 mm
LVF (LF103245) | Tilt angle | 9.5°
MLA | Aperture diameter (front) | 0.51 mm
MLA | Thickness of glass substrate | 1.75 mm
MLA | Diameter of microlens | 1.6 mm/1.4 mm
MLA | Radius of curvature of the microlens | 1.94 mm
MLA | Aperture diameter (rear) | 1.6 mm
MLA | Microlens spacing | 3 mm
Filter array | Aperture diameter (front) | 1.6 mm (round)
Filter array | Thickness | 2.41 mm
Filter array | Maximum aperture (rear) | 2.2 mm
Filter array | Distance from the enclosed glass | 0.3 mm
Cover glass | Thickness | 0.75 mm
Cover glass | Distance to the image sensor | 0.57 mm
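Because the LVF bandwidth in Table 6 is specified as a fraction of the centre wavelength (2% of λcenter), the absolute passband widens towards the long-wavelength end of the 450–880 nm range. The snippet below simply evaluates that relation at the two band edges and at 665 nm, one of the wavelengths shown in Figure 33; it is an illustrative calculation, not data from the original system.

```python
# FWHM of the linear variable filter taken as 2% of the centre wavelength (Table 6)
for centre_nm in (450, 665, 880):
    print(f"{centre_nm} nm -> FWHM ~ {0.02 * centre_nm:.1f} nm")
# 450 nm -> ~9.0 nm, 665 nm -> ~13.3 nm, 880 nm -> ~17.6 nm
```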
Table 7. Specifications of the multi-aperture LVF snapshot MSI system.
Key Parameters | Parameter Values
Channel number | 66
FPA type | Frame CCD
Pixel size | 7.4 μm × 7.4 μm
Target surface size | 24 mm × 36 mm
CCD pixels | 4872 × 3248
Spectral region | 450–850 nm
Spectrum sampling | 6 nm
Spatial resolution | 400 × 400
Lens array | 11 × 6
Lens F# | 7
Focal length of lens | 3.65 mm
Field range | 68°
Optical structure length | 7.2 mm
Lens spacing | 3 mm
Focal length of lens | 12 mm
Weight | 200 g
Prototype volume | 60 mm × 60 mm × 28 mm
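A quick consistency check on Table 7: the 11 × 6 lens array yields 66 channels, and spreading the 450–850 nm spectral region over 66 samples gives an average spacing close to the stated 6 nm sampling. The arithmetic below is purely illustrative.

```python
channels = 11 * 6                           # lens array from Table 7 -> 66 channels
span_nm = 850 - 450                         # spectral region from Table 7
print(channels, span_nm / (channels - 1))   # 66 channels, ~6.2 nm average sampling interval
```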
Table 8. Specifications of the ORRIS system.
Key Parameters | Parameter Values
Channel number | 80
FPA type | CCD
Pixel size | 7.4 μm × 7.4 μm
Target surface size | 24 mm × 36 mm
Spectral region | 360–860 nm
Spatial resolution | 400 × 400
OL (objective) lens | EF 50 mm
CL (collimating) lens | EF 50 mm
Lens array | 9 × 10
Lens spacing | 3 mm
Focal length of lens | 12 mm
Lens F# | 8
Prototype volume | 230 mm × 70 mm × 70 mm (finite object distance); 50 mm × 70 mm × 70 mm (infinite object distance)
Prototype weight | 1.0 kg (finite object distance), 0.5 kg (infinite object distance)
Table 9. Specifications of BMCCEC.
Key Parameters | Parameter Values
Number of ommatidia | 127
Radius of the hemisphere | 68 mm
Model of camera | Dalsa C5180M
Pixel size | 4.5 μm × 4.5 μm
Image frame rate | 30 fps
Spectral resolution | 10 nm
Number of effective ommatidia | 104
System focal length | 5 mm
Sensor type | CMOS
Camera resolution | 5120 × 5120
FOV | 98° × 98°
Number of spectral channels | 7
Central wavelengths of the bands | 500, 560, 600, 650, 700, 750, and 800 nm