Review

A Brief Review of Some Interesting Mars Rover Image Enhancement Projects

Applied Research LLC, Rockville, MD 20850, USA
Computers 2021, 10(9), 111; https://doi.org/10.3390/computers10090111
Submission received: 13 August 2021 / Revised: 4 September 2021 / Accepted: 6 September 2021 / Published: 8 September 2021
(This article belongs to the Special Issue Feature Paper in Computers)

Abstract

The Curiosity rover landed on Mars in 2012. One of the instruments onboard the rover is a pair of multispectral cameras known as Mastcams, which act as the eyes of the rover. In this paper, we summarize our recent studies on several interesting image processing projects for the Mastcams. In particular, we address perceptually lossless compression of Mastcam images, debayering and resolution enhancement of Mastcam images, high-resolution stereo and disparity map generation using fused Mastcam images, and improved anomaly detection and pixel clustering using combined left and right Mastcam images. The main goal of this review paper is to raise public awareness of these interesting Mastcam projects and to stimulate interest in the research community to further develop new algorithms for these applications.

1. Introduction

NASA has sent several rovers to Mars over the past few decades. The Sojourner rover landed on 4 July 1997. It operated for only about two months before the communication link was lost, and it traveled slightly more than 100 m. Spirit, also known as the Mars Exploration Rover A (MER-A), landed on 4 January 2004; it lasted six years and traveled 7.73 km. Opportunity (MER-B), a twin of Spirit, was launched on 7 July 2003 and landed on 25 January 2004. Opportunity lasted 8 Martian years (about 15 Earth years) and traveled 45.16 km. The Curiosity rover was launched on 26 November 2011 and landed on 6 August 2012; it is still collecting data and moving around as of April 2021. Perseverance, a rover very similar to Curiosity, landed on 18 February 2021 [1].
This paper focuses on the Curiosity rover, which carries several important instruments. The laser-induced breakdown spectroscopy (LIBS) instrument, ChemCam, performs rock composition analysis from distances as far as seven meters [2]. Another instrument is the pair of mast cameras (Mastcams) [3]. Each camera has nine bands, six of which overlap between the two cameras. The wavelength range covers the blue (445 nm) to the short-wave near-infrared (1012 nm).
The Mastcams can be seen in Figure 1. The right imager has three times higher resolution than the left. As a result, the left camera is usually used for short-range image collection and the right camera for far-field data collection. The various bands of the two Mastcams are shown in Table 1 and Figure 2. There are a total of nine bands in each Mastcam. One can see that, except for the RGB bands, most of the other bands in the left and right cameras do not overlap, meaning that it is possible to generate a 12-band data cube by fusing the left and right bands. The dotted curves in Figure 2 are the “broadband near-IR cutoff filter”, which has a 3 dB bandwidth of 502 to 678 nm; its purpose is to support the Bayer filter in the camera [3]. In a later section, the 12-band cube is used for data clustering and anomaly detection.
The objective of this paper is to briefly review some recent studies carried out by our team for the Mastcams. First, we review our work on perceptually lossless compression of Mastcam images. The motivation of this study was to demonstrate that, with the help of recent compression technologies, it is plausible to adopt perceptually lossless compression (ten-to-one) instead of lossless compression (three-to-one) for NASA’s Mastcam images, which would save three times the precious bandwidth between Mars and Earth. Second, we review our recent study on debayering Mastcam images. The Mastcam pipeline still uses a debayering algorithm developed in 2004; our study shows that some recent debayering algorithms can achieve better artifact reduction and enhanced image quality. Third, we review our work on image enhancement for the left Mastcam images, where both conventional and deep learning approaches were studied. Fourth, we review our past work on stereo imaging and disparity map generation for Mastcam images, in which left and right images are combined for stereo imaging. Fifth, we summarize our study on fusing both Mastcam images to enhance the performance of data clustering and anomaly detection. Finally, we conclude the paper and discuss future research opportunities, including applying image enhancement and stereo imaging with combined left and right images to Mastcam-Z, the new Mastcam imager onboard the Perseverance rover.
We would like to emphasize that one key goal of our paper is to publicize some interesting projects related to the Mastcams on the Curiosity rover, in the hope that this will stimulate interest from the research community to look into these projects and perhaps develop new algorithms to improve the state-of-the-art. Our team worked with the NASA Jet Propulsion Laboratory (JPL) and two other universities on the Mastcam project for more than five years. Few researchers in the world are aware that NASA has archived Mastcam images, as well as data acquired by quite a few other instruments (LIBS, the Alpha Particle X-ray Spectrometer (APXS), etc.), onboard the Mars rover Curiosity. The database is known as the Planetary Data System (PDS) (https://pds.nasa.gov/ accessed on 6 September 2021), and all of these datasets are available to the public free of charge. If researchers are interested in applying new algorithms to demosaic the Mastcam images, there are millions of images available. Another objective of our review paper is to summarize preliminary algorithm improvements in five applications so that interested researchers can read this review paper alone and get a sense of the state-of-the-art algorithms for processing Mastcam images.
The NASA Mastcam projects are very specific applications, and few people are even aware of them. For the five applications, NASA JPL first implemented some baseline algorithms, and our team was the next one to continue the investigations. To the best of our knowledge, no one else has performed detailed investigations in these areas. For instance, for the demosaicing of Mastcam images, NASA used the Malvar-He-Cutler algorithm, which was developed in 2004. Since then, there have been tremendous developments in demosaicing. We worked with NASA JPL to compare a number of conventional and deep learning demosaicing algorithms and eventually convinced NASA that it is probably time to adopt newer algorithms. For the image compression project, NASA is still using the JPEG standard, which was developed in the 1990s; we performed thorough comparative studies and advocated the importance of using perceptually lossless compression. The fusion of left and right Mastcam images had not been done before. Similarly, for anomaly detection and image enhancement, we are the only team working in this area.

2. Perceptually Lossless Compression for Mastcam Images

To date, NASA still compresses the Mastcam images losslessly using JPEG, a technology developed around 1990 [6]. JPEG is computationally efficient; however, it can achieve a compression ratio of at most three to one in its lossless mode. In the past two decades, new compression standards, including JPEG-2000 (J2K) [7], X264 [8], and X265 [9], have been developed. These video codecs can also compress still images, and lossless compression options are present in all of them.
In addition to the above codecs, some researchers developed the lapped transform (LT) [10] and incorporated it into a newer codec known as Daala [11]. Daala can compress both still images and videos, and a lossless option is also available.
The objective of our recent study [12] was to perform a thorough comparative study and to advocate the use of perceptually lossless compression for NASA’s missions. In particular, we evaluated five image codecs: Daala, X265, X264, J2K, and JPEG. The goal was to investigate which of these codecs can attain a 10:1 compression ratio, which we consider perceptually lossless. We emphasize that suitable metrics are required to quantify perceptual performance. In the past, researchers have found that the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, two popular and widely used metrics, do not correlate well with human subjective evaluations. In recent years, metrics known as human visual system (HVS) and HVS with masking (HVSm) [13] were developed, and we adopted them in our Mastcam compression studies. We could also have used the CIELab metric, but did not do so because we wanted to compare with existing compression methods in the literature, which only used PSNR, SSIM, HVS, and HVSm. Moreover, we evaluated the decompressed RGB Mastcam images using subjective assessment. We noticed that perceptually lossless compression can be attained even at 20:1 compression. At 10:1 compression using Daala, the objective HVS and HVSm metrics are 5 to 10 dB higher than those of JPEG.
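To illustrate the kind of rate/quality trade-off involved, the Python sketch below searches for the JPEG quality setting that approaches a 10:1 compression ratio and scores the decompressed image with PSNR and SSIM. This is only a minimal sketch under assumed file names; it is not the pipeline of [12], and the HVS/HVSm metrics used in our study are not included.

```python
# Minimal sketch (not the pipeline of [12]): find the JPEG quality whose
# compressed size best matches a 10:1 ratio, then score the result.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compress_to_ratio(img, target_ratio=10.0):
    """Return (decoded image, achieved ratio, quality) closest to target_ratio."""
    raw_bytes = img.nbytes
    best = None
    for quality in range(5, 100, 5):
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if not ok:
            continue
        ratio = raw_bytes / len(buf)
        if best is None or abs(ratio - target_ratio) < abs(best[0] - target_ratio):
            best = (ratio, quality, buf)
    ratio, quality, buf = best
    return cv2.imdecode(buf, cv2.IMREAD_COLOR), ratio, quality

img = cv2.imread("mastcam_rgb.png")  # hypothetical Mastcam RGB frame
rec, ratio, quality = compress_to_ratio(img)
psnr = peak_signal_noise_ratio(img, rec)
ssim = structural_similarity(img, rec, channel_axis=2)
print(f"quality={quality}, ratio={ratio:.1f}:1, PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```

The same loop can be repeated with other encoders to compare codecs at a fixed compression ratio.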
Our findings are as follows. Details can be found in [12].
  • Comparison of different approaches
    For the nine-band multispectral Mastcam images, we compared several approaches: principal component analysis (PCA), split band (SB), video, and two-step. We observed that the SB approach performed better than the others on actual Mastcam images.
  • Codec comparisons
    In each approach, five codecs were evaluated. In terms of the objective metrics (HVS and HVSm), Daala yielded the best performance among the various codecs. At 10:1 compression, more than 5 dB of improvement was observed using Daala as compared to JPEG, the default codec used by NASA.
  • Computational complexity
    Daala uses the discrete cosine transform (DCT) and is more amenable to parallel processing. J2K is wavelet-based and requires the whole image as input. Although X265 and X264 are also DCT-based, they did not perform well at 10:1 compression in our experiments.
  • Subjective comparisons
    Using visual inspection of the RGB images, we observed almost no visible loss at 10:1 and 20:1 compression for all codecs. However, at higher compression ratios such as 40:1, there are noticeable color distortions and block artifacts in JPEG, X264, and X265. In contrast, Daala and J2K still deliver good results even at 40:1 compression.

3. Debayering for Mastcam Images

The nine bands in each Mastcam camera include the RGB bands. Unlike the other bands, the RGB bands are collected using a Bayer pattern filter, which was first introduced in 1976 [14]. In the past few decades, many debayering algorithms have been developed [15,16,17,18,19]. NASA still uses the Malvar-He-Cutler (MHC) algorithm [20] to demosaic the RGB Mastcam images. Although MHC was developed in 2004, it is an efficient algorithm that can easily be implemented in the camera’s control electronics. In [3], another algorithm known as directional linear minimum mean square-error estimation (DLMMSE) [21] was also compared against the MHC algorithm.
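To make the debayering task concrete, the sketch below reconstructs an RGB image from a single-channel Bayer mosaic using plain bilinear interpolation. This is only an illustrative baseline under an assumed RGGB layout; it is not the MHC algorithm used by NASA, which adds gradient-correction terms on top of such a linear interpolation.

```python
# Minimal bilinear demosaicing sketch for an RGGB Bayer mosaic (illustrative
# baseline only; NASA's Mastcam pipeline uses the MHC interpolation [20]).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """mosaic: 2-D float array sampled with an RGGB Bayer pattern."""
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))
    # Masks marking which pixel locations carry each color.
    r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
    b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0

    def interp(mask, kernel):
        # Normalizing by the interpolated mask fills in the missing samples.
        num = convolve(mosaic * mask, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / np.maximum(den, 1e-8)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```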
Deep learning has gained popularity since 2012. In [22], a joint demosaicing and denoising algorithm was proposed; for ease of reference, we call this algorithm DEMOsaic-Net (DEMONET). Two other deep learning-based demosaicing algorithms [23,24] have been identified as well.
The objective of our recent work [4] was to compare a number of conventional and deep learning demosaicing algorithms and eventually convince NASA that it is probably time to adopt newer algorithms.
In our recent work [4], we thoroughly compared traditional and deep learning-based algorithms for demosaicing Mastcam images. We made two contributions. First, we investigated whether better algorithms, developed after 2004, exist for debayering Mastcam images. In our paper [25], we started this effort and investigated several pixel-level fusion approaches [26]; other methods [27,28,29,30] with publicly available code were also investigated. From NASA’s Planetary Data System (PDS), we extracted 31 representative Mastcam images for our comparative studies. Second, we focused on comparing conventional and deep learning-based demosaicing algorithms. Four recent conventional algorithms [31,32,33,34] were added to the non-deep-learning-based algorithms [19,20,21,25,26,27,28,29,30] in our experiments.
We made several observations in our Mastcam image demosaicing experiments. First, the MHC algorithm still produced reasonable performance on Mastcam images, even though some recent algorithms yielded better results. Second, some deep learning algorithms did not always perform well; only DEMONET generated better performance than the conventional methods, which shows that the performance of demosaicing algorithms depends on the application. Third, DEMONET performed better than the others only for the right Mastcam images; it has comparable performance to a method known as exploitation of color correlation (ECC) [31] for the left Mastcam images.
We compared the following algorithms: local directional interpolation and nonlocal adaptive thresholding (LDI-NAT) [19], MHC [20], DLMMSE [21], Lu and Tan interpolation (LT) [27], adaptive frequency domain (AFD) [28], alternating projections (AP) [29], primary-consistent soft-decision (PCSD) [30], alpha-trimmed mean filter (ATMF) [26], DEMONET [22], fusion using the three best (F3) [25], bilinear, sequential energy minimization (SEM) [24], deep residual learning (DRL) [23], ECC [31], minimized-Laplacian residual interpolation (MLRI) [32], adaptive residual interpolation (ARI) [33], and directional difference regression (DDR) [34].
Since there are no ground-truth demosaiced images, we adopted an objective blind image quality assessment metric known as the natural image quality evaluator (NIQE). Lower NIQE scores mean better performance. Figure 3 shows the NIQE metrics of the various methods. One can see that ECC and DEMONET perform better than the others.
From Figure 4, we see obvious color distortions in the demosaiced images produced by bilinear, MHC, AP, LT, LDI-NAT, F3, and ATMF. One can also see strong zipper artifacts in the images from AFD, AP, DLMMSE, PCSD, LDI-NAT, F3, and ATMF. There are slight color distortions in the results of ECC and MLRI. Finally, the images from DEMONET, ARI, DRL, and SEM are more perceptually pleasing than the others.

4. Mastcam Image Enhancement

The left Mastcam images have three times lower resolution than the right ones. We have tried to improve the spatial resolution of the left images so that the left and right images can be fused for applications such as anomaly detection. It should be noted that no one, including NASA, has done this work before. Here, we summarize two approaches that we have tried. The first is based on image deconvolution, which is a standard technique in image restoration. The second applies deep learning algorithms.

4.1. Model Based Enhancement

In [35], we presented an algorithm to improve the left Mastcam images. There are two steps in our approach. First, a pair of left and right Mastcam bands is used to estimate the point spread function (PSF) with a sparsity-based approach. Second, the estimated PSF is applied to improve the other left bands. Preliminary results using real Mastcam images indicated that the enhancement performance is mixed: in some left images, improvements can be clearly seen, while the results were not as good in others.
From Figure 5, we can clearly observe the sharpening effect in the deblurred image (Figure 5f) compared with the aligned left image (Figure 5e). The estimated kernel in Figure 5c was obtained using a pair of left and right green bands. We can see clear enhancement of the L0R band in Figure 5. However, in some cases in [35], performance degradations were observed.
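As a rough sketch of the second step (applying an estimated PSF to sharpen a left band), the snippet below runs Richardson-Lucy deconvolution from scikit-image. The sparsity-based PSF estimation of [35] is not reproduced here; the Gaussian PSF and the file names are purely illustrative assumptions.

```python
# Sketch of PSF-based deblurring of a left Mastcam band.  The PSF below is a
# placeholder Gaussian; in [35] the PSF is estimated from a left/right band pair.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

left_band = img_as_float(io.imread("left_L0R_aligned.png", as_gray=True))  # hypothetical file
psf = gaussian_psf()                             # stand-in for the estimated PSF
deblurred = richardson_lucy(left_band, psf, 30)  # 30 deconvolution iterations
io.imsave("left_L0R_deblurred.png", (np.clip(deblurred, 0, 1) * 255).astype(np.uint8))
```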
The mixed results suggest a new direction for future research, which may involve deep learning techniques for PSF estimation and robust deblurring.

4.2. Deep Learning Approach

Over the past two decades, a large number of papers have been published on pansharpening, which is the fusion of a high-resolution (HR) panchromatic (pan) image with a low-resolution (LR) multispectral image (MSI) [36,37,38,39,40]. Recently, we proposed an unsupervised network for super-resolution (SR) of hyperspectral images (HSI) [41,42]. Similar to MSI, HSI has found many applications. The key features of our HSI work are as follows. First, the proposed algorithm extracts both the spectral and spatial information from the LR HSI and the HR MSI with two deep learning networks that share the same decoder weights, as shown in Figure 6. Second, sum-to-one and sparsity constraints, two physical properties of HSI and MSI data representations, are enforced. Third, the proposed algorithm directly addresses the challenge of spectral distortion by minimizing the angular difference between these representations. The proposed method is coined the unsupervised sparse Dirichlet network (uSDN). Details of uSDN can be found in our recent work [43].
Two benchmark datasets, CAVE [44] and Harvard [45], were used to evaluate the proposed uSDN; more details can be found in [41,42]. Here, we include results of applying uSDN to Mastcam images. As mentioned before, the right Mastcam has higher resolution than the left. Consequently, the right Mastcam images are treated as the HR MSI and the left images as the LR HSI.
To generate objective metrics, we used the root mean squared error (RMSE) and spectral angle mapper (SAM), which are widely used in the image enhancement and pansharpening literature. Smaller values imply better performance.
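For reference, a compact sketch of these two metrics for (H, W, B) image cubes is shown below; averaging the per-pixel spectral angle over the whole image is one common convention and is an assumption here.

```python
# Sketch of the RMSE and SAM metrics for (H, W, B) image cubes.
import numpy as np

def rmse(reference, estimate):
    return float(np.sqrt(np.mean((reference - estimate) ** 2)))

def sam_degrees(reference, estimate, eps=1e-12):
    """Mean spectral angle (in degrees) between per-pixel spectra."""
    ref = reference.reshape(-1, reference.shape[-1])
    est = estimate.reshape(-1, estimate.shape[-1])
    dot = np.sum(ref * est, axis=1)
    denom = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(angles.mean()))
```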
Figure 7 shows images from our experiments. One can see that the reconstructed image is comparable to the ground truth. Here, we only compare the proposed method with coupled nonnegative matrix factorization (CNMF) [46], which is considered a good algorithm. The results in Table 2 show that the proposed approach outperforms CNMF in both metrics.

5. Stereo Imaging and Disparity Map Generation for Mastcam Images

In the past few years, more research has investigated applying virtual reality and augmented reality tools to Mars rover missions [47,48,49]. For example, the OnSight software was developed by NASA and Microsoft to enable scientists to virtually work on Mars using the Microsoft HoloLens [50]; Mastcam images have been used by OnSight to create a 3D terrain model of Mars. The disparity maps extracted from stereo Mastcam images are important because they provide depth information. Some papers [51,52,53] proposed methods to estimate disparity maps using monocular images. Since the two Mastcam images do not have the same resolution, a generic disparity map estimation using the original Mastcam images may not exploit the full potential of the right Mastcam images, which have three times higher resolution. It would be more beneficial to NASA and other users of Mastcam images if a high-resolution disparity map could be generated.
In [54], we introduced a processing framework that can generate high-resolution disparity maps for Mastcam image pairs. The low-resolution left camera image was enhanced, and the impact of the image enhancement on the disparity map estimation was studied quantitatively. It should be noted that, in our earlier paper [55], we generated stereo images using the Mastcam instruments; however, no quantitative assessment of the impact of the image enhancement was carried out there.
Three algorithms were used to improve the left camera images. Bicubic interpolation [56] was used as the baseline technique. Another method [57] is an adaptation of the technique in [5] with pansharpening [57,58,59,60,61]. Recently, deep learning-based SR techniques [62,63,64] have been developed; we used the enhanced deep super-resolution (EDSR) network [65] as a representative deep learning-based algorithm in our experiments. It should be emphasized that no one, including NASA, had carried out any work related to this stereo generation effort; as a result, we did not have a NASA baseline algorithm to compare with.
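A minimal sketch of the kind of processing involved is given below: the left image is upsampled with bicubic interpolation (the baseline above) and a disparity map is then computed with OpenCV's semi-global block matcher. The file names, the upscaling to the right image size, and the matcher settings are illustrative assumptions rather than the exact pipeline of [54].

```python
# Sketch: bicubic upsampling of the left Mastcam image followed by disparity
# estimation with semi-global block matching.  All parameters are illustrative.
import cv2

left = cv2.imread("mastcam_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("mastcam_right.png", cv2.IMREAD_GRAYSCALE)

# Bring the lower-resolution left image up to the size of the right image.
left_up = cv2.resize(left, (right.shape[1], right.shape[0]),
                     interpolation=cv2.INTER_CUBIC)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=7, P1=8 * 7 * 7, P2=32 * 7 * 7)
disparity = matcher.compute(left_up, right).astype("float32") / 16.0  # fixed-point to pixels
cv2.imwrite("disparity.png",
            cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8"))
```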
Here, we include some comparative results. From Figure 8, we observe that the image quality with EDSR and the pansharpening-based method is better than that of the original and bicubic images.
Figure 9 shows the objective NIQE metric for the various algorithms. It is worth mentioning that even though the pansharpening-based method provides the lowest NIQE values (best performance) and produces visually appealing enhanced images, some pixel regions in its enhanced images do not appear to be well registered at the sub-pixel level. Since the NIQE metric does not take registration into account, it clearly favors the pansharpening-based method over the others, as shown in Figure 9. Other objective metrics (RMSE, PSNR, and SSIM) were used in [54] to demonstrate that the EDSR algorithm performed better than the other methods.
Figure 10 shows the estimated disparity maps with the three image enhancement methods for Image Pair 6 in [54]. Figure 10a shows the estimated disparity map using the original left camera image, Figure 10b–d show the resultant disparity maps with the three methods, and Figure 10e shows the mask used when computing the average absolute error values. According to our paper [54], the disparity map shown in Figure 10d has the best performance. More details can be found in [54].

6. Anomaly Detection Using Mastcam Images

One important role of the Mastcam imagers is to help locate anomalous or interesting rocks so that the rover can drive to a rock and collect samples for further analysis.
A two-step image alignment approach was introduced in [5]. The performance of the proposed approach was demonstrated using more than 100 pairs of Mastcam images selected from over 500,000 images in NASA’s PDS database. As detailed in [5], the fused images improved the performance of anomaly detection and pixel clustering. We would like to emphasize that this anomaly detection work had not been done before by NASA, and hence there is no NASA baseline approach.
Figure 11 illustrates the proposed two-step approach. In the first step, SURF [67] and SIFT [68] features are matched within the image pair, and the RANSAC (random sample consensus) technique [66] is used to obtain an initial image alignment from the matches.
The second step uses the diffeomorphic registration technique [69] to refine the alignment; we observed that this second step achieves subpixel alignment performance. After the alignment, we can then perform anomaly detection and pixel clustering on the constructed multispectral image cubes.
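A bare-bones sketch of the first registration step (feature matching plus a RANSAC-estimated homography) is given below using OpenCV. The diffeomorphic refinement of the second step is not included, and the file names, the ratio-test threshold, and the use of SIFT alone are assumptions.

```python
# Sketch of step 1 of the two-step alignment: SIFT feature matching followed by
# a RANSAC homography estimate, then warping the left band onto the right band.
import cv2
import numpy as np

left = cv2.imread("left_band.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("right_band.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp_l[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# Initial alignment only; the diffeomorphic refinement [69] would follow.
aligned = cv2.warpPerspective(left, H, (right.shape[1], right.shape[0]))
cv2.imwrite("left_aligned.png", aligned)
```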
We used K-means for pixel clustering. The number of clusters was set to six, following the suggestion of the gap statistic method [70]. Figure 12 shows the results; in each figure, we enlarge one clustering region to showcase the performance. A minimal clustering sketch is given after the list below. There are several important observations:
(i)
We observe that the clustering performance improves after the first and second registration steps of our proposed two-step framework;
(ii)
The clustering performance of the two-step registration for the M34-resolution and M100-resolution is comparable;
(iii)
The pansharpened data show the best clustering results with fewer randomly clustered pixels.
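As noted above, here is a minimal sketch of clustering the pixels of a fused multiband cube with K-means. The six-cluster setting follows the gap statistic result quoted above, while the cube name, shape, and synthetic data are assumptions.

```python
# Sketch: K-means clustering of the pixels of a fused 12-band Mastcam cube.
# `cube` is assumed to be an (H, W, 12) float array built by registering and
# stacking the left and right Mastcam bands.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(cube, n_clusters=6, seed=0):
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)

# Example with synthetic data standing in for a real fused cube.
cube = np.random.rand(64, 64, 12).astype(np.float32)
label_map = cluster_pixels(cube)             # (64, 64) array of cluster indices 0..5
print(np.bincount(label_map.ravel()))
```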
Figure 13 displays the anomaly detection results of two LR-pair cases for the three competing methods (the global-RX, local-RX, and NRS methods) applied to the original nine-band data captured by the right Mastcam alone (second row) and to the five twelve-band fused data counterparts (third to seventh rows). There is no ground-truth information about the anomaly targets; consequently, we relied on visual inspection. From Figure 13, we observe better detection results when both the RANSAC and diffeomorphic registration steps are applied than with RANSAC registration alone. Moreover, the results using BDSD and PRACS pansharpening produce less noise than the detection outputs of the purely registration-based MS data.
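To make the detection step concrete, a sketch of the global RX anomaly detector (the Mahalanobis-distance-based detector of [73]) applied to a multiband cube is given below; the cube shape, the synthetic data, and the threshold are assumptions, and local-RX and NRS are not shown.

```python
# Sketch: global RX anomaly detector on an (H, W, B) multiband cube.  Each
# pixel's score is its Mahalanobis distance from the global background
# statistics (mean and covariance computed over all pixels).
import numpy as np

def global_rx(cube):
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    cov_inv = np.linalg.inv(cov)
    centered = pixels - mean
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(h, w)

# Pixels with the largest scores are flagged as candidate anomalies.
cube = np.random.rand(64, 64, 12).astype(np.float32)        # placeholder for a fused cube
score_map = global_rx(cube)
anomaly_mask = score_map > np.percentile(score_map, 99.5)
```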

7. Conclusions and Future Work

With the goals of raising public awareness and stimulating research interest in some interesting Mastcam projects, we briefly reviewed our recent projects related to Mastcam image processing. First, we studied various new image compression algorithms and observed that perceptually lossless compression at a 10:1 ratio can be achieved, which would save a tremendous amount of the scarce bandwidth between the Mars rover and JPL. Second, we compared recent debayering algorithms with the default algorithm used by NASA and found that the recent algorithms yield fewer artifacts. Third, we investigated image enhancement algorithms for the left Mastcam images and observed that, with the help of the right Mastcam images, it is possible to improve the resolution of the left Mastcam images. Fourth, we investigated stereo image and disparity map generation by combining the left and right Mastcam images; we noticed that the fusion of enhanced left and right images can create higher-resolution stereo images and disparity maps. Finally, we investigated the fusion of left and right images to form a 12-band multispectral image cube and its application to pixel clustering and anomaly detection.
The new Mars rover Perseverance, which landed on Mars in 2021, carries a new generation of stereo instrument known as Mastcam-Z (https://mars.nasa.gov/mars2020/spacecraft/instruments/mastcam-z/ accessed on 7 September 2021). We are currently pursuing funding to customize the algorithms described in this paper for Mastcam-Z images. In the stereo imaging work, more research is needed to deal with left and right images captured from different view angles.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Mars Rover. Available online: https://en.wikipedia.org/wiki/Mars_rover#cite_note-Mars2_NSSDC-17 (accessed on 7 September 2021).
  2. Wang, W.; Li, S.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Revisiting the Preprocessing Procedures for Elemental Concentration Estimation based on CHEMCAM LIBS on MARS Rover. In Proceedings of the 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014. [Google Scholar]
  3. Bell, J.F., III; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; et al. The Mars Science Laboratory Curiosity Rover Mast Camera (Mastcam) Instruments: Pre-Flight and In-Flight Calibration, Validation, and Data Archiving. AGU J. Earth Space Sci. 2018. [Google Scholar] [CrossRef] [Green Version]
  4. Kwan, C.; Chou, B.; Bell, J.F., III. Comparison of Deep Learning and Conventional Demosaicing Algorithms for Mastcam Images. Electronics 2019, 8, 308. [Google Scholar] [CrossRef] [Green Version]
  5. Ayhan, B.; Dao, M.; Kwan, C.; Chen, H.; Bell, J.F.; Kidd, R. A Novel Utilization of Image Registration Techniques to Process Mastcam Images in Mars Rover with Applications to Image Fusion, Pixel Clustering, and Anomaly Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4553–4564. [Google Scholar] [CrossRef]
  6. JPEG. Available online: http://en.wikipedia.org/wiki/JPEG (accessed on 30 March 2021).
  7. JPEG-2000. Available online: http://en.wikipedia.org/wiki/JPEG_2000 (accessed on 30 March 2021).
  8. X264. Available online: http://www.videolan.org/developers/x264.html (accessed on 30 March 2021).
  9. X265. Available online: https://www.videolan.org/developers/x265.html (accessed on 30 March 2021).
  10. Tran, T.D.; Liang, J.; Tu, C. Lapped transform via time-domain pre-and post-filtering. IEEE Trans. Signal Process. 2003, 51, 1557–1571. [Google Scholar] [CrossRef]
  11. Daala. Available online: http://xiph.org/daala/ (accessed on 30 March 2021).
  12. Kwan, C.; Larkin, J. Perceptually Lossless Compression for Mastcam Multispectral Images: A Comparative Study. J. Signal Inf. Process. 2019, 10, 139–166. [Google Scholar] [CrossRef] [Green Version]
  13. Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J.; Lukin, V. On between-coefficient contrast masking of DCT basis functions. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics for Consumer Electronics VPQM-07, Scottsdale, AZ, USA, 25–26 January 2007. [Google Scholar]
  14. Bayer, B.E. Color Imaging Array. U.S. Patent 3971065, 20 July 1976. [Google Scholar]
  15. Li, X.; Gunturk, B.; Zhang, L. Image demosaicing: A systematic survey. In Proceedings of the Visual Communications and Image Processing 2008, San Jose, CA, USA, 28 January 2008; Volume 6822. [Google Scholar]
  16. Losson, O.; Macaire, L.; Yang, Y. Comparison of color demosaicing methods. Adv. Imaging Electron Phys. Elsevier 2010, 162, 173–265. [Google Scholar]
  17. Kwan, C.; Chou, B.; Kwan, L.M.; Budavari, B. Debayering RGBW color filter arrays: A pansharpening approach. In Proceedings of the IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York, NY, USA, 19–21 October 2017; pp. 94–100. [Google Scholar]
  18. Kwan, C.; Chou, B. Further Improvement of Debayering Performance of RGBW Color Filter Arrays Using Deep Learning and Pansharpening Techniques. J. Imaging 2019, 5, 68. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Zhang, L.; Wu, X.; Buades, A.; Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 2011, 20. [Google Scholar] [CrossRef] [Green Version]
  20. Malvar, H.S.; He, L.-W.; Cutler, R. High-quality linear interpolation for demosaicing of color images. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; Volume 3, pp. 485–488. [Google Scholar]
  21. Zhang, L.; Wu, X. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 2005, 14, 2167–2178. [Google Scholar] [CrossRef]
  22. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph 2016, 35. [Google Scholar] [CrossRef]
  23. Tan, R.; Zhang, K.; Zuo, W.; Zhang, L. Color image demosaicking via deep residual learning. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 793–798. [Google Scholar]
  24. Klatzer, T.; Hammernik, K.; Knobelreiter, P.; Pock, T. Learning joint demosaicing and denoising based on sequential energy minimization. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Evanston, IL, USA, 13–15 May 2016; pp. 1–11. [Google Scholar]
  25. Kwan, C.; Chou, B.; Kwan, L.M.; Larkin, J.; Ayhan, B.; Bell, J.F.; Kerner, H. Demosaicking enhancement using pixel-level fusion. J. Signal Image Video Process. 2018. [Google Scholar] [CrossRef]
  26. Bednar, J.; Watt, T. Alpha-trimmed means and their relationship to median filters. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 145–153. [Google Scholar] [CrossRef]
  27. Lu, W.; Tan, Y.P. Color filter array demosaicking: New method and performance measures. IEEE Trans. Image Process. 2003, 12, 1194–1210. [Google Scholar]
  28. Dubois, E. Frequency-domain methods for demosaicking of Bayer-sampled color images. IEEE Signal Proc. Lett. 2005, 12, 847–850. [Google Scholar] [CrossRef]
  29. Gunturk, B.; Altunbasak, Y.; Mersereau, R.M. Color plane interpolation using alternating projections. IEEE Trans. Image Process. 2002, 11, 997–1013. [Google Scholar] [CrossRef] [PubMed]
  30. Wu, X.; Zhang, N. Primary-consistent soft-decision color demosaicking for digital cameras. IEEE Trans. Image Process. 2004, 13, 1263–1274. [Google Scholar] [CrossRef] [PubMed]
  31. Jaiswal, S.P.; Au, O.C.; Jakhetiya, V.; Yuan, Y.; Yang, H. Exploitation of inter-color correlation for color image demosaicking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1812–1816. [Google Scholar]
  32. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Beyond color difference: Residual interpolation for color image demosaicking. IEEE Trans. Image Process. 2016, 25, 1288–1300. [Google Scholar] [CrossRef]
  33. Monno, Y.; Kiku, D.; Tanaka, M.; Okutomi, M. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking. Sensors 2017, 17, 2787. [Google Scholar] [CrossRef] [Green Version]
  34. Wu, J.; Timofte, R.; Gool, L.V. Demosaicing based on directional difference regression and efficient regression priors. IEEE Trans. Image Process. 2016, 25, 3862–3874. [Google Scholar] [CrossRef]
  35. Kwan, C.; Dao, M.; Chou, B.; Kwan, L.M.; Ayhan, B. Mastcam Image Enhancement Using Estimated Point Spread Functions. In Proceedings of the IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York, NY, USA, 19–21 October 2017; pp. 186–191. [Google Scholar]
  36. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. Mtf-tailored multiscale fusion of high-resolution ms and pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  37. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of ms+ pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45. [Google Scholar] [CrossRef]
  38. Akhtar, N.; Shafait, F.; Mian, A. Bayesian sparse representation for hyperspectral image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3631–3640. [Google Scholar]
  39. Akhtar, N.; Shafait, F.; Mian, A. Hierarchical beta process with Gaussian process prior for hyperspectral image super resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 103–120. [Google Scholar]
  40. Borengasser, M.; Hungate, W.S.; Watkins, R. Hyperspectral Remote Sensing: Principles and Applications; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  41. Qu, Y.; Qi, H.; Kwan, C. Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2511–2520. [Google Scholar]
  42. Qu, Y.; Qi, H.; Kwan, C. Unsupervised and Unregistered Hyperspectral Image Super-Resolution with Mutual Dirichlet-Net. arXiv 2019, arXiv:1904.12175. [Google Scholar]
  43. Qu, Y.; Qi, H.; Kwan, C. Application of Deep Learning Approaches to Enhancing Mastcam Images. In Recent Advances in Image Restoration with Application to Real World Problems; InTech: Ahmedabad, India, 2020. [Google Scholar]
  44. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200. [Google Scholar]
  46. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2017, 50, 528–537. [Google Scholar] [CrossRef]
  47. Arya, M.; Hassan, S.; Binjola, S.; Verma, P. Exploration of Mars Using Augmented Reality. In Proceedings of the International Conference on Intelligent Computing and Applications, Wuhan, China, 15–18 August 2018; Springer: Singapore. [Google Scholar]
  48. Casini, A.E.; Maggiore, P.; Viola, N.; Basso, V.; Ferrino, M.; Hoffman, J.A.; Cowley, A. Analysis of a Moon outpost for Mars enabling technologies through a Virtual Reality environment. Acta Astronaut. 2018, 143, 353–361. [Google Scholar] [CrossRef]
  49. Mars Virtual Reality Software Wins NASA Award. Available online: https://www.jpl.nasa.gov/news/news.php?feature=7249 (accessed on 30 March 2021).
  50. Boyle, R. NASA uses Microsoft’s HoloLens and ProtoSpace to build its Next Mars rover in augmented reality. GeekWire 2016. Available online: https://www.geekwire.com/2016/nasa-uses-microsoft-hololens-build-mars-rover-augmented-reality/ (accessed on 7 September 2021).
  51. Alhashim, I.; Wonka, P. High Quality Monocular Depth Estimation via Transfer Learning. arXiv 2018, arXiv:1812.11941. [Google Scholar]
  52. Poggi, M.; Tosi, F.; Mattoccia, S. Learning monocular depth estimation with unsupervised trinocular assumptions. In Proceedings of the IEEE International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018. [Google Scholar]
  53. Godard, C.; Mac Aodha, O.; Brostow, G.J. Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  54. Ayhan, B.; Kwan, C. Mastcam Image Resolution Enhancement with Application to Disparity Map Generation for Stereo Images with Different Resolutions. Sensors 2019, 19, 3526. [Google Scholar] [CrossRef] [Green Version]
  55. Kwan, C.; Chou, B.; Ayhan, B. Enhancing Stereo Image Formation and Depth Map Estimation for Mastcam Images. In Proceedings of the IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York, NY, USA, 8–10 November 2018. [Google Scholar]
  56. Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160. [Google Scholar] [CrossRef] [Green Version]
  57. Qu, Y.; Qi, H.; Ayhan, B.; Kwan, C.; Kidd, R. Does Multispectral/Hyperspectral Pansharpening Improve the Performance of Anomaly Detection? In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 6130–6133. [Google Scholar]
  58. Kwan, C.; Budavari, B.; Bovik, A.C.; Marchisio, G. Blind Quality Assessment of Fused WorldView-3 Images by Using the Combinations of Pansharpening and Hypersharpening Paradigms. IEEE Geosci. Remote Sens. Lett. 2017, 1835–1839. [Google Scholar] [CrossRef]
  59. Loncan, L.; de Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 27–46. [Google Scholar] [CrossRef] [Green Version]
  60. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 2565–2586. [Google Scholar] [CrossRef]
  61. Kwan, C.; Budavari, B.; Dao, M.; Ayhan, B.; Bell, J.F. Pansharpening of Mastcam images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5117–5120. [Google Scholar]
  62. Dong, C.; Loy, C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 184–199. [Google Scholar]
  63. Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; Huang, T. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718. [Google Scholar]
  64. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  65. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  66. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  67. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. Comput. Vis. Image Underst. (CVIU) 2008, 110, 346–359. [Google Scholar] [CrossRef]
  68. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  69. Chen, H.; Goela, A.; Garvin, G.J.; Li, S. A Parameterization of Deformation Fields for Diffeomorphic Image Registration and Its Application to Myocardial Delineation. Med. Image Comput. Comput.-Assist. Interv. 2010, 13, 340–348. [Google Scholar] [PubMed]
  70. Tibshirani, R.; Walther, G.; Hastie, T. Estimating the number of data clusters via the gap statistic. J. R. Stat. Soc. B 2001, 63, 411–423. [Google Scholar] [CrossRef]
  71. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE Pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
  72. Choi, J.; Yu, K.; Kim, Y. A new adaptive component-substitution-based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309. [Google Scholar] [CrossRef]
  73. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Proc. 1990, 38, 1760–1770. [Google Scholar] [CrossRef]
  74. Li, W.; Tramel, E.W.; Prasad, S.; Fowler, J.E. Nearest regularized subspace for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 477–489. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The Mars rover—Curiosity, and its onboard instruments [4]. Mastcams are located just below the white box near the top of the mast.
Figure 2. Spectral response curves for the left eye (top panel) and the right eye (bottom panel) [5].
Figure 3. Mean of NIQE scores of demosaiced R, G, and B bands for 16 left images using all methods. MHC is the default algorithm used by NASA.
Figure 4. Debayered images for left Mastcam Image 1. MHC is the default algorithm used by NASA.
Figure 5. Image enhancement performance of an LR-pair on sol 0100, taken on 16 November 2012, for the L0R filter band aligned image using the PSF estimated from the 0G filter bands. (a) Original R0G band; (b) L0G band; (c) estimated PSF using the L0G image in (b) and the R0G image in (a); (d) R0R band; (e) L0R band, PSNR = 24.78 dB; (f) enhanced version of (e) using the PSF estimated in (c), PSNR = 30.08 dB.
Figure 6. Simplified architecture of the proposed uSDN.
Figure 7. Results of pansharpening for Mars images. The left column (a,c,e) shows the original images; the right column (b,d,f) shows the zoomed-in views of the blue rectangle areas in the left column. The first row (a,b) shows the third band from the left camera. The second row (c,d) shows the corresponding reconstructed results. The third row (e,f) shows the third band from the right camera.
Figure 8. Image enhancements on 0183ML0009930000105284E01_DRCX_0PCT.png. (a) Original left camera image; (b) bicubic-enhanced left camera image; (c) pansharpening-based method enhanced left camera image; (d) EDSR enhanced left camera image.
Figure 9. Natural image quality evaluator (NIQE) metric results for enhanced “original left Mastcam images” (scale: ×2) by the bicubic interpolation, pansharpening-based method, and EDSR.
Figure 10. Disparity map estimations with the three methods and the mask for computing average absolute error. (a) Ground truth disparity map; (b) disparity map (bicubic interpolation); (c) disparity map (pansharpening-based method); (d) disparity map (EDSR); (e) mask for computing average absolute error.
Figure 11. A two-step image alignment approach to registering left and right images.
Figure 12. Clustering results with six classes of an LR-pair on sol 0812 taken on 18 November 2014. (a) Original RGB right image; (b) original RGB left image; (c) using nine-band right camera MS cube; (d) using twelve-band MS cube after first registration step with lower (M-34) resolution; (e) using twelve-band MS cube after the second registration step with lower resolution; (f) using twelve-band MS cube after the second registration step with higher (M-100) resolution; (g) using pan-sharpened images by band dependent spatial detail (BDSD) [71]; and (h) using pan-sharpened images by partial replacement adaptive CS (PRACS) [72].
Figure 13. Comparison of anomaly detection performance of an LR-pair on sol 1138, taken on 19 October 2015. The first row shows the RGB left and right images; the second to seventh rows are the anomaly detection results of the six MS data versions listed in [5], in which the first, second, and third columns are the results of the global-RX, local-RX [73], and NRS [74] methods, respectively.
Table 1. Mastcam bands [4]. Each camera has nine bands, six of which overlap between the two cameras.
The Left Mastcam                  The Right Mastcam
Filter    Wavelength (nm)         Filter    Wavelength (nm)
L2        445                     R2        447
L0B       495                     R0B       493
L1        527                     R1        527
L0G       554                     R0G       551
L0R       640                     R0R       638
L4        676                     R3        805
L3        751                     R4        908
L5        867                     R5        937
L6        1012                    R6        1013
Table 2. Performance metrics using two methods.
Method      RMSE     SAM
CNMF        0.056    2.48
Proposed    0.033    2.09
