Advanced Measures for Imaging System Performance and Image Quality

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 19268

Special Issue Editors


Dr. Alexandra Psarrou
Guest Editor
Department of Computer Science and Engineering, University of Westminster, 101 New Cavendish St, Fitzrovia, London W1W 6XH, UK
Interests: deep neural networks; image quality; image systems performance; automotive applications; visual motion modelling and recognition

Prof. Dr. Sophie Triantaphillidou
Guest Editor
Department of Computer Science and Engineering, University of Westminster, 101 New Cavendish St, Fitzrovia, London W1W 6XH, UK
Interests: imaging system modelling and performance; image quality metrics; colour imaging; automated vision

Dr. Alessandro Artusi
Guest Editor
Team Leader of DeepCamera MRG at CYENS, Nicosia CY-1011, Cyprus
Interests: computer graphics; image processing; imaging; image/video encoding; image/video quality evaluation; deep learning; computer vision

Special Issue Information

Dear Colleagues,

Imaging systems are integral to our everyday experience, from mobile devices capturing personal photographs and cameras on board self-driving cars, in surveillance environments, and in smart cities, to virtual- and mixed-reality technologies and all forms of entertainment. The measurement of imaging system performance and the evaluation of the resulting image quality are essential in specifying system requirements and in optimizing system outputs for human or machine (vision) consumption.

Advances in imaging technologies (for example, sensors and image signal processors), as well as the wide use of deep learning networks for image understanding, require imaging system performance and quality measures that can describe the imaging system variables and outputs with the appropriate accuracy, precision, and range, while remaining adaptable to the environments and conditions in which the systems operate.

This Special Issue addresses research on advanced imaging system performance and image quality measures that are suitable for the quantification of contemporary and future imaging systems. Papers discussing mechanistic/physical, statistical, and neural network-based evaluation and quantification methods, or other novel relevant techniques, are welcome.

Dr. Alexandra Psarrou
Prof. Dr. Sophie Triantaphillidou
Dr. Alessandro Artusi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 990 KiB  
Article
No-Reference Video Quality Assessment Using the Temporal Statistics of Global and Local Image Features
by Domonkos Varga
Sensors 2022, 22(24), 9696; https://doi.org/10.3390/s22249696 - 10 Dec 2022
Cited by 3 | Viewed by 4635
Abstract
During acquisition, storage, and transmission, the quality of digital videos degrades significantly. Low-quality videos lead to the failure of many computer vision applications, such as object tracking or detection, intelligent surveillance, etc. Over the years, many different features have been developed to resolve the problem of no-reference video quality assessment (NR-VQA). In this paper, we propose a novel NR-VQA algorithm that integrates the fusion of temporal statistics of local and global image features with an ensemble learning framework in a single architecture. Namely, the temporal statistics of global features reflect all parts of the video frames, while the temporal statistics of local features reflect the details. Specifically, we apply a broad spectrum of statistics of local and global features to characterize the variety of possible video distortions. To study the effectiveness of the introduced method, we conducted experiments on two large benchmark databases containing authentic distortions, i.e., KoNViD-1k and LIVE VQC, and compared it to 14 other well-known NR-VQA algorithms. The experimental results show that the proposed method achieves significantly improved results on the considered benchmark datasets, exhibiting clear progress in performance over other recent NR-VQA approaches.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)
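The idea of pooling per-frame descriptors into temporal statistics can be illustrated with a minimal sketch. The per-frame features below (mean luminance, contrast, gradient energy) are illustrative stand-ins, not the paper's actual global/local descriptors, and the mean/std pooling is one simple choice of temporal statistic:

```python
import numpy as np

def temporal_feature_statistics(frames):
    """Pool simple per-frame global features over time.

    frames: array of shape (T, H, W), one grayscale frame per time step.
    Returns a single quality-aware feature vector for the whole video.
    """
    per_frame = []
    for f in frames:
        gy, gx = np.gradient(f.astype(np.float64))
        # Three illustrative global features per frame.
        per_frame.append([f.mean(), f.std(), np.mean(gx**2 + gy**2)])
    per_frame = np.asarray(per_frame)  # shape (T, 3)
    # Temporal statistics: mean and std of each feature across frames.
    return np.concatenate([per_frame.mean(axis=0), per_frame.std(axis=0)])

video = np.random.default_rng(0).integers(0, 256, size=(8, 32, 32))
feats = temporal_feature_statistics(video)
print(feats.shape)  # (6,) -> one feature vector per video
```

In a full NR-VQA pipeline, such vectors would be fed to a trained regressor (the paper uses an ensemble learning framework) to predict the quality score.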

16 pages, 6742 KiB  
Article
A High-Resolution, Wide-Swath SAR Imaging System Based on Tandem SAR Satellites
by Liwei Sun and Chunsheng Li
Sensors 2022, 22(20), 7747; https://doi.org/10.3390/s22207747 - 12 Oct 2022
Cited by 3 | Viewed by 1703
Abstract
For spaceborne synthetic aperture radar (SAR), it is difficult to obtain high resolution and a wide swath at the same time. This paper proposes a novel imaging system based on tandem SAR satellites, where one satellite obtains a coarse-resolution, wide-swath image in scanning mode, and the other obtains the undersampled echo from the same swath. High resolution is achieved by associating the tandem SARs' echoes and using a minimum-energy-based algorithm. Finally, a high-resolution, wide-swath SAR system is designed, and its imaging performance is verified with simulated data and real airborne SAR data.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)

21 pages, 2099 KiB  
Article
A Human Visual System Inspired No-Reference Image Quality Assessment Method Based on Local Feature Descriptors
by Domonkos Varga
Sensors 2022, 22(18), 6775; https://doi.org/10.3390/s22186775 - 7 Sep 2022
Cited by 7 | Viewed by 2935
Abstract
Objective quality assessment of natural images plays a key role in many fields related to imaging and sensor technology. This paper therefore introduces an innovative quality-aware feature extraction method for no-reference image quality assessment (NR-IQA). To be more specific, a sequence of various HVS-inspired filters was applied to the color channels of an input image to enhance those statistical regularities in the image to which the human visual system is sensitive. From the obtained feature maps, the statistics of a wide range of local feature descriptors were extracted to compile quality-aware features, since these descriptors treat images from the human visual system's point of view. To prove the efficiency of the proposed method, it was compared to 16 state-of-the-art NR-IQA techniques on five large benchmark databases, i.e., CLIVE, KonIQ-10k, SPAQ, TID2013, and KADID-10k. The proposed method was demonstrated to be superior to the state-of-the-art in terms of three different performance indices.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)

24 pages, 5490 KiB  
Article
A Novel Integration of Face-Recognition Algorithms with a Soft Voting Scheme for Efficiently Tracking Missing Person in Challenging Large-Gathering Scenarios
by Adnan Nadeem, Muhammad Ashraf, Kashif Rizwan, Nauman Qadeer, Ali AlZahrani, Amir Mehmood and Qammer H. Abbasi
Sensors 2022, 22(3), 1153; https://doi.org/10.3390/s22031153 - 3 Feb 2022
Cited by 11 | Viewed by 4201
Abstract
The probability of losing vulnerable companions, such as children or elderly people, in large gatherings is high, and tracking them is challenging. We propose a novel integration of face-recognition algorithms with a soft voting scheme, applied to low-resolution cropped images of detected faces, in order to locate missing persons in challenging large-crowd gatherings. We considered the large-crowd gathering scenarios at Al Nabvi mosque, Madinah: a highly uncontrolled environment with a low-resolution image data set gathered from moving cameras. The proposed model first performs real-time face detection on camera-captured images; it then uses the missing person's profile face image, applies well-known face-recognition algorithms for personal identification, and combines their predictions to obtain a more mature prediction. The presence of a missing person is determined from a small set of consecutive frames. The novelty of this work lies in running several recognition algorithms in parallel and combining their predictions with a unique soft-voting scheme, which not only provides a mature prediction with spatio-temporal values but also mitigates the false results of the individual recognition algorithms. The experimental results of our model showed reasonably good accuracy of missing-person identification in an extremely challenging large-gathering scenario.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)
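The general mechanism of soft voting can be sketched as follows. This is a generic (weighted) probability-averaging combiner, not the paper's specific scheme; the recognizer outputs and weights are made-up illustrative values:

```python
import numpy as np

def soft_vote(probabilities, weights=None):
    """Combine per-recognizer probability vectors by (weighted) averaging.

    probabilities: shape (n_recognizers, n_identities), each row a
    probability distribution over candidate identities.
    Returns the winning identity index and the combined distribution.
    """
    p = np.asarray(probabilities, dtype=float)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, dtype=float)
    combined = np.average(p, axis=0, weights=w)
    return int(np.argmax(combined)), combined

# Three recognizers score a detected face against three identities.
# A single recognizer (the third) would pick identity 0; averaging the
# probabilities lets the two more confident votes for identity 1 win.
preds = [[0.2, 0.5, 0.3],
         [0.1, 0.6, 0.3],
         [0.4, 0.3, 0.3]]
identity, combined = soft_vote(preds)
print(identity)  # 1
```

Averaging full probability vectors (rather than hard argmax votes) is what lets a single recognizer's false result be outvoted by the others.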

16 pages, 4805 KiB  
Article
Spectral Reconstruction Using an Iteratively Reweighted Regulated Model from Two Illumination Camera Responses
by Zhen Liu, Kaida Xiao, Michael R. Pointer, Qiang Liu, Changjun Li, Ruili He and Xuejun Xie
Sensors 2021, 21(23), 7911; https://doi.org/10.3390/s21237911 - 27 Nov 2021
Cited by 8 | Viewed by 2191
Abstract
An improved spectral reflectance estimation method was developed to transform captured RGB images into spectral reflectance. The novelty of our method is an iteratively reweighted regulated model that combines polynomial expansion signals, developed for spectral reflectance estimation, with a cross-polarized imaging system, used to eliminate glare and specular highlights. Two RGB images are captured under two illumination conditions. The method was tested using ColorChecker charts. The results demonstrate that the proposed method significantly improves accuracy in both spectral and colorimetric terms: it achieves 23.8% better accuracy in mean CIEDE2000 color difference and 24.6% better accuracy in RMS error compared with the classic regularized least squares (RLS) method. The proposed method is sufficiently accurate in predicting spectral properties, with performance within an acceptable range, i.e., the typical customer tolerance of less than 3 ΔE units in the graphic arts industry.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)
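For context, the classic RLS baseline that the paper improves upon can be sketched in a few lines: learn a linear map from camera responses to reflectance spectra on training patches, with a ridge penalty for stability. All dimensions, data, and the regularization weight below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training data: 24 patches, 31-band reflectances, 6 camera responses
# per patch (RGB under two illuminants, as in the two-capture setup).
R_train = rng.uniform(0.05, 0.95, size=(31, 24))  # reflectance spectra
C_train = rng.uniform(size=(6, 24))               # camera responses

lam = 1e-3  # ridge regularization weight (illustrative)
# Regularized least squares: M minimizes ||R - M C||^2 + lam ||M||^2,
# giving the closed form M = R C^T (C C^T + lam I)^{-1}.
M = R_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T + lam * np.eye(6))

# Estimate the spectrum of a new patch from its six camera responses.
c_new = rng.uniform(size=(6, 1))
r_est = M @ c_new
print(r_est.shape)  # (31, 1)
```

The paper's iteratively reweighted model refines this baseline by reweighting the fit across iterations and by expanding the responses polynomially, which is what yields the reported spectral and colorimetric gains.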

21 pages, 1663 KiB  
Article
Multi-Dimensional Feature Fusion Network for No-Reference Quality Assessment of In-the-Wild Videos
by Jiu Jiang, Xianpei Wang, Bowen Li, Meng Tian and Hongtai Yao
Sensors 2021, 21(16), 5322; https://doi.org/10.3390/s21165322 - 6 Aug 2021
Cited by 6 | Viewed by 2478
Abstract
Over the past few decades, video quality assessment (VQA) has become a valuable research field. The perception of in-the-wild video quality without reference is mainly challenged by hybrid distortions with dynamic variations and by the movement of the content. To address this barrier, we propose a no-reference video quality assessment (NR-VQA) method that adds enhanced awareness of dynamic information to the perception of static objects. Specifically, we use convolutional networks of different dimensions to extract low-level static-dynamic fusion features from video clips and subsequently implement alignment, followed by a temporal memory module consisting of recurrent neural network branches and fully connected (FC) branches to construct feature associations in a time series. Meanwhile, in order to simulate human visual habits, we built a parametric adaptive network structure to obtain the final score. We further validated the proposed method on four datasets (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) to test its generalization ability. Extensive experiments demonstrate that the proposed method not only outperforms other NR-VQA methods in overall performance on mixed datasets but also achieves competitive performance on individual datasets compared to existing state-of-the-art methods.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)
