Topic Editors

Dr. Giuliana Ramella, National Research Council, Institute for the Applications of Calculus "M. Picone", Via P. Castellino 111, 80131 Naples, Italy
Dr. Isabella Torcicollo, National Research Council, Institute for the Applications of Calculus "M. Picone", Via P. Castellino 111, 80131 Naples, Italy

Color Image Processing: Models and Methods (CIP: MM)

Abstract submission deadline
30 May 2025
Manuscript submission deadline
30 July 2025

Topic Information

Dear Colleagues,

Color information plays a crucial role in digital image processing: it is a robust descriptor that can often improve data compression and simplify scene understanding for both humans and automatic vision systems. Research on color presents new challenges, since it calls for extending the currently available methods, most of which are limited to gray-level images. Furthermore, the multivariate nature of color image data requires the design of appropriate models and methods at both the mathematical and perceptual/computational levels. As a result, Color Image Processing (CIP) has become an active research area, as witnessed by the many papers published over the past two decades. It finds wide application in numerous fields such as, among many others, Agriculture, Biomedicine, Cultural Heritage, Remote Sensing, Defense, and Security.

This Topic aims to give an overview of the state of the art in color image processing and to indicate present and future directions in several application contexts. Specifically, the Topic focuses on two aspects that are traditionally considered separately: mathematical modeling and the computational design of methods. Papers presenting reviews, alternative perspectives, or new models/methods in the field of CIP that address both of these aspects are welcome. All submitted papers will be peer-reviewed and selected on the basis of both their quality and their relevance to the theme of this Topic.

We invite original contributions that provide novel solutions to these challenging problems. Submitted papers can address theoretical or practical aspects of progress and directions in CIP.

Issues of interest include, but are not limited to:

  • Information Theory and Entropy-based methods for CIP
  • Color space models
  • Mathematical modeling for CIP
  • Numerical approximation for CIP
  • Color image enhancement, segmentation, and resizing
  • Data augmentation for CIP
  • Deep learning for CIP
  • Color content-based image retrieval
  • Color image quality assessment
  • Biometric CIP
  • Color medical imaging
  • CIP Models and Methods applied to Agriculture, Cultural Heritage, Remote Sensing, Defense, and Security

Dr. Giuliana Ramella
Dr. Isabella Torcicollo
Topic Editors

Keywords

  • color images
  • mathematical models
  • computational methods
  • color visual processing

Participating Journals

Journal Name                   Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences (applsci)     2.5             5.3         2011            17.8 days                 CHF 2400
Computation (computation)      1.9             3.5         2013            19.7 days                 CHF 1800
Entropy (entropy)              2.1             4.9         1999            22.4 days                 CHF 2600
Journal of Imaging (jimaging)  2.7             5.9         2015            20.9 days                 CHF 1800
Optics (optics)                1.1             2.2         2020            19.6 days                 CHF 1200

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of this by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your priority with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (11 papers)

24 pages, 8204 KiB  
Article
A Comprehensive Method for Example-Based Color Transfer with Holistic–Local Balancing and Unit-Wise Riemannian Information Gradient Acceleration
by Zeyu Wang, Jialun Zhou, Song Wang and Ning Wang
Entropy 2024, 26(11), 918; https://doi.org/10.3390/e26110918 - 29 Oct 2024
Abstract
Color transfer, an essential technique in image editing, has recently received significant attention. However, achieving a balance between holistic color style transfer and local detail refinement remains a challenging task. This paper proposes an innovative color transfer method, named BHL, which stands for Balanced consideration of both Holistic transformation and Local refinement. The BHL method employs a statistical framework to address the challenge of achieving a balance between holistic color transfer and the preservation of fine details during the color transfer process. Holistic color transformation is achieved using optimal transport theory within the generalized Gaussian modeling framework. The local refinement module adjusts color and texture details on a per-pixel basis using a Gaussian Mixture Model (GMM). To address the high computational complexity inherent in complex statistical modeling, a parameter estimation method called the unit-wise Riemannian information gradient (uRIG) method is introduced. The uRIG method significantly reduces the computational burden through the second-order acceleration effect of the Fisher information metric. Comprehensive experiments demonstrate that the BHL method outperforms state-of-the-art techniques in both visual quality and objective evaluation criteria, even under stringent time constraints. Remarkably, the BHL method processes high-resolution images in an average of 4.874 s, achieving the fastest processing time compared to the baselines. The BHL method represents a significant advancement in the field of color transfer, offering a balanced approach that combines holistic transformation and local refinement while maintaining efficiency and high visual quality. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
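The BHL pipeline itself (generalized Gaussian modeling, optimal transport, uRIG acceleration) is beyond a few lines, but the holistic transformation it builds on generalizes the classic statistical color-transfer baseline: match each channel's mean and standard deviation of the source image to those of the reference. A minimal sketch of that baseline, not of BHL:

```python
import numpy as np

def stat_color_transfer(source, target):
    """Match per-channel mean/std of `source` (HxWx3, values in [0,1]) to `target`."""
    src = np.asarray(source, dtype=np.float64)
    tgt = np.asarray(target, dtype=np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8  # guard flat channels
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # shift/scale source statistics onto the target's
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0.0, 1.0)
```

Methods such as BHL improve on this global matching precisely where it fails: local details that a single per-channel affine map cannot preserve.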

11 pages, 1320 KiB  
Article
Mobility Support with Intelligent Obstacle Detection for Enhanced Safety
by Jong Hyeok Han, Inkwon Yoon, Hyun Soo Kim, Ye Bin Jeong, Ji Hwan Maeng, Jinseok Park and Hee-Jae Jeon
Optics 2024, 5(4), 434-444; https://doi.org/10.3390/opt5040032 - 24 Oct 2024
Abstract
In recent years, assistive technology usage among the visually impaired has risen significantly worldwide. While traditional aids like guide dogs and white canes have limitations, recent innovations like RFID-based indoor navigation systems and alternative sensory solutions show promise. Nevertheless, there is a need for a user-friendly, comprehensive system to address spatial orientation challenges for the visually impaired. This research addresses the significance of developing a deep learning-based walking assistance device for visually impaired individuals to enhance their safety during mobility. The proposed system utilizes real-time ultrasonic sensors attached to a cane to detect obstacles, thus reducing collision risks. It further offers real-time recognition and analysis of diverse obstacles, providing immediate feedback to the user. A camera distinguishes obstacle types and conveys relevant information through voice assistance. The system’s efficacy was confirmed with a 90–98% object recognition rate in tests involving various obstacles. This research holds importance in providing safe mobility, promoting independence, leveraging modern technology, and fostering social inclusion for visually impaired individuals. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))

16 pages, 2995 KiB  
Article
Fundus-DANet: Dilated Convolution and Fusion Attention Mechanism for Multilabel Retinal Fundus Image Classification
by Yang Yan, Liu Yang and Wenbo Huang
Appl. Sci. 2024, 14(18), 8446; https://doi.org/10.3390/app14188446 - 19 Sep 2024
Abstract
The difficulty of classifying retinal fundus images with one or more illnesses present or missing is known as fundus multi-lesion classification. The challenges faced by current approaches include the inability to extract comparable morphological features from images of different lesions and the inability to resolve the issue of the same lesion, which presents significant feature variances due to grading disparities. This paper proposes a multi-disease recognition network model, Fundus-DANet, based on the dilated convolution. It has two sub-modules to address the aforementioned issues: the interclass learning module (ILM) and the dilated-convolution convolutional block attention module (DA-CBAM). The DA-CBAM uses a convolutional block attention module (CBAM) and dilated convolution to extract and merge multiscale information from images. The ILM uses the channel attention mechanism to map the features to lower dimensions, facilitating exploring latent relationships between various categories. The results demonstrate that this model outperforms previous models in classifying fundus multilocular lesions in the OIA-ODIR dataset with 93% accuracy. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
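Dilated convolution, the building block the abstract names, enlarges the receptive field by inserting gaps of `dilation - 1` samples between kernel taps without adding parameters. A minimal 1-D illustration (not the Fundus-DANet code):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution with `dilation - 1` skipped samples between taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field of one output
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

With a 3-tap kernel, dilation 2 covers 5 input samples per output, which is how stacked dilated layers see large retinal structures cheaply.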

25 pages, 43210 KiB  
Article
Chromaticity Analysis on Ethnic Minority Color Landscape Culture in Tibetan Area: A Semantic Differential Approach
by Liyun Zeng, Rita Yi Man Li and Rongjia Li
Appl. Sci. 2024, 14(11), 4672; https://doi.org/10.3390/app14114672 - 29 May 2024
Abstract
The color–area ratio in ethnic minority areas is one way to perceive cultural elements visually. The openness of spaces, sense of rhythm, and richness of color affect people’s emotions and induce different psychological perceptions. Despite many ethnic minority areas being more colorful than the main traits of Han, there is no systematic quantitative study for the color elements in ethnic minority areas’ landscapes, not to mention the research on the color–area ratio, main and auxiliary colors and embellishments, and layouts. Therefore, this paper studies the color–area ratio of Xiangcheng County in the Tibetan area of Ganzi Prefecture in Sichuan Province. Colors are extracted and quantitatively analyzed from six different aspects using the semantic differential (SD) method and color quantitative analysis method. In this way, low-scored (B group) and high-scored (A group) color landscape samples were extracted from the landscape image library and quantitatively analyzed by ColorImpact V4.1.2. The results show that the ethnic minority group’s color layout is characterized by richer colors and stronger contrasts than the Han group. This paper contributes to academic scholarship regarding color culture in ethnic minority areas. It also provides theoretical support for preserving ethnic minority groups’ cultural heritage and practical insights into color planning for urban and landscape designs. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
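The color–area ratio the authors quantify can be approximated by binning pixel hues and measuring each bin's share of the image; the largest bin is a crude stand-in for the main color, the rest for auxiliary colors and embellishments. A rough sketch, not the paper's ColorImpact workflow:

```python
import colorsys
import numpy as np

def color_area_ratios(rgb_image, n_bins=12):
    """Quantize pixel hues of an HxWx3 RGB image (values in [0,1]) into `n_bins`
    bins and return each bin's fraction of the total image area."""
    h, w, _ = rgb_image.shape
    hues = np.array([
        colorsys.rgb_to_hsv(*rgb_image[i, j])[0]
        for i in range(h) for j in range(w)
    ])
    bins = np.minimum((hues * n_bins).astype(int), n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    return counts / counts.sum()
```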

17 pages, 7440 KiB  
Article
Multi-Scale Cross-Attention Fusion Network Based on Image Super-Resolution
by Yimin Ma, Yi Xu, Yunqing Liu, Fei Yan, Qiong Zhang, Qi Li and Quanyang Liu
Appl. Sci. 2024, 14(6), 2634; https://doi.org/10.3390/app14062634 - 21 Mar 2024
Cited by 1
Abstract
In recent years, deep convolutional neural networks with multi-scale features have been widely used in image super-resolution reconstruction (ISR), and the quality of the generated images has been significantly improved compared with traditional methods. However, in current image super-resolution network algorithms, these methods need to be further explored in terms of the effective fusion of multi-scale features and cross-domain application of attention mechanisms. To address these issues, we propose a novel multi-scale cross-attention fusion network (MCFN), which optimizes the feature extraction and fusion process in structural design and modular innovation. In order to make better use of the attention mechanism, we propose a Pyramid Multi-scale Module (PMM) to extract multi-scale information by cascading. This PMM is introduced in MCFN and is mainly constructed by multiple multi-scale cross-attention modules (MTMs). To fuse the feature information of PMMs efficiently in both channel and spatial dimensions, we propose the cross-attention fusion module (CFM). In addition, an improved integrated attention enhancement module (IAEM) is inserted at the network’s end to enhance the correlation of high-frequency feature information between layers. Experimental results show that the algorithm significantly improves the reconstructed images’ edge information and texture details, and the benchmark dataset’s performance evaluation shows comparable performance to current state-of-the-art techniques. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
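Channel attention of the squeeze-and-excitation flavor, which fusion modules like the CFM and IAEM build on, can be sketched in a few lines: globally pool each channel, pass the vector through a small bottleneck, and rescale the channels by a sigmoid gate. The weights `w1`/`w2` below are illustrative placeholders, not the paper's trained parameters:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention on an (H, W, C) feature map."""
    squeezed = features.mean(axis=(0, 1))        # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)      # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid channel weights in (0,1)
    return features * gate                       # rescale each channel
```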

21 pages, 22780 KiB  
Article
Ref-MEF: Reference-Guided Flexible Gated Image Reconstruction Network for Multi-Exposure Image Fusion
by Yuhui Huang, Shangbo Zhou, Yufen Xu, Yijia Chen and Kai Cao
Entropy 2024, 26(2), 139; https://doi.org/10.3390/e26020139 - 3 Feb 2024
Abstract
Multi-exposure image fusion (MEF) is a computational approach that amalgamates multiple images, each captured at varying exposure levels, into a singular, high-quality image that faithfully encapsulates the visual information from all the contributing images. Deep learning-based MEF methodologies often confront obstacles due to the inherent inflexibilities of neural network structures, presenting difficulties in dynamically handling an unpredictable amount of exposure inputs. In response to this challenge, we introduce Ref-MEF, a method for color image multi-exposure fusion guided by a reference image designed to deal with an uncertain amount of inputs. We establish a reference-guided exposure correction (REC) module based on channel attention and spatial attention, which can correct input features and enhance pre-extraction features. The exposure-guided feature fusion (EGFF) module combines original image information and uses Gaussian filter weights for feature fusion while keeping the feature dimensions constant. The image reconstruction is completed through a gated context aggregation network (GCAN) and global residual learning (GRL). Our refined loss function incorporates gradient fidelity, producing high dynamic range images that are rich in detail and demonstrate superior visual quality. In evaluation metrics focused on image features, our method exhibits significant superiority and leads in holistic assessments as well. It is worth emphasizing that as the number of input images increases, our algorithm exhibits notable computational efficiency. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
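A simple classical stand-in for exposure-guided weighting is the well-exposedness weight of Mertens-style exposure fusion: a Gaussian centered at mid-gray favors well-exposed pixels in each frame, and the weights are normalized across the stack. This is a baseline sketch, not the Ref-MEF network:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Fuse an (N, H, W) stack of grayscale exposures (values in [0,1]) using
    well-exposedness weights: a Gaussian centered at mid-gray (0.5)."""
    stack = np.asarray(stack, dtype=np.float64)
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across exposures
    return (weights * stack).sum(axis=0)
```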

28 pages, 35527 KiB  
Article
Nighttime Image Stitching Method Based on Image Decomposition Enhancement
by Mengying Yan, Danyang Qin, Gengxin Zhang, Huapeng Tang and Lin Ma
Entropy 2023, 25(9), 1282; https://doi.org/10.3390/e25091282 - 31 Aug 2023
Abstract
Image stitching technology realizes alignment and fusion of a series of images with common pixel areas taken from different viewpoints of the same scene to produce a wide field of view panoramic image with natural structure. The night environment is one of the important scenes of human life, and the night image stitching technology has more urgent practical significance in the fields of security monitoring and intelligent driving at night. Due to the influence of artificial light sources at night, the brightness of the image is unevenly distributed and there are a large number of dark light areas, but often these dark light areas have rich structural information. The structural features hidden in the darkness are difficult to extract, resulting in ghosting and misalignment when stitching, which makes it difficult to meet the practical application requirements. Therefore, a nighttime image stitching method based on image decomposition enhancement is proposed to address the problem of insufficient line feature extraction in the stitching process of nighttime images. The proposed algorithm performs luminance enhancement on the structural layer, smoothes the nighttime image noise using a denoising algorithm on the texture layer, and finally complements the texture of the fused image by an edge enhancement algorithm. The experimental results show that the proposed algorithm improves the image quality in terms of information entropy, contrast, and noise suppression compared with other algorithms. Moreover, the proposed algorithm extracts the most line features from the processed nighttime images, which is more helpful for the stitching of nighttime images. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
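The decomposition-enhancement idea (brighten the structural layer, keep the texture layer, then recombine) can be illustrated with a box-filter base layer and a gamma curve. The paper's method uses a more sophisticated decomposition plus denoising and edge enhancement; this is only the skeleton:

```python
import numpy as np

def box_blur(img, k=3):
    """Edge-padded box filter: a crude estimate of the structural (base) layer."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def enhance_night(img, gamma=0.5):
    """Brighten the structural layer with a gamma curve, then add the texture back."""
    base = box_blur(img)
    texture = img - base          # texture layer: what the blur removed
    return np.clip(base ** gamma + texture, 0.0, 1.0)
```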

19 pages, 17842 KiB  
Article
Order Space-Based Morphology for Color Image Processing
by Shanqian Sun, Yunjia Huang, Kohei Inoue and Kenji Hara
J. Imaging 2023, 9(7), 139; https://doi.org/10.3390/jimaging9070139 - 7 Jul 2023
Cited by 4
Abstract
Mathematical morphology is a fundamental tool based on order statistics for image processing, such as noise reduction, image enhancement and feature extraction, and is well-established for binary and grayscale images, whose pixels can be sorted by their pixel values, i.e., each pixel has a single number. On the other hand, each pixel in a color image has three numbers corresponding to three color channels, e.g., red (R), green (G) and blue (B) channels in an RGB color image. Therefore, it is difficult to sort color pixels uniquely. In this paper, we propose a method for unifying the orders of pixels sorted in each color channel separately, where we consider that a pixel exists in a three-dimensional space called order space, and derive a single order by a monotonically nondecreasing function defined on the order space. We also fuzzify the proposed order space-based morphological operations, and demonstrate the effectiveness of the proposed method by comparing with a state-of-the-art method based on hypergraph theory. The proposed method treats three orders of pixels sorted in respective color channels equally. Therefore, the proposed method is consistent with the conventional morphological operations for binary and grayscale images. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
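The core ordering idea in the abstract can be sketched directly: rank the pixels of a neighborhood in each channel separately, then combine the three ranks with a monotonically nondecreasing function, here simply their sum. Erosion then picks the neighborhood pixel with the smallest combined order (a simplified reading of the order-space construction, not the authors' exact function):

```python
import numpy as np

def combined_order(colors):
    """Rank pixels in each RGB channel separately, then sum the three ranks
    into a single scalar order (a monotone combination in order space)."""
    colors = np.asarray(colors, dtype=np.float64)
    ranks = np.argsort(np.argsort(colors, axis=0), axis=0)  # per-channel ranks
    return ranks.sum(axis=1)

def color_erosion(colors):
    """Erosion over a neighborhood: the pixel with the smallest combined order."""
    colors = np.asarray(colors, dtype=np.float64)
    return colors[np.argmin(combined_order(colors))]
```

Because the combination is monotone, this reduces to ordinary grayscale erosion when all three channels are equal, matching the consistency property the abstract claims.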

18 pages, 59354 KiB  
Article
Enhanced CycleGAN Network with Adaptive Dark Channel Prior for Unpaired Single-Image Dehazing
by Yijun Xu, Hanzhi Zhang, Fuliang He, Jiachi Guo and Zichen Wang
Entropy 2023, 25(6), 856; https://doi.org/10.3390/e25060856 - 26 May 2023
Cited by 3
Abstract
Unpaired single-image dehazing has become a challenging research hotspot due to its wide application in modern transportation, remote sensing, and intelligent surveillance, among other applications. Recently, CycleGAN-based approaches have been popularly adopted in single-image dehazing as the foundations of unpaired unsupervised training. However, there are still deficiencies with these approaches, such as obvious artificial recovery traces and the distortion of image processing results. This paper proposes a novel enhanced CycleGAN network with an adaptive dark channel prior for unpaired single-image dehazing. First, a Wave-Vit semantic segmentation model is utilized to achieve the adaption of the dark channel prior (DCP) to accurately recover the transmittance and atmospheric light. Then, the scattering coefficient derived from both physical calculations and random sampling means is utilized to optimize the rehazing process. Bridged by the atmospheric scattering model, the dehazing/rehazing cycle branches are successfully combined to form an enhanced CycleGAN framework. Finally, experiments are conducted on reference/no-reference datasets. The proposed model achieved an SSIM of 94.9% and a PSNR of 26.95 on the SOTS-outdoor dataset and obtained an SSIM of 84.71% and a PSNR of 22.72 on the O-HAZE dataset. The proposed model significantly outperforms typical existing algorithms in both objective quantitative evaluation and subjective visual effect. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
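The dark channel prior the paper adapts is a standard construction: take the per-pixel minimum over the three color channels, then apply a local minimum filter over a patch. In haze-free outdoor images this map is close to zero, so large values signal haze. A minimal version (the paper's adaptive, segmentation-guided variant is more involved):

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark channel of an HxWx3 image: channel-wise minimum followed by a
    `patch` x `patch` local minimum filter (edge-padded)."""
    chan_min = rgb.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    h, w = chan_min.shape
    out = np.empty_like(chan_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```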

21 pages, 23878 KiB  
Article
Manipulating Pixels in Computer Graphics by Converting Raster Elements to Vector Shapes as a Function of Hue
by Tajana Koren Ivančević, Nikolina Stanić Loknar, Maja Rudolf and Diana Bratić
J. Imaging 2023, 9(6), 106; https://doi.org/10.3390/jimaging9060106 - 23 May 2023
Abstract
This paper proposes a method for changing pixel shape by converting a CMYK raster image (pixel) to an HSB vector image, replacing the square cells of the CMYK pixels with different vector shapes. The replacement of a pixel by the selected vector shape is done depending on the detected color values for each pixel. The CMYK values are first converted to the corresponding RGB values and then to the HSB system, and the vector shape is selected based on the obtained hue values. The vector shape is drawn in the defined space, according to the row and column matrix of the pixels of the original CMYK image. Twenty-one vector shapes are introduced to replace the pixels depending on the hue. The pixels of each hue are replaced by a different shape. The application of this conversion has its greatest value in the creation of security graphics for printed documents and the individualization of digital artwork by creating structured patterns based on the hue. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
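The hue-driven shape selection reduces to a CMYK→RGB→HSB conversion followed by binning the hue into 21 classes. The conversions below are the standard textbook formulas, not necessarily the authors' exact implementation:

```python
import colorsys

def cmyk_to_hue(c, m, y, k):
    """CMYK (all in [0,1]) -> RGB -> HSB; return the hue component in [0,1)."""
    r = (1 - c) * (1 - k)
    g = (1 - m) * (1 - k)
    b = (1 - y) * (1 - k)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def shape_index(c, m, y, k, n_shapes=21):
    """Map a pixel's hue to one of `n_shapes` vector shapes (0-based index)."""
    return min(int(cmyk_to_hue(c, m, y, k) * n_shapes), n_shapes - 1)
```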

13 pages, 3016 KiB  
Article
Comparing Different Algorithms for the Pseudo-Coloring of Myocardial Perfusion Single-Photon Emission Computed Tomography Images
by Abdurrahim Rahimian, Mahnaz Etehadtavakol, Masoud Moslehi and Eddie Y. K. Ng
J. Imaging 2022, 8(12), 331; https://doi.org/10.3390/jimaging8120331 - 19 Dec 2022
Cited by 2
Abstract
Single-photon emission computed tomography (SPECT) images can significantly help physicians in diagnosing patients with coronary artery or suspected coronary artery diseases. However, these images are grayscale with qualities that are not readily visible. The objective of this study was to evaluate the effectiveness of different pseudo-coloring algorithms of myocardial perfusion SPECT images. Data were collected using a Siemens Symbia T2 dual-head SPECT/computed tomography (CT) scanner. After pseudo-coloring, the images were assessed both qualitatively and quantitatively. The qualities of different pseudo-color images were examined by three experts, while the images were evaluated quantitatively by obtaining indices such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), normalized color difference (NCD), and structure similarity index metric (SSIM). The qualitative evaluation demonstrated that the warm color map (WCM), followed by the jet color map, outperformed the remaining algorithms in terms of revealing the non-visible qualities of the images. Furthermore, the quantitative evaluation results demonstrated that the WCM had the highest PSNR and SSIM but the lowest MSE. Overall, the WCM could outperform the other color maps both qualitatively and quantitatively. The novelty of this study includes comparing different pseudo-coloring methods to improve the quality of myocardial perfusion SPECT images and utilizing our collected datasets. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
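Pseudo-coloring a grayscale SPECT slice amounts to passing each intensity through a color lookup table, and PSNR is one of the indices the study uses for quantitative comparison. A minimal sketch with an illustrative grayscale LUT (the paper's warm and jet color maps would simply be different 256x3 tables):

```python
import numpy as np

def apply_colormap(gray, lut):
    """Map a grayscale image (values in [0,1]) through a (256, 3) color LUT."""
    idx = np.clip((gray * 255).astype(int), 0, 255)
    return lut[idx]

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```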
