New and Specialized Methods of Image Compression

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: closed (15 December 2021) | Viewed by 18935

Special Issue Editor


Prof. Dr. Roman Starosolski
Guest Editor
Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: image compression; data compression; image processing; biomedical imaging; image compression standards; lifting-based reversible transforms (color space transforms and DWT); reversible denoising and lifting steps; adaptive algorithms

Special Issue Information

Dear Colleagues,

The most dynamic period in the development of image compression methods was at the turn of the century, when algorithms such as JPEG 2000, which to this day has no worthy successor, were created. Since then, several new image compression methods and algorithms have been proposed, and certain categories of images previously considered exotic have become popular and now demand efficient compression.

The purpose of this Special Issue “New and Specialized Methods of Image Compression” is to provide a broad and current overview of new developments in the image compression domain. The focus is placed on promising image compression methods targeted at both typical (photographic) images and other image types that are increasingly used today. We especially look forward to contributions of research and overview papers on:

* New image compression methods, including (but not limited to):

  • compression based on neural networks, convolutional networks, and deep learning;
  • employment of minimum rate predictors;
  • inpainting-based image compression;
  • new transforms for image compression, including adaptive and hybrid transforms; and
  • the use of video coding algorithms for the compression of still images.

* Coding of special types of images, such as

  • screen content images;
  • images with a reduced number of colors;
  • medical image modalities, including multimodal and volumetric images;
  • raw camera sensor images (e.g., Bayer pattern);
  • multispectral and hyperspectral images, satellite images; and
  • light field images.

* Older promising techniques that have fallen out of mainstream interest are also welcome if they may prove effective in conjunction with recent techniques or for special image types (e.g., fractal coding, the Burrows–Wheeler transform, and histogram packing in image compression).

Prof. Dr. Roman Starosolski
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning-based image compression
  • minimum rate predictors
  • inpainting-based image compression
  • fractal image coding
  • adaptive and hybrid transforms
  • screen content coding
  • multimodal and volumetric medical images
  • raw camera sensor images
  • multispectral and hyperspectral images
  • satellite images
  • light field image coding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)

Editorial

2 pages, 175 KiB  
Editorial
New and Specialized Methods of Image Compression
by Roman Starosolski
J. Imaging 2022, 8(2), 48; https://doi.org/10.3390/jimaging8020048 - 16 Feb 2022
Viewed by 2147
Abstract
Due to the enormous number of images produced today, compression is crucial for consumer and professional (for instance, medical) picture archiving and communication systems [...] Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)

Research

21 pages, 10792 KiB  
Article
Adaptive Digital Hologram Binarization Method Based on Local Thresholding, Block Division and Error Diffusion
by Pavel A. Cheremkhin, Ekaterina A. Kurbatova, Nikolay N. Evtikhiev, Vitaly V. Krasnov, Vladislav G. Rodin and Rostislav S. Starikov
J. Imaging 2022, 8(2), 15; https://doi.org/10.3390/jimaging8020015 - 18 Jan 2022
Cited by 12 | Viewed by 3529
Abstract
High-speed optical reconstruction of 3D scenes can be achieved using digital holography with binary digital micromirror devices (DMD) or a ferroelectric spatial light modulator (fSLM). There are many algorithms for binarizing digital holograms. The most common are methods based on global and local thresholding and error diffusion techniques. In addition, hologram binarization is used in optical encryption, data compression, beam shaping, 3D displays, nanofabrication, materials characterization, etc. This paper proposes an adaptive binarization method based on a combination of local threshold processing, hologram division into blocks, and an error diffusion procedure (the LDE method). The method is applied to the binarization of optically recorded and computer-generated digital holograms of flat objects and three-dimensional scenes. The quality of the reconstructed images was compared with that obtained by different error diffusion and thresholding methods. Image reconstruction quality was up to 22% higher by various metrics than that of standard binarization methods. Optical hologram reconstruction using a DMD confirms the results of the numerical simulations. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
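
The core combination described in the abstract, a block-local threshold plus error diffusion, can be illustrated compactly. The following is a minimal sketch, not the authors' LDE implementation: it assumes a real-valued hologram stored as a 2-D NumPy array and uses a per-block mean as the local threshold and Floyd–Steinberg weights for the diffusion, all of which are illustrative assumptions.

```python
import numpy as np

def binarize_block_threshold_diffusion(holo: np.ndarray, block: int = 32) -> np.ndarray:
    """Binarize a real-valued hologram; returns a 0/1 array of the same shape."""
    h = holo.astype(np.float64)
    h = (h - h.min()) / (h.max() - h.min() + 1e-12)          # normalize to [0, 1]
    rows, cols = h.shape
    # Local threshold: mean of each non-overlapping block (illustrative choice).
    T = np.empty_like(h)
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            T[r0:r0 + block, c0:c0 + block] = h[r0:r0 + block, c0:c0 + block].mean()
    out = np.zeros_like(h)
    work = h.copy()
    # Floyd-Steinberg error diffusion, thresholded against the local value.
    for r in range(rows):
        for c in range(cols):
            new = 1.0 if work[r, c] >= T[r, c] else 0.0
            err = work[r, c] - new
            out[r, c] = new
            if c + 1 < cols:
                work[r, c + 1] += err * 7 / 16
            if r + 1 < rows:
                if c > 0:
                    work[r + 1, c - 1] += err * 3 / 16
                work[r + 1, c] += err * 5 / 16
                if c + 1 < cols:
                    work[r + 1, c + 1] += err * 1 / 16
    return out.astype(np.uint8)
```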

23 pages, 2346 KiB  
Article
A Benchmark Evaluation of Adaptive Image Compression for Multi Picture Object Stereoscopic Images
by Alessandro Ortis, Marco Grisanti, Francesco Rundo and Sebastiano Battiato
J. Imaging 2021, 7(8), 160; https://doi.org/10.3390/jimaging7080160 - 23 Aug 2021
Cited by 1 | Viewed by 2308
Abstract
A stereopair consists of two pictures of the same subject taken from two different points of view. Since the two images contain a large amount of redundant information, new compression approaches and data formats are continuously proposed that aim to reduce the space needed to store a stereoscopic image while preserving its quality. A standard for multi-picture image encoding is represented by the MPO format (Multi-Picture Object). Classic stereoscopic image compression approaches compute a disparity map between the two views, which is stored together with one of the two views and a residual image. An alternative approach, named adaptive stereoscopic image compression, encodes the two views independently with different quality factors. Then, the redundancy between the two views is exploited to enhance the low-quality image. In this paper, the problem of stereoscopic image compression is presented, with a focus on the adaptive stereoscopic compression approach, which allows us to obtain a standardized format of the compressed data. The paper presents a benchmark evaluation on large and standardized datasets including 60 stereopairs that differ in resolution and acquisition technique. The method is evaluated by varying the amount of compression, as well as the matching and optimization methods, resulting in 16 different settings. The adaptive approach is also compared with other MPO-compliant methods. The paper also presents a Human Visual System (HVS)-based assessment experiment, involving 116 people, to verify the perceived quality of the decoded images. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
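
As a very small illustration of the adaptive (asymmetric) idea summarized above, the sketch below simply encodes the two views of a stereopair independently with different JPEG quality factors using Pillow; the file names, quality values, and the Pillow-based pipeline are assumptions for illustration, not the benchmark setup of the paper.

```python
from PIL import Image

def encode_stereopair_asymmetric(left_path: str, right_path: str,
                                 q_high: int = 90, q_low: int = 40) -> None:
    """Encode the two views independently with different JPEG quality factors."""
    Image.open(left_path).convert("RGB").save("view_high.jpg", "JPEG", quality=q_high)
    Image.open(right_path).convert("RGB").save("view_low.jpg", "JPEG", quality=q_low)
    # A decoder would then exploit the inter-view redundancy (e.g., by matching
    # blocks against the high-quality view) to enhance the low-quality view.
```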

26 pages, 81868 KiB  
Article
Spline-Based Dense Medial Descriptors for Lossy Image Compression
by Jieying Wang, Jiří Kosinka and Alexandru Telea
J. Imaging 2021, 7(8), 153; https://doi.org/10.3390/jimaging7080153 - 19 Aug 2021
Cited by 5 | Viewed by 2448
Abstract
Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. On the other hand, B-splines are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also provides an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios at similar or even better quality than the well-known JPEG technique. We illustrate our approach with applications in generating super-resolution images and salient-feature-preserving image compression. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
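
The key ingredient named in the abstract, approximating medial descriptors with B-splines, can be sketched with SciPy's spline-fitting routines. The snippet below is not the SDMD pipeline; it only shows, under the assumption that one medial branch is available as an ordered sequence of 2-D points, how such a branch could be replaced by a compact B-spline (knots, coefficients, and degree) and later resampled.

```python
import numpy as np
from scipy import interpolate

def fit_branch_spline(points: np.ndarray, smoothing: float = 5.0):
    """points: (N, 2) ordered samples of one medial branch; returns the spline (tck)."""
    tck, _ = interpolate.splprep([points[:, 0], points[:, 1]], s=smoothing)
    return tck  # knots, coefficients, and degree: the data that would actually be stored

def resample_branch(tck, n: int = 200) -> np.ndarray:
    """Evaluate the fitted branch at n points for reconstruction."""
    u = np.linspace(0.0, 1.0, n)
    x, y = interpolate.splev(u, tck)
    return np.stack([x, y], axis=1)
```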

15 pages, 3473 KiB  
Article
Improved JPEG Coding by Filtering 8 × 8 DCT Blocks
by Yasir Iqbal and Oh-Jin Kwon
J. Imaging 2021, 7(7), 117; https://doi.org/10.3390/jimaging7070117 - 15 Jul 2021
Cited by 5 | Viewed by 2507
Abstract
The JPEG format, consisting of a set of image compression techniques, is one of the most commonly used image coding standards for both lossy and lossless image encoding. In this format, various techniques are used to improve image transmission and storage. In the final step of lossy image coding, JPEG uses either arithmetic or Huffman entropy coding modes to further compress data processed by lossy compression. Both modes encode all the 8 × 8 DCT blocks without filtering empty ones. An end-of-block marker is coded for empty blocks, and these empty blocks cause an unnecessary increase in file size when they are stored with the rest of the data. In this paper, we propose a modified version of the JPEG entropy coding. In the proposed version, instead of storing an end-of-block code for empty blocks with the rest of the data, we store their location in a separate buffer and then compress the buffer with an efficient lossless method to achieve a higher compression ratio. The size of the additional buffer, which keeps the information of location for the empty and non-empty blocks, was considered during the calculation of bits per pixel for the test images. In image compression, peak signal-to-noise ratio versus bits per pixel has been a major measure for evaluating the coding performance. Experimental results indicate that the proposed modified algorithm achieves lower bits per pixel while retaining quality. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
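
The modification described above, keeping a separately compressed record of which 8 × 8 blocks are empty instead of coding an end-of-block marker for each of them, can be sketched as follows. This is not the authors' codec: it assumes the quantized DCT coefficients are already available as a NumPy array and uses zlib as a stand-in for the lossless method that compresses the location buffer.

```python
import zlib
import numpy as np

def split_empty_blocks(qdct: np.ndarray):
    """qdct: quantized DCT coefficients, shape (H, W) with H and W multiples of 8."""
    h, w = qdct.shape
    blocks = qdct.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    empty = np.all(blocks == 0, axis=(2, 3))              # one flag per 8x8 block
    # Location buffer: one bit per block, compressed losslessly (zlib as a stand-in).
    location_buffer = zlib.compress(np.packbits(empty).tobytes(), 9)
    nonempty_blocks = blocks[~empty]                      # passed on to entropy coding
    return location_buffer, nonempty_blocks
```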

22 pages, 31534 KiB  
Article
Deep Concatenated Residual Networks for Improving Quality of Halftoning-Based BTC Decoded Image
by Heri Prasetyo, Alim Wicaksono Hari Prayuda, Chih-Hsien Hsia and Jing-Ming Guo
J. Imaging 2021, 7(2), 13; https://doi.org/10.3390/jimaging7020013 - 25 Jan 2021
Cited by 4 | Viewed by 1944
Abstract
This paper presents a simple technique for improving the quality of the halftoning-based block truncation coding (H-BTC) decoded image. H-BTC is an image compression technique inspired by classical block truncation coding (BTC). Under human visual observation, H-BTC yields a better decoded image than the classical BTC scheme. However, impulsive noise commonly appears in the H-BTC decoded image and degrades its perceived quality. Thus, the method proposed in this paper aims to suppress this impulsive noise by exploiting a deep learning approach. The task can be regarded as an ill-posed inverse imaging problem, in which the set of candidate solutions is extremely large and underdetermined. The proposed method utilizes convolutional neural networks (CNN) and residual learning frameworks to solve this problem. These frameworks effectively reduce the occurrence of impulsive noise and, at the same time, improve the quality of H-BTC decoded images. The experimental results show the effectiveness of the proposed method in terms of subjective and objective measurements. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
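
The residual-learning idea summarized above, letting a CNN predict the impulsive-noise component rather than the clean image, is easy to outline. The architecture below is an assumed, deliberately small PyTorch sketch, not the network from the paper; layer counts and widths are illustrative.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Predicts the noise residual of an H-BTC decoded image and subtracts it."""
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network estimates the noise, not the clean image.
        return x - self.body(x)
```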

Review

17 pages, 370 KiB  
Review
Performance Overview of the Latest Video Coding Proposals: HEVC, JEM and VVC
by Miguel O. Martínez-Rach, Héctor Migallón, Otoniel López-Granado, Vicente Galiano and Manuel P. Malumbres
J. Imaging 2021, 7(2), 39; https://doi.org/10.3390/jimaging7020039 - 22 Feb 2021
Cited by 6 | Viewed by 2779
Abstract
The audiovisual entertainment industry has entered a race to find the video encoder offering the best Rate/Distortion (R/D) performance for high-quality high-definition video content. The challenge consists in providing a moderate to low computational/hardware complexity encoder able to run Ultra High-Definition (UHD) video formats of different flavours (360°, AR/VR, etc.) with state-of-the-art R/D performance results. It is necessary to evaluate not only R/D performance, a highly important feature, but also the complexity of future video encoders. New coding tools offering a small increase in R/D performance at the cost of greater complexity are being advanced with caution. We performed a detailed analysis of two evolutions of High Efficiency Video Coding (HEVC) video standards, Joint Exploration Model (JEM) and Versatile Video Coding (VVC), in terms of both R/D performance and complexity. The results show how VVC, which represents the new direction of future standards, has, for the time being, sacrificed R/D performance in order to significantly reduce overall coding/decoding complexity. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
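
Rate/Distortion comparisons of the kind reported in this review are commonly summarized with the Bjøntegaard delta rate (BD-rate). The sketch below is the widely used cubic-fit variant of that metric, included here only as an illustration of how such codec comparisons are computed; it is an assumption, not a statement about the exact variant used in the paper.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test) -> float:
    """Average bitrate difference (%) of the test codec vs. the reference codec."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit log-rate as a cubic polynomial of PSNR for each codec.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(np.min(psnr_ref), np.min(psnr_test))
    hi = min(np.max(psnr_ref), np.max(psnr_test))
    # Integrate both fits over the overlapping PSNR interval.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0          # negative = bitrate savings
```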
