Article

A Novel Gradient-Weighted Voting Approach for Classical and Fuzzy Circular Hough Transforms and Their Application in Medical Image Analysis—Case Study: Colonoscopy

1 Doctoral School of Multidisciplinary Engineering Sciences, Széchenyi István University, 9026 Győr, Hungary
2 Department of Telecommunications, Széchenyi István University, 9026 Győr, Hungary
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9066; https://doi.org/10.3390/app13169066
Submission received: 9 June 2023 / Revised: 11 July 2023 / Accepted: 4 August 2023 / Published: 8 August 2023
(This article belongs to the Special Issue Computational Intelligence in Image and Video Analysis)

Featured Application

The circular fuzzy Hough transform with gradient-weighted voting can be used for finding the contours of circle-like shapes, such as colorectal polyps on colonoscopy images, as well as other cases that require a given relative gradient edge around the circle-like objects.

Abstract

The classical circular Hough transform has proven effective for some types of colorectal polyps. However, polyps are very rarely perfectly circular, so some tolerance is needed, which can be ensured by applying the fuzzy Hough transform instead of the classical one. In addition, the edge detection method used as a preprocessing step of the Hough transforms was changed from the commonly used Canny method to the Prewitt method, which, based on statistical data from three colonoscopy databases, detects fewer edge points outside the polyp contours and thus yields a smaller number of points to be transformed. According to the statistical study we performed, the polyp contours in colonoscopy images usually belong to a gradient domain of neither too large nor too small gradients, though they can also have stronger or weaker segments. In order to prioritize the gradient domain typical of the polyps, a relative gradient-based thresholding as well as a gradient-weighted voting is introduced in this paper. For evaluating the improvement in the shape deviation tolerance of the classical and fuzzy Hough transforms, the maximum radial displacement and the average radius were used to characterize the roundness of the objects to be detected. The gradient thresholding proved to decrease the calculation time to less than 50% of that of the full Hough transforms, and the number of resulting circles outside the polyp's environment also decreased, especially for low-resolution images.

1. Introduction

As some polyps in the colon and rectal tract of the bowel can develop into extremely dangerous cancers, and they cause symptoms only rather late, in an advanced phase, it is recommended that even healthy individuals undergo colorectal screening after a certain age. There are multiple methods available for colon screening, with colonoscopy being the most commonly used. A colonoscope, which is an endoscope equipped with a flexible tube, camera, light source, and other tools, can navigate within the bowel and perform various tasks like inflating the bowel, spraying liquids, and removing lesions [1,2].
Even for experienced gastroenterologists, detecting polyps with the naked eye can be challenging due to various factors such as inadequate intestinal preparation and visual exhaustion. A computer-aided system for localizing colorectal polyps would simplify the diagnostic process and facilitate the assessment of the examined cases' severity. Thus, healthcare professionals would be able to prioritize urgent cases, promptly select appropriate treatments, and efficiently analyze clinical results. Therefore, developing reliable computer-aided diagnosis techniques that can automatically and precisely localize polyps in colonoscopy images is currently one of the most pressing needs in the healthcare sector [1,2,3].
To aid the gastroenterologists, various machine learning and deep learning architectures have recently been developed as prominent solutions to automate polyp detection and localization tasks and enhance the accuracy of their results [4,5,6]. A method for real-time detection, classification, and localization of gastrointestinal tract disorders from colonoscopy images was presented in [7]. Both publicly available (hyper-Kvasir dataset) and private, locally collected samples were utilized. The method was developed using the pre-trained transfer learning SSD, YOLOv4, and YOLOv5 object detection models, with minimal fine-tuning of the hyperparameters, and their final performance was compared. The utilization of the YOLOv5 object detection algorithm and the artificial bee colony (ABC) optimization algorithm was proposed in [8]. The YOLOv5 algorithm was employed for polyp detection, while the ABC algorithm was used to enhance the performance of the model by finding the optimal activation functions and hyperparameters for the YOLOv5 algorithm. The proposed method was executed on the SUN and PICCOLO datasets and achieved good performance in real-time polyp detection.
Not only deep learning approaches but also polyp characterization computational methods are widespread in the literature. These methods mainly compute shape-based [9,10] and texture-based [11,12] feature descriptors over the full image, or a segment of it, to be able to detect and localize the polyp's area precisely.
In the pattern recognition literature, the Hough transform has been considered a powerful mathematical tool for object detection since the first appearance of its classical version for the machine analysis of bubble chamber pictures by Paul V. C. Hough in 1959 [13]. In 1981, it was extended by D. H. Ballard into the Generalized Hough Transform (GHT), a two-phase learning-detection process for detecting arbitrary complex non-parametric shapes [14]. Since then, different variations of the Hough transform have been proposed by many researchers, such as the probabilistic Hough transform [15], the randomized Hough transform [16], and the Vector-Gradient Hough Transform (VGHT) [17].
In 1994, Han, Kóczy, and Poston introduced the fuzzy Hough transform [18] to detect fuzzy lines and circles in noisy environments by roughly fitting the data points to given parametric forms.
The Hough transform and all its successive versions have proven to be powerful techniques with promising outcomes in numerous fields of application, including but not limited to object detection [19,20], lane and road detection [21,22], industrial automation [23], mechanical engineering [24], medical image processing [25,26,27,28,29], and robot navigation [30].
Drawing from our experience and a comprehensive analysis of previous studies employing the Hough transform, we briefly outline the practical strengths and weaknesses of this method in Table 1.
The research community has investigated the limitations of the Hough transform and suggested different approaches to make it a more practical tool. To reduce the large computational demand and to ensure integration with the Wireless Capsule Endoscopy (WCE) system, the authors in [31] improved the real-time computation of the Hough transform. The design of the new approach took into consideration specific constraints of WCE, such as limited space and limited energy resources. In the same direction (i.e., minimizing the Hough transform's computational cost), an Edge Orientation-based Fuzzy Hough Transform (EOFHT) was proposed in [32]. Instead of using all the edge-detected image points in the voting process, only those specific points whose representation is consistent with the selected gradient orientation range were eligible to vote.
Moreover, since shorter curves give fewer votes, circles with smaller radii give weaker peaks in the accumulator space; therefore, in the modified voting method of [33], each entitled point in the image space was given a weighted vote inversely proportional to the radius in the parameter space.
Currently, automatic colorectal polyp detection and localization in colonoscopy images is a promising research area and a challenging problem because of the high variability of the colorectal polyps’ characteristics in both shape and texture. Thus, the efficiency of applying Hough transform was also studied in this field.
The classical Hough transform was applied in [34] to identify potential regions of interest (ROIs) in 300 video endoscopy images. A good detection of the ROIs containing a polyp was possible using the classical Hough transform based on the Canny edge detection approach. However, in several samples the method generated many alternative weaker circles and raised the classification system's False Positive Rate (FPR). To enhance the effectiveness of the proposed method, after the Hough transform step, textural characteristics from co-occurrence matrices were computed and then used within a boosting-based technique for the final classification. The Hough transform was also used in other colorectal polyp localization methods [35,36] as a ready-made preprocessing step to find ROIs. However, these works concentrated on the steps determining whether an ROI contains a polyp or not, and not on the Hough transform step itself. (The linear Hough transform was also used in colonoscopy for detecting folds within the bowel [37].)
In our research, instead of improving the steps after selecting the ROIs with Hough transform (like in [34,35,36]), we would like to overcome some weaknesses of the Hough transform itself. The circles arising from the fuzzy Hough transform can also serve as initial masks for active contour methods [38,39].
As a first step, we introduced the fuzzy Hough transform, as it provides more tolerance to deviations from the ideal curve's points, and colorectal polyps in real colonoscopy images are not precisely circular [40].
As a second step, we targeted the large computational demand of the Hough transform, as the low computational load is one of the most essential requirements for algorithms used in computer-aided diagnosis (CAD) systems.
The Hough transform starts with an edge detection (mostly Canny edge detection), and all the edge pixels have to be transformed. However, the edge pixels can be decimated, as Canny edge detection tends to produce a dense edge map because of the necessarily high connectivity of the edge points. Colonoscopy images contain edges other than the colorectal polyps' borderlines; however, these edges often have either smaller or larger intensity steps than the edges of the colorectal polyp contours. Based on these considerations, the possibility of removing Canny-detected edge points with too small or too high gradient magnitude values was investigated in [41]. The research was performed on all the images of three colonoscopy image datasets: CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34]. The study's goal was to eliminate edge points that do not belong to polyp contours and, at the same time, to keep as many polyp contour edge points as necessary for the colorectal polyp contour to be detectable.
Setting a global gradient magnitude threshold domain that could achieve both a low total number of Canny edge pixels and a sufficiently accurate matching with the colorectal polyp contour was not possible. However, if the continuity of the Canny edges is given up, other edge detection methods can provide a better basis for the Hough transform: the continuity of the lines is not needed there, only that the edge points lie on the contours of the polyp. This is one of the ideas studied in this paper.
The main contributions of this article are as follows. First, the performance of four edge detection algorithms (Canny, Prewitt, Roberts, and Sobel) was compared, and the one that gave the most polyp-contour-related and the fewest unnecessary edge points was selected. Two metrics, based on the normalized gradients of contour and non-contour edges, were used to determine which algorithm is the most appropriate. Second, to further reduce the number of edges that do not belong to the polyp contours, a gradient magnitude thresholding process was applied to the results of the selected edge detection method. Finally, to make the circle detection more tolerant to shape uncertainty, the fuzzy version of the Hough transform was tested together with the classical one, using a gradient-weighted voting approach. To evaluate the results, the radial displacement and the average radius were introduced to characterize the roundness of the objects to be detected. These contributions are summarized in the following points:
  • selecting the edge detection method that is the most suitable for colorectal polyp localization purposes, and developing a metric on which to base this selection,
  • determining gradient limits for removing the unnecessary edges,
  • applying fuzzy Hough transform on colonoscopy images and comparing its results with the classical Hough transform,
  • introducing a gradient-weighted voting to both the classical and fuzzy Hough transforms and studying its effects,
  • characterizing the roundness of the objects to be detected.
The paper is organized as follows. Section 2 summarizes the mathematical methods used in the article: the classical and fuzzy Hough transforms, gradient filtering, and the edge detection algorithms. Section 3 presents the proposed method in detail. In Section 4, the results are given and discussed, and in the last section the conclusions are drawn.

2. Theoretical Background

2.1. Classical and Fuzzy Hough Transforms

The Hough transform has been in use for detecting straight lines and circles for a rather long time. It is meant to find a parametric curve fitting some measured points of the curve [13], provided the shape, i.e., the general parametric formula of the curve, is known. One of the simplest Hough transforms uses the two-parameter equation of a straight line, i.e.,

$y = a_0 \cdot x + b_0$  (1)

with $x$ and $y$ being the coordinates of points in the Cartesian space, and $a_0$ and $b_0$ being the parameters from the parameter space $(a, b)$. The Hough transform basically generates the curves belonging to each line point $(x_0, y_0)$ in the space of the parameters; in the case of straight lines, each point $(x_0, y_0)$ will form a straight line

$b = -x_0 \cdot a + y_0$  (2)

in the parameter space, too. If another point is on line (1) in the real space, it will have another line in the transformed space, similar to (2), but with a different slope and offset. However, these lines, formed by the points on line (1), will have an intersection at $(a_0, b_0)$, i.e., at the parameter pair belonging to (1). This means that the point $(a_0, b_0)$ will arise from all the points $(x, y)$ of (1); thus, if we add one vote for each point of (1), then $(a_0, b_0)$ will have a high number of votes, whereas other points, which belong to only one of the lines of type (2), will have only one. This consideration was used by Hough and later by many others to develop the following method.
  • Divide the space $(x, y)$ by a finite grid (if not already executed).
  • Divide the transformed space $(a, b)$ by a finite grid; it gives the resolution of the result.
  • For each point $(a, b)$ in the transformed space, add a vote for each point $(x, y)$ in the original space that is contained by the line.
  • Search for the maximum of votes:
    • If there is just one line, the global maximum $(a_{max,0}, b_{max,0})$ will be the approximation of the parameters $(a_0, b_0)$ of the line we were looking for.
    • If there are multiple lines, with longer and shorter segments, the local maxima $(a_{max,k}, b_{max,k})$ will also approximate parameter pairs of lines.
      The longer the line in the original space, the more votes it obtains. By setting up a threshold, the length of the detected line segment can be controlled.
  • The lines $y = a_{max,k} \cdot x + b_{max,k}$ with the detected approximate parameters $(a_{max,k}, b_{max,k})$ can be drawn in the original space $(x, y)$.
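As an illustration of the voting steps above (not part of the original paper), the straight-line case can be sketched in a few lines of numpy; the grids `a_range`/`b_range` and the helper `hough_lines` are our own illustrative names:

```python
import numpy as np

def hough_lines(points, a_range, b_range):
    """Accumulate votes for the line parameters (a, b) in y = a*x + b.

    points  : iterable of (x, y) edge coordinates
    a_range : 1D array of candidate slopes (the grid over parameter a)
    b_range : 1D array of candidate offsets (the grid over parameter b)
    """
    votes = np.zeros((len(a_range), len(b_range)), dtype=int)
    db = b_range[1] - b_range[0]  # grid step of the offset axis
    for x, y in points:
        # Each point (x, y) defines the line b = -x*a + y in parameter space.
        for ja, a in enumerate(a_range):
            b = y - a * x
            jb = int(round((b - b_range[0]) / db))
            if 0 <= jb < len(b_range):
                votes[ja, jb] += 1
    return votes

# Points on the line y = 2*x + 1 collect all their votes in a single cell.
pts = [(x, 2 * x + 1) for x in range(10)]
a_grid = np.linspace(-5, 5, 101)    # slope resolution 0.1
b_grid = np.linspace(-10, 10, 201)  # offset resolution 0.1
V = hough_lines(pts, a_grid, b_grid)
ja, jb = np.unravel_index(V.argmax(), V.shape)
print(a_grid[ja], b_grid[jb])  # ≈ 2.0, 1.0
```

The global maximum of the accumulator recovers the slope and offset of the line, just as the step list describes.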
The algorithm is given for the case of straight lines like (1), but it can be generalized easily to any kind of parametric curve. In the case of colorectal polyps, the circular or elliptic Hough transform is the most plausible, with the parametric equations

$r_0^2 = (x - a_0)^2 + (y - b_0)^2,$  (3)

$1 = \frac{(x - h_0)^2}{a_0^2} + \frac{(y - k_0)^2}{b_0^2}.$  (4)

It can be seen that the circle has three parameters, the radius $r_0$ and the centre coordinates $a_0$ and $b_0$, while the ellipse has four parameters, the half axes $a_0$ and $b_0$ and the centre coordinates $h_0$ and $k_0$.
Nowadays, Hough transforms are applied to images after edge detection [14,15,16]; thus, the original space already has an innate grid, i.e., step 1 is mostly not necessary. Very often this edge detection step is considered the first step of the Hough transform.
As the focus of our paper is on circular Hough transform, here the pseudocode of the classical circular Hough transform is given in Algorithm 1.
Algorithm 1: Classical Hough transform for a circle with parameters $a$, $b$, and $r$
Requirements:
  an edge image $I[i_x, i_y]$ with size $L_x, L_y$,
  a finite parameter space $V[j_a, j_b, j_r]$ with size $L_a, L_b, L_r$, with initial values of 0,
  a threshold for peak percentage $P_p$,
  a result image $R[i_x, i_y]$ with size $L_x, L_y$, with initial values of 0
1:  for each image row $i_x$ from 1 to $L_x$
2:    for each image column $i_y$ from 1 to $L_y$
3:      for each parameter space row $j_a$ from 1 to $L_a$
4:        for each parameter space column $j_b$ from 1 to $L_b$
5:          for each parameter space 3rd dimension $j_r$ from 1 to $L_r$
6:            if $I[i_x, i_y] = 1$ and $j_r^2 = (i_x - j_a)^2 + (i_y - j_b)^2$
7:              $V[j_a, j_b, j_r] = V[j_a, j_b, j_r] + 1$
8:            end if
9:          end for
10:       end for
11:     end for
12:   end for
13: end for
14: compute the global maximum $M_G$ in $V[j_a, j_b, j_r]$
15: compute the local maxima $M_k = V[j_{a,k}, j_{b,k}, j_{r,k}]$
16: select the local maxima with $M_k > P_p M_G$
17: calculate the number $N_M$ of the local maxima from line 16
18: for each local maximum $k$ from 1 to $N_M$
19:   for each result image row $i_x$ from 1 to $L_x$
20:     for each result image column $i_y$ from 1 to $L_y$
21:       if $j_{r,k}^2 = (i_x - j_{a,k})^2 + (i_y - j_{b,k})^2$
22:         $R[i_x, i_y] = 1$
23:       end if
24:     end for
25:   end for
26: end for
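The accumulation loop of Algorithm 1 can be sketched as a short numpy implementation. This is an illustrative reimplementation (not the authors' code): the exact equality of line 6 is relaxed here to a half-pixel tolerance so that circles rasterized on the integer grid still collect their votes, and the centre grid is taken to coincide with the image grid.

```python
import numpy as np

def circular_hough(edge_img, r_values, tol=0.5):
    """Classical circular Hough transform, a direct (slow) sketch of Algorithm 1.

    Every edge pixel (i_x, i_y) votes for each parameter cell (j_a, j_b, j_r)
    satisfying the circle equation j_r^2 = (i_x - j_a)^2 + (i_y - j_b)^2,
    relaxed to a half-pixel tolerance because of the integer grid.
    """
    Lx, Ly = edge_img.shape
    votes = np.zeros((Lx, Ly, len(r_values)))
    ja, jb = np.meshgrid(np.arange(Lx), np.arange(Ly), indexing="ij")
    for ix, iy in zip(*np.nonzero(edge_img)):
        dist = np.sqrt((ix - ja) ** 2 + (iy - jb) ** 2)
        for jr, r in enumerate(r_values):
            votes[:, :, jr] += np.abs(dist - r) < tol
    return votes

# Rasterize a circle of radius 5 centred at (10, 10) and recover its parameters.
edge = np.zeros((21, 21), dtype=int)
for t in np.linspace(0, 2 * np.pi, 200):
    edge[round(10 + 5 * np.cos(t)), round(10 + 5 * np.sin(t))] = 1
r_values = [3, 4, 5, 6, 7]
V = circular_hough(edge, r_values)
a, b, r_idx = np.unravel_index(V.argmax(), V.shape)
print(a, b, r_values[r_idx])  # → 10 10 5
```

The peak of the three-dimensional accumulator gives the centre and the radius of the detected circle.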
In real life, images have noise; thus, even if the original objects had straight lines, circles, or ellipses as their edges, the images will probably have distorted edges. If this distortion is not too large, then the classical Hough transform is still effective, though sometimes several lines arise instead of one. However, there is a method that can handle not only slightly distorted edges but also larger deviations from the circles. This method is the fuzzy Hough transform, introduced by Han, Kóczy, and Poston [18].
The fuzzy Hough transform considers the points $(x, y)$ as fuzzy points. Fuzzy points are based on Zadeh's original idea of fuzzy sets [43], which generalized the classical, Boolean sets to have fuzzy perimeters by introducing a so-called membership function $\mu$. In classical, Boolean algebra, only the membership values $\mu = 0$ and $\mu = 1$ are possible, i.e., something can either be a member of a set or not. In Zadeh's approach, objects are not simply elements or non-elements of a set; their membership has a strength. A geometrical point $(x, y)$ can also be considered a fuzzy set: it has membership value $\mu = 1$ at the coordinate point $(x, y)$, and the membership value decreases monotonously to zero as we move away from $(x, y)$, so that its environment still partially belongs to the fuzzy point $(x, y)$. Using this approach in the fuzzy Hough transform, not only does the coordinate point $(x, y)$ give a vote of 1 to the corresponding parameter space points, but its environment can also give votes proportional to their membership values.
This approach modifies the previously described classical Hough transform in the following way. (For the sake of generality, $(a, b, \dots)$ is used instead of $(a, b)$ to express the applicability to any type of parametric curve, not only straight lines.)
  • Divide the space of fuzzy points $(x, y)$ by a finite grid (if not already executed).
  • Divide the transformed space $(a, b, \dots)$ by a finite grid; it gives the resolution of the result.
  • For each point $(a, b, \dots)$ in the transformed space and its environment, add a vote proportional to the membership function for each fuzzy point $(x, y)$ in the original space that is part of the fuzzy curve.
  • Search for the maximum of votes:
    • If there is just one curve segment, the global maximum $(a_{max,0}, b_{max,0}, \dots)$ will be the approximation of the parameters $(a_0, b_0, \dots)$ of the curve we were looking for.
    • If there are multiple curves, with longer and shorter segments, the local maxima $(a_{max,k}, b_{max,k}, \dots)$ will also approximate parameters of the curves.
      The longer the curve in the original space, the more votes it obtains. By setting up a threshold, the length of the detected segment can be controlled.
  • The curves with the detected approximate parameters $(a_{max,k}, b_{max,k}, \dots)$ can be drawn in the original space $(x, y)$.
Practically, if all fuzzy points $(x, y)$ have the same membership distribution around them, then the 3rd step of the voting manifests in adding a vote $\mu(\alpha - a, \beta - b, \dots)$ to the neighboring points $(\alpha, \beta, \dots)$ of the studied parameter space point $(a, b, \dots)$.
Using this approach makes the Hough transform more tolerant to distortions of the original parametric curves. This enabled us to use the circular Hough transform for searching for the contours of polyps, i.e., the three-parameter Equation (3) could be used instead of the four-parameter Equation (4).
Using these considerations, the pseudocode of the circular fuzzy Hough transform is as follows in Algorithm 2.
Algorithm 2: Fuzzy Hough transform for a circle with parameters $a$, $b$, and $r$
Requirements:
  an edge image $I[i_x, i_y]$ with size $L_x, L_y$,
  a finite parameter space $V[j_a, j_b, j_r]$ with size $L_a, L_b, L_r$, with initial values of 0,
  a threshold for peak percentage $P_p$,
  a voting membership matrix $\mu[j_a, j_b, j_r]$ with size $2d_a + 1,\ 2d_b + 1,\ 2d_r + 1$,
  a result image $R[i_x, i_y]$ with size $L_x, L_y$, with initial values of 0
1:  for each image row $i_x$ from 1 to $L_x$
2:    for each image column $i_y$ from 1 to $L_y$
3:      for each parameter space row $j_a$ from 1 to $L_a$
4:        for each parameter space column $j_b$ from 1 to $L_b$
5:          for each parameter space 3rd dimension $j_r$ from 1 to $L_r$
6:            if $I[i_x, i_y] = 1$ and $j_r^2 = (i_x - j_a)^2 + (i_y - j_b)^2$
7:              $V[j_a - d_a, \dots, j_a + d_a;\ j_b - d_b, \dots, j_b + d_b;\ j_r - d_r, \dots, j_r + d_r] =$
                $V[j_a - d_a, \dots, j_a + d_a;\ j_b - d_b, \dots, j_b + d_b;\ j_r - d_r, \dots, j_r + d_r] + \mu$
8:            end if
9:          end for
10:       end for
11:     end for
12:   end for
13: end for
14: compute the global maximum $M_G$ in $V[j_a, j_b, j_r]$
15: compute the local maxima $M_k = V[j_{a,k}, j_{b,k}, j_{r,k}]$
16: select the local maxima with $M_k > P_p M_G$
17: calculate the number $N_M$ of the local maxima from line 16
18: for each local maximum $k$ from 1 to $N_M$
19:   for each result image row $i_x$ from 1 to $L_x$
20:     for each result image column $i_y$ from 1 to $L_y$
21:       if $j_{r,k}^2 = (i_x - j_{a,k})^2 + (i_y - j_{b,k})^2$
22:         $R[i_x, i_y] = 1$
23:       end if
24:     end for
25:   end for
26: end for
The applied voting membership function was a 3D Gaussian, as it has a rather wide region around the center that is still close to 1. For a given parameter point $(a_0, b_0, r_0)$, the votes were given to the neighboring points in $[a_0 - \sigma, a_0 + \sigma] \times [b_0 - \sigma, b_0 + \sigma] \times [r_0 - \sigma, r_0 + \sigma]$ according to the membership function

$\mu(a, b, r) = \exp\left(-\frac{(a - a_0)^2 + (b - b_0)^2 + (r - r_0)^2}{2\sigma}\right).$  (5)
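For illustration, the membership matrix of Equation (5) and the smeared voting of line 7 of Algorithm 2 can be sketched as follows. The helper names are ours, and clipping at the border of the parameter space is one possible way to handle votes near its edge:

```python
import numpy as np

def gaussian_membership(sigma, d):
    """3D Gaussian voting membership of Equation (5) on a (2d+1)^3 grid.

    The centre cell has membership 1 and the value decays with the squared
    distance; note that the paper's formula divides by 2*sigma (not 2*sigma^2).
    """
    ax = np.arange(-d, d + 1)
    a, b, r = np.meshgrid(ax, ax, ax, indexing="ij")
    return np.exp(-(a ** 2 + b ** 2 + r ** 2) / (2.0 * sigma))

def fuzzy_vote(votes, ja, jb, jr, mu):
    """Line 7 of Algorithm 2: add the membership matrix mu around the
    accumulator cell (ja, jb, jr), clipping at the parameter space borders."""
    d = (mu.shape[0] - 1) // 2
    for da in range(-d, d + 1):
        for db in range(-d, d + 1):
            for dr in range(-d, d + 1):
                a, b, r = ja + da, jb + db, jr + dr
                if (0 <= a < votes.shape[0] and 0 <= b < votes.shape[1]
                        and 0 <= r < votes.shape[2]):
                    votes[a, b, r] += mu[da + d, db + d, dr + d]

mu = gaussian_membership(sigma=5, d=5)
V = np.zeros((25, 25, 15))
fuzzy_vote(V, 12, 12, 7, mu)
print(V[12, 12, 7], V[13, 12, 7])  # → 1.0 and exp(-0.1) ≈ 0.905
```

A single fuzzy vote thus contributes a full value at the hit cell and smaller values in its neighborhood, which is what makes the peaks tolerant to deviations from a perfect circle.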
As an example, an image (No. 220 from the CVC-Colon database), its preprocessing steps, and its transformed images are shown in the following figures. Figure 1 shows the image after reflection removal, its ground truth mask, and its Prewitt edge detected version. Figure 2 shows the resulting votes for the classical and fuzzy Hough transforms (one with slight fuzziness, $\sigma = 5$, and another with wide fuzziness, $\sigma = 15$). Each column has four images: two at the radius values belonging to the two main detected circles, and two with radii slightly smaller and larger than the circle at the polyp. Figure 3 shows the detected circles for different thresholds relative to the global maximum of the votes.

2.2. Gradient Filtering

In mathematics, for a 2D continuous function, we use the partial derivatives to measure the degree of variation along each dimension. The edges in an image are segments that can be formed from the point locations where there is a rapid change in the image gray-level intensity in a small region. The connection between the previous two concepts made it possible to apply gradient filtering techniques in the field of image processing to detect edges.
The gradient of an image intensity function is a 2D vector with two components defined by the horizontal and vertical derivatives at each image point; using these two values, we can identify the edge's strength (magnitude) and its orientation at each pixel.
The common mathematical formulation of the gradient of a 2D image is the following vector:

$\nabla f(x, y) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix},$  (6)

where $f$ is the image intensity function, and $x$ and $y$ are the spatial coordinates of the image. The magnitude and direction of the gradient are given, respectively, by the two equations below:

$|\nabla f(x, y)| = \sqrt{G_x^2 + G_y^2},$  (7)

$\alpha(x, y) = \tan^{-1}\left(\frac{G_y}{G_x}\right).$  (8)
All of the aforementioned considerations are carried out in the continuous domain. In the case of a digital image, where the intensity function is sampled at discrete image points, we replace the gradient operator by a discrete operation, i.e., by a convolution between the image and a kernel, which is a matrix of smaller size. The discrete counterpart of partial differentiation is taking the difference of neighboring pixels. The following gradient kernels are often used in practice:

$Roberts_1 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \quad Roberts_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$  (9)

$Prewitt_x = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}, \quad Prewitt_y = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}.$  (10)

$Sobel_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad Sobel_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.$  (11)
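To make the kernel usage concrete, the following numpy-only sketch (illustrative, not the authors' implementation) applies the Prewitt kernels of (10) and evaluates the magnitude (7) and direction (8) on a synthetic step edge; the helper names are ours, and the sign convention follows the reconstructed kernels above:

```python
import numpy as np

# Prewitt kernels as in Eq. (10): here the x kernel responds to changes along
# the rows and the y kernel to changes along the columns.
PREWITT_X = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)
PREWITT_Y = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def correlate2d_valid(img, kernel):
    """Plain 'valid'-mode correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def gradient_magnitude_direction(img):
    """Gradient magnitude (7) and direction (8) using the Prewitt kernels."""
    gx = correlate2d_valid(img, PREWITT_X)
    gy = correlate2d_valid(img, PREWITT_Y)
    return np.sqrt(gx ** 2 + gy ** 2), np.arctan2(gy, gx)

# A horizontal step edge between rows 2 and 3: the gradient is purely G_x.
img = np.zeros((5, 5))
img[3:, :] = 1.0
mag, ang = gradient_magnitude_direction(img)
print(mag[1, 1], ang[1, 1])  # → 3.0 0.0
```

Along the step, all three row weights of the kernel fire, giving a magnitude of 3 and a direction perpendicular to the edge.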

2.3. The Implemented Edge Detection Algorithms

Edge detection methods can be used as mathematical techniques to identify particular locations in an image where the gray level intensities show discontinuities. The resulting edge maps serve as the basis for subsequent processing steps in numerous significant computer vision applications. In this section of the paper, a brief description of the four used edge detection methods is provided.

2.3.1. Roberts, Prewitt, and Sobel Edge Detection Algorithms

The well-known Roberts, Prewitt, and Sobel edge detection algorithms are widely used because of their simplicity and ease of implementation. All of these algorithms have the same working mechanism but use different kernels. Each kernel calculates the gradient in a specified direction. The choice of algorithm depends on the desired application and the characteristics of the image being processed.
Roberts edge detection method uses convolutional filters to detect the variations in the image gray-level intensity in the diagonal directions [44], whereas Prewitt and Sobel methods use convolutional matrices to detect the changes in both x and y directions [45,46].
The previously mentioned kernels (9), (10), and (11) are used by the Roberts, Prewitt, and Sobel edge detection algorithms, respectively.

2.3.2. Canny Edge Detection Algorithm

John Canny first presented Canny edge detection in 1986 [47] as a multistep algorithm. The Canny algorithm looks for the connectivity of the edge points as well as for high-gradient image points, which makes it the most popular edge detection technique in many computer vision and image processing applications. This technique produces very reliable and highly accurate edge maps that are close to the human perception of edges.
The process of the Canny algorithm consists of four main steps. First, the original image is refined using a Gaussian filter to remove unwanted noise. The applied Gaussian filter is defined as follows:

$g(x, y) = G_\sigma(x, y) * f(x, y),$  (12)

where

$G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right).$  (13)

The convolution operator is represented by the symbol $*$, and the indices $x$ and $y$ identify a pixel's location within the image. The two-dimensional function $G_\sigma(x, y)$ is a Gaussian function with variance $\sigma^2$.
The gradient magnitude and direction of the smoothed image are then calculated using a certain gradient operator, i.e., Roberts, Prewitt, or Sobel. The third step is implementing the Non-maximum Suppression (NMS) approach to check whether each pixel is part of a local maximum; if not, it is set to zero. Two hysteresis thresholds, a high and a low one, are applied in the final step. Every edge point with a gradient value greater than the high threshold is identified as a strong edge, whereas the edge points whose gradient values fall below the low threshold are eliminated. The connectivity of the remaining edge points, which have gradient values between the low and high thresholds, is tested: the examined point is considered an edge pixel only if at least one of its neighboring pixels is a strong edge pixel [48].
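The final hysteresis step can be sketched as follows (an illustrative numpy version, not the authors' implementation; the example gradient values and thresholds are arbitrary):

```python
import numpy as np

def dilate8(mask):
    """One step of 8-connected binary dilation (numpy only)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def hysteresis(grad_mag, low, high):
    """Keep strong edges (> high) and those weak edges (in [low, high]) that
    are 8-connected, possibly through other weak pixels, to a strong edge."""
    strong = grad_mag > high
    weak = (grad_mag >= low) & ~strong
    keep = strong.copy()
    while True:
        grown = dilate8(keep) & weak  # weak pixels touching a kept pixel
        new = keep | grown
        if (new == keep).all():
            return new
        keep = new

# One strong seed (5) pulls in the chain of weak pixels (2) connected to it;
# the weak pixel at (2, 3) is reached through the weak pixel at (1, 2).
g = np.array([[0, 2, 0, 0],
              [0, 5, 2, 0],
              [0, 0, 0, 2]], dtype=float)
print(hysteresis(g, low=1.5, high=4.0).astype(int))
```

The iteration stops once no further weak pixel can be reached, which mirrors the connectivity test described above.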

3. Practical Application of the Proposed Method

3.1. Edge Detection Methods—Application, Evaluation, and Selection

For these tasks, several processing steps were carried out. They are sequentially summarized below together with the illustrating figures and plots. Three publicly available colonoscopy image databases, CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34], were used.
  • Cutting off the black frame surrounding all original images to reduce unnecessary information.
  • Removing the colonoscope's light reflections: The colonoscope light's reflections (and consequently their contours) were removed from all the databases' images as a step towards reducing the number of redundant edge pixels (Figure 4b). The histogram of the image pixel intensities was used as the basis for the reflection removal step. Briefly, we cut out the histogram's highest (and lowest) intensity peaks, and then the pixel intensities were re-normalized to the original [0 to 255] domain. A “white mask” was created using the pixels that made up the histogram's highest peak. Similar to the procedure described in [49], the “white mask” was extended and smoothed into the neighboring pixels.
  • Extracting polyp contour: For each of the ground truth masks (Figure 4c), the contour was defined (Figure 4d). The number of pixels that make up the polyp mask contour was calculated for later use in the evaluation process.
  • Generating the “ring mask” for the colorectal polyp contour: In many cases of the manually drawn masks, the edges of the polyp's contour are not completely visible, either because the polyp is located in the area of bowel folds or because it is covered with impurities. Moreover, human fatigue or error can also affect the drawing accuracy of the colorectal polyp mask. These are the main reasons why it was necessary to extend the contour of the manually drawn database mask to a ring of finite width, proportional to the size of the examined image. The previously extracted contour was extended into a ring (Figure 4e). To do that, we selected the first x nearest pixels of the entire contour as the width of the ring mask. As the databases' images have different sizes, we used different ring mask widths based on the image size (x = 3 for the CVC-Clinic database [9], x = 5 for the CVC-Colon database [42], and x = 10 for the ETIS-Larib database [34]).
  • Calculating the gradient magnitude for each of the studied samples, like in (Figure 4f).
  • Detecting polyp edges: Canny, Prewitt, Roberts, and Sobel techniques were applied as four different edge detection methods (Figure 5a,c,e,g, respectively). This edge detection step decreases the time required for the subsequent processing steps and offers a comparatively consistent data source that tolerates geometric and environmental variations during the Hough transform calculations. The total number of edge pixels resulting from each filtering technique for all the images of the three databases was calculated and plotted in Figure 6.
  • Finding the gradient-weighted edges: The edge-filtered images (Figure 5a,c,e,g) were multiplied by the gradient magnitude output (Figure 4f). This multiplication step makes it possible to determine, within the whole gradient domain, the gradient magnitude range where the polyp edges are most likely to be present.
    As an example, images like those in Figure 5b,d,f,h were obtained for the Canny, Prewitt, Roberts, and Sobel techniques. (Note that, in contrast with the full gradient subplot (Figure 4f), subplots Figure 5b,d,f,h contain the gradient values only where the edge mask value is 1, i.e., where the white pixels are located in subplots Figure 5a,c,e,g.)
  • Normalizing: To make the proposed approach universally applicable, the gradient-weighted edge pixels were normalized into the interval [0, 1] for each image of each database separately.
  • Counting the number of edge pixels located inside the ring mask (this number serves as the reference: it is the number of the useful edge pixels).
  • Calculating the final evaluation metrics: Considering our application requirements, two quantities were introduced to evaluate each of the four implemented edge detection methods. For each image of the three studied databases, the statistics calculated in the previous steps (the total number of pixels in the polyp mask contour, the total number of edge pixels resulting from each edge detection method, and the total number of edge pixels inside the ring mask) were used in composing the following two metrics.
    • The first evaluation parameter, i.e., the one referring to the calculation efficiency, is defined by the ratio between the number of edge pixels in the ring mask around the polyp contour and the total number of edge pixels in the entire image,
      R_calc = (No. of edge pixels in the ring mask) / (Total no. of edge pixels).
      This ratio represents the quality of the edge detection method with regard to the Hough transforms and polyp detection. R_calc ranges between 0 and 1. The higher this ratio is, the fewer non-mask-contour edges are identified, and the less unneeded calculation is required in the classical or fuzzy Hough transform. Figure 7 shows the values of this metric resulting from the Canny, Prewitt, Roberts, and Sobel filtering techniques for the three databases.
    • The second evaluation parameter, i.e., the metric referring to how well the edge pixels find the ideal polyp contour (derived from the ground truth mask), is given by the ratio between the number of edge pixels in the ring mask and the number of pixels in the database polyp mask contour,
      R_edge = (No. of edge pixels in the ring mask) / (No. of pixels in mask contour).
      In the ideal case, this ratio should be as close to 1 as possible. Figure 8 displays the values of this metric resulting from all edge detection techniques for all studied databases.
      We have to note that the detected edge pixels in the ring mask may not be exactly the same as the edge pixels in the manually drawn database mask contour, but they can still be used for finding the polyp.
  • Selecting the most appropriate edge detection technique: The Canny method detects a wide range of fine edges and gives a dense, detailed edge map. It also tends to connect edge pixels into continuous edge lines, in contrast with the other three techniques. Figure 6 clearly shows the large difference between the total number of edge pixels resulting from Canny and from the other three edge detection methods. Prewitt, Roberts, and Sobel have very similar results for the majority of samples. As we are interested not only in decreasing the number of edge points scanned by the Hough transform, but also in increasing the efficiency of finding the colorectal polyp, we relied on the two evaluation metrics R_calc and R_edge as the basis for selecting the most appropriate edge detection technique, as follows.
    • Selection of the most appropriate edge detection technique using R_calc: We tested two different selection strategies using metric R_calc. Based on the definition of R_calc, the closer its value is to 1, the better the filter.
      According to the 1st strategy, we can select the filter whose mean R_calc value is closest to 1. For each database and each of the four edge detection methods, we calculated the mean of the R_calc values.
      However, for the metric R_calc, in all databases and for all four edge detection techniques, there were many samples within [0, 0.1], as is visible in Figure 7. This is the reason why another strategy had to be considered as well.
      According to the 2nd strategy, we can select the filter that has the most samples close to the ideal value. For this purpose, a goodness interval can be defined, and the number of samples within that interval can be counted. In our case, for every edge detection technique, we checked how many samples in each database had an R_calc value greater than 0.1 as a measure of the filter’s suitability. Accordingly, the higher the number of resulting samples, the better the filter. Of course, the percentage of this number within the total number of images in each database has to be considered: database CVC-Clinic [9] has 612 images, database CVC-Colon [42] has 379 images, and database ETIS-Larib [34] has 196 images. Table 2 lists the total results of this step.
    • Selection of the most appropriate edge detection technique using R_edge: We also tested two different selection strategies based on metric R_edge.
      According to the 1st strategy, similar to what we did for R_calc, we can test how close the results are to their ideal value. However, instead of calculating the mean value, we calculated the mean absolute error (MAE) of metric R_edge from its ideal value of 1. For this criterion, the smallest MAE value nominates the better filter.
      According to the 2nd strategy, as metric R_edge should be as close to 1 as possible, we suggested defining a goodness interval around R_edge = 1, i.e., counting each sample with an R_edge value within [0.5, 1.5] among the good samples. Consequently, the higher the number of samples within the goodness interval, the better the filter. Table 3 summarizes the total results of this step.
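Taken together, the pre-processing steps and the two selection metrics above can be sketched as follows. This is a minimal NumPy/SciPy illustration with our own function names, not the authors’ implementation; the ring mask is built by binary dilation of the one-pixel mask contour, and the Prewitt gradient magnitude is taken from the two derivative filters:

```python
import numpy as np
from scipy import ndimage

def ring_mask(mask, width):
    """Extend the one-pixel contour of a binary mask into a ring of the given width."""
    contour = mask & ~ndimage.binary_erosion(mask)   # pixels bordering the background
    return ndimage.binary_dilation(contour, iterations=width)

def prewitt_gradient(image):
    """Gradient magnitude from the two Prewitt derivative filters."""
    gx = ndimage.prewitt(image.astype(float), axis=0)
    gy = ndimage.prewitt(image.astype(float), axis=1)
    return np.hypot(gx, gy)

def edge_metrics(edges, mask, width):
    """R_calc and R_edge for a binary edge map and a ground-truth polyp mask."""
    ring = ring_mask(mask, width)
    contour = mask & ~ndimage.binary_erosion(mask)
    n_ring = np.count_nonzero(edges & ring)
    r_calc = n_ring / max(np.count_nonzero(edges), 1)
    r_edge = n_ring / max(np.count_nonzero(contour), 1)
    return r_calc, r_edge
```

If the edge map coincides with the mask contour, both metrics evaluate to 1, their ideal values.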
The highlighted results presented in the above two tables show that the Prewitt filter is the most appropriate edge detection technique based on our proposed selection strategies using both metrics R _ c a l c and R _ e d g e in most cases.
It should be noted from Table 2 (2nd strategy) that the results of the Sobel filter are very close to the Prewitt ones; for database CVC-Clinic, they are even the same. Moreover, for database CVC-Colon, the number of samples with an R_calc value greater than 0.1 is three higher for the Sobel filter than for Prewitt.
The flowchart in Figure 9 summarizes the overall edge detection method selection procedure.

3.2. Gradient-Based Thresholding for Prewitt Edge Detection Results

In this part of the proposed method, the dynamic distribution of the normalized gradient-weighted edge pixels resulting from the Prewitt method, both in the ring mask (i.e., the area surrounding the ground truth mask contour) and in the full image, was studied by generating the individual histograms of all images in all databases.
Four individual histograms of four different samples can be found in Figure 10. Please note that as in most cases the gradient intensities fell into the first 20% of the range, the density of the histogram bins in that domain was made larger. This division of the normalized gradient range gives a more detailed view of the tendency in the domain with dense bin distribution. Additionally, as the column heights in the dense bin distribution domain became relatively smaller, the small column magnitudes in the other parts of the histogram (i.e., in the higher normalized gradient domain) became more visible.
In most cases, the distributions of the full image edges (cyan) and the ring mask edges (yellow) showed a very similar tendency with different magnitudes, as in the first example, subplot (a) of Figure 10. Very often, besides the similar tendency in the low gradient domain, the higher gradient parts were missing for the ring mask edges, as in the 2nd and 3rd examples (subplots (b) and (c)). In some other cases, the distributions of the full picture and the ring mask had different tendencies, as in the 4th example, subplot (d).
To summarize the results of the individual histograms, 3D-histograms were created for the three databases with the picture number being the 3rd dimension. These total histograms are plotted in Figure 11 and Figure 12. Figure 11 shows the perspective view in linear scale, while Figure 12 gives the top view of the same histograms in logarithmic scale. It is worth mentioning that the linear and logarithmic scale plots of the same histogram are both given, because the smaller valued histogram parts at the lowest and the highest normalized gradient bins cannot be observed well in the linear scale plots of the total histograms. On the other hand, the top view of the logarithmic scale plots makes the entire set of data viewable without columns blocking the ones behind them.
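The non-uniform binning described above (denser bins in the low-gradient domain) can be sketched with NumPy’s `np.histogram` and custom bin edges. The bin counts below are our own illustrative choice, not the paper’s exact binning:

```python
import numpy as np

def gradient_histogram(norm_grad, n_low=20, n_high=8):
    """Histogram of normalized gradient values with denser bins in the
    low-gradient domain [0, 0.2], where most polyp edge gradients fall."""
    edges = np.concatenate([np.linspace(0.0, 0.2, n_low, endpoint=False),
                            np.linspace(0.2, 1.0, n_high + 1)])
    # only nonzero values, i.e., actual gradient-weighted edge pixels
    counts, _ = np.histogram(norm_grad[norm_grad > 0], bins=edges)
    return counts, edges
```

Stacking such per-image histograms along a third axis gives the 3D histograms of Figures 11 and 12.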
Based on the results in Figure 11 and Figure 12, it is visible that the edges with very high gradient values do not belong to the polyp edge, i.e., most of the ring mask edge histograms have 0 values above 0.3. Additionally, they have zero values below 0.06. Moreover, in many cases this lower limit can be raised to 0.08, and the upper limit can be as low as 0.2. We decided to use this property and omit the edges with too high and too low gradient magnitudes in order both to reduce the number of pixels to be Hough transformed and to sort out those pixels that certainly do not belong to the polyp edges. Thus, in the subsequent calculations, besides the full Hough transforms for all the edges, we performed restricted Hough transforms for the edges with normalized gradient values within a wide threshold interval [0.06, 0.3] and a thin threshold interval [0.08, 0.2]. The full Hough transform results can be interpreted as a reference for analyzing how the restricted transforms influence the results.
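The restriction step can be sketched as a simple threshold on the normalized gradient-weighted edge image (a NumPy sketch with our own function name; the returned weights are the ones later reused in the voting):

```python
import numpy as np

def threshold_edges(norm_grad_edges, g_min=0.06, g_max=0.30):
    """Keep only edge pixels whose normalized gradient magnitude lies
    inside [g_min, g_max]; returns their coordinates and gradient weights."""
    keep = (norm_grad_edges >= g_min) & (norm_grad_edges <= g_max)
    ys, xs = np.nonzero(keep)
    return ys, xs, norm_grad_edges[ys, xs]
```

With the thin interval, the call would simply be `threshold_edges(img, 0.08, 0.20)`.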

3.3. Gradient-Weighted Voting Approach for Classical and Fuzzy Hough Transforms

In addition to excluding the edge points with too high or too low contrast (i.e., gradient thresholding), the gradient values can be used for another purpose, namely, to modify the voting process.
Even after the thresholding, polyp contours usually have segments with higher gradient magnitudes, mixed with lower gradients, whereas some of the background patterns have very low gradients. To decrease the influence of these lower gradient background edges on the Hough transform results, the following weighted voting approach was introduced.
During the original Hough transform, all edge points receive the same vote, no matter how strong the edges are. In our method, instead of 1, each point uses its normalized gradient magnitude as its vote. In the case of the fuzzy Hough transforms, the whole voting membership function is multiplied by the gradient magnitude of the given pixel. As a result, the smaller the intensity step at the edge is, the smaller the voting weight of the edge point becomes, whether the voting is classical or fuzzy. For clarity, the pseudocode of the fuzzy circular Hough transform with the gradient-weighted voting approach is given in Algorithm 3.
Algorithm 3: Fuzzy Hough transform for a circle with parameters a, b, and r with gradient-weighted voting approach
Requirements:
  an edge image I[i_x, i_y] of size (L_x, L_y)
  a gradient magnitude image G[i_x, i_y] of size (L_x, L_y)
  a finite parameter space V[j_a, j_b, j_r] of size (L_a, L_b, L_r), with initial values of 0
  a threshold for the peak percentage P_p
  a threshold interval for the gradient magnitudes [g_min, g_max]
  a voting membership matrix μ[j_a, j_b, j_r] of size (2d_a + 1, 2d_b + 1, 2d_r + 1)
  a result image R[i_x, i_y] of size (L_x, L_y), with initial values of 0
1:  for each image row i_x from 1 to L_x
2:    for each image column i_y from 1 to L_y
3:      for each parameter space row j_a from 1 to L_a
4:        for each parameter space column j_b from 1 to L_b
5:          for each parameter space 3rd dimension j_r from 1 to L_r
6:            if j_r² = (i_x − j_a)² + (i_y − j_b)² and g_min ≤ G[i_x, i_y] ≤ g_max
7:              V[j_a − d_a … j_a + d_a, j_b − d_b … j_b + d_b, j_r − d_r … j_r + d_r] =
                  V[j_a − d_a … j_a + d_a, j_b − d_b … j_b + d_b, j_r − d_r … j_r + d_r] + G[i_x, i_y] · μ
8:            end if
9:          end for
10:       end for
11:     end for
12:   end for
13: end for
14: compute the global maximum M_G in V[j_a, j_b, j_r]
15: compute the local maxima M_k = V[j_a,k, j_b,k, j_r,k]
16: select the local maxima with M_k > P_p · M_G
17: calculate the number N_M of the local maxima from line 16
18: for each local maximum k from 1 to N_M
19:   for each result image row i_x from 1 to L_x
20:     for each result image column i_y from 1 to L_y
21:       if j_r,k² = (i_x − j_a,k)² + (i_y − j_b,k)²
22:         R[i_x, i_y] = 1
23:       end if
24:     end for
25:   end for
26: end for
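The voting loop of Algorithm 3 can be sketched in Python roughly as follows. This is our own simplified illustration, not the authors’ implementation: the fuzzy membership is applied only over the centre coordinates (a, b) using a Gaussian, the circle equation is evaluated by sampling the circle perimeter instead of the five nested loops, and the peak selection of lines 14–17 is left to the caller:

```python
import numpy as np

def fuzzy_hough_circles(grad, radii, g_min=0.06, g_max=0.30, sigma=2.0, d=2):
    """Gradient-weighted fuzzy circular Hough transform (simplified sketch).

    grad  : normalized gradient-weighted edge image (0 where there is no edge)
    radii : candidate circle radii
    Each edge pixel whose gradient lies in [g_min, g_max] votes with its
    gradient magnitude, spread over a Gaussian membership block of
    half-width d around every candidate centre on its voting circle.
    """
    H, W = grad.shape
    acc = np.zeros((H, W, len(radii)))
    ax = np.arange(-d, d + 1)
    mu = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    ys, xs = np.nonzero((grad >= g_min) & (grad <= g_max))
    for y, x in zip(ys, xs):
        w = grad[y, x]
        for k, r in enumerate(radii):
            # unique candidate centres on the circle of radius r around (y, x)
            ts = np.linspace(0.0, 2.0 * np.pi, int(8 * r), endpoint=False)
            pts = {(int(round(y - r * np.sin(t))), int(round(x - r * np.cos(t))))
                   for t in ts}
            for a, b in pts:
                if d <= a < H - d and d <= b < W - d:
                    acc[a - d:a + d + 1, b - d:b + d + 1, k] += w * mu
    return acc
```

For a single synthetic circle, the global maximum of the accumulator lands at the circle’s centre and true radius; thresholding the local maxima at P_p of this global maximum then yields the final circles.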
The performance of the proposed voting technique was evaluated using selected samples from each database for three main cases. The first case was the reference case, where the full Hough transforms (classical and fuzzy) were applied to all the edges without any thresholding, and the traditional voting technique was followed, where each eligible edge point obtains one vote. The other two cases were the restricted Hough transforms for the edges with normalized gradient values within the wide threshold interval [0.06, 0.3] and the thin threshold interval [0.08, 0.2]. For these two cases, the modified, gradient-weighted voting technique was used.
In addition to the classical Hough transform, the fuzzy Hough transform was also studied in each case, with σ = 3, 5, 7, 9, 11, 13, and 15 in the voting membership function (5). (In order to simplify referring to these transform types, the classical transform may be considered as having fuzziness parameter σ = 0.)
When selecting the final circles, i.e., the local maxima in the transformed image, a threshold P_p has to be set within the Hough transform. The arising circles (their number and location) were studied for different local maximum thresholds, namely for P_p = 50%, 60%, 70%, 80%, and 90% of the global maximum of the votes.
The goal of this step is to determine how many of the final circles are around the polyp for the various σ values and local maximum thresholds P_p. For this purpose, the total number of circles N_total was counted. A circle was considered to be around the polyp if its center was inside the ground truth mask and it had points within the ring mask around the polyp contour. The number of such circles within the ring mask, N_ring, was also counted. The ratio
A_r = N_ring / N_total
was used as a metric for the effectiveness of finding circles related to the polyp. If A_r is 0, the polyp is not found; if A_r is too small, too many other circles are found; and if A_r is around 1, the circle(s) around the polyp are found without many other circles appearing in the final results.
To test the tolerance of the different types of Hough transforms to deviation from a circle, it is necessary to know the size of the polyps as well as their roundness. To calculate the average radius of the polyp, r_avg, the maximum and minimum coordinates of the polyp mask in the x and y directions were used as follows,
r_avg = [(x_max − x_min) + (y_max − y_min)] / 4.
The center of the polyp was determined as the average of these coordinates,
(c_x, c_y) = ((x_max + x_min) / 2, (y_max + y_min) / 2).
This center point and the average radius were used to determine the radial displacement of each mask contour point (m_x, m_y) by
Δ_radial = √((m_x − c_x)² + (m_y − c_y)²) − r_avg.
The roundness error can be measured by the maximum of the ratio of the radial displacements and the average radius:
δ_roundness = max(|Δ_radial| / r_avg).
The larger the roundness error δ_roundness is, the more the shape deviates from a circle. Please note that, due to pixel discretization, the roundness error of a circle with a diameter of 30 pixels can still be as large as δ_roundness,30 = 0.04, which decreases for a circle twice as large (i.e., of diameter 60 pixels) to δ_roundness,60 = 0.03. Typically, an ellipse whose larger axis is double the smaller axis has a roundness error of about δ_roundness,30 ≈ 0.33, while with an axis ratio of 1:3 this increases to δ_roundness,30 ≈ 0.45, depending on the size of the ellipse in pixels.
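The roundness computation above can be sketched from a binary mask as follows (a NumPy/SciPy sketch with our own function name; the contour is taken as the mask pixels bordering the background, and r_avg is the average of the two half-extents of the bounding box, consistent with the quoted δ values):

```python
import numpy as np
from scipy import ndimage

def roundness_metrics(mask):
    """Average radius, centre, and roundness error for a binary polyp mask,
    following the bounding-box definitions above."""
    ys, xs = np.nonzero(mask)
    # average radius: mean of the two half-extents of the bounding box
    r_avg = ((xs.max() - xs.min()) + (ys.max() - ys.min())) / 4.0
    cx = (xs.max() + xs.min()) / 2.0
    cy = (ys.max() + ys.min()) / 2.0
    # radial displacement of every contour point from the average radius
    contour = mask & ~ndimage.binary_erosion(mask)
    my, mx = np.nonzero(contour)
    delta_radial = np.hypot(mx - cx, my - cy) - r_avg
    roundness_error = np.max(np.abs(delta_radial)) / r_avg
    return r_avg, (cx, cy), roundness_error
```

For a rasterized disk of diameter 30, this yields a small but nonzero error (on the order of 0.04), while a 2:1 ellipse gives an error around 1/3, matching the values quoted above.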

4. Results and Discussion

4.1. Number of Circles Found by the Algorithm

In this section, the values of A_r for the various σ and P_p values are given for databases CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34]. The total number of circles found on the images, N_total, is also important, as we need as few circles, and thus as few ROIs, as possible for the steps after the Hough transform. For this reason, N_total is also plotted for the three databases. In Figure 13 and Figure 14, the values of A_r and N_total are given for CVC-Clinic, then in Figure 15 and Figure 16 for CVC-Colon, and finally in Figure 17 and Figure 18 for ETIS-Larib. Figure 14, Figure 16 and Figure 18, together with Figure 13, Figure 15 and Figure 17, completely cover the number of circles found in the complete image and in the ring masks.
The test images were selected so that their R _ c a l c values would be larger than 0.1. Both roundish and elongated polyps were selected from all three databases. The images with their ground truth masks are given in Appendix A. The image numbers and their respective R _ c a l c and R _ e d g e values together with the total number of Prewitt edge pixels for each tested sample can be found in Table 4. To make the paper more accessible to the reader, a nomenclature section of the proposed approach is also presented in Table A1.
In order to check the applicability of our method, we selected some images (two from each database) that have unfavorable, very small R_calc values. These “bad” images are shown as the first two images of the figures illustrating the test samples in Appendix A, and as the first two lines of Table 4. For all these images, zero circles were found in the ring mask with the original, non-gradient-weighted voting method (except for one case, CVC-Colon 255). This is why their results are not shown in Figure 13, Figure 15 and Figure 17.
In Figure 13, Figure 15 and Figure 17 the values of the ratio A r = N r i n g / N t o t a l are shown for databases CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34], respectively.
Each plot has three segments along the vertical axis. The uppermost segment shows the original Hough transform’s A_r values. These are the reference values to which the other results should be compared. The next, middle segment corresponds to the gradient-weighted voting with the wide threshold, and the lowermost segment belongs to the gradient-weighted voting with the thin threshold. Each line in each segment belongs to one image, in the order of Table 4 (except for the first two lines of each database, i.e., the “bad” images).
The plots have segments along the horizontal axes, too: the upper subplot has the results grouped by the peak percentage P_p (i.e., the threshold for the local maxima of the voting map (a, b, r) to be considered as circles), and each segment lists the σ values from the classical case (i.e., σ = 0) to the widest fuzziness parameter σ = 15. The lower subplot orders the values the opposite way: the segments belong to the various fuzziness parameters σ, while the columns within the segments belong to the peak percentages P_p = 0.5, 0.6, 0.7, 0.8, 0.9. Each colored square encodes the ratio of the circles found in the ring mask to the total number of circles for a given image and a given (P_p, σ) pair. The darker red the little square is, the fewer circles outside of the polyp contour domain are found. If the little square is bright green, then no circle was found in the polyp contour’s ring mask.
From Figure 13 we can conclude that for database CVC-Clinic the fuzzy Hough transforms can mostly find circles belonging to polyp contours (most of the little squares in the maps are red).
The classical transform’s results are either very good (dark red) or very bad (green).
Mostly, increasing the threshold improves the results: the circles concentrate more in the area of the polyp contour. However, the 90% threshold might be too high, as it loses the circles in the ring mask in some cases. Thus, the most suitable value for P_p is 70–80% of the global maximum of the votes.
The fuzziness has a kind of optimal value around σ = 5 (there the images have the largest number of dark red points and the darkest red points), but this optimum is not very sharp: the neighboring σ values have very similar results in the domain of σ = 3, 5, 7. Even for those images where no circle was found in the ring mask in the classical case, the results can be improved by the fuzzy version of the transform.
Regarding the gradient-weighted voting, and the gradient-based thresholding, the results improve compared to the original voting scheme, but the difference is not extremely large.
There is one sample image where the circles found are not around the polyp contour for the classical transform: image No. 475. However, it can be seen that this particular polyp has barely visible circular contour segments. Even for this case, the results improve with the gradient-weighted voting processes (except for the classical Hough transform in the thin threshold case). Here, we have to note that in order to keep the transform calculable and also to remove the very large and very small circles from the results, we limited our search in the r domain to between 1% and 25% of the longer image dimension, so the longer edges of very elongated polyps, like the one in image No. 475, could not be detected.
From Figure 15, for database CVC-Colon, it can be seen that more circles are found outside of the polyp contour ring mask, i.e., the little squares in the plots tend towards the yellow and greenish part of the color palette.
The ideal sigma seems to be a little higher, around σ = 7 or σ = 9, but there is much more variation than in the case of the previous database. For some images the higher σ domain gives better results, for others the lower σ domain does.
The thin threshold loses the circles in the ring mask for more images, so it is not a good option for this database.
For the peak percentage P p this database behaves similarly to the previous one.
Again, there is an image that does not show circles in the ring mask for most of the three cases: image No. 230, which has a brightly lit foreground, with the polyp in the background.
From Figure 17 it can be seen that for database ETIS-Larib, the ratio of circles outside the ring mask is further increased, i.e., the A r became even smaller and the plots became even less red.
Here, the higher σ values perform slightly better, but this tendency is even less clear than in the case of CVC-Colon.
The suitable P_p value here is also similar to the previous cases, 70–80%.
The thin threshold worsens the results: even though there are pictures where the number of circles outside the ring mask decreases, there are more images where the circles within the ring mask at the polyp contour disappear.
For image No. 82, where the polyp contour is very sharp, the polyp’s circle is lost due to the thin gradient threshold.

4.2. Roundness Metrics Evaluation

In order to test how tolerant the classical and fuzzy Hough transforms are to deviation from a circle, we used a test image of five ellipses in an image of size 200 × 200 pixels. The larger axis of the ellipse was the same for each of the objects in one image, and the ratios of the two axes were 1:1, 5:6, 4:6, 3:6, and 2:6. Three images were used, with larger axes of 30, 60, and 90 pixels, respectively. The properties of the ellipses are given in Table 5. In Figure 19, the numbers of circles that were found by the Hough transforms are summarized for each of the ellipses separately. As the centers of the ellipses were rather close to each other, especially in the largest axis case, for lower thresholds one or more unphysical “ghost” circles appeared, typically in the regions between 2 or 3 almost overlapping ellipses. The numbers of these ghost circles are also given under the numbers belonging to the real, physical ellipses.
It can be seen from Figure 19 that for the artificial image with objects of the smallest size, the classical Hough transform could detect only the precise circle; even a roundness error of δ_roundness = 0.135 was not tolerated. As the fuzziness parameter increased, the peaks belonging to more and more elongated ellipses grew above half of the global maximum of the votes, i.e., above P_p = 50%. Already at σ = 5, all the ellipses were found in the P_p = 50% case. The corresponding radii were smaller than the larger axis of the ellipse.
As the size of the ellipses increased, the limit for finding all the ellipses went higher. Additionally, due to the closeness of the ellipses compared to the full width at half maximum of the voting matrix, overlaps arose between the individual ellipses and thus, false peaks appeared in the accumulator space, either in the area at the center of the image, which is circumvented by the ellipses, or at the two most elongated ellipses, that sometimes appeared as two twin peaks at the vote maps.
It can be seen that for the larger ellipses, even the classical Hough transform has a small roundness error tolerance: δ_roundness = 0.1 is still found, although as two circles.
If the roundness of the tested colonoscopy images is examined in the last two columns of Table 4, we can see that strongly elongated polyps with a roundness error of δ_roundness > 0.9 also exist, but the typical roundness error is between 0.15 and 0.5. This is the reason we selected this domain for the ellipses studied in our artificial examples.
Similar to the case of the artificially generated test images, polyps with strongly elongated shapes (δ_roundness > 0.4) were very often not found if their size was large, or were found only at a very low threshold P_p, as in the cases of the 1st, 3rd and 4th ETIS images, or the 4th CVC-Clinic image.
Additionally, the smaller polyps with radii around 60 pixels are usually detected, regardless of their roundness.

4.3. Time Evaluation

To demonstrate the effect on the running time, the ratio of the total running times of the gradient-weighted method with the wide and thin thresholds to that of the original Hough transform is given in Figure 20. It can be seen that for most of the images, the wide threshold decreases the running time of the Hough transform algorithms by about 50%, and the thin threshold decreases the runtime by roughly another 50%, but in some cases it can go down to lower than 10% of the original transform’s running time. For database CVC-Colon, the results are a bit better than for the other databases. From the 2nd subplot of Figure 20, we can conclude that these results are almost size independent, though for a smaller number of edge pixels, the results are usually a little better.
It is important to note that the number of pixels to be transformed has an almost linear effect on the transformation time, so using Prewitt edge detection is a really important step, as can be seen from Figure 6. As the continuity of the edges is not a key point in the Hough transform, giving it up lets us discard many weaker edges in the background. This step thus has a double advantage.

5. Conclusions

A novel voting method was suggested for circular Hough transform analysis of colonoscopy images. As the colorectal polyp contours have neither too large nor very small gradients, a gradient-based thresholding was also introduced.
Based on statistical data, thresholds of [0.06, 0.3] and [0.08, 0.2] of the normalized gradient domain were suggested to improve the ratio of the circles in the surroundings of the polyp among all the circles found by the Hough transform. The first, wider threshold proved to give more reliable results. Using this method, the running time of the algorithms decreased to less than 50% compared to the full Hough transforms. For some of the images, the thin threshold removes so many valuable edge points from the polyp contour that the circles there cannot be found by the Hough transforms, so even though the threshold [0.08, 0.2] further improves the running time, it is safer to keep the wider threshold [0.06, 0.3].
The main strength of the gradient-weighted voting mechanism is that it improves the ratio of the circles near the polyp among all the circles found by the Hough transforms, especially for the fuzzy Hough transform. The ideal threshold for the local maxima of the voting map (a, b, r) to be considered as circles is 70–80% of the global maximum for colonoscopy images.
The fuzzy Hough transforms perform better in finding circles in the polyp contour ring mask than the classical transform, but the best performing fuzziness parameter varies with the image size. For the smaller images (CVC-Clinic has an image size of 384 × 288), the small σ = 3, 5, or 7 voting matrices are the most suitable; for the medium image size (CVC-Colon, 574 × 500), σ = 7 or 9 is better; while for the largest images (ETIS-Larib, 1225 × 966), larger σ values fit the purpose of finding polyp contours in colonoscopy images best. However, this sensitivity to the value of σ becomes less and less pronounced for the larger images.
The detection success also depends on the size of the polyp and its roundness. To measure the tolerance for shape deviation of the classical and fuzzy Hough transform, the roundness error was introduced together with the average radius of the polyp. For objects larger in size, smaller tolerance is allowed in roundness, while for smaller objects, the tolerance is larger, which is rather advantageous, as in colonoscopy, mostly the smaller lesions are the ones that might be missed by humans.
The main strength of applying the fuzzy Hough transform and gradient-weighted voting together is that the fuzzy Hough transform has a higher tolerance for shape deviations. Although fuzzy voting needs more calculations than the classical one, introducing the gradient threshold and the gradient-weighted voting, and thus decimating the edge points to be transformed, helps to keep the calculation time lower. The weaknesses of our method include that it is hard to determine a universal gradient threshold that is valid for all databases and image sizes. Additionally, the non-conventional camera angles, where the polyp is in front of a dark background, need to be treated separately, though these polyps are usually hard for a human expert to miss. In addition, many polyps with extremely large, irregular shapes are not found by Hough transforms. As computer-aided diagnosis is meant to aid the human expert and not to substitute them, finding the smaller polyps that are easy to miss with the naked eye is the most important task in colorectal polyp localization.
It is also interesting that the higher-resolution images with more detail tend to produce more circles outside of the polyp contour domain due to the more visible background patterns, so it seems plausible to perform mean or Gaussian filtering as a preprocessing step of the Hough transform for these larger images. However, investigating this idea is beyond the scope of this paper and needs further studies. Additionally, applying fuzziness to the 1D Hough transform or introducing parallelization can further improve the efficiency of the method in terms of computational demand. Other online available databases [50] should also be used to draw more universal conclusions in the future.

Author Contributions

Conceptualization, S.N. and R.I.; investigation, R.I. and S.N.; writing—original draft preparation, R.I.; Numerical modeling, R.I. and S.N.; writing—review and editing, S.N. and R.I.; supervision, S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

All test samples with their masks are given in this appendix.
Figure A1. The test samples for database CVC-Clinic and their ground truth masks.
Figure A2. The test samples for database CVC-Colon and their ground truth masks.
Figure A3. The test samples for database ETIS-Larib and their ground truth masks.
Table A1. Abbreviations and nomenclature.
R_calc (14): A ratio to represent the quality of the edge detection method regarding the calculation efficiency of the Hough transforms and polyp detection
R_edge (15): A ratio to measure the goodness of the edge pixels in finding the ideal polyp contour
σ: σ = 3, 5, 7, 9, 11, 13, and 15 are the chosen values for the width of the voting membership function of the fuzzy Hough transforms
P_p: P_p = 50%, 60%, 70%, 80%, and 90% are the peak percentage values of the selected local maximum thresholds relative to the global maximum of the votes
N_total: The total number of the final resulting circles
N_ring: The number of final circles within the ring mask
A_r (16): A metric to measure the effectiveness of finding polyp-related final circles
x_min, x_max: The minimum and maximum coordinates of the ground truth mask points in the x direction
y_min, y_max: The minimum and maximum coordinates of the ground truth mask points in the y direction
r_avg (17): The average radius of the polyp mask
c_x, c_y (18): The x and y coordinates of the center of the polyp mask
Δ_radial (19): The radial displacement of mask contour point (m_x, m_y)
δ_roundness (20): The maximum of the roundness error
Original: The full Hough transforms for all the edge points
Wide: The restricted Hough transforms for the edge points with normalized gradient values within a wide threshold interval [0.06, 0.3]
Thin: The restricted Hough transforms for the edge points with normalized gradient values within a thin threshold interval [0.08, 0.2]

References

1. Alam, M.J.; Fattah, S.A. SR-AttNet: An Interpretable Stretch-Relax Attention based Deep Neural Network for Polyp Segmentation in Colonoscopy Images. Comput. Biol. Med. 2023, 160, 106945.
2. Krenzer, A.; Banck, M.; Makowski, K.; Hekalo, A.; Fitting, D.; Troya, J.; Sudarevic, B.; Zoller, W.G.; Hann, A.; Puppe, F. A Real-Time Polyp Detection System with Clinical Application in Colonoscopy Using Deep Convolutional Neural Networks. J. Imaging 2023, 9, 26.
3. Yue, G.; Han, W.; Li, S.; Zhou, T.; Lv, J.; Wang, T. Automated polyp segmentation in colonoscopy images via deep network with lesion-aware feature selection and refinement. Biomed. Signal Process. Control 2022, 78, 103846.
4. Ahmad, O.F.; Brandao, P.; Sami, S.S.; Mazomenos, E.; Rau, A.; Haidry, R.; Vega, R.; Seward, E.; Vercauteren, T.K.; Stoyanov, D.; et al. Artificial intelligence for real-time polyp localization in colonoscopy withdrawal videos. Gastrointest. Endosc. 2019, 89, AB647.
5. Sornapudi, S.; Meng, F.; Yi, S. Region-based automated localization of colonoscopy and wireless capsule endoscopy polyps. Appl. Sci. 2019, 9, 2404.
6. Wittenberg, T.; Zobel, P.; Rathke, M.; Mühldorfer, S. Computer aided detection of polyps in whitelight-colonoscopy images using deep neural networks. Curr. Dir. Biomed. Eng. 2019, 5, 231–234.
7. Aliyi, S.; Dese, K.; Raj, H. Detection of gastrointestinal tract disorders using deep learning methods from colonoscopy images and videos. Sci. Afr. 2023, 20, e01628.
8. Karaman, A.; Pacal, I.; Basturk, A.; Akay, B.; Nalbantoglu, U.; Coskun, S.; Sahin, O.; Karaboga, D. Robust real-time polyp detection system design based on YOLO algorithms by optimizing activation functions and hyper-parameters with Artificial Bee Colony (ABC). Expert Syst. Appl. 2023, 221, 119741.
9. Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, G.; Gil, D.; Rodríguez, C.; Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph. 2015, 43, 99–111.
10. Iwahori, Y.; Hattori, A.; Adachi, Y.; Bhuyan, M.; Woodham, R.J.; Kasugai, K. Automatic Detection of Polyp Using Hessian Filter and HOG Features. Procedia Comput. Sci. 2015, 60, 730–739.
11. Rácz, I.; Horváth, A.; Szalai, M.; Spindler, S.; Kiss, G.; Regöczi, H.; Horváth, Z. Digital Image Processing Software for Predicting the Histology of Small Colorectal Polyps by Using Narrow-Band Imaging Magnifying Colonoscopy. Gastrointest. Endosc. 2015, 81, AB259.
12. Georgieva, V.; Nagy, S.; Kamenova, E.; Horváth, A. An Approach for Pit Pattern Recognition in Colonoscopy Images. Egypt. Comput. Sci. J. 2015, 39, 72–82.
13. Hough, P.V.C. Machine Analysis of Bubble Chamber Pictures. In Proceedings of the 2nd International Conference on High-Energy Accelerators and Instrumentation, HEACC 1959, CERN, Geneva, Switzerland, 14–19 September 1959.
14. Ballard, D.H. Generalizing the Hough Transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122.
15. Nahum, K.; Eldar, Y.; Bruckstein, A.M. A probabilistic Hough Transform. Pattern Recognit. 1991, 24, 303–316.
16. Lei, X.; Erkki, O. Randomized Hough Transform (RHT): Basic Mechanisms, Algorithms, and Computational Complexities. CVGIP Image Underst. 1993, 57, 131–154.
17. Cucchiara, R.; Filicori, F. The Vector-Gradient Hough Transform. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 746–750.
18. Han, J.H.; Kóczy, L.T.; Poston, T. Fuzzy Hough Transform. Pattern Recognit. Lett. 1994, 15, 649–658.
19. Zhao, K.; Han, Q.; Zhang, C.; Xu, J.; Cheng, M. Deep Hough Transform for Semantic Line Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4793–4806.
20. Lin, G.; Tang, Y.; Zou, X.; Cheng, J.; Xiong, J. Fruit detection in natural environment using partial shape matching and probabilistic Hough transform. Precis. Agric. 2020, 21, 160–177.
21. Liu, W.; Zhang, Z.; Li, S.; Tao, D. Road Detection by Using a Generalized Hough Transform. Remote Sens. 2017, 9, 590.
22. Mathavan, S.; Vaheesan, K.; Kumar, A.; Chandrakumar, C.; Kamal, K.; Rahman, M.; Stonecliffe-Jones, M. Detection of pavement cracks using tiled fuzzy Hough Transform. J. Electron. Imaging 2017, 26, 053008.
23. Pugin, E.; Zhiznyakov, A.; Zakharov, A. Pipes Localization Method Based on Fuzzy Hough Transform. In Advances in Intelligent Systems and Computing, Proceedings of the Second International Scientific Conference "Intelligent Information Technologies for Industry" (IITI'17); Abraham, A., Kovalev, S., Tarassov, V., Snasel, V., Vasileva, M., Sukhanov, A., Eds.; Springer: Cham, Switzerland, 2018; Volume 679, pp. 536–544.
24. Nagy, S.; Solecki, L.; Sziová, B.; Sarkadi-Nagy, B.; Kóczy, L.T. Applying Fuzzy Hough Transform for Identifying Honed Microgeometrical Surfaces. In Computational Intelligence and Mathematics for Tackling Complex Problems; Kóczy, L., Medina-Moreno, J., Ramírez-Poussa, E., Šostak, A., Eds.; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2020; Volume 819, pp. 35–42.
25. Nagy, S.; Kovács, M.; Sziová, B.; Kóczy, L.T. Fuzzy Hough Transformation in aiding computer tomography-based liver diagnosis. In Proceedings of the 2019 IEEE AFRICON, Accra, Ghana, 15–17 September 2019.
26. Djekoune, A.; Messaoudi, K.; Amara, K. Incremental circle hough transform: An improved method for circle detection. Optik 2017, 133, 17–31.
27. Hapsari, R.K.; Utoyo, M.I.; Rulaningtyas, R.; Suprajitno, H. Iris segmentation using Hough Transform method and Fuzzy C-Means method. J. Phys. Conf. Ser. 2020, 1477, 022037.
28. Vijayarajeswari, R.; Parthasarathy, P.; Vivekanandan, S.; Basha, A.A. Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform. Measurement 2019, 146, 800–805.
29. Shaaf, Z.F.; Jamil, M.M.A.; Ambar, R. Automatic Localization of the Left Ventricle from Short-Axis MR Images Using Circular Hough Transform. In Lecture Notes in Networks and Systems, Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering; Kaiser, M.S., Ray, K., Bandyopadhyay, A., Jacob, K., Long, K.S., Eds.; Springer: Singapore, 2022; Volume 348, p. 348.
30. Chen, J.; Qiang, H.; Wu, J.; Xu, G.; Wang, Z. Navigation path extraction for greenhouse cucumber-picking robots using the prediction-point Hough transform. Comput. Electron. Agric. 2021, 180, 105911.
31. Chuquimia, O.; Pinna, A.; Dray, X.; Granado, B. A Low Power and Real-Time Architecture for Hough Transform Processing Integration in a Full HD-Wireless Capsule Endoscopy. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 646–657.
32. Montseny, E.; Sobrevilla, P.; Marès Martí, P. Edge orientation-based fuzzy Hough transform (EOFHT). In Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology, Zittau, Germany, 10–12 September 2003.
33. Barbosa, W.O.; Vieira, A.W. On the Improvement of Multiple Circles Detection from Images using Hough Transform. TEMA 2019, 20, 331–342.
34. Silva, J.; Histace, A.; Romain, O.; Dray, X.; Granado, B. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 283–293.
35. Ruano, J.; Barrera, C.; Bravo, D.; Gomez, M.; Romero, E. Localization of Small Neoplastic Lesions in Colonoscopy by Estimating Edge, Texture and Motion Saliency. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany, 23–27 July 2019; pp. 5945–5948.
36. Ruiz, L.; Guayacán, L.; Martínez, F. Automatic polyp detection from a regional appearance model and a robust dense Hough coding. In Proceedings of the 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, 24–26 April 2019; pp. 1–5.
37. Yao, H.; Stidham, R.W.; Soroushmehr, R.; Gryak, J.; Najarian, K. Automated Detection of Non-Informative Frames for Colonoscopy Through a Combination of Deep Learning and Feature Extraction. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany, 23–27 July 2019; pp. 2402–2406.
38. Ismail, R.; Nagy, S. On Metrics Used in Colonoscopy Image Processing for Detection of Colorectal Polyps. In New Approaches for Multidimensional Signal Processing; Kountchev, R., Mironov, R., Li, S., Eds.; Smart Innovation, Systems and Technologies; Springer: Singapore, 2021; Volume 216, pp. 137–151.
39. Ismail, R.; Nagy, S. Ways of improving of active contour methods in colonoscopy image segmentation. Image Anal. Stereol. 2022, 41, 7–23.
40. Nagy, S.; Ismail, R.; Sziová, B.; Kóczy, L.T. On classical and fuzzy Hough transform in colonoscopy image processing. In Proceedings of the IEEE AFRICON 2021, Virtual Conference, Arusha, Tanzania, 13–15 September 2021; pp. 124–129.
41. Ismail, R.; Prukner, P.; Nagy, S. On Applying Gradient Based Thresholding on the Canny Edge Detection Results to Improve the Effectiveness of Fuzzy Hough Transform for Colonoscopy Polyp Detection Purposes. In New Approaches for Multidimensional Signal Processing; Kountchev, R., Mironov, R., Nakamatsu, K., Eds.; NAMSP 2022; Smart Innovation, Systems and Technologies; Springer: Singapore, 2023; Volume 332, pp. 110–121.
42. Bernal, J.; Sanchez, F.J.; Vilariño, F. Towards Automatic Polyp Detection with a Polyp Appearance Model. Pattern Recognit. 2012, 45, 3166–3182.
43. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353.
44. Roberts, L. Machine Perception of 3-D Solids. Ph.D. Thesis, Massachusetts Institute of Technology, Department of Electrical Engineering, Cambridge, MA, USA, 1965.
45. Prewitt, J.M.S. Object enhancement and extraction. In Picture Processing and Psychopictorics, 1st ed.; Lipkin, B., Rosenfeld, A., Eds.; Academic Press: New York, NY, USA, 1970; pp. 75–149.
46. Sobel, I. Neighborhood coding of binary images for fast contour following and general binary array processing. Comput. Graph. Image Process. 1978, 8, 127–135.
47. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698.
48. Kalbasi, M.; Nikmehr, H. Noise-Robust, Reconfigurable Canny Edge Detection and its Hardware Realization. IEEE Access 2020, 8, 39934–39945.
49. Csimadia, G.; Nagy, S. The Effect of the Contrast Enhancement Processes on the Structural Entropy of Colonoscopic Images. In Proceedings of the ICEST 2014, Nis, Serbia, 20–27 June 2014.
50. Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.-T.; Lux, M.; Schmidt, P.T.; et al. KVASIR: A Multi-Class Image Dataset for Computer Aided Gastrointestinal Disease Detection. In Proceedings of the 8th ACM on Multimedia Systems Conference 2017, Taipei, Taiwan, 20–23 June 2017; pp. 164–169.
Figure 1. A sample image (220) from the database CVC-Colon [42], used for demonstrating the Hough transform steps, is shown in subplot (a); its ground truth mask in subplot (b); and the Prewitt edge-detected version of the picture in subplot (c).
Figure 2. The Hough-transformed images of the edge-detected picture in Figure 1c. The center coordinates a and b are shown in one plane for 4 different values of the radius r. The number of votes is indicated by colors, as shown in the colorbar beside each image. The rows correspond to the different radii: the 1st row is r = 42, the 2nd row is r = 57 (the radius of the final circle around the polyp), the 3rd row is r = 72, and the 4th, last row is r = 115 (the radius of the other final circle). The columns are the following: 1st column: the classical Hough transform result; 2nd column: σ = 5 fuzzy Hough transform; 3rd column: σ = 15 fuzzy Hough transform.
Figure 3. The resulting final circles from the inverse Hough transform of the picture in Figure 1. Similar to Figure 2, the columns indicate the fuzziness of the transform, i.e., 1st column: classical Hough transform; 2nd and 3rd columns: fuzzy Hough transform for σ = 5 and 15, respectively. The rows in this case give the local maximum threshold compared to the global maximum: 1st row: 50%, 2nd row: 70%, and 3rd row: 90% of the global maximum.
Figure 4. 1st row: (a) The original image of the sample (111) from database CVC-Clinic [9], (b) the preprocessed image version, and (c) its ground truth polyp mask. 2nd row: (d) The mask contour, (e) the extended contour ring mask, i.e., the 3 nearest neighbors in all directions for all the contour pixels, and (f) the gradient magnitude filtered image.
Figure 5. 1st row: (a) the Canny edge-detected version of the image in Figure 4, and (b) the Canny edge-masked gradient magnitude image. The 2nd, 3rd, and 4th rows are exactly like the 1st row, but for the Prewitt (c,d), Roberts (e,f), and Sobel (g,h) edge detection methods, consecutively.
Figure 6. The total number of edge pixels resulting from Canny, Prewitt, Roberts, and Sobel techniques for the three databases.
Figure 7. Metric R_calc values resulting from the Canny, Prewitt, Roberts, and Sobel techniques for the three databases.
Figure 8. Metric R_edge values resulting from the Canny, Prewitt, Roberts, and Sobel techniques for the three databases.
Figure 9. Flowchart of the edge detection method selection procedure.
Figure 10. Histograms of the normalized gradient-weighted edge pixels resulting from the Prewitt method for four different samples (examples from the databases CVC-Clinic (a,c), CVC-Colon (d), and ETIS-Larib (b), selected so that typical behaviors are visible). The interval [0, 0.2] has narrower bins, as most of the values in most of the samples were concentrated there, and higher-resolution results were needed in this domain. (The yellow histograms for the ring masks are semitransparent, plotted in front of the teal columns for the full images.)
Figure 11. The total histograms (linear scale) of the normalized gradient-weighted edge pixels resulting from the Prewitt edge detection method. The 1st column is for the edge points in the full images, and the 2nd column is for the edge points in the corresponding ring masks, for the databases CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34]. The horizontal axes are the normalized gradient magnitude intervals and the picture number in the given database.
Figure 12. The top view of the logarithmic-scale plots of the total histograms of the normalized gradient-weighted edge pixels resulting from the Prewitt edge detection method. The 1st column is for the edge points in the full images, and the 2nd column is for the edge points in the corresponding ring masks, for the databases CVC-Clinic [9], CVC-Colon [42], and ETIS-Larib [34]. The horizontal axis is the picture number in the given database and the vertical axis is the normalized gradient magnitude intervals.
Figure 13. The ratio A_r for the database CVC-Clinic [9]. The vertical axis represents the various sample pictures (images 111, 150, 188, 217, 265, 390, 475, 480, 503, and 504) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. Each section has all the sample images in the above-given order, and a line of white points indicates the borders between the sections. The horizontal axes have the local maximum threshold percentages P_p, as well as the various fuzziness parameters σ of the Hough transform. The 1st subplot shows the different values of σ for each local maximum threshold level in increasing order; here also a column of white dots separates the sections belonging to the given threshold rates. The 2nd subplot gives the same results as the 1st one, only the horizontal axis is grouped the opposite way: the main groups belong to the various σ values, and within each segment the local maximum threshold values increase.
Figure 14. The total number of circles N_total found in the sample images of database CVC-Clinic [9]. The vertical axis represents the various sample pictures (images 111, 150, 188, 217, 265, 390, 475, 480, 503, and 504) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. The horizontal axis is grouped in the following way: the main groups belong to the various σ values, and within each segment the local maximum threshold values increase.
Figure 15. The ratio A_r for the database CVC-Colon [42]. The vertical axis represents the various sample pictures (images 62, 74, 101, 128, 149, 220, 230, and 283) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. The horizontal axes have the local maximum threshold percentages P_p, as well as the various fuzziness parameters σ of the Hough transform, similar to Figure 13.
Figure 16. The total number of circles N_total found in the sample images of database CVC-Colon [42]. The vertical axis represents the various sample pictures (images 62, 74, 101, 128, 149, 220, 230, and 283) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. The horizontal axis is grouped similarly to Figure 14.
Figure 17. The ratio A_r for the database ETIS-Larib [34]. The vertical axis represents the various sample pictures (images 25, 65, 82, 138, and 160) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. The horizontal axes have the local maximum threshold percentages P_p, as well as the various fuzziness parameters σ of the Hough transform, similar to Figure 13 and Figure 15.
Figure 18. The total number of circles N_total found in the sample images of database ETIS-Larib [34]. The vertical axis represents the various sample pictures (images 25, 65, 82, 138, and 160) for both the non-gradient-weighted (original) and the gradient-weighted voting process with wide and thin thresholds, respectively. The horizontal axis is grouped similarly to Figure 14 and Figure 16.
Figure 19. The number of circles found in the artificial images containing ellipses with large axes of size 30, 60, and 90 pixels. On the horizontal axis, the fuzziness parameter σ is the main ordering parameter, and within the cluster of each σ, the peak percentages P_p increase from 50% to 90%. In the vertical direction, the three different greater axis sizes are denoted. Within each group, the topmost row belongs to the circle (axis ratio 1:1), the next row to the axis ratio 5:6, then 4:6, 3:6, and 2:6. The last row, separated by a horizontal line, represents the non-physical ghost circles.
Figure 20. The ratio of the classical and fuzzy Hough transformations’ total running time with the wide and thin gradient thresholds to the original, full Hough transform times. The two subplots have different horizontal axes: the first simply lists the images in their order in Table 4, while the horizontal axis of the second is the number of pixels to be transformed in the original image (i.e., the 5th column of Table 4).
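The speed-up reported in Figure 20 comes from transforming only the edge pixels whose gradient magnitude falls inside a band defined relative to the image's strongest gradient. A minimal numpy sketch of such relative thresholding (the band fractions below are illustrative placeholders, not the paper's exact wide/thin limits):

```python
import numpy as np

def relative_gradient_band(grad_mag, low=0.25, high=0.75):
    """Keep only the pixels whose gradient magnitude lies between the
    given fractions of the image's maximum gradient; only these pixels
    are then fed into the (fuzzy) circular Hough transform."""
    g_max = grad_mag.max()
    return (grad_mag >= low * g_max) & (grad_mag <= high * g_max)

# Toy gradient image with magnitudes 0, 1, ..., 100:
grad = np.arange(101, dtype=float).reshape(1, -1)
mask = relative_gradient_band(grad)
# 51 of the 101 pixels survive, so the voting stage processes roughly
# half as many points as the full transform would.
```

Because both thresholds scale with the image's own maximum gradient, the same fractions adapt automatically to images of different overall contrast.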
Table 1. The practical strengths and weaknesses of the Hough transform.

| Strengths | Weaknesses |
|---|---|
| Not limited to specific shapes: any parametric curve can be used as basis | With more parameters the dimension of the accumulator space increases; the calculation demand grows exponentially with the number of parameters |
| Easy to parallelize (the transformation itself is independently executed for each edge point) | Can be computationally expensive for high-resolution images, for complex shapes, and for dense line images |
| Can identify partial and obstructed curves | Shorter curve segments with fewer points result in weaker peaks in the accumulator space; sensitive to noise, which may lead to additional peaks in the accumulator space |
| Can detect multiple objects | Overlapping objects may lead to unphysical, ghost peaks; partially covered or smaller objects can be suppressed by the larger objects with more points |
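The accumulator-growth and parallelism points in the table can be made concrete with a bare-bones classical circular Hough transform. This is a schematic sketch, not the authors' implementation; the angular sampling count n_theta and the toy radii are illustrative choices:

```python
import numpy as np

def circular_hough(edge_points, shape, radii, n_theta=64):
    """Classical circular Hough transform over a 3D accumulator
    (centre_y, centre_x, radius).  Every edge point votes independently,
    which is what makes the method easy to parallelise, while each extra
    shape parameter adds another accumulator dimension and multiplies
    the work."""
    h, w = shape
    acc = np.zeros((h, w, len(radii)))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    sin_t, cos_t = np.sin(thetas), np.cos(thetas)
    for y, x in edge_points:                    # independent votes
        for k, r in enumerate(radii):
            cy = np.rint(y - r * sin_t).astype(int)
            cx = np.rint(x - r * cos_t).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            # fancy-index += counts each candidate centre cell at most
            # once per edge point, i.e., one vote per point and cell
            acc[cy[ok], cx[ok], k] += 1
    return acc

# Hypothetical toy input: pixels sampled from a radius-5 circle
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = sorted(set(zip(np.rint(10 + 5 * np.sin(t)).astype(int).tolist(),
                     np.rint(10 + 5 * np.cos(t)).astype(int).tolist())))
acc = circular_hough(pts, (21, 21), radii=[4, 5, 6])
# The accumulator peaks in the r = 5 plane near the true centre (10, 10).
```

An occluded arc contributes proportionally fewer votes, so its peak is weaker but still present, which is exactly the partial-curve behaviour listed in the table.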
Table 2. Results of selecting the most appropriate edge detection technique using the metric R_calc.

| Strategy | Database | Canny | Prewitt | Roberts | Sobel |
|---|---|---|---|---|---|
| 1st strategy: mean R_calc | CVC-Clinic | 0.063001 | 0.153718 | 0.147087 | 0.152823 |
| | CVC-Colon | 0.04806 | 0.099965 | 0.081437 | 0.09925 |
| | ETIS-Larib | 0.026186 | 0.06001 | 0.049776 | 0.059926 |
| 2nd strategy: no. of samples with R_calc > 0.1 | CVC-Clinic | 70 | 394 | 360 | 394 |
| | CVC-Colon | 29 | 138 | 108 | 141 |
| | ETIS-Larib | 0 | 35 | 22 | 33 |
Table 3. Results of selecting the most appropriate edge detection technique using the metric R_edge.

| Strategy | Database | Canny | Prewitt | Roberts | Sobel |
|---|---|---|---|---|---|
| 1st strategy: MAE of R_edge | CVC-Clinic | 0.459919 | 0.447193 | 0.579451 | 0.44937 |
| | CVC-Colon | 0.779827 | 0.582246 | 0.683746 | 0.588328 |
| | ETIS-Larib | 2.609909 | 0.44718 | 0.536933 | 0.451125 |
| 2nd strategy: no. of samples with 0.5 < R_edge < 1.5 | CVC-Clinic | 346 | 353 | 221 | 349 |
| | CVC-Colon | 132 | 156 | 90 | 152 |
| | ETIS-Larib | 4 | 122 | 88 | 121 |
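Tables 2 and 3 compare four classical edge detectors, with Prewitt selected as the preprocessing step. A small self-contained sketch of the Prewitt gradient magnitude using the standard kernel pair (the binarization threshold that turns the magnitude into the final edge map is not reproduced here):

```python
import numpy as np

# Standard Prewitt kernel for horizontal gradients; its transpose
# detects vertical gradients.
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)

def prewitt_magnitude(img):
    """Prewitt gradient magnitude with a hand-rolled 'valid'
    cross-correlation (no padding, so the output shrinks by 2 pixels
    in each direction)."""
    ky = PREWITT_X.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * PREWITT_X)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Toy image with a vertical step edge between columns 2 and 3:
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = prewitt_magnitude(img)
```

Unlike Sobel, whose smoothing rows weight the centre pixel double, Prewitt averages uniformly across the three rows; for the magnitude the correlation/convolution sign difference is irrelevant.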
Table 4. The selected samples and their metrics R_calc and R_edge together with the total number of Prewitt edge pixels. The first two rows of each database block are the “bad” samples with low values of R_calc (marked gray in the original layout); the others have R_calc larger than 0.1. The last two columns contain the metrics regarding the size and roundness of the polyp.

| Database | Image | R_calc | R_edge | No. of Edge Pixels | r_avg | δ_roundness |
|---|---|---|---|---|---|---|
| CVC-Clinic | 29 | 0.004918033 | 0.013274336 | 610 | 38.75 | 0.358 |
| | 201 | 0.008460237 | 0.038314176 | 1182 | 68 | 0.957 |
| | 111 | 0.247629671 | 0.733884298 | 1793 | 92.75 | 0.380 |
| | 150 | 0.251902588 | 1.054140127 | 1314 | 52 | 0.261 |
| | 188 | 0.364678899 | 0.424 | 436 | 66.5 | 0.173 |
| | 217 | 0.230191827 | 0.74393531 | 1199 | 64.5 | 0.409 |
| | 265 | 0.260652765 | 0.871212121 | 2206 | 110 | 0.775 |
| | 390 | 0.297808765 | 0.641630901 | 1004 | 75.75 | 0.366 |
| | 475 | 0.367805755 | 0.653354633 | 1112 | 106.5 | 0.757 |
| | 480 | 0.222554145 | 0.685057471 | 1339 | 72.75 | 0.315 |
| | 503 | 0.310214376 | 0.976190476 | 1586 | 84.75 | 0.151 |
| | 504 | 0.180173092 | 0.68358209 | 1271 | 56.75 | 0.201 |
| CVC-Colon | 51 | 0.004398827 | 0.035087719 | 1364 | 27 | 0.278 |
| | 255 | 0.002773925 | 0.006779661 | 721 | 52.25 | 0.299 |
| | 62 | 0.242270224 | 0.801120448 | 2361 | 117 | 0.337 |
| | 74 | 0.218066743 | 1.457692308 | 1738 | 46.25 | 0.694 |
| | 101 | 0.180649379 | 1.978520286 | 4589 | 71.5 | 0.711 |
| | 128 | 0.182273052 | 0.44973545 | 1399 | 84 | 0.483 |
| | 149 | 0.215956424 | 1.099510604 | 3121 | 98 | 0.334 |
| | 220 | 0.149488927 | 0.966942149 | 2348 | 59.5 | 0.202 |
| | 230 | 0.318594104 | 0.641552511 | 882 | 74.25 | 0.442 |
| | 283 | 0.194486983 | 0.404458599 | 653 | 52.25 | 0.132 |
| ETIS-Larib | 24 | 0.007230077 | 0.204545455 | 6224 | 39.75 | 0.135 |
| | 151 | 0.0010755 | 0.012437811 | 4649 | 70.75 | 0.507 |
| | 25 | 0.104344123 | 0.879186603 | 7044 | 153.25 | 0.549 |
| | 65 | 0.117593198 | 1.24 | 7645 | 125.25 | 0.249 |
| | 82 | 0.191721133 | 1.269230769 | 4131 | 113.25 | 0.321 |
| | 138 | 0.102428256 | 0.649859944 | 2265 | 63.5 | 0.449 |
| | 160 | 0.129833607 | 1.297423888 | 8534 | 151 | 0.430 |
Table 5. Properties of the ellipses of the artificial test images of size 200 × 200 pixels. The order of the ellipses is the same as the order in the vertical axis of Figure 19, except for the ghost circles.

| Artificial Image | Axes (Pixels) | r_avg | Δ_radial | δ_roundness |
|---|---|---|---|---|
| Applsci 13 09066 i001 | 30, 30 | 15.5 | 0.6508 | 0.042 |
| | 30, 25 | 14 | 1.8902 | 0.135 |
| | 30, 20 | 13 | 2.8758 | 0.221 |
| | 30, 15 | 11.75 | 4.7322 | 0.403 |
| | 30, 10 | 10.5 | 5.0081 | 0.477 |
| Applsci 13 09066 i002 | 60, 60 | 30.5 | 0.7931 | 0.026 |
| | 60, 50 | 28 | 2.8902 | 0.103 |
| | 60, 40 | 25.5 | 5.2002 | 0.204 |
| | 60, 30 | 23 | 7.6023 | 0.331 |
| | 60, 20 | 20.5 | 10.0369 | 0.490 |
| Applsci 13 09066 i003 | 90, 90 | 45.5 | 1.6081 | 0.035 |
| | 90, 75 | 41.5 | 4.2220 | 0.102 |
| | 90, 60 | 38 | 8.4958 | 0.224 |
| | 90, 45 | 34.25 | 12.1989 | 0.356 |
| | 90, 30 | 30.5 | 15.0686 | 0.494 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ismail, R.; Nagy, S. A Novel Gradient-Weighted Voting Approach for Classical and Fuzzy Circular Hough Transforms and Their Application in Medical Image Analysis—Case Study: Colonoscopy. Appl. Sci. 2023, 13, 9066. https://doi.org/10.3390/app13169066