Article

Centered Multi-Task Generative Adversarial Network for Small Object Detection

School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 5194; https://doi.org/10.3390/s21155194
Submission received: 30 June 2021 / Revised: 23 July 2021 / Accepted: 29 July 2021 / Published: 31 July 2021
(This article belongs to the Special Issue Sensor Fusion for Object Detection, Classification and Tracking)

Abstract

Despite breakthroughs in the accuracy and efficiency of object detection with deep neural networks, the performance of small object detection remains far from satisfactory. Meanwhile, gaze estimation has advanced significantly with the development of visual sensors, and combining object detection with gaze estimation can markedly improve the performance of small object detection. This paper presents a centered multi-task generative adversarial network (CMTGAN), which combines small object detection and gaze estimation. To achieve this, we propose a generative adversarial network (GAN) capable of image super-resolution and two-stage small object detection. We exploit the generator in CMTGAN for image super-resolution and the discriminator for object detection. We introduce an artificial texture loss into the generator to retain the original features of small objects. We also use a centered mask in the generator so that the network focuses on the central part of images, where small objects are more likely to appear in our method. We propose a discriminator with detection loss for two-stage small object detection, which can be adapted to other GANs for object detection. Compared with existing interpolation methods, the super-resolution images generated by CMTGAN are sharper and contain more information. Experiments show that our method exhibits a better detection performance than mainstream methods.

1. Introduction

With the development of visual sensors and computer vision, gaze estimation technology can obtain gaze points with high accuracy [1]. However, the application of gaze estimation is still largely limited to visual attention analysis [2], assistive technologies for users with motor disabilities [3], behavior research [4], etc. Meanwhile, object detection algorithms such as YOLOv4 [5] and Faster RCNN [6] produce low-confidence predictions and noticeable localization errors for small objects. Combining object detection with gaze estimation can significantly improve small object detection performance.
Object detection algorithms have achieved impressive accuracy and efficiency in detecting large objects. However, their performance on small objects is far from satisfactory: there is still a large gap in recall and accuracy between small and large objects. To detect small objects better, SSD [7] uses feature maps from shallow layers. FPN [8] exploits a feature pyramid to combine feature maps at different scales. Bai et al. [9] introduced a generative adversarial network to implement image super-resolution for small object detection. SOD-MTGAN [10] takes ROIs as input and predicts the categories and locations of objects.
Shallow feature maps are rich in textural information but less discriminative, which leads to many false positive results in SSD. The up-sampling in FPN and [9] can generate artifacts that obscure the features of small objects. SOD-MTGAN takes ROIs from a baseline detector as input, which means that SOD-MTGAN only executes the second stage of two-stage object detection; its performance is therefore heavily dependent on the baseline detector. SOD-MTGAN exploits deconvolution layers for up-sampling, which generates fewer artifacts [10]. However, SOD-MTGAN does not propose a method to suppress artifacts.
In this paper, we propose a centered multi-task generative adversarial network (CMTGAN) to improve detection performance on small objects, which exploits points of interest provided by gaze estimation methods or detectors (e.g., YOLOv4) for small object detection. We exploit a gaze estimation method or a detector as a baseline selector to propose points of interest. CMTGAN crops the selected regions centered on the points of interest and performs two-stage object detection. Following previous works on GANs, CMTGAN consists of two subnetworks: a generator and a discriminator. The generator performs super-resolution on the selected regions. The discriminator distinguishes real images (high-resolution images) from fake images (super-resolution images) and performs complete two-stage object detection.
Contributions: The contributions can be summarized as follows: (1) We propose an end-to-end convolutional network based on the classical GAN for small object detection, which can perform effective single image super-resolution and complete two-stage object detection.
Our method can be pre-trained on high-resolution images for super-resolution without extra object information, which helps the generator learn to extract features from low-resolution images efficiently. The generator performing super-resolution and the discriminator performing object detection can be trained together, which helps them learn to perform better detections simultaneously.
(2) We introduce an artificial texture loss into the generator to suppress the artifacts generated by up-sampling, which improves the detection performance on small objects. The artificial texture loss helps the generator reach a balance between textures from the original images and textures generated by super-resolution. (3) We exploit a centered mask in the network, making the generator pay more attention to the central part of images. (4) Experiments on the VOC dataset reveal that CMTGAN can restore sharper images with more information from low-resolution images than traditional interpolation methods.
Our method performs better than mainstream one-stage and two-stage object detection methods. It is also more efficient than object detection methods combined with CNN-based super-resolution methods.
CMTGAN can perform state-of-the-art detection on small/medium objects.

2. Related Work

2.1. Small Object Detection

Traditional object detection methods are based on handcrafted features and the deformable part model [11]. Due to the limitation of handcrafted features, traditional methods are far less robust than methods based on deep neural networks. Especially for small object detection, the performance of traditional methods is far from CNN-based methods.
In recent years, object detection methods based on deep neural networks have exhibited superior performance. Currently, CNN-based object detection methods can be categorized into one of two frameworks: the two-stage framework (e.g., Faster RCNN [6], FPN [8], etc.) and the one-stage framework (e.g., YOLO [5,12,13], SSD [7], etc.). Faster RCNN [6], a milestone of the two-stage framework, performs object detection in two stages: it proposes ROIs in the first stage, then predicts categories and regresses bounding boxes in the second stage. One-stage frameworks such as YOLO convert object detection into a regression problem, significantly improving detection speed. However, Faster RCNN and YOLOv4 still show unsatisfactory performance on small object detection.
To detect small objects better, SSD uses feature maps from the shallow layer. Although shallow feature maps contain more texture information, they lack semantic information, leading to false positive results in SSD. Compared to SSD-like detectors, our discriminator uses deep, strong semantic features to represent small objects, thus reducing the false positive rate.
FPN exploits the feature pyramid to combine low-resolution, semantically strong features with high-resolution, semantically weak features. With the feature pyramid, FPN exhibits a superior performance over Faster RCNN for small object detection. However, FPN up-samples low-resolution features to fit with high-resolution features, a process that introduces artifacts into the features and consequently degrades the detection performance. SOD-MTGAN [10] uses deconvolution layers for up-sampling, which introduces fewer artifacts into features. However, SOD-MTGAN has not proposed a specific method to suppress artifacts.
Xiang et al. [14] proposed a one-stage space-time video super-resolution framework. It exploits a ConvLSTM method to super-resolve videos, but it is not suitable for single image super-resolution. Su et al. [15] proposed a progressive mixture model for single image super-resolution, which achieved impressive performance on super-resolution.
Compared to FPN and the generator of SOD-MTGAN, we propose a way to suppress artificial textures. We exploit deconvolution layers for up-sampling like SOD-MTGAN and propose an artificial texture loss to suppress artifacts, which helps our network balance original textures and super-resolution textures.
Different from [15], we combine single image super-resolution and object detection in a CNN-based framework, which means they can be trained together.

2.2. Generative Adversarial Networks

In its original formulation, a generative adversarial network generates realistic-looking images from random noise input [16]. GANs exhibit impressive performance in image super-resolution [17,18], image editing [19,20], image generation, style transfer [21,22], representation learning [23,24], object detection [9,10,25], and so on. A GAN includes a generator and a discriminator: the generator generates images, and the discriminator determines the authenticity of images. During training, the generator tries to generate more realistic-looking images, and the discriminator struggles to discover the difference between real images and fake images. After that, the well-trained generator can be used to generate realistic-looking images.
Ledig et al. [17] proposed a generative adversarial network for image super-resolution. The generator takes low-resolution images as input to generate super-resolution images. Real high-resolution images and fake images (i.e., super-resolution images) are delivered to the discriminator, which learns to distinguish real images from fake images. Bai et al. [10] introduced SOD-MTGAN for image super-resolution and small object detection. The generator of SOD-MTGAN takes ROIs proposed by a baseline detector (e.g., Faster RCNN) as input and performs super-resolution on the ROIs. The discriminator of SOD-MTGAN has three tasks: judging the authenticity of the image, predicting categories, and fine-tuning bounding boxes. The discriminator plays the role of the second-stage subnetwork in the two-stage framework. Therefore, the baseline detector has a significant influence on the detection performance of SOD-MTGAN.
Compared to SOD-MTGAN, the discriminator of our method performs complete two-stage small object detection. The generator of CMTGAN takes selected regions as input and performs super-resolution. Then the discriminator proposes ROIs on super-resolution images in the first stage, predicts object categories and regresses object locations in the second stage. The baseline selector only proposes points of interest, which means that CMTGAN has less reliance on the selector.

3. Proposed Method

CMTGAN includes a generator and a discriminator. As shown in Figure 1, the baseline selector creates points of interest on the input containing small objects. We cropped the selected regions centered on the points of interest as high-resolution images (HR images) and down-sampled the HR images to obtain low-resolution images (LR images). The generator takes the LR images and generates super-resolution images (SR images). The HR/SR images are delivered to the discriminator, which categorizes the input as real or fake and detects small objects.

3.1. Network Architecture

3.1.1. Generator

As shown in Figure 2 and Table 1, we adopted a deep CNN architecture which has shown impressive performance in tiny face detection [9] and super-resolution [17].
There are one skip connection, one sigmoid layer, two deconvolution layers, three convolution layers, and five residual blocks in the generator. Differently from [9], we introduced a skip connection into the generator, which brings texture information from shallow layers to the up-sampling layers. Differently from the up-sampling layers in [9,17], we exploited deconvolution layers for up-sampling, which achieves higher efficiency and generates fewer artifacts [10]. Each deconvolution layer performs up-sampling with a factor of 2, so the SR images are four times the size of the LR images. We exploit a sigmoid layer to limit the output, which avoids gradient explosion during training.
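For concreteness, the PyTorch sketch below shows one way such a generator could be assembled, with channel widths, kernel sizes, and strides taken from Table 1. The activation choices (PReLU, batch normalization) and the exact form of the skip connection are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # 3x3 conv -> BN -> PReLU -> 3x3 conv -> BN, with an identity shortcut.
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # Conv (9x9, 64) -> 5 residual blocks -> Conv (3x3, 64) -> 1x1 skip from the head
    # -> two stride-2 deconvolutions (x4 up-sampling overall) -> Conv (9x9, 3) -> Sigmoid.
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, 64, 9, padding=4), nn.PReLU())
        self.res_blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(5)])
        self.mid = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64))
        self.skip = nn.Conv2d(64, 64, 1)  # skip connection carrying shallow features forward
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 256, 3, stride=2, padding=1, output_padding=1),
            nn.PReLU(),
            nn.ConvTranspose2d(256, 256, 3, stride=2, padding=1, output_padding=1),
            nn.PReLU(),
        )
        self.tail = nn.Sequential(nn.Conv2d(256, 3, 9, padding=4), nn.Sigmoid())

    def forward(self, lr):
        shallow = self.head(lr)
        deep = self.mid(self.res_blocks(shallow)) + self.skip(shallow)
        return self.tail(self.up(deep))

sr = Generator()(torch.randn(1, 3, 100, 100))  # -> torch.Size([1, 3, 400, 400])
```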

3.1.2. Discriminator

As shown in Figure 2 and Table 2, we employed ResNet-50 as the backbone network of the discriminator. ResNet-50 is not the only choice; it can be replaced with ResNet-101, AlexNet, or VGGNet for different objects. We introduced an RPN layer into the backbone network to propose ROIs. We used an average pooling layer following the backbone network for down-sampling. We used three parallel fully connected layers behind the average pooling layer, which distinguish the real HR images from the generated SR images, predict object categories, and regress bounding boxes.
The discriminator takes HR images and SR images as input. The backbone network extracts features from the input and proposes ROIs. Figure 3a shows the ROI tuple u = (u_{x1}, u_{y1}, u_{x2}, u_{y2}). Behind the average pooling layer, the first fully connected layer (FC_Adv) uses softmax to predict the probability (P_HR) of the input image being a real HR image. The second fully connected layer (FC_Cls) also uses softmax and outputs the probability P_Cls = (p_0, ..., p_K) of the ROI over the K + 1 object categories. The third fully connected layer (FC_Loc) outputs the bounding box offset tuple t = (t_x, t_y, t_w, t_h), which corresponds to the bounding box shown in Figure 3b.
Compared to the discriminator in [17,23], our discriminator not only distinguishes real images from fake images but also detects objects in the images. The discriminator in [9] predicts the probability of the input being a face. The discriminator in [10] predicts the probability of the input being each of the categories and fine-tunes the bounding boxes. Compared to [9] and [10], our discriminator performs complete two-stage object detection, proposing ROIs, predicting object categories, and regressing bounding boxes. The difference between our method and [10] means that we only need a point of interest to detect a small object, while [10] needs an ROI proposed by its baseline detector.
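The sketch below illustrates how the three parallel heads could sit on top of a ResNet-50 backbone in PyTorch. The RPN/ROI-pooling stage between the backbone and the heads is abstracted away, and the head dimensions follow Table 2; this is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision

class DetectionDiscriminator(nn.Module):
    # ResNet-50 backbone, average pooling, and three parallel heads:
    # FC_Adv (real HR vs. generated SR), FC_Cls (K+1 classes), FC_Loc (box offsets).
    def __init__(self, num_classes_k: int):
        super().__init__()
        resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 2048
        self.fc_adv = nn.Linear(feat_dim, 2)                        # real / fake
        self.fc_cls = nn.Linear(feat_dim, num_classes_k + 1)        # K classes + background
        self.fc_loc = nn.Linear(feat_dim, 4 * (num_classes_k + 1))  # (t_x, t_y, t_w, t_h) per class

    def forward(self, images):
        # images: (N, 3, H, W) HR or SR inputs; one object per selected region is assumed
        # here so that the RPN/ROI-pooling stage can be omitted from the sketch.
        f = self.avg_pool(self.backbone(images)).flatten(1)
        p_hr = torch.softmax(self.fc_adv(f), dim=1)[:, 1]  # probability of being a real HR image
        p_cls = torch.softmax(self.fc_cls(f), dim=1)       # class probabilities P_Cls
        t_box = self.fc_loc(f)                             # bounding-box offset tuples
        return p_hr, p_cls, t_box
```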

3.2. Loss Function

We incorporated the loss functions from some state-of-the-art GAN approaches and propose a centered content loss that satisfies the needs of small object detection. The centered content loss consists of a pixel-wise loss, a perception loss, and an artificial texture loss. It cooperates with the adversarial loss, guiding the generator to generate realistic-looking images that are easier for small object detection. Furthermore, we propose a two-stage detection loss, including an ROI loss, a classification loss, and a regression loss. On the one hand, the two-stage detection loss enables the discriminator to perform two-stage object detection. On the other hand, it drives the generator to recover fine details from LR images for easier detection, as shown in Figure 2. In the following, we describe the centered content loss and the adversarial loss, and we define the objective functions of the generator and the discriminator.

3.2.1. Centered Content Loss

As shown in Figure 1, the selected regions contain small objects in the central part. We introduced a centered mask which makes the content loss more sensitive to the central part of SR images. The centered mask is shown in Equation (1), and Figure 4 shows the suppression effect of our centered mask.
M_{x,y} = \cos\left(\pi\left[\left(\frac{x}{W}-\frac{1}{2}\right)^{2}+\left(\frac{y}{H}-\frac{1}{2}\right)^{2}\right]\right)
Here, W and H denote the size of SR images.
Pixel-wise loss: Unlike the generator in [16], which takes random noise as input, our generator creates SR images from LR images. A natural and straightforward way is to enforce the generator's output to be close to the ground-truth images by minimizing the pixel-wise loss, which has been proved effective in some state-of-the-art approaches [26,27]. The pixel-wise loss is computed as in Equation (2).
l_{pixel\text{-}wise} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H} M_{x,y}\cdot\left(I^{HR}_{x,y}-G_{\omega}(I^{LR})_{x,y}\right)^{2}
Here, M_{x,y} denotes the centered mask. I^HR and G_ω(I^LR) denote the real HR image and the generated SR image, respectively. G represents the generator, and ω denotes its parameters. W and H denote the size of the HR/SR images and the centered mask.
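A minimal sketch of how the centered mask and the mask-weighted pixel-wise loss could be computed in PyTorch, assuming the mask form reconstructed in Equation (1); the mean-based normalization stands in for the 1/WH factor.

```python
import math
import torch

def centered_mask(h, w, device="cpu"):
    # Centered mask M_{x,y} (Equation (1)): 1 at the image centre, decaying towards the borders.
    ys = (torch.arange(h, device=device, dtype=torch.float32) / h - 0.5).view(-1, 1)
    xs = (torch.arange(w, device=device, dtype=torch.float32) / w - 0.5).view(1, -1)
    return torch.cos(math.pi * (xs ** 2 + ys ** 2))

def pixel_wise_loss(sr, hr):
    # Mask-weighted MSE between generated SR and ground-truth HR images (Equation (2)).
    m = centered_mask(hr.shape[-2], hr.shape[-1], hr.device)
    return (m * (hr - sr) ** 2).mean()
```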
Perception loss: Solutions of MSE optimization problems often lack high-frequency content, which results in images covered with overly smooth textures. Therefore, we adopted a perception loss based on the pre-trained ResNet [28]. The perception loss is computed as in Equation (3).
l_{perception} = \frac{1}{wh}\sum_{x=1}^{w}\sum_{y=1}^{h} M_{x,y}\cdot\left(R(I^{HR})_{x,y}-R(G_{\omega}(I^{LR}))_{x,y}\right)^{2}
Here, R denotes the pre-trained ResNet. w and h indicate the size of the feature map created by R.
Artificial texture loss: The perception loss increases high-frequency content in SR images, making them sharper. However, perception loss without suppression tends to introduce artificial textures that do not exist in the HR images. These artificial textures significantly reduce the perception loss, but they also obscure the original textures of images, which is fatal for small object detection. The artificial texture loss is proposed to suppress the artificial textures encouraged by the perception loss. It is computed as in Equation (4).
l_{texture} = \frac{1}{W-1}\sum_{x=1}^{W-1} M_{x}\cdot\left(G_{\omega}(I^{LR})_{x+1,*}-G_{\omega}(I^{LR})_{x,*}\right)^{2} + \frac{1}{H-1}\sum_{y=1}^{H-1} M_{y}\cdot\left(G_{\omega}(I^{LR})_{*,y+1}-G_{\omega}(I^{LR})_{*,y}\right)^{2}
in which
M_{x} = \cos\left(\pi\left(\frac{x}{W}-\frac{1}{2}\right)\right), \qquad M_{y} = \cos\left(\pi\left(\frac{y}{H}-\frac{1}{2}\right)\right)
where M_x and M_y are the variants of M_{x,y} along the x and y directions, and W and H denote the size of the super-resolution image. G_ω(I^LR)_{x,*} is the sum of the pixel values of the x-th row of the generated image, and G_ω(I^LR)_{*,y} denotes the sum of the pixel values of the y-th column of the generated image.
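A sketch of this loss for a batch of SR images, under the reconstruction of Equations (4) and (5) above; the row/column indexing convention and the mean-based normalization are our assumptions.

```python
import math
import torch

def texture_loss(sr):
    # Artificial texture loss (Equation (4)): mask-weighted squared differences of
    # adjacent row sums and column sums of the generated SR image, shape (N, C, H, W).
    n, c, h, w = sr.shape
    row_sums = sr.sum(dim=-1)  # (N, C, H): sum of pixel values in each row
    col_sums = sr.sum(dim=-2)  # (N, C, W): sum of pixel values in each column
    m_row = torch.cos(math.pi * (torch.arange(h - 1, device=sr.device) / h - 0.5))
    m_col = torch.cos(math.pi * (torch.arange(w - 1, device=sr.device) / w - 0.5))
    loss_rows = (m_row * (row_sums[..., 1:] - row_sums[..., :-1]) ** 2).mean()
    loss_cols = (m_col * (col_sums[..., 1:] - col_sums[..., :-1]) ** 2).mean()
    return loss_rows + loss_cols
```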

3.2.2. Adversarial Loss

We adopted an adversarial loss to generate more realistic-looking SR images, which has been proved effective in [23]. The adversarial loss is defined in Equation (6):
l_{adv} = \log D_{\theta}(I^{HR}) + \log\left(1 - D_{\theta}(G_{\omega}(I^{LR}))\right)
where D represents the discriminator and θ denotes its parameters. D_θ(I^HR) denotes the probability of the input I^HR being a real HR image.
The adversarial loss encourages the discriminator to have a stronger discriminative ability to distinguish real HR images from generated SR images. At the same time, the adversarial loss drives the generator to produce images with fine details.
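One way the two sides of this adversarial term might be implemented is sketched below. The non-saturating generator form is a common practical substitution and is our assumption; the paper only states that l_adv appears in both objectives.

```python
import torch

def adversarial_losses(d_real, d_fake, eps=1e-8):
    # d_real = D(I_HR), d_fake = D(G(I_LR)): probabilities in (0, 1).
    # Discriminator side follows Equation (6): push d_real towards 1 and d_fake towards 0.
    loss_d = -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
    # Generator side: non-saturating form commonly used in practice (an assumption here).
    loss_g = -torch.log(d_fake + eps).mean()
    return loss_d, loss_g
```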

3.2.3. Detection Loss

As shown in Figure 2, our discriminator is a two-stage object detection method. First, the discriminator proposes ROIs from the input. Second, the discriminator predicts object categories and regresses bounding boxes on ROIs. To achieve this, we propose detection loss, including ROI loss, classification loss, and regression loss.
ROI Loss: To complete the task of proposing ROIs and to encourage the generated images to contain more detail, we introduced the ROI loss into the overall objective. The ROI loss is defined in Equation (7):
l_{ROI} = \sum_{i \in \{x_{1}, y_{1}, x_{2}, y_{2}\}} S_{L1}\left(r_{i} - u_{i}\right)
in which
S_{L1}(x) = \begin{cases} 0.5x^{2} & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}
where r = (r_{x1}, r_{y1}, r_{x2}, r_{y2}) denotes the tuple of the true ROI regression target, and u = (u_{x1}, u_{y1}, u_{x2}, u_{y2}) denotes the proposed ROI tuple shown in Figure 3a.
In our method, ROI loss plays two roles. First, it guides the discriminator to propose ROIs from the input, regardless of whether they are real HR images or generated SR images. Second, it promotes the generator to recover images with more detail, making it easier to propose ROIs.
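The smooth-L1 penalty of Equations (7) and (8) corresponds to PyTorch's built-in smooth L1 loss with beta = 1; a minimal sketch is given below, with the sum reduction covering the four coordinates and the per-batch 1/N normalization left to Equations (12) and (13).

```python
import torch.nn.functional as F

def roi_loss(proposed_roi, target_roi):
    # Smooth-L1 penalty (Equation (8)) summed over the four ROI coordinates (Equation (7)).
    # proposed_roi, target_roi: tensors of shape (N, 4) holding (x1, y1, x2, y2).
    return F.smooth_l1_loss(proposed_roi, target_roi, beta=1.0, reduction="sum")
```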
Classification Loss: To complete the object categorization, we adopted a cross-entropy loss as our classification loss. The classification loss is defined in Equation (9):
l_{cls} = \sum_{k=1}^{K}\left[-y_{i,k}\log D_{cls}(I_{i}^{HR}) - y_{i,k}\log D_{cls}(G_{\omega}(I_{i}^{LR}))\right]
in which
y_{i,k} = \begin{cases} 1 & \text{if target } i \text{ belongs to class } k \\ 0 & \text{otherwise} \end{cases}
where D_cls(I_i^*) denotes the predicted probability of the i-th input (HR or SR) belonging to the k-th category.
Our classification loss also plays two roles in the discriminator and the generator, respectively. First, it encourages the discriminator to predict accurate object categories. Second, it drives the generator to produce images that are easier to classify.
Regression Loss: We also introduced a regression loss into the objective function to complete the two-stage object detection and to encourage the generator to produce images in which small objects are easier to localize. The regression loss is defined in Equation (11):
l_{loc} = \sum_{j \in \{x, y, w, h\}} S_{L1}\left(t_{i,j} - v_{i,j}\right)
where v = (v_x, v_y, v_w, v_h) denotes the tuple of the true bounding box regression target, and t = (t_x, t_y, t_w, t_h) denotes the tuple of the predicted bounding box offsets, as shown in Figure 3b.
Similar to the ROI loss, our regression loss also has two purposes. First, it guides the discriminator to fine-tune the bounding box in the ROI proposed in the first stage. Second, it encourages the generator to produce sharper images with more high-frequency content.

3.2.4. Objective Function

Based on the previous analysis, we propose the objective functions of CMTGAN; CMTGAN can be trained by optimizing them. We adopted two objective functions for the generator and the discriminator, respectively. The loss function L_G of the generator and the loss function L_D of the discriminator are shown in Equations (12) and (13).
L_{G} = \frac{1}{N}\sum_{i=1}^{N}\lambda_{pix}\,l_{pixel\text{-}wise} + \frac{1}{N}\sum_{i=1}^{N}\lambda_{perc}\,l_{perception} + \frac{1}{N}\sum_{i=1}^{N}\lambda_{tex}\,l_{tex} + \frac{1}{N}\sum_{i=1}^{N}\lambda_{adv}\,l_{adv} + \frac{1}{N}\sum_{i=1}^{N}\lambda_{det}\left(l_{ROI} + l_{cls} + l_{loc}\right)
L_{D} = \frac{1}{N}\sum_{i=1}^{N}\tau_{ROI}\,l_{ROI} + \frac{1}{N}\sum_{i=1}^{N}\tau_{cls}\,l_{cls} + \frac{1}{N}\sum_{i=1}^{N}\tau_{loc}\,l_{loc} + \frac{1}{N}\sum_{i=1}^{N}\tau_{adv}\,l_{adv}
where λ_pix, λ_perc, λ_tex, λ_adv, and λ_det denote the trade-off weights used when training generator G, and τ_ROI, τ_cls, τ_loc, and τ_adv denote the trade-off weights used when training discriminator D. l_pixel-wise, l_perception, l_tex, l_adv, l_ROI, l_cls, and l_loc denote the pixel-wise loss in Equation (2), the perception loss in Equation (3), the artificial texture loss in Equation (4), the adversarial loss in Equation (6), the ROI loss in Equation (7), the classification loss in Equation (9), and the regression loss in Equation (11), respectively.
The loss function of generator G consists of the centered content loss, the adversarial loss, and the detection loss. Different from previous GAN methods, we introduced the centered mask and the artificial texture loss into the centered content loss. The centered mask encourages the generator to focus on improving the details of the central part, which satisfies the needs of small object detection. The artificial texture loss helps the generator reach a balance between keeping original features and generating super-resolution textures. The loss function of discriminator D includes the adversarial loss and the detection loss. Different from [10], we introduced the ROI loss into our detection loss, which helps the discriminator perform the first stage of small object detection: proposing ROIs. We also adopt the classification loss and the regression loss for the second stage: predicting object categories and regressing bounding boxes.
While training the generator, we froze the discriminator, calculated the loss of the generator with L_G, and updated the generator by backpropagation. Similar to the generator, we also optimized the discriminator while keeping the generator frozen.
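A schematic PyTorch sketch of this alternating scheme is shown below, also folding in the 1-in-5 generator update schedule described in Section 4.2. `generator_loss` and `discriminator_loss` are placeholder callables that would assemble the weighted terms of Equations (12) and (13).

```python
import torch

def train_one_epoch(generator, discriminator, loader, opt_g, opt_d,
                    generator_loss, discriminator_loss, device="cuda"):
    # Alternating optimization: the generator is updated every fifth iteration with the
    # discriminator frozen; otherwise the discriminator is updated with the generator frozen.
    for it, (lr_imgs, hr_imgs, targets) in enumerate(loader):
        lr_imgs, hr_imgs = lr_imgs.to(device), hr_imgs.to(device)
        if it % 5 == 0:
            for p in discriminator.parameters():
                p.requires_grad_(False)
            sr_imgs = generator(lr_imgs)
            loss_g = generator_loss(sr_imgs, hr_imgs, discriminator, targets)  # L_G, Equation (12)
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
            for p in discriminator.parameters():
                p.requires_grad_(True)
        else:
            with torch.no_grad():
                sr_imgs = generator(lr_imgs)  # generator frozen on the discriminator's turn
            loss_d = discriminator_loss(sr_imgs, hr_imgs, discriminator, targets)  # L_D, Equation (13)
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
```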

4. Experiments

4.1. Datasets and Evaluation Metrics

We implemented our model with PyTorch, and all the following experiments were performed on a single NVIDIA GeForce RTX 3090 GPU. Table 3 shows our system requirements. Considering the GPU's performance, we experimentally validated our proposed method on the VOC dataset.
The VOC dataset contains 20 object categories including vehicles, households, animals, and others. This dataset has been widely used as a benchmark for object detection tasks [29].
Due to the resolution of the dataset, we exploited original images for the pre-training of the generator. After that, we created selected regions from original images for the pre-training of the discriminator and the training of CMTGAN, respectively.
Due to errors in the baseline selector, the point of interest cannot exactly coincide with the center of the target. As shown in Figure 5, we therefore added a random offset (x_offset, y_offset) from the center of the target while creating the selected regions. The offset (x_offset, y_offset) is given by Equation (14).
x_{offset} = \mathrm{random}\left(10, \sqrt{w_{object}\cdot h_{object}}\right), \qquad y_{offset} = \mathrm{random}\left(10, \sqrt{w_{object}\cdot h_{object}}\right)
where w_object and h_object denote the width and height of the detection target, and random(x_1, x_2) returns a random integer between x_1 and x_2. After that, we took the point of interest as the center and cropped the selected region with a fixed size size_selected.
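A sketch of how a selected region might be cropped with this jitter, assuming a PIL image and the reconstructed bound of Equation (14); the sign of the offset is not specified in the paper, so the random direction below is an assumption.

```python
import random

def crop_selected_region(image, cx, cy, obj_w, obj_h, size_selected=150):
    # Jitter the point of interest away from the object centre (Equation (14)) and
    # crop a fixed-size square region around it; image is a PIL.Image, (cx, cy) is
    # the ground-truth object centre.
    upper = max(10, int((obj_w * obj_h) ** 0.5))
    dx = random.choice((-1, 1)) * random.randint(10, upper)  # direction is an assumption
    dy = random.choice((-1, 1)) * random.randint(10, upper)
    x, y = cx + dx, cy + dy
    half = size_selected // 2
    return image.crop((x - half, y - half, x + half, y + half))
```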
We exploited the average gradient (AG), standard deviation (STD), and mutual information (MI) to validate the performance of our generator: AG indicates the sharpness of images, STD indicates the quantity of information, and MI measures the similarity between HR and SR images. Furthermore, we performed small object detection with CMTGAN and some mainstream methods using one-stage or two-stage frameworks. We divided the objects into small (area < 32²), medium (32² < area < 96²), and large objects (area > 96²). We focused on the detection of small/medium objects and report the final detection performance with AP.
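For reference, common NumPy formulations of these metrics are sketched below; the exact AG and MI definitions used for Table 4 are not given in the paper, so these are illustrative.

```python
import numpy as np

def average_gradient(img):
    # Mean local gradient magnitude of a grayscale image; a higher AG means a sharper image.
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def mutual_information(img_a, img_b, bins=256):
    # Histogram-based mutual information between two grayscale images of equal size.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# STD is simply np.std(img). AG and MI formulations vary slightly between papers,
# so these are common definitions rather than necessarily the ones used for Table 4.
```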

4.2. Implementation Details

In the generator, we set the trade-off weights λ_pix = 1, λ_perc = 0.006, λ_tex = 2 × 10⁻⁸, and λ_adv = λ_det = 0.001. In the discriminator, we set the trade-off weights τ_adv = τ_ROI = τ_cls = τ_loc = 1. First, we performed the pre-training of the generator and the discriminator. Second, we trained CMTGAN for image super-resolution and small object detection.
Pre-training of the generator and the FC_Adv branch of the discriminator: We created HR images of size 400 × 400 from the VOC dataset and used down-sampling to produce LR images of size 100 × 100. Then, we performed the pre-training on the HR and SR images. The generator produces SR images of size 400 × 400 from the LR images, and the FC_Adv branch outputs the probability of the input being a real HR image. Our generator was trained from scratch. The weights in each layer were initialized with a zero-mean Gaussian distribution with a standard deviation of 0.02, while the biases were initialized with 0. The backbone network of the discriminator loaded the pre-trained weights of ResNet-50. The weights in the fully connected layer of the FC_Adv branch were initialized with a zero-mean Gaussian distribution with a standard deviation of 0.1, while the biases were initialized with 0. During the pre-training, the weights and biases in the backbone network of the discriminator were fixed, which makes the discriminator more stable. We adopted Adam optimizers for the generator and the discriminator, respectively. The learning rates for the optimizers were initially set to 0.0001 and were then reduced to 95% of their previous value after every epoch. We alternately updated the generator and the discriminator networks: we updated the generator every five iterations and updated the discriminator every iteration except on the generator's turn. The pre-training was terminated after 50 epochs, and the states of the network were recorded.
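The initialization and learning-rate schedule described here could be expressed as follows (a sketch; `generator` and `discriminator` stand for the modules sketched earlier, and ExponentialLR with gamma = 0.95 reproduces the per-epoch 95% decay):

```python
import torch
import torch.nn as nn

def init_generator_weights(m):
    # Zero-mean Gaussian (std 0.02) for conv/deconv weights, zero biases.
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

def init_fc_head(m):
    # Fully connected heads: zero-mean Gaussian with std 0.1, zero biases.
    if isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, mean=0.0, std=0.1)
        nn.init.zeros_(m.bias)

# Illustrative usage (module names refer to the earlier sketches):
# generator.apply(init_generator_weights)
# discriminator.fc_adv.apply(init_fc_head)
# opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
# sched_g = torch.optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.95)  # lr x0.95 per epoch
```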
Pre-training of the discriminator: We pre-trained the FC_Cls branch and the FC_Loc branch of the discriminator on the selected regions with size_selected = 150. Similar to the former pre-training, we also fixed the backbone network of the discriminator, which loaded the pre-trained weights of ResNet-50. The weights in the RPN layer and in the fully connected layers of the FC_Cls and FC_Loc branches were initialized with a zero-mean Gaussian distribution with a standard deviation of 0.1, while the biases were initialized with 0. We adopted the Adam optimizer for the discriminator. The learning rate for the optimizer was initially set to 0.0001 and then reduced to 95% of its previous value after every epoch. The pre-training was terminated after 50 epochs, and the states of the network were recorded.
Training for CMTGAN: Finally, we trained CMTGAN on the selected regions. The generator performed super-resolution on the selected regions of size 150 × 150, and the discriminator performed object detection on the resulting SR images of size 600 × 600, predicting object categories and regressing bounding boxes. The generator and the discriminator loaded the pre-trained weights. We adopted Adam optimizers for the generator and the discriminator, respectively. The learning rates for the optimizers were initially set to 1 × 10⁻⁵ and then reduced to 95% of their previous value after every epoch. We alternately updated the generator and the discriminator networks: we updated the generator every five iterations and updated the discriminator every iteration except on the generator's turn. The training lasted 100 epochs. In the first 50 epochs, the layers in the backbone network of the discriminator were fixed; in the following 50 epochs, no layer was fixed.

4.3. Experimental Results

4.3.1. Performance of Super-Resolution

The generator performed super-resolution on LR images, and the performance is shown in Figure 6. We performed up-sampling with bicubic interpolation on LR images of size 100 × 100 (Figure 6, row A) to restore images of size 400 × 400 (Figure 6, row B).
We super-resolved LR images with SPSR [30] and ESRGAN (Figure 6, row C and row D).
At the same time, we exploited CMTGAN without the artificial texture loss to generate SR images with a factor of 4 (Figure 6, row E). Furthermore, we exploited CMTGAN with the artificial texture loss to generate SR images with a factor of 4 (Figure 6, row F).
It is evident that SR images in row E are significantly sharper than restored images in row B. However, SR images in row E contain some abnormal textures, which may cover the original texture information of small objects. Especially in the first image of row E, we can see that the wings are abnormally distorted by artificial textures. SR images in row F contain significantly fewer artificial textures than SR images in row E. The wings in the first image of row F are more realistic than row E.
Although SPSR exhibits impressive performance on images of buildings, the images generated by SPSR in row C contain too many artificial textures for small object detection compared to the images generated by our method in row E. ESRGAN generated more realistic-looking images in row D compared to SPSR. The images generated by ESRGAN in row D look sharper than the images in row E and show extremely clear boundaries. However, due to optical factors, real HR images captured by cameras do not contain such extremely clear boundaries, which can interfere with object detection methods. More details are given in the following experiments.
In summary, the generator of CMTGAN can generate sharper SR images than traditional interpolation methods, and there is no significant gap between the generator of CMTGAN and CNN-based methods (e.g., ESRGAN) in single image super-resolution. The artificial texture loss shows significant suppression of artifacts, which helps the generator keep a balance between original features and super-resolution textures.
Furthermore, we quantitatively analyzed the super-resolution performance of CMTGAN with AG, STD, MI, and inference time. A higher AG means sharper images, and a higher STD means more information in the images; MI indicates the similarity between HR images and SR/RE images. We randomly collected 54 HR images from the VOC dataset and down-sampled them to the size of 150 × 150, as shown in Figure 7. We up-sampled the LR images with bilinear and bicubic interpolation to restore images of size 600 × 600, and the generator of CMTGAN produced SR images with a factor of 4. As shown in Table 4, we calculated the AG, STD, and MI of the SR/RE images to validate the performance of CMTGAN. Taking into consideration the inference-time needs of object detection, we also recorded the inference time in Table 5.
According to Table 4, it is clear that the SR images generated by CNN-based methods have higher AG and STD than the RE images generated by traditional interpolation methods, and the images generated by ESRGAN have the best AG and STD. However, a higher AG and STD do not necessarily mean better images. The images generated by SPSR have a better AG and STD than CMTGAN, but they contain too many artificial textures, as shown in Figure 6. These artificial textures increase AG and STD, but they also make small objects hard to detect. Therefore, we exploited MI to measure the similarity between HR images and SR/RE images. As shown in Table 4, the SR images generated by CMTGAN have the best MI, which means that they are the most similar to the original HR images.
According to Table 5, CMTGAN has the shortest inference time among CNN-based super-resolution methods. The generator of CMTGAN takes an average of 10.1 ms to perform super-resolution, which satisfies the needs of object detection. Although it takes more time than traditional interpolation methods, the inference time of CMTGAN is significantly shorter than SPSR and ESRGAN.
In summary, SR images generated by CMTGAN are sharper than images produced by traditional interpolation methods and contain more information. The generator of CMTGAN exhibits a similar super-resolution performance to some state-of-the-art CNN-based methods. SR images generated by CMTGAN are the most similar to the original HR images as compared to images generated by traditional interpolation methods and CNN-based methods. The generator of CMTGAN can perform real-time super-resolution on a single NVIDIA RTX3090, which satisfies the needs of small object detection.

4.3.2. Performance of Small Object Detection

We exploited CMTGAN to detect small objects, as shown in Figure 8. The generator performed super-resolution on the input, which made the images easier for detection. The discriminator proposed ROIs in the first stage, predicted object categories and regressed bounding boxes in the second stage.
We performed small/medium object detection on the selected regions with CMTGAN and with YOLOv4 and Faster RCNN combined with different up-sampling methods. For YOLOv4, we up-sampled the selected regions from 150 × 150 to 608 × 608 with bilinear and bicubic interpolation, and from 150 × 150 to 600 × 600 with SPSR and ESRGAN, which is similar to the super-resolution performed by the generator in CMTGAN. For Faster RCNN, we up-sampled the selected regions from 150 × 150 to 600 × 600 with bilinear interpolation, bicubic interpolation, SPSR, and ESRGAN. Then, we exploited these methods for object detection.
As shown in Table 6, CMTGAN achieves a better performance on small/medium object detection than the best YOLOv4 variant (by 20.52 points of AP) and the best Faster RCNN variant (by 5.27 points of AP).
Although YOLOv4 combined with ESRGAN achieved a higher AP than YOLOv4 combined with traditional interpolation, its inference time also increased, as shown in Table 7.
According to Table 7, YOLOv4 combined with bilinear interpolation has the shortest inference time. The inference time of CMTGAN is longer than that of YOLOv4 combined with bilinear interpolation but significantly shorter than that of YOLOv4 and Faster RCNN combined with CNN-based super-resolution methods. CNN-based super-resolution methods (e.g., ESRGAN, SPSR) may benefit small object detection, but they also take a long time to super-resolve LR images, which prevents real-time detection. CMTGAN exhibited a better object detection performance than Faster RCNN combined with traditional interpolation, with a similar inference time.

5. Conclusions

In this paper, we proposed CMTGAN, a new small object detection method based on generative adversarial networks. We introduced an artificial texture loss and a centered mask into the generator, with which the generator could create super-resolution images that are easier for small object detection. The artificial texture loss helped the generator to balance the original features and super-resolution textures. The discriminator of our method performed complete two-stage object detection and distinguished real images from fake images, and it can be adapted to other GANs for detection tasks. The experimental results showed that, compared with the existing methods, the generator of CMTGAN could generate sharper super-resolution images with more information, and CMTGAN had a clear advantage in small/medium object detection.
In future work, we will focus on eliminating the baseline selector. Although CMTGAN has an inference time similar to that of Faster RCNN, there is still a significant gap between YOLOv4 and CMTGAN in inference time. We will investigate how to optimize the architecture of CMTGAN to perform more efficient object detection. Furthermore, we will further investigate the generation of artifacts to achieve a better performance.

Author Contributions

Conceptualization, H.W. and J.W.; methodology, H.W.; software, H.W.; validation, H.W. and K.B.; formal analysis, H.W. and Y.S.; investigation, H.W. and Y.S.; resources, J.W.; data curation, H.W. and K.B.; writing—original draft preparation, H.W.; writing—review and editing, J.W.; visualization, J.W.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Defense Industrial Technology Development Program (JCKY2019602C015).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the Defense Industrial Technology Development Program for their funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GAN        Generative adversarial network
HR images  High-resolution images
LR images  Low-resolution images
SR images  Super-resolution images
RE images  Restored images
AG         Average gradient
STD        Standard deviation
MI         Mutual information

References

1. Fischer, T.; Chang, H.J.; Demiris, Y. Rt-gene: Real-time eye gaze estimation in natural environments. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 334–352.
2. Jaques, N.; Conati, C.; Harley, J.M.; Azevedo, R. Predicting affect from gaze data during interaction with an intelligent tutoring system. In International Conference on Intelligent Tutoring Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 29–38.
3. Eid, M.A.; Giakoumidis, N.; El Saddik, A. A novel eye-gaze-controlled wheelchair system for navigating unknown environments: Case study with a person with ALS. IEEE Access 2016, 4, 558–573.
4. Georgiou, T.; Demiris, Y. Adaptive user modelling in car racing games using behavioural and physiological data. User Model. User Adapt. Interact. 2017, 27, 267–311.
5. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
6. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497.
7. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
8. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
9. Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. Finding tiny faces in the wild with generative adversarial network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 21–30.
10. Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. Sod-mtgan: Small object detection via multi-task generative adversarial network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 206–221.
11. Zhang, X.; He, Z.; Ma, Z.; Yang, Y. A Self-Labeling Feature Matching Algorithm for Instance Recognition on Multi-Sensor Images. Trans. Beijing Inst. Technol. 2021, 41, 558–568.
12. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
13. Liu, H.; Fan, K.; Ouyang, Q.; Li, N. Real-Time Small Drones Detection Based on Pruned YOLOv4. Sensors 2021, 21, 3374.
14. Xiang, X.; Tian, Y.; Zhang, Y.; Fu, Y.; Allebach, J.P.; Xu, C. Zooming slow-mo: Fast and accurate one-stage space-time video super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3370–3379.
15. Su, R.; Zhong, B.; Ji, J.; Ma, K.K. Single Image Super-Resolution Via A Progressive Mixture Model. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 508–512.
16. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65.
17. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
18. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
19. Feng, H.; Guo, J.; Xu, H.; Ge, S.S. SharpGAN: Dynamic Scene Deblurring Method for Smart Ship Based on Receptive Field Block and Generative Adversarial Networks. Sensors 2021, 21, 3641.
20. Marnerides, D.; Bashford-Rogers, T.; Debattista, K. Deep HDR Hallucination for Inverse Tone Mapping. Sensors 2021, 21, 4032.
21. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
22. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2223–2232.
23. Li, J.; Liang, X.; Wei, Y.; Xu, T.; Feng, J.; Yan, S. Perceptual generative adversarial networks for small object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1222–1230.
24. Pan, L.; Li, X.; Luo, S.; Wu, Z. Double-Channel GAN with Multi-Level Semantic Correlation for Event Detection. Trans. Beijing Inst. Technol. 2021, 41, 295–302.
25. Truong, N.Q.; Lee, Y.W.; Owais, M.; Nguyen, D.T.; Batchuluun, G.; Pham, T.D.; Park, K.R. SlimDeblurGAN-based motion deblurring and marker detection for autonomous drone landing. Sensors 2020, 20, 3918.
26. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
27. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Dong, Z.; Xu, K.; Yang, Y.; Bao, H.; Xu, W.; Lau, R.W. Location-aware Single Image Reflection Removal. arXiv 2020, arXiv:2012.07131.
30. Ma, C.; Rao, Y.; Cheng, Y.; Chen, C.; Lu, J.; Zhou, J. Structure-preserving super resolution with gradient guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 7769–7778.
Figure 1. Workflow of CMTGAN.
Figure 2. Architecture of CMTGAN.
Figure 3. ROI and bounding box. The red point denotes the point of interest proposed by the baseline selector, and the red box indicates the selected region centered by the point of interest. Due to the error of the baseline selector, the point of interest cannot properly coincide with the center of the ground-truth bounding box. The blue box shows the ROI proposed by the discriminator. The green box denotes the predicted bounding box, and the green point represents the center of the predicted bounding box.
Figure 4. Centered mask.
Figure 5. Selected region. The blue point denotes the center of the target, and the blue box indicates the ground-truth bounding box. The red point denotes the point of interest proposed by the baseline selector, and the red box indicates the selected region.
Figure 6. Performance of traditional interpolation methods and CNN-based methods.
Figure 7. LR images from VOC.
Figure 8. Two-stage object detection of CMTGAN.
Table 1. Architecture of the generator in CMTGAN.
Layer       | Conv | Res-Block x5 | Conv | De-Conv | De-Conv | Conv | Skip
Kernel Num. | 64   | 64           | 64   | 256     | 256     | 3    | 64
Kernel Size | 9    | 3            | 3    | 3       | 3       | 9    | 1
Stride      | 1    | 1            | 1    | 2       | 2       | 1    | 1
Table 2. Architecture of the discriminator in CMTGAN. K denotes the number of object categories.
Layer       | Conv | Max-Pool | Layer 1 | Layer 2 | Layer 3 | RPN | Layer 4 | Avg-Pool | FC 1 | FC 2  | FC 3
Kernel Num. | 64   | -        | 128     | 256     | 512     | 512 | 1024    | -        | 2    | K + 1 | 4(K + 1)
Kernel Size | 7    | 3        | 1       | 1       | 1       | 3   | 1       | 7        | -    | -     | -
Stride      | 2    | 2        | 1       | 2       | 2       | 1   | 2       | 1        | -    | -     | -
Table 3. System requirements.
CPU      | Intel 10700K
GPU      | NVIDIA RTX3090
OS       | Ubuntu 20.04
Language | Python 3.8 with PyTorch 1.8.1 (LTS)
Table 4. Metrics of super-resolution. Each data row lists the image index followed by the AG, STD, and MI values for Bilinear, Bicubic, SPSR, ESRGAN, and CMTGAN, in that order.
Image | AG (Bilinear, Bicubic, SPSR, ESRGAN, CMTGAN) | STD (Bilinear, Bicubic, SPSR, ESRGAN, CMTGAN) | MI (Bilinear, Bicubic, SPSR, ESRGAN, CMTGAN)
16.6964.11310.28414.2656.96549.76848.02252.31953.77850.6610.9820.9880.8830.9481.124
24.2012.7735.7876.7784.10256.51655.60757.12357.27656.8651.6471.6691.4991.7471.849
33.3802.1895.41610.7844.28241.62640.87342.95643.64342.3101.1591.1581.0371.0581.258
44.5843.2369.7469.7085.45563.95762.52866.06766.63265.6311.6391.6141.3141.6481.708
52.2411.4012.8605.6652.97814.82314.19315.10416.17415.2970.7020.6980.5560.6110.759
65.8724.1008.8048.2996.38274.18472.58276.18576.26375.4011.3031.2911.1661.4141.401
75.9764.0939.2157.3316.06870.64169.05772.02872.31571.8481.6161.6041.4121.8781.792
83.2922.1245.0698.2694.11461.26160.67161.79762.26861.9301.2741.2621.1481.2321.359
96.4714.26710.5669.2036.73572.39370.74574.77774.40173.8801.3731.3671.1721.4961.529
104.4962.8587.2466.1384.55753.73052.78254.92354.51654.2421.5341.5481.3201.7141.736
114.2402.7186.4785.5154.51541.48540.48042.59742.40541.5551.3151.3201.1461.4671.468
124.1372.7606.2025.6334.54851.77550.63353.05953.03152.0561.5521.5281.2941.6951.599
133.3962.2625.5174.5033.70753.51952.70454.44854.38054.0891.8731.8651.6492.0342.021
144.5942.7976.06010.2175.13346.61845.64747.44848.52547.4321.2461.2411.1121.2041.381
154.9663.3039.1929.9145.38769.86768.71571.47871.71670.7711.5511.5721.3801.6001.707
162.1401.4554.8494.1312.78540.71340.16941.62741.31941.5051.6411.6401.4291.6371.645
173.8542.3544.86412.4494.30528.53227.52729.29931.44529.4460.9080.9160.7900.7481.036
184.7902.9726.98410.8465.24242.26741.03543.84445.14243.1971.0691.0630.9341.0711.196
194.4702.7735.9179.1654.64558.91658.03259.79660.31359.2591.6151.6341.4431.6001.673
204.6092.9917.0678.1575.02861.37860.24662.82563.15462.0451.7501.7581.4871.7731.784
214.1392.5956.4936.9614.26748.79547.75849.96650.34949.9251.6881.7131.4591.8731.857
225.2483.4278.3068.3665.78152.86351.16455.05255.01753.8621.3191.3121.1591.4321.483
235.3593.6928.5437.5555.83258.95257.47560.86360.87160.0001.4641.4551.2881.6981.657
244.0992.6876.5527.0564.59985.02584.12185.59386.89685.2931.9121.8811.6381.9721.882
255.9723.7178.95716.0666.38752.31250.77554.26756.03953.2281.1261.1261.0121.0321.305
261.8211.2703.0401.8942.24051.96351.58852.66152.13851.7192.4642.5102.0602.6842.363
273.4572.2825.6158.6703.90133.95832.91035.44936.09034.3781.3041.3311.1471.2221.423
284.7262.9386.8008.7035.22644.17643.02945.62445.99744.9961.1281.1291.0341.2321.286
294.1272.6796.6025.8354.35153.83852.80755.21855.23154.5041.6711.6581.4231.8671.821
305.2293.5839.7198.4146.04960.85859.53963.09863.06962.0901.3841.3851.2571.5821.585
312.9091.9113.6364.5882.75171.21370.81071.34871.57972.1702.1072.1381.9542.1592.221
324.2042.8506.2545.7404.61672.15071.12273.29472.84572.9541.6901.6651.4851.8061.803
334.5703.1428.5769.8015.26560.98259.73763.41263.10261.8631.4151.4101.2181.4231.510
344.0862.5395.96110.7844.69540.81239.80741.83643.03141.4751.0901.0930.9790.9981.162
353.8842.5156.0055.7814.39337.02935.68738.85638.48337.7231.3211.3141.1371.3991.472
365.9204.1059.7607.3426.17062.52860.72464.93464.39463.4701.3721.3741.1921.6181.584
375.3153.66810.31010.5765.98669.41668.06371.54271.77870.5561.5051.5021.3251.5861.691
383.5072.1534.2954.5323.96339.68439.02740.41040.00939.7691.4701.4751.2951.6451.667
396.1054.06610.2779.5816.46271.70470.14373.62074.19773.0751.4981.5021.3471.6751.689
404.9223.2358.1368.6285.31269.05368.01770.44370.69069.9391.6261.6291.4221.7021.729
414.5042.7415.8777.1374.89440.14438.97241.44741.80340.7561.1671.1781.0231.2741.362
422.4551.6854.3414.2942.81446.51145.96047.20146.95946.7711.8181.8141.5481.8591.900
435.8884.02410.20713.4416.47957.40355.64860.02060.74658.5901.3461.3591.1701.3611.541
447.1174.54911.95813.3477.29568.85967.12171.40371.49170.1551.2101.2171.0851.2671.375
455.7993.7988.1636.6135.52861.63760.15862.74362.58862.4381.3971.4191.2371.6071.610
466.7434.4569.7087.0156.43962.94461.04664.22664.67863.7161.4691.4911.3011.8331.778
473.0931.9684.0223.8403.31446.63546.15147.02047.04047.0511.7951.8311.6251.9331.972
483.8272.4425.4179.0304.42952.81952.08353.74753.99953.0551.3581.3491.2201.3251.485
495.6483.69211.68018.4146.71158.62456.95761.94864.66260.1591.2341.2201.0831.1761.335
505.9063.8458.5039.9286.10765.09663.54866.61366.97366.1471.4161.4101.2531.5041.581
516.4544.22711.33114.6277.17359.01957.10161.99063.24660.3221.0401.0360.9101.0111.184
525.6133.6719.7986.7855.64862.00660.58264.07063.02163.1631.5211.5271.3111.7771.784
534.4222.8676.5026.4234.93038.00536.35240.01939.99939.0531.1791.1661.0191.3371.348
544.9843.1717.20710.2345.56175.57874.69776.84377.73976.3831.4591.4531.3361.4981.594
Avg | AG: 4.638, 3.032, 7.346, 8.425, 5.046 | STD: 55.307, 54.128, 56.787, 57.138, 56.114 | MI: 1.439, 1.441, 1.262, 1.517, 1.575
Table 5. Inference time of super-resolution.
Method         | Bilinear | Bicubic | SPSR     | ESRGAN  | SR in CMTGAN
Inference time | 1.8 ms   | 2.6 ms  | 147.1 ms | 58.5 ms | 10.1 ms
Table 6. Performance of object detection.
Method                | AP    | APs   | APm
Bilinear + YOLOv4     | 33.39 | 20.64 | 35.68
Bicubic + YOLOv4      | 32.20 | 22.14 | 34.30
SPSR + YOLOv4         | 19.75 | 13.75 | 20.84
ESRGAN + YOLOv4       | 34.70 | 20.42 | 36.33
Bilinear + FasterRCNN | 49.95 | 25.20 | 56.49
Bicubic + FasterRCNN  | 48.81 | 24.00 | 54.86
SPSR + FasterRCNN     | 26.99 | 15.91 | 28.70
ESRGAN + FasterRCNN   | 46.59 | 33.58 | 48.89
CMTGAN                | 55.22 | 36.99 | 69.72
Table 7. Inference time of object detection.
Method                | Resize/SR | Detection | Total
Bilinear + YOLOv4     | 1.8 ms    | 29.4 ms   | 31.2 ms
Bicubic + YOLOv4      | 2.6 ms    | 29.4 ms   | 32.0 ms
SPSR + YOLOv4         | 147.1 ms  | 29.4 ms   | 176.5 ms
ESRGAN + YOLOv4       | 58.5 ms   | 29.4 ms   | 87.9 ms
Bilinear + FasterRCNN | 1.8 ms    | 42.1 ms   | 43.9 ms
Bicubic + FasterRCNN  | 2.6 ms    | 42.1 ms   | 44.7 ms
SPSR + FasterRCNN     | 147.1 ms  | 42.1 ms   | 189.2 ms
ESRGAN + FasterRCNN   | 58.5 ms   | 42.1 ms   | 100.6 ms
CMTGAN                | 10.1 ms   | 35.8 ms   | 45.9 ms
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

