Article

CP-SSD: Context Information Scene Perception Object Detection Based on SSD

College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2019, 9(14), 2785; https://doi.org/10.3390/app9142785
Submission received: 20 June 2019 / Revised: 7 July 2019 / Accepted: 8 July 2019 / Published: 11 July 2019
(This article belongs to the Special Issue Computer Vision and Pattern Recognition in the Era of Deep Learning)

Abstract

Single Shot MultiBox Detector (SSD) has achieved good results in object detection, but it suffers from problems such as an insufficient understanding of context information and the loss of features in deep layers. To alleviate these problems, we propose a single-shot object detection network, Context Perception-SSD (CP-SSD). CP-SSD improves the network's understanding of context information through a context information scene perception module, which captures context information for objects of different scales. On the deep feature maps, a semantic activation module adjusts, through self-supervised learning, the interdependence between context feature information and channels, and enhances useful semantic information. CP-SSD was validated on the benchmark dataset PASCAL VOC 2007. The experimental results show that the mean Average Precision (mAP) of CP-SSD reaches 77.8%, which is 0.6% higher than that of SSD, and that the detection effect is significantly improved on images in which the object is difficult to distinguish from the background.

1. Introduction

Object detection is one of the main tasks of image processing. Its main purpose is to accurately locate and classify objects in images. It has been widely used in many fields such as face recognition, road detection, and driverless cars. Traditional object detection methods, such as the Histogram of Oriented Gradients (HOG) [1] and the Scale Invariant Feature Transform (SIFT) [2], are based on hand-crafted features (e.g., RGB color, texture, Gabor filters and gradients). Hand-crafted features lack sufficient discriminative representation, generalize poorly, and are easily affected by low image contrast. As a result, performing object detection on large and complex datasets with them is difficult and time-consuming.
Deep convolutional neural network methods promote the understanding of dynamic objects. However, they still face the challenges of a lack of rich semantic features and an insufficient understanding of context information. In the Region-Convolutional Neural Network (R-CNN) series, the selection of region proposals and the repeated convolution of feature maps greatly increase the time and complexity of object detection. R-CNN [3] is several times more accurate than traditional algorithms based on HOG and SIFT. However, the way R-CNN obtains region proposals by selective search loses a lot of contextual information. The Single Shot MultiBox Detector (SSD) [4] algorithm uses an end-to-end network structure that removes the region proposal step of R-CNN. It directly inputs the entire image into the network and predicts objects of different sizes using feature maps of different scales in a Convolutional Neural Network (CNN) [5]. The backbone network, Visual Geometry Group 16 (VGG16) [6], generates low-level detection feature maps, on top of which several further detection feature maps are constructed, so that semantic information is learned in a layered manner. However, for low-level features there is usually no appropriate strategy to fully understand contextual information and to capture strong semantic information, which makes complex scenarios difficult to understand. This may result in inaccurate detection of the object, and imperfect low-level feature information can in turn degrade high-level semantic information. At the same time, the receptive field of each layer in a CNN is fixed, whereas objects in natural images appear at inconsistent scales, which may impair detection performance. For small objects, feature extraction is not easy, which may result in a loss of information and an inability to detect them.
Based on the above analysis, a new single-shot network model, CP-SSD, is designed to alleviate these problems in SSD. Two modules, context information scene perception and semantic activation, are added to the original SSD. The context information scene perception module uses different convolution kernels to perceive objects of different sizes and combines their important context information. The contextual information of different regions helps to distinguish objects of various categories from the background more accurately. In addition, inspired by SE-Net [7], a semantic activation module is applied to the additional detection feature maps built in SSD. The semantic activation module uses a self-attention mechanism to learn the relationship between channels and objects and to learn the weights of different channels.

2. Related Work

In recent years, CNNs have achieved great success in computer vision tasks such as image classification [8,9,10,11,12], segmentation [13,14] and object detection [3,4,15,16,17,18,19,20,21]. Among them, object detection is a basic task that has been extensively studied. Two frameworks have been proposed by the research community for object detection: the two-stage framework and the single-stage framework. In the two-stage framework, such as the R-CNN series, region proposals with different scales and aspect ratios are predicted from feature maps extracted by a CNN. Classification and regression are then carried out based on features extracted from the region proposals. R-CNN [3] introduced the deep learning mechanism into object detection for the first time. It generates region proposals that define the set of candidate detections available to the detector; each region proposal is fed into a large CNN to extract features, category-specific linear Support Vector Machines (SVMs) perform classification, and the object location is finally refined by regression. SPP-Net [17] adds a spatial pyramid pooling layer to the R-CNN network structure, which removes R-CNN's constraint on the size of the input region proposals and hence helps to improve detection accuracy. Fast R-CNN [18] introduces a region of interest (RoI) pooling layer based on SPP-Net, which reduces repeated convolution computation and greatly improves detection speed. Faster R-CNN [19] introduces the region proposal network (RPN) into the proposal-extraction stage of Fast R-CNN. Region proposals are generated on the convolutional feature map of the last layer of the RPN module and fed into the RoI pooling layer of Fast R-CNN, which optimizes proposal selection, reduces repeated feature extraction, and improves both the accuracy of region extraction and the training speed. Mask R-CNN [20] improves the RoI pooling layer of Faster R-CNN [19] into RoIAlign, adopting bilinear interpolation to reduce the position error of the bounding-box regression; it also adds a mask-generation task, which improves detection accuracy to some extent.
In single-stage frameworks, such as the You Only Look Once (YOLO) [21] series and SSD, object classifiers and regressors are applied in a dense manner, without object-based pruning; they all classify and regress a set of pre-computed anchors. The YOLO algorithm uses an end-to-end training mode and a compact network structure. Although its accuracy is slightly worse than that of the R-CNN series, it is much faster. It reduces the error rate of background detections and makes better use of the global information of the image. SSD also improves speed considerably; it uses a backbone network (for example, VGG16) to generate a low-level detection feature map and, based on this, constructs several layers of detection feature maps to learn semantic information in a layered manner, with lower layers detecting smaller objects and higher layers detecting larger objects, thereby eliminating region proposals and the subsequent pixel-resampling stages.

3. CP-SSD

CP-SSD (Context Perception SSD) is a single-shot object detection network based on SSD, which consists of three main parts: the SSD model; the context information scene perception module, which captures local context information at different scales; and the semantic activation module, which enriches semantic information in a self-supervised manner. Please refer to Figure 1 for the structure of CP-SSD.
In SSD, VGG16 is used as the backbone network to generate the low-level detection feature map U. The feature map U is then continuously downsampled through a series of convolutional layers with a stride of 2 (i.e., fc7 to conv9_2), and anchors of different sizes and aspect ratios are applied in a hierarchical manner, so as to detect objects from small to large sizes.
In the context information scene perception module, we use multiple dilated convolution layers in parallel, each with a different dilation rate. The larger the dilation rate, the larger the receptive field of the convolution kernel. The module performs feature extraction on the feature map U through convolution kernels with different receptive fields, so that the model can perceive changes in context information across different scales and different sub-regions. In this way, the loss of feature information is reduced and the image is understood more comprehensively.
In the deeper detection layers, the higher-level detection feature maps are enhanced with the semantic activation module. In order to detect objects of different sizes, the feature maps are downsampled from fc7 to conv9_2, which reduces their resolution and increases the receptive field of the model. However, semantic and location information is lost at each downsampling step. Therefore, the semantic activation module is applied to fc7 through conv9_2 to learn the relationship between channels and objects by self-supervised learning, so as to adjust and enrich the semantic information.

3.1. Dilated Convolution and Receptive Field

The dilated convolution [13] increases the receptive field of the convolution kernel without introducing extra parameters. The 1-D dilated convolution is defined as follows:
$$y[i] = \sum_{k=1}^{K} x[i + r \cdot k]\, w[k]$$
Here, $x[i]$ denotes the input signal, $y[i]$ denotes the output signal, $r$ denotes the dilation rate, $w[k]$ denotes the $k$-th parameter of the convolution kernel, and $K$ is the size of the convolution kernel. In the standard convolution, $r = 1$.
The 2-D dilated convolution is constructed by inserting zeros between the weights of the convolution kernel. For a convolution kernel of size $k \times k$, the size of the resulting dilated kernel is $k_d \times k_d$, where $k_d = k + (k-1) \times (r-1)$. Therefore, the larger the dilation rate $r$, the larger the receptive field of the convolution kernel. For example, for a convolution kernel with $k = 3$ and $r = 4$, the corresponding receptive field size is 9. Figure 2 shows the dilated convolution kernel for different dilation rates; the dark portions denote the effective weights and the white portions denote the inserted zeros.
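To make the relation between dilation rate and receptive field concrete, the following minimal PyTorch sketch (PyTorch is the framework used in Section 4) builds a 3 × 3 convolution with dilation rate r = 4 and checks the effective kernel size given by the formula above. The channel sizes and the 38 × 38 input are illustrative assumptions, not values fixed by the paper.

```python
import torch
import torch.nn as nn

def effective_kernel_size(k: int, r: int) -> int:
    # k_d = k + (k - 1) * (r - 1): size of the dilated kernel
    return k + (k - 1) * (r - 1)

# A 3 x 3 convolution with dilation rate r = 4; padding = 4 keeps the
# spatial size unchanged because the effective kernel size is 9.
conv = nn.Conv2d(in_channels=512, out_channels=256, kernel_size=3,
                 dilation=4, padding=4)

x = torch.randn(1, 512, 38, 38)      # e.g., a conv4_3-sized SSD feature map
y = conv(x)
print(effective_kernel_size(3, 4))   # 9
print(y.shape)                       # torch.Size([1, 256, 38, 38])
```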

3.2. Context Information Scene Perception Module

In object detection, the objects to be detected usually have different scales, so the feature map must contain feature information from receptive fields at different scales. In deep learning, the size of the receptive field can be roughly regarded as the degree to which the model exploits context information. At higher levels, however, the network often fails to combine previously important semantic information. Inspired by PSPNet [22], a context information scene perception module was designed, which achieves this goal through parallel dilated convolutions with different dilation rates. The same feature map is input to these convolutional layers, and different dilation rates $d$ give the convolution kernels different receptive fields. Feature information of different sizes is then sampled, and finally the output feature maps are concatenated. The structure of the context information scene perception module is shown in Figure 3. Firstly, a $1 \times 1$ convolution is used to reduce the number of channels of the feature map $U \in \mathbb{R}^{W \times H \times 512}$, yielding a feature map $U' \in \mathbb{R}^{W \times H \times 256}$. Then, dilated convolutions with four different dilation rates, $(d_1, d_2, d_3, d_4) = (1, 2, 4, 6)$, are applied to $U'$ in parallel, producing the feature maps $V_1 \in \mathbb{R}^{W \times H \times 256}$, $V_2 \in \mathbb{R}^{W \times H \times 256}$, $V_3 \in \mathbb{R}^{W \times H \times 128}$, and $V_4 \in \mathbb{R}^{W \times H \times 128}$. Finally, the feature maps are concatenated to obtain the final feature map $Z = [U', V_1, V_2, V_3, V_4] \in \mathbb{R}^{W \times H \times 1024}$.
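The following PyTorch sketch reconstructs the module from the channel sizes listed above (a 512-channel input U, a 1 × 1 reduction to 256 channels, four parallel dilated convolutions producing 256, 256, 128 and 128 channels, and concatenation to 1024 channels). The 3 × 3 kernel size of the dilated branches and the padding choice are assumptions made so that the spatial size of U is preserved; this is a sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ContextScenePerception(nn.Module):
    """Sketch of the context information scene perception (CISP) module."""

    def __init__(self, in_channels=512, reduced=256,
                 branch_channels=(256, 256, 128, 128), dilations=(1, 2, 4, 6)):
        super().__init__()
        # 1 x 1 convolution reducing U (512 channels) to U' (256 channels)
        self.reduce = nn.Conv2d(in_channels, reduced, kernel_size=1)
        # Parallel dilated convolutions with rates (1, 2, 4, 6)
        self.branches = nn.ModuleList([
            nn.Conv2d(reduced, c, kernel_size=3, dilation=d, padding=d)
            for c, d in zip(branch_channels, dilations)
        ])

    def forward(self, u):
        u_prime = self.reduce(u)                 # (N, 256, H, W)
        vs = [branch(u_prime) for branch in self.branches]
        return torch.cat([u_prime] + vs, dim=1)  # (N, 1024, H, W)

z = ContextScenePerception()(torch.randn(1, 512, 38, 38))
print(z.shape)  # torch.Size([1, 1024, 38, 38])
```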

3.3. Semantic Activation Block

The semantic activation module adjusts the interdependence between contextual feature information and channels by self-supervised learning, selectively enhancing useful semantic information through a self-attention mechanism and suppressing harmful feature information.
The semantic activation module is shown in Figure 4 and consists of three steps: spatial pooling $f_{gap}(\cdot)$, channel-wise attention learning $f_{fcl}(\cdot, \theta)$, and adaptive channel weighting $f_{fuse}(\cdot, \cdot)$.
Spatial pooling: For a given input $X \in \mathbb{R}^{H \times W \times C}$, $X$ is globally pooled by $f_{gap}(\cdot)$ to generate $V \in \mathbb{R}^{C}$, whose $c$-th element is obtained as follows:
$$v_c = f_{gap}(x_c) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} x_c(i, j)$$
Channel-wise attention learning: In order to make full use of the information summarized in $V$, the operation $f_{fcl}(\cdot, \theta)$ is used to capture the correlations between channels. To do this, a gating mechanism with a sigmoid activation function is used:
$$S = f_{fcl}(V, \theta) = \sigma(G(V, \theta)) = \sigma\big(\theta_2\, \varphi(\theta_1 V + b_1) + b_2\big)$$
Here $\varphi$ denotes the ReLU activation function, $\sigma$ denotes the sigmoid activation function, $\theta_1 \in \mathbb{R}^{C' \times C}$, and $\theta_2 \in \mathbb{R}^{C \times C'}$. In order to reduce the complexity of the model, two fully connected layers form a bottleneck: the dimension is first reduced to $C'$ and then restored to $C$. In the experiments, we set $C' = \frac{1}{2}C$ in all modules.
Adaptive channel weighting: The final output selects the relevant semantic features through $f_{fuse}(\cdot, \cdot)$, which assigns larger weights to related semantic information and smaller weights to unrelated semantic information when generating the final feature map $\tilde{X}$. The $c$-th channel of $\tilde{X}$ is defined as:
$$\tilde{x}_c = f_{fuse}(x_c, s_c) = x_c \cdot s_c$$
Here $\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_C]$ and $x_c \in \mathbb{R}^{H \times W}$.
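A minimal PyTorch sketch of the semantic activation block, following the three steps above: global average pooling, a two-layer bottleneck with $C' = \frac{1}{2}C$, a sigmoid gate, and channel-wise rescaling. The layer names and the example input size (a 1024-channel, fc7-sized map) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SemanticActivationBlock(nn.Module):
    """Sketch of the semantic activation block: spatial pooling f_gap,
    channel-wise attention learning f_fcl, and channel rescaling f_fuse."""

    def __init__(self, channels):
        super().__init__()
        reduced = channels // 2                    # C' = C / 2, as in Section 3.3
        self.fc1 = nn.Linear(channels, reduced)    # theta_1, b_1
        self.fc2 = nn.Linear(reduced, channels)    # theta_2, b_2

    def forward(self, x):                          # x: (N, C, H, W)
        v = x.mean(dim=(2, 3))                     # f_gap: global average pooling
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(v))))  # f_fcl
        return x * s.unsqueeze(-1).unsqueeze(-1)   # f_fuse: reweight each channel

x_tilde = SemanticActivationBlock(1024)(torch.randn(1, 1024, 19, 19))
print(x_tilde.shape)  # torch.Size([1, 1024, 19, 19])
```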

4. Analysis and Discussion of Experimental Results

We implemented the proposed CP-SSD model with the help of the PyTorch [23] deep learning framework. The server used for training was configured with an Intel(R) Xeon(R) E5-2620 v3 2.40 GHz CPU, a Tesla K80 GPU and a 64-bit Ubuntu system.

4.1. Data Sets and Data Augmentation

PASCAL VOC [24] is a benchmark dataset for visual object classification and detection that includes 20 categories. The VOC2007 test set is widely used by the research community for validating the performance of object detection models. In our training process, all trainval samples of VOC2007 and VOC2012 are used as the training set. The training set contains 16,551 images with 40,058 objects and the testing set contains 4952 images with 12,032 objects. In this dataset, smaller objects account for a large proportion of the objects.
In order to make the model more robust to various input object sizes and shapes, each training image is randomly sampled in one of the following ways:
(1) The original image is used without any further processing; (2) a patch of the original image is sampled so that its minimum overlap with the objects is 0.1, 0.3, 0.5, 0.7 or 0.9; (3) a portion of the original image is cropped randomly.
After the above sampling step, each sampled region was resized to a fixed size (300 × 300) and horizontally flipped with a probability of 0.5.
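The per-image sampling can be sketched roughly as follows. This is a simplified illustration: the overlap-constrained sampling of option (2) is only indicated by a comment, and the function name is hypothetical.

```python
import random
from PIL import Image

def sample_training_image(img: Image.Image) -> Image.Image:
    """Rough sketch of the per-image sampling described in Section 4.1."""
    choice = random.choice(["original", "overlap_patch", "random_crop"])
    if choice == "random_crop":
        w, h = img.size
        cw = int(w * random.uniform(0.3, 1.0))
        ch = int(h * random.uniform(0.3, 1.0))
        left, top = random.randint(0, w - cw), random.randint(0, h - ch)
        img = img.crop((left, top, left + cw, top + ch))
    elif choice == "overlap_patch":
        # In the full pipeline this patch must reach a minimum overlap of
        # 0.1 / 0.3 / 0.5 / 0.7 / 0.9 with at least one ground-truth box;
        # the box bookkeeping is omitted here.
        pass
    img = img.resize((300, 300))                    # fixed 300 x 300 input
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)  # horizontal flip, p = 0.5
    return img
```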

4.2. Experimental Parameter Settings

In order to compare the effectiveness of the CP-SSD network model with SSD, we used the same training settings as SSD. The model was first trained with a learning rate of $10^{-3}$ for 80k iterations, then with $10^{-4}$ for 20k iterations, and finally with $10^{-5}$ for another 20k iterations. The momentum was fixed at 0.9, the weight decay was set to 0.0005, the batch size was 32, and the backbone of the model was initialized with pre-trained VGG16 weights.
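As a sketch of these settings, assuming an SGD optimizer (the optimizer is not named explicitly in the text) and a small placeholder network standing in for CP-SSD:

```python
import torch
import torch.nn as nn

cp_ssd = nn.Conv2d(3, 16, kernel_size=3)   # placeholder standing in for the CP-SSD network

optimizer = torch.optim.SGD(cp_ssd.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)

def learning_rate(iteration: int) -> float:
    """1e-3 for 80k iterations, 1e-4 for 20k, 1e-5 for the final 20k."""
    if iteration < 80_000:
        return 1e-3
    if iteration < 100_000:
        return 1e-4
    return 1e-5

for it in range(120_000):
    for group in optimizer.param_groups:
        group["lr"] = learning_rate(it)
    # ... forward a batch of 32 images, compute the multibox loss,
    # loss.backward(), optimizer.step(), optimizer.zero_grad()
```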

4.3. SSD with Context Information Scene Perception Module

In Table 1, we compare the detection performance of SSD with and without the context information scene perception module (CISP). For general object detection, the overall performance reached 77.6% mAP after applying the module to SSD, an improvement of 0.4% over the original SSD. In particular, for samples in which the background and objects are similar, the original SSD cannot detect some objects because it cannot understand the context information. By using the context information scene perception module to perceive and fuse local context information at different scales, the model can understand some complex scenes and separate the objects from the background.

4.4. SSD with Semantic Activation Block

In Table 2, we show the detection performance of SSD with and without the semantic activation block (SAB). For high-level, low-resolution feature maps, the self-supervised adjustment of channel weights enhances useful feature information and thus better distinguishes objects from the background. From the table, we can see that the semantic activation module improves the performance of the model by 0.4%, which indicates its effectiveness. Compared with the original SSD, although the semantic activation module increases the number of parameters and the amount of computation, its cost in terms of running time is negligible.

4.5. Comparison of Methods

In Table 3, we compare R-CNN, YOLO, and SSD-based methods on the VOC 2007 test dataset. Among R-CNN-based algorithms, R-CNN [3] was the first to use a CNN for object detection. It has serious shortcomings in the selection of region proposals: too many proposals are selected, which requires a lot of memory, and the normalization of network inputs loses a great deal of context information and features, resulting in a mAP of only 50.2%. To address the feature loss caused by image normalization in R-CNN, Fast R-CNN [18] inputs the whole image into the network and extracts fixed-length feature vectors from the feature map through a region of interest (RoI) pooling layer, producing classification and coordinate information and eventually increasing the mAP to 70.0%. However, Fast R-CNN still does not solve the problem caused by the roughly 2000 region proposals generated by selective search. Therefore, Faster R-CNN [19] proposes the RPN module, which utilizes nine kinds of anchors with different areas and aspect ratios and completes object detection entirely with CNNs; its mAP reaches 76.4%. The YOLO [21] algorithm uses an end-to-end network that removes the separate region proposal step, combining proposal selection with the detection network. Due to its simple network structure, its detection speed is much higher than that of R-CNN-based algorithms, but it imposes strong restrictions on the position and size of objects: its mAP is only 57.9%, and small objects suffer in particular. In SSD [4], low-level feature maps are used separately to improve the detection of small objects, but the semantic information is insufficient; the additional detection layers use downsampling to increase the receptive field, but the reduced resolution of the feature maps causes a large loss of feature information, and the mAP on the testing dataset is only 77.2%. The mAP of CP-SSD on the testing dataset reaches 77.8%, which is 0.6% higher than the original SSD.
In CP-SSD, we use the CISP module to fuse context and prior information across different scales and different sub-regions of the feature map U. In the context information scene perception module, convolutions with different dilation rates are used in parallel to capture objects of different sizes, which lets the model understand local context information more comprehensively and alleviates SSD's lack of understanding of semantic scene and context information. On the higher-level feature maps, we propose the semantic activation module to enhance the semantic information. In this module, global average pooling removes the spatial information; the relationship between channels and objects is learned in a self-supervised way, useful feature information is promoted, irrelevant feature information is restrained, and the semantic information is adjusted and enriched. In addition, the SSD321 model in Table 3 uses ResNet101 instead of VGG16 as the backbone network. The network structure of ResNet101 is deeper than that of VGG16 and its feature extraction ability is stronger. Nevertheless, our proposed method with VGG16 (77.8%) still performs better than the ResNet101-based SSD (77.1%), which highlights the effectiveness of CP-SSD.

4.6. Detection Examples

In Figure 5, we visualize some example images and compare the localization results of CP-SSD with those of the original SSD. As shown in Figure 5, in the upper two rows of images, SSD cannot locate the person on horseback or the boat, while CP-SSD can. CP-SSD uses the semantic activation module to capture more prior information before downsampling, so it can understand the image more accurately. In the lower two rows, the boats and buildings are similar in shape and color, and the color of the bird is similar to the surrounding environment. SSD cannot accurately detect the positions of the ships and birds because it lacks an understanding of the scene. CP-SSD understands the contextual prior information more fully, so it can better distinguish the background from the detected objects and determine the locations of the ship and the bird through contextual information.

5. Conclusions

In this paper, we proposed a single-shot object detection method, CP-SSD, to alleviate the insufficient understanding of contextual scene information in SSD. We introduced a context information scene perception module that captures contextual information at different scales through parallel dilated convolutions with different dilation rates, so as to improve the model's ability to understand the scene. Meanwhile, the semantic activation module was used to enrich the semantic information of the deep detection feature maps. We validated CP-SSD on the PASCAL VOC 2007 benchmark dataset. The experimental results showed that, compared with SSD, YOLO, Faster R-CNN and other methods, the proposed CP-SSD performs better on the test set, with a mAP 0.6% higher than that of SSD. In future research, we will work on balancing global feature extraction and improving the accuracy of small object detection.

Author Contributions

Y.J. contributed towards the algorithms and the analysis. As the supervisor of Y.J., she proofread the paper several times and provided guidance throughout the whole preparation of the manuscript. T.P. and N.T. contributed towards the algorithms, the analysis, and the simulations and wrote the paper and critically revised the paper. All authors read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61163036), The program of NSFC Financing for Natural Science Fund in 2016 (No. 1606RJZA047), The institutes and Universities Graduate Tutor Project in Gansu (No. 1201-16), The Third Period of the Key Scientific Research Project of Knowledge and Innovation Engineering of the Northwest Normal University (No. nwnu-kjcxgc-03-67).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the International Conference on Computer Vision & Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  2. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  3. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  4. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37. [Google Scholar]
  5. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks; The MIT Press: Cambridge, MA, USA, 1995; p. 3361. [Google Scholar]
  6. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  7. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  8. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  9. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  10. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  11. Lee, J.; Kim, E.; Lee, S.; Lee, J.; Yoon, S. FickleNet: Weakly and Semi-supervised Semantic Image Segmentation using Stochastic Inference. arXiv 2019, arXiv:1902.10421. [Google Scholar]
  12. Wang, Y.; Xie, L.; Liu, C.; Qiao, S.; Zhang, Y.; Zhang, W.; Tian, Q.; Yuille, A. Sort: Second-order response transform for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1359–1368. [Google Scholar]
  13. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv 2014, arXiv:1412.7062. [Google Scholar]
  14. Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  15. Dai, J.; Li, Y.; He, K.; Sun, J. R-fcn: Object detection via region-based fully convolutional networks. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 379–387. [Google Scholar]
  16. Tang, P.; Wang, X.; Bai, X.; Liu, W. Multiple instance detection network with online instance classifier refinement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2843–2851. [Google Scholar]
  17. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
  18. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  19. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  20. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  21. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  22. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
  23. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in pytorch. In Proceedings of the NIPS 2017 Autodiff Workshop, Long Beach, CA, USA, 9 December 2017. [Google Scholar]
  24. Everingham, M.; Eslami, S.A.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes challenge: A retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
  25. Kong, T.; Sun, F.; Yao, A.; Liu, H.; Lu, M.; Chen, Y. Ron: Reverse connection with objectness prior networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5936–5944. [Google Scholar]
  26. Shrivastava, A.; Gupta, A. Contextual priming and feedback for faster r-cnn. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 330–348. [Google Scholar]
Figure 1. Context Perception-Single Shot MultiBox Detector (CP-SSD) network structure.
Figure 2. Dilated Convolution with Different Dilated Rates.
Figure 3. Context information scene perception module.
Figure 4. Semantic activation block.
Figure 5. Partial detection example.
Table 1. Test results of SSD and SSD+CISP.
Method | mAP | Aero | Bike | Bird | Boat | Bottle | Bus | Car | Cat | Chair | Cow
SSD [4] | 77.2 | 78.8 | 85.3 | 75.7 | 71.5 | 49.1 | 85.7 | 86.4 | 87.8 | 60.6 | 82.7
SSD+CISP | 77.6 | 80.5 | 85.1 | 76.0 | 71.1 | 52.9 | 86.1 | 86.4 | 87.1 | 61.3 | 81.8
Method | mAP | Table | Dog | Horse | Mbike | Person | Plant | Sheep | Sofa | Train | tv
SSD [4] | 77.2 | 76.5 | 84.9 | 86.7 | 84.0 | 79.2 | 51.3 | 77.5 | 78.7 | 86.7 | 76.2
SSD+CISP | 77.6 | 76.7 | 84.5 | 86.4 | 85.0 | 79.0 | 53.0 | 76.5 | 80.9 | 85.5 | 77.3
Table 2. Test results of SSD and SSD+semantic activation block (SAB).
Method | mAP | Aero | Bike | Bird | Boat | Bottle | Bus | Car | Cat | Chair | Cow
SSD [4] | 77.2 | 78.8 | 85.3 | 75.7 | 71.5 | 49.1 | 85.7 | 86.4 | 87.8 | 60.6 | 82.7
SSD+SAB | 77.6 | 81.3 | 84.9 | 75.7 | 72.0 | 50.7 | 85.4 | 86.4 | 87.9 | 61.8 | 82.3
Method | mAP | Table | Dog | Horse | Mbike | Person | Plant | Sheep | Sofa | Train | tv
SSD [4] | 77.2 | 76.5 | 84.9 | 86.7 | 84.0 | 79.2 | 51.3 | 77.5 | 78.7 | 86.7 | 76.2
SSD+SAB | 77.6 | 77.7 | 85.6 | 87.7 | 81.9 | 79.1 | 52.4 | 77.5 | 81.6 | 84.7 | 76.3
Table 3. Test results of CP-SSD in PASCAL VOC2007.
Method | Backbone | mAP | Aero | Bike | Bird | Boat | Bottle | Bus | Car | Cat | Chair | Cow
RCNN [3] | AlexNet | 50.2 | 67.1 | 64.1 | 46.7 | 32.0 | 30.5 | 56.4 | 57.2 | 65.9 | 27.0 | 47.3
Fast [18] | VGG16 | 70.0 | 77.0 | 78.1 | 69.3 | 59.4 | 38.3 | 81.6 | 78.6 | 86.7 | 42.8 | 78.8
Faster [19] | VGG16 | 73.2 | 76.5 | 79.0 | 70.9 | 65.5 | 52.1 | 83.1 | 84.7 | 86.4 | 52.0 | 81.9
Faster [8] | ResNet101 | 76.4 | 79.8 | 80.7 | 76.2 | 68.3 | 55.9 | 85.1 | 85.3 | 89.8 | 56.7 | 87.8
RON384++ [25] | VGG16 | 77.6 | 86.0 | 82.5 | 76.9 | 69.1 | 59.2 | 86.2 | 85.5 | 87.2 | 59.9 | 81.4
Shrivastava et al. [26] | VGG16 | 76.4 | 79.3 | 80.5 | 76.8 | 72.0 | 58.2 | 85.1 | 86.5 | 89.3 | 60.6 | 82.2
YOLO [21] | Darknet | 57.9 | 77.0 | 67.2 | 57.7 | 38.3 | 22.7 | 68.3 | 55.9 | 81.4 | 36.2 | 60.8
SSD321 [4] | ResNet101 | 77.1 | 76.3 | 84.6 | 79.3 | 64.6 | 47.2 | 85.4 | 84.0 | 88.8 | 60.1 | 82.6
SSD300 [4] | VGG16 | 77.2 | 78.8 | 85.3 | 75.7 | 71.5 | 49.1 | 85.7 | 86.4 | 87.8 | 60.6 | 82.7
CP-SSD (SSD+CISP+SAB) | VGG16 | 77.8 | 83.9 | 86.3 | 80.1 | 69.9 | 50.6 | 86.5 | 85.6 | 88.4 | 62.8 | 79.4
Method | Backbone | mAP | Table | Dog | Horse | Mbike | Person | Plant | Sheep | Sofa | Train | tv
RCNN [3] | AlexNet | 50.2 | 40.9 | 66.6 | 57.8 | 65.9 | 53.6 | 26.7 | 56.5 | 38.1 | 52.8 | 50.2
Fast [18] | VGG16 | 70.0 | 68.9 | 84.7 | 82.0 | 76.6 | 69.9 | 31.8 | 70.1 | 74.8 | 80.4 | 70.4
Faster [19] | VGG16 | 73.2 | 65.7 | 84.8 | 84.6 | 77.5 | 76.7 | 38.8 | 73.6 | 73.9 | 83.0 | 72.6
Faster [8] | ResNet101 | 76.4 | 69.4 | 88.3 | 88.9 | 80.9 | 78.4 | 41.7 | 78.6 | 79.8 | 85.3 | 72.0
RON384++ [25] | VGG16 | 77.6 | 73.3 | 85.9 | 86.8 | 82.2 | 79.6 | 52.4 | 78.2 | 76.0 | 86.2 | 78.0
Shrivastava et al. [26] | VGG16 | 76.4 | 69.2 | 87.0 | 87.2 | 81.6 | 78.2 | 44.6 | 77.9 | 76.7 | 82.4 | 71.9
YOLO [21] | Darknet | 57.9 | 48.5 | 77.2 | 72.3 | 71.3 | 63.5 | 28.9 | 52.2 | 54.8 | 73.9 | 50.8
SSD321 [4] | ResNet101 | 77.1 | 76.9 | 86.7 | 87.2 | 85.4 | 79.1 | 50.8 | 77.2 | 82.6 | 87.3 | 76.6
SSD300 [4] | VGG16 | 77.2 | 76.5 | 84.9 | 86.7 | 84.0 | 79.2 | 51.3 | 77.5 | 78.7 | 86.7 | 76.2
CP-SSD (SSD+CISP+SAB) | VGG16 | 77.8 | 77.9 | 83.1 | 88.1 | 84.5 | 80.0 | 53.5 | 74.1 | 77.1 | 86.4 | 77.0
