
Detecting and Localizing Dents on Vehicle Bodies Using Region-Based Convolutional Neural Network

1 School of Industrial Engineering, University of Ulsan, Ulsan 44610, Korea
2 Department of Industrial and Systems Engineering, Dongguk University, Seoul 04620, Korea
3 Department of Computer Science, Pohang University of Science and Technology, Pohang 37673, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(4), 1250; https://doi.org/10.3390/app10041250
Submission received: 4 January 2020 / Revised: 8 February 2020 / Accepted: 10 February 2020 / Published: 13 February 2020
(This article belongs to the Special Issue Advances in Deep Learning Ⅱ)

Abstract

Detection and localization of dents that occur on a vehicle body during manufacturing are critical to achieving the appearance quality of a new vehicle. This study proposes a region-based convolutional neural network (R-CNN) to detect and localize dents for vehicle body inspection. For better feature extraction, this study employed a lighting system that highlights dents in an image by projecting Mach bands (bright-dark stripes). The R-CNN was trained using the images highlighted by the Mach bands, and heat-maps were prepared from the classification scores estimated by the R-CNN to localize dents. This study applied the proposed R-CNN to the inspection of dents on the surface of a car body and quantitatively analyzed its performance. The detection accuracy for dents was 98.5% on the testing data set, and the mean absolute error between the actual and estimated dent locations was 13.7 pixels, indicating that they were close to one another. The proposed R-CNN could be applied to detect and localize surface dents during the manufacture of vehicle bodies in the automobile industry.

1. Introduction

Customers who buy a new car naturally expect that there will be no defects on the vehicle's exterior. If any defects exist on the surface of a vehicle body, customer loyalty to the automobile manufacturer is significantly reduced, and the customer's intention to purchase a new car from the manufacturer may be withdrawn [1]. Therefore, the automobile industry performs a rigorous inspection to repair (touch up) any exterior defects during the final stage of manufacturing [2,3].
Two limitations (low inspection accuracy and eye fatigue) have been raised in the automobile industry because workers inspect vehicle exteriors for defects with the naked eye. First, small defects on the vehicle body are rarely detected through visual inspection. For example, Armesto et al. [4] reported that 80% of minor defects on the vehicle body go undetected in visual inspection, and Tolba et al. [5] stated that 25% to 40% of major defects also go undetected. Second, the high illuminance of an inspection room may induce eye fatigue and discomfort in workers. A high illuminance of 2000 lux can improve the visual threshold, since visual acuity increases with illuminance [6,7,8,9,10,11]; consequently, automobile assembly lines have generally employed a high level of illumination (e.g., 2000 lux) at the inspection workstation. Although such high illuminance can improve inspection accuracy in the short term, it may lead to eye fatigue in the long term and eventually deteriorate inspection performance [12].
A few studies have used image-processing techniques in vehicle exterior inspection to improve inspection accuracy and overcome the limitations of visual inspection; however, there is still room to improve the inspection accuracy for small defects, which are difficult to detect. Chung and Chang [3] and Barber et al. [13] attempted to detect vehicle exterior defects by calculating surface curvature characteristics from images measured with a 3D scanner and a 2D camera, respectively; however, they did not report the defect detection accuracy of their methods. Döring et al. [14] developed discriminant models, including decision trees, fuzzy decision trees, neuro-fuzzy classification, and mixed fuzzy rules, on inspection variables (e.g., volume and depth) calculated from car body images acquired with a 3D scanner. Their results revealed that the decision trees exhibited the highest defect detection rate, 87.2%. Lastly, Kamani et al. [2] applied a support vector machine to vehicle body images taken by a camera and reported an average defect detection rate of 98.8%; however, that study tested only simple defects clear enough to be detected by the naked eye.
Neural networks have been widely employed in many areas, such as classification [15,16,17] and forecasting [18,19]. The convolutional neural network (CNN), a type of neural network, has demonstrated preeminent performance in detecting defects in various fields with good accuracy. Kwon et al. [15] established CNN models for different materials to detect defects and reported an average accuracy of 92.7%. Faghih-Roohi et al. [16] developed a CNN model for detecting defects on rail surfaces and reported an accuracy of 92%. Wang et al. [17] proposed a fast and robust CNN model for detecting defects on textured surfaces with an average accuracy of 99.8%. However, a CNN cannot pinpoint the locations of defects in an image (localization).
More recently, a few studies with region-based CNN (R-CNN) models have shown excellent performance in both detection and localization tasks. Yu et al. [20] proposed an R-CNN algorithm to detect strawberry fruit for a harvesting robot, with a reported accuracy of 95.78%. Ferguson et al. [21] developed an R-CNN with 95.7% accuracy that can detect and localize defects in the GDXray Castings dataset. However, R-CNN models have not yet been utilized to detect and localize minor dents on vehicle body surfaces.
This study developed an automatic method for detecting and localizing dents on a vehicle body surface with an R-CNN model. For better feature extraction, this study employed a lighting system that highlights a dent in an image by projecting Mach bands (bright-dark stripes): light cast on the target surface diffuses around a dent and distorts the Mach bands. The R-CNN, using the images enhanced by the Mach bands, detects and localizes dents in images of the car body surface. This study applied the proposed R-CNN method to the inspection of dents on the surface of a car body and analyzed its performance quantitatively.

2. Proposed R-CNN Model

2.1. Dent Highlight in an Image with Mach Bands

A special lighting device with light-emitting diodes (LEDs) and a stripe cover, as shown in Figure 1a, was employed to highlight dents in an image using Mach bands [12]. The overall size of the device was 20 cm (width) × 9 cm (height) × 3.4 cm (thickness); the bright and dark stripes measured 0.5 cm (width) × 6 cm (height) and 1 cm (width) × 6 cm (height), respectively. The intensity of the LEDs (1500 lumens) could be adjusted with a rotary knob to find the best light intensity for different defect types and ambient light conditions. The stripe cover created Mach bands (dark-bright linear stripes) by blocking light behind the stripes and passing light between them. The Mach bands created a contrast pattern on the vehicle surface of interest and were distorted around dents because of light diffusion, as illustrated in Figure 1b. On a smooth surface, light beams are reflected back at the same concentration; an irregular surface, however, scatters light beams (diffuse light), as shown in Figure 1c. This diffuse light distorts the Mach bands around dents.

2.2. R-CNN Topology

The R-CNN structure in this study consisted of an input layer, two hidden layers, and an output layer, as illustrated in Figure 2. The input layer, with a single node, accepted a preprocessed grayscale image represented as a matrix of pixel values ranging from 0 to 255 (0: black, 255: white). The region images (mini-patches) input to the network were 32 × 32 pixels, obtained by sliding a window over the preprocessed image from the top-left corner to the bottom-right corner with a stride of 5.
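The sliding-window patch extraction described above can be sketched as follows. This is a minimal illustration assuming the image is a row-major list of pixel rows; the function name `extract_patches` is ours, not from the paper.

```python
def extract_patches(image, patch=32, stride=5):
    """Slide a patch x patch window over a 2-D grayscale image
    (list of pixel rows) from top-left to bottom-right with the
    given stride, collecting each window as a mini-patch."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

# A 120 x 68 preprocessed frame (as in Section 3.1) yields
# ((68-32)//5 + 1) * ((120-32)//5 + 1) = 8 * 18 = 144 patches.
img = [[0] * 120 for _ in range(68)]
print(len(extract_patches(img)))  # 144
```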
The hidden layers consisted of two pairs of convolution and pooling layers. The first hidden layer applied 58 convolution filters of size 6 × 6 with stride 3 and padding 2; a rectified linear unit (ReLU) served as the activation function, followed by 3 × 3 max pooling with stride 3 and padding 2. The second hidden layer was configured identically: 58 convolution filters of size 6 × 6 with stride 3 and padding 2, ReLU activation, and 3 × 3 max pooling with stride 3 and padding 2.
The final stage consisted of a fully connected layer and an output layer. The fully connected layer included 91 neurons, which fed the output layer. The output layer contained two neurons, activated by a softmax function, that classified each input as normal or abnormal (dent).
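The feature-map sizes implied by the topology above can be checked with the standard convolution/pooling output-size formula. This sketch only does the dimension bookkeeping for the stated parameters; it is not the network implementation itself.

```python
def out_size(n, kernel, stride, padding):
    # Standard output-size formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * padding - kernel) // stride + 1

n = 32                     # input mini-patch: 32 x 32 pixels
n = out_size(n, 6, 3, 2)   # conv1: 58 filters, 6x6, stride 3, pad 2 -> 11
n = out_size(n, 3, 3, 2)   # pool1: 3x3 max pool, stride 3, pad 2    -> 5
n = out_size(n, 6, 3, 2)   # conv2: 58 filters, 6x6, stride 3, pad 2 -> 2
n = out_size(n, 3, 3, 2)   # pool2: 3x3 max pool, stride 3, pad 2    -> 2
flat = 58 * n * n          # 58 maps of 2 x 2 -> 232 features
print(n, flat)             # these 232 features feed the 91-node FC layer
```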
Six parameters (kernel number, kernel size, padding size, pooling size, stride size, and number of fully connected nodes) for the R-CNN were determined with an adaptive genetic algorithm (AGA) by referring to [22]. The AGA searched for a satisficing set of parameters by evolutionarily varying the parameters of the R-CNN. The gene for the AGA consisted of 20 binary digits encoding the six parameters (6 digits for kernel number, 3 for kernel size, 2 for padding size, 2 for pooling size, 2 for stride size, and 5 for number of fully connected nodes). Roulette-wheel selection from the population (population size = 20) and one-point crossover were applied to generate the next offspring [23,24]. Mutations were performed on the offspring with a mutation probability (initial value = 5%) that adaptively adjusted depending on the homogeneity of the offspring [25,26,27]. The AGA efficiently explored incumbent solutions for the parameters, as shown in Figure 3, and the incumbent optimum improved rapidly within 50 generations. The satisficing parameters for the R-CNN were 58 for the kernel number, 6 for the kernel size, 2 for the padding size, 3 for the pooling size, 3 for the stride size, and 91 for the number of fully connected nodes, as listed in Table 1.
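The 20-bit gene layout can be illustrated with a simple decoder. Note that the paper does not report how raw bit values map to parameter values (for example, 91 fully connected nodes exceeds the range of a raw 5-bit field, so the actual study must have applied an offset or scaling); a plain binary decode is assumed here purely for illustration, and `decode_gene` is our name.

```python
# Field widths of the 20-bit AGA gene, in order.
FIELDS = [("kernel_number", 6), ("kernel_size", 3), ("padding_size", 2),
          ("pooling_size", 2), ("stride_size", 2), ("fc_nodes", 5)]

def decode_gene(gene):
    """Split a 20-bit gene string into the six R-CNN hyper-parameters.
    A raw binary decode is assumed; the paper's actual bit-to-value
    mapping (offsets/scaling) is not reported."""
    assert len(gene) == sum(w for _, w in FIELDS) == 20
    params, i = {}, 0
    for name, width in FIELDS:
        params[name] = int(gene[i:i + width], 2)
        i += width
    return params

# Example gene whose first five fields decode to the reported optimum
# (58, 6, 2, 3, 3); the last field is capped by its 5-bit range.
print(decode_gene("11101011010111111011"))
```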

2.3. Dent Localization Using Heat-Map

To localize dents in an image, a heat-map was prepared using the last convolution layer of the R-CNN established in this study. We cropped patches from the test image (120 × 68 pixels) by sliding a window of 32 × 32 pixels and fed them to the R-CNN to obtain their classification scores at the last convolution layer. The estimated classification scores were used to form a heat-map, and a bounding box containing a dent was formed, as shown in Figure 4. Lastly, the location of the dent was estimated as the center of the bounding box.
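The heat-map-to-center step can be sketched as below. The thresholding rule and the value 0.5 are our assumptions (the paper does not specify how the bounding box is derived from the scores), and `localize_dent` is a hypothetical name.

```python
def localize_dent(scores, patch=32, stride=5, threshold=0.5):
    """Form a bounding box around heat-map cells whose dent score
    exceeds the threshold and return its pixel-space center.
    scores[r][c] is the classification score of the patch whose
    top-left pixel is (r * stride, c * stride)."""
    hits = [(r, c) for r, row in enumerate(scores)
            for c, s in enumerate(row) if s > threshold]
    if not hits:
        return None  # no dent detected in this image
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    top, bottom = min(rows) * stride, max(rows) * stride + patch
    left, right = min(cols) * stride, max(cols) * stride + patch
    return ((top + bottom) // 2, (left + right) // 2)

# Toy 3 x 4 heat-map with high scores clustered around one spot:
heat = [[0.1, 0.2, 0.1, 0.0],
        [0.1, 0.9, 0.8, 0.1],
        [0.0, 0.7, 0.2, 0.0]]
print(localize_dent(heat))  # (23, 23)
```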

3. Performance Evaluation

3.1. Methods and Material

We used a vehicle body fender (Figure 5a) with 25 artificial defects (diameter: 0.5–10 mm) to collect normal and abnormal images. The fender was fixed 80 cm above the floor on top of a support fixture, as shown in Figure 5b. The illuminance of the room was controlled at around 500 lux, and a researcher seated on an office chair recorded videos. The lighting device was placed 32.5 cm (range: 30–35 cm) from the target inspection surface and was slowly moved over the surface to scan the entire inspection area. A general video camera captured videos while the surface was scanned with the lighting device, and 8017 image frames were extracted from the recorded video clips.
This study preprocessed the extracted images in three steps (grayscale conversion, image segmentation, and labeling). First, 8017 images (120 × 68 pixels) randomly sampled from the video clips were converted to grayscale to generalize the evaluation results by eliminating the effect of car body color. Second, image patches (32 × 32 pixels) were prepared for each image by visiting all locations of the image, and all patches were manually labeled as either normal or abnormal. We then randomly selected an equal number of normal (2200) and abnormal (2200) patches for training and testing the R-CNN.
The normal and abnormal patches were randomly divided into training (roughly 70%) and testing (30%) data sets. The R-CNN was trained with 3000 randomly selected patches (1500 normal, 1500 abnormal) from the total of 4400. Next, the classification accuracy of the R-CNN was quantified with the remaining 1400 patches (700 normal, 700 abnormal), which were not used in training. Lastly, the localization accuracy of the R-CNN was calculated for each whole image and judged correct when an actual dent lay within a bounding box formed by the R-CNN.
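Since the reported counts are balanced per class (1500/700 for each of normal and dent), the split is effectively class-stratified. A minimal sketch of such a split, with `stratified_split` as our own name and indices standing in for patches:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

def stratified_split(n_per_class=2200, n_train_per_class=1500):
    """Class-balanced random split matching the counts in Section 3.1:
    1500 training and 700 testing patch indices per class."""
    split = {}
    for label in ("normal", "dent"):
        idx = list(range(n_per_class))
        random.shuffle(idx)
        split[label] = (idx[:n_train_per_class], idx[n_train_per_class:])
    return split

s = stratified_split()
print(len(s["normal"][0]), len(s["normal"][1]))  # 1500 700
```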

3.2. Results

The overall classification accuracies of the proposed R-CNN model were 100% for the training patches and 98.5% for the testing patches; no significant bias between the two groups was observed. Figure 6 shows examples of normal and abnormal patches identified by the R-CNN model, and Figure 7 shows an example of learning accuracy over iterations.
For the testing patches, the overall classification accuracy of the proposed R-CNN model (98.5%) was superior to that of a plain R-CNN model (88.7%) that we implemented with a generic set of hyper-parameters (kernel number: 10, kernel size: 3, pooling size: 2, stride: 2, number of fully connected nodes: 50). Because our model used the best combination of hyper-parameters found by the adaptive genetic algorithm, it achieved substantially better accuracy than the plain model.
The sensitivity (percentage of dent patches correctly identified) and specificity (percentage of normal patches correctly identified) for the testing patches were 97.9% and 99.2%, respectively. The R-CNN model misclassified 2.1% of the abnormal patches as normal; however, most of these misclassified patches were not distinguishable by human vision either. In addition, 0.8% of the normal patches were misclassified as abnormal.
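The two rates above follow directly from confusion-matrix counts. The counts below are our reconstruction, rounded to whole patches from the reported percentages and the 700-patch test sets, so the specificity comes out at 0.991 rather than exactly 99.2%.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Reconstructed counts for the 700 dent and 700 normal test patches:
# ~2.1% of dents missed (15 false negatives), ~0.8% of normals
# flagged (6 false positives), rounded to whole patches.
sens, spec = sensitivity_specificity(tp=685, fn=15, tn=694, fp=6)
print(round(sens, 3), round(spec, 3))  # 0.979 0.991
```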
The mean absolute error (MAE) was 13.7 pixels (SD = 10.7 pixels), calculated as the average Euclidean distance between the locations of the actual and predicted dents. The actual dent locations in the images were manually identified by our research team. As shown in Figure 8, the predicted dent locations were all close to the actual ones.
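The MAE metric used here, average Euclidean distance between matched dent locations, can be computed as below. The coordinates are hypothetical, chosen only to illustrate the calculation.

```python
import math

def mean_absolute_error(actual, predicted):
    """Average Euclidean distance (in pixels) between matched
    actual and predicted dent locations."""
    dists = [math.dist(a, p) for a, p in zip(actual, predicted)]
    return sum(dists) / len(dists)

# Hypothetical (x, y) pixel coordinates for three dents:
actual    = [(40, 20), (80, 50), (15, 60)]
predicted = [(43, 24), (70, 50), (15, 72)]
print(mean_absolute_error(actual, predicted))  # 9.0  (mean of 5, 10, 12)
```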

4. Conclusions

The present study proposed and applied a region-based convolutional neural network (R-CNN) to detect and localize dents on the surface of a vehicle body. An adaptive genetic algorithm (AGA) explored the optimal combination of hyper-parameters for the R-CNN through an evolutionary process to achieve the best classification accuracy. The R-CNN established a classification network that classified an input image as either normal or abnormal (dent), and localized the estimated position of a dent using the heat-map. The proposed method classified normal and abnormal patches with an accuracy of 98.5%, and its MAE was 13.7 pixels, indicating a very small discrepancy between the actual and estimated dent locations. The R-CNN proposed in this study could help vehicle manufacturers and assembly companies detect and localize dents on a vehicle exterior during manufacturing.
Although the findings of this study are promising, there is still room for improvement in future studies. Three further works are suggested to improve the practical applicability of the R-CNN model. First, this study used images of a vehicle fender to train the R-CNN model, so the model is specialized for detecting defects on the fender; although it might be applicable to other parts of a vehicle body, further studies are needed to evaluate its classification performance on those parts. Second, this study demonstrated that the model can classify an image as normal or abnormal with high accuracy; it could therefore be useful in developing a real-time inspection system that consecutively records images of a vehicle body and detects defects. Lastly, this study used artificial defects on a vehicle body; future studies should develop and evaluate R-CNN models for detecting real defects that arise naturally during vehicle manufacturing and assembly.

Author Contributions

Conceptualization, S.H.P., M.C., and J.P.; methodology, S.H.P. and A.T.; software, S.H.P., M.C., and J.P.; validation, J.C. and K.J.; writing—original draft preparation, S.H.P., M.C., J.C., and J.P.; writing—review and editing, A.T. and K.J.; supervision, J.C. and K.J.; project administration, K.J.; funding acquisition, K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT; NRF-2019R1A2C4070310, NRF-2018R1C1B5045699).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Loferer, H. Automatic Painted Surface Inspection and Defect Detection. In Proceedings of the SENSOR + TEST Conferences, Nürnberg, Germany, 7–9 June 2011; pp. 871–873. [Google Scholar]
  2. Kamani, P.; Afshar, A.; Towhidkhah, F.; Roghani, E. Car body paint defect inspection using rotation invariant measure of the local variance and one-against-all support vector machine. In Informatics and Computational Intelligence (ICI), Proceedings of the 2011 First International Conference on Informatics and Computational Intelligence, Bandung, Indonesia, 12–14 December 2011; pp. 244–249.
  3. Chung, Y.C.; Chang, M. Visualization of Subtle Defects of Car Body Outer Panels. In Proceedings of the 2006 SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 4639–4642. [Google Scholar]
  4. Armesto, L.; Tornero, J.; Herraez, A.; Asensio, J. Inspection system based on artificial vision for paint defects detection on cars bodies. In Robotics and Automation (ICRA), Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
  5. Tolba, A.S.; Khan, H.A.; Raafat, H.M. Automated visual inspection of flat surface products using feature fusion. In Signal Processing and Information Technology (ISSPIT), Proceedings of the 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT) Ajman, United Arab Emirates, 14–17 December 2009; pp. 160–165.
  6. Godnig, E.C. Visual Acuity under Various Illuminance Lighting Conditions. Available online: http://www.theppsc.org (accessed on 20 September 2016).
  7. Navvab, M.; Uetiani, Y. Performance evaluation of the inspection lighting systems in industrial auto plants. J. Illum. Eng. Soc. 2001, 30, 152–169. [Google Scholar] [CrossRef]
  8. Glover, S.; Kelly, M.; Wozniak, H.; Moss, N. The effect of room illumination on visual acuity measurement. Aust. Orthopic J. 1999, 34, 3–8. [Google Scholar]
  9. Sanders, M.S.; McCormick, E.J. Human Factors in Engineering and Design, 7th ed.; McGraw-Hill: Singapore, 1993. [Google Scholar]
  10. Shlaer, S. The relation between visual acuity and illumination. J. Gen. Physiol. 1937, 21, 165–188. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Banister, H.; Hartridge, H.; Lythgoe, R.J. The effect of illumination and other factors on the acuity vision. Br. J. Ophthalmol. 1927, 11, 321–330. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Tjolleng, A.; Jung, K. Development of a Visual Inspection Method for Defects on Metallic Surface Considering Emergent Feature. In Proceedings of the 2016 Fall Conference of the Korean Institute of Industrial Engineers (KIIE), Seoul, Korea, 19 November 2016. [Google Scholar]
  13. Barber, R.; Zwilling, V.; Salichs, M.A. Algorithm for the evaluation of imperfections in auto bodywork using profiles from a retroreflective image. Sensors 2014, 14, 2476–2488. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Döring, C.; Eichhorn, A.; Wang, X.; Kruse, R. Improved classification of surface defects for quality control of car body panels. In Fuzzy Systems, Proceedings of the 2006 IEEE International Conference on Fuzzy Systems, Vancouver, BC, Canada, 16–21 July 2006; pp. 1476–1481.
  15. Kwon, B.K.; Won, J.S.; Kang, D.J. Fast defect detection for various types of surfaces using random forest with VOV features. Int. J. Precis. Eng. Manuf. 2015, 16, 965–970. [Google Scholar] [CrossRef]
  16. Faghih-Roohi, S.; Hajizadeh, S.; Núñez, A.; Babuska, R.; De Schutter, B. Deep convolutional neural networks for detection of rail surface defects. In Neural Networks (IJCNN), Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2584–2589.
  17. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471. [Google Scholar] [CrossRef]
  18. Weron, R. Electricity price forecasting: A review of the state-of-the-art with a look into the future. Int. J. Forecast. 2014, 30, 1030–1081. [Google Scholar] [CrossRef] [Green Version]
  19. Cincotti, S.; Gallo, G.; Ponta, L.; Raberto, M. Modelling and forecasting of electricity spot-prices: Computational intelligence vs. classical econometrics. AI Commun. 2014, 27, 301–314. [Google Scholar] [CrossRef]
  20. Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846. [Google Scholar] [CrossRef]
  21. Ferguson, M.; Ak, R.; Lee, Y.T.T.; Law, K.H. Detection and Segmentation of Manufacturing Defects with Convolutional Neural Networks and Transfer Learning. Smart Sustain. Manuf. Syst. 2018, 2. [Google Scholar] [CrossRef] [PubMed]
  22. Eslami, P.; Jung, K.; Lee, D.; Tjolleng, A. Predicting tanker freight rates using parsimonious variables and a hybrid artificial neural network with an adaptive genetic algorithm. Marit. Econ. Logist. 2017, 19, 538–550. [Google Scholar] [CrossRef]
  23. Rardin, R.L. Optimization in Operations Research; Prentice Hill: New York, NY, USA, 1998. [Google Scholar]
  24. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  25. Bekiroglu, S.; Dede, T.; Ayvaz, Y. Implementation of different encoding types on structural optimization based on adaptive genetic algorithm. Finite Elem. Anal. Des. 2009, 45, 826–835. [Google Scholar] [CrossRef]
  26. Lee, C.; Lin, W.; Chen, Y.; Kuo, B. Gene selection and sample classification on microarray data based on adaptive genetic algorithm/k-nearest neighbor method. Expert Syst. Appl. 2011, 38, 4661–4667. [Google Scholar] [CrossRef]
  27. Yun, Y. Hybrid genetic algorithm with adaptive local search scheme. Comput. Ind. Eng. 2006, 51, 128–141. [Google Scholar] [CrossRef]
Figure 1. Light device used to create Mach bands on a vehicle exterior, (a) Lighting device (b) Mach bands projected by the lighting device (c) Examples of light diffusion around normal (left) and abnormal (right) surfaces.
Figure 2. The region-based convolutional neural network (R-CNN) structure used in this study.
Figure 3. Trend of incumbent optimums and average classification accuracies found by the adaptive genetic algorithm for the R-CNN.
Figure 4. A heat-map used to localize a dent.
Figure 5. An example of vehicle fender and experimental scene used in this study to record videos, (a) Vehicle body fender (b) Experimental scene.
Figure 6. Normal and abnormal patches identified by the proposed R-CNN model, (a) Normal patches (b) Abnormal patches.
Figure 7. An example of learning accuracies by iteration.
Figure 8. Examples of the actual and predicted dents.
Table 1. Optimal parameter values for R-CNN.
Parameter                Value
Kernel number            58
Kernel size              6
Padding size             2
Pooling size             3
Stride size              3
Fully connected nodes    91

Park, S.H.; Tjolleng, A.; Chang, J.; Cha, M.; Park, J.; Jung, K. Detecting and Localizing Dents on Vehicle Bodies Using Region-Based Convolutional Neural Network. Appl. Sci. 2020, 10, 1250. https://doi.org/10.3390/app10041250

