Change Detection of Selective Logging in the Brazilian Amazon Using X-Band SAR Data and Pre-Trained Convolutional Neural Networks
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Site and Data
2.2. Convolutional Neural Network Architectures
- InceptionV3 [50]: Google’s deep convolutional neural network for image recognition, consisting of 48 layers. It was trained on the ImageNet dataset [51], on which it has been shown to reach accuracy greater than 78.1%. The model is built from symmetric and asymmetric building blocks, including convolutions, average pooling, max pooling, concatenations, dropout, and fully connected layers. Batch normalization is used extensively throughout the model and is applied to activation inputs. The loss is computed using the softmax function.
- VGG16 [52]: a convolutional neural network model consisting of 16 weight layers, trained on the ImageNet dataset, where it achieved 92.6% classification accuracy. Instead of a large number of hyperparameters, the network uses 3 × 3 convolution filters with a stride of 1 and the same padding throughout, together with 2 × 2 max-pooling layers with a stride of 2. This arrangement of convolution and max-pooling layers is followed consistently across the architecture. At the end, there are two fully connected layers, followed by a softmax for the output.
- VGG19: a variant of the VGG model that contains 19 weight layers, which achieved 92.7% accuracy in the ImageNet classification task.
- SqueezeNet [53]: a 26-layer deep convolutional neural network that achieves AlexNet-level accuracy [54] on ImageNet with 50× fewer parameters. SqueezeNet employs architectural strategies that reduce the number of parameters, notably “fire” modules that squeeze the parameters using 1 × 1 convolutions.
- Painters: a model trained on the dataset of the Painter by Numbers Kaggle competition [55], consisting of 79,433 images of paintings by 1584 different painters; the competition’s objective was to examine pairs of paintings and determine whether they were painted by the same artist. The network comprises a total of 24 layers.
- DeepLoc [56]: a convolutional network trained on 21,882 individual cell images that were manually assigned to one of 15 subcellular compartments. It is a prediction algorithm that uses deep neural networks to predict the subcellular location of proteins from sequence information alone. At its core, the prediction model uses a recurrent neural network that processes the entire protein sequence and an attention mechanism that identifies the regions of the protein important for subcellular localization. The network consists of 11 layers.
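The role these networks play in the study is that of frozen feature extractors ("embedders") whose outputs feed a trainable classification head. The sketch below illustrates that transfer-learning pattern; note that a fixed random projection stands in for the actual pre-trained CNN (InceptionV3, VGG16/19, SqueezeNet, Painters, DeepLoc), and all sizes and data are illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained CNN embedder: a fixed random
# projection mapping flattened 32x32 chips to 64-dim feature vectors.
# (Scaled so projected values are O(1) before the tanh nonlinearity.)
W_embed = rng.normal(size=(32 * 32, 64)) / 32.0

def embed(chips):
    """Map (n, 32, 32) image chips to (n, 64) embeddings."""
    return np.tanh(chips.reshape(len(chips), -1) @ W_embed)

def train_logreg(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression head on the frozen embeddings."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                            # gradient of log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic 'selective logging' vs 'non-logging' chips: the logged
# class is shifted brighter on average (purely illustrative).
chips = rng.normal(size=(200, 32, 32))
labels = (np.arange(200) < 100).astype(float)
chips[labels == 1] += 0.5

X = embed(chips)                 # embed once; the CNN stays frozen
w, b = train_logreg(X, labels)   # only the small head is trained
acc = (((X @ w + b) > 0).astype(float) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

The key design point is that the embedder's weights are never updated: only the lightweight head is fit to the SAR-derived chips, which is what makes re-using ImageNet-scale (or Painters/DeepLoc) networks feasible on a small labeled dataset.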
2.3. Data Selection
- An RGB composite whose R channel contains the coefficient-of-variation image, the G channel the minimum-values image, and the B channel the gradient between the June and October speckle-filtered COSMO-SkyMed SAR images (covmingrad image, filtered);
- An RGB composite whose R channel contains the coefficient-of-variation image, the G channel the minimum-values image, and the B channel the gradient between the June and October unfiltered COSMO-SkyMed SAR images (covmingrad image, unfiltered);
- A single band containing the coefficient of variation (CV image);
- A single band containing the ratio between the two dates (COSMO-SkyMed October/COSMO-SkyMed June; ratio image).
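The change-detection bands listed above can be sketched in a few lines of NumPy; the random arrays, image size, and linear stretch below are illustrative stand-ins for the actual co-registered COSMO-SkyMed acquisitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two co-registered backscatter images (June and October); random
# positive values stand in for the COSMO-SkyMed acquisitions.
june = rng.uniform(0.1, 1.0, size=(100, 100))
october = rng.uniform(0.1, 1.0, size=(100, 100))
stack = np.stack([june, october])

# Per-pixel change features between the two dates:
cv = stack.std(axis=0) / stack.mean(axis=0)  # coefficient of variation
minimum = stack.min(axis=0)                  # minimum-values image
gradient = october - june                    # temporal gradient
ratio = october / june                       # ratio image

def to_uint8(band):
    """Linearly stretch a band to 0-255 for an RGB composite."""
    lo, hi = band.min(), band.max()
    return np.uint8(255 * (band - lo) / (hi - lo))

# 'covmingrad' composite: R = CV, G = minimum, B = gradient.
covmingrad = np.dstack([to_uint8(cv), to_uint8(minimum), to_uint8(gradient)])
print(covmingrad.shape, covmingrad.dtype)
```

With only two dates, the coefficient of variation, minimum, and gradient each summarize a different aspect of the temporal change at a pixel, which is why combining them into one false-color composite is attractive as CNN input.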
2.4. Classification Tests
- Area under the receiver operating characteristic curve (AUC): an AUC of 0.5 indicates no discrimination between classes; 0.7 to 0.8 is considered acceptable; 0.8 to 0.9, excellent; and above 0.9, exceptional.
- Accuracy: proportion of correctly classified samples.
- Training time (s).
- Test time (s).
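The two quality metrics above have simple closed forms; the NumPy sketch below computes the AUC via the rank-sum (Mann-Whitney) statistic and accuracy as the proportion of correct predictions (tie handling in the AUC is omitted for brevity, and the example scores are made up).

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum statistic: the probability that a random
    positive sample is scored above a random negative sample."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def accuracy(pred, labels):
    """Proportion of correctly classified samples."""
    return float(np.mean(np.asarray(pred) == np.asarray(labels)))

labels = np.array([0, 0, 1, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
print(auc(scores, labels))            # 5/6: one of six pos/neg pairs is misranked
print(accuracy(scores > 0.5, labels))  # 4 of 5 samples correct at a 0.5 threshold
```

Unlike accuracy, the AUC is threshold-free, which is why it is the more informative of the two when class scores rather than hard labels are compared across embedders.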
2.5. Grouping of Selective Logging Features
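Grouping nearby detections into coherent selective-logging features is a density-based clustering problem; DBSCAN [Ester et al.], cited in the references, is one natural choice. The minimal implementation below is a sketch under that assumption: the point coordinates, `eps`, and `min_pts` values are illustrative, not the study's actual parameters.

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: assign each point a cluster id, or -1 for noise."""
    n = len(points)
    # Pairwise Euclidean distances and epsilon-neighborhoods (incl. self).
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        # Skip already-assigned points and non-core (low-density) points.
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        # Grow a new cluster outward from core point i.
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # j is also a core point
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

# Two compact groups of detections plus one isolated point.
pts = np.array([[0, 0], [0.5, 0], [0, 0.5],
                [10, 10], [10.4, 10], [10, 10.4],
                [50, 50]], dtype=float)
labels = dbscan(pts, eps=1.0, min_pts=3)
print(labels)  # two clusters (ids 0 and 1) and one noise point (-1)
```

The appeal for this application is that DBSCAN needs no preset number of clusters and naturally discards isolated false alarms as noise.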
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- SEEG-Brasil. Sistema de Estimativa de Emissões de Gases de Efeito Estufa. Available online: http://plataforma.seeg.eco.br/total_emission# (accessed on 20 September 2021).
- Câmara, G.; Valeriano, D.D.M.; Soares, J.V. Metodologia Para o Cálculo da Taxa Anual de Desmatamento Na Amazônia Legal (PRODES Methodology). Available online: http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/prodes/pdfs/metodologia.pdf/@@download/file/metodologia.pdf (accessed on 23 September 2021).
- Diniz, C.G.; Souza, A.A.D.A.; Santos, D.C.; Dias, M.C.; Da Luz, N.C.; De Moraes, D.R.V.; Maia, J.S.A.; Gomes, A.R.; Narvaes, I.D.S.; Valeriano, D.M.; et al. DETER-B: The New Amazon Near Real-Time Deforestation Detection System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3619–3628.
- Watanabe, M.; Koyama, C.N.; Hayashi, M.; Nagatani, I.; Shimada, M. Early-Stage Deforestation Detection in the Tropics with L-Band SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2127–2133.
- Doblas, J.; Shimabukuro, Y.; Sant’anna, S.; Carneiro, A.; Aragão, L.; Almeida, C. Optimizing near Real-Time Detection of Deforestation on Tropical Rainforests Using Sentinel-1 Data. Remote Sens. 2020, 12, 3922.
- Potapov, P.; Hansen, M.C.; Kommareddy, I.; Kommareddy, A.; Turubanova, S.; Pickens, A.; Adusei, B.; Tyukavina, A.; Ying, Q. Landsat Analysis Ready Data for Global Land Cover and Land Cover Change Mapping. Remote Sens. 2020, 12, 426.
- Fonseca, A.; Amorim, L.; Ribeiro, J.; Ferreira, R.; Monteiro, A.; Santos, B.; Souza, C., Jr.; Veríssimo, A. Boletim Do Desmatamento Da Amazônia Legal (Maio/2021) SAD. Available online: https://imazon.org.br/publicacoes/boletim-do-desmatamento-da-amazonia-legal-maio-2021-sad/ (accessed on 20 September 2021).
- Deutscher, J.; Perko, R.; Gutjahr, K.; Hirschmugl, M.; Schardt, M. Mapping Tropical Rainforest Canopy Disturbances in 3D by COSMO-SkyMed Spotlight InSAR-Stereo Data to Detect Areas of Forest Degradation. Remote Sens. 2013, 5, 648–663.
- Bullock, E.L.; Woodcock, C.E.; Souza, C., Jr.; Olofsson, P. Satellite-Based Estimates Reveal Widespread Forest Degradation in the Amazon. Glob. Chang. Biol. 2020, 26, 2956–2969.
- Matricardi, E.A.T.; Skole, D.L.; Costa, O.B.; Pedlowski, M.A.; Samek, J.H.; Miguel, E.P. Long-Term Forest Degradation Surpasses Deforestation in the Brazilian Amazon. Science 2020, 369, 1378–1382.
- Qin, Y.; Xiao, X.; Wigneron, J.P.; Ciais, P.; Brandt, M.; Fan, L.; Li, X.; Crowell, S.; Wu, X.; Doughty, R.; et al. Carbon Loss from Forest Degradation Exceeds That from Deforestation in the Brazilian Amazon. Nat. Clim. Chang. 2021, 11, 442–448.
- Silva Junior, C.H.L.; Carvalho, N.S.; Pessôa, A.C.M.; Reis, J.B.C.; Pontes-Lopes, A.; Doblas, J.; Heinrich, V.; Campanharo, W.; Alencar, A.; Silva, C.; et al. Amazonian Forest Degradation Must Be Incorporated into the COP26 Agenda. Nat. Geosci. 2021, 14, 634–635.
- Bullock, E.L.; Woodcock, C.E. Carbon Loss and Removal Due to Forest Disturbance and Regeneration in the Amazon. Sci. Total Environ. 2021, 764, 142839.
- Mitchell, A.L.; Rosenqvist, A.; Mora, B. Current Remote Sensing Approaches to Monitoring Forest Degradation in Support of Countries Measurement, Reporting and Verification (MRV) Systems for REDD+. Carbon Balance Manag. 2017, 12, 1–22.
- Bullock, E.L.; Woodcock, C.E.; Olofsson, P. Monitoring Tropical Forest Degradation Using Spectral Unmixing and Landsat Time Series Analysis. Remote Sens. Environ. 2020, 238, 110968.
- Dalagnol, R.; Phillips, O.L.; Gloor, E.; Galvão, L.S.; Wagner, F.H.; Locks, C.J.; Aragão, L.E.O.C. Quantifying Canopy Tree Loss and Gap Recovery in Tropical Forests under Low-Intensity Logging Using VHR Satellite Imagery and Airborne LiDAR. Remote Sens. 2019, 11, 817.
- Asner, G.P. Cloud Cover in Landsat Observations of the Brazilian Amazon. Int. J. Remote Sens. 2001, 22, 3855–3862.
- Mermoz, S.; Le Toan, T. Forest Disturbances and Regrowth Assessment Using ALOS PALSAR Data from 2007 to 2010 in Vietnam, Cambodia and Lao PDR. Remote Sens. 2016, 8, 217.
- Lee, Y.S.; Lee, S.; Baek, W.K.; Jung, H.S.; Park, S.H.; Lee, M.J. Mapping Forest Vertical Structure in Jeju Island from Optical and Radar Satellite Images Using Artificial Neural Network. Remote Sens. 2020, 12, 797.
- Bispo, P.C.; Santos, J.R.; Valeriano, M.M.; Touzi, R.; Seifert, F.M. Integration of Polarimetric PALSAR Attributes and Local Geomorphometric Variables Derived from SRTM for Forest Biomass Modeling in Central Amazonia. Can. J. Remote. Sens. 2014, 40, 26–42.
- Bispo, P.C.; Rodríguez-Veiga, P.; Zimbres, B.; de Miranda, S.C.; Giusti Cezare, C.H.; Fleming, S.; Baldacchino, F.; Louis, V.; Rains, D.; Garcia, M.; et al. Woody Aboveground Biomass Mapping of the Brazilian Savanna with a Multi-Sensor and Machine Learning Approach. Remote Sens. 2020, 12, 2685.
- Santoro, M.; Cartus, O.; Carvalhais, N.; Rozendaal, D.; Avitabilie, V.; Araza, A.; de Bruin, S.; Herold, M.; Quegan, S.; Rodríguez Veiga, P.; et al. The Global Forest Above-Ground Biomass Pool for 2010 Estimated from High-Resolution Satellite Observations. Earth Syst. Sci. Data 2021, 13, 3927–3950.
- Deutscher, J.; Gutjahr, K.; Perko, R.; Raggam, H.; Hirschmugl, M.; Schardt, M. Humid Tropical Forest Monitoring with Multi-Temporal L-, C- and X-Band SAR Data. In Proceedings of the 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Brugge, Belgium, 27–29 June 2017; Volume 2017, pp. 1–4.
- Ghosh, S.M.; Behera, M.D. Aboveground Biomass Estimation Using Multi-Sensor Data Synergy and Machine Learning Algorithms in a Dense Tropical Forest. Appl. Geogr. 2018, 96, 29–40.
- Treuhaft, R.; Lei, Y.; Gonçalves, F.; Keller, M.; dos Santos, J.R.; Neumann, M.; Almeida, A. Tropical-Forest Structure and Biomass Dynamics from TanDEM-X Radar Interferometry. Forests 2017, 8, 277.
- Treuhaft, R.; Goncalves, F.; Dos Santos, J.R.; Keller, M.; Palace, M.; Madsen, S.N.; Sullivan, F.; Graca, P.M.L.A. Tropical-Forest Biomass Estimation at X-Band from the Spaceborne Tandem-X Interferometer. IEEE Geosci. Remote Sens. Lett. 2015, 12, 239–243.
- Delgado-Aguilar, M.J.; Fassnacht, F.E.; Peralvo, M.; Gross, C.P.; Schmitt, C.B. Potential of TerraSAR-X and Sentinel 1 Imagery to Map Deforested Areas and Derive Degradation Status in Complex Rain Forests of Ecuador. Int. For. Rev. 2017, 19, 102–118.
- Abo Gharbia, A.Y.; Amin, M.; Mousa, A.E.; AbouAly, N.; El Banby, G.M.; El-Samie, F.E.A. Registration-Based Change Detection for SAR Images. NRIAG J. Astron. Geophys. 2020, 9, 106–115.
- Das, A.; Sahi, A.; Nandini, U. SAR Image Segmentation for Land Cover Change Detection. In Proceedings of the Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, India, 19 November 2016.
- Zhuang, H.; Tan, Z.; Deng, K.; Fan, H. It Is a Misunderstanding That Log Ratio Outperforms Ratio in Change Detection of SAR Images. Eur. J. Remote Sens. 2019, 52, 484–492.
- Koeniguer, E.C.; Nicolas, J.M. Change Detection Based on the Coefficient of Variation in SAR Time-Series of Urban Areas. Remote Sens. 2020, 12, 2089.
- Shi, W.; Zhang, M.; Zhang, R.; Chen, S.; Zhan, Z. Change Detection Based on Artificial Intelligence: State-of-the-Art and Challenges. Remote Sens. 2020, 12, 1688.
- Kaplan, A.; Haenlein, M. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Bus. Horiz. 2019, 62, 15–25.
- Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote Sens. 2018, 39, 2784–2817.
- De, S.; Pirrone, D.; Bovolo, F.; Bruzzone, L.; Bhattacharya, A. A Novel Change Detection Framework Based on Deep Learning for the Analysis of Multi-Temporal Polarimetric SAR Images. In International Geoscience and Remote Sensing Symposium (IGARSS); IEEE: New York, NY, USA, 2017; pp. 5193–5196.
- Chen, H.; Jiao, L.; Liang, M.; Liu, F.; Yang, S.; Hou, B. Fast Unsupervised Deep Fusion Network for Change Detection of Multitemporal SAR Images. Neurocomputing 2019, 332, 56–70.
- Lei, Y.; Liu, X.; Shi, J.; Lei, C.; Wang, J. Multiscale Superpixel Segmentation with Deep Features for Change Detection. IEEE Access 2019, 7, 36600–36616.
- Lv, N.; Chen, C.; Qiu, T.; Sangaiah, A.K. Deep Learning and Superpixel Feature Extraction Based on Contractive Autoencoder for Change Detection in SAR Images. IEEE Trans. Ind. Inform. 2018, 14, 5530–5538.
- Planinšič, P.; Gleich, D. Temporal Change Detection in SAR Images Using Log Cumulants and Stacked Autoencoder. IEEE Geosci. Remote Sens. Lett. 2018, 15, 297–301.
- Dong, H.; Ma, W.; Wu, Y.; Gong, M.; Jiao, L. Local Descriptor Learning for Change Detection in Synthetic Aperture Radar Images via Convolutional Neural Networks. IEEE Access 2019, 7, 15389–15403.
- Li, Y.; Peng, C.; Chen, Y.; Jiao, L.; Zhou, L.; Shang, R. A Deep Learning Method for Change Detection in Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5751–5763.
- Cui, B.; Zhang, Y.; Yan, L.; Wei, J.; Wu, H. An Unsupervised SAR Change Detection Method Based on Stochastic Subspace Ensemble Learning. Remote Sens. 2019, 11, 1314.
- Jaturapitpornchai, R.; Matsuoka, M.; Kanemoto, N.; Kuzuoka, S.; Ito, R.; Nakamura, R. Newly Built Construction Detection in SAR Images Using Deep Learning. Remote Sens. 2019, 11, 1444.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks? Adv. Neural Inf. Process. Syst. 2014, 4, 3320–3328.
- Lopes, A.; Nezry, E.; Touzi, R.; Laur, H. Maximum a Posteriori Speckle Filtering and First Order Texture Models in SAR Images. Dig.-Int. Geosci. Remote Sens. Symp. 1990, 28, 992–1000.
- Kuck, T.N.; Gomez, L.D.; Sano, E.E.; Bispo, P.d.C.; Honorio, D.D.C. Performance of Speckle Filters for COSMO-SkyMed Images from the Brazilian Amazon. IEEE Geosci. Remote Sens. Lett. 2021, 99, 1–5.
- QGIS.org. QGIS Geographic Information System. Available online: http://qgis.osgeo.org (accessed on 12 September 2021).
- Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.S. Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1–10. Available online: https://arxiv.org/abs/1512.00567 (accessed on 10 September 2021).
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. Imagenet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015; Available online: https://arxiv.org/abs/1409.1556 (accessed on 12 September 2021).
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size. ICLR 2016, 1–14. Available online: https://arxiv.org/abs/1602.07360 (accessed on 11 September 2021).
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- Nichol, K. Painter by Numbers—Kaggle. Available online: https://www.kaggle.com/c/painter-by-numbers (accessed on 15 September 2021).
- Almagro Armenteros, J.J.; Sønderby, C.K.; Sønderby, S.K.; Nielsen, H.; Winther, O. DeepLoc: Prediction of Protein Subcellular Localization Using Deep Learning. Bioinformatics 2017, 33, 3387–3395.
- Kuck, T.N.; Sano, E.E.; Bispo, P.d.C.; Shiguemori, E.H.; Filho, P.F.F.S.; Matricardi, E.A.T. A Comparative Assessment of Machine-Learning Techniques for Forest Degradation Caused by Selective Logging in an Amazon Region Using Multitemporal X-Band SAR Images. Remote Sens. 2021, 13, 3341.
- Demšar, J.; Curk, T.; Erjavec, A.; Gorup, Č.; Hočevar, T.; Milutinovič, M.; Možina, M.; Polajnar, M.; Toplak, M.; Starič, A.; et al. Orange: Data Mining Toolbox in Python. J. Mach. Learn. Res. 2013, 14, 2349–2353.
- Stone, M. Cross-Validatory Choice and Assessment of Statistical Predictions. J. R. Stat. Soc. Ser. B 1974, 36, 111–133.
- Congalton, R.G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sens. Environ. 1991, 37, 35–46.
- Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of KDD-96, Portland, OR, USA, 2–4 August 1996; pp. 226–231. Available online: https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf (accessed on 23 September 2021).
- Serviço Florestal Brasileiro. DETEX. Available online: https://www.florestal.gov.br/monitoramento (accessed on 22 August 2021).
- Locks, C.J. Aplicações da Tecnologia LiDAR no Monitoramento da Exploração Madeireira em Áreas de Concessão Florestal. M.Sc. Thesis, Universidade de Brasília, Brasília, DF, Brazil, 2017.
- van Der Sanden, J.J. Radar Remote Sensing to Support Tropical Forest Management; Tropenbos-Guyana Programme, Georgetown, Guyana, 1997. Available online: https://edepot.wur.nl/121223 (accessed on 10 September 2021).
- Bouvet, A.; Mermoz, S.; Ballère, M.; Koleck, T.; Le Toan, T. Use of the SAR Shadowing Effect for Deforestation Detection with Sentinel-1 Time Series. Remote Sens. 2018, 10, 1250.
- Villard, L.; Borderies, P. Backscattering Border Effects for Forests at C-Band. PIERS Online, 2007; Volume 3. Available online: https://www.piers.org/piersonline/pdf/Vol3No5Page731to735.pdf (accessed on 23 September 2021).
- Kuntz, S.; Astrium, P. White Paper Status of X-Band SAR Applications in Forestry. In Proceedings of the GEO FCT 3rd Science and Data Summit, Arusha, Tanzania, 6–10 February 2012; pp. 6–10.
(A)

Embedder | Train Time (s) | Test Time (s) | AUC | Accuracy
---|---|---|---|---
InceptionV3 | 82.971 | 1.103 | 0.953 ± 0.006 | 0.912 ± 0.009
VGG16 | 207.532 | 2.345 | 0.945 ± 0.009 | 0.901 ± 0.018
VGG19 | 385.772 | 2.547 | 0.937 ± 0.009 | 0.907 ± 0.018
SqueezeNet | 132.128 | 0.511 | 0.954 ± 0.009 | 0.918 ± 0.018
Painters | 124.047 | 1.064 | 0.951 ± 0.009 | 0.920 ± 0.018
DeepLoc | 62.666 | 0.245 | 0.955 ± 0.009 | 0.910 ± 0.018

(B)

Embedder | Train Time (s) | Test Time (s) | AUC | Accuracy
---|---|---|---|---
InceptionV3 | 107.208 | 1.025 | 0.948 ± 0.012 | 0.907 ± 0.026
VGG16 | 333.649 | 3.194 | 0.954 ± 0.009 | 0.906 ± 0.018
VGG19 | 239.280 | 2.851 | 0.951 ± 0.009 | 0.899 ± 0.018
SqueezeNet | 123.924 | 0.469 | 0.943 ± 0.006 | 0.906 ± 0.009
Painters | 105.149 | 0.807 | 0.957 ± 0.009 | 0.911 ± 0.018
DeepLoc | 64.436 | 0.187 | 0.957 ± 0.009 | 0.910 ± 0.018

(C)

Embedder | Train Time (s) | Test Time (s) | AUC | Accuracy
---|---|---|---|---
InceptionV3 | 145.101 | 1.475 | 0.838 ± 0.009 | 0.858 ± 0.018
VGG16 | 281.361 | 3.830 | 0.845 ± 0.002 | 0.863 ± 0.001
VGG19 | 285.898 | 2.321 | 0.845 ± 0.002 | 0.860 ± 0.001
SqueezeNet | 61.798 | 0.476 | 0.850 ± 0.006 | 0.850 ± 0.009
Painters | 125.313 | 1.031 | 0.849 ± 0.012 | 0.858 ± 0.026
DeepLoc | 60.500 | 0.190 | 0.854 ± 0.009 | 0.860 ± 0.018
Covmingrad (Painters) | Predicted: selective logging | Predicted: non-logging
---|---|---
Ground truth: selective logging | 85.4% | 7.0%
Ground truth: non-logging | 14.6% | 93.0%

Cov (Painters) | Predicted: selective logging | Predicted: non-logging
---|---|---
Ground truth: selective logging | 83.0% | 6.6%
Ground truth: non-logging | 17.0% | 93.4%

Ratio (VGG16) | Predicted: selective logging | Predicted: non-logging
---|---|---
Ground truth: selective logging | 84.4% | 11.2%
Ground truth: non-logging | 15.6% | 88.8%
Covmingrad (Painters)—Generalization Test | Predicted: selective logging | Predicted: non-logging
---|---|---
Ground truth: selective logging | 87.5% | 12.7%
Ground truth: non-logging | 12.5% | 87.3%
Embedder | Train Time (s) | Test Time (s) | AUC | Accuracy |
---|---|---|---|---|
InceptionV3 | 149.655 | 1.220 | 0.931 ± 0.006 | 0.894 ± 0.009 |
VGG16 | 412.935 | 14.689 | 0.923 ± 0.009 | 0.890 ± 0.018 |
VGG19 | 496.055 | 4.399 | 0.910 ± 0.009 | 0.871 ± 0.018 |
SqueezeNet | 175.185 | 0.500 | 0.923 ± 0.006 | 0.891 ± 0.009 |
Painters | 169.676 | 1.239 | 0.944 ± 0.009 | 0.899 ± 0.018 |
DeepLoc | 127.549 | 0.263 | 0.932 ± 0.009 | 0.887 ± 0.018 |
Covmingrad (Painters) | Predicted: selective logging | Predicted: non-logging
---|---|---
Ground truth: selective logging | 81.2% | 8.6%
Ground truth: non-logging | 18.8% | 91.4%
Share and Cite
Kuck, T.N.; Silva Filho, P.F.F.; Sano, E.E.; Bispo, P.d.C.; Shiguemori, E.H.; Dalagnol, R. Change Detection of Selective Logging in the Brazilian Amazon Using X-Band SAR Data and Pre-Trained Convolutional Neural Networks. Remote Sens. 2021, 13, 4944. https://doi.org/10.3390/rs13234944