A New Approach to Landscape Visual Quality Assessment from a Fine-Tuning Perspective
Abstract
1. Introduction
2. Methods and Data
2.1. Study Area
2.2. Methods
2.2.1. Street View Images
2.2.2. Landscape Beauty Assessment
2.2.3. Convolutional Neural Network Fine-Tuning
3. Results
3.1. Results of SBE Evaluation
3.2. Model Parameter Exploration Results
3.3. Results of Model Performance Evaluation
3.4. Visualization of Prediction Results
4. Discussion
4.1. Main Findings
4.2. Importance of Fine-Tuning for Performance Improvement
4.3. Application of Deep Learning Techniques in Landscape Disciplines
4.3.1. Image Segmentation Model
4.3.2. Natural Language Processing Models
4.4. Advantages and Limitations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Cronbach's α | Standardized Cronbach's α | Item Count | Sample Size |
---|---|---|---|
0.866 | 0.835 | 100 | 100 |
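The table above reports the internal-consistency reliability of the questionnaire ratings (raw and standardized Cronbach's α for 100 items and 100 respondents). For readers who want to reproduce these coefficients, a minimal NumPy sketch is given below; it assumes a respondents × items matrix of raw ratings, and the function names are illustrative rather than taken from the paper.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Raw Cronbach's alpha for a (n_respondents, n_items) rating matrix."""
    X = np.asarray(ratings, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item across respondents
    total_var = X.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def standardized_alpha(ratings):
    """Standardized alpha based on the mean inter-item correlation."""
    X = np.asarray(ratings, dtype=float)
    k = X.shape[1]
    r = np.corrcoef(X, rowvar=False)         # item-by-item correlation matrix
    mean_r = r[np.triu_indices(k, k=1)].mean()
    return k * mean_r / (1.0 + (k - 1) * mean_r)
```

Applied to the 100 × 100 rating matrix described in the table, these two functions correspond to the raw and standardized α values reported above.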
Model_id | Batch Size | Learning Rate | Epochs | EMD | BCA (%) | meanLCC | meanSRCC |
---|---|---|---|---|---|---|---|
Model_1 | 16 | 1 × 10⁻³ | 50 | 0.0761 | 73.3 | 0.6805 | 0.6893 |
Model_2 | 16 | 8 × 10⁻⁴ | 75 | 0.0756 | 66.7 | 0.7329 | 0.6679 |
Model_3 | 16 | 7 × 10⁻⁴ | 50 | 0.0476 * | 100 * | 0.6315 | 0.6321 |
Model_4 | 16 | 5 × 10⁻⁴ | 50 | 0.0874 | 66.7 | 0.7348 * | 0.7929 * |
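The table above lists the hyperparameter combinations explored during fine-tuning (batch size, learning rate, epochs) together with the EMD loss, balanced classification accuracy (BCA), and mean LCC/SRCC of each run. As a minimal, hypothetical sketch of how such a run could be set up — assuming a PyTorch/torchvision (≥ 0.13) environment, a NIMA-style head that predicts a distribution over discrete rating buckets, a squared earth mover's distance loss, and Adam as the optimizer, none of which are stated in this extract — one configuration (Model_3: batch size 16, learning rate 7 × 10⁻⁴) might look like this:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_BUCKETS = 10  # assumed number of discrete rating levels (not given in the table)

# Start from an ImageNet-pretrained MobileNetV2 and replace the classification head
# so the network outputs a score distribution over the rating buckets.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(model.last_channel, NUM_BUCKETS),
)

def squared_emd_loss(logits, target_dist):
    """Squared earth mover's distance between predicted and ground-truth rating
    distributions (normalization conventions vary across implementations)."""
    pred = torch.softmax(logits, dim=1)
    cdf_pred = torch.cumsum(pred, dim=1)
    cdf_true = torch.cumsum(target_dist, dim=1)
    return torch.mean(torch.sum((cdf_pred - cdf_true) ** 2, dim=1))

# Learning rate mirrors the Model_3 row of the table; the optimizer is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=7e-4)

def train_one_epoch(loader, device="cpu"):
    """loader is assumed to yield (images, target_dist) batches of shapes
    (B, 3, H, W) and (B, NUM_BUCKETS), with each target row summing to 1."""
    model.train().to(device)
    for images, target_dist in loader:
        optimizer.zero_grad()
        loss = squared_emd_loss(model(images.to(device)), target_dist.to(device))
        loss.backward()
        optimizer.step()
```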
 | meanData_0 | meanData_1 | meanData_2 | meanData_3 | meanData_4 |
---|---|---|---|---|---|
meanData_0 | 1 (0.000 ***) | 0.541 (0.000 ***) | 0.544 (0.000 ***) | 0.601 (0.000 ***) | 0.368 (0.000 ***) |
meanData_1 | 0.541 (0.000 ***) | 1 (0.000 ***) | 0.622 (0.000 ***) | 0.490 (0.000 ***) | 0.368 (0.000 ***) |
meanData_2 | 0.544 (0.000 ***) | 0.622 (0.000 ***) | 1 (0.000 ***) | 0.541 (0.000 ***) | 0.352 (0.000 ***) |
meanData_3 | 0.601 (0.000 ***) | 0.490 (0.000 ***) | 0.541 (0.000 ***) | 1 (0.000 ***) | 0.412 (0.000 ***) |
meanData_4 | 0.368 (0.000 ***) | 0.368 (0.000 ***) | 0.352 (0.000 ***) | 0.412 (0.000 ***) | 1 (0.000 ***) |

 | stdData_0 | stdData_1 | stdData_2 | stdData_3 | stdData_4 |
---|---|---|---|---|---|
stdData_0 | 1 (0.000 ***) | −0.201 (0.045 **) | −0.116 (0.249) | −0.227 (0.023 **) | 0.066 (0.516) |
stdData_1 | −0.201 (0.045 **) | 1 (0.000 ***) | 0.368 (0.000 ***) | 0.391 (0.000 ***) | 0.217 (0.030 **) |
stdData_2 | −0.116 (0.249) | 0.368 (0.000 ***) | 1 (0.000 ***) | 0.525 (0.000 ***) | 0.459 (0.000 ***) |
stdData_3 | −0.227 (0.023 **) | 0.391 (0.000 ***) | 0.525 (0.000 ***) | 1 (0.000 ***) | 0.224 (0.025 **) |
stdData_4 | 0.066 (0.516) | 0.217 (0.030 **) | 0.459 (0.000 ***) | 0.224 (0.025 **) | 1 (0.000 ***) |
 | meanData_0 | meanData_1 | meanData_2 | meanData_3 | meanData_4 |
---|---|---|---|---|---|
meanData_0 | 1 (0.000 ***) | 0.574 (0.000 ***) | 0.634 (0.000 ***) | 0.661 (0.000 ***) | 0.413 (0.000 ***) |
meanData_1 | 0.574 (0.000 ***) | 1 (0.000 ***) | 0.670 (0.000 ***) | 0.575 (0.000 ***) | 0.447 (0.000 ***) |
meanData_2 | 0.634 (0.000 ***) | 0.670 (0.000 ***) | 1 (0.000 ***) | 0.680 (0.000 ***) | 0.456 (0.000 ***) |
meanData_3 | 0.661 (0.000 ***) | 0.575 (0.000 ***) | 0.680 (0.000 ***) | 1 (0.000 ***) | 0.494 (0.000 ***) |
meanData_4 | 0.413 (0.000 ***) | 0.447 (0.000 ***) | 0.456 (0.000 ***) | 0.494 (0.000 ***) | 1 (0.000 ***) |

 | stdData_0 | stdData_1 | stdData_2 | stdData_3 | stdData_4 |
---|---|---|---|---|---|
stdData_0 | 1 (0.000 ***) | −0.219 (0.028 **) | −0.139 (0.167) | −0.235 (0.018 **) | 0.031 (0.762) |
stdData_1 | −0.219 (0.028 **) | 1 (0.000 ***) | 0.366 (0.000 ***) | 0.349 (0.000 ***) | 0.219 (0.029 **) |
stdData_2 | −0.139 (0.167) | 0.366 (0.000 ***) | 1 (0.000 ***) | 0.539 (0.000 ***) | 0.421 (0.000 ***) |
stdData_3 | −0.235 (0.018 **) | 0.349 (0.000 ***) | 0.539 (0.000 ***) | 1 (0.000 ***) | 0.226 (0.023 **) |
stdData_4 | 0.031 (0.762) | 0.219 (0.029 **) | 0.421 (0.000 ***) | 0.226 (0.023 **) | 1 (0.000 ***) |
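The two correlation matrices above report pairwise coefficients for the per-group mean scores (meanData_0–meanData_4) and standard deviations (stdData_0–stdData_4), with p-values in parentheses and asterisks marking significance levels. A minimal SciPy sketch of how such pairwise coefficients can be computed is shown below; whether the reported values are Pearson or Spearman coefficients is not indicated in this extract, so the sketch computes both, and the input layout (a dict of named 1-D value arrays) is an assumption.

```python
from itertools import combinations
from scipy import stats

def pairwise_correlations(columns):
    """columns: dict mapping a name (e.g. 'meanData_0') to a 1-D sequence of values.
    Returns (name_a, name_b, LCC, p_LCC, SRCC, p_SRCC) for every pair."""
    rows = []
    for a, b in combinations(columns, 2):
        lcc, p_l = stats.pearsonr(columns[a], columns[b])    # linear (Pearson) correlation
        srcc, p_s = stats.spearmanr(columns[a], columns[b])  # Spearman rank correlation
        rows.append((a, b, round(lcc, 3), round(p_l, 3), round(srcc, 3), round(p_s, 3)))
    return rows
```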
Indicator | Value |
---|---|
Mean | −0.0012 |
Root-Mean-Square | 0.1125 |
Mean Standardized | −0.0082 |
Root-Mean-Square Standardized | 0.9783 |
Average Standard Error | 0.1150 |
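These are the standard leave-one-out cross-validation diagnostics for the kriging interpolation: a mean error near 0, a root-mean-square standardized error near 1, and an average standard error close to the root-mean-square error indicate a well-calibrated model. A minimal sketch of how such diagnostics can be computed from cross-validation predictions is given below; the exact definition of the average standard error varies between tools, so the convention used here is an assumption.

```python
import numpy as np

def cross_validation_stats(observed, predicted, std_errors):
    """Leave-one-out cross-validation diagnostics from observed values, kriging
    predictions, and kriging prediction standard errors (one common convention)."""
    err = np.asarray(predicted, dtype=float) - np.asarray(observed, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    return {
        "Mean": err.mean(),
        "Root-Mean-Square": np.sqrt((err ** 2).mean()),
        "Mean Standardized": (err / se).mean(),
        "Root-Mean-Square Standardized": np.sqrt(((err / se) ** 2).mean()),
        # Some tools report the plain mean of the standard errors instead.
        "Average Standard Error": np.sqrt((se ** 2).mean()),
    }
```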
Area | meanSBE | minSBE | maxSBE | Standard Dev. |
---|---|---|---|---|
Built-up area | 6.67 | 6.04 | 7.03 | 0.22 |
Other area of the monitoring area | 6.72 | 6.03 | 7.07 | 0.22 |
Riverfront ecological area | 6.72 | 6.06 | 7.03 | 0.20 |
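The per-zone summary above aggregates the predicted SBE values by planning area (mean, minimum, maximum, and standard deviation). As a small illustrative sketch — the DataFrame layout and column names are hypothetical, not from the paper — such a summary could be produced with pandas as follows:

```python
import pandas as pd

# Hypothetical frame: one row per sample point, with its zone name and predicted SBE score.
df = pd.DataFrame({
    "area": ["Built-up area", "Built-up area", "Riverfront ecological area"],
    "SBE":  [6.5, 6.8, 6.9],
})

summary = (
    df.groupby("area")["SBE"]
      .agg(["mean", "min", "max", "std"])
      .rename(columns={"mean": "meanSBE", "min": "minSBE",
                       "max": "maxSBE", "std": "Standard Dev."})
      .round(2)
)
print(summary)
```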
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).