Unified Scale Theorem: A Mathematical Formulation of Scale in the Frame of Earth Observation Image Classification
Abstract
1. Introduction
- The geographic scale, defined by the extent of the study area;
- The observation (or measurement) scale (or resolution, or support), defined as the recording unit size (or pixel) of spectral reflectance in image data;
- The functional (or operational) scale, defined as the spatial or temporal window in which a feature is recognized or a process operates;
- The cartographic (or map) scale, defined as the ratio between a distance on a map and its corresponding distance in the real world. Map scale is dictated by the capacity of the human eye to distinguish visual details on a map, according to the half-a-millimeter rule of thumb (illustrated below).
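As a minimal worked illustration of this rule of thumb, following the radius-based convention used later in the scale calculator (Section 2.3), the largest admissible map-scale denominator $S$ for a feature of radius $r$ meters is

$$
S = \frac{r}{0.0005\ \mathrm{m}},
$$

so that, for example, a feature of radius 8 m remains distinguishable up to a map scale of about 1:16,000.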
1.1. Image Classification and Scale
1.2. Fractals and Scale Unification
1.3. Research Objectives
2. Methodology
2.1. Background
- Conversion of the original image into a principal component image and use of the PC1 layer;
- Multi-resolution segmentation of the PC1 layer using the Fractal Net Evolution Assessment (FNEA) algorithm (embedded in the eCognition software) for a series of scale factors (f). The scale factor corresponds to the standard deviation of spectral information inside a candidate object and is the most influential input parameter in segmentation;
- Plotting of the scale factor values (f) vs. the resulting mean object size values (sf) and extraction of a power-law equation from the plotted data;
- Application of the rank-size rule to the PC1 layer, followed by extraction of the head–tail portions per partition according to the paradigm of the “ht-index” (Jiang and Yin, 2014) [13];
- In the partition table, computation of the “simulated mean object size” (sn) at every partition level; sn is obtained by dividing the total image extent by the number of elements in the head group at that partition level;
- Computation of the optimal segmentation scale (fn) for each partition level by solving the extracted power-law equation for f (a minimal numerical sketch follows this list).
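The numerical core of this workflow is a power-law fit between scale factor and mean object size, which is then inverted to recover the optimal scale factor for a given simulated mean object size. The following Python sketch illustrates this under stated assumptions: the function names are illustrative, and the (f, s) pairs are taken from the worked example of the scale calculator in Section 3 rather than from raw segmentation output.

```python
import numpy as np

def fit_power_law(f, s):
    """Fit s = a * f**b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(f), np.log(s), 1)
    return np.exp(log_a), b

def optimal_scale_factor(sn, a, b):
    """Invert s = a * f**b: the scale factor fn whose mean object size is sn."""
    return (sn / a) ** (1.0 / b)

# Scale factors and simulated mean object sizes (m^2) from the worked example;
# the fit recovers approximately the reported coefficients a = 16.48, b = 1.5592.
f = np.array([5, 9, 16, 36, 75])
s = np.array([202.7, 506.8, 1242.9, 4400.9, 13821.4])
a, b = fit_power_law(f, s)
print(round(a, 2), round(b, 4))                  # ~16.48, ~1.5592
print(round(optimal_scale_factor(202.7, a, b)))  # -> 5
```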
2.2. Mathematical Formulation
- Condition-1: The pixel size of the image at hand must be smaller than the mean feature size of the smallest functional scale; otherwise, the smallest functional scale must be rejected;
- Condition-2: The number of objects at the largest functional scale must be greater than one, where the number of objects at a scale is defined by dividing the image extent by the mean feature size at that scale;
- Condition-3: The final cartographic scale must be greater than 1 (a minimal check of all three conditions is sketched below).
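The following Python sketch mirrors the spreadsheet logic of the scale calculator (Section 2.3); the function and argument names are illustrative assumptions, and the test values are those of the experiment in Section 3.

```python
def check_conditions(pixel_area, image_extent, s_min, s_max, nominal_scale):
    """Validity checks of the Unified Scale Theorem (all areas in m^2).
    s_min / s_max: mean feature size of the smallest / largest functional scale;
    nominal_scale: the final cartographic scale denominator."""
    cond1 = s_min / pixel_area >= 1      # pixel smaller than the smallest feature
    cond2 = image_extent / s_max > 1     # more than one object at the largest scale
    cond3 = nominal_scale > 1            # meaningful cartographic scale
    return cond1 and cond2 and cond3

# Values from the experiment in Section 3: 10 m pixels (100 m^2), a
# 600,000,000 m^2 image, functional scales from 202.7 to 13,821.4 m^2,
# and a nominal map scale of 1:15,000.
assert check_conditions(100.0, 600_000_000.0, 202.7, 13_821.4, 15_000)
```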
2.3. Scale Calculator
3. Experimentation
3.1. Image Data
3.2. Functional Scales
3.3. Classification Process
- In ArcMap, a set of 100 random points was created for classification training purposes and a second, similar set for testing purposes;
- From every optimal segmentation layer, the objects containing training random points were selected (i.e., 100 objects per scale). As a result, 5 polygon-type training layers were created;
- The training layers were transferred to the Google Earth (GE) application and were updated with land cover information by an independent interpreter, using the GE background imagery closest in date to the Sentinel-2 image (i.e., as close as possible to 28 June 2017). Interpretation was based on visual assessment of the dominant land cover/use within every object;
- Using the land cover information of the training samples, the Sentinel-2 image underwent pixel-based supervised classification with the maximum likelihood (ML) algorithm;
- The class information was transferred to the testing objects (using a majority filter) for every layer. Thus, 5 test layers were created;
- The test layers were transferred to the Google Earth application, where they were assessed visually object by object by an independent interpreter;
- The updated test layers were transferred back to ArcMap to calculate the accuracy figures (producer’s and user’s accuracy per class; see the sketch below).
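The per-class accuracy figures reported in the tables below follow the standard confusion-matrix definitions. A minimal sketch, assuming a confusion matrix built from the interpreter’s reference labels and the classifier’s predictions (the function name is illustrative):

```python
import numpy as np

def class_accuracies(cm):
    """cm[i, j]: number of test objects of reference class i assigned to class j.
    Producer's accuracy: correct / reference total per class (row sums).
    User's accuracy:     correct / classified total per class (column sums).
    NaN (no samples) corresponds to the N/A entries in the tables."""
    correct = np.diag(cm).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        producers = 100.0 * correct / cm.sum(axis=1)
        users = 100.0 * correct / cm.sum(axis=0)
    return producers, users
```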
- At the 75-scale segmentation level, i.e., the largest among the optimal scales, a set of 15 different land cover/use classes was identified in the study area. The mean object size at this level was 18,482 m², with a minimum of 1100 m² and a maximum of 229,700 m². The urban environment was categorized into sparse and dense, while industrial environments were treated independently from the other built-up classes. Agricultural fields were characterized as either cultivated or non-cultivated at the time of assessment. Natural vegetation classes were distinguished into sparse, medium, or dense shrub, shrub-tree, or tree compositions. Fallow land and water (divided into shallow and deep) were the remaining classes;
- At the 5-scale segmentation level, i.e., the smallest among the optimal scales, the mean object size was 152 m², ranging from 100 m² to 1642 m². At this level, two different nomenclature schemes were tested:
  - A detailed scheme with a set of 55 classes, in accordance with the real complexity of the studied environment;
  - A simplified scheme, produced by grouping the detailed classes into more generalized ones (7 classes).
- For the remaining optimal segmentation scales (9, 16, and 36), an approach similar to that of the 75-scale level was followed regarding nomenclature, with the number of classes ranging from 7 up to 11;
- To broaden the range of possible functional scales beyond those suggested by the optimal ones, two additional classifications were attempted, one at the 139-scale and one at the 255-scale of segmentation. These scales, although indicated by the optimal scale detection method of Karydas (2020) [26], were initially rejected because they did not meet the null-number condition of the method;
- In the 139-scale classification, classes containing only one sample were removed and the scheme was thus simplified (leaving 11 of the original 15 classes). The new scheme included some classes that had not been identified in the 75-scale classification (e.g., Grassland). Finally, 9 of the 100 samples were flagged as mixed to a degree that could not be assessed, which can be attributed to the fact that objects at this scale are generally much larger than at the previous scales.
4. Results and Discussion
5. Conclusions
- Its core part (i.e., the segmentation process adapted from Karydas (2020) [26]) is a proven theorem;
- It is supported by an axiom, namely the rule of thumb for human-eye distinction capability;
- It contains a self-evident statement, namely the equivalence of the mean object size at a segmentation level with the functional scale at that level;
- It is confined by limitations (the three predefined conditions);
- It is verified by experimentation.
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Goodchild, M.F. Metrics of scale in remote sensing and GIS. Int. J. Appl. Earth Obs. Geoinf. 2001, 3, 114–120.
2. Levin, S.A. The problem of pattern and scale in ecology. Ecology 1992, 73, 1943–1967.
3. Marceau, D.J.; Hay, G.J. Remote Sensing Contributions to the Scale Issue. Can. J. Remote Sens. 1999, 25, 357–366.
4. Zahra, D.; Blaschke, T. Scale matters: A survey of the concepts of scale used in spatial disciplines. Eur. J. Remote Sens. 2019, 52, 419–434.
5. Lillesand, T.M.; Kiefer, R.W. Remote Sensing and Image Interpretation; John Wiley and Sons, Inc.: New York, NY, USA, 1994; Chapter 7; 750p.
6. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870.
7. Dey, V.; Zhang, Y.; Zhong, M. A review on image segmentation techniques with Remote Sensing perspective. In Proceedings of the ISPRS TC VII Symposium—100 Years ISPRS, Vienna, Austria, 5–7 July 2010; Wagner, W., Székely, B., Eds.; IAPRS: Vienna, Austria, 2010; Volume XXXVIII, Part 7A.
8. Haralick, R.M.; Shapiro, L.G. Image segmentation techniques. Comput. Vis. Graph. Image Process. 1985, 29, 100–132.
9. Karydas, C.G.; Gitas, I.Z. Development of an IKONOS image classification rule-set for multi-scale mapping of Mediterranean rural landscapes. Int. J. Remote Sens. 2011, 32, 9261–9277.
10. Dragut, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127.
11. Janowski, L.; Tylmann, K.; Trzcinska, K.; Rudowski, S.; Tegowski, J. Exploration of Glacial Landforms by Object-Based Image Analysis and Spectral Parameters of Digital Elevation Model. IEEE Trans. Geosci. Remote Sens. 2021, 1–17.
12. Parish, E.J.; Duraisamy, K. A Unified Framework for Multiscale Modeling Using Mori–Zwanzig and the Variational Multiscale Method. arXiv 2018, arXiv:1712.09669.
13. Jiang, B.; Yin, J. Ht-Index for Quantifying the Fractal or Scaling Structure of Geographic Features. Ann. Assoc. Am. Geogr. 2014, 104, 530–540.
14. Lam, N.S.-N.; Quattrochi, D. On the Issues of Scale, Resolution, and Fractal Analysis in the Mapping Sciences. Trans. Am. Geophys. Union 1992, 2, 638–693.
15. Sun, W.; Xu, G.; Gong, P.; Liang, S. Fractal analysis of remotely sensed images: A review of methods and applications. Int. J. Remote Sens. 2006, 27, 4963–4990.
16. Martins, M.D.; Laaha, S.; Freiberger, E.M.; Choi, S.; Fitch, W.T. How children perceive fractals: Hierarchical self-similarity and cognitive development. Cognition 2014, 133, 10–24.
17. Hay, G.J.; Marceau, D.J.; Dube, P.; Bouchard, A. A Multiscale Framework for Landscape Analysis: Object-Specific Analysis and Upscaling. Landsc. Ecol. 2001, 16, 471–490.
18. Lam, N.S.-N.; Qiu, H.L.; Quattrochi, D.A.; Emerson, C.W. An evaluation of fractal methods for characterizing image complexity. Cartogr. Geogr. Inf. Sci. 2002, 29, 25–35.
19. Mandelbrot, B. The Fractal Geometry of Nature; W. H. Freeman and Co.: New York, NY, USA, 1982.
20. Roy, A.G.; Gravel, G.; Gauthier, C. Measuring the dimension of surfaces: A review and appraisal of different methods. In Proceedings of the Eighth International Symposium on Computer-Assisted Cartography (Auto-Carto 8), Baltimore, MD, USA, 29 March–3 April 1987; pp. 68–77.
21. Tate, N.J. Estimating the fractal dimension of synthetic topographic surfaces. Comput. Geosci. 1998, 24, 325–334.
22. Sun, W. Three new implementations of the triangular prism method for computing the fractal dimension of remote sensing images. Photogramm. Eng. Remote Sens. 2005, 72, 373–382.
23. Husain, A.; Reddy, J.; Bisht, D.; Sajid, M. Fractal dimension of coastline of Australia. Sci. Rep. 2021, 11, 6304.
24. Berke, J. Using Spectral Fractal Dimension in Image Classification. In Innovations and Advances in Computer Sciences and Engineering; Sobh, T., Ed.; Springer: Dordrecht, The Netherlands, 2010.
25. Unified Field Theory. Wikipedia. Available online: https://en.wikipedia.org/wiki/Unified_field_theory (accessed on 21 August 2021).
26. Karydas, C.G. Optimization of multi-scale segmentation of satellite imagery using fractal geometry. Int. J. Remote Sens. 2020, 41, 2905–2933.
27. Karydas, C.; Jiang, B. Scale Optimization in Topographic and Hydrographic Feature Mapping Using Fractal Analysis. ISPRS Int. J. Geo-Inf. 2020, 9, 631.
28. Bobick, A.; Bolles, R. The representation space paradigm of concurrent evolving object descriptions. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 146–156.
29. European Space Agency (ESA). Available online: https://www.esa.int/ESA (accessed on 5 August 2019).
30. Torma, M.; Hatunen, S.; Harma, P.; Jarvenpaa, E. Sentinel-2 Images and Finnish Corine Land Cover Classification. In Proceedings of the 1st ESA Sentinel-2 Preparatory Symposium, Frascati, Italy, 23–27 April 2012.
31. ArcMap Documentation: How Maximum Likelihood Classification Works. Available online: https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/how-maximum-likelihood-classification-works.htm (accessed on 18 August 2021).
32. The Definitive Glossary of Higher Mathematical Jargon. Available online: https://mathvault.ca/math-glossary/#theorem (accessed on 11 September 2021).
33. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
34. Ratcliffe, J.H.; McCullagh, M.J. Hotbeds of Crime and the Search for Spatial Accuracy. J. Geogr. Syst. 1999, 1, 385–398.
Scale Calculator spreadsheet (formula view); the column letters and row numbers reproduce the cell grid referenced by the formulas (cell references have been normalized to this grid):

| Row | A | B | C | D | E | F | G | H | I | J |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Geographic Scale | Observation Scale | Functional Scale | | | | | Cartographic Scale | | |
| 2 | Image size (m²) | Pixel (m) | Equation (2) | | | | | | | |
| 3 | INPUT VALUE | INPUT VALUE | Min mean feature size | | | | | Equation (3) | | Rule of thumb |
| 4 | Image side (m) | Surface (m²) | =SQRT(C5/3.14) | Radius (m) | | | | 1/ | =C4/J4 | 0.0005 |
| 5 | =SQRT(A3) | =B3 × B3 | =C10 | Surface (m²) | | | | Nominal scale | =((INT(I4/10,000) + 1) × 10,000) − 5000 | Condition-3 |
| 6 | | | Max number of objects | | | | | | Nominal scale > 1 | |
| 7 | Condition-2 | Condition-1 | =A3/C10 | | | | | | =IF(I5 > 1, “OK”, “REJECT”) | |
| 8 | Min. no. of objects | Min. object size/pixel | Equation (1) | | Segmentation process | | | Minimum mapping units | | |
| 9 | >1 | >1 | Integer(sn) | sn | fn | a | b | Side (m) | Radius (m) | Cartographic scale |
| 10 | =A3/C14 | =C5/B5 | =INT(D10) + 1 | =F10 × POWER(E10,G10) | INPUT VALUE | INPUT VALUE | INPUT VALUE | =SQRT(C10) | =SQRT(C10/3.14) | =I10/0.0005 |
| 11 | =IF(A10 ≥ 1, “OK”, “REJECT”) | =IF(B10 ≥ 1, “OK”, “REJECT”) | =INT(D11) + 1 | =F11 × POWER(E11,G11) | INPUT VALUE | =F10 | =G10 | =SQRT(C11) | =SQRT(C11/3.14) | =I11/0.0005 |
| 12 | | | =INT(D12) + 1 | =F12 × POWER(E12,G12) | INPUT VALUE | =F11 | =G11 | =SQRT(C12) | =SQRT(C12/3.14) | =I12/0.0005 |
| 13 | | | =INT(D13) + 1 | =F13 × POWER(E13,G13) | INPUT VALUE | =F12 | =G12 | =SQRT(C13) | =SQRT(C13/3.14) | =I13/0.0005 |
| 14 | | | =INT(D14) + 1 | =F14 × POWER(E14,G14) | INPUT VALUE | =F13 | =G13 | =SQRT(C14) | =SQRT(C14/3.14) | =I14/0.0005 |
Scale Calculator spreadsheet (worked example with the values of the experiment in Section 3):

| Geographic Scale | Observation Scale | | Functional Scale | | | | | Cartographic Scale | | |
|---|---|---|---|---|---|---|---|---|---|---|
| Image size (m²) | Pixel (m) | | Equation (2) | | | | | | | |
| 600,000,000 | 10.00 | | Min mean feature size | | | | | Equation (3) | | Rule of thumb |
| Image side (m) | Surface (m²) | | 8.0 | Radius (m) | | | | 1/ | 16,081 | 0.0005 |
| 24,495 | 100.00 | | 203 | Surface (m²) | | | | Nominal scale | 15,000 | Condition-3 |
| | | | Max number of objects | | | | | | Nominal scale > 1 | |
| Condition-2 | Condition-1 | | 2,970,297 | | | | | | OK | |
| Min. no. of objects | Min. object size/pixel | | Equation (1) | | Segmentation process | | | Minimum mapping units | | |
| >1 | >1 | Level | Integer(sn) | sn | fn | a | b | Side (m) | Radius (m) | Cartographic scale |
| 43,409 | 2.03 | 1 | 203 | 202.7 | 5 | 16.48 | 1.5592 | 14.2 | 8.0 | 16,081 |
| OK | OK | 2 | 507 | 506.8 | 9 | 16.48 | 1.5592 | 22.5 | 12.7 | 25,414 |
| | | 3 | 1243 | 1242.9 | 16 | 16.48 | 1.5592 | 35.3 | 19.9 | 39,792 |
| | | 4 | 4401 | 4400.9 | 36 | 16.48 | 1.5592 | 66.3 | 37.4 | 74,876 |
| | | 5 | 13,822 | 13,821.4 | 75 | 16.48 | 1.5592 | 117.6 | 66.3 | 132,694 |
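The per-level block of this worked example can be reproduced programmatically. A minimal Python sketch under the same conventions (circle-equivalent radius with π ≈ 3.14, half-a-millimeter rule of thumb); small deviations from the tabulated values stem from the rounding of the printed coefficients a and b:

```python
import math

a, b = 16.48, 1.5592      # fitted power-law coefficients (Section 3)
EYE_LIMIT = 0.0005        # half-a-millimeter rule of thumb, in meters

for level, fn in enumerate([5, 9, 16, 36, 75], start=1):
    sn = a * fn ** b                  # simulated mean object size (m^2)
    side = math.sqrt(sn)              # side of the equivalent square (m)
    radius = math.sqrt(sn / 3.14)     # radius of the equivalent circle (m)
    carto = radius / EYE_LIMIT        # cartographic scale denominator
    print(f"{level}  fn={fn:3d}  sn={sn:9.1f}  side={side:6.1f}  "
          f"radius={radius:5.1f}  1:{carto:,.0f}")
```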
No | Class for Scale Factor 75 | Producer’s Accuracy (%) | User’s Accuracy (%) |
---|---|---|---|
1 | Urban_Sparse | 77.8 | 87.5 |
2 | Urban_Dense | 28.6 | 100.0 |
3 | Industrial | 71.4 | 45.5 |
4 | Fields_Cultivation | 58.3 | 70.0 |
5 | Fields_No-Cultivation | 90.3 | 73.7 |
6 | Shrubs_Medium | 0.0 | N/A
7 | Shrubs_Dense | N/A | N/A |
8 | Shrub Trees_Sparse | 100.0 | 50.0 |
9 | Shrub Trees_Medium | 50.0 | 71.4 |
10 | Shrub Trees_Dense | 77.8 | 77.8 |
11 | Trees_Medium | 0.0 | N/A |
12 | Trees_Dense | 62.5 | 71.4 |
13 | Fallow Land | N/A | 0.0 |
14 | Water_Shallow | 100.0 | 100.0 |
15 | Water_Deep | 100.0 | 66.7 |
No | Class for Scale Factor 139 | Producer’s Accuracy (%) | User’s Accuracy (%)
---|---|---|---|
1 | Urban | 100.0 | 100.0 |
2 | Industrial | 87.5 | 100.0 |
3 | Fields | 89.7 | 57.8 |
4 | Shrubs_Sparse | 11.8 | 33.3 |
5 | Shrubs_Medium | 0.0 | 0.0 |
6 | Shrubs_Dense | 71.4 | 55.6 |
7 | Grassland | 50.0 | 100.0 |
8 | Forest Trees | 0.0 | 0.0 |
9 | Fallow Land | 66.7 | 50.0 |
10 | Water_Shallow | N/A | 0.0 |
11 | Water_Deep | 100.0 | 100.0 |