Article

An Operational Radiometric Correction Technique for Shadow Reduction in Multispectral UAV Imagery

Grumets Research Group, Departament de Geografia, Edifici B, Universitat Autònoma de Barcelona, 08193 Bellaterra, Catalonia, Spain
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(19), 3808; https://doi.org/10.3390/rs13193808
Submission received: 29 August 2021 / Revised: 19 September 2021 / Accepted: 21 September 2021 / Published: 23 September 2021
(This article belongs to the Special Issue Remote Sensing for Land Cover and Vegetation Mapping)

Abstract

This study focuses on the recovery of information from shadowed pixels in RGB or multispectral imagery sensed from unmanned aerial vehicles (UAVs). The proposed technique is based on the concept that a property characterizing a given surface is its spectral reflectance, i.e., the ratio between the radiant flux reflected by the surface and the radiant flux received by it, and that this ratio is usually similar under direct-plus-diffuse irradiance and under diffuse irradiance when Lambertian behavior can be assumed. Scene-dependent elements, such as trees, shrubs, man-made constructions, or terrain relief, can block part of the direct irradiance (usually sunbeams), so that part of the surface receives only diffuse irradiance. As a consequence, shadowed surfaces appear in the image created by the UAV remote sensor. Regardless of whether the imagery is analyzed by means of photointerpretation or digital classification methods, when the objective is to create land cover maps, it is hard to treat these areas coherently with the areas receiving direct and diffuse irradiance. The hypothesis of the present work is that the relationship between irradiance conditions in shadowed and non-shadowed areas can be determined by following classical empirical line techniques, fulfilling the objective of a coherent treatment of both kinds of areas. The novelty of the presented method relies on the simultaneous recovery of information in non-shadowed and shadowed areas through in situ spectral reflectance measurements of characterized Lambertian targets, followed by smoothing of the penumbra area.
Once in the lab, we first accurately detected the shadowed pixels by combining two well-known techniques for the detection of shadowed areas: (1) a physical approach based on the sun's position and the digital surface model of the area covered by the imagery; and (2) an image-based approach using the histogram properties of the intensity image. In this paper, we present the benefits of the combined usage of both techniques. Secondly, we applied a fit between non-shadowed and shadowed areas by using twin sets of spectrally characterized targets. One set was placed under direct and diffuse irradiance (non-shadowed targets), whereas the second set (with the same spectral characteristics) was placed under diffuse irradiance only (shadowed targets). Assuming that the reflectance of the homologous targets of each set was the same, we approximated the diffuse incoming irradiance through an empirical line correction. The model was applied to all detected shadowed areas in the whole scene. Finally, a smoothing filter was applied to the penumbra transitions. The presented empirical method allowed the operational and coherent recovery of information from shadowed areas, which are very common in high-resolution UAV imagery.

1. Introduction

Since the beginning of optical remote sensing (RS), much of the scientific literature has been devoted to shadow hindrances: first in aerial, later in satellite, and recently in drone-acquired imagery. The shadow problems differ at these three scales mainly due to the spatial resolution (SR), but they all share the common trait that shadowed pixels are darker areas where the surface radiance is lower due to a lack of direct illumination, which often causes misinterpretations in reflectance estimation [1]. In an ideal de-shadowing process, a surface located in non-shadowed conditions and the same surface located in shadowed conditions should result in the same surface reflectance value. This is because the reflectance factor is the ratio of the radiant flux actually reflected by a sample surface to that which would be reflected into the same reflected-beam geometry by an ideal Lambertian surface [2,3].
Topographic shadows [4,5,6,7,8] and cloud shadows [9,10] are the main focus of shadow correction both in satellite optical RS imagery with moderate SR (coarser than 100 m), such as MODIS [11] or AVHRR [12], and in RS with high SR (coarser than 10 m), such as Landsat [13] or Sentinel-2 [14]. In very-high SR imagery (coarser than 0.25 m) from either satellite platforms, such as WorldView [15], or airborne sensors, such as AHS-160 [16,17], CASI [18] or DMCIII [19], the focus has additionally been on the shadows projected by superficial elements such as buildings and trees [20,21,22,23]. In drone (also known as unmanned aerial vehicle, UAV) imagery, new challenges are added to these problems due to the ultra-high SR (typically up to 0.01 m) provided by sensors such as MicaSense RedEdge [24], Parrot Sequoia [25], Mapir [26] or MAIA [27], making it necessary to adapt inherited airborne and satellite shadow detection and reduction techniques to the unprecedented image detail [28].
The partial or total occlusion of direct light from a source of illumination [23] causes the cast shadow of an object. There are two main classes of shadows [23,29,30]: (a) cast shadows, which are projected by the object in the direction away from the light source; and (b) self-shadows, which are the areas of the object not illuminated by direct light (pixels with incident angle ≥ 90°). Within a cast shadow, the area called umbra is where direct light is completely blocked by the object, and the area called penumbra is where direct light is partially blocked by the object (the transition zone between umbra and non-umbra). The width of the projected penumbra can be delimited using trigonometry, accounting for the angular width of the illumination source and the distance between the object and the surface onto which the shadow is projected; this is especially relevant for the top shadows of tall elements (e.g., skyscrapers) [21]. However, only umbra-cast shadowed pixels are tackled by most shadow detection and reduction methods because ultra-realistic object modelling is usually required for the accurate delimitation of the penumbra-cast shadow [23]. It is worth noting that material transmissivity is an additional problem in shadow removal [1], especially at UAV ultra-high SR. The intrinsic complexity of light transmission through each single object is avoided in this work, although it has been approached in specific studies at leaf scale [31].
The radiometric theoretical basis of optical multispectral remote sensing in the visible and near-infrared regions (VNIR), i.e., between 400 nm and 900 nm, is based on the measurement of the spectral surface reflectance (ρλ), in order to be comparable when imagery is acquired on different dates and/or times, by different sensors, and in different illumination conditions. Imagery is converted to ρλ at pixel level, i.e., the spectral radiance (Lλ, W·m−2·sr−1·µm−1) reflected by a surface located in a given pixel with respect to the total spectral irradiance (Iλ, W·m−2·µm−1) received [32], which are both wavelength-dependent. The total spectral irradiance in non-cloud-covered conditions reaching a given surface, which is composed of the direct solar irradiance (Idirectλ), a less important component of diffuse irradiance (skylight) scattered by the atmosphere (Idiffuseλ), and a still much lower component of irradiance coming from the reflection by surrounding elements (Ireflectedλ), is very difficult to estimate due to the intrinsic characteristics of the unknown materials around the given surface. The estimation of diffuse radiance is key for the correction of shadowed pixels, where it is the main source of irradiation [33]. In UAV imagery, hyperspectral and multispectral sensor systems implement specific downwelling light sensor (DLS) devices [34], which obtain direct information to calculate the at-sensor reflectance. Moreover, UAV sensor manufacturers provide calibration panels to perform empirical line corrections [35] to obtain spectral surface reflectance. Both methods are commonly used in UAV frame cameras to obtain dimensionless reflectance values. Field spectroradiometric measurements are usually the ground truth for validating the spectral surface reflectance imagery resulting from radiometric corrections of satellite, airborne or UAV optical data [36,37,38,39,40]. 
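These quantities can be summarized schematically as follows (standard radiometric definitions under the Lambertian assumption, not equations reproduced from this paper):

```latex
% Spectral surface reflectance as the ratio of reflected radiance to
% total incoming irradiance (Lambertian assumption):
\rho_\lambda = \frac{\pi \, L_\lambda}{I_\lambda},
\qquad
I_\lambda = I^{\mathrm{direct}}_\lambda + I^{\mathrm{diffuse}}_\lambda + I^{\mathrm{reflected}}_\lambda
```

In umbra pixels the direct term vanishes, so the denominator reduces to the diffuse (plus reflected) components; this is why estimating the diffuse irradiance is the key to correcting shadowed pixels.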
The spectral surface reflectance of a given material does not change when located in shadowed conditions (receiving only diffuse irradiance) or in non-shadowed conditions (receiving mainly direct irradiance): what changes is the reflected spectral radiance because the surface receives less total spectral irradiance, thereby appearing darker in shadowed conditions.
The hypothesis of the present work is that the relationship between irradiance conditions in shadowed areas and non-shadowed areas can be determined by following classical empirical line techniques [41,42,43] for fulfilling the objective of a coherent treatment in both kinds of areas. To this end, we present a simple and operational radiometric correction technique based on the usage of twin sets of radiometrically characterized panels: one set located in shadowed conditions, and the other set in non-shadowed conditions. By first determining the response values of the targets in non-shadowed areas, the procedure seeks, on the one hand, to obtain the same reflectance values in the homologous twin targets located in the shadowed areas after image de-shadowing, and on the other hand, to expand the correction model to all the detected shadows in the scene. The method includes three sequential blocks: (1) shadow detection; (2) shadow reduction; and (3) edge correction filtering to smooth the transition between de-shadowed areas and non-shadowed areas.

2. Materials and Methods

2.1. Experiment Conceptualization and Materials

The main concept is that the spectral reflectance of a sample surface is the same in direct-plus-diffuse illumination conditions as in diffuse illumination conditions (what is variable is the spectral irradiance) [2,3]. Additionally, there is a need to account for atmospheric radiance contributions to the at-sensor spectral radiance [4,5]. These principles can be applied using an empirical line approach, among others [41,42,43]. Then, the information retrieval from shadowed pixels can be achieved in three stages: (1) shadow detection; (2) shadow reduction in all shadowed scene pixels using empirical relationships between the characterized set of panels located in shadowed and non-shadowed areas; and (3) edge filtering to smooth the transition between de-shadowed and non-shadowed areas (Figure 1).
The objective was to test whether the de-shadowing workflow was successful and operational in a real scenario. The method for checking the hypothesis is presented in an applied scenario (forested area). The case study was carried out in a Mediterranean forest environment; specifically, within a restored quarry in Catalonia (X = 411,713 m; Y = 4,616,052 m (UTM31N-WGS84)) with grasslands, scrublands, and Pinus halepensis trees of several heights due to the afforestation carried out within the framework of restoration actions (Figure 2a). The authors previously used this area for UAV vegetation mapping studies and geometric and radiometric calibration experiments [38,39,40,44]. The flight was performed on 27 April 2018 between 10:36 and 10:47 UTC, at 90 m above ground level, and was designed to cover a 64,386 m2 study area (Table 1). By applying photogrammetric and Structure from Motion processing, an orthoimage of 6 cm SR (Figure 2b) and a digital surface model (DSM) with a 9 cm pixel size were obtained.
In order to relate the non-shadowed and shadowed illumination conditions, we used a set of 500 mm × 500 mm × 20 mm ethylene-vinyl acetate (EVA) panels in direct illumination conditions and a twin set of panels in a cast shadow. Panels were spectrally characterized using a hand-held OceanOptics USB2000+ spectroradiometer [45] (Table 2, Figure 3a–e) and presented a Lambertian behavior [37]. The spectral range covered the VNIR region (340 nm–1030 nm), with a sampling interval of 0.3 nm and a spectral resolution (full width at half maximum, FWHM) of 1.26 nm. A reference panel was used to measure the incoming spectral irradiance. At the same time, UAV multispectral imagery was acquired with a Parrot Sequoia sensor (Table 3, Figure 3c,d) on board a DJI Phantom III quadcopter for the applied case in a forestry environment.

2.2. Methods: Shadow Detection, Shadow Reduction, Edge Filtering

The UAV at-sensor reflectance image (obtained with the DLS irradiance reference) was empirically fitted to the characterized panels to obtain at-surface reflectance values [38]. Afterwards, the methodology for recovering information from shadowed areas consisted of three processing blocks, previously mentioned (Figure 1) and detailed in this section: (a) shadow detection, (b) shadow reduction, and (c) edge filtering.

2.2.1. Shadow Detection

Let us consider the scenario of a clear sky and direct illumination, as in the example case, wherein the sun vector hits a given surface, thereby projecting shadows. Shadow detection can mainly be carried out with physical and/or image-based approaches (Figure 4). Physical approaches are based on the sun's position and the cast shadow projected by the surface objects at a given location. The sun's position is well known for a given geographical position, and the surface is modeled using a DSM, geometrically obtaining a digital cast shadow model (DCSM) and a digital illumination model (DIM). Image-based approaches are based on the image band histogram, selecting those dark pixels located under a given threshold, assuming that those pixels have low illumination and are located in shadows. Both approaches provide good results, but commission errors (false positives: non-shadow pixels classified as shadows, e.g., low-signal surfaces [23]) and omission errors (false negatives: shadowed pixels classified as non-shadow) often appear. False positives in the physical approach appear when the DSM is of insufficient quality or is not truly 3D but 2.5D, producing false positives especially in the trunk shadows in vegetation environments. False positives in the image-based approach appear in dark surfaces that are not located in shadows, such as water or dark soils. To overcome the commission errors of both methods, in our approach, the detailed shadow detection was obtained by combining image-based and physically based methods [33].
We selected pixels fulfilling the joint condition of being under the first-quartile histogram threshold in the intensity image, in the Red Edge band and in the Near-Infrared band (image-based condition), while also being in the cast shadow of the Digital Cast Shadow Model (DCSM) or in a self-shadow of the Digital Illumination Model (DIM) (physically based condition). This constrains the possible histogram commissions to the geometrically candidate pixels, considering that digital elevation models (DEM) are not truly 3D but 2.5D and provide false positives, especially in the trunk shadows.
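The joint condition above can be sketched with NumPy as follows. This is an illustrative reading of the described rule, not the authors' code: the three spectral thresholds are combined with a logical AND, and the two geometric masks with a logical OR; function and array names are our own.

```python
import numpy as np

def detect_shadows(intensity, red_edge, nir, dcsm_shadow, dim_self_shadow):
    """Combine image-based and physically based shadow candidates.

    A pixel is flagged as shadow only when it is darker than the first
    quartile in the intensity, Red Edge and Near-Infrared bands
    (image-based condition) AND falls inside the cast-shadow (DCSM) or
    self-shadow (DIM) geometric masks (physical condition).
    """
    image_based = (
        (intensity <= np.percentile(intensity, 25))
        & (red_edge <= np.percentile(red_edge, 25))
        & (nir <= np.percentile(nir, 25))
    )
    physical = dcsm_shadow | dim_self_shadow
    return image_based & physical
```

Constraining the dark-pixel candidates to the geometric masks is what removes the dark-water/dark-soil commissions of a purely histogram-based threshold.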

2.2.2. Shadow Reduction

Detected shadows can be managed using three approaches [21]: (1) simple masking, (2) multisource data fusion, and (3) radiometric enhancement. In the masking approach, shadowed pixels are simply converted to nodata values (removed), whereas in the multisource data fusion approach, the shadowed pixels are filled with imagery captured at other moments; both approaches are out of the focus of this work. Shadow reduction with radiometric enhancement techniques recovers the radiometry of the shadowed pixels, increasing their brightness [21] to make them seamless with neighboring non-shadowed pixels. Due to the great complexity and diversity of shadows, errors can appear as pixels with excessively high values after shadow reduction (overcorrection errors) or pixels with excessively low values after shadow reduction (undercorrection errors). In our approach (Figure 5), once the shadowed pixels were selected, they were further processed separately. The reduction was achieved with an invariant color model method [33], which takes advantage of pairs of panels simultaneously located in shadowed and non-shadowed conditions. We used four panels to fit the relationship, whereas the remaining ones were used for validation purposes [38]. The linear function relating both panel sets was applied to all the detected shadowed pixels in the image. Finally, the de-shadowed pixels were overlaid onto the image, yielding a near-final result. However, at this stage, the problem of a poor transition between the de-shadowed areas and the non-shadowed areas appeared (Figure 5). To solve this problem, located in the penumbra areas, a further step was needed: edge filtering (Section 2.2.3).
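The per-band empirical line step can be sketched as a least-squares gain/offset fit between the twin panel sets, applied only to the detected shadow pixels. The panel values below are illustrative placeholders, not the paper's measurements, and the function names are our own:

```python
import numpy as np

def fit_empirical_line(shadow_panel_vals, sunlit_panel_vals):
    """Gain/offset mapping shadowed panel readings onto the reflectance of
    their sunlit twins (one least-squares fit per spectral band)."""
    gain, offset = np.polyfit(shadow_panel_vals, sunlit_panel_vals, deg=1)
    return gain, offset

def deshadow(band, shadow_mask, gain, offset):
    """Apply the per-band linear correction only to detected shadow pixels."""
    out = band.astype(float).copy()
    out[shadow_mask] = gain * out[shadow_mask] + offset
    return out
```

In the paper's workflow this fit is computed once per Parrot Sequoia band (Green, Red, Red Edge, Near-Infrared) and then extrapolated to every detected shadow pixel in the scene.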

2.2.3. Edge Filtering

Boundary pixels between de-shadowed and non-shadowed regions are problematic due to the spatial mixture of illumination conditions and the imperfections of the DSM. Thereby, these pixels tend to be overcorrected if considered as shadows, or undercorrected otherwise, leading to artifacts and a visual sensation of patched de-shadowed areas; the extraction of a boundary map [21,33] is a possible solution for dealing with these areas. In our approach, the shadow mask was vectorized and a buffer (edge belt) was calculated for that limit area; in this case study, the penumbra area was on the order of 10 cm, so the buffer area was 2 pixels (1 pixel on each side of the shadow mask). As we pointed out in Section 1, precise determination of that area can be performed, according to [21], through Equation (1):
w = H (1/tan(e − ε/2) − 1/tan(e + ε/2))    (1)
where w is the width of the penumbra, H is the height of the object (a tree, a house, etc.), e is the elevation angle of the center of the sun, and ε is the angular width of the sun at the top of the object. Figure 6 illustrates this formula. Nevertheless, since ultra-realistic object modelling was not our goal, a visual inspection was enough to set a convenient value.
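Equation (1) is straightforward to evaluate; a minimal sketch (the 0.53° default is the sun's apparent angular diameter, and the sample heights/elevations are illustrative, not taken from the case study):

```python
import math

def penumbra_width(height_m, sun_elev_deg, sun_angular_width_deg=0.53):
    """Width of the penumbra cast by an object of a given height, Eq. (1):
    w = H * (1/tan(e - eps/2) - 1/tan(e + eps/2))."""
    e = math.radians(sun_elev_deg)
    eps = math.radians(sun_angular_width_deg)
    return height_m * (1.0 / math.tan(e - eps / 2) - 1.0 / math.tan(e + eps / 2))
```

For a 10 m tree with the sun around 50° elevation, this gives a penumbra on the order of 0.1–0.2 m, consistent with the ~10 cm penumbra observed in this case study; lower sun elevations widen the penumbra.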
That edge belt was used as follows: firstly, the output image of the shadow reduction block was smoothed with a 3 × 3 low-pass filter kernel; secondly, this smoothed image was clipped with the edge belt, and the smoothed belt was overlaid onto the patched image, thus applying the kernel filter only in the penumbra transition (Figure 7).
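A raster sketch of this step, assuming a boolean shadow mask (NumPy-only, with our own helper names; the paper performs the belt extraction on a vectorized mask, and `np.roll` wraps at array edges, which is acceptable for interior shadows in a sketch):

```python
import numpy as np

def lowpass3x3(img):
    """3 x 3 mean (low-pass) filter with edge-replicated padding."""
    p = np.pad(img, 1, mode="edge")
    return sum(
        p[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0

def edge_belt(shadow_mask):
    """One-pixel belt around the shadow-mask boundary, built by dilating
    and eroding the mask with its four 4-connected neighbor shifts."""
    shifts = [np.roll(shadow_mask, s, axis=a) for a in (0, 1) for s in (1, -1)]
    dilated = shadow_mask | shifts[0] | shifts[1] | shifts[2] | shifts[3]
    eroded = shadow_mask & shifts[0] & shifts[1] & shifts[2] & shifts[3]
    return dilated & ~eroded

def smooth_penumbra(img, shadow_mask):
    """Replace only the belt pixels with their smoothed values, leaving the
    rest of the de-shadowed image untouched."""
    belt = edge_belt(shadow_mask)
    out = img.copy()
    out[belt] = lowpass3x3(img)[belt]
    return out
```

Restricting the kernel to the belt keeps the blur confined to the penumbra transition, so the de-shadowed interiors stay sharp.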

2.3. Methods: Statistical Data Analysis

We will present the coefficients of determination, R², of the linear regression between the seven twin targets located both in diffuse illumination conditions and in direct illumination conditions. If these regressions show high R² (>0.90) and p-values lower than 0.01 in all the spectral bands of the sensor, with a reasonably good distribution along the [0, 100] % of reflectance values (also for all spectral bands), we will consider that the approach is robust and fulfils the hypothesis of our proposal. This analysis will be completed with a visual inspection of the corrected image to check whether most shadows have been removed.
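The acceptance criterion on R² can be checked with a few lines of NumPy (p-values would additionally require a regression routine such as `scipy.stats.linregress`; the sample values below are illustrative, not the paper's panel data):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the simple linear regression y ~ x,
    computed as the squared Pearson correlation."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

def approach_is_robust(band_r2_values, threshold=0.90):
    """Robustness rule from the text: every band must exceed the R2 threshold."""
    return all(r2 > threshold for r2 in band_r2_values)
```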

3. Results

The shadow detection method provided good results in detecting those dark pixels located in projected shadows. The sun position was well known during the drone mission (±5 min around the central time of the flight), and the DSM was detailed enough (9 cm/pixel) to obtain a good modelling of the projected shadows (DCSM). However, the application of a constraint to select only dark pixels in the projected shadows was needed to avoid false positives, particularly in the trunk shadows (Figure 8).
The shadow reduction method, based on the empirical relationship between the shadowed panels and the non-shadowed panels, provided the ad hoc parameters for converting the shadowed pixels of the image into de-shadowed pixels in each of the Parrot Sequoia bands (Green, Red, Red Edge and Near-Infrared) (Figure 9, Table 4).
The application of this linear function to the previously detected shadowed pixels led to the recovery of information that was previously hidden, thus obtaining at-surface reflectance values both in direct illumination and diffuse illumination conditions. However, and as mentioned above, edge filtering was required to smooth the patched transition between the de-shadowed and the shadowed pixels (Figure 10).
The edge map, which is based on the shadow mask, delimited the zone with overcorrections and undercorrections due to the penumbra, and a buffer of two pixels was applied to delimit the transition zone. This buffer was used to clip the smoothed image, which was then overlaid onto the de-shadowed image, thereby obtaining reasonably good results (Figure 11).

4. Discussion

The ultra-high spatial resolution of UAV imagery allows the use of twin sets of affordable and useful targets in field campaigns. Images with less than 10 cm/pix allow for enough pixels inside panels of 500 mm × 500 mm for obtaining a central pixel without adjacency contributions. This new opportunity for remote sensing opens doors for easy use of the invariant color model method [33] using twin panels simultaneously located in shadowed and non-shadowed conditions, and ultra-high spatial resolution sensors.
The detailed imagery and derived DSM allowed a reasonable modelling of the vegetation structures, but the nature of the 2.5D DSM makes it necessary to combine the physical approach with an image-based approach in order to obtain better shadow detection. However, not all shadows were detected, mainly due to the specific complexity of canopy structures and the photogrammetric surface modelling inaccuracies.
Shadow correction based on the linear relationship between the targets located under direct illumination and diffuse illumination led to good results in the extrapolation to all the detected shadows on the scene, where general overcorrections or undercorrections are not observed. On the other hand, there were artifacts in the limit areas of the detected shadows. This problem was tackled with the edge filtering correction. The edge map based on the shadow mask was precise for detecting the artifacts, whereas the buffer accounted for all of them. The kernel filter, which smoothed the penumbra area, was effective for reducing the patching effects, but obviously blurred these areas.
Limitations: this method is designed for ad hoc radiometric correction, as the approach is completely empirical. Therefore, the empirical line parameters are scene-dependent. The scale factor depends on the opacity of the objects that cast the shadows, and the bias factor depends on the atmospheric composition (atmospheric optical depth). This limits the establishment of generalized, fixed parameters relating shadowed and non-shadowed pixels. However, the method remains useful for common UAV applications where the area covered is not large enough to contain significant atmospheric composition differences; this is usually the case when the flight height does not exceed 120 m (400 ft), as in current regulations in most European countries [46].
Although the overall enhancement of the image is evident, it is worth noting that not all of the shadows were recovered. Due to the ultra-high spatial detail of UAV imagery, some shadows remained inside the tree canopy. Additionally, not all the shadows were enhanced with the same quality because the empirical approach was fitted to the shadows of the tree canopy, but not to a closed-forest situation; therefore, we followed a conservative approach to avoid overcorrection errors. The de-shadowed areas do not look patched over the non-shadowed areas, and the transition between both illumination conditions is, in our humble opinion, a strength of the presented methodology.
The shadow detection, shadow reduction and edge filtering workflow uses GIS tools implemented in most GIS software and can thus be automated by using model builders. In our case, we used the free MiraMon RS and GIS software [47].

5. Conclusions

In this study, a method for shadow reduction in UAV imagery was described and tested; the method combined well-known RS and image enhancement techniques from the airborne and satellite image processing world with the new opportunities offered by drone imagery in reduced scenarios. The empirical line correction using radiometric references has proven to be a good method for obtaining at-surface reflectance values from UAV imagery. In this work, it has also proven to be an interesting method for obtaining surface reflectance values in shadowed forested areas. Apart from the efficiency of the method in recovering radiometric information, it is a simple and operational method for smoothing the transition in the edge zone by applying a correction in the buffer of the limit area. The high spatial resolution of UAV imagery allows the use of twin sets of affordable reference targets in field campaigns, which is a common requirement and limitation of ad hoc radiometric corrections. This new opportunity for RS opens the door to the easy use of invariant color models, as well as other well-known remote sensing techniques such as shadow detection or edge filtering smoothing.

Author Contributions

Conceptualization, X.P. and J.-C.P.; methodology, X.P. and J.-C.P.; software, X.P.; validation, X.P. and J.-C.P.; formal analysis, J.-C.P.; investigation, J.-C.P.; resources, X.P.; data curation, J.-C.P.; writing—original draft preparation, J.-C.P.; writing—review and editing, X.P.; visualization, J.-C.P.; supervision, X.P.; project administration, X.P.; funding acquisition, X.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by the Catalan Government under Grant (SGR2017-1690), by the European Union through the NewLife4Drylands Project (LIFE20 PRE/IT/000007) and by the Spanish MCIU Ministry through the NEWFORLAND research project (RTI2018-099397-B-C21/C22 MCIU/AEI/ERDF, EU).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adler-Golden, S.; Matthew, M.W.; Anderson, G.P.; Felde, G.W.; Gardner, J.A. An Algorithm for De-Shadowing Spectral Imagery. Proc. SPIE Imaging Spectrom. 2002, VIII 4816, 203–210. [Google Scholar] [CrossRef]
  2. Nicodemus, F.E.; Richmond, J.C.; Hsia, J.J. Geometrical Considerations and Nomenclature for Reflectance; National Bureau of Standards, US Department of Commerce: Washington, DC, USA, 1977. Available online: http://graphics.stanford.edu/courses/cs448-05-winter/papers/nicodemus-brdf-nist.pdf (accessed on 1 August 2021).
  3. Milton, E.J.; Schaepman, M.E.; Anderson, K.; Kneubühler, M.; Fox, N. Progress in field spectroscopy. Remote Sens. Environ. 2009, 113, S92–S109. [Google Scholar] [CrossRef] [Green Version]
  4. Teillet, P.M.; Guindon, B.; Goodenough, D.G. On the slope-aspect correction of multispectral scanner data. Can. J. Remote Sens. 1982, 8, 84–106. [Google Scholar] [CrossRef] [Green Version]
  5. Pons, X.; Solé-Sugrañes, L. A simple radiometric correction model to improve automatic mapping of vegetation from multispectral satellite data. Remote Sens. Environ. 1994, 45, 317–332. [Google Scholar] [CrossRef]
  6. Pons, X.; Pesquer, L.; Cristóbal, J.; González-Guerrero, O. Automatic and improved radiometric correction of Landsat imagery using reference values from MODIS surface reflectance images. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 243–254. [Google Scholar] [CrossRef] [Green Version]
  7. Riaño, D.; Chuvieco, E.; Salas, J.; Aguado, I. Assessment of different topographic corrections in Landsat-TM data for mapping vegetation types. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1056–1061. [Google Scholar] [CrossRef] [Green Version]
  8. Richter, R.; Kellenberger, T.; Kaufmann, H. Comparison of topographic correction methods. Remote Sens. 2009, 1, 184–196. [Google Scholar] [CrossRef] [Green Version]
  9. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  10. Zhu, Z.; Woodcock, C.E. Automated cloud, cloud shadow, and snow detection in multitemporal Landsat data: An algorithm designed specifically for monitoring land cover change. Remote Sens. Environ. 2014, 152, 217–234. [Google Scholar] [CrossRef]
  11. National Aeronautics and Space Administration (NASA). MODIS Web. Available online: https://modis.gsfc.nasa.gov/ (accessed on 1 August 2021).
  12. National Oceanic and Atmospheric Administration (NOAA). Advanced Very High Resolution Radiometer—AVHRR. Available online: https://www.avl.class.noaa.gov/release/data_available/avhrr/index.htm (accessed on 1 August 2021).
  13. National Aeronautics and Space Administration (NASA). Landsat Data Continuity Mission (LDCM). Available online: https://www.nasa.gov/mission_pages/landsat/main/index.html (accessed on 1 August 2021).
  14. European Space Agency (ESA). ESA Sentinel Online. Sentinel-2 Mission. Available online: http://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Sentinel-2 (accessed on 1 August 2021).
  15. DigitalGlobe. Satellite Information. Available online: https://www.digitalglobe.com/resources/satellite-information (accessed on 1 August 2021).
  16. Fernández-Renau, A.; Gómez, J.A.; de Miguel, E. The INTA AHS System. Proc. SPIE 2005. [Google Scholar] [CrossRef]
  17. Jiménez, M.; Díaz-Delgado, R.; Vaughan, P.; De Santis, A.; Fernández-Renau, A.; Prado, E.; Gutiérrez de la Cámara, O. Airborne hyperspectral scanner (AHS) mapping capacity simulation for the Doñana biological reserve scrublands. In Proceedings of the 10th International Symposium on Physical Measurements and Signatures in Remote Sensing, Davos, Switzerland, 12–14 March 2007; Schaepman, M., Liang, S., Groot, N., Kneubühler, M., Eds.; Available online: http://www.isprs.org/proceedings/XXXVI/7-C50/papers/P81.pdf (accessed on 28 October 2017).
  18. Itres Research Ltd. Compact Airborne Spectrographic Imager (CASI-1500H). Available online: https://itres.com/wp-content/uploads/2019/09/CASI1500.pdf (accessed on 1 August 2021).
  19. Leica Geosystems. Leica DMCII Airborne Digital Camera. Available online: https://leica-geosystems.com/products/airborne-systems/imaging-sensors/leica-dmciii (accessed on 1 August 2021).
  20. Chen, Y.; Wen, D.; Jing, L.; Shi, P. Shadow information recovery in urban areas from very high resolution satellite imagery. Int. J. Remote Sens. 2007, 28, 3249–3254. [Google Scholar] [CrossRef]
  21. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. PE&RS 2005, 71, 169–177. [Google Scholar] [CrossRef] [Green Version]
  22. Richter, R.; Muller, A. De-shadowing of satellite/airborne imagery. Int. J. Remote Sens. 2005, 26, 3137–3148. [Google Scholar] [CrossRef]
  23. Arévalo, V.; González, J.; Ambrosio, G. Shadow detection in colour high-resolution satellite images. Int. J. Remote Sens. 2008, 29, 1945–1963. [Google Scholar] [CrossRef]
  24. MicaSense. MicaSense RedEdge™ 3 Multispectral Camera User Manual; MicaSense, Inc.: Seattle, WA, USA, 2015; p. 33. Available online: https://support.micasense.com/hc/en-us/article_attachments/204648307/RedEdge_User_Manual_06.pdf (accessed on 1 August 2021).
  25. Parrot Drones. Parrot Sequoia Technical Specifications. 2018. Available online: https://www.parrot.com/global/parrot-professional/parrot-sequoia#technicals (accessed on 1 August 2021).
  26. Mapir. Specifications. 2021. Available online: https://www.mapir.camera/pages/survey3-cameras#specs (accessed on 1 August 2021).
  27. SAL Engineering. MAIA—The Multispectral Camera. 2021. Available online: https://www.salengineering.it/public/en/p/maia.asp (accessed on 1 August 2021).
  28. Markelin, L.; Simis, S.G.H.; Hunter, P.D.; Spyrakos, E.; Tyler, A.N.; Clewley, D.; Groom, S. Atmospheric Correction Performance of Hyperspectral Airborne Imagery over a Small Eutrophic Lake under Changing Cloud Cover. Remote Sens. 2017, 9, 2. [Google Scholar] [CrossRef] [Green Version]
  29. Salvador, E.; Cavallaro, A.; Ebrahimi, T. Shadow identification and classification using invariant color models. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, USA, 7–11 May 2001; Volume 3, pp. 1545–1548. [Google Scholar] [CrossRef] [Green Version]
  30. Shahtahmassebi, A.; Yang, N.; Wang, K.; Moore, N.; Shen, Z. Review of shadow detection and de-shadowing methods in remote sensing. Chinese Geogr. Sci. 2013, 23, 403–420. [Google Scholar] [CrossRef] [Green Version]
  31. Wu, T.; Zhang, L.; Huang, C. An analysis of shadow effects on spectral vegetation indices using a ground-based imaging spectrometer. In Proceedings of the 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; pp. 1–4. [Google Scholar] [CrossRef]
  32. Schaepman-Strub, G.; Schaepman, M.E.; Painter, T.H.; Dangel, S.; Martonchik, J.V. Reflectance quantities in optical remote sensing—definitions and case studies. Remote Sens. Environ. 2006, 103, 27–42. [Google Scholar] [CrossRef]
  33. Adeline, K.R.M.; Chen, M.; Briottet, X.; Pang, S.K.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. 2013, 80, 21–38. [Google Scholar] [CrossRef]
  34. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, C.; Myint, S.W. A Simplified Empirical Line Method of Radiometric Calibration for Small Unmanned Aircraft Systems-Based Remote Sensing. IEEE JSTARS 2015, 8, 1876–1885. [Google Scholar] [CrossRef]
  36. Padró, J.C.; Pons, X.; Aragonés, D.; Díaz-Delgado, R.; García, D.; Bustamante, J.; Pesquer, L.; Domingo-Marimon, C.; González-Guerrero, O.; Cristóbal, J.; et al. Radiometric Correction of Simultaneously Acquired Landsat-7/Landsat-8 and Sentinel-2A Imagery Using Pseudoinvariant Areas (PIA): Contributing to the Landsat Time Series Legacy. Remote Sens. 2017, 9, 1319. [Google Scholar] [CrossRef] [Green Version]
  37. Padró, J.C.; Muñoz, F.J.; Avila, L.A.; Pesquer, L.; Pons, X. Radiometric Correction of Landsat-8 and Sentinel-2A Scenes Using Drone Imagery in Synergy with Field Spectroradiometry. Remote Sens. 2018, 10, 1687. [Google Scholar] [CrossRef] [Green Version]
  38. Padró, J.C.; Carabassa, V.; Balagué, J.; Brotons, L.; Alcañiz, J.M.; Pons, X. Monitoring opencast mine restorations using Unmanned Aerial System (UAS) imagery. Sci. Total Environ. 2019, 657, 1602–1614. [Google Scholar] [CrossRef] [PubMed]
  39. Pons, X.; Padró, J.C. An Empirical Approach on Shadow Reduction of UAV Imagery in Forests. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2463–2466. [Google Scholar] [CrossRef]
  40. Bareth, G.; Aasen, H.; Bendig, J.; Gnyp, M.; Bolten, A.; Jung, A.; Michels, R.; Soukkamäki, J. Low-weight and UAV-based Hyperspectral Full-frame Cameras for Monitoring Crops: Spectral Comparison with Portable Spectroradiometer. PFG 2015, 1, 69–79. [Google Scholar] [CrossRef]
  41. Smith, G.M.; Milton, E.J. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  42. Perry, E.M.; Warner, T.; Foote, P. Comparison of atmospheric modelling versus empirical line fitting for mosaicking HYDICE imagery. Int. J. Remote Sens. 2000, 21, 799–803. [Google Scholar] [CrossRef]
  43. Karpouzli, E.; Malthus, T. The empirical line method for the atmospheric correction of IKONOS imagery. Int. J. Remote Sens. 2003, 24, 1143–1150. [Google Scholar] [CrossRef]
  44. Padró, J.C.; Muñoz, F.J.; Planas, J.; Pons, X. Comparison of four UAV georeferencing methods for environmental monitoring purposes focusing on the combined use with airborne and satellite remote sensing platforms. IJAEO 2019, 79, 130–140. [Google Scholar] [CrossRef]
  45. Photonic Solutions. USB200+ Data Sheet. OceanOptics; Ocean Optics, Inc.: Dunedin, FL, USA, 2006; Available online: https://www.photonicsolutions.co.uk/upfiles/USB2000PlusSpectrometerDatasheetLG14Jun18.pdf (accessed on 1 August 2021).
  46. European Union Aviation Safety Agency (EASA). Drones—Regulatory Framework Background. 2019. Available online: https://www.easa.europa.eu/easa-and-you/civil-drones-rpas/drones-regulatory-framework-background (accessed on 1 August 2021).
  47. Pons, X. MiraMon. Geographic Information System and Remote Sensing Software; Version 8.1j; Centre de Recerca Ecològica i Aplicacions Forestals, CREAF: Bellaterra, Spain, 2019; Available online: https://www.miramon.cat/Index_usa.htm (accessed on 1 August 2021)ISBN 84-931323-4-9.
Figure 1. (a) Basic concepts of the nature of shadows, including the three steps of the presented methodology: (1) shadow detection; (2) shadow correction; (3) edge filtering. (b) General workflow for obtaining de-shadowed images from UAV orthomosaics with shadowed areas.
Figure 2. (a) Conceptualization of the applied scenario for checking the applicability of the hypothesis. (b) Experimental display of the EVA panels used to obtain the empirical line correction between shadowed and non-shadowed areas (bottom image: Parrot Sequoia false color composite).
Figure 3. (a) Field spectroradiometry measurements. (b) Twin set of targets located in diffuse illumination conditions. (c) UAV sensor Parrot Sequoia spectral bandwidth. (d) UAV vehicle configuration used for the applied scenario in a forested area. (e) Spectral signatures of the EVA panels used to obtain the empirical line correction between shadowed and non-shadowed areas.
Figure 4. Shadow detection step methodology workflow. Two problems must be solved: (1) when detecting shadows with a physical approach, the typical DEM (2.5D) does not reproduce the tree silhouettes and produces commission errors (left); (2) when detecting shadows with the image histogram approach, dark objects located in non-shadowed areas also produce commission errors (right). The solution is to mask only those dark pixels located in projected shadow areas.
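The masking logic of the shadow detection step can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation; the function name and threshold variable are hypothetical, and the threshold itself would come from the image histogram while the cast-shadow mask would come from the DSM projection:

```python
import numpy as np

def detect_shadows(band, cast_shadow_mask, dark_threshold):
    """Flag a pixel as shadow only if it is BOTH darker than the
    histogram-derived threshold AND inside the DSM-projected cast-shadow
    area, suppressing the commission errors each cue produces alone."""
    dark_pixels = band < dark_threshold        # image histogram approach
    return dark_pixels & cast_shadow_mask      # physical (projection) constraint
```

Intersecting the two masks is what removes both failure modes shown in Figure 4: dark roofs in full sun fail the projection test, and bright gaps inside the projected polygon fail the darkness test.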
Figure 5. Shadow reduction block methodology: finding the empirical line function from the characterized targets located in shadowed and non-shadowed areas, and then applying it to all detected shadowed areas.
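The empirical line step amounts to an ordinary least-squares fit between the reflectances of the twin targets, applied afterwards to every masked pixel. A minimal sketch (function names and data layout are our own assumptions, and `np.polyfit` stands in for whatever regression tool is actually used):

```python
import numpy as np

def fit_empirical_line(shadow_refl, sun_refl):
    """Fit the per-band linear function mapping target reflectance
    measured in shadow (diffuse irradiance) to the reflectance of the
    identical twin target in direct sun. Returns (slope, bias)."""
    slope, bias = np.polyfit(shadow_refl, sun_refl, deg=1)
    return slope, bias

def deshadow_band(band, shadow_mask, slope, bias):
    """Apply the fitted line only to the detected shadowed pixels."""
    out = band.astype(float).copy()
    out[shadow_mask] = slope * out[shadow_mask] + bias
    return out
```

Because the fit is performed independently per spectral band, the correction respects the wavelength dependence of the diffuse/direct irradiance ratio.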
Figure 6. Theoretical estimation of the penumbra width, assuming flat terrain with no other objects onto which the shadow is projected (source: author 1).
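The order of magnitude of the penumbra can be reproduced with basic trigonometry: on flat ground, the penumbra is the strip between the shadow edges traced by the upper and lower limbs of the solar disc (angular diameter about 0.53 degrees). This is a hedged sketch of a plausible derivation, not necessarily the authors' exact formula:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # apparent angular diameter of the solar disc

def penumbra_width(obstacle_height_m, sun_elevation_deg):
    """Width (m) of the penumbra band cast on flat terrain by an opaque
    edge at the given height: the ground distance between the shadow
    edges produced by the upper and lower limbs of the solar disc."""
    half = math.radians(SUN_ANGULAR_DIAMETER_DEG / 2.0)
    alpha = math.radians(sun_elevation_deg)
    return obstacle_height_m * (1.0 / math.tan(alpha - half)
                                - 1.0 / math.tan(alpha + half))
```

With the case-study sun elevation of 58.37 degrees (Table 1), a 10 m tree casts a penumbra of roughly a decimetre, i.e., on the order of one pixel at the flight's ground sampling distance, which is why a narrow two-pixel buffer suffices in the edge filtering step.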
Figure 7. Edge filtering step methodology workflow: the problem to solve is the patchwork appearance of de-shadowed pixels pasted over the original image. To solve this, the shadow mask is vectorized, a two-pixel buffer is applied and used to clip a smoothed raster of the patched image, and this clip is overlaid again on the patched image to smooth the transition between de-shadowed and non-shadowed pixels.
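A raster-only approximation of this edge filtering could look as follows. The paper buffers a vectorized mask; morphological dilation and erosion are used here as a simpler stand-in for that vector buffer, so this is a sketch of the idea rather than the authors' workflow:

```python
import numpy as np
from scipy import ndimage

def smooth_shadow_edges(patched, shadow_mask, buffer_px=2, kernel=3):
    """Build a buffer band around the shadow-mask boundary, smooth the
    patched image, and paste the smoothed values back only inside that
    band, softening the seam between de-shadowed and sunlit pixels."""
    dilated = ndimage.binary_dilation(shadow_mask, iterations=buffer_px)
    eroded = ndimage.binary_erosion(shadow_mask, iterations=buffer_px)
    edge_zone = dilated & ~eroded                  # band straddling the boundary
    smoothed = ndimage.uniform_filter(patched.astype(float), size=kernel)
    out = patched.astype(float).copy()
    out[edge_zone] = smoothed[edge_zone]
    return out
```

Restricting the low-pass filter to the buffer band preserves the full radiometric detail of both the corrected shadow interiors and the untouched sunlit areas.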
Figure 8. (a) Original orthoimage with shadows. (b) Shadow mask obtained after the shadow detection procedure. (c) Detail of some tree shadows. (d) Detail of the digital cast shadow model projection, based on a 2.5D digital surface model. (e) Detail of the constraint for selecting only dark pixels in the projected shadows, which avoided false positives.
Figure 9. Spectral-dependent linear regression plots for Parrot Sequoia bands, between twin targets (n = 7) located in diffuse illumination conditions and direct illumination conditions. All regressions show an R² higher than 0.90 and a p-value < 0.01; note also the reasonably good distribution along the [0, 100]% reflectance range for all spectral bands. These empirical line functions were applied to all detected shadowed pixels to enhance their radiometry and obtain de-shadowed pixels.
Figure 10. Results of the shadow detection and shadow reduction stages. This is the partial result before edge filtering.
Figure 11. Results of the radiometric correction of UAV imagery using the empirical approach on shadow reduction. (a) At-sensor Parrot Sequoia UAV image (obtained from DLS radiometric correction) before de-shadowing. (b) At-surface UAV image empirically fitted to the characterized panels and empirically de-shadowed. (c) Detail before de-shadowing. (d) Detail of shadow detection results. (e) Detail of edge filtering results. (f) Detail of the image after de-shadowing.
Table 1. Flight plan main features and scene highlights.

Date: 27 April 2018
First photogram capture time (UTC): 10:36:33
Last photogram capture time (UTC): 10:46:23
Flight height above ground level (m): 90
Number of photograms (×4 bands): 623 (2492)
Planned along-track overlap (%): 90
Planned across-track overlap (%): 85
Latitude of the center of the whole scene: 41°41′31.29″ N
Longitude of the center of the whole scene: 1°49′43.18″ E
Sun azimuth at central time of flight (°): 146.50
Sun elevation at central time of flight (°): 58.37
Table 2. Field spectroradiometer main features (OceanOptics USB2000+).

Manufacturer/Model: OceanOptics USB2000+
Design: Czerny-Turner
FWHM (nm): 1.26
Size (mm): 89.1 × 63.3 × 34.4
Weight (g): 190
Input focal length (mm): 42
Fiber optic FOV (°): 25
Sampling interval (nm): 0.3
Sensor CCD samples: 2048
Grating #2 spectral range (nm): 340–1030
Table 3. Parrot Sequoia camera main features.

Manufacturer/Model: Parrot Sequoia
Sensor type: CMOS
Expanded dynamic range (DN): 0–65,535
Size (mm): 59 × 41 × 28
Weight (g): 72
Raw radiometric resolution (bits): 10
Field of view (°): 47.5
Input focal length (mm): 3.98
Pixel size (µm): 3.75 × 3.75
Sensor size (pixels): 1280 × 960
Spectral bands (FWHM, nm): #1 Blue: no blue band; #2 Green: 530–570; #3 Red: 640–680; #4 Red-edge: 730–740; #5 NIR: 770–810
Table 4. Linear regressions fitting the reflectance of the shadowed panels to the reflectance of the non-shadowed panels for the Parrot Sequoia camera (case study).

Green (530–570 nm): R² = 0.969, slope = 2.258, bias = 12.152
Red (640–680 nm): R² = 0.946, slope = 1.945, bias = 11.050
Red edge (730–740 nm): R² = 0.983, slope = 1.533, bias = 10.771
NIR (770–810 nm): R² = 0.990, slope = 1.479, bias = 3.642
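Applying the Table 4 coefficients is then a per-band linear transform of the shadowed reflectance values (expressed in %). A small sketch; the dictionary keys and function name are our own, while the slope/bias pairs are taken verbatim from Table 4:

```python
import numpy as np

# (slope, bias) per Parrot Sequoia band, from Table 4 (reflectance in %)
EMPIRICAL_LINE = {
    "green":    (2.258, 12.152),
    "red":      (1.945, 11.050),
    "red_edge": (1.533, 10.771),
    "nir":      (1.479, 3.642),
}

def fit_shadow_reflectance(band_name, shadow_reflectance_pct):
    """Map reflectance (%) observed under diffuse (shadow) illumination
    to the value expected under direct-plus-diffuse illumination."""
    slope, bias = EMPIRICAL_LINE[band_name]
    return slope * np.asarray(shadow_reflectance_pct, dtype=float) + bias
```

For example, a NIR reflectance of 20% measured in shadow would be corrected to 1.479 × 20 + 3.642 = 33.222%.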
Pons, X.; Padró, J.-C. An Operational Radiometric Correction Technique for Shadow Reduction in Multispectral UAV Imagery. Remote Sens. 2021, 13, 3808. https://doi.org/10.3390/rs13193808