Article

Mapping Malaria Vector Habitats in West Africa: Drone Imagery and Deep Learning Analysis for Targeted Vector Surveillance

by Fedra Trujillano 1,2,*, Gabriel Jimenez Garay 1,3, Hugo Alatrista-Salas 4,5, Isabel Byrne 6, Miguel Nunez-del-Prado 7,8, Kallista Chan 6,9, Edgar Manrique 1, Emilia Johnson 2, Nombre Apollinaire 10, Pierre Kouame Kouakou 11, Welbeck A. Oumbouke 6,12, Alfred B. Tiono 9, Moussa W. Guelbeogo 9, Jo Lines 6,9, Gabriel Carrasco-Escobar 1,13 and Kimberly Fornace 2,9,14
1 Health Innovation Laboratory, Institute of Tropical Medicine “Alexander von Humboldt”, Universidad Peruana Cayetano Heredia, Lima 15102, Peru
2 School of Biodiversity, One Health & Veterinary Medicine, University of Glasgow, Glasgow G12 8QQ, UK
3 Department of Engineering and Computer Science, Faculty of Science and Engineering, Sorbonne University, 75005 Paris, France
4 Escuela de Posgrado Newman, Tacna 23001, Peru
5 Science and Engineering School, Pontificia Universidad Católica del Perú (PUCP), Lima 15088, Peru
6 Department of Infection Biology, London School of Hygiene & Tropical Medicine, London WC1E 7HT, UK
7 Peru Research, Development and Innovation Center (Peru IDI), Lima 15076, Peru
8 The World Bank, Washington, DC 20433, USA
9 Centre on Climate Change and Planetary Health, London School of Hygiene & Tropical Medicine, London WC1E 7HT, UK
10 Centre National de Recherche et de Formation sur le Paludisme, Ouagadougou 01 BP 2208, Burkina Faso
11 Institute Pierre Richet, Bouake 01 BP 1500, Côte d’Ivoire
12 Innovative Vector Control Consortium, Liverpool School of Tropical Medicine, London L3 5QA, UK
13 Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA 92093, USA
14 Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore 119077, Singapore
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(11), 2775; https://doi.org/10.3390/rs15112775
Submission received: 17 February 2023 / Revised: 20 April 2023 / Accepted: 18 May 2023 / Published: 26 May 2023
(This article belongs to the Special Issue Remote Sensing and Infectious Diseases)

Abstract: Disease control programs need to identify the breeding sites of mosquitoes that transmit malaria and other diseases in order to target interventions and identify environmental risk factors. The increasing availability of very-high-resolution drone data provides new opportunities to find and characterize these vector breeding sites. Within this study, drone images from two malaria-endemic regions in Burkina Faso and Côte d’Ivoire were assembled and labeled using open-source tools. We developed and applied a workflow using region-of-interest-based and deep learning methods to identify land cover types associated with vector breeding sites from very-high-resolution natural color imagery. Analysis methods were assessed using cross-validation and achieved maximum Dice coefficients of 0.68 and 0.75 for vegetated and non-vegetated water bodies, respectively. The classifier consistently identified the presence of other land cover types associated with the breeding sites, obtaining Dice coefficients of 0.88 for tillage and crops, 0.87 for buildings and 0.71 for roads. This study establishes a framework for developing deep learning approaches to identify vector breeding sites and highlights the need to evaluate how the results will be used by control programs.

1. Introduction

Land use changes, such as agricultural expansion, can create new aquatic habitats suitable as breeding sites for the mosquito vectors that transmit malaria and other diseases [1]. The identification of such water bodies can be vital to disease control programs, allowing vector control teams to perform targeted malaria control through larval source management (LSM) [2]. LSM targets the immature aquatic stages of disease vectors through environmental, chemical or biological modification of the larval habitat, with the overall aim of reducing adult mosquito populations [3]. While traditional approaches have relied on ground-based surveys, Earth Observation (EO) data, such as drone and satellite data, are increasingly utilized to rapidly identify potential breeding sites and target control measures [2,3,4,5,6]. EO data additionally provide new opportunities to characterize mosquito habitats and monitor changes in these habitats in response to environmental changes [7].
Obtaining actionable information from EO data requires classifying imagery into relevant habitat types. The specific classes of interest are highly dependent on the local vector ecology. For example, An. gambiae breeds in small or transient, often man-made, water bodies across a wide range of habitat types, including puddles on roads, agricultural irrigation such as rice paddies, sunlit rivers and streams and quarries or construction sites [8,9,10]. As these breeding sites are often temporary and may be difficult to observe directly (e.g., under trees or in small water bodies), identifying the land types where breeding sites occur can be a useful proxy for targeting vector control measures. Additionally, the type of information required when classifying EO data depends on the end use. While a vector control program may benefit from knowing the probability of a large area containing habitat types, ecological and epidemiological studies aiming to identify risk factors for vector breeding sites may require more detailed segmentation approaches to characterize the shape and configuration of different habitat types and so identify where these breeding sites are likely to occur [11,12,13,14,15,16,17].
In contrast to An. gambiae, An. funestus typically breeds in large semipermanent or permanent water bodies, often characterized by emergent vegetation [2,18,19,20,21]. Despite An. funestus being Africa’s second most important malaria vector, and although the size and permanence of its breeding sites should make them intuitively easy to locate, these sites are notoriously difficult to detect [21]. Compared to the small and transient breeding sites of An. gambiae, however, their large and stable characteristics make them a suitable target for identification using EO techniques. These water bodies have often been associated with agricultural practices, including rice cultivation, irrigation canals and ditches, pastures and cultivated swamps [22,23,24,25,26,27]. A review of the published literature on An. funestus breeding ecology [2] identified key characteristics of An. funestus breeding sites, which include irrigated and non-irrigated forms of agriculture and savannah landscapes. The review also identified land classes that are likely to exist in landscapes where humans and malaria vectors overlap but that are not necessarily associated with the breeding cycle, such as roads, buildings and other features of the built environment. Additional information on infrastructure, including the locations of houses and other buildings, can aid the planning of interventions for vector control campaigns.
For these applications, it is critical that EO data are collected at the same time as ground-based vector surveys or are recent enough to provide actionable information for control programs. Anopheles breeding sites are difficult to detect from coarse, freely available, satellite-based EO data such as Sentinel or Landsat, as aquatic habitats are often small (<1 m), vegetated or obscured, depending on the local vector ecology. Additionally, breeding sites may be temporary or exist in landscapes that are rapidly modified, requiring temporally accurate EO data to link with ground-based surveys. This can be challenging with satellite-based EO sources, where data are collected infrequently (weekly or monthly) and are limited by cloud cover and other factors [28,29]. This has led to the increased use of EO data with high spatial and temporal resolutions, such as user-defined imagery collected by drones (unmanned aerial vehicles or UAVs) or daily commercial satellite data (e.g., Planet). These data typically have a low spectral resolution, limiting the utility of traditional pixel-based approaches that require data measured outside the visible spectrum [30,31]. Alternatively, deep learning approaches, such as convolutional neural networks (CNNs), have revolutionized image analysis by efficiently analyzing image textures, patterns and spectral characteristics, using self-learning artificial intelligence to identify features in complex environments [16,32,33,34].
Multiple approaches have been applied to identify habitats and their characteristics from EO imagery for operational use by vector-borne disease control programs. In Malawi, Stanton et al. [3] assessed approaches to identifying the aquatic habitats of larval-stage malaria mosquitoes, including geographical object-based image analysis (GeoOBIA), which groups contiguous pixels into objects based on prespecified pixel characteristics. The objects were classified by random-forest-supervised classification, demonstrated strong agreement with test samples and successfully identified larval habitat characteristics with a median accuracy of 98%. Liu et al. [35] developed a framework for mapping the spatial distribution of suitable aquatic habitats for the snail hosts of the debilitating parasitic disease schistosomiasis along the Senegal River Basin: a deep learning U-Net model was built to analyze high-resolution satellite imagery and produce segmentation maps of aquatic vegetation, and it predicted snail habitats with higher accuracy than commonly used pixel-based classification methods such as random forest. Hardy et al. [36] developed a novel approach to classify and extract malaria vector larval habitats from drone imagery in Zanzibar, Tanzania, using computer vision to assist manual digitization; based on accuracy scores, this approach significantly outperformed supervised classification approaches, which were unsuitable for mapping potential vector larval habitats in the study region. Examples of methods for other applications, data sources and the performance of different classification techniques are summarized in Table 1.
Building on these methods, we aimed to develop and validate deep learning approaches to identify land classes associated with the breeding sites of malaria mosquito vectors in West Africa. Data were assembled from multiple sites in Burkina Faso and Côte d’Ivoire to develop an approach able to generalize across different malaria-endemic landscapes. We developed a land classification system based on habitats of interest for the breeding ecology of the malaria vectors An. funestus and An. gambiae, which are present in the study sites. We used RGB drone images from the study sites to build a training dataset and implemented two CNN-based frameworks using the U-Net and attention U-Net architectures to identify features of interest for Anopheles breeding. The specific objectives of this study were to (i) collect and assemble a labeled dataset of drone images from malaria-endemic areas in Côte d’Ivoire and Burkina Faso; (ii) develop a protocol to label land classes of interest based on the local vector ecology; (iii) assemble a labeled dataset for each class and (iv) train, validate and test the U-Net and attention U-Net deep learning architectures. The final algorithms were assessed on their performance in predicting the presence of the classes of interest in test images, using the best model after cross-validation.

2. Materials and Methods

2.1. Drone Mapping

Drone surveys were conducted in two malaria-endemic sites in West Africa, Saponé, Burkina Faso and Bouaké, Côte d’Ivoire, where the incidence of malaria (per 1000 population at risk) is 389.9 and 287, respectively (the World Bank: https://data.worldbank.org/indicator/SH.MLR.INCD.P3 (accessed on 19 April 2023)). Both sites are rural, with extensive small-scale agriculture and highly seasonal rainfall and malaria transmission patterns. Saponé is located 45 km south-west of Ouagadougou, Burkina Faso and has reported an extremely high malaria prevalence, predominantly of Plasmodium falciparum [42]. The main malaria vector at this site is An. gambiae s.l., with low densities of other species also reported. Between November 2018 and November 2019, with an average temporal resolution of 5 months, fixed-wing (senseFly eBee) and quadcopter (DJI Phantom 4 Pro) drones were used to collect 26 RGB images at 2–10 cm per pixel resolution. Similarly, in Bouaké, we conducted targeted drone surveys from June to August 2021 using a DJI Phantom 4 Pro drone to collect RGB data at a 2 cm per pixel resolution, as described by [2]; 77 images were used in this study. This area was also rural and dominated by small-scale agriculture; however, it had a different vector composition, including high densities of An. funestus. For both sites, drone images were processed using Agisoft Metashape Professional (Agisoft: https://www.agisoft.com/ (accessed on 19 April 2023)). The steps performed were photo alignment (high accuracy, with the generic preselection, reference preselection and adaptive camera model fitting options selected and the key point limit set to 40,000), building a dense point cloud (high quality, moderate depth filtering), building a digital elevation model (extrapolated option) and, finally, orthomosaic generation. The drone images covered areas of 11.52 km² and 30.42 km² in Burkina Faso and Côte d’Ivoire, respectively, as illustrated in Figure 1. Both sites were dominated by agriculture, mainly yams, cassava, cashews, peanuts and maize.
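These processing steps can also be scripted. The sketch below mirrors the settings above using Agisoft Metashape's Python API; it is illustrative only, assuming the 1.x API names (matchPhotos, buildDenseCloud, etc.), and the photo paths and output names are hypothetical.

```python
import Metashape  # Agisoft Metashape Professional Python API (1.x names assumed)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["flight/DJI_0001.JPG", "flight/DJI_0002.JPG"])  # hypothetical paths

# Photo alignment: high accuracy (downscale=1), generic and reference
# preselection, 40,000 key point limit, adaptive camera model fitting.
chunk.matchPhotos(downscale=1, generic_preselection=True,
                  reference_preselection=True, keypoint_limit=40000)
chunk.alignCameras(adaptive_fitting=True)

# Dense point cloud: high quality (downscale=2) with moderate depth filtering.
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.ModerateFiltering)
chunk.buildDenseCloud()

# Digital elevation model (extrapolated) and orthomosaic generation.
chunk.buildDem(source_data=Metashape.DenseCloudData,
               interpolation=Metashape.Extrapolated)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

doc.save("sapone_survey.psx")  # hypothetical project name
chunk.exportRaster("orthomosaic.tif", source_data=Metashape.OrthomosaicData)
```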

2.2. Image Labeling and Development of Labeled Dataset

We identified specific land cover classes of interest associated with Anopheles breeding sites, including water bodies and irrigated agricultural land types [2]. Within these settings, crops, roads and tillage areas were identified as land types with a high probability of containing small water bodies where An. gambiae, the predominant vector at the Burkina Faso site, breeds. For An. funestus, a dominant vector in areas of the Côte d’Ivoire site, we differentiated between vegetated and non-vegetated water bodies, as An. funestus is most commonly found in vegetated water bodies. We additionally identified habitat types associated with human activities, including roads and buildings. The final land cover classes of interest for this analysis were vegetated and non-vegetated water bodies, irrigated crops (planted vegetation with no tree cover), tillage (land cleared for planting crops), buildings and roads. While this classification cannot specifically identify whether an area is an Anopheles breeding site, it provides a basis for targeting future entomological surveys and wider epidemiological studies on how landscape impacts malaria transmission.
To identify these classes using a supervised deep learning approach, we first needed to assemble a dataset of harmonized labeled images. We manually labeled a total of 103 drone acquisitions to generate gold-standard masks (i.e., labeled images) for each of the specific land classes, and trained personnel familiar with the study areas validated the labels. Labeling was performed using two different tools: GroundWork (GroundWork: https://groundwork.azavea.com (accessed on 19 April 2023)) and QGIS (QGIS: https://qgis.org/site/forusers/download.html (accessed on 19 April 2023)). The former is a licensed cloud-based tool to which images are uploaded for labeling; a grid is overlaid on the image, as illustrated in Figure 2A, and the user assigns one of the predefined classes to each region, as shown in Figure 2B. While this tool has a streamlined workflow that facilitates labeling, it required internet access, had a limited data allowance and was not suitable in all contexts. We therefore also used QGIS, an open-source Geographic Information System (GIS) desktop application that allows users to load geo-referenced images and annotate them manually using polygons (Figure 3). Images were randomly assigned to each tool for labeling. The polygons obtained from both tools were checked for invalid geometries, which were corrected manually.
Once all the images were labeled, ground truth masks were created for supervised image classification. As shown in Figure 4, we first created a subset of vector layers, one for each class, and then rasterized the vector layers to create a separate raster image per class. The land classes present depended on the characteristics of the acquisition site. For example, Figure 5 shows a labeled region from Burkina Faso where all the land classes are present; however, this was not the case for all the images. Images that did not contain labeled polygons were not considered in this study.
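To make the rasterization step concrete, the following sketch burns the labeled polygons into one binary mask per class using GeoPandas and rasterio. It is a minimal illustration, not the study's code; the file names and the `class` attribute are assumptions.

```python
import geopandas as gpd
import rasterio
from rasterio import features

# Land classes used in this study.
CLASSES = ["vegetated_water", "non_vegetated_water", "crops",
           "tillage", "buildings", "roads"]

# Read the georeferencing (size, transform) from the drone orthomosaic.
with rasterio.open("orthomosaic.tif") as src:
    meta = src.meta.copy()
    out_shape = (src.height, src.width)
    transform = src.transform

labels = gpd.read_file("labels.geojson")        # labeled polygons, all classes
labels["geometry"] = labels.geometry.buffer(0)  # repair invalid polygons

for cls in CLASSES:
    geoms = labels.loc[labels["class"] == cls, "geometry"]
    if geoms.empty:
        continue  # images without labels for a class were excluded from its dataset
    # Burn class polygons as 1 on a background of 0.
    mask = features.rasterize(((g, 1) for g in geoms), out_shape=out_shape,
                              transform=transform, fill=0, dtype="uint8")
    meta.update(count=1, dtype="uint8", nodata=0)
    with rasterio.open(f"mask_{cls}.tif", "w", **meta) as dst:
        dst.write(mask, 1)
```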
We built six land class datasets: crops, tillage, roads, buildings, vegetated water bodies and non-vegetated water bodies (defined at the beginning of this section). For each dataset, we selected only the drone images that contained labels of the corresponding land class. From each dataset, we randomly selected images for training, validation and testing. To avoid bias related to the training data, we used a 3-fold cross-validation scheme.
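A minimal sketch of such an image-level 3-fold split, using scikit-learn's KFold with hypothetical image IDs, could look as follows:

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical IDs for the drone images in one land class dataset.
image_ids = np.array([f"img_{i:03d}" for i in range(27)])

kf = KFold(n_splits=3, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_ids), start=1):
    train_imgs, val_imgs = image_ids[train_idx], image_ids[val_idx]
    # Patches are generated per image after the split, so patches from the
    # same drone image never leak between training and validation folds.
    print(f"Fold {fold}: {len(train_imgs)} train / {len(val_imgs)} validation images")
```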
Due to GPU memory constraints, each drone image and its corresponding labels were split into several patches, which were used to train, validate and test the deep learning models. In addition, because of the variable extent of each labeled polygon, splitting the images in a grid pattern resulted in a highly unbalanced dataset in which the background predominated over the class of interest. Instead, to avoid this imbalance, we used an approach to data patching and augmentation based on ROI shifting, described in [43]. This method also prevented potential bias caused by the network focusing on the background instead of the class of each patch, and the augmentation aimed to train the models robustly in the presence of variable neighborhood context information. We identified the ROIs and framed them in rectangles, which could contain one or more polygons, as illustrated in Figure 6. Patches of 256 × 256 and 512 × 512 pixels were extracted from these rectangles and assigned to the training, validation and testing datasets. A summary of the number of images and patches in each dataset is reported in Table 2. For each patch size, Table 3 reports the average percentage of the class present in the dataset, computed per patch as the ratio of the pixels belonging to each class to the background pixels. Patches in which the class covered less than 10% (256 × 256) or 20% (512 × 512) of the patch were eliminated from the datasets. The total number of patches obtained for each class differed by study site: Burkina Faso contained more tillage areas, while Côte d’Ivoire had more irrigated crops. Considering both sites, the water body datasets contained the fewest patches. On average, non-vegetated water bodies, tillage and crops covered greater percentages of their patches, as these classes tended to be larger.
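The ROI-shifting procedure of [43] is more involved than can be shown here, but the sketch below illustrates two of the simpler ingredients described above: cutting an ROI rectangle into fixed-size patches and discarding patches below the minimum class coverage (10% for 256 × 256, 20% for 512 × 512). Function and variable names are ours, not the study's.

```python
import numpy as np

def extract_patches(image, mask, patch_size=256, min_class_fraction=0.10):
    """Cut an ROI rectangle into non-overlapping patches and drop those where
    the class of interest covers less than min_class_fraction of the pixels.

    image: (H, W, 3) RGB array; mask: (H, W) binary array (1 = class pixel).
    The study's thresholds were 10% for 256 x 256 and 20% for 512 x 512 patches.
    """
    kept = []
    h, w = mask.shape
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            m = mask[y:y + patch_size, x:x + patch_size]
            if m.mean() >= min_class_fraction:  # fraction of class pixels in patch
                kept.append((image[y:y + patch_size, x:x + patch_size], m))
    return kept
```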

2.3. Algorithm Development

We developed a multi-step approach to classifying multiple land classes from patches, as shown in Figure 6. Following dataset preparation, two deep learning segmentation models were selected: U-Net and attention U-Net. U-Net is a widely used architecture for semantic segmentation tasks. It consists of a contracting path (encoder) and an expanding path (decoder): the encoder reduces the spatial dimensions at every layer while increasing the number of channels, and the decoder replaces pooling operations with upsampling operators that increase the spatial dimensions (i.e., the number of rows and columns) while reducing the channels. Skip connections between the encoder and decoder restore spatial detail so that a class can be predicted for each pixel of the input image. An important feature of U-Net is the large number of feature channels in the upsampling part, which allows the network to propagate context information to higher-resolution layers. Consequently, the expanding path is roughly symmetric to the contracting path, producing a U-shaped architecture [44]. Attention U-Net adds a self-attention gating module to every skip connection of the U-Net architecture without greatly increasing the computational overhead. These modules are incorporated to improve sensitivity and accuracy and to add visual explainability to the network by focusing on the features of the regions of interest rather than the background [45,46].
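To make the encoder-decoder structure concrete, below is a minimal two-level U-Net sketch in PyTorch; the models used in the study are deeper, and the attention U-Net variant would additionally gate each skip connection before concatenation, so this is an illustration only.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net [44].
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net: the encoder halves the spatial size and doubles the
    channels; the decoder mirrors it, with skip connections restoring detail."""
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)   # 128 skip + 128 upsampled channels
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)    # 64 skip + 64 upsampled channels
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel logits

# A 256 x 256 RGB patch in, a 256 x 256 single-channel logit map out.
logits = MiniUNet()(torch.randn(1, 3, 256, 256))
```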
Regarding the computational setup, data preparation and deep learning experiments were executed on an 8-core Intel Xeon E5-2686 CPU @ 2.3 GHz with 60 GiB of RAM and an Nvidia Tesla V100 GPU with 16 GiB of memory on the Amazon Elastic Compute Cloud (AWS EC2) service.

2.4. Evaluation Metrics

To quantitatively assess the similarity between the predicted and gold-standard object areas, we used the Dice coefficient, which divides two times the area of overlap by the total number of pixels in both images, as shown in Equation (1).
$$\mathrm{DICE} = \frac{2 \times |A \cap B|}{|A| + |B|} \quad (1)$$
The Dice coefficient takes values from 0 to 1, where 1 represents a complete match between the ground truth and the prediction. We additionally calculated the precision and recall metrics, which are computed from true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), as described in Equations (2) and (3), respectively.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (3)$$
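For binary masks, all three metrics reduce to pixel counts, since 2 × |A ∩ B| / (|A| + |B|) equals 2TP/(2TP + FP + FN). A small NumPy sketch of the computation (our illustration, not the study's implementation) follows:

```python
import numpy as np

def dice_precision_recall(pred, gt):
    """Compute Dice, precision and recall for two binary masks (1 = class)."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    # Dice = 2|A n B| / (|A| + |B|) = 2TP / (2TP + FP + FN).
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, recall
```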
Based on the aforementioned elements, we performed different experiments, which are described in the following section.

3. Results

As the number of ROIs per drone image varied, the number of patches (generated from the ROIs) in each fold varied from class to class. As a result, we used up to 90% of the CPU RAM capacity in the experiments with the highest number of patches. This computational load was due to caching the data and annotations in CPU RAM before moving batches to GPU RAM during the training, validation and test phases. This approach reduces the number of CPU–GPU data transfers, which can substantially increase training time. The average training and cross-validation time was approximately 12 h.
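As an illustration of this caching strategy, the sketch below loads the cached patches into CPU RAM once and uses pinned memory so that batch transfers to the GPU are asynchronous; the tensor file names and shapes are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical cached tensors of patches and masks, held in CPU RAM once.
patches = torch.load("patches_256.pt")  # e.g., (N, 3, 256, 256) float32
masks = torch.load("masks_256.pt")      # e.g., (N, 1, 256, 256) float32

# pin_memory keeps batches in page-locked CPU memory, so copies to the GPU
# can be asynchronous and CPU-GPU transfers are kept to a minimum.
loader = DataLoader(TensorDataset(patches, masks), batch_size=16,
                    shuffle=True, pin_memory=True)

device = torch.device("cuda")
for x, y in loader:
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ...forward pass, loss computation, backward pass...
```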
The U-Net and attention U-Net architectures were used to classify the different classes, organized in sets of patches of 256 × 256 and 512 × 512 pixels, using a three-fold cross-validation procedure. Table 4 shows the results for the U-Net using patches of 256 × 256 pixels. For vegetated water bodies, one of our primary classes of interest, the model reached its highest Dice score of 0.68 in the first fold, with an average of 0.63. Non-vegetated water bodies reached a higher best-fold Dice score of 0.75 but averaged only 0.58, the worst average performance among all the classes. Crops, tillage and buildings had the best overall performance, above 0.80 in all validation sets, followed by roads, which reached 0.71. The same U-Net architecture was also trained with patches of 512 × 512 pixels. Both training approaches achieved comparable results; however, the model trained with 256 × 256 pixel patches outperformed the 512 × 512 model, on average, in every class. The detailed table showing the performance of the 512 × 512 pixel model is provided in the Supplementary Information.
Similarly to the U-Net experiments, we trained the attention U-Net with patches of 256 × 256 pixels. Figure 7 shows the evolution of training as heatmaps in a jet color scale. The areas where the network found relevant features for segmentation are displayed in red, while the blue areas correspond to less essential regions. The first column shows the original patch, and the subsequent columns show the heatmaps at the iteration numbers indicated above each column; note that these iterations correspond to different epochs. In general, for non-vegetated water body (Figure 7c), road (Figure 7d) and vegetated water body (Figure 7f) patches, we noticed that, as the number of iterations increased, the network focused more on the areas of interest. Nevertheless, in the case of buildings (Figure 7a), the initial iteration focused more on the construction than the final one because it corresponded to a different epoch and batch: iteration 212 belonged either to an early epoch in which the network was not yet fully updated or to a batch with a data distribution different from the patch analyzed. Another important aspect to highlight is that, in some iterations, the network concentrated not on the class but on the shadows in the patch, as in Figure 7d iteration 327 and Figure 7d iteration 3.
The quantitative results evaluating the performance of the attention U-Net with patches of 256 × 256 pixels are reported in Table 5. Comparing these results with those obtained with the 256 × 256 U-Net model, we observe that the vegetated and non-vegetated water body classes improved. Despite this, the first fold of the non-vegetated water body class showed a very low Dice value (below 0.05), meaning that training was unstable across the three folds; the different data distributions present in the datasets may have impacted the inference process. Cross-validation therefore provided insights into the robustness and stability of the trained models.
We also trained the attention U-Net using patches of 512 × 512 pixels; here, to test model performance, we used a subset of 5% to 20% of the patches per class for the vegetated and non-vegetated water body, crop and building classes. Although this model improved on the water body classes compared to the 256 × 256 pixel U-Net model when using the best fold as a reference, the standard deviation calculated after cross-validation was higher, meaning that the network was not entirely stable. For instance, in the non-vegetated class, the Dice score of the attention U-Net 512 × 512 pixel model ranged from 0.1 to 0.91. We report all results for this last experiment in the Supplementary Information.
We selected the U-Net 256 × 256 pixel model to evaluate the predicted masks, as it was the most stable and robust network. Figure 8 shows the inference results for one sample patch (taken from the test dataset) per class. The first column shows the original test patch. The second column shows the gold standard in white and the background in black, while the third column shows the network prediction, where each pixel is depicted in white if it belongs to the corresponding class with a probability higher than 0.65. The patch’s Dice score is reported above each predicted mask. Figure 8a shows buildings accurately distinguished from the soil region. In contrast, the qualitative results for the crop class in Figure 8b show that the network predicts regions of soil between the leaves as crops rather than the actual crop. This may be explained by the imprecise annotations (i.e., masks covering almost 100% of the patch area) seen in the gold standard. Figure 8c presents the segmentation of a water body despite the shadows and blurriness of the patch. Figure 8d,f show a road and a vegetated water body, respectively; in both cases, the network qualitatively outperformed the manual annotation. Finally, the tillage model output shown in Figure 8e predicted not only the corresponding class but also areas of soil that were not prepared for cultivation.
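The binarization step described here amounts to thresholding the network's per-pixel probabilities at 0.65. The sketch below is self-contained and uses a stand-in model; in practice, the trained 256 × 256 U-Net would be loaded instead.

```python
import torch
import torch.nn as nn

# Stand-in for the trained 256 x 256 U-Net; in practice, load trained weights.
model = nn.Sequential(nn.Conv2d(3, 1, kernel_size=3, padding=1))
model.eval()

patch_batch = torch.randn(8, 3, 256, 256)  # stand-in batch of RGB test patches
with torch.no_grad():
    probs = torch.sigmoid(model(patch_batch))  # per-pixel class probabilities
    pred_masks = (probs > 0.65).float()        # white (1) where probability > 0.65
```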

4. Discussion

This study highlights the utility of deep learning approaches for identifying potential mosquito habitats using high-resolution RGB imagery. We developed a workflow and methodology to assemble and process labeled training data and to implement deep learning algorithms that automatically detect the potential habitats of malaria vectors. Although the performance, as measured by the Dice coefficient, was low for some classes, the classifier did consistently detect the presence of specific classes within drone imagery. Indeed, the information most relevant to end-user needs in this case is the presence of a particular type of land cover rather than the precise boundary delimitation of the class. Overall, this work establishes a framework for applying artificial intelligence tools to support vector-borne disease control.
Our proposed methodology builds on a growing body of literature using deep learning approaches and remote sensing data to identify priority areas for implementing disease control measures. Compared to other studies using deep learning algorithms with multispectral satellite imagery to detect vector habitats, our study had lower predictive power [35], most likely due to the limited information in RGB images. In addition, the annotation process (i.e., manual labeling) is a factor that we need to consider. For example, Figure 8f shows a human labeling error in which the water body’s boundaries were not fully encompassed. Despite these errors in the training labels, the network performed well in segmenting the pixels belonging to this class; however, the qualitative differences between the manual and predicted labels result in a lower Dice score. Rather than relying solely on manual annotations, which can be imprecise, unsupervised learning or region-growing approaches may produce more accurate ground truths and higher-performing models [36].
As shown in Figure 8f, our approach using the U-Net architecture achieved better qualitative results when segmenting certain classes. To understand more deeply where the network was focusing its attention in this task, we used the attention U-Net architecture. The results provided a clear view of which pixels were being used in the segmentation process. This introduced a level of visual interpretability of the training process and allowed us to propose a methodology that leverages the attention maps to refine manual annotations.
This tool can be improved in the future by including human supervision, as proposed in Figure 9. Initially, a set of manually annotated patches extracted from several drone images is used in the pre-training engine. The model obtained after this process could be used to segment and detect structures in a new drone image. The network outputs its predictions as attention maps (heatmaps with pixel probabilities). A user then evaluates the predictions and determines whether objects are missing or whether the boundaries of the detected objects are correctly segmented. These new annotations are then used as inputs to an online training engine, improving the knowledge of the original deep learning model. This procedure should reduce the imprecision of manual annotations and allow the model to learn incrementally from new samples introduced by different users. In addition, the feedback loop should also help when patches of different classes share similar qualitative characteristics (imaging features), which is a challenge for data cleaning procedures.
We also observed a difference in performance between patch sizes. Ideally, larger patches should allow the network to extract information from an object’s surroundings and better identify the borders of detected objects, such as buildings. However, in some cases this additional context may add noise rather than useful information. A deeper analysis, including multi-resolution deep learning models, could provide a better understanding of the features needed for better segmentation of the classes proposed in this study.
Deep learning approaches based on open-access satellite data can provide a more efficient and cost-effective means for vector control programs to identify priority areas for field surveys and targeted interventions. Larval source management is an important component of the toolkit for controlling mosquito-borne diseases, particularly in endemic contexts with persistent insecticide resistance [36]; however, identifying aquatic breeding sites is both time- and resource-intensive and can be biased by reliance on prior knowledge, convenience or assumptions. Based on the Dice scores obtained, we found that the presence of specific habitat classes could be consistently detected within drone imagery, including vegetated and non-vegetated water bodies, tillage, crops and roads. By delineating areas within a large, gridded landscape that have a high probability of containing potential vector breeding habitats, deep learning algorithms could facilitate more targeted planning and implementation of larvicidal activities. For example, vector control programs can use this to focus finite resources on narrower areas for entomological field-based surveys or to anticipate the scale of larvicide requirements for a given area. More broadly, knowledge of road and building locations can be used to plan interventions; for example, clustered buildings in close proximity to breeding sites are important targets for indoor residual spraying against adult malaria vectors. Importantly, this approach is generalizable and could be used in a range of vector-borne disease-endemic contexts to identify the presence of habitats of interest that are relevant to the local land cover and local vector ecology.
One of the key advantages of drone data is that user-defined time points allow features to be characterized over time. This study was predominantly cross-sectional, aiming to classify land types from labeled drone images taken at specific points in time. As one of the key aims was to categorize potential breeding sites of the malaria vector An. funestus, which breeds in large, semipermanent to permanent water bodies, the breeding sites of interest are less likely to vary across seasons. However, these methods could be repeated to reclassify potential habitats for specific seasons or time points. This could be particularly informative for mapping seasonal changes in breeding site availability or monitoring agricultural activities that expand different habitat types.
Additionally, this study highlights the importance of identifying how the classified information will be used. While we assessed model performance using the Dice coefficient, this metric describes pixel-level classification accuracy. In some cases, this may be appropriate, such as when an epidemiological study needs to identify the precise outline of a water body. However, in many cases, such scores do not reflect the utility of the classifier. For example, a control program may only need to know where a potential breeding site is located and its approximate size in order to plan larvicidal activities.
This study has several important limitations. While we used data from multiple sites in West Africa, these do not capture the full range of habitats within the region or their seasonal changes. Future studies could integrate data from other sources to develop more representative datasets. Additionally, limited ground-truthed data were available from these study sites, and there were insufficient data on larval presence or absence to predict whether specific land classes contained Anopheles larvae. Were such data available, this framework could be extended to predict the presence or absence of specific species.
Despite these limitations, this study developed a methodology to automatically detect potential mosquito breeding sites. Although data labeling is highly labor-intensive, the resulting classifier can rapidly analyze RGB images collected using small, low-cost drones. Moreover, as deep learning methods are self-learning, additional datasets are likely to improve the performance and applicability of these methods. Future work could develop more user-friendly interfaces to support the uptake of these methods. Altogether, this study sets out a useful framework for applying deep learning approaches to RGB drone imagery.

5. Conclusions

This study proposed a methodology to automatically analyze high-resolution RGB drone images of West African land cover to detect the potential habitats of malaria vectors. After manual image annotation, the images were cut into patches of 256 × 256 and 512 × 512 pixels. U-Net-based and attention-U-Net-based algorithms were then applied to automatically identify buildings, roads, water bodies, crops and tillage, and the best model was selected from the different experiments based on the Dice score. Although we obtained promising results in identifying buildings, roads and water bodies, crops and tillage still represent a challenge, which will be explored in future work. Nevertheless, we have demonstrated that our approach is pertinent for helping experts create tools to prevent the proliferation of mosquito breeding sites.

Supplementary Materials

Author Contributions

F.T. and G.J.G. contributed equally as first authors. Conceptualization, I.B., J.L., K.F. and G.C.-E.; data curation, F.T., G.J.G., K.C., N.A., P.K.K., W.A.O., A.B.T. and M.W.G.; funding acquisition, I.B. and K.F.; investigation, I.B., G.C.-E. and K.F.; methodology, F.T., G.J.G., M.N.-d.-P., K.C., E.M. and H.A.-S.; project administration, I.B., G.C.-E. and K.F.; resources, I.B., G.C.-E. and K.F.; software, F.T. and G.J.G.; supervision, I.B., J.L., G.C.-E. and K.F.; validation, F.T., G.J.G., M.N.-d.-P. and H.A.-S.; visualization, F.T., G.J.G., M.N.-d.-P. and E.J.; writing—original draft preparation, F.T., G.J.G., M.N.-d.-P., I.B., K.F. and H.A.-S.; writing—review and editing, I.B., K.F. and G.C.-E. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a Sir Henry Dale fellowship awarded to K.M.F. and jointly funded by the Wellcome Trust and Royal Society (Grant No. 221963/Z/20/Z). Additional funding was provided by the BBSRC and EPSRC Impact Accelerator Accounts (BB/X511110/1) and the CGIAR Research Program on Agriculture for Nutrition and Health (A4NH). K.C. and J.L. are also partly supported by UK aid from the UK government (Foreign, Commonwealth & Development Office-Funded RAFT [Resilience Against Future Threats] Research Programme Consortium).

Data Availability Statement

For reproducibility purposes, the Python code is available at https://github.com/healthinnovation/aerial-image-analysis (accessed on 19 April 2023).

Acknowledgments

We would like to thank the collaborators and communities in Burkina Faso and Côte d’Ivoire, as well as all volunteers who helped to label and check the data.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
EO    Earth observation
UAV   unmanned aerial vehicle
CNN   convolutional neural network
DL    deep learning
TAD   technology-assisted digitizing
GLCM  gray-level co-occurrence matrix
RGB   red, green and blue
RMSE  root mean square error
MCSMA multiple-criteria spectral mixture analysis
RNN   recurrent neural network
ReLU  rectified linear unit
CPU   central processing unit
RAM   random access memory

References

  1. Patz, J.A.; Daszak, P.; Tabor, G.M.; Aguirre, A.A.; Pearl, M.; Epstein, J.; Wolfe, N.D.; Kilpatrick, A.M.; Foufopoulos, J.; Molyneux, D.; et al. Unhealthy landscapes: Policy recommendations on land use change and infectious disease emergence. Environ. Health Perspect. 2004, 112, 1092–1098. [Google Scholar] [CrossRef] [PubMed]
  2. Byrne, I.; Chan, K.; Manrique, E.; Lines, J.; Wolie, R.Z.; Trujillano, F.; Garay, G.J.; Del Prado Cortez, M.N.; Alatrista-Salas, H.; Sternberg, E.; et al. Technical Workflow Development for Integrating Drone Surveys and Entomological Sampling to Characterise Aquatic Larval Habitats of Anopheles funestus in Agricultural Landscapes in Côte d’Ivoire. J. Environ. Public Health 2021, 2021, 3220244. [Google Scholar] [CrossRef] [PubMed]
  3. Stanton, M.C.; Kalonde, P.; Zembere, K.; Spaans, R.H.; Jones, C.M. The application of drones for mosquito larval habitat identification in rural environments: A practical approach for malaria control? Malar. J. 2021, 20, 244. [Google Scholar] [CrossRef] [PubMed]
  4. Hardy, A.; Makame, M.; Cross, D.; Majambere, S.; Msellem, M. Using low-cost drones to map malaria vector habitats. Parasites Vectors 2017, 10, 29. [Google Scholar] [CrossRef]
  5. Hardy, A.; Ettritch, G.; Cross, D.E.; Bunting, P.; Liywalii, F.; Sakala, J.; Silumesii, A.; Singini, D.; Smith, M.; Willis, T.; et al. Automatic detection of open and vegetated water bodies using Sentinel 1 to map African malaria vector mosquito breeding habitats. Remote Sens. 2019, 11, 593. [Google Scholar] [CrossRef]
  6. Carrasco-Escobar, G.; Manrique, E.; Ruiz-Cabrejos, J.; Saavedra, M.; Alava, F.; Bickersmith, S.; Prussing, C.; Vinetz, J.M.; Conn, J.E.; Moreno, M.; et al. High-accuracy detection of malaria vector larval habitats using drone-based multispectral imagery. PLoS Neglected Trop. Dis. 2019, 13, e0007105. [Google Scholar] [CrossRef]
  7. Fornace, K.M.; Diaz, A.V.; Lines, J.; Drakeley, C.J. Achieving global malaria eradication in changing landscapes. Malar. J. 2021, 20, 69. [Google Scholar] [CrossRef]
  8. Lacey, L.A.; Lacey, C.M. The medical importance of riceland mosquitoes and their control using alternatives to chemical insecticides. J. Am. Mosq. Control. Assoc. Suppl. 1990, 2, 1–93. [Google Scholar]
  9. Tusting, L.S.; Thwing, J.; Sinclair, D.; Fillinger, U.; Gimnig, J.; Bonner, K.E.; Bottomley, C.; Lindsay, S.W. Mosquito larval source management for controlling malaria. Cochrane Database Syst. Rev. 2013, 2013, CD008923. [Google Scholar] [CrossRef]
  10. Ndiaye, A.; Niang, E.H.A.; Diène, A.N.; Nourdine, M.A.; Sarr, P.C.; Konaté, L.; Faye, O.; Gaye, O.; Sy, O. Mapping the breeding sites of Anopheles gambiae sl in areas of residual malaria transmission in central western Senegal. PLoS ONE 2020, 15, e0236607. [Google Scholar] [CrossRef]
  11. Kalluri, S.; Gilruth, P.; Rogers, D.; Szczur, M. Surveillance of arthropod vector-borne infectious diseases using remote sensing techniques: A review. PLoS Pathog. 2007, 3, e116. [Google Scholar] [CrossRef]
  12. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2012, 3, 397–404. [Google Scholar] [CrossRef]
  13. Wimberly, M.C.; de Beurs, K.M.; Loboda, T.V.; Pan, W.K. Satellite observations and malaria: New opportunities for research and applications. Trends Parasitol. 2021, 37, 525–537. [Google Scholar] [CrossRef]
  14. Fornace, K.M.; Herman, L.S.; Abidin, T.R.; Chua, T.H.; Daim, S.; Lorenzo, P.J.; Grignard, L.; Nuin, N.A.; Ying, L.T.; Grigg, M.J.; et al. Exposure and infection to Plasmodium knowlesi in case study communities in Northern Sabah, Malaysia and Palawan, The Philippines. PLoS Neglected Trop. Dis. 2018, 12, e0006432. [Google Scholar] [CrossRef]
  15. Brock, P.M.; Fornace, K.M.; Grigg, M.J.; Anstey, N.M.; William, T.; Cox, J.; Drakeley, C.J.; Ferguson, H.M.; Kao, R.R. Predictive analysis across spatial scales links zoonotic malaria to deforestation. Proc. R. Soc. B 2019, 286, 20182351. [Google Scholar] [CrossRef]
  16. Byrne, I.; Aure, W.; Manin, B.O.; Vythilingam, I.; Ferguson, H.M.; Drakeley, C.J.; Chua, T.H.; Fornace, K.M. Environmental and spatial risk factors for the larval habitats of plasmodium knowlesi vectors in sabah, Malaysian borneo. Sci. Rep. 2021, 11, 11810. [Google Scholar] [CrossRef]
  17. Johnson, E.; Sharma, R.S.K.; Cuenca, P.R.; Byrne, I.; Salgado-Lynn, M.; Shahar, Z.S.; Lin, L.C.; Zulkifli, N.; Saidi, N.D.M.; Drakeley, C.; et al. Forest fragmentation drives zoonotic malaria prevalence in non-human primate hosts. bioRxiv 2022. [Google Scholar] [CrossRef]
  18. Gimnig, J.E.; Ombok, M.; Kamau, L.; Hawley, W.A. Characteristics of larval anopheline (Diptera: Culicidae) habitats in Western Kenya. J. Med. Entomol. 2001, 38, 282–288. [Google Scholar] [CrossRef] [PubMed]
  19. Himeidan, Y.E.; Zhou, G.; Yakob, L.; Afrane, Y.; Munga, S.; Atieli, H.; El-Rayah, E.A.; Githeko, A.K.; Yan, G. Habitat stability and occurrences of malaria vector larvae in western Kenya highlands. Malar. J. 2009, 8, 234. [Google Scholar] [CrossRef]
  20. Kibret, S.; Wilson, G.G.; Ryder, D.; Tekie, H.; Petros, B. Malaria impact of large dams at different eco-epidemiological settings in Ethiopia. Trop. Med. Health 2017, 45, 4. [Google Scholar] [CrossRef]
  21. Nambunga, I.H.; Ngowo, H.S.; Mapua, S.A.; Hape, E.E.; Msugupakulya, B.J.; Msaky, D.S.; Mhumbira, N.T.; Mchwembo, K.R.; Tamayamali, G.Z.; Mlembe, S.V.; et al. Aquatic habitats of the malaria vector Anopheles funestus in rural south-eastern Tanzania. Malar. J. 2020, 19, 219. [Google Scholar] [CrossRef] [PubMed]
  22. Diakité, N.R.; Guindo-Coulibaly, N.; Adja, A.M.; Ouattara, M.; Coulibaly, J.T.; Utzinger, J.; N’Goran, E.K. Spatial and temporal variation of malaria entomological parameters at the onset of a hydro-agricultural development in central Côte d’Ivoire. Malar. J. 2015, 14, 340. [Google Scholar] [CrossRef] [PubMed]
  23. Zahouli, J.B.Z.; Koudou, B.G.; Müller, P.; Malone, D.; Tano, Y.; Utzinger, J. Effect of land-use changes on the abundance, distribution, and host-seeking behavior of Aedes arbovirus vectors in oil palm-dominated landscapes, southeastern Côte d’Ivoire. PLoS ONE 2017, 12, e0189082. [Google Scholar] [CrossRef] [PubMed]
  24. Dida, G.O.; Anyona, D.N.; Abuom, P.O.; Akoko, D.; Adoka, S.O.; Matano, A.S.; Owuor, P.O.; Ouma, C. Spatial distribution and habitat characterization of mosquito species during the dry season along the Mara River and its tributaries, in Kenya and Tanzania. Infect. Dis. Poverty 2018, 7, 2. [Google Scholar] [CrossRef] [PubMed]
  25. Mendis, C.; Jacobsen, J.L.; Gamage-Mendis, A.; Bule, E.; Dgedge, M.; Thompson, R.; Cuamba, N.; Barreto, J.; Begtrup, K.; Sinden, R.E.; et al. Anopheles arabiensis and An. funestus are equally important vectors of malaria in Matola coastal suburb of Maputo, southern Mozambique. Med. Vet. Entomol. 2000, 14, 171–180. [Google Scholar] [CrossRef]
  26. Omukunda, E.; Githeko, A.; Ndong’a, M.F.; Mushinzimana, E.; Yan, G. Effect of swamp cultivation on distribution of anopheline larval habitats in Western Kenya. J. Vector Borne Dis. 2012, 49, 61–71. [Google Scholar]
  27. Kweka, E.J.; Kamau, L.; Munga, S.; Lee, M.C.; Githeko, A.K.; Yan, G. A first report of Anopheles funestus sibling species in western Kenya highlands. Acta Trop. 2013, 128, 158–161. [Google Scholar] [CrossRef]
  28. Fornace, K.M.; Drakeley, C.J.; William, T.; Espino, F.; Cox, J. Mapping infectious disease landscapes: Unmanned aerial vehicles and epidemiology. Trends Parasitol. 2014, 30, 514–519. [Google Scholar] [CrossRef]
  29. Carrasco-Escobar, G.; Moreno, M.; Fornace, K.; Herrera-Varela, M.; Manrique, E.; Conn, J.E. The use of drones for mosquito surveillance and control. Parasites Vectors 2022, 15, 473. [Google Scholar] [CrossRef]
  30. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  31. Šiljeg, A.; Panđa, L.; Domazetović, F.; Marić, I.; Gašparović, M.; Borisov, M.; Milošević, R. Comparative Assessment of Pixel and Object-Based Approaches for Mapping of Olive Tree Crowns Based on UAV Multispectral Imagery. Remote Sens. 2022, 14, 757. [Google Scholar] [CrossRef]
  32. Hodgson, J.C.; Mott, R.; Baylis, S.M.; Pham, T.T.; Wotherspoon, S.; Kilpatrick, A.D.; Raja Segaran, R.; Reid, I.; Terauds, A.; Koh, L.P. Drones count wildlife more accurately and precisely than humans. Methods Ecol. Evol. 2018, 9, 1160–1167. [Google Scholar] [CrossRef]
  33. Gray, P.C.; Fleishman, A.B.; Klein, D.J.; McKown, M.W.; Bezy, V.S.; Lohmann, K.J.; Johnston, D.W. A convolutional neural network for detecting sea turtles in drone imagery. Methods Ecol. Evol. 2019, 10, 345–355. [Google Scholar] [CrossRef]
  34. Kattenborn, T.; Eichel, J.; Fassnacht, F.E. Convolutional Neural Networks enable efficient, accurate and fine-grained segmentation of plant species and communities from high-resolution UAV imagery. Sci. Rep. 2019, 9, 17656. [Google Scholar] [CrossRef]
  35. Liu, Z.Y.C.; Chamberlin, A.J.; Tallam, K.; Jones, I.J.; Lamore, L.L.; Bauer, J.; Bresciani, M.; Wolfe, C.M.; Casagrandi, R.; Mari, L.; et al. Deep Learning Segmentation of Satellite Imagery Identifies Aquatic Vegetation Associated with Snail Intermediate Hosts of Schistosomiasis in Senegal, Africa. Remote Sens. 2022, 14, 1345. [Google Scholar] [CrossRef]
  36. Hardy, A.; Oakes, G.; Hassan, J.; Yussuf, Y. Improved Use of Drone Imagery for Malaria Vector Control through Technology-Assisted Digitizing (TAD). Remote Sens. 2022, 14, 317. [Google Scholar] [CrossRef]
  37. Kwak, G.H.; Park, N.W. Impact of texture information on crop classification with machine learning and UAV images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef]
  38. Hu, P.; Chapman, S.C.; Zheng, B. Coupling of machine learning methods to improve estimation of ground coverage from unmanned aerial vehicle (UAV) imagery for high-throughput phenotyping of crops. Funct. Plant Biol. 2021, 48, 766–779. [Google Scholar] [CrossRef]
  39. Komarkova, J.; Jech, J.; Sedlak, P. Comparison of Vegetation Spectral Indices Based on UAV Data: Land Cover Identification Near Small Water Bodies. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Sevilla, Spain, 24–27 June 2020; pp. 1–4. [Google Scholar]
  40. Cao, S.; Xu, W.; Sanchez-Azofeif, A.; Tarawally, M. Mapping Urban Land Cover Using Multiple Criteria Spectral Mixture Analysis: A Case Study in Chengdu, China. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2701–2704. [Google Scholar]
  41. Rustowicz, R.; Cheong, R.; Wang, L.; Ermon, S.; Burke, M.; Lobell, D. Semantic segmentation of crop type in Africa: A novel dataset and analysis of deep learning methods. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 75–82. [Google Scholar]
  42. Collins, K.A.; Ouedraogo, A.; Guelbeogo, W.M.; Awandu, S.S.; Stone, W.; Soulama, I.; Ouattara, M.S.; Nombre, A.; Diarra, A.; Bradley, J.; et al. Investigating the impact of enhanced community case management and monthly screening and treatment on the transmissibility of malaria infections in Burkina Faso: Study protocol for a cluster-randomised trial. BMJ Open 2019, 9, e030598. [Google Scholar] [CrossRef]
  43. Jimenez, G.; Kar, A.; Ounissi, M.; Ingrassia, L.; Boluda, S.; Delatour, B.; Stimmer, L.; Racoceanu, D. Visual Deep Learning-Based Explanation for Neuritic Plaques Segmentation in Alzheimer’s Disease Using Weakly Annotated Whole Slide Histopathological Images. In Lecture Notes in Computer Science, Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI), Singapore, 18–22 September 2022; Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S., Eds.; Springer: Cham, Switzerland, 2022; Volume 13432, pp. 336–344. [Google Scholar] [CrossRef]
  44. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical image computing and computer-assisted intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  45. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  46. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  47. Jimenez, G.; Kar, A.; Ounissi, M.; Stimmer, L.; Delatour, B.; Racoceanu, D. Interpretable Deep Learning in Computational Histopathology for refined identification of Alzheimer’s Disease biomarkers. In Proceedings of the Alzheimer’s & Dementia: Alzheimer’s Association International Conference (AAIC), San Diego, CA, USA, 2–4 August 2022; Wiley: Hoboken, NJ, USA, 2022. [Google Scholar]
Figure 1. Drone image collection sites with example drone imagery from each site.
Figure 2. Example of image labeling using GroundWork. (A) Grid over an image for labeling. (B) Polygon selection.
Figure 3. Example of image labeling using QGIS.
Figure 4. Gold-standard (ground truth) mask process.
Figure 5. Image example from Burkina Faso.
Figure 6. Classification methodology schema.
Figure 7. Evolution of the training process using the attention U-Net architecture for patches of size 256 × 256 pixels. (a) Buildings. (b) Crops. (c) Non-vegetated water bodies. (d) Roads. (e) Tillage. (f) Vegetated water bodies.
Figure 8. Predictions using the U-Net architecture for patches of size 256 × 256 pixels. (a) Buildings. (b) Crops. (c) Non-vegetated water bodies. (d) Roads. (e) Tillage. (f) Vegetated water bodies.
Figure 9. Future proposal: human-supervised tool for improved drone labeling. The idea is adapted from [43,47].
Table 1. Summary of land cover classification methods.

| Location | Application | Imaging Source | Method | Result |
| --- | --- | --- | --- | --- |
| Senegal River, West Africa [35] | Mapping snails’ aquatic habitats | Satellite: 8-band WorldView-2 for training; UAV: used to assess labeling | Semantic segmentation using U-Net, 8 bands + GLCM features | Accuracy, 4 classes; test: 82.7%; hold-out validation: 96.5% |
| Anbandegi, Korea [37] | Crop classification (Kimchi cabbage) | UAV: green, red, NIR bands | SVM and RF, using GLCM features to reduce noise | Overall accuracy, 4 classes: 98.72% |
| Queensland, Australia [38] | Ground coverage (wheat crops) | UAV: RGB; real image set (RISs) and synthetic image set (SISs) | Two-step approach: per-pixel segmentation, then sub-pixel segmentation using a regression tree classifier | RMSE; RISs: <6%; SISs: <5% |
| Pardubice, Czech Republic [39] | Land cover identification near small water bodies | UAV: RGB | Comparison of 8 different vegetation indexes | Visual comparison; best performance: NGRDI, GLI2, VARI |
| Chengdu, China [40] | Mapping vegetation, impervious surface and soil in an urban environment | Satellite: Landsat-8 Operational Land Imager (OLI) | Multiple-criteria spectral mixture analysis (MCSMA) with a multi-step approach for spectral unmixing | RMSE; vegetation: 0.143; soil: 0.170; impervious: 0.151 |
| Ghana and South Sudan [41] | Semantic segmentation of crops in Africa | Satellite: Sentinel-1 (VV and VH), Sentinel-2 (10 bands) and Planet Scope (RGB + NIR) | Compared 2D U-Net + CLSTM and 3D CNN using multi-temporal images | Accuracy; South Sudan: 2D U-Net 88.7%, 3D 90%; Ghana: 2D U-Net 65.7%, 3D 63.5% |
Table 2. Number of images and patches by class for training, validation and test.

| Class | # Drone Images | # Train/Val Images | # Train/Val Patches (256 × 256) | # Train/Val Patches (512 × 512) | # Test Images | # Test Patches (256 × 256) | # Test Patches (512 × 512) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Buildings | 48 | 36 | 7441 | 1324 | 12 | 4835 | 714 |
| Crops | 93 | 69 | 229,522 | 58,738 | 24 | 64,357 | 16,440 |
| Roads | 60 | 45 | 38,799 | 3908 | 15 | 14,330 | 2106 |
| Tillage | 42 | 31 | 86,988 | 22,817 | 11 | 39,025 | 10,264 |
| Non-vegetated | 37 | 27 | 9194 | 2232 | 10 | 79 | 1 |
| Vegetated | 20 | 15 | 4660 | 1107 | 5 | 564 | 125 |
Table 3. Summary of the percentage of the class per patch in each category.

Burkina Faso

| Class | Patches (256 × 256) | Min (%) | Mean (%) | Max (%) | Patches (512 × 512) | Min (%) | Mean (%) | Max (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Buildings | 6584 | 10.01 | 33.72 | 100 | 778 | 20.05 | 35.81 | 87.9 |
| Crops | 14,440 | 10.00 | 76.70 | 100 | 3662 | 20.00 | 71.23 | 100.0 |
| Roads | 42,810 | 10.00 | 29.51 | 100 | 4513 | 20.00 | 36.72 | 100.0 |
| Tillage | 121,952 | 10.00 | 75.49 | 100 | 32,089 | 20.00 | 70.35 | 100.0 |
| Non-vegetated | 7734 | 10.09 | 91.73 | 100 | 1900 | 20.08 | 89.51 | 100.0 |
| Vegetated | 251 | 10.17 | 65.89 | 100 | 58 | 20.49 | 61.03 | 100.0 |

Côte d’Ivoire

| Class | Patches (256 × 256) | Min (%) | Mean (%) | Max (%) | Patches (512 × 512) | Min (%) | Mean (%) | Max (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Buildings | 5692 | 10.0 | 46.5 | 100 | 1260 | 20.0 | 35.6 | 82 |
| Crops | 279,439 | 10.0 | 79.5 | 100 | 71,516 | 20.0 | 74.2 | 100 |
| Roads | 10,319 | 10.0 | 31.4 | 100 | 1501 | 20.0 | 26.7 | 96 |
| Tillage | 4061 | 10.0 | 69.4 | 100 | 992 | 20.1 | 62.3 | 100 |
| Non-vegetated | 1539 | 10.1 | 55.3 | 100 | 333 | 20.0 | 56.3 | 100 |
| Vegetated | 5646 | 10.0 | 62.3 | 100 | 1107 | 20.1 | 58.5 | 100 |
Table 4. Results of the classification process using a U-Net architecture for a 256 × 256 pixel patch size; the Dice score of the best fold per class is shown in bold. The results are reported per cross-validation fold (CV) in terms of false positives (FP), false negatives (FN), true negatives (TN), true positives (TP), precision, recall and Dice.

| Class | CV | FP | FN | TN | TP | Precision | Recall | Dice |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vegetated water body | 1 | 0.17 | 0.15 | 0.15 | 0.53 | 0.75 | 0.78 | **0.68** |
| | 2 | 0.25 | 0.19 | 0.20 | 0.37 | 0.59 | 0.66 | 0.56 |
| | 3 | 0.21 | 0.12 | 0.22 | 0.45 | 0.68 | 0.78 | 0.65 |
| | Avg. | 0.21 | 0.15 | 0.19 | 0.45 | 0.67 | 0.74 | 0.63 |
| Tillage | 1 | 0.10 | 0.03 | 0.13 | 0.74 | 0.88 | 0.96 | **0.88** |
| | 2 | 0.10 | 0.09 | 0.15 | 0.66 | 0.87 | 0.88 | 0.82 |
| | 3 | 0.11 | 0.05 | 0.13 | 0.71 | 0.87 | 0.93 | 0.86 |
| | Avg. | 0.10 | 0.06 | 0.14 | 0.70 | 0.87 | 0.92 | 0.85 |
| Roads | 1 | 0.09 | 0.07 | 0.63 | 0.21 | 0.70 | 0.74 | 0.70 |
| | 2 | 0.06 | 0.06 | 0.71 | 0.17 | 0.73 | 0.75 | **0.71** |
| | 3 | 0.16 | 0.15 | 0.51 | 0.17 | 0.52 | 0.53 | 0.43 |
| | Avg. | 0.10 | 0.09 | 0.62 | 0.18 | 0.65 | 0.67 | 0.61 |
| Non-vegetated water body | 1 | 0.02 | 0.54 | 0.06 | 0.38 | 0.95 | 0.41 | 0.50 |
| | 2 | 0.21 | 0.26 | 0.26 | 0.27 | 0.57 | 0.51 | 0.48 |
| | 3 | 0.22 | 0.03 | 0.18 | 0.57 | 0.72 | 0.96 | **0.75** |
| | Avg. | 0.15 | 0.28 | 0.17 | 0.41 | 0.74 | 0.63 | 0.58 |
| Crops | 1 | 0.10 | 0.06 | 0.10 | 0.74 | 0.87 | 0.93 | 0.86 |
| | 2 | 0.13 | 0.04 | 0.09 | 0.75 | 0.85 | 0.95 | 0.86 |
| | 3 | 0.09 | 0.04 | 0.10 | 0.76 | 0.89 | 0.95 | **0.88** |
| | Avg. | 0.11 | 0.05 | 0.10 | 0.75 | 0.87 | 0.94 | 0.86 |
| Building | 1 | 0.04 | 0.07 | 0.51 | 0.38 | 0.89 | 0.84 | 0.81 |
| | 2 | 0.04 | 0.08 | 0.57 | 0.31 | 0.88 | 0.80 | 0.76 |
| | 3 | 0.04 | 0.03 | 0.54 | 0.38 | 0.90 | 0.92 | **0.87** |
| | Avg. | 0.04 | 0.06 | 0.54 | 0.35 | 0.89 | 0.85 | 0.81 |
Table 5. Results of the classification process using an attention U-Net architecture for a 256 × 256 pixel patch size; the Dice score of the best fold per class is shown in bold. The results are reported per cross-validation fold (CV) in terms of false positives (FP), false negatives (FN), true negatives (TN), true positives (TP), precision, recall and Dice.

| Class | CV | FP | FN | TN | TP | Precision | Recall | Dice |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vegetated water body | 1 | 0.27 | 0.08 | 0.15 | 0.49 | 0.64 | 0.85 | 0.67 |
| | 2 | 0.28 | 0.11 | 0.17 | 0.44 | 0.61 | 0.81 | 0.64 |
| | 3 | 0.08 | 0.14 | 0.24 | 0.53 | 0.81 | 0.72 | **0.70** |
| | Avg. | 0.21 | 0.11 | 0.19 | 0.49 | 0.69 | 0.79 | 0.67 |
| Tillage | 1 | 0.06 | 0.25 | 0.18 | 0.51 | 0.85 | 0.66 | 0.67 |
| | 2 | 0.11 | 0.23 | 0.13 | 0.53 | 0.80 | 0.68 | **0.69** |
| | 3 | 0.03 | 0.55 | 0.27 | 0.15 | 0.77 | 0.20 | 0.27 |
| | Avg. | 0.07 | 0.34 | 0.19 | 0.40 | 0.81 | 0.51 | 0.54 |
| Roads | 1 | 0.15 | 0.12 | 0.53 | 0.20 | 0.70 | 0.67 | 0.58 |
| | 2 | 0.15 | 0.10 | 0.48 | 0.27 | 0.71 | 0.69 | **0.60** |
| | 3 | 0.35 | 0.06 | 0.40 | 0.19 | 0.51 | 0.77 | 0.46 |
| | Avg. | 0.22 | 0.09 | 0.47 | 0.22 | 0.64 | 0.71 | 0.55 |
| Non-vegetated water body | 1 | 0.10 | 0.58 | 0.30 | 0.02 | 0.36 | 0.03 | 0.04 |
| | 2 | 0.06 | 0.10 | 0.02 | 0.82 | 0.91 | 0.88 | **0.85** |
| | 3 | 0.20 | 0.05 | 0.26 | 0.49 | 0.71 | 0.90 | 0.72 |
| | Avg. | 0.12 | 0.24 | 0.19 | 0.44 | 0.66 | 0.60 | 0.54 |
| Crops | 1 | 0.07 | 0.17 | 0.14 | 0.62 | 0.85 | 0.76 | 0.75 |
| | 2 | 0.04 | 0.27 | 0.13 | 0.55 | 0.89 | 0.66 | 0.72 |
| | 3 | 0.16 | 0.04 | 0.05 | 0.75 | 0.81 | 0.94 | **0.84** |
| | Avg. | 0.09 | 0.16 | 0.11 | 0.64 | 0.85 | 0.79 | 0.77 |
| Building | 1 | 0.03 | 0.06 | 0.56 | 0.36 | 0.91 | 0.84 | **0.85** |
| | 2 | 0.03 | 0.07 | 0.53 | 0.37 | 0.91 | 0.82 | 0.83 |
| | 3 | 0.21 | 0.06 | 0.41 | 0.32 | 0.65 | 0.83 | 0.66 |
| | Avg. | 0.09 | 0.06 | 0.50 | 0.35 | 0.82 | 0.83 | 0.78 |