Article

Spatiotemporal Prediction of Conflict Fatality Risk Using Convolutional Neural Networks and Satellite Imagery

1 AidData, Global Research Institute, William & Mary, Williamsburg, VA 23185, USA
2 Department of Economics, William & Mary, Williamsburg, VA 23185, USA
3 Department of Applied Science, William & Mary, Williamsburg, VA 23185, USA
4 Geospatial Evaluation and Observation Lab, Data Science Program, William & Mary, Williamsburg, VA 23185, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(18), 3411; https://doi.org/10.3390/rs16183411
Submission received: 15 July 2024 / Revised: 30 August 2024 / Accepted: 10 September 2024 / Published: 13 September 2024
(This article belongs to the Special Issue Weakly Supervised Deep Learning in Exploiting Remote Sensing Big Data)

Abstract

As both satellite imagery and image-based machine learning methods continue to improve and become more accessible, they are being utilized in an increasing number of sectors and applications. Recent applications using convolutional neural networks (CNNs) and satellite imagery include estimating socioeconomic and development indicators such as poverty, road quality, and conflict. This article builds on existing work leveraging satellite imagery and machine learning for estimation or prediction to explore the potential to extend these methods temporally. Using Landsat 8 imagery and data from the Armed Conflict Location & Event Data Project (ACLED), we produce subnational predictions of the risk of conflict fatalities in Nigeria during 2015, 2017, and 2019 using distinct models trained on both yearly and six-month windows of data from the preceding year. We find that models trained on imagery from the preceding year can predict conflict fatalities at conflict sites in the following year with an area under the receiver operating characteristic curve (AUC) of over 75% on average. While models consistently outperform a baseline comparison, and performance in individual periods can be strong (AUC > 80%), changes in ground conditions such as the geographic scope of conflict can degrade performance in subsequent periods. In addition, we find that models trained using an entire year of data slightly outperform models using only six months of data. Overall, the findings suggest CNN-based methods are moderately effective at detecting features in Landsat satellite imagery associated with the risk of fatalities from conflict events across time periods.

1. Introduction

Conflict in Sub-Saharan nations such as Nigeria can have many drivers, among them ethnic and religious strife (including terrorist groups), climate change and drought, violence against civilians, and general corruption [1]. The resulting conflicts can range in form from violent clashes, explosions, or attacks on civilians to one-sided protests, riots, or strategic developments [2,3]. Methods to forecast conflict and the risk of violence (i.e., fatalities) can provide utility to humanitarian organizations, whose efforts to support people in conflict-impacted areas are both needed and impeded by ongoing conflict (As an example, see https://www.usaid.gov/humanitarian-assistance/nigeria for an overview of USAID’s work in Nigeria. Accessed on 1 July 2024). Advances in the use of satellite imagery and deep learning provide opportunities to remotely develop forecasts of future deadly conflict that can support humanitarian efforts in conflict-affected areas.
Human and environmental systems can be observed with satellite imagery in many ways that may be directly relevant for conflict, such as detecting drought conditions or agricultural encroachment into pastoral lands [4]. The severity of conflict events in particular, in terms of whether an event results in fatalities, can be linked to observable conditions. Data around conflict event sites in Nigeria reveals that the median population of non-fatal event locations is nearly triple that of fatal event locations, and non-fatal event areas are on average 40% urban while fatal event areas are on average only 20% urban (Based on population [5], land cover [6], and precipitation [7] data within a 3 km buffer around conflict event sites in Nigeria from 2015, 2017, and 2019 from the Armed Conflict Location & Event Data Project (ACLED) [3]. Population, land cover, and precipitation data prepared following methods in [8]). In addition, the median coverage of rainfed cropland in fatal event locations is nearly 40%, yet only 7.5% in non-fatal locations. Irrigated cropland, however, is only present in around 1% of all conflict event areas. Given the increased occurrence of fatal events in areas with rainfed cropland, it is notable that average rainfall (See World Bank Climate Change Knowledge Portal https://climateknowledgeportal.worldbank.org/country/nigeria/climate-data-historical. Accessed on 1 July 2024) at non-fatal event locations is in line with national rainfall averages, while fatal event locations are approximately 15% lower (based on fatal and non-fatal event locations where at least 25% of the nearby area was rainfed cropland). These geospatial characteristics provide meaningful insight into potential information (e.g., populous urban areas, dry cropland) that can be detected from even moderate-resolution satellite imagery over time in order to forecast fatal conflict events.
Convolutional neural networks (ConvNets or CNNs) have been a focus of recent work that is connecting advancements in geospatial AI with research focused on human development [9]. CNNs are a class of neural networks well-suited for a variety of computer vision tasks that require detecting spatial patterns or features within images [10,11,12,13]. Numerous implementations of CNNs have been developed, dealing with image classification, object detection, and more [14].
Geospatial applications of CNNs include using spectral signatures and phenological trends captured by satellite imagery to perform pixel and image classification [15,16], land use classification [17], and the detection of objects such as vehicles or buildings in satellite imagery [18,19]. In recent years, researchers focused on development economics have leveraged these methods to improve upon existing methods to fill the gaps in sparse datasets such as household surveys [20]. By using abundant remotely sensed data sources (satellite imagery) as inputs, CNNs are trained using existing sparse data (e.g., poverty metrics from household surveys) as labels. The resulting trained network can be used to predict the sparse data in additional geographical areas based on satellite imagery alone.
Building on work within the geospatial community [21], Jean et al. showed that CNN-based estimates of poverty offered improvements on previous methods of using nighttime lights to estimate poverty [20]. Over the past several years there have been a number of additional contributions to the literature utilizing CNNs and satellite imagery to predict human development indicators [9]. Much of this work has focused on exploring effectiveness in new geographic contexts, along with advancements based on incorporating new data or improvements to machine learning algorithms (e.g., [22]). Poverty estimates have been produced using a variety of available satellite imagery across most of Africa [23], as well as Mexico [24], Bangladesh & India [25], and Sri Lanka [26]. In addition, literature addressing the usage of CNN and satellite image-based methods to predict other development indicators has explored infrastructure quality assessments [27,28,29], crop yield [30], education & health-related metrics [31,32], population mapping [33,34], and conflict [35]. The potential for application, improvement, and operationalization of these methods across varying sectors remains a topic of discussion [9,36].
While much of this work has shown promising results, the literature has highlighted some limitations [32] as well as the abundance of opportunities for future research to advance the practical application of these methods [9,35,36]. One important area yet to be fully addressed when predicting development indicators using CNNs and satellite imagery is the capacity for accurate predictions across varying and discrete time periods. Producing time series estimates is necessary for many applications [9] such as geospatial impact evaluations that aim to leverage geospatial data over time to study the impact of development projects and other interventions [37,38,39,40,41,42]. To date, the literature has largely focused on using imagery that is a combination of a broad temporal range, or imagery tied to a single, specific period based on available training data, with limited evaluation of temporal transferability of models [9]. Recent work has indicated that existing approaches with strong cross-sectional performance may not perform as well across multiple points in time [43].
More broadly, a range of existing literature has highlighted the capability of deep learning methods such as CNNs to leverage time series data [44,45]. The Time Series Land Cover Challenge (TiSeLaC) [46] is one effort that engages with these problems and has led to the advancement of time series CNNs for classifying land cover types using satellite imagery [47]. Notably, the winners of the 2017 TiSeLaC were able to classify multispectral time series Landsat 8 imagery with over 99% accuracy using a combination of multilayer perceptrons (MLPs) and CNNs [48]. Related time series crop classification efforts have also led to the development of novel neural architectures [49]. Numerous applications in recent literature provide opportunities to explore whether existing time series approaches for deep learning can be extended to using CNNs to estimate development indicators [20,23,24,26,36]. In particular, recent work demonstrating that, for a discrete temporal window, CNNs can detect features from moderate resolution imagery associated with the risk of conflict fatalities is well suited to be expanded as a time series application [35].
A large body of existing work in the conflict prediction space has explored a range of methods, including machine learning, to forecast conflict in varying geographical contexts [50]. Applications of existing methods have often been limited in either their temporal or spatial precision, scope, or reliability [51,52]. While some approaches incorporate explicitly spatial modeling and/or data [53], many use non-spatial inputs such as text analysis from media content and other sources [54]. Several recent efforts using geospatial data have been a part of the Violence Early Warning System’s (ViEWS) prediction competition. The models created have leveraged an array of geospatial data with modeling techniques including logistic regressions, random forests, a variety of neural networks, and other ML algorithms [55,56]. A defining feature of the ViEWS-related efforts is the use of either the country level or coarse (approximately 50 × 50 km) grid cells as the sampling/prediction unit along with predefined features (e.g., demographics, GDP, past conflict data) as input to train models [57,58,59]. Recent work in the conflict prediction space has sought to improve the resolution of predictions, using 25 × 25 km grid cells, along with predefined features from a range of remote sensing data sources such as land cover, population, and crop data [60]. A critical distinction between these approaches and existing imagery-based deep learning efforts is the use of predefined input features and the resolution of the sampling/prediction units [35].
Building on existing applications of CNNs and satellite imagery to produce multi-temporal predictions of future fatal conflict has the potential to benefit a range of development, security, and other applications that are currently limited by available data. Estimates of the spatiotemporal risk of conflict fatalities—as opposed to only the spatial location of any future conflict events—are particularly valuable for practical applications including humanitarian and aid efforts, which require determining whether it is safe for personnel to operate in potentially dangerous regions. Models that estimate the risk of conflict fatalities should a conflict event occur in the future could be used by organizations to assess the risk of deploying personnel to those locations. Finer-resolution predictions, along with reliance on multispectral imagery alone as model input, can provide additional operational value over existing approaches that depend on many input features and coarse-resolution spatial grids for predictions.
In this piece, we explicitly focus on the degree to which the fatality of potential future conflict events can be forecast using satellite imagery and deep learning. The research builds on previous work to estimate conflict fatality in order to further explore the effectiveness and challenges of detecting precursors to fatal conflict from satellite imagery at multiple discrete points in time. We train CNN models using satellite imagery and conflict event data from both yearly and six-month windows from 2014 to 2019 to address two research questions: (RQ1) Can CNNs detect features from moderate resolution satellite imagery associated with conflict fatality at multiple discrete points in time? And if so, (RQ2) how does model performance vary based on the temporal window size used for training? In doing so, we also aim to contribute to the broader literature by exploring the potential of CNN-based approaches to estimate development indicators using satellite imagery at multiple points in time in order to support practical time series applications.
The paper is structured as follows. Section 2 presents the methods used to prepare Landsat 8 imagery and ACLED data samples for training, as well as the approach for training and validating CNN models. The results of the training and validation are reviewed in Section 3. Finally, Section 4 discusses limitations and potential paths forward based on this work, and Section 5 concludes.

2. Materials and Methods

To assess whether CNNs can detect features from satellite imagery associated with conflict fatality at multiple discrete points in time, and how model performance varies based on the temporal window size used for training, we train a series of ResNet (residual network) [14] CNNs on Landsat 8 imagery around ACLED conflict event locations, labeled based on whether a conflict fatality occurred during the associated event. ResNets are a well-established type of CNN that have been used in similar applications leveraging satellite imagery to estimate socioeconomic factors [26]. Landsat 8 imagery provides sufficient resolution to capture a wide range of features, including environmental conditions and land use, with broad spatial and temporal coverage that encompasses Nigeria during the study window. Finally, ACLED offers extensive coverage of conflict events and related fatality counts that can be used to label imagery [35].
Throughout this paper, predicting conflict fatality refers specifically to determining the risk of a fatality if conflict were to occur at a given location. Individual CNNs are trained for three distinct time periods of conflict event data (2015, 2017, 2019) using: (1) twelve months of imagery from one full calendar year prior to the conflict event year (January–December), (2) six months of imagery starting twelve months prior to the event year (January–June), or (3) six months of imagery starting six months prior to the event year (July–December) (see Figure 1). Testing both yearly and six-month windows provides insight into the general effectiveness of temporal models, as well as whether smaller windows (A) provide sufficient data to train models and (B) can improve models by enabling the detection of seasonal trends.
The resulting models are assessed based on their ability to accurately classify validation data left out from the original training. Finally, the performance of models from each discrete time period, using both yearly and six-month windows of data, is compared across models as well as against a naive baseline conflict prediction approach based only on the locations of fatal conflict from the previous year. Each of these steps is detailed further in this section.

2.1. Data Preparation

NASA and USGS have made Landsat 8 imagery free and publicly available since its launch in 2013. Like its predecessor Landsat 7, Landsat 8 imagery captures data for the entire planet every 16 days at 30 m resolution [61]. It consists of eleven spectral bands of imagery, captured using its operational land imager (OLI) and thermal infrared sensor (TIRS) [62] (Table 1).
Individual scenes of Landsat 8 imagery covering Nigeria are acquired using the EarthExplorer platform and bulk download tools [63,64] for 2014, 2016, and 2018 to be paired with conflict data from 2015, 2017, and 2019. These scenes, acquired at sixteen-day intervals, are subsequently mosaicked and aggregated to six-month and yearly windows for each year by taking the average pixel values for each band. Cloud pixels within individual scenes are masked based on the Landsat quality assessment band, ignoring any pixels flagged with medium or high cloud confidence [65].
In this analysis, we seek to construct three different temporal aggregations of Landsat data: two six-month periods and the twelve-month period associated with the year prior to the detected conflict event. The six-month window aggregations are performed based on the first and second half of each calendar year (i.e., January–June and July–December). The twelve-month window aggregation uses all data from the year preceding the year in which the event took place (i.e., January–December). Of the eleven bands available from Landsat 8, all individual and non-overlapping bands are utilized (i.e., the panchromatic band is dropped) to include any potentially relevant spatial features across the detectable spectrum. The result of this imagery preprocessing is an aggregate image of the entirety of Nigeria for each of the ten bands, at both yearly and six-month windows, for each year.
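As an illustrative sketch (not the exact pipeline used in this work), the cloud masking and temporal averaging described above could be implemented as follows; the `is_cloudy` predicate, file paths, and use of the rasterio library are assumptions, and the quality-assessment values shown are placeholders that should be verified against the documentation for the Landsat collection being used.

```python
import numpy as np
import rasterio

def is_cloudy(qa):
    """Hypothetical predicate: True where the QA band flags medium or high cloud confidence."""
    CLOUD_QA_VALUES = {2800, 2804, 2808, 2812}  # placeholder values; check the QA band documentation
    return np.isin(qa, list(CLOUD_QA_VALUES))

def masked_band(band_path, qa_path):
    """Read one band of one scene and set cloudy pixels to NaN."""
    with rasterio.open(band_path) as src:
        band = src.read(1).astype("float32")
    with rasterio.open(qa_path) as src:
        qa = src.read(1)
    band[is_cloudy(qa)] = np.nan
    return band

def temporal_average(band_paths, qa_paths):
    """Average all co-registered scenes in a temporal window, ignoring cloud-masked pixels."""
    stack = np.stack([masked_band(b, q) for b, q in zip(band_paths, qa_paths)])
    return np.nanmean(stack, axis=0)
```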
Conflict locations and fatality counts are downloaded using ACLED’s data export tool [3]. The dataset consists of a date, coordinates, and fatality count for each recorded conflict event. This data is aggregated into yearly and six-month windows for 2015, 2017, and 2019 to be paired with Landsat 8 imagery from the previous year. For each event, the conflict fatality count is converted into a binary value indicating whether or not a fatality occurred (1 or 0, respectively). This binary classification is used to label imagery associated with the conflict location during the training and validation of CNN models.
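A minimal sketch of this labeling step is shown below, assuming ACLED export column names ("event_date", "fatalities", "latitude", "longitude") and a hypothetical file name; it is intended only to illustrate the binary conversion and window assignment.

```python
import pandas as pd

# Load ACLED events for Nigeria (hypothetical file name).
acled = pd.read_csv("acled_nigeria.csv", parse_dates=["event_date"])

# Binary label: 1 if the event produced any fatalities, 0 otherwise.
acled["fatal"] = (acled["fatalities"] > 0).astype(int)

# Yearly window and six-month window (H1 = January-June, H2 = July-December).
acled["year"] = acled["event_date"].dt.year
acled["half"] = acled["event_date"].dt.month.le(6).map({True: "H1", False: "H2"})

# Keep only the study years and the fields needed to build samples.
samples = acled.loc[acled["year"].isin([2015, 2017, 2019]),
                    ["longitude", "latitude", "year", "half", "fatal"]]
```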
Imagery from 2014, 2016, and 2018 is paired with conflict fatality data from 2015, 2017, and 2019, respectively, at yearly and six-month windows as listed in Table 2. For each time period and temporal window, samples are prepared for training and validation of the CNNs. Individual samples consist of a location (longitude and latitude) based on a conflict event which can be used to load georeferenced imagery at that location, and a label of whether the associated conflict event had a fatality or not. Each conflict event location is treated as a unique sample as conflict event locations are not consistent across time periods, and therefore do not allow time-series sampling of locations.
Imagery for each sample consists of a 244 × 244 × 10 image stack around the sample location (approximately a 7 × 7 km area), representing each of the ten bands of Landsat 8 imagery detailed in Table 1. In the event that the classes within the sample sets are imbalanced (fatal and non-fatal events), the larger class is trimmed to the size of the smaller class (observations from the larger class are randomly dropped until the classes are balanced; on average, fatal events represent approximately 44% of the data). The balanced class samples are then split into separate datasets to be used during the training and validation stages (85% training, 15% validation), and the samples are duplicated nine times to increase model exposure during training (considerations regarding sampling schemes are included in Section 4). Finally, the process of producing the training and validation dataset is repeated five times to ensure the outcomes are reproducible regardless of randomization during sample generation.
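The class balancing, train/validation split, and duplication scheme could be sketched as below, assuming the `samples` DataFrame from the previous snippet; whether duplication is applied before or after the split is an implementation detail, and here it is applied to the training set only.

```python
import pandas as pd

def build_sample_frame(samples, val_frac=0.15, n_copies=9, seed=0):
    # Trim the larger class to the size of the smaller class by random sampling.
    n_min = samples["fatal"].value_counts().min()
    balanced = (samples.groupby("fatal", group_keys=False)
                       .apply(lambda g: g.sample(n=n_min, random_state=seed)))

    # Random 85/15 split into training and validation sets.
    val = balanced.sample(frac=val_frac, random_state=seed)
    train = balanced.drop(val.index)

    # Duplicate training samples to increase model exposure during training.
    train = pd.concat([train] * n_copies, ignore_index=True)
    return train, val
```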

2.2. Training

To train CNNs using the sampling data frames generated from Landsat imagery and ACLED conflict data, a ResNet CNN architecture is implemented using PyTorch (v1). This work employs ResNets pre-trained on ImageNet [66] to benefit from transfer learning, which has been shown to be effective in a range of applications within the broader machine learning community, including satellite imagery-based applications [20,35,67,68,69,70,71,72,73]. To utilize transfer learning, a CNN first learns generalizable features, such as basic lines, shapes, and other patterns, from a much larger (although potentially unrelated) set of images such as ImageNet. The pre-trained CNN retains that knowledge and can then be fine-tuned using application-specific data (e.g., satellite imagery with conflict fatality labels). This process allows leveraging CNNs for applications that would not have sufficient data to properly train a newly initialized CNN.
The default ResNet architecture is modified to work with ten bands of imagery (compared to the three bands of red, green, and blue found in common images, including those in ImageNet). Following examples in the literature, the additional bands are initialized using an average of the weights for each of the red, green, and blue bands [20,35]. The final layer of the CNN responsible for classification is then replaced in order to utilize the binary classes identifying whether a conflict event was fatal. Five sample data frames for training and validation are created for each time period and temporal window listed in Table 2 to mitigate the chance that results are influenced by the random generation of an individual sample data frame. For each sample data frame generated, 16 CNNs are trained during a grid search using combinations of the specific ResNet architecture, the learning rate, gamma, and step size as described in Table 3.
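A minimal PyTorch sketch of these modifications is shown below, assuming the ten bands are stacked with the red, green, and blue bands in the first three positions and using ResNet-50 (one of the architectures included in the grid search) via torchvision; this is illustrative rather than the exact code used in this work.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(n_bands=10, n_classes=2):
    # Start from a ResNet-50 pre-trained on ImageNet (transfer learning).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

    # Replace the first convolution so it accepts ten input bands instead of three.
    old = model.conv1
    new = nn.Conv2d(n_bands, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight                    # assumes first 3 bands align with RGB
        rgb_mean = old.weight.mean(dim=1, keepdim=True)   # average of the pre-trained RGB filters
        new.weight[:, 3:] = rgb_mean.repeat(1, n_bands - 3, 1, 1)
    model.conv1 = new

    # Replace the final fully connected layer with a binary (fatal / non-fatal) classifier.
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model
```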
The values tested for each of the hyperparameters, seen in Table 3, are based on results from earlier work by Goodman et al., which showed minimal variation in performance across hyperparameters [35]. Despite the lack of hyperparameter impact in previous work, we include a range of hyperparameter values to assess potential variation in optimal hyperparameters across the time periods and temporal windows tested in this paper. For all tests, stochastic gradient descent (SGD) optimization is used over 60 epochs with a batch size of 64. Across all time periods, temporal windows, and hyperparameter combinations, a total of 720 CNNs are trained.
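The fine-tuning loop and grid search could be sketched as follows; `build_model` is the factory from the previous snippet, `train_loader` is assumed to yield batches of (10-band image, label) pairs, and the grid values are illustrative stand-ins for those listed in Table 3 rather than the exact values used.

```python
import itertools
import torch
import torch.nn as nn

# Illustrative hyperparameter grid (two values each for learning rate, gamma, and step size).
grid = {"lr": [1e-3, 1e-4], "gamma": [0.1, 0.25], "step_size": [5, 15]}

def train_one(model, train_loader, lr, gamma, step_size, epochs=60):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # Step-decay schedule: multiply the learning rate by gamma every step_size epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:   # images: (64, 10, 244, 244)
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model

# Train one CNN per hyperparameter combination.
for lr, gamma, step_size in itertools.product(*grid.values()):
    trained = train_one(build_model(), train_loader, lr, gamma, step_size)
```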

2.3. Validation

For each CNN, a set of validation data (15% of total data for each period, see Table 2) is withheld from the training data, and subsequently used to produce predictions. The result of each prediction is the probability of a fatality at the location of a conflict event.
A confusion matrix defining the relationship of all possible outcomes for a binary classifier is generated to assess the performance of the models. In the confusion matrix, actual and predicted values are either positive (a fatality occurred from a conflict event) or negative (no fatality occurred from a conflict event) as seen in Table 4.
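As a small illustration, the per-class rates of such a confusion matrix can be derived from the predicted probabilities at a chosen threshold; `y_true` and `y_prob` are assumed arrays of validation labels and predicted fatality probabilities.

```python
import numpy as np

def confusion_rates(y_true, y_prob, threshold=0.5):
    """Return (tp, tn, fp, fn) as rates normalized within each true class."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = ((y_pred == 1) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    tn = ((y_pred == 0) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    return tp, tn, 1 - tn, 1 - tp   # true positive, true negative, false positive, false negative rates
```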
The results from the confusion matrix can be used to produce Receiver Operating Characteristic (ROC) curves, which are a useful visualization to understand model performance [74]. An ROC curve plots the true positive rate on the y-axis and the false positive rate on the x-axis as the threshold for detecting positive cases based on probability is varied from 0 to 1. Using the ROC curve, we then generate the area under the ROC curve (AUC). In an ideal classifier, which has a true positive rate of 1.0 regardless of the false positive rate, the AUC would be 1.0. For a random or “no skill” classifier, the AUC would be 0.5. The value of the AUC for a model can be used to assess overall performance, as well as to identify a threshold value suitable for a specific application, and has been used in other applications of satellite imagery and CNNs within the development community [27]. Identifying suitable threshold values is of particular importance in conflict and security applications given the potential implications of false negative predictions.
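A sketch of the corresponding ROC curve and AUC computation with scikit-learn and matplotlib (again assuming `y_true` and `y_prob` as above):

```python
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
auc = roc_auc_score(y_true, y_prob)

plt.plot(fpr, tpr, label=f"AUC = {auc:.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="No-skill classifier (AUC = 0.5)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```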
To establish a baseline for comparing model accuracy, we generate a naive predictor of conflict fatality risk based only on proximity to the locations of fatal conflict events from the prior year. For each conflict window listed in Table 2 we evaluate the percentage of both fatal and non-fatal events correctly classified using this naive predictor. Four versions of the naive predictor are tested using proximity thresholds of 1 km, 5 km, 7 km, and 15 km. For example, when using the 5 km naive predictor for an event location in 2019, a fatal conflict event would be predicted if a fatal conflict event occurred within 5 km during 2018. Although the threshold distances used are not directly comparable to the imagery window size used in the CNN (approximately 7.3 × 7.3 km), the threshold distances were selected to assess naive performance across a broad range. Smaller proximity distances are more likely to result in better true negative rates (and worse true positive rates) while larger distances will likely result in better true positive rates (and worse true negative rates).
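The naive proximity predictor could be sketched as below; coordinates are assumed to have been projected to a metric coordinate system, and the nearest-neighbor query uses SciPy's KD-tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def naive_predict(event_xy, prior_fatal_xy, threshold_m=5000):
    """Predict fatal (1) if any fatal event from the prior year lies within threshold_m meters."""
    tree = cKDTree(prior_fatal_xy)
    nearest_dist, _ = tree.query(event_xy, k=1)
    return (nearest_dist <= threshold_m).astype(int)

# Example: evaluate the 5 km baseline (xy_2019, xy_2018_fatal, labels_2019 are assumed arrays).
# preds = naive_predict(xy_2019, xy_2018_fatal, threshold_m=5000)
# true_positive_rate = (preds[labels_2019 == 1] == 1).mean()
# true_negative_rate = (preds[labels_2019 == 0] == 0).mean()
```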

3. Results

Using the methods described in Section 2, 720 CNNs were trained to predict the risk of conflict fatality across three distinct time periods using yearly and six-month windows of imagery and conflict data (Table 2). The models also explored the impact of 16 different CNN hyperparameter combinations (Table 3). To ensure model performance is not dependent on a particular random sample generation, testing included repeating the sample generation process five times. A baseline accuracy comparison was produced using the maximum accuracy of the naive predictor described in Section 2, applied across all time periods and temporal windows. The accuracy results from all CNN models, along with the baseline, can be seen in Figure 2.
Average accuracy across time periods and temporal windows was approximately 70%, with models using yearly windows of data slightly outperforming models using six-month windows overall. Model performance in the 2018–2019 time period was lower than in the 2014–2015 and 2016–2017 periods. Despite the significant decline in performance of the 2019 models, the minimum accuracy of all CNN models exceeded the 56% baseline accuracy of the naive predictor.
The accuracy of the naive predictor when only classifying fatal events (true positive rate) (Table 5) is only slightly below the CNN models’ average accuracy. However, the naive predictor’s accuracy when classifying non-fatal events (true negative rate) falls significantly compared to the CNN (indicating a high false positive rate) and results in worse overall performance. The baseline comparison with the naive predictor provides some confirmation that the CNNs are detecting meaningful features from imagery that extend beyond solely the geographic locations of past events or exact features from training data, which allows the models to identify conflict in areas outside of those on which they were trained. The impact of a significant geographic shift in the location of training data on performance is discussed further in Section 4.
Across all tests, performance did not vary noticeably with hyperparameters, and no hyperparameter combination was notably better suited to specific time periods or data windows. These results build upon findings from earlier work that fine-tuning hyperparameters within a generally acceptable range does not have a significant impact on model performance [35]. A notable additional finding is that a broader range between minimum and maximum accuracy was seen across tests when using the six-month windows of data compared to yearly windows.
ROC curves were produced for the best-performing hyperparameter combination across all tests. The optimal hyperparameter combination consisted of the ResNet50 architecture, a learning rate of 0.0001, a gamma of 0.25, and a step size of 15. These hyperparameters produced a mean accuracy of 72% across all tests, with a maximum accuracy of 82% using six-month windows of data and a maximum of 78% using yearly windows. ROC curves and an accompanying AUC metric were produced to gauge the ability of the models to effectively predict true positives without generating excessive false positives. An example of the ROC curves generated is seen in Figure 3, which shows the results of using 2014 imagery from January–December to predict the 2015 (January–December) probability of a fatality from conflict. In this example, five curves are shown, one for each of the five randomized permutations of training and validation data. All ROC curves are included in Appendix A.3. The average AUC for each temporal pair is shown in Table 6.
AUC values when using the yearly data windows were consistently higher than when using the six-month windows. For both the 2015 and 2017 models, the AUC dropped by 2.5% when moving from yearly to six-month windows. For the 2019 model, the AUC when using a six-month data window dropped by nearly 5%. Consistent with the accuracy results, there was a significant drop in AUC in 2019 compared to the 2015 and 2017 models: the 2019 model AUC values were approximately 10% lower than the corresponding 2015 and 2017 models’ AUC.
Overall, these results indicate that the CNNs are able to leverage sufficient information from satellite imagery to consistently outperform a baseline naive prediction at multiple time periods and temporal windows. Using six-month windows of data can achieve similar performance to using yearly windows of data, but both may be susceptible to degraded performance due to changes in the underlying data based on ground conditions. The shifts in performance seen across the time periods explored during tests—primarily poorer performance in 2019—along with other limitations and potential future research directions are discussed in the following section.

4. Discussion

This paper presented evidence that CNN-based methods are moderately effective at identifying features in satellite imagery associated with the risk of fatalities from conflict events across time periods. These findings contribute to a broader set of literature using CNNs and satellite imagery to predict development indicators at a single year or coarser time periods [20,24,26,32,35].
As a byproduct of testing performance over multiple time periods, this work was able to observe the models’ behavior when the geographic distribution of samples shifted over time. The decrease in performance seen in the models using 2018/19 data—compared to models using data from 2014/15 or 2016/17—was one of the most pronounced changes found throughout testing. As shown in Figure 4, this change in performance was associated with a notable shift in the geographic distribution of conflict locations used for training (an increase in conflict events in the northwest of the country). This reinforces the importance of testing out-of-sample predictions, across both space and time, presented in the literature [26], and suggests a need for additional training and validation data to improve the robustness of the models.
The amount and spatial coverage of samples used for training and validation have been handled in multiple ways in the literature exploring CNN-based approaches to predict development indicators. Previous work producing estimates of poverty using CNNs and satellite imagery incorporated data across multiple geographic extents (countries) and showed varying performance based on which areas were used to train models, as well as which country was being evaluated [20]. Other work has advised caution when extending predictions across geographic areas and also suggests that temporal variation in imagery may impact performance [26]. Given the potential impact of shifting the geographic distribution of locations over time on model performance, applications of these methods should consider the geographic distribution of training and validation data as well as the distribution of locations where the model will be used to generate predictions. The lack of considerable variation in performance across hyperparameters seen during optimization may be one indicator of the generalizability of the model and could facilitate training models on broader geographic areas.
The amount of data available also impacts the approaches that are used to train models. Transfer learning and pre-trained CNNs can be leveraged to reduce the dependence on large training data sets, yet there is still a need for enough data to sufficiently fine-tune pre-trained networks and represent all geographic regions where the model may be used for predictions. For example, insufficient training data may be one of the reasons for a decrease in performance when moving from using a full-year window of data to a six-month window. Across tests, there was a slightly larger range between minimum and maximum accuracy when using six-month windows which may be indicative of the instability of model performance when relying on smaller training sets. The potential ability and value of narrower temporal windows to reflect seasonal trends (by aggregating less underlying data and therefore containing less noise from seasonal variation) may have either been negated by the associated drop in samples, not realized due to seasonal misalignment, or simply not been meaningful when predicting the fatality of conflict events.
One possible solution to help address limitations around sample size is to train a model using a combination of samples across time periods. In addition to increasing geographic coverage and sample size, combining data across time periods would enable exploring the model’s ability to predict conflict fatality beyond the temporal range on which they were trained. Related to this, the specific seasonality of sub-annual time periods used for training could impact model performance and would require further research to evaluate. The extensibility of models to time periods beyond those on which they were trained would be necessary to support many real-world applications (i.e., using historical data to predict future conflict fatality risk). While the current approach could be used to gain general insight into extensibility, the lack of repeat observations across time periods does limit our ability to gauge the performance of a time series of estimates (i.e., using true panel data). Similarly, the current work provides little explanatory power as to the drivers of conflict/fatalities, yet may be able to contribute to broader literature exploring the nature of conflict through further research into the underlying features in satellite imagery associated with conflict fatalities.
Another aspect of real-world applications not explored in this paper is the spatial scale of predictions. This paper focuses on predictions and validation in terms of known conflict events at specific locations. Future estimates would likely be made in the form of either a continuous surface or a grid across an entire region. In these scenarios, the resolution of the surface or distance between grid points used would play a significant role in the accuracy of predictions. Given possible concerns with the accuracy of the underlying georeferencing of conflict events datasets [75], and that a single coordinate is often a simplification of the extent of a conflict event, it may also be practical to aggregate predictions to relevant local units such as administrative zones.
As methods for predicting conflict and fatalities progress, applications may aim to generate predictions for narrower windows of time (e.g., three-month windows) or further examine the existence of seasonal trends. With the presented methodology, narrower windows may raise issues with (a) producing cloud-free aggregates of satellite imagery without missing data and (b) reduced performance due to limitations in the temporal precision of recorded conflict events. Despite the sixteen-day revisit time of Landsat 8 imagery, not all images result in usable scenes due to cloud cover or other quality issues. As a result, it is necessary to have a sufficient temporal window over which to aggregate scenes and achieve full coverage. A possible solution to this problem would be to aggregate a longer period of imagery than the prediction interval. For example, a model could be trained to use a six-month window of imagery to produce predictions for a three-month window of conflict fatality. Limitations based on the temporal precision of the underlying conflict data used for training would be more difficult to overcome. Potential solutions include identifying new sources of data with improved temporal precision or working with data providers to improve data collection. In addition, while the arbitrary six-month windows did not benefit from seasonal trends in imagery, they also showed only minimal decreases in performance, and it is possible that more specific windows (e.g., harvest seasons, the dry season) could produce better results.
In addition to the areas of future research discussed above, multiple opportunities exist for improving the existing methodology based on progress within the broader machine learning community. Examples of pathways towards improving model performance include the use of ensemble models [76,77,78], 3D convolutional layers [79,80], and other CNN architectures such as ResNeXt [81]. Applying transformations such as rotating or flipping the input satellite image is another viable means of increasing exposure to data variations when using limited training data [82,83]. These methods have been used widely and may offer an improvement over the basic sampling scheme presented in this paper [84].
Similarly, opportunities may exist to leverage alternative sources of data for imagery and conflict records. Finer resolution imagery products such as Sentinel 2 (European Space Agency) or PlanetScope (Planet Labs) may support detecting features associated with conflict that cannot be detected from Landsat 8 imagery. In addition, improvements to conflict datasets to produce more precise location records (or utilizing only subsets of datasets such as ACLED that were recorded at precise levels) may enable more effective training of deep learning models.

5. Conclusions

The work presented in this paper sought to address two research questions: (RQ1) Can CNNs detect features from moderate resolution satellite imagery associated with conflict fatality at multiple discrete points in time? And if so, (RQ2) how does model performance vary based on the temporal window size used for training? Using Landsat 8 imagery and ACLED conflict data, this paper showed that models trained using six-month and yearly windows of data were able to achieve comparable performance at multiple points in time. While predictive ability varied, overall accuracy and area under the ROC curve both reached approximately 75% on average and outperformed a naive baseline predictor. These results advance existing work in the field by providing evidence that features in moderate-resolution satellite imagery that are associated with conflict fatalities can be detected using CNNs at multiple points in time, as well as when using varying windows of data. Combined, these findings provide a pathway forward for producing future estimates of the risk of conflict fatalities that may be useful in a variety of development and security applications.

Author Contributions

S.G. conceptualized the project, with input from D.R.; A.B. and S.G. wrote code to prepare data and implement machine learning methods, analysis, and other components. D.R. assisted in algorithm and methodology development. This piece was written by S.G. and D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the USAID Global Development Lab under Grant AID-OAA-A-12-00096.

Data Availability Statement

All data necessary to reproduce the work in this paper—specifically Landsat 8 imagery and ACLED conflict data—are from publicly available sources and can be acquired as detailed in the paper, but are either too large to redistribute or not permitted to be redistributed. Code used for model development can be accessed on GitHub (https://github.com/aiddata/geo-cnn/tree/nigeria_npe_prediction, accessed on 1 July 2024).

Acknowledgments

The authors acknowledge William & Mary Research Computing for providing computational resources and/or technical support that have contributed to the results reported within this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Additional Metrics

This appendix consists of full confusion matrices (Table A1) and associated performance metrics (Table A2) at varying thresholds for the models referenced in Section 3. For Table A1 and Table A2, the version column references each of the five versions of training/validation data that were randomly generated to ensure model performance is not dependent on a particular random sample.

Appendix A.1. Confusion Matrix

Table A1. Full confusion matrices for all models generated.
Data | Version | Thresh | tp | tn | fp | fn
2016 imagery, 2017 conflict [July–December]v10010.30.8790.4840.5160.121
2016 imagery, 2017 conflict [July–December]v10010.350.8790.5160.4840.121
2016 imagery, 2017 conflict [July–December]v10010.40.8790.5480.4520.121
2016 imagery, 2017 conflict [July–December]v10010.450.8620.6290.3710.138
2016 imagery, 2017 conflict [July–December]v10010.50.8450.6450.3550.155
2016 imagery, 2017 conflict [July–December]v10020.30.8970.5590.4410.103
2016 imagery, 2017 conflict [July–December]v10020.350.8450.6270.3730.155
2016 imagery, 2017 conflict [July–December]v10020.40.8450.6440.3560.155
2016 imagery, 2017 conflict [July–December]v10020.450.8450.6610.3390.155
2016 imagery, 2017 conflict [July–December]v10020.50.810.6950.3050.19
2016 imagery, 2017 conflict [July–December]v10030.30.9470.3460.6540.053
2016 imagery, 2017 conflict [July–December]v10030.350.8950.4620.5380.105
2016 imagery, 2017 conflict [July–December]v10030.40.8950.5190.4810.105
2016 imagery, 2017 conflict [July–December]v10030.450.8770.5960.4040.123
2016 imagery, 2017 conflict [July–December]v10030.50.8420.6540.3460.158
2016 imagery, 2017 conflict [July–December]v10040.30.9140.5820.4180.086
2016 imagery, 2017 conflict [July–December]v10040.350.8970.7090.2910.103
2016 imagery, 2017 conflict [July–December]v10040.40.8790.7450.2550.121
2016 imagery, 2017 conflict [July–December]v10040.450.7930.7640.2360.207
2016 imagery, 2017 conflict [July–December]v10040.50.7070.8550.1450.293
2016 imagery, 2017 conflict [July–December]v1000.30.8770.5170.4830.123
2016 imagery, 2017 conflict [July–December]v1000.350.8250.60.40.175
2016 imagery, 2017 conflict [July–December]v1000.40.8250.6170.3830.175
2016 imagery, 2017 conflict [July–December]v1000.450.8070.6170.3830.193
2016 imagery, 2017 conflict [July–December]v1000.50.8070.6670.3330.193
2014 imagery, 2015 conflict [July–December]v10010.30.950.5750.4250.05
2014 imagery, 2015 conflict [July–December]v10010.350.9250.6750.3250.075
2014 imagery, 2015 conflict [July–December]v10010.40.9250.70.30.075
2014 imagery, 2015 conflict [July–December]v10010.450.8750.70.30.125
2014 imagery, 2015 conflict [July–December]v10010.50.8750.70.30.125
2014 imagery, 2015 conflict [July–December]v10020.30.8210.6320.3680.179
2014 imagery, 2015 conflict [July–December]v10020.350.7440.6580.3420.256
2014 imagery, 2015 conflict [July–December]v10020.40.6920.6580.3420.308
2014 imagery, 2015 conflict [July–December]v10020.450.6920.7110.2890.308
2014 imagery, 2015 conflict [July–December]v10020.50.6670.7890.2110.333
2014 imagery, 2015 conflict [July–December]v10030.30.7750.6150.3850.225
2014 imagery, 2015 conflict [July–December]v10030.350.7250.7440.2560.275
2014 imagery, 2015 conflict [July–December]v10030.40.70.7950.2050.3
2014 imagery, 2015 conflict [July–December]v10030.450.6750.7950.2050.325
2014 imagery, 2015 conflict [July–December]v10030.50.650.8210.1790.35
2014 imagery, 2015 conflict [July–December]v10040.30.8250.610.390.175
2014 imagery, 2015 conflict [July–December]v10040.350.8250.6340.3660.175
2014 imagery, 2015 conflict [July–December]v10040.40.7750.6590.3410.225
2014 imagery, 2015 conflict [July–December]v10040.450.70.6830.3170.3
2014 imagery, 2015 conflict [July–December]v10040.50.70.6830.3170.3
2014 imagery, 2015 conflict [July–December]v1000.30.90.5760.4240.1
2014 imagery, 2015 conflict [July–December]v1000.350.90.6970.3030.1
2014 imagery, 2015 conflict [July–December]v1000.40.8750.8180.1820.125
2014 imagery, 2015 conflict [July–December]v1000.450.850.8480.1520.15
2014 imagery, 2015 conflict [July–December]v1000.50.850.8480.1520.15
2014 imagery, 2015 conflict [January–December]v10110.30.9510.4560.5440.049
2014 imagery, 2015 conflict [January–December]v10110.350.9020.5340.4660.098
2014 imagery, 2015 conflict [January–December]v10110.40.8530.660.340.147
2014 imagery, 2015 conflict [January–December]v10110.450.8240.7280.2720.176
2014 imagery, 2015 conflict [January–December]v10110.50.7750.7770.2230.225
2014 imagery, 2015 conflict [January–December]v10120.30.9120.5090.4910.088
2014 imagery, 2015 conflict [January–December]v10120.350.8920.5570.4430.108
2014 imagery, 2015 conflict [January–December]v10120.40.8430.6230.3770.157
2014 imagery, 2015 conflict [January–December]v10120.450.7650.6510.3490.235
2014 imagery, 2015 conflict [January–December]v10120.50.7250.7360.2640.275
2014 imagery, 2015 conflict [January–December]v10130.30.9310.4850.5150.069
2014 imagery, 2015 conflict [January–December]v10130.350.8920.5260.4740.108
2014 imagery, 2015 conflict [January–December]v10130.40.8530.6190.3810.147
2014 imagery, 2015 conflict [January–December]v10130.450.8040.7320.2680.196
2014 imagery, 2015 conflict [January–December]v10130.50.7650.8140.1860.235
2014 imagery, 2015 conflict [January–December]v10140.30.9710.4690.5310.029
2014 imagery, 2015 conflict [January–December]v10140.350.9510.5310.4690.049
2014 imagery, 2015 conflict [January–December]v10140.40.9310.5820.4180.069
2014 imagery, 2015 conflict [January–December]v10140.450.8630.6430.3570.137
2014 imagery, 2015 conflict [January–December]v10140.50.8140.7240.2760.186
2014 imagery, 2015 conflict [January–December]v1010.30.9220.5290.4710.078
2014 imagery, 2015 conflict [January–December]v1010.350.9120.5580.4420.088
2014 imagery, 2015 conflict [January–December]v1010.40.8730.5870.4130.127
2014 imagery, 2015 conflict [January–December]v1010.450.8630.6250.3750.137
2014 imagery, 2015 conflict [January–December]v1010.50.7550.6920.3080.245
2014 imagery, 2015 conflict [January–June]v10010.30.7420.7250.2750.258
2014 imagery, 2015 conflict [January–June]v10010.350.7260.7390.2610.274
2014 imagery, 2015 conflict [January–June]v10010.40.7260.7540.2460.274
2014 imagery, 2015 conflict [January–June]v10010.450.7260.7680.2320.274
2014 imagery, 2015 conflict [January–June]v10010.50.7260.7830.2170.274
2014 imagery, 2015 conflict [January–June]v10020.30.790.5970.4030.21
2014 imagery, 2015 conflict [January–June]v10020.350.790.6450.3550.21
2014 imagery, 2015 conflict [January–June]v10020.40.790.6610.3390.21
2014 imagery, 2015 conflict [January–June]v10020.450.7740.7260.2740.226
2014 imagery, 2015 conflict [January–June]v10020.50.7580.7420.2580.242
2014 imagery, 2015 conflict [January–June]v10030.30.810.6230.3770.19
2014 imagery, 2015 conflict [January–June]v10030.350.7780.6230.3770.222
2014 imagery, 2015 conflict [January–June]v10030.40.7620.6720.3280.238
2014 imagery, 2015 conflict [January–June]v10030.450.7460.6890.3110.254
2014 imagery, 2015 conflict [January–June]v10030.50.730.6890.3110.27
2014 imagery, 2015 conflict [January–June]v10040.30.6450.7160.2840.355
2014 imagery, 2015 conflict [January–June]v10040.350.6290.7310.2690.371
2014 imagery, 2015 conflict [January–June]v10040.40.5970.7910.2090.403
2014 imagery, 2015 conflict [January–June]v10040.450.5810.8060.1940.419
2014 imagery, 2015 conflict [January–June]v10040.50.5650.8660.1340.435
2014 imagery, 2015 conflict [January–June]v1000.30.8870.4640.5360.113
2014 imagery, 2015 conflict [January–June]v1000.350.8710.50.50.129
2014 imagery, 2015 conflict [January–June]v1000.40.8550.5710.4290.145
2014 imagery, 2015 conflict [January–June]v1000.450.8060.6250.3750.194
2014 imagery, 2015 conflict [January–June]v1000.50.7740.750.250.226
2018 imagery, 2019 conflict [January–June]v10010.30.7080.50.50.292
2018 imagery, 2019 conflict [January–June]v10010.350.6520.5560.4440.348
2018 imagery, 2019 conflict [January–June]v10010.40.6180.6110.3890.382
2018 imagery, 2019 conflict [January–June]v10010.450.5620.6560.3440.438
2018 imagery, 2019 conflict [January–June]v10010.50.5510.7220.2780.449
2018 imagery, 2019 conflict [January–June]v10020.30.7980.5510.4490.202
2018 imagery, 2019 conflict [January–June]v10020.350.7640.5510.4490.236
2018 imagery, 2019 conflict [January–June]v10020.40.7190.5840.4160.281
2018 imagery, 2019 conflict [January–June]v10020.450.6850.6180.3820.315
2018 imagery, 2019 conflict [January–June]v10020.50.6630.6970.3030.337
2018 imagery, 2019 conflict [January–June]v10030.30.8430.3710.6290.157
2018 imagery, 2019 conflict [January–June]v10030.350.8310.4160.5840.169
2018 imagery, 2019 conflict [January–June]v10030.40.7870.5170.4830.213
2018 imagery, 2019 conflict [January–June]v10030.450.7640.5620.4380.236
2018 imagery, 2019 conflict [January–June]v10030.50.7420.5730.4270.258
2018 imagery, 2019 conflict [January–June]v10040.30.8670.3330.6670.133
2018 imagery, 2019 conflict [January–June]v10040.350.8330.3980.6020.167
2018 imagery, 2019 conflict [January–June]v10040.40.8110.430.570.189
2018 imagery, 2019 conflict [January–June]v10040.450.7440.4840.5160.256
2018 imagery, 2019 conflict [January–June]v10040.50.7110.5480.4520.289
2018 imagery, 2019 conflict [January–June]v1000.30.8760.4430.5570.124
2018 imagery, 2019 conflict [January–June]v1000.350.8310.5450.4550.169
2018 imagery, 2019 conflict [January–June]v1000.40.8090.580.420.191
2018 imagery, 2019 conflict [January–June]v1000.450.7750.6020.3980.225
2018 imagery, 2019 conflict [January–June]v1000.50.7640.670.330.236
2018 imagery, 2019 conflict [January–December]v10110.30.7430.5030.4970.257
2018 imagery, 2019 conflict [January–December]v10110.350.6940.5310.4690.306
2018 imagery, 2019 conflict [January–December]v10110.40.6670.5590.4410.333
2018 imagery, 2019 conflict [January–December]v10110.450.6250.6140.3860.375
2018 imagery, 2019 conflict [January–December]v10110.50.6110.6760.3240.389
2018 imagery, 2019 conflict [January–December]v10120.30.9720.2390.7610.028
2018 imagery, 2019 conflict [January–December]v10120.350.9510.3310.6690.049
2018 imagery, 2019 conflict [January–December]v10120.40.9170.4860.5140.083
2018 imagery, 2019 conflict [January–December]v10120.450.8820.6060.3940.118
2018 imagery, 2019 conflict [January–December]v10120.50.7290.7110.2890.271
2018 imagery, 2019 conflict [January–December]v10130.30.9170.3290.6710.083
2018 imagery, 2019 conflict [January–December]v10130.350.8960.40.60.104
2018 imagery, 2019 conflict [January–December]v10130.40.8330.4650.5350.167
2018 imagery, 2019 conflict [January–December]v10130.450.7710.5480.4520.229
2018 imagery, 2019 conflict [January–December]v10130.50.7220.6320.3680.278
2018 imagery, 2019 conflict [January–December]v10140.30.8550.3850.6150.145
2018 imagery, 2019 conflict [January–December]v10140.350.8140.4130.5870.186
2018 imagery, 2019 conflict [January–December]v10140.40.7790.4550.5450.221
2018 imagery, 2019 conflict [January–December]v10140.450.7030.5660.4340.297
2018 imagery, 2019 conflict [January–December]v10140.50.6340.6360.3640.366
2018 imagery, 2019 conflict [January–December]v1010.30.9510.3380.6620.049
2018 imagery, 2019 conflict [January–December]v1010.350.9240.4410.5590.076
2018 imagery, 2019 conflict [January–December]v1010.40.8820.5310.4690.118
2018 imagery, 2019 conflict [January–December]v1010.450.840.5930.4070.16
2018 imagery, 2019 conflict [January–December]v1010.50.7570.6410.3590.243
2016 imagery, 2017 conflict [January–December]v10110.30.9570.2910.7090.043
2016 imagery, 2017 conflict [January–December]v10110.350.940.3250.6750.06
2016 imagery, 2017 conflict [January–December]v10110.40.9150.410.590.085
2016 imagery, 2017 conflict [January–December]v10110.450.8720.5210.4790.128
2016 imagery, 2017 conflict [January–December]v10110.50.8210.6410.3590.179
2016 imagery, 2017 conflict [January–December]v10120.30.8470.6480.3520.153
2016 imagery, 2017 conflict [January–December]v10120.350.7880.6970.3030.212
2016 imagery, 2017 conflict [January–December]v10120.40.780.7380.2620.22
2016 imagery, 2017 conflict [January–December]v10120.450.7630.7790.2210.237
2016 imagery, 2017 conflict [January–December]v10120.50.7370.8030.1970.263
2016 imagery, 2017 conflict [January–December]v10130.30.8290.6450.3550.171
2016 imagery, 2017 conflict [January–December]v10130.350.8030.6850.3150.197
2016 imagery, 2017 conflict [January–December]v10130.40.7950.7340.2660.205
2016 imagery, 2017 conflict [January–December]v10130.450.7690.7740.2260.231
2016 imagery, 2017 conflict [January–December]v10130.50.7690.790.210.231
2016 imagery, 2017 conflict [January–December]v10140.30.8030.6530.3470.197
2016 imagery, 2017 conflict [January–December]v10140.350.8030.6690.3310.197
2016 imagery, 2017 conflict [January–December]v10140.40.7780.7020.2980.222
2016 imagery, 2017 conflict [January–December]v10140.450.7440.7270.2730.256
2016 imagery, 2017 conflict [January–December]v10140.50.7180.7850.2150.282
2016 imagery, 2017 conflict [January–December]v1010.30.8460.6220.3780.154
2016 imagery, 2017 conflict [January–December]v1010.350.8210.630.370.179
2016 imagery, 2017 conflict [January–December]v1010.40.8210.6470.3530.179
2016 imagery, 2017 conflict [January–December]v1010.450.7950.6810.3190.205
2016 imagery, 2017 conflict [January–December]v1010.50.7860.7390.2610.214
2018 imagery, 2019 conflict [July–December]v10010.30.6550.5670.4330.345
2018 imagery, 2019 conflict [July–December]v10010.350.6550.6330.3670.345
2018 imagery, 2019 conflict [July–December]v10010.40.6360.650.350.364
2018 imagery, 2019 conflict [July–December]v10010.450.60.70.30.4
2018 imagery, 2019 conflict [July–December]v10010.50.5640.750.250.436
2018 imagery, 2019 conflict [July–December]v10020.30.7040.490.510.296
2018 imagery, 2019 conflict [July–December]v10020.350.6850.510.490.315
2018 imagery, 2019 conflict [July–December]v10020.40.630.5690.4310.37
2018 imagery, 2019 conflict [July–December]v10020.450.630.6270.3730.37
2018 imagery, 2019 conflict [July–December]v10020.50.5930.6860.3140.407
2018 imagery, 2019 conflict [July–December]v10030.30.7820.5830.4170.218
2018 imagery, 2019 conflict [July–December]v10030.350.7640.6250.3750.236
2018 imagery, 2019 conflict [July–December]v10030.40.7270.6250.3750.273
2018 imagery, 2019 conflict [July–December]v10030.450.6360.6670.3330.364
2018 imagery, 2019 conflict [July–December]v10030.50.60.7710.2290.4
2018 imagery, 2019 conflict [July–December]v10040.30.80.3280.6720.2
2018 imagery, 2019 conflict [July–December]v10040.350.7820.4260.5740.218
2018 imagery, 2019 conflict [July–December]v10040.40.7450.4920.5080.255
2018 imagery, 2019 conflict [July–December]v10040.450.6910.6230.3770.309
2018 imagery, 2019 conflict [July–December]v10040.50.5640.7210.2790.436
2018 imagery, 2019 conflict [July–December]v1000.30.7090.5370.4630.291
2018 imagery, 2019 conflict [July–December]v1000.350.6730.5930.4070.327
2018 imagery, 2019 conflict [July–December]v1000.40.6550.6110.3890.345
2018 imagery, 2019 conflict [July–December]v1000.450.6360.630.370.364
2018 imagery, 2019 conflict [July–December]v1000.50.5640.7220.2780.436
2016 imagery, 2017 conflict [January–June]v10010.30.750.5080.4920.25
2016 imagery, 2017 conflict [January–June]v10010.350.7330.5760.4240.267
2016 imagery, 2017 conflict [January–June]v10010.40.70.610.390.3
2016 imagery, 2017 conflict [January–June]v10010.450.70.6610.3390.3
2016 imagery, 2017 conflict [January–June]v10010.50.6830.7630.2370.317
2016 imagery, 2017 conflict [January–June]v10020.30.9330.4750.5250.067
2016 imagery, 2017 conflict [January–June]v10020.350.9330.4920.5080.067
2016 imagery, 2017 conflict [January–June]v10020.40.8830.590.410.117
2016 imagery, 2017 conflict [January–June]v10020.450.8830.6390.3610.117
2016 imagery, 2017 conflict [January–June]v10020.50.8830.6390.3610.117
2016 imagery, 2017 conflict [January–June]v10030.30.8330.5420.4580.167
2016 imagery, 2017 conflict [January–June]v10030.350.750.610.390.25
2016 imagery, 2017 conflict [January–June]v10030.40.7170.6780.3220.283
2016 imagery, 2017 conflict [January–June]v10030.450.7170.7630.2370.283
2016 imagery, 2017 conflict [January–June]v10030.50.7170.8140.1860.283
2016 imagery, 2017 conflict [January–June]v10040.30.850.4920.5080.15
2016 imagery, 2017 conflict [January–June]v10040.350.7830.5250.4750.217
2016 imagery, 2017 conflict [January–June]v10040.40.7670.5930.4070.233
2016 imagery, 2017 conflict [January–June]v10040.450.7670.610.390.233
2016 imagery, 2017 conflict [January–June]v10040.50.7670.6270.3730.233
2016 imagery, 2017 conflict [January–June]v1000.30.8330.5570.4430.167
2016 imagery, 2017 conflict [January–June]v1000.350.8170.590.410.183
2016 imagery, 2017 conflict [January–June]v1000.40.80.590.410.2
2016 imagery, 2017 conflict [January–June]v1000.450.7830.6890.3110.217
2016 imagery, 2017 conflict [January–June]v1000.50.7830.7050.2950.217

Appendix A.2. Performance Metrics

Table A2. Full performance metrics for all models generated.
Data | Version | Thresh | Accuracy | Precision | Recall | f1
2016 imagery, 2017 conflict [July–December]v10010.30.6750.6140.8790.723
2016 imagery, 2017 conflict [July–December]v10010.350.6920.630.8790.734
2016 imagery, 2017 conflict [July–December]v10010.40.7080.6460.8790.745
2016 imagery, 2017 conflict [July–December]v10010.450.7420.6850.8620.763
2016 imagery, 2017 conflict [July–December]v10010.50.7420.690.8450.76
2016 imagery, 2017 conflict [July–December]v10020.30.7260.6670.8970.765
2016 imagery, 2017 conflict [July–December]v10020.350.7350.690.8450.76
2016 imagery, 2017 conflict [July–December]v10020.40.7440.70.8450.766
2016 imagery, 2017 conflict [July–December]v10020.450.7520.710.8450.772
2016 imagery, 2017 conflict [July–December]v10020.50.7520.7230.810.764
2016 imagery, 2017 conflict [July–December]v10030.30.6610.6140.9470.745
2016 imagery, 2017 conflict [July–December]v10030.350.6880.6460.8950.75
2016 imagery, 2017 conflict [July–December]v10030.40.7160.6710.8950.767
2016 imagery, 2017 conflict [July–December]v10030.450.7430.7040.8770.781
2016 imagery, 2017 conflict [July–December]v10030.50.7520.7270.8420.78
2016 imagery, 2017 conflict [July–December]v10040.30.7520.6970.9140.791
2016 imagery, 2017 conflict [July–December]v10040.350.8050.7650.8970.825
2016 imagery, 2017 conflict [July–December]v10040.40.8140.7850.8790.829
2016 imagery, 2017 conflict [July–December]v10040.450.7790.780.7930.786
2016 imagery, 2017 conflict [July–December]v10040.50.7790.8370.7070.766
2016 imagery, 2017 conflict [July–December]v1000.30.6920.6330.8770.735
2016 imagery, 2017 conflict [July–December]v1000.350.7090.6620.8250.734
2016 imagery, 2017 conflict [July–December]v1000.40.7180.6710.8250.74
2016 imagery, 2017 conflict [July–December]v1000.450.7090.6670.8070.73
2016 imagery, 2017 conflict [July–December]v1000.50.7350.6970.8070.748
2014 imagery, 2015 conflict [July–December]v10010.30.7620.6910.950.8
2014 imagery, 2015 conflict [July–December]v10010.350.80.740.9250.822
2014 imagery, 2015 conflict [July–December]v10010.40.8130.7550.9250.831
2014 imagery, 2015 conflict [July–December]v10010.450.7870.7450.8750.805
2014 imagery, 2015 conflict [July–December]v10010.50.7870.7450.8750.805
2014 imagery, 2015 conflict [July–December]v10020.30.7270.6960.8210.753
2014 imagery, 2015 conflict [July–December]v10020.350.7010.690.7440.716
2014 imagery, 2015 conflict [July–December]v10020.40.6750.6750.6920.684
2014 imagery, 2015 conflict [July–December]v10020.450.7010.7110.6920.701
2014 imagery, 2015 conflict [July–December]v10020.50.7270.7650.6670.712
2014 imagery, 2015 conflict [July–December]v10030.30.6960.6740.7750.721
2014 imagery, 2015 conflict [July–December]v10030.350.7340.7440.7250.734
2014 imagery, 2015 conflict [July–December]v10030.40.7470.7780.70.737
2014 imagery, 2015 conflict [July–December]v10030.450.7340.7710.6750.72
2014 imagery, 2015 conflict [July–December]v10030.50.7340.7880.650.712
2014 imagery, 2015 conflict [July–December]v10040.30.7160.6730.8250.742
2014 imagery, 2015 conflict [July–December]v10040.350.7280.6880.8250.75
2014 imagery, 2015 conflict [July–December]v10040.40.7160.6890.7750.729
2014 imagery, 2015 conflict [July–December]v10040.450.6910.6830.70.691
2014 imagery, 2015 conflict [July–December]v10040.50.6910.6830.70.691
2014 imagery, 2015 conflict [July–December]v1000.30.7530.720.90.8
2014 imagery, 2015 conflict [July–December]v1000.350.8080.7830.90.837
2014 imagery, 2015 conflict [July–December]v1000.40.8490.8540.8750.864
2014 imagery, 2015 conflict [July–December]v1000.450.8490.8720.850.861
2014 imagery, 2015 conflict [July–December]v1000.50.8490.8720.850.861
2014 imagery, 2015 conflict [January–December]v10110.30.7020.6340.9510.761
2014 imagery, 2015 conflict [January–December]v10110.350.7170.6570.9020.76
2014 imagery, 2015 conflict [January–December]v10110.40.7560.7130.8530.777
2014 imagery, 2015 conflict [January–December]v10110.450.7760.750.8240.785
2014 imagery, 2015 conflict [January–December]v10110.50.7760.7750.7750.775
2014 imagery, 2015 conflict [January–December]v10120.30.7070.6410.9120.753
2014 imagery, 2015 conflict [January–December]v10120.350.7210.6590.8920.758
2014 imagery, 2015 conflict [January–December]v10120.40.7310.6830.8430.754
2014 imagery, 2015 conflict [January–December]v10120.450.7070.6780.7650.719
2014 imagery, 2015 conflict [January–December]v10120.50.7310.7250.7250.725
2014 imagery, 2015 conflict [January–December]v10130.30.7140.6550.9310.769
2014 imagery, 2015 conflict [January–December]v10130.350.7140.6640.8920.762
2014 imagery, 2015 conflict [January–December]v10130.40.7390.7020.8530.77
2014 imagery, 2015 conflict [January–December]v10130.450.7690.7590.8040.781
2014 imagery, 2015 conflict [January–December]v10130.50.7890.8130.7650.788
2014 imagery, 2015 conflict [January–December]v10140.30.7250.6560.9710.783
2014 imagery, 2015 conflict [January–December]v10140.350.7450.6780.9510.792
2014 imagery, 2015 conflict [January–December]v10140.40.760.6990.9310.798
2014 imagery, 2015 conflict [January–December]v10140.450.7550.7150.8630.782
2014 imagery, 2015 conflict [January–December]v10140.50.770.7550.8140.783
2014 imagery, 2015 conflict [January–December]v1010.30.7230.6570.9220.767
2014 imagery, 2015 conflict [January–December]v1010.350.7330.6690.9120.772
2014 imagery, 2015 conflict [January–December]v1010.40.7280.6740.8730.761
2014 imagery, 2015 conflict [January–December]v1010.450.7430.6930.8630.769
2014 imagery, 2015 conflict [January–December]v1010.50.7230.7060.7550.73
2014 imagery, 2015 conflict [January–June]v10010.30.7330.7080.7420.724
2014 imagery, 2015 conflict [January–June]v10010.350.7330.7140.7260.72
2014 imagery, 2015 conflict [January–June]v10010.40.740.7260.7260.726
2014 imagery, 2015 conflict [January–June]v10010.450.7480.7380.7260.732
2014 imagery, 2015 conflict [January–June]v10010.50.7560.750.7260.738
2014 imagery, 2015 conflict [January–June]v10020.30.6940.6620.790.721
2014 imagery, 2015 conflict [January–June]v10020.350.7180.690.790.737
2014 imagery, 2015 conflict [January–June]v10020.40.7260.70.790.742
2014 imagery, 2015 conflict [January–June]v10020.450.750.7380.7740.756
2014 imagery, 2015 conflict [January–June]v10020.50.750.7460.7580.752
2014 imagery, 2015 conflict [January–June]v10030.30.7180.6890.810.745
2014 imagery, 2015 conflict [January–June]v10030.350.7020.6810.7780.726
2014 imagery, 2015 conflict [January–June]v10030.40.7180.7060.7620.733
2014 imagery, 2015 conflict [January–June]v10030.450.7180.7120.7460.729
2014 imagery, 2015 conflict [January–June]v10030.50.710.7080.730.719
2014 imagery, 2015 conflict [January–June]v10040.30.6820.6780.6450.661
2014 imagery, 2015 conflict [January–June]v10040.350.6820.6840.6290.655
2014 imagery, 2015 conflict [January–June]v10040.40.6980.7250.5970.655
2014 imagery, 2015 conflict [January–June]v10040.450.6980.7350.5810.649
2014 imagery, 2015 conflict [January–June]v10040.50.7210.7950.5650.66
2014 imagery, 2015 conflict [January–June]v1000.30.6860.6470.8870.748
2014 imagery, 2015 conflict [January–June]v1000.350.6950.6590.8710.75
2014 imagery, 2015 conflict [January–June]v1000.40.720.6880.8550.763
2014 imagery, 2015 conflict [January–June]v1000.450.720.7040.8060.752
2014 imagery, 2015 conflict [January–June]v1000.50.7630.7740.7740.774
2018 imagery, 2019 conflict [January–June]v10010.30.6030.5830.7080.64
2018 imagery, 2019 conflict [January–June]v10010.350.6030.5920.6520.62
2018 imagery, 2019 conflict [January–June]v10010.40.6150.6110.6180.615
2018 imagery, 2019 conflict [January–June]v10010.450.6090.6170.5620.588
2018 imagery, 2019 conflict [January–June]v10010.50.6370.6620.5510.601
2018 imagery, 2019 conflict [January–June]v10020.30.6740.640.7980.71
2018 imagery, 2019 conflict [January–June]v10020.350.6570.630.7640.69
2018 imagery, 2019 conflict [January–June]v10020.40.6520.6340.7190.674
2018 imagery, 2019 conflict [January–June]v10020.450.6520.6420.6850.663
2018 imagery, 2019 conflict [January–June]v10020.50.680.6860.6630.674
2018 imagery, 2019 conflict [January–June]v10030.30.6070.5730.8430.682
2018 imagery, 2019 conflict [January–June]v10030.350.6240.5870.8310.688
2018 imagery, 2019 conflict [January–June]v10030.40.6520.6190.7870.693
2018 imagery, 2019 conflict [January–June]v10030.450.6630.6360.7640.694
2018 imagery, 2019 conflict [January–June]v10030.50.6570.6350.7420.684
2018 imagery, 2019 conflict [January–June]v10040.30.5960.5570.8670.678
2018 imagery, 2019 conflict [January–June]v10040.350.6120.5730.8330.679
2018 imagery, 2019 conflict [January–June]v10040.40.6170.5790.8110.676
2018 imagery, 2019 conflict [January–June]v10040.450.6120.5830.7440.654
2018 imagery, 2019 conflict [January–June]v10040.50.6280.6040.7110.653
2018 imagery, 2019 conflict [January–June]v1000.30.6610.6140.8760.722
2018 imagery, 2019 conflict [January–June]v1000.350.6890.6490.8310.729
2018 imagery, 2019 conflict [January–June]v1000.40.6950.6610.8090.727
2018 imagery, 2019 conflict [January–June]v1000.450.6890.6630.7750.715
2018 imagery, 2019 conflict [January–June]v1000.50.7180.7010.7640.731
2018 imagery, 2019 conflict [January–December]v10110.30.6230.5980.7430.663
2018 imagery, 2019 conflict [January–December]v10110.350.6120.5950.6940.641
2018 imagery, 2019 conflict [January–December]v10110.40.6120.60.6670.632
2018 imagery, 2019 conflict [January–December]v10110.450.6190.6160.6250.621
2018 imagery, 2019 conflict [January–December]v10110.50.6440.6520.6110.631
2018 imagery, 2019 conflict [January–December]v10120.30.6080.5650.9720.714
2018 imagery, 2019 conflict [January–December]v10120.350.6430.5910.9510.729
2018 imagery, 2019 conflict [January–December]v10120.40.7030.6440.9170.756
2018 imagery, 2019 conflict [January–December]v10120.450.7450.6940.8820.777
2018 imagery, 2019 conflict [January–December]v10120.50.720.7190.7290.724
2018 imagery, 2019 conflict [January–December]v10130.30.6120.5590.9170.695
2018 imagery, 2019 conflict [January–December]v10130.350.6390.5810.8960.705
2018 imagery, 2019 conflict [January–December]v10130.40.6420.5910.8330.692
2018 imagery, 2019 conflict [January–December]v10130.450.6560.6130.7710.683
2018 imagery, 2019 conflict [January–December]v10130.50.6760.6460.7220.682
2018 imagery, 2019 conflict [January–December]v10140.30.6220.5850.8550.695
2018 imagery, 2019 conflict [January–December]v10140.350.6150.5840.8140.68
2018 imagery, 2019 conflict [January–December]v10140.40.6180.5920.7790.673
2018 imagery, 2019 conflict [January–December]v10140.450.6350.6220.7030.66
2018 imagery, 2019 conflict [January–December]v10140.50.6350.6390.6340.637
2018 imagery, 2019 conflict [January–December]v1010.30.6440.5880.9510.727
2018 imagery, 2019 conflict [January–December]v1010.350.6820.6210.9240.743
2018 imagery, 2019 conflict [January–December]v1010.40.7060.6510.8820.749
2018 imagery, 2019 conflict [January–December]v1010.450.7160.6720.840.747
2018 imagery, 2019 conflict [January–December]v1010.50.6990.6770.7570.715
2016 imagery, 2017 conflict [January–December]v10110.30.6240.5740.9570.718
2016 imagery, 2017 conflict [January–December]v10110.350.6320.5820.940.719
2016 imagery, 2017 conflict [January–December]v10110.40.6620.6080.9150.73
2016 imagery, 2017 conflict [January–December]v10110.450.6970.6460.8720.742
2016 imagery, 2017 conflict [January–December]v10110.50.7310.6960.8210.753
2016 imagery, 2017 conflict [January–December]v10120.30.7460.6990.8470.766
2016 imagery, 2017 conflict [January–December]v10120.350.7420.7150.7880.75
2016 imagery, 2017 conflict [January–December]v10120.40.7580.7420.780.76
2016 imagery, 2017 conflict [January–December]v10120.450.7710.7690.7630.766
2016 imagery, 2017 conflict [January–December]v10120.50.7710.7840.7370.76
2016 imagery, 2017 conflict [January–December]v10130.30.7340.6880.8290.752
2016 imagery, 2017 conflict [January–December]v10130.350.7430.7070.8030.752
2016 imagery, 2017 conflict [January–December]v10130.40.7630.7380.7950.765
2016 imagery, 2017 conflict [January–December]v10130.450.7720.7630.7690.766
2016 imagery, 2017 conflict [January–December]v10130.50.780.7760.7690.773
2016 imagery, 2017 conflict [January–December]v10140.30.7270.6910.8030.743
2016 imagery, 2017 conflict [January–December]v10140.350.7350.7010.8030.749
2016 imagery, 2017 conflict [January–December]v10140.40.7390.7170.7780.746
2016 imagery, 2017 conflict [January–December]v10140.450.7350.7250.7440.734
2016 imagery, 2017 conflict [January–December]v10140.50.7520.7640.7180.74
2016 imagery, 2017 conflict [January–December]v1010.30.7330.6880.8460.759
2016 imagery, 2017 conflict [January–December]v1010.350.7250.6860.8210.747
2016 imagery, 2017 conflict [January–December]v1010.40.7330.6960.8210.753
2016 imagery, 2017 conflict [January–December]v1010.450.7370.710.7950.75
2016 imagery, 2017 conflict [January–December]v1010.50.7630.7480.7860.767
2018 imagery, 2019 conflict [July–December]v10010.30.6090.5810.6550.615
2018 imagery, 2019 conflict [July–December]v10010.350.6430.6210.6550.637
2018 imagery, 2019 conflict [July–December]v10010.40.6430.6250.6360.631
2018 imagery, 2019 conflict [July–December]v10010.450.6520.6470.60.623
2018 imagery, 2019 conflict [July–December]v10010.50.6610.6740.5640.614
2018 imagery, 2019 conflict [July–December]v10020.30.60.5940.7040.644
2018 imagery, 2019 conflict [July–December]v10020.350.60.5970.6850.638
2018 imagery, 2019 conflict [July–December]v10020.40.60.6070.630.618
2018 imagery, 2019 conflict [July–December]v10020.450.6290.6420.630.636
2018 imagery, 2019 conflict [July–December]v10020.50.6380.6670.5930.627
2018 imagery, 2019 conflict [July–December]v10030.30.6890.6830.7820.729
2018 imagery, 2019 conflict [July–December]v10030.350.6990.70.7640.73
2018 imagery, 2019 conflict [July–December]v10030.40.680.690.7270.708
2018 imagery, 2019 conflict [July–December]v10030.450.650.6860.6360.66
2018 imagery, 2019 conflict [July–December]v10030.50.680.750.60.667
2018 imagery, 2019 conflict [July–December]v10040.30.5520.5180.80.629
2018 imagery, 2019 conflict [July–December]v10040.350.5950.5510.7820.647
2018 imagery, 2019 conflict [July–December]v10040.40.6120.5690.7450.646
2018 imagery, 2019 conflict [July–December]v10040.450.6550.6230.6910.655
2018 imagery, 2019 conflict [July–December]v10040.50.6470.6460.5640.602
2018 imagery, 2019 conflict [July–December]v1000.30.6240.6090.7090.655
2018 imagery, 2019 conflict [July–December]v1000.350.6330.6270.6730.649
2018 imagery, 2019 conflict [July–December]v1000.40.6330.6320.6550.643
2018 imagery, 2019 conflict [July–December]v1000.450.6330.6360.6360.636
2018 imagery, 2019 conflict [July–December]v1000.50.6420.6740.5640.614
2016 imagery, 2017 conflict [January–June]v10010.30.630.6080.750.672
2016 imagery, 2017 conflict [January–June]v10010.350.6550.6380.7330.682
2016 imagery, 2017 conflict [January–June]v10010.40.6550.6460.70.672
2016 imagery, 2017 conflict [January–June]v10010.450.6810.6770.70.689
2016 imagery, 2017 conflict [January–June]v10010.50.7230.7450.6830.713
2016 imagery, 2017 conflict [January–June]v10020.30.7020.6360.9330.757
2016 imagery, 2017 conflict [January–June]v10020.350.7110.6440.9330.762
2016 imagery, 2017 conflict [January–June]v10020.40.7360.6790.8830.768
2016 imagery, 2017 conflict [January–June]v10020.450.760.7070.8830.785
2016 imagery, 2017 conflict [January–June]v10020.50.760.7070.8830.785
2016 imagery, 2017 conflict [January–June]v10030.30.6890.6490.8330.73
2016 imagery, 2017 conflict [January–June]v10030.350.6810.6620.750.703
2016 imagery, 2017 conflict [January–June]v10030.40.6970.6940.7170.705
2016 imagery, 2017 conflict [January–June]v10030.450.7390.7540.7170.735
2016 imagery, 2017 conflict [January–June]v10030.50.7650.7960.7170.754
2016 imagery, 2017 conflict [January–June]v10040.30.6720.630.850.723
2016 imagery, 2017 conflict [January–June]v10040.350.6550.6270.7830.696
2016 imagery, 2017 conflict [January–June]v10040.40.6810.6570.7670.708
2016 imagery, 2017 conflict [January–June]v10040.450.6890.6670.7670.713
2016 imagery, 2017 conflict [January–June]v10040.50.6970.6760.7670.719
2016 imagery, 2017 conflict [January–June]v1000.30.6940.6490.8330.73
2016 imagery, 2017 conflict [January–June]v1000.350.7020.6620.8170.731
2016 imagery, 2017 conflict [January–June]v1000.40.6940.6580.80.722
2016 imagery, 2017 conflict [January–June]v1000.450.7360.7120.7830.746
2016 imagery, 2017 conflict [January–June]v1000.50.7440.7230.7830.752

Appendix A.3. ROC Curves

Each figure includes the ROC curve and AUC value for each of the five sample data frames tested for a given combination of time period and temporal window. For a given year, h1 refers to January–June and h2 refers to July–December.
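
As an illustration of how curves of this form can be generated, the sketch below computes a ROC curve and AUC for each of five held-out sample data frames using scikit-learn. The synthetic labels and probabilities, and the plotting details, are placeholders rather than the authors' actual data or code.

```python
# Minimal sketch (not the authors' code): ROC curve and AUC for each of
# five validation sample frames, given true fatal/non-fatal labels and
# predicted probabilities from a trained CNN.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder data standing in for five validation frames.
sample_frames = []
for _ in range(5):
    y_true = rng.integers(0, 2, size=200)                          # observed fatal (1) / non-fatal (0)
    y_prob = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)   # stand-in model probabilities
    sample_frames.append((y_true, y_prob))

for i, (y_true, y_prob) in enumerate(sample_frames, start=1):
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    auc = roc_auc_score(y_true, y_prob)
    plt.plot(fpr, tpr, label=f"sample frame {i} (AUC = {auc:.3f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```
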
Figure A1. ROC curve produced from a model trained on 2014 (January–December) imagery to predict the probability of a fatality if there is a conflict event in 2015 (January–December).
Figure A2. ROC curve produced from a model trained on 2016 (January–December) imagery to predict the probability of a fatality if there is a conflict event in 2017 (January–December).
Figure A3. ROC curve produced from a model trained on 2018 (January–December) imagery to predict the probability of a fatality if there is a conflict event in 2019 (January–December).
Figure A4. ROC curve produced from a model trained on 2014 h1 (January–June) imagery to predict the probability of a fatality if there is a conflict event in 2015 h1 (January–June).
Figure A5. ROC curve produced from a model trained on 2014 h2 (July–December) imagery to predict the probability of a fatality if there is a conflict event in 2015 h2 (July–December).
Figure A6. ROC curve produced from a model trained on 2016 h1 (January–June) imagery to predict the probability of a fatality if there is a conflict event in 2017 h1 (January–June).
Figure A7. ROC curve produced from a model trained on 2016 h2 (July–December) imagery to predict the probability of a fatality if there is a conflict event in 2017 h2 (July–December).
Figure A8. ROC curve produced from a model trained on 2018 h1 (January–June) imagery to predict the probability of a fatality if there is a conflict event in 2019 h1 (January–June).
Figure A9. ROC curve produced from a model trained on 2018 h2 (July–December) imagery to predict the probability of a fatality if there is a conflict event in 2019 h2 (July–December).

References

  1. Herbert, S.; Husaini, S. Conflict, Instability and Resilience in Nigeria. Rapid Literature Review; Technical Report; University of Birmingham: Birmingham, UK, 2018. [Google Scholar]
  2. The Armed Conflict Location & Event Data Project. ACLED Data Dashboard. 2019. Available online: https://acleddata.com/data/ (accessed on 1 July 2024).
  3. Raleigh, C.; Linke, A.; Hegre, H.; Karlsen, J. Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. J. Peace Res. 2010, 47, 651–660. [Google Scholar] [CrossRef]
  4. Usman, M.; Nichol, J.E. Changes in agricultural and grazing land, and insights for mitigating farmer-herder conflict in West Africa. Landsc. Urban Plan. 2022, 222, 104383. [Google Scholar] [CrossRef]
  5. WorldPop; CIESIN. Global High Resolution Population Denominators Project. 2018. Available online: https://hub.worldpop.org/doi/10.5258/SOTON/WP00647 (accessed on 1 July 2024).
  6. European Space Agency. ESA Land Cover. 2017. Available online: https://www.esa-landcover-cci.org/ (accessed on 1 July 2024).
  7. Matsuura, K.; Willmott, C.J. Terrestrial Precipitation: 1900–2014 Gridded Monthly Time Series. 2015. Available online: https://climate.geog.udel.edu/ (accessed on 1 July 2024).
  8. Goodman, S.; BenYishay, A.; Lv, Z.; Runfola, D. GeoQuery: Integrating HPC systems and public web-based geospatial data tools. Comput. Geosci. 2019, 122, 103–112. [Google Scholar] [CrossRef]
  9. Burke, M.; Driscoll, A.; Lobell, D.B.; Ermon, S. Using satellite imagery to understand and promote sustainable development. Science 2021, 371, 6535. [Google Scholar] [CrossRef]
  10. Farabet, C.; Couprie, C.; Najman, L.; LeCun, Y. Learning hierarchical features for scene labeling. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1915–1929. [Google Scholar] [CrossRef]
  11. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  12. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  13. Albelwi, S.; Mahmood, A. A Framework for Designing the Architectures of Deep Convolutional Neural Networks. Entropy 2017, 19, 242. [Google Scholar] [CrossRef]
  14. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  15. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657. [Google Scholar] [CrossRef]
  16. Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1349–1362. [Google Scholar] [CrossRef]
  17. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.Q. Polarimetric SAR Image Classification Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939. [Google Scholar] [CrossRef]
  18. Chen, X.; Xiang, S.; Liu, C.L.; Pan, C.H. Vehicle detection in satellite images by hybrid deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1797–1801. [Google Scholar] [CrossRef]
  19. Vakalopoulou, M.; Karantzalos, K.; Komodakis, N.; Paragios, N. Building detection in very high resolution multispectral data with deep learning features. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2015; pp. 1873–1876. [Google Scholar] [CrossRef]
  20. Jean, N.; Burke, M.; Xie, M.; Davis, W.M.; Lobell, D.B.; Ermon, S. Combining satellite imagery and machine learning to predict poverty. Science 2016, 353, 790–794. [Google Scholar] [CrossRef] [PubMed]
  21. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 105–109. [Google Scholar] [CrossRef]
  22. Chi, G.; Fang, H.; Chatterjee, S.; Blumenstock, J.E. Microestimates of wealth for all low- and middle-income countries. Proc. Natl. Acad. Sci. USA 2022, 119, e2113658119. [Google Scholar] [CrossRef]
  23. Yeh, C.; Perez, A.; Driscoll, A.; Azzari, G.; Tang, Z.; Lobell, D.; Ermon, S.; Burke, M. Using publicly available satellite imagery and deep learning to understand economic well-being in Africa. Nat. Commun. 2020, 11, 2583. [Google Scholar] [CrossRef]
  24. Babenko, B.; Hersh, J.; Newhouse, D.; Ramakrishnan, A.; Swartz, T. Poverty Mapping Using Convolutional Neural Networks Trained on High and Medium Resolution Satellite Images, with an Application in Mexico. arXiv 2017, arXiv:1711.06323. [Google Scholar]
  25. Subash, S.P.; Kumar, R.R.; Aditya, K.S. Satellite data and machine learning tools for predicting poverty in rural India. Agric. Econ. Res. Rev. 2018, 31, 231. [Google Scholar] [CrossRef]
  26. Engstrom, R.; Hersh, J.S.; Newhouse, D.L. Poverty from Space: Using High-Resolution Satellite Imagery for Estimating Economic Well-Being; Technical Report; The World Bank: Washington, DC, USA, 2017. [Google Scholar]
  27. Oshri, B.; Hu, A.; Adelson, P.; Chen, X.; Dupas, P.; Weinstein, J.; Burke, M.; Lobell, D.; Ermon, S. Infrastructure Quality Assessment in Africa Using Satellite Imagery and Deep Learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, London, UK, 19–23 August 2018; ACM: New York, NY, USA, 2018; pp. 616–625. [Google Scholar] [CrossRef]
  28. Brewer, E.; Lin, J.; Kemper, P.; Hennin, J.; Runfola, D. Predicting road quality using high resolution satellite imagery: A transfer learning approach. PLoS ONE 2021, 16, e0253370. [Google Scholar] [CrossRef]
  29. Lv, Z.; Nunez, K.; Brewer, E.; Runfola, D. pyShore: A deep learning toolkit for shoreline structure mapping with high-resolution orthographic imagery and convolutional neural networks. Comput. Geosci. 2023, 171, 105296. [Google Scholar] [CrossRef]
  30. Lobell, D.B.; Di Tommaso, S.; You, C.; Djima, I.Y.; Burke, M.; Kilic, T. Sight for sorghums: Comparisons of satellite-and ground-based sorghum yield estimates in Mali. Remote Sens. 2020, 12, 100. [Google Scholar] [CrossRef]
  31. Runfola, D.; Stefanidis, A.; Baier, H. Using satellite data and deep learning to estimate educational outcomes in data-sparse environments. Remote. Sens. Lett. 2021, 13, 87–97. [Google Scholar] [CrossRef]
  32. Head, A.; Manguin, M.; Tran, N.; Blumenstock, J.E. Can Human Development Be Measured with Satellite Imagery? In Proceedings of the Ninth International Conference on Information and Communication Technologies and Development, ICTD ’17, Lahore, Pakistan, 16–19 November 2017; ACM: New York, NY, USA, 2017; pp. 8:1–8:11. [Google Scholar] [CrossRef]
  33. Hu, W.; Novosad, P.; Burke, M.; Patel, J.H.; Asher, S.; Lobell, D.; Robert, Z.A.; Tang, Z.; Ermon, S. Mapping missing population in rural India: A deep learning approach with satellite imagery. In Proceedings of the AIES 2019—2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; Association for Computing Machinery, Inc.: New York, NY, USA, 2019; pp. 353–359. [Google Scholar] [CrossRef]
  34. Runfola, D.; Baier, H.; Mills, L.; Naughton-Rockwell, M.; Stefanidis, A. Deep learning fusion of satellite and social information to estimate human migratory flows. Trans. GIS 2022, 26, 2495–2518. [Google Scholar] [CrossRef] [PubMed]
  35. Goodman, S.; BenYishay, A.; Runfola, D. A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery. Trans. GIS 2020, 25, 674–691. [Google Scholar] [CrossRef]
  36. Runfola, D.; Stefanidis, A.; Lv, Z.; O’Brien, J.; Baier, H. A multi-glimpse deep learning architecture to estimate socioeconomic census metrics in the context of extreme scope variance. Int. J. Geogr. Inf. Sci. 2024, 38, 726–750. [Google Scholar] [CrossRef]
  37. Buntaine, M.T.; Hamilton, S.E.; Millones, M. Titling community land to prevent deforestation: An evaluation of a best-case program in Morona-Santiago, Ecuador. Glob. Environ. Chang. 2015, 33, 32–43. [Google Scholar] [CrossRef]
  38. BenYishay, A.; Runfola, D.; Trichler, R.; Dolan, C.; Goodman, S.; Parks, B.; Tanner, J.; Heuser, S.; Batra, G.; Anand, A. A Primer on Geospatial Impact Evaluation Methods, Tools, and Applications; AidData Working Paper #44; AidData at William & Mary: Williamsburg, VA, USA, 2017. [Google Scholar]
  39. Runfola, D.; BenYishay, A.; Tanner, J.; Buchanan, G.; Nagol, J.; Leu, M.; Goodman, S.; Trichler, R.; Marty, R. A Top-Down Approach to Estimating Spatially Heterogeneous Impacts of Development Aid on Vegetative Carbon Sequestration. Sustainability 2017, 9, 409. [Google Scholar] [CrossRef]
  40. Marty, R.; Goodman, S.; LeFew, M.; Dolan, C.; BenYishay, A.; Runfola, D. Assessing the causal impact of Chinese aid on vegetative land cover in Burundi and Rwanda under conditions of spatial imprecision. Dev. Eng. 2019, 4, 100038. [Google Scholar] [CrossRef]
  41. Runfola, D.; Batra, G.; Anand, A.; Way, A.; Goodman, S. Exploring the Socioeconomic Co-benefits of Global Environment Facility Projects in Uganda Using a Quasi-Experimental Geospatial Interpolation (QGI) Approach. Sustainability 2020, 12, 3225. [Google Scholar] [CrossRef]
  42. BenYishay, A.; Sayers, R.; Singh, K.; Goodman, S.; Walker, M.; Traore, S.; Rauschenbach, M.; Noltze, M. Irrigation strengthens climate resilience: Long-term evidence from Mali using satellites and surveys. PNAS Nexus 2024, 3, pgae022. [Google Scholar] [CrossRef]
  43. Bansal, C.; Jain, A.; Barwaria, P.; Choudhary, A.; Singh, A.; Gupta, A.; Seth, A. Temporal prediction of socio-economic indicators using satellite imagery. In Proceedings of the CoDS COMAD 2020: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, Hyderabad, India, 5–7 January 2020; ACM International Conference Proceeding Series. pp. 73–81. [Google Scholar] [CrossRef]
  44. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef] [PubMed]
  45. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef]
  46. Gaetano, R.; Ienco, D. TiSeLaC: Time Series Land Cover Classification Challenge Dataset; UMR TETIS: Montpellier, France, 2017. [Google Scholar]
  47. Martino, T.D. Time Series Land Cover Challenge: A Deep Learning Perspective. 2020. Available online: https://towardsdatascience.com/time-series-land-cover-challenge-a-deep-learning-perspective-6a953368a2bd (accessed on 1 July 2024).
  48. Mauro, N.D.; Vergari, A.; Basile, T.; Ventola, F.; Esposito, F. End-to-end Learning of Deep Spatio-temporal Representations for Satellite Image Time Series Classification. In Proceedings of the DC@PKDD/ECML 2017, Skopje, Macedonia, 18–22 September 2017. [Google Scholar]
  49. Garnot, V.S.F.; Landrieu, L.; Giordano, S.; Chehata, N. Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 12325–12334. [Google Scholar]
  50. Hegre, H.; Metternich, N.W.; Nygård, H.M.; Wucherpfennig, J. Introduction: Forecasting in peace research. J. Peace Res. 2017, 54, 113–124. [Google Scholar] [CrossRef]
  51. Cederman, L.E.; Weidmann, N.B. Predicting armed conflict: Time to adjust our expectations? Science 2017, 355, 474–476. [Google Scholar] [CrossRef]
  52. Bazzi, S.; Blair, R.A.; Blattman, C.; Dube, O.; Gudgeon, M.; Merton Peck, R. The Promise and Pitfalls of Conflict Prediction: Evidence from Colombia and Indonesia; Technical Report; The National Bureau of Economic Research: Cambridge, MA, USA, 2019. [Google Scholar]
  53. Weidmann, N.B.; Ward, M.D. Predicting Conflict in Space and Time. J. Confl. Resolut. 2010, 54, 883–901. [Google Scholar] [CrossRef]
  54. Mueller, H.; Rauh, C. The Hard Problem of Prediction for Conflict Prevention. J. Eur. Econ. Assoc. 2022, 20, 2440–2467. [Google Scholar] [CrossRef]
  55. Hegre, H.; Allansson, M.; Basedau, M.; Colaresi, M.; Croicu, M.; Fjelde, H.; Hoyles, F.; Hultman, L.; Högbladh, S.; Jansen, R.; et al. ViEWS: A political violence early-warning system. J. Peace Res. 2019, 56, 155–174. [Google Scholar] [CrossRef]
  56. Vesco, P.; Hegre, H.; Colaresi, M.; Jansen, R.B.; Lo, A.; Reisch, G.; Weidmann, N.B. United they stand: Findings from an escalation prediction competition. Int. Interact. 2022, 48, 860–896. [Google Scholar] [CrossRef]
  57. Radford, B.J. High resolution conflict forecasting with spatial convolutions and long short-term memory. Int. Interact. 2022, 48, 739–758. [Google Scholar] [CrossRef]
  58. Brandt, P.T.; D’Orazio, V.; Khan, L.; Li, Y.F.; Osorio, J.; Sianan, M. Conflict forecasting with event data and spatio-temporal graph convolutional networks. Int. Interact. 2022, 48, 800–822. [Google Scholar] [CrossRef]
  59. D’Orazio, V.; Lin, Y. Forecasting conflict in Africa with automated machine learning systems. Int. Interact. 2022, 48, 714–738. [Google Scholar] [CrossRef]
  60. Racek, D.; Thurner, P.W.; Davidson, B.I.; Zhu, X.X.; Kauermann, G. Conflict forecasting using remote sensing data: An application to the Syrian civil war. Int. J. Forecast. 2024, 40, 373–391. [Google Scholar] [CrossRef]
  61. United States Geological Survey. Landsat 7; United States Geological Survey: Reston, VA, USA, 2018.
  62. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172. [Google Scholar] [CrossRef]
  63. United States Geological Survey. USGS EarthExplorer; United States Geological Survey: Reston, VA, USA, 2017.
  64. United States Geological Survey. USGS Landsat Bulk Download; United States Geological Survey: Reston, VA, USA, 2017.
  65. Foga, S.; Scaramuzza, P.L.; Guo, S.; Zhu, Z.; Dilley, R.D.; Beckmann, T.; Schmidt, G.L.; Dwyer, J.L.; Joseph Hughes, M.; Laue, B. Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sens. Environ. 2017, 194, 379–390. [Google Scholar] [CrossRef]
  66. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the Computer Vision and Pattern Recognition, CVPR 2009, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  67. Weiss, K.R.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 1–40. [Google Scholar]
  68. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  69. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems 27; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 3320–3328. [Google Scholar]
  70. Razavian, A.S.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 512–519. [Google Scholar]
  71. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
  72. Hoo-Chang, S.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285. [Google Scholar]
  73. Lv, Z.; Nunez, K.; Brewer, E.; Runfola, D. Mapping the tidal marshes of coastal Virginia: A hierarchical transfer learning approach. GISci. Remote Sens. 2024, 61, 2287291. [Google Scholar] [CrossRef]
  74. Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159. [Google Scholar] [CrossRef]
  75. Eck, K. In data we trust? A comparison of UCDP GED and ACLED conflict events datasets. Coop. Confl. 2012, 47, 124–141. [Google Scholar] [CrossRef]
  76. Li, H.; Fu, K.; Xu, G.; Zheng, X.; Ren, W.; Sun, X. Scene classification in remote sensing images using a two-stage neural network ensemble model. Remote Sens. Lett. 2017, 8, 557–566. [Google Scholar] [CrossRef]
  77. Zhang, C.; Pan, X.; Li, H.; Gardiner, A.; Sargent, I.; Hare, J.; Atkinson, P.M. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification. ISPRS J. Photogramm. Remote Sens. 2018, 140, 133–144. [Google Scholar] [CrossRef]
  78. Li, W.; Liu, H.; Wang, Y.; Li, Z.; Jia, Y.; Gui, G. Deep Learning-Based Classification Methods for Remote Sensing Images in Urban Built-Up Areas. IEEE Access 2019, 7, 36274–36284. [Google Scholar] [CrossRef]
  79. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  80. Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images. Remote Sens. 2018, 10, 75. [Google Scholar] [CrossRef]
  81. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2017; pp. 5987–5995. [Google Scholar] [CrossRef]
  82. Flusser, J.; Suk, T. Pattern recognition by affine moment invariants. Pattern Recognit. 1993, 26, 167–174. [Google Scholar] [CrossRef]
  83. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the ICML’10: Proceedings of the 27th International Conference on International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010. [Google Scholar]
  84. Han, D.; Liu, Q.; Fan, W. A new image classification method using CNN transfer learning and web data augmentation. Expert Syst. Appl. 2018, 95, 43–56. [Google Scholar] [CrossRef]
Figure 1. Overview of the methodology applied for training and validating conflict fatality models at each discrete time period.
Figure 2. Accuracy distribution across all tests based on time period (inset year range) and temporal window (x-axis). Boxplot outline colors reflect the same temporal window across the different time periods. The blue baseline is the average baseline accuracy across all time periods and temporal windows.
Figure 3. ROC curve produced from CNNs trained on 2014 imagery (January–December) to predict the probability of a fatality if there is a conflict event in 2015 (January–December).
Figure 4. Location of conflict events [3] and prediction results in 2015 (left), 2017 (center), 2019 (right). Imagery ©2024 TerraMetrics, Map data ©2024 Google.
Table 1. Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) Bands [35].
Bands | Wavelength (Micrometers) | Resolution (Meters)
Band 1—Coastal aerosol | 0.433–0.453 | 30
Band 2—Blue | 0.450–0.515 | 30
Band 3—Green | 0.525–0.600 | 30
Band 4—Red | 0.630–0.680 | 30
Band 5—Near Infrared (NIR) | 0.845–0.885 | 30
Band 6—Shortwave Infrared (SWIR) 1 | 1.560–1.660 | 30
Band 7—Shortwave Infrared (SWIR) 2 | 2.100–2.300 | 30
Band 8—Panchromatic | 0.500–0.680 | 15
Band 9—Cirrus | 1.360–1.390 | 30
Band 10—Thermal Infrared (TIRS) 1 | 10.60–11.20 | 30 *
Band 11—Thermal Infrared (TIRS) 2 | 11.50–12.50 | 30 *
* Collected at 100 m but resampled to 30 m to match OLI bands.
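
To illustrate how bands such as those listed in Table 1 can be assembled into a CNN input, the following sketch stacks individual Landsat 8 band files with rasterio. The scene identifier, band subset, and min-max scaling are illustrative assumptions only; the paper's actual preprocessing pipeline may differ.

```python
# Minimal sketch (illustrative only): stacking selected Landsat 8 OLI bands
# into a single (bands, height, width) array suitable as CNN input.
# File names follow the common "*_B<band>.TIF" convention; adjust as needed.
import numpy as np
import rasterio

scene_prefix = "LC08_L1TP_189052_20140712_example"  # hypothetical scene ID
bands = [2, 3, 4]  # blue, green, red (see Table 1); other subsets are possible

layers = []
for b in bands:
    with rasterio.open(f"{scene_prefix}_B{b}.TIF") as src:
        layers.append(src.read(1).astype("float32"))

image = np.stack(layers, axis=0)
# Simple per-band min-max scaling to [0, 1]; the paper's scaling may differ.
mins = image.min(axis=(1, 2), keepdims=True)
maxs = image.max(axis=(1, 2), keepdims=True)
image = (image - mins) / (maxs - mins + 1e-6)
print(image.shape)  # (3, rows, cols)
```
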
Table 2. Temporal windows and associated conflict event counts used for training models.
Imagery Window | Conflict Window | Conflict Event Count (Training/Validation Splits)
2014 all | 2015 all | 1673 (1422/251)
2014 January–June | 2015 January–June | 997 (847/150)
2014 July–December | 2015 July–December | 676 (575/101)
2016 all | 2017 all | 1644 (1397/247)
2016 January–June | 2017 January–June | 817 (694/123)
2016 July–December | 2017 July–December | 827 (703/124)
2018 all | 2019 all | 2218 (1885/333)
2018 January–June | 2019 January–June | 1214 (1032/182)
2018 July–December | 2019 July–December | 1004 (853/151)
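
The windows in Table 2 correspond to filtering conflict events by date and holding out roughly 15% of events for validation. The pandas sketch below illustrates one way such a window and split could be constructed from an ACLED export; the file name, column usage, and exact splitting procedure are assumptions rather than the authors' code.

```python
# Minimal sketch (assumed workflow): selecting ACLED events within a conflict
# window and creating an ~85/15 train/validation split, comparable to the
# counts in Table 2 (e.g., 1673 events -> 1422 training / 251 validation).
import pandas as pd

acled = pd.read_csv("acled_nigeria.csv", parse_dates=["event_date"])  # hypothetical export

window = acled[(acled["event_date"] >= "2015-01-01") &
               (acled["event_date"] <= "2015-12-31")].copy()
window["fatal"] = (window["fatalities"] > 0).astype(int)  # binary label: any fatality

validation = window.sample(frac=0.15, random_state=42)
training = window.drop(validation.index)
print(len(training), len(validation))
```
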
Table 3. Network parameters adjusted during testing, and associated values.
Parameter | Values
Network | Resnet18, Resnet50
Learning Rate | 0.0001, 0.00001
Gamma | 0.25, 0.5
Step Size | 10, 15
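
The parameters in Table 3 correspond to a transfer-learning setup with a step learning-rate schedule. The PyTorch sketch below shows a comparable configuration, assuming an ImageNet-pretrained ResNet-18 adapted to a binary fatal/non-fatal output; the optimizer, dummy data, and epoch count are illustrative assumptions rather than the exact training code used in the paper.

```python
# Minimal sketch (assumptions noted above): ResNet transfer learning with
# hyperparameter values drawn from the ranges in Table 3.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Dummy tensors standing in for image chips and fatal/non-fatal labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=8)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: fatal vs. non-fatal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.25)

for epoch in range(2):  # illustrative; real training uses many more epochs
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # multiplies the learning rate by gamma every step_size epochs
```
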
Table 4. Confusion matrix definitions.
Actual | Predicted | Classification
Positive | Positive | true positive (tp)
Positive | Negative | false negative (fn)
Negative | Positive | false positive (fp)
Negative | Negative | true negative (tn)
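
The accuracy, precision, recall, and f1 values reported in Table A2 follow directly from these confusion-matrix counts at each probability threshold. A minimal sketch of the standard formulas, using placeholder counts, is given below.

```python
# Minimal sketch: deriving the Table A2 metrics from confusion-matrix counts
# at a chosen probability threshold. The counts below are placeholders.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(metrics(tp=90, fp=30, tn=80, fn=20))
```
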
Table 5. Accuracy of naive predictor based on proximity to previous fatal conflict locations when using varying proximity thresholds.
Category | Year | 1 km | 5 km | 7 km | 15 km
Mean % Fatal Correct | 2015 | 0.62 | 0.67 | 0.71 | 0.81
Mean % Fatal Correct | 2017 | 0.67 | 0.72 | 0.74 | 0.83
Mean % Fatal Correct | 2019 | 0.52 | 0.62 | 0.67 | 0.82
Mean % Non-Fatal Correct | 2015 | 0.48 | 0.40 | 0.38 | 0.29
Mean % Non-Fatal Correct | 2017 | 0.36 | 0.29 | 0.27 | 0.21
Mean % Non-Fatal Correct | 2019 | 0.40 | 0.34 | 0.30 | 0.16
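
This naive baseline predicts a fatal outcome whenever an event falls within a given distance of a location that experienced a fatal event in the preceding period. The sketch below illustrates that logic with a simple haversine distance check; the coordinates are placeholders and the paper's exact baseline implementation may differ.

```python
# Minimal sketch (assumed logic): naive proximity baseline. An event is
# predicted "fatal" if it lies within threshold_km of any fatal event
# location from the preceding period.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def naive_predict(event_lat, event_lon, prior_fatal_coords, threshold_km=5.0):
    """Return 1 if the event is within threshold_km of any prior fatal event location."""
    for lat, lon in prior_fatal_coords:
        if haversine_km(event_lat, event_lon, lat, lon) <= threshold_km:
            return 1
    return 0

# Placeholder coordinates (roughly within Nigeria) for illustration only.
prior_fatal = [(11.85, 13.16), (9.05, 7.49)]
print(naive_predict(11.87, 13.17, prior_fatal, threshold_km=5.0))  # -> 1 (within ~2.5 km)
```
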
Table 6. Average ROC AUC from models for each time period and temporal window.
Imagery Temporal | Conflict Temporal | AUC ROC
2014 January–December | 2015 January–December | 0.815
2014 January–June | 2015 January–June | 0.790
2014 July–December | 2015 July–December | 0.796
2016 January–December | 2017 January–December | 0.807
2016 January–June | 2017 January–June | 0.782
2016 July–December | 2017 July–December | 0.797
2018 January–December | 2019 January–December | 0.722
2018 January–June | 2019 January–June | 0.681
2018 July–December | 2019 July–December | 0.678