Article

Scalable Crop Yield Prediction with Sentinel-2 Time Series and Temporal Convolutional Network

1 Natural Resources Institute Finland, Latokartanonkaari 9, FI-00790 Helsinki, Finland
2 Department of Geosciences and Geography, University of Helsinki, FI-00014 Helsinki, Finland
3 Finnish Geospatial Research Institute, National Land Survey of Finland, Vuorimiehentie 5, FI-02150 Espoo, Finland
4 Department of Built Environment, Aalto University, FI-00076 Espoo, Finland
5 Department of Computer Science, University of Helsinki, FI-00014 Helsinki, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(17), 4193; https://doi.org/10.3390/rs14174193
Submission received: 5 July 2022 / Revised: 19 August 2022 / Accepted: 23 August 2022 / Published: 25 August 2022

Abstract
One of the precepts of food security is the proper functioning of the global food markets. This calls for open and timely intelligence on crop production on an agroclimatically meaningful territorial scale. We propose an operationally suitable method for large-scale in-season crop yield estimation from satellite image time series (SITS) for statistical production. As an object-based method, it is spatially scalable from parcel to regional scale, making it useful for prediction tasks in which the reference data are available only at a coarser level, such as counties. We show that a deep learning-based temporal convolutional network (TCN) outperforms the classical machine learning method random forest and produces more accurate results overall than published national crop forecasts. Our novel contribution is to show that mean-aggregated regional predictions with histogram-based features calculated from farm-level observations perform better than the other tested approaches. In addition, TCN is robust to the presence of cloudy pixels, suggesting that TCN can learn cloud masking from the data. Temporal compositing of the information does not improve prediction performance, indicating that with end-to-end learning, less preprocessing in SITS tasks is viable.

1. Introduction

Food security is one of the longstanding development priorities of the United Nations and has been reaffirmed in the 2030 Agenda for Sustainable Development [1]. However, since the onset of the Agenda in 2015, the number of people in the world affected by hunger has increased, reflecting persistent regional socio-economic inequalities [2]. Multiple factors interact within food systems to the detriment of food security and nutrition, the major global drivers being conflict, climate variability and extremes, and economic slowdowns and downturns [2,3]. It is also estimated that the minimum calorie requirement for eliminating projected food undernourishment by 2030 is unlikely to be attainable due to the competition for harvested crops from various other uses, such as animal feed and crop-based biofuels [4].
One of the precepts of food security is ensuring the proper functioning of food commodity markets and agricultural derivatives. Market disruptions or shocks in crop production, such as extreme weather events, can occasionally disturb the equilibrium of price determination for agricultural commodities. This induces food price spikes and volatility that often trigger social unrest and food crises. When food markets are provided with timely and accurate information openly conveyed between actors, societies are better prepared for such disruptions in food supplies [2].
Today, many national, regional, and even global agricultural monitoring systems operate at a range of scales to collect, analyze, disseminate, and communicate comprehensive agricultural intelligence to decision makers (see, e.g., [5]). Since the 1972 launch of the first Landsat satellite, earth observation (EO) has become an essential component of such agricultural monitoring systems, providing timely, objective information with global coverage regarding crop acreage and yield. Free and open access to US Landsat [6] and European Sentinel imagery [7], with high temporal and spatial resolution, and the deployment of high-performance computing have further accelerated the utilization of EO data. For a review of the multitude of crop yield forecasting techniques, the interested reader is referred to, e.g., [8,9].
Despite the promise of EO as an applicable source of information for fine-resolution operational crop yield forecasts, agricultural monitoring systems and national statistical offices publish pre-harvest forecasts only at state or country level. Nevertheless, accurate sub-national information about crop production is a prerequisite for local-scale research and policy evaluation due to the typically high spatial variability in agricultural production. In addition, agricultural producers, especially in livestock husbandry, depend on local primary production chains (seeds and fodder), because local supply reduces logistics costs. Accurate and publicly available local-level near-real-time information about expected crop production therefore helps agricultural practitioners, markets, and decision makers react and adjust in the event of disruptions.
For fine resolution (temporal and spatial) crop production information, we need a prediction model that can produce in-season crop yield estimates from rather long time series of observations (e.g., 121 days in this study). In optical remote sensing, the time series are typically irregular and sparse due to occlusion by clouds and overlap between swaths from adjacent orbits at higher latitudes. The observable unit itself in crop growth monitoring is a field parcel that is typically irregular in shape and size. In this setting, there are plenty of avenues for constructing a representation of an observable unit for a prediction model.
We assume that a field parcel or set of parcels is a near-homogeneous area, managed with similar agricultural practices, sown approximately within the same short period, and growing in similar agroclimatic conditions, and can therefore be considered an object instead of a collection of independent pixel units. In object-oriented remote sensing, objects are typically represented by the mean [10,11,12,13] or median [14] of the reflectance values within the bounds of an object. However, a point estimate of a distribution, such as the mean or median, is inclined to lose important discriminating information about the samples. Histograms provide a straightforward means for utilizing more information from the reflectance distribution [15,16,17], and histograms with more than one dimension can be used if the prediction method can handle a larger feature space.
Random forest (RF) [18] is a widely applied machine learning method in remote sensing [19] and specifically in agricultural monitoring tasks (e.g., [20,21,22,23,24]). In supervised regression tasks, the method aggregates predictions from several randomized decision trees by averaging. RF is generally recognized for its high performance, easy parametrization, and robustness, and for its ability to work in high-dimensional feature spaces even with relatively small sample sizes, unlike neural networks, which usually perform better with larger sample sizes. However, RF does not per se capture the temporal dimension; each time step is merely added to the input data as a new independent static feature. This also implies that a separate model needs to be trained for each time step of interest (for each feature set), making it impractical for automated near-real-time monitoring.
The resurgence of neural networks and resulting advances in computer vision and deep learning inspired by, e.g., LeCun et al. [25] have given rise to novel applications in various domains, including remote sensing [26]. The development of deep neural network models (DNN) has increasingly enabled end-to-end learning of subtle latent features, which previously required separate extraction. DNNs can reduce manual feature extraction in a variety of fields, ranging from computer vision to chemometrics or assisting clinical decisions [27,28,29]. In agricultural monitoring, DNNs have been applied for remotely sensed time series, e.g., by [12,15,30,31,32,33,34,35].
The temporal convolutional network (TCN) is a recent sequential implementation of convolutional neural networks (CNNs) that uses dilated convolutions. The term was first introduced by [36] in their study on action segmentation and detection. Ref. [37] showed that TCNs could outperform canonical recurrent neural networks (RNNs) in sequence modeling tasks on diverse benchmark datasets. Compared to RNNs, TCNs can have a very long effective history, making them a favorable option for crop monitoring. TCNs also have the computational advantage of processing convolutions in parallel instead of sequentially as in RNNs [37]. In agricultural monitoring, [38] applied TCN to crop classification. Ref. [39] used TCN in combination with RNN for greenhouse crop yield forecasting. Ref. [40] applied an adaptation of TCN by introducing a channel (band) attention mechanism in crop classification.
Occlusion by clouds in optical satellite images inevitably requires a handling strategy for most remote sensing tasks and data. In most applications, cloud detection is involved as an isolated preprocessing step, where approaches generally allow either discrete filtering of pixels classified as cloudy or accepting pixels with a low cloud probability as part of the signal [41,42,43]. As an explicit step, cloud masking has a long history combining feature engineering, thresholding and classical machine learning [44,45,46]. Recently, DNNs, especially CNNs, have increasingly enabled end-to-end cloud masking approaches without preprocessed features [47,48].
Completely delegating cloud detection to latent features of the same machine learning model that learns the application features would be attractive due to simplified processing. Rußwurm and Körner [10] studied crop type mapping with Sentinel-2 top-of-atmosphere (Level-1C) time series data. They observed, auspiciously, that self-attention and recurrent networks were able to suppress cloudy observations, although explicit cloud masking still yielded better overall classification performance. Otherwise, to the best of our knowledge, treating cloud detection as a latent part of an end-to-end model has remained relatively unexplored in previous work.
Information about crop yields is needed at finer temporal and spatial resolution. The primary goal of this study is to find the most accurate, spatially scalable, yet operationally lightweight prediction method for statistical production. As explained above, this requires a solution that can effectively use long, irregular, and sparse time series of EO data as the basis of learning, and that is designed to provide predictions for fields of irregular size and shape. To this end, we (i) examine several approaches to constructing spatial and temporal representations from the reflectance information of the observables. In addition, we explore cloud detection and crop yield in separate and combined scope by (ii) comparing the omission of cloudy pixels in preprocessing with end-to-end learning of cloud contamination from the data in our modeling schemes.
For reliable validation of crop yield mapping methods, accurate historical harvest data are needed. National agricultural statistics on crop yields are used as a primary source of reference data in many crop yield mapping studies, e.g., [16,49,50,51,52,53,54,55,56,57,58]. However, from a global perspective, serious weaknesses have been identified in the practices of measuring agricultural production [59,60,61]. In Europe, regulation imposed by the European Commission streamlines statistical production for harmonized information at sub-national territorial levels. Crop statistics are usually based on farmer surveys. Nevertheless, in designing surveys there is a trade-off between expenses and spatial coverage. Typically, nationally representative sampling frames place more weight on high-production areas of the economically most important crops. This leaves fringe regions with a paucity of information about crop yields and, therefore, less accurate regional statistics. Acknowledging the effects of measurement error and sampling variability calls for more careful validation schemes for prediction models. We additionally address this imperfection by selecting municipalities that are highly represented in the farmer survey data when assessing the accuracy of the method at the regional scale.

2. Materials and Methods

2.1. Study Area

Agricultural crop production in Finland is determined by the typical boundary conditions of high-latitude rainfed agroecosystems, namely, a short growing season, uneven rainfall distribution, special natural light conditions, and low growth temperatures, e.g., [62,63]. This study focuses on all the main cereal crops, namely, spring-sown barley, oats, and wheat, and autumn-sown wheat and rye [64]. The sowing period spans from the end of April in the south to early June in the north [65]. Winter crops are sown in August. Harvesting starts with the winter crops in early August, followed by the spring crops, and ends in September.
The study area as shown in Figure 1 comprises 28 Sentinel-2 tiles that overlay approximately 92% of the arable land in Finland. The tiles were selected so that each tile contained a minimum of ~20 farms for every subset (crop type per year).
The growing conditions within the study area differ, because it extends from 60°N to 65°N. A recent study by [66] showed that there was substantial local and temporal variability in the average thermal growing season conditions in northern Europe. The main drivers of spatial variation were latitudinal and elevational gradients. The proximity of seas and lakes, and high forest cover, also typically characteristic of our study area in Finland, suppressed temporal trends and interannual variability [66].
The variability in growing conditions is also visible in the crop statistics from the study area [64]: Figure 2 shows quite high variation in regional crop yields, especially for the autumn-sown winter crops. Interannual variation is notably higher for the winter crops.

2.2. Reference Data

The Finnish annual crop yield statistics are based on a farmer survey conducted by the Natural Resources Institute Finland (LUKE). We utilized the same survey data as reference data. The sample unit is a farm, and thus, for each crop, we have the average yield at the farm level. The sampling design of the survey aims for an accurate estimate of crop yields at the country level by following multistage weighted sampling. Most of the weight is determined by the regional share of the total harvested area for the main crops in Finland. We therefore do not have equal spatial coverage of farms in Finland; most of the farms come from the high agricultural productivity regions in the southwestern part of the country. Other variables determining the weights in the sampling design are the production type and economic size of the farm.
Observed fields comprised approximately 6.5% (159,110 ha) of the total arable land in Finland. The main soil types in the sample were clay soils (54%), rough mineral soils (35%), and organic soils (6.5%). These proportions deviate slightly from the national distribution; Finnish cultivated soils overall are 52%, 38%, and 10%, respectively [67]. The deviation can be explained by the sampling design, which favors the high agricultural productivity regions that are usually clayey.
Seeded crops are declared by farmers each year at the end of the sowing season (mid-June) to the agricultural monitoring authority in order to qualify for agricultural subsidies. Crop types and field geometries are stored in the Land Parcel Identification System (LPIS). We utilized LPIS data to select the subset of farms in the crop production sample growing the six main crop types (winter soft wheat, spring soft wheat, rye, feed barley, malting barley, and oats). The spatial observable unit was therefore a field (polygon) or a group of fields (multi-polygon) if a farm was growing the crop in question on several fields.
To ensure that all fields of a crop type are located in approximately similar agroclimatic conditions, we included only multi-polygons with an inner distance of less than 30 km. Additionally, we filtered out fields smaller than 1 ha to ensure that an adequate number of pixels would represent each field. The average size of a field parcel in Finland is rather small, namely 2.4 ha. In our remaining subsample, a few farms grew a crop on a single parcel, therefore on an area of only about 1 ha, but typically crops were cultivated on a larger scale over several parcels. In our subsample, the average area of a single crop per farm was 21.7 ha.
The number of farms in the crop-wise samples is shown in Table 1. With six crop types and three monitoring seasons, we have a total of 18 sample sets. Winter wheat and rye are winter crops and differ greatly from the summer crops in phenology. Winter crops are sown in the autumn and harvested in August, earlier than summer crops. All crop varieties have a specific timing of developmental stages, in addition to the effect of agroclimatic conditions on their growth, and we therefore decided to develop separate models for each crop. However, feed barley and malting barley are phenologically similar, and in early-season crop forecasts the two varieties of barley are typically published as one yield estimate. We therefore also built a combined barley model for in-season forecasting.
To validate regional-level yield predictions, we selected 91 subsets from 46 municipalities (see Figure 1) in which the survey data comprise 30% of the whole cultivated area of the municipality (crop-wise). In addition, the minimum total cultivated area in the survey data subset was set to 200 ha. These municipalities were then considered to be adequately represented in the survey data, and the crop yields in these regions were calculated as the sample mean.
To assess the in-season prediction performance of our proposed methods, we used country-level crop forecasts from two sources as a reference. The European Commission’s science and knowledge service Joint Research Centre (JRC) publishes European-wide model-based crop forecasts with the MARS Crop Yield Forecasting System (MCYFS). We collected forecasts for Finland from the monthly MARS Bulletins (e.g., [68]). JRC publishes forecasts in mid-June, mid-July, and mid-August. LUKE is another source of country-level crop forecasts. LUKE’s forecasts are based on regional agricultural advisors’ estimates and are published in mid-July and mid-August [69].

2.3. Optical Time Series Data

For crop yield prediction, we needed time series of satellite observations of crop growth as input for the prediction model. Geometrically and atmospherically corrected bottom-of-atmosphere reflectance imagery (Level-2A) from the multi-spectral instrument aboard the Sentinel-2A and Sentinel-2B satellites was downloaded from the Copernicus Open Access Hub (Scihub). We excluded scenes with cloud cover over 95%. We utilized 10 spectral bands suitable for environmental monitoring with the following central wavelengths: Band 2 (492 nm), Band 3 (560 nm), Band 4 (665 nm), Band 5 (705 nm), Band 6 (740 nm), Band 7 (783 nm), Band 8 (842 nm), Band 8A (865 nm), Band 11 (1610 nm), and Band 12 (2190 nm) [70]. The observation period (10 May–31 August) covers most of the growing season in the study area. The average revisit time over the study area is two days. Due to overlapping swaths from adjacent orbits, even more frequent observations are possible given cloud-free conditions.
Raw pixel values were extracted from the Level-2A product using a customized version of the EODIE toolkit (v0.1) [71] that builds on Python libraries. Pixels with the center point within the bounding polygon were included. The downstream modules of the processing pipeline included cloud masking, feature engineering, reshaping, and model training.

2.4. Cloud Masking

Using the scene classification map product from the Sen2Cor processor [72], we filtered out saturated or defective pixels, cloud shadows, clouds of medium and high probability, and thin cirrus (classes 0, 1, 3, 8, 9, and 10). Cloud masking removed approximately 55% of the pixels (in 2020). Figure 3 shows the frequency distribution of reflectance values in 1000 bins for all bands from one season (10 May–31 August 2020) of field parcels growing cereal crops (oats, barley, wheat, rye) in the study area. The histograms in Figure 3a are calculated from unprocessed Level-2A data, and those in Figure 3b after cloud masking. Cloud masking appears to cut the long tail of high reflectance values. The bump-curves at the low reflectances of some bands are also somewhat dampened by cloud masking, but are not removed entirely. Otherwise, a visual inspection suggests the distribution curves closely resemble each other with or without cloud masking.
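As a minimal sketch of this filtering step (assuming the parcel's pixel values and their scene classification labels have already been extracted as NumPy arrays; the function and variable names are illustrative, not from the study's pipeline):

```python
import numpy as np

# Sen2Cor scene classification (SCL) classes filtered out in this study:
# 0 = no data, 1 = saturated/defective, 3 = cloud shadow,
# 8 = cloud medium probability, 9 = cloud high probability, 10 = thin cirrus
INVALID_SCL = (0, 1, 3, 8, 9, 10)

def mask_clouds(pixels: np.ndarray, scl: np.ndarray) -> np.ndarray:
    """Keep only pixels whose SCL label is not in the invalid set.

    pixels: (n_pixels, n_bands) reflectance values of one parcel
    scl:    (n_pixels,) scene classification labels for the same pixels
    """
    keep = ~np.isin(scl, INVALID_SCL)
    return pixels[keep]
```

The same boolean mask can be reused across all 10 bands of a parcel, since the SCL product is a single per-pixel layer.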

2.5. Object Representations

When monitoring the development of a specific crop type at farm-level, our sample consists of time series of observations from one or several field parcels. We had on average 2056 pixels representing a farm (1503 after cloud masking) at each time point. First, we treated each farm as an observational unit and constructed 32-bin histograms at each time point from 10 bands separately. In making the histograms, the range of values was determined by taking the 5th and 95th percentiles of the entire 2020 growing season cloud-masked reflectance distributions for each band separately. Figure 4 shows these upper and lower limits of range for bands 4, 8, and 12. The value range omits the small bump on the left and the long tails on the right. Figure A1 in the Appendix shows histograms with ranges for all 10 bands. The same upper and lower limits of range are used across all crop types.
The prediction models were also trained with median-based time series data for comparison. Figure 5 shows the median intensity values of the spring wheat fields of one farm during the 2020 growing season. The medians in Figure 5a are calculated from unprocessed Level-2A data and in Figure 5b after cloud masking. The bands seem to be correlated in both cases. Cloud masking seems to smooth the time series curve. Before modeling, the median values were rescaled by dividing the image digital numbers by 10,000, as shown in Appendix D in [73].
Figure 6 shows histograms of a farm during the 2020 growing season (spring wheat) for bands 4, 8, and 12. The data are ℓ1-normalized at observation dates. Dates without observations are simply zero vectors. In some cases, cloud masking completely excluded some dates due to clouds or cloud shadows. On average, there were 42 observation dates per season in the cloudy dataset and 33 in the cloud-masked dataset.
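The histogram features described above (32 bins per band, fixed per-band ranges from the season-wide 5th/95th percentiles, ℓ1-normalization per date) can be sketched as follows; the function name and array layout are illustrative assumptions, not the study's actual implementation:

```python
import numpy as np

def band_histograms(pixels, lo, hi, n_bins=32):
    """Per-band histogram representation of one parcel at one date.

    pixels: (n_pixels, n_bands) reflectance values
    lo, hi: (n_bands,) lower/upper range limits, computed once from the
            5th and 95th percentiles of the season-wide cloud-masked
            reflectance distribution of each band
    Returns an (n_bins, n_bands) array with each band L1-normalized,
    so parcels of different sizes are comparable.
    """
    n_bands = pixels.shape[1]
    hist = np.zeros((n_bins, n_bands))
    for b in range(n_bands):
        hist[:, b], _ = np.histogram(pixels[:, b], bins=n_bins,
                                     range=(lo[b], hi[b]))
    sums = hist.sum(axis=0, keepdims=True)
    # Dates with no valid pixels stay as zero vectors
    return np.divide(hist, sums, out=np.zeros_like(hist), where=sums > 0)
```

Pixels outside the [lo, hi] range are dropped by `np.histogram`, which matches the idea of omitting the low-reflectance bump and the long right tail.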
We further explored the capabilities of histograms as density estimates of larger regions. From all the pixels of the survey data subset, we constructed municipality-level histograms. However, municipalities can extend over several Sentinel-2 tiles. A time series can therefore contain observations from adjacent orbits on different dates, posing a potential problem of over-represented observations from marginal areas. As a solution, we tested temporal compositing: pixel values from an 11-day window were compiled into one histogram. The 11-day window was chosen because it is approximately the longest period of still modest phenological change in crops, while at the same time at least one clear-sky observation day can be expected within the period for each sample. Both farm-level (see Figure 7) and regional-level 11-day histograms were constructed. As the crop monitoring period ran from early May until the end of August, we had 12 windows per season. Similarly, we calculated the 11-day mean of the median observations of farms (see Figure 8) and regions. In a few cases, there were observations from only 11 windows due to clouds.
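The compositing step can be sketched as pooling all pixels that fall into the same 11-day window before building the histogram. The season start date and function names below are assumptions for illustration only:

```python
import numpy as np
from datetime import date

SEASON_START = date(2020, 5, 1)  # assumed "early May" start of the window grid
WIDTH = 11                       # days per compositing window
N_WINDOWS = 12                   # windows per season

def window_index(obs_date):
    """Map an observation date to its 11-day compositing window (0-11)."""
    return (obs_date - SEASON_START).days // WIDTH

def composite(dates, pixel_arrays):
    """Pool per-date pixel arrays into 11-day windows.

    dates: list of observation dates; pixel_arrays: matching 1-D arrays.
    Returns a list of N_WINDOWS arrays; a window clouded out on every
    date stays empty.
    """
    pools = [[] for _ in range(N_WINDOWS)]
    for d, px in zip(dates, pixel_arrays):
        w = window_index(d)
        if 0 <= w < N_WINDOWS:
            pools[w].append(px)
    return [np.concatenate(p) if p else np.array([]) for p in pools]
```

A per-window histogram (or mean of medians) is then computed from each pooled array, which evens out the different observation dates of adjacent orbits.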

2.6. Prediction Models

For the sequence modeling task, we used RF as a state-of-the-art baseline method despite its limitations in time series tasks. We used the RF implementation of the sklearn library, version 0.23.2 [74], with the following parameters: 500 trees and eight features randomly sampled as candidates at each split. The trees were grown to maximal depth. For TCN, we used the Keras implementation, version 3.4.0 [75], in the TensorFlow environment (version 2.7.0) [76]. Keras TCN is based on [37]. TCNs were trained using the Adam optimizer [77] with learning rate α = 0.001, β1 = 0.9, β2 = 0.999, and ϵ̂ = 0.1. The batch size was set to 128. The validation loss was monitored with an early stopping mechanism. We used dilated causal convolutions with dilations 1, 2, 4, 8, and 16, giving a receptive field size of 63. The validation split was set to 0.20.
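The receptive field of 63 time steps is consistent with the listed dilations if we assume a kernel size of 2 and two causal convolutions per residual block (the structure used in Keras TCN); a small sanity check:

```python
def receptive_field(kernel_size, dilations, stacks=1, convs_per_block=2):
    """Receptive field of a dilated causal TCN.

    Each residual block contributes convs_per_block causal convolutions
    at its dilation rate, and each such convolution extends the visible
    history by (kernel_size - 1) * dilation time steps.
    """
    return 1 + stacks * convs_per_block * (kernel_size - 1) * sum(dilations)

# Dilations used in this study, assumed kernel size 2:
print(receptive_field(2, [1, 2, 4, 8, 16]))  # 63
```

A receptive field of 63 steps is long enough to span the whole observation period at the temporal resolutions used here.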

2.7. Experimental Setup

We explored which machine learner and feature method performs best with farm-level data. For each crop, we trained a separate model with both RF and TCN. We used two years for training and one year for testing. For cross-validation, we iterated the training 10 times. For each learner and feature method, we thus had performance metrics from 30 runs. For comparison purposes, we chose to evaluate prediction accuracy in mid-June, mid-July, mid-August, and at the end of the season (1 September). The TCN model can take variable-size input, but RF was trained separately at each time point. From the model training perspective, each time point was represented either as a 32 × 10 tensor (histogram) or a 1 × 10 tensor (median) fed to the TCN. For RF, the number of features was simply time steps × 32 × 10 (histogram) or time steps × 1 × 10 (median). Missing observation days were zero-padded. We repeated the object representation schemes for both cloudy and cloud-masked datasets. The object representation schemes are illustrated in Figure 9.
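The difference between the two input layouts can be sketched as a reshaping exercise, assuming the ~121-day series length mentioned in the Introduction and a daily zero-padded grid; the forecast-date index is a hypothetical value for illustration:

```python
import numpy as np

T, BINS, BANDS = 121, 32, 10  # daily grid, histogram bins, spectral bands

# One farm-season: (time, bins, bands); days without observations stay zero
x = np.zeros((T, BINS, BANDS))

# TCN input: one feature vector of length bins * bands per time step,
# so a single model handles any forecast date along the sequence axis
x_tcn = x.reshape(T, BINS * BANDS)

# RF input at a given forecast date: all time steps observed so far are
# flattened into independent static features, so each forecast date
# needs its own trained model
t_forecast = 66  # hypothetical mid-season index
x_rf = x[:t_forecast].reshape(-1)
```

The median representation follows the same pattern with BINS = 1, giving a 1 × 10 tensor per time step.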

3. Results

We start reporting the results with the conventional cloud-masked datasets. See Table A1 in the Appendix for the performance metrics of all farm-level prediction models in the experiments. First, we compared the novel histogram-based TCN model with the conventional median-based RF as a baseline. Figure 10 shows the normalized mean of the root-mean-squared error (NRMSE) and its standard deviation over the iteration runs with farm-level cloud-masked data. The accuracy of TCN improves over the course of the season, as expected in time series prediction. RF predicts well in the early season, but the prediction does not improve when new time features are added later in the season. Overall, TCN has lower NRMSE than RF. Note that both models performed worse with winter wheat and rye (denoted with brown text). These are winter crops whose phenological development differs from spring crops.
As an example of how in-season TCN predictions evolve, Figure 11 shows yield predictions of a farm during the 2020 growing season (spring wheat) vis-à-vis the mean predictions and standard deviations of its surrounding farms in the same municipality. The blue vertical lines denote real observation dates, whereas missing dates are zero-padded. Both the individual farm prediction and the mean prediction of the region are quite close to the actual harvested yield from the mid-June until early July. In July, the predictions start to overestimate but again approach the true yield by the end of August. It is notable that the region prediction has clear in-season trends with moderate standard deviation. The trend is similar in both regional and farm prediction. This suggests that in these prediction curves, it may be possible to observe external factors in the growing conditions that determine the phenological development.
Next, we calculated region-level histograms and their TCN predictions. Figure 12 shows the relationship of farm- and region-level predictions and actual spring wheat yields. In Figure 12a, the years are distinguishable. In the actual yields, there is great variation; in 2018 the yields were lower, in 2019 higher, and in 2020 average. The model overestimated the yields in 2018 and underestimated yields in 2019 and 2020. The area of spring wheat fields does not seem to make a visible difference. Note that there is vertical clustering of observations revealing rounding of reported yields. Figure 12b shows similar behavior to the model at the region level. The predictions tend to be conservative (closer to mean), and the area of an observation does not seem to be visually discriminative. The overall accuracy calculated in root-mean-square error (RMSE) is better at region level (729 kg/ha) than farm level (1131 kg/ha).
In Table 2, we have gathered several error estimation metrics of the iterated region-level crop yield predictions with 12 different feature engineering methods. Each dataset contains either an aggregated histogram of a municipality’s pixels (region level) or mean-aggregated predictions from the farms within the municipality (farm level), computed from either non-cloud-masked or cloud-masked data. The datasets cover either all municipalities or only those overlaid by a single Sentinel-2 tile: the number of observations in the subset (region/crop/year) is 495 if all regions are included and 176 if only regions overlaid by one tile are included.
We use RMSE as the main metric for evaluating the quality of the solutions, since it provides the most direct interpretation in the form of average error in kg/ha. We provide the average RMSE over all prediction tasks (different crop types and the three prediction scenarios corresponding to different years) as the aggregate summary, but additionally report the relative RMSE normalized with the mean regional true yields to even out the scale differences in yields across regions and crops. Mean absolute error (MAE) is provided as an additional aggregate summary. We also report the average correlation coefficient ρ between the predictions and true yields, averaged over the different prediction tasks. For this metric, we only consider cases with at least 10 samples, needed for reliable estimation of the correlation coefficient.
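The three aggregate metrics can be written out explicitly (a straightforward sketch; the normalization by the mean true yield follows the description above):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error, in the units of the yields (kg/ha)."""
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(err ** 2)))

def nrmse(y_true, y_pred):
    """RMSE normalized with the mean of the true yields (unitless)."""
    return rmse(y_true, y_pred) / float(np.mean(y_true))

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the yields (kg/ha)."""
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.abs(err)))
```

NRMSE allows comparing, e.g., a high-yielding barley region with a low-yielding rye region on the same scale, which raw RMSE does not.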
In terms of RMSE and MAE, the best prediction accuracy is achieved with farm-level histograms without cloud masking or temporal compositing. The high correlation ρ agrees with this result. Note that some model variants show even higher correlation, indicating that they can reliably rank the relative yields but would be less useful for yield forecasting due to a large average error. Indeed, temporal compositing performs worse in all cases, indicating that we lose important information in the compositing procedure. We expected regional histograms to be superior to farm histograms, especially in cases where the region falls under several tiles on adjacent orbits. However, the number of tiles does not seem to matter here. Nor is cloud masking essential for good performance.
Finally, we compared our best performing predictions with the country-level crop yield forecasts published by JRC and LUKE (Figure 13), using the best performing TCN model with the non-cloud-masked feature set. The actual yield levels vary greatly between years. Overall, the TCN predictions and the published forecasts alike tend to be conservative and more likely to underestimate than overestimate the actual yields. The TCN model is less accurate for the winter crops (winter wheat and rye). Comparing deviations from the actual yields, Table 3 shows that on average TCN outperforms the published forecasts by 2.5 percentage points. If we include only the forecast times at which JRC or LUKE forecasts are published, TCN outperforms them by 2.2 percentage points.

4. Discussion

4.1. Prediction Performance

The growing conditions for crops at northern European high latitudes are in many ways exceptional in a global perspective [62]. Weather conditions show large inter- and intra-annual as well as spatial variation [66], resulting in high natural variation in yields, which makes accurate prediction difficult. A similar study on farm-level crop yield prediction by Engen et al. [78] was located in Norway, which has a cropping system somewhat similar to that of our study area [66]; the number of observations was also approximately comparable to ours. Their best performing model incorporated weather features and raw image data as 7-day composites into a CNN-RNN model. When farm-level yield predictions were scaled up to municipality level, they reported a nationwide RMSE of 308 kg/ha (compared to our 617 kg/ha). Unfortunately, NRMSE was not reported, so the results are not directly comparable. Interestingly, however, the study showed that raw satellite image data performed better than conventional handcrafted features (vegetation indices); similar results were reported by [79]. Moreover, adding climatic data improved the performance.
Our results show that the winter crop models perform worse on average than the spring crop models, especially at the end of August. The winter crops, winter wheat and rye, have a different timing of phenological cycles than the spring crops, and their interannual and spatial variation is also considerably higher, as shown in Figure 2. As winter crops are harvested earlier, the model may have been confused by post-harvest information in August. In addition, the winter crop subsets contained fewer observations than the other subsets, suggesting that model performance may improve as more training data become available in the coming years. Another way to cover the high variability in growth conditions is to adjust the sampling design to ensure spatial heterogeneity of soil properties in the data.
For further improving the accuracy, one could consider fusing other remote sensing data sources to cover the known growth factors. Grain yield in cereals is determined by the number of spikes per area, the grain number per spike, and the grain weight. These yield components evolve at different times of the growing season and are therefore exposed to different growth conditions and stresses, such as pathogens, pests, and plant nutrition, e.g., [80,81,82]. In crop yield studies, optical and near-infrared reflectance-based vegetation indices have been widely used as a proxy for biomass accumulation, e.g., [49,83,84]. However, by the senescence stage, vegetation indices are reported to carry less information for yield prediction, whereas climatic variables are more useful for monitoring yield-affecting growing conditions and plant stresses [84,85]. In our study setting, we only utilize information from the limited ranges of the electromagnetic spectrum provided by Sentinel-2. However, our deep learning-based model readily supports applying hyperspectral imaging with higher spectral resolution (a larger feature space) to crop yield mapping once such data with appropriate temporal resolution become available. The prediction accuracy could also benefit from complementary information from microwave and thermal remote sensing, as also suggested by [86].
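As a concrete example of the vegetation indices mentioned above, NDVI is conventionally computed from Sentinel-2 band 8 (NIR) and band 4 (red) surface reflectances. Note that this is background illustration only: our own pipeline feeds the model raw band values rather than such handcrafted indices.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index,
    NDVI = (NIR - red) / (NIR + red).

    Values near 1 indicate dense green vegetation, values near 0 bare
    soil, and negative values water or snow. `eps` guards against
    division by zero over no-data pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

For instance, a healthy canopy with NIR reflectance 0.45 and red reflectance 0.05 gives NDVI 0.8.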

4.2. Reference Data

A lack of extensive farm-level or higher-resolution ground reference yields for empirical crop yield models is a common problem in large-scale yield mapping studies [8]. Even where finer-scale datasets exist in commercial agricultural systems, they are rarely leveraged due to privacy concerns. Similarly, owing to data protection and privacy laws, national statistical institutes can only publish aggregated yields, typically at the level of administrative units. With satellite remote sensing, however, national statistical institutes could publish reliable yield forecasts at finer temporal and spatial scales. This would also provide other practitioners in crop forecasting with a more precise proxy for reference or validation of their operational systems.
The reference data nevertheless pose a limitation to our study. There are errors in the farmer-reported data, such as rounding (visible in Figure 12a). Furthermore, farms that do not sell their crops but use them as livestock feed may only have a rough estimate of their yields. In addition, we note that the model predicts harvested yield, which is unavoidably smaller than the total yield, e.g., due to lodging or shattering. Our data even contained cases of harvest loss as high as 100%, i.e., farmers reporting yields of zero, implying that the crop was not harvested at all, e.g., due to overly wet harvest conditions.
In terms of potentially improved accuracy, our study would benefit from additional field-level training data, e.g., from yield monitors on board combine harvesters, especially from data-sparse areas. If such an opportunity opens up, our histogram-based approach readily allows fusion of data at different scales (one field or multiple fields of arbitrary sizes). Nevertheless, our results show that the farm-level data work sufficiently well for training regional yield forecasting models. This work therefore stands as a promising example for national statistical institutes, which typically hold historical farm-level survey data.

4.3. Cloud Mask or Not?

Cloud cover is a hampering factor in optical SITS tasks. It is challenging to detect the various types of clouds: low- and medium-altitude water clouds, and high-altitude cirrus clouds in the upper troposphere and the stratosphere. In addition, clouds cast shadows that appear as areas of darker reflectance; on the other hand, dark areas can also be burned vegetation or topographic shadows. Clouds can likewise be similar in reflectance to bright, white surfaces such as snow, ice, or water.
Several studies have evaluated the performance of cloud-masking algorithms on Sentinel-2 imagery, typically assessing the overall performance of the algorithms across several scenes. Indeed, cloud detection is challenging due to the high reflectance variability of Earth surfaces. For example, [43] compared Sen2Cor, MAJA, LaSRC, Fmask, and Tmask on six very different scenes and concluded that, overall, none of the algorithms outperformed the others. Sen2Cor exhibited the highest omission of clouds and shadows, performing better in clear-sky conditions. In a similar comparison for flat agricultural sites in Munich (Germany) and Orleans (France), [87] reported an inferior overall accuracy for Sen2Cor on two cloudy dates when compared with MAJA and Fmask. Ref. [88] compared Sen2Cor, Fmask, and ATCOR on 20 Sentinel-2 scenes across all continents, climates, seasons, and environments. They concluded that the overall accuracy was very high and nearly the same for all three masking codes. Based on their results, they suggested that Sen2Cor is best applied to rural scenes in a moderate climate and to scenes with snow and cloud.
From the application perspective, we aimed at routine use of Sentinel-2 data with a minimal computational burden and a fully automated deployment pipeline. We therefore chose to start with the Level-2A product, i.e., surface reflectance after atmospheric correction with its Sen2Cor cloud mask attached, encouraged by Sen2Cor’s satisfactory performance in comparative studies. Ref. [10] had paved the way for studies of learning cloud omission, and we tested our processing pipelines both with and without cloud masking. Our results suggest that a deep learning-based model can indeed learn to ignore cloud-affected data, although a more in-depth investigation into how the model learns this is needed. Naturally, our results are restricted to relatively flat vegetated areas in a high-latitude boreal climate. Note that although the cirrus band 10 would be highly informative for learning cloud-affected biases, it is only available in the Level-1C product (top-of-atmosphere reflectances). The Level-1C product could therefore be a more attractive target for further study of end-to-end learning, as shown by [10].
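Applying the attached Sen2Cor mask amounts to dropping pixels flagged by the Level-2A Scene Classification (SCL) layer. The sketch below follows a common convention for which SCL classes to exclude (cloud shadow, medium- and high-probability cloud, thin cirrus); the exact class set used in any given pipeline is a design choice, not something the text specifies.

```python
import numpy as np

# Sen2Cor SCL classes treated as invalid in this sketch:
# 3 = cloud shadow, 8 = cloud medium probability,
# 9 = cloud high probability, 10 = thin cirrus.
INVALID_SCL = {3, 8, 9, 10}

def mask_reflectance(band, scl, fill=np.nan):
    """Replace cloud-affected pixels of a Level-2A band with `fill`,
    given the SCL layer resampled to the band's pixel grid."""
    band = np.asarray(band, dtype=float)
    invalid = np.isin(scl, list(INVALID_SCL))
    return np.where(invalid, fill, band)
```

Pixels filled with NaN are then simply omitted when per-object histograms or medians are computed.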
Another intriguing topic for study is how deep-learned cloud omission and separate cloud masks (such as Sen2Cor) perform across the developmental phases of the growing season. The reflectance of vegetation varies greatly through the growing season, especially when it reaches the senescence stage or at harvest time.

4.4. Object Representations

In this study, we constructed six different representations of the observations. At an overall level, we explored the capabilities of histograms as opposed to common median-based features. At farm level the results were ambivalent, with median-based features occasionally outperforming histogram-based features; at regional level, however, histogram-based features performed better. A median-based approach yields 32 times fewer features for the model input, and classical machine learning algorithms would probably benefit from this smaller feature space. With time series, RF simply treats each time step as a new feature, which further inflates the feature space. On the other hand, RF is known to be invariant to irrelevant features, because its trees are grown using random subsets of features [18]. It is therefore noteworthy that RF already reaches its best prediction accuracies in early summer, suggesting that early growing season features are truly the most informative for the model.
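The two object representations contrasted above can be sketched as follows. The 32-bin count follows from the factor of 32 mentioned in the text; treating the [1, 6000] value range of Figure A1 as the histogram support, and normalizing counts to a density, are our assumptions.

```python
import numpy as np

N_BINS = 32  # histogram variant yields 32x the features of the median variant

def object_features(pixels, value_range=(1, 6000), use_histogram=True):
    """Turn one object's pixel reflectances for a single band and date
    into model features.

    `pixels` is a 1-D array of values for the pixels inside a farm or
    region. The histogram variant returns a normalized 32-bin density,
    which keeps objects of very different sizes comparable; the median
    variant collapses the same pixels into a single value.
    """
    pixels = np.asarray(pixels, dtype=float)
    if use_histogram:
        counts, _ = np.histogram(pixels, bins=N_BINS, range=value_range)
        return counts / max(pixels.size, 1)
    return np.array([np.median(pixels)])
```

Stacking these per-band, per-date feature vectors over the season produces the input sequence for the TCN or, flattened, for RF.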
Our results indicated that temporal compositing did not improve model performance, probably because of lost information. It is possible that shorter window sizes would produce better results, especially if the area of interest is less cloudy. In any case, temporal compositing can be suitable for a visual characterization of the growing season and for easy fusion with other temporal data sources. In the present setting, the TCN finds more useful features in noisier data than in more heavily preprocessed data.
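Temporal compositing as discussed above can be sketched as a fixed-window reduction of the observation series. The 11-day window mirrors the compositing variant reported in Table A1; the median reducer and the day-of-year slotting are our assumptions.

```python
import numpy as np

def temporal_composite(series, dates, window=11):
    """Median-composite a per-object feature series into fixed windows.

    `series` has shape (n_observations, n_features) and `dates` gives
    the day-of-year of each observation. Observations falling in the
    same `window`-day slot are reduced to their median; empty slots
    yield NaN rows to be imputed downstream.
    """
    series = np.asarray(series, dtype=float)
    dates = np.asarray(dates)
    slots = (dates - dates.min()) // window
    n_slots = int(slots.max()) + 1
    out = np.full((n_slots, series.shape[1]), np.nan)
    for s in range(n_slots):
        hit = slots == s
        if hit.any():
            out[s] = np.median(series[hit], axis=0)
    return out
```

Such a reduction shortens and regularizes the input sequence, at the cost of discarding within-window variation, which is the information loss suggested by our results.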

5. Conclusions

The objective of this study was to find an operationally suitable method for large-scale crop yield estimation, so that in-season crop forecasts can be produced on an agroclimatically meaningful territorial scale. First, at farm level, we showed that the histogram-based TCN model outperformed the baseline median-based RF. Secondly, at region level, our results showed that TCN achieved the best prediction accuracy with farm-level histograms without cloud masking or temporal compositing; cloud masking appeared to discard some information about crop development, and temporal compositing likewise did not improve performance, probably due to lost information. Thirdly, at country scale, TCN predicted in-season crop yield on average 2.5 percentage points more accurately than the published forecasts from LUKE and JRC. In addition, we can produce forecasts for crops for which no in-season forecasts have previously been published. We believe that more accurate regional-level predictions can, as a reference, also boost the development of global-scale crop monitoring.

Author Contributions

Conceptualization, M.Y.-H.; methodology, M.Y.-H.; software, M.Y.-H., S.W. and M.L.; validation, M.Y.-H.; formal analysis, A.K. and M.Y.-H.; investigation, M.Y.-H.; resources, M.Y.-H.; writing—original draft preparation, M.Y.-H. and M.L.; writing—review and editing, M.Y.-H., S.W., M.L., M.S., J.H., E.P. and A.K.; visualization, M.Y.-H.; supervision, P.P., J.H., M.S. and A.K.; project administration, M.Y.-H.; funding acquisition, M.Y.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union grant number 101037619.

Informed Consent Statement

Not applicable.

Data Availability Statement

The reference data used in this study originate from a farmer survey and are confidential. The code is available at https://github.com/myliheik/cropyieldArticle (accessed on 1 July 2022).

Acknowledgments

Sentinel-2 imagery originates from the European Copernicus Sentinel Programme. We thank Anneli Partala (LUKE) for her continuous support and sharing of knowledge in crop production and crop statistics; Heikki Laurila (Statistics Finland) for invaluable commenting on the work from the outset; Mirva Kokkinen and Anna Auvinen (LUKE) for meticulous data curation; Jyrki Niemi (LUKE) for explaining global food market mechanisms; Ari Rajala (LUKE) for providing expertise in crop science; and Eeva Vaahtera and Jouni Hyvärinen (LUKE) for creating graphics worth a thousand words. We also acknowledge Kylli Ek and Johannes Nyman at CSC—IT Center for Science Finland for technical support throughout the project. We made use of geocomputing tools provided by the Open Geospatial Information Infrastructure for Research (Geoportti, urn:nbn:fi:research-infras-2016072513) funded by the Academy of Finland, CSC—IT Center for Science Finland, and other Geoportti consortium members. Open access funding provided by University of Helsinki.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
EO    Earth Observation
CNN    Convolutional Neural Network
DNN    Deep Neural Network
JRC    Joint Research Centre
LPIS    Land Parcel Identification System
LUKE    Natural Resources Institute Finland
MAE    Mean Absolute Error
NRMSE    Normalized Root Mean Squared Error
RF    Random Forest
RMSE    Root Mean Squared Error
RNN    Recurrent Neural Network
SITS    Satellite Image Time Series
TCN    Temporal Convolutional Network

Appendix A

Appendix A.1

Figure A1. Displacement plots of histograms of all bands from raw BOA Sentinel-2 images before and after cloud masking. Range [1, 6000], bins 1000. (a) Band 2. (b) Band 3. (c) Band 4 (Red). (d) Band 5. (e) Band 6. (f) Band 7. (g) Band 8 (NIR). (h) Band 8A. (i) Band 11. (j) Band 12 (SWIR).
Table A1. The farm-level mean and standard deviations of root mean square errors (kg/ha) on test set validation with 10 training iterations of Temporal Convolutional Network (TCN) and random forests (RF) models when using crop type-wise subsets and median or histogram based features. The lowest mean error (crop-wise) at forecast time is printed in boldface.
| Crop Type | Method | Mask | June | July | August | End-Of-Season |
|---|---|---|---|---|---|---|
| Winter wheat | RF (histogram) | Cloud-masked | 1964 ± 383 | 1988 ± 386 | 2078 ± 432 | 2088 ± 427 |
| | | Cloudy | 1978 ± 384 | 1996 ± 393 | 2102 ± 443 | 2109 ± 442 |
| | RF (median) | Cloud-masked | 1912 ± 336 | 1914 ± 335 | 1979 ± 376 | 2003 ± 387 |
| | | Cloudy | 2001 ± 439 | 2027 ± 429 | 2090 ± 464 | 2072 ± 459 |
| | TCN (histogram) | Cloud-masked | 1629 ± 261 | 1544 ± 135 | 1637 ± 136 | 1735 ± 247 |
| | | Cloudy | 1822 ± 226 | 1594 ± 189 | 1673 ± 118 | 1754 ± 198 |
| | TCN (median) | Cloud-masked | 1476 ± 269 | 1489 ± 351 | 1576 ± 243 | 1630 ± 257 |
| | | Cloudy | 1553 ± 152 | 1513 ± 262 | 1659 ± 188 | 1749 ± 307 |
| | TCN (11-day histogram) | Cloud-masked | 2225 ± 801 | 1820 ± 573 | 1325 ± 180 | 1281 ± 180 |
| | | Cloudy | 2147 ± 464 | 2324 ± 985 | 1789 ± 602 | 1616 ± 436 |
| | TCN (11-day median) | Cloud-masked | 1539 ± 158 | 1396 ± 157 | 1405 ± 353 | 1459 ± 441 |
| | | Cloudy | 2326 ± 920 | 2459 ± 875 | 2429 ± 758 | 2571 ± 957 |
| Barley | RF (histogram) | Cloud-masked | 1127 ± 131 | 1105 ± 114 | 1112 ± 107 | 1115 ± 123 |
| | | Cloudy | 1125 ± 126 | 1109 ± 109 | 1115 ± 99 | 1119 ± 114 |
| | RF (median) | Cloud-masked | 1131 ± 147 | 1080 ± 112 | 1080 ± 104 | 1096 ± 137 |
| | | Cloudy | 1148 ± 136 | 1110 ± 118 | 1111 ± 112 | 1122 ± 139 |
| | TCN (histogram) | Cloud-masked | 1313 ± 136 | 1066 ± 93 | 1016 ± 73 | 1054 ± 88 |
| | | Cloudy | 1285 ± 162 | 1039 ± 103 | 1000 ± 81 | 1048 ± 117 |
| | TCN (median) | Cloud-masked | 1132 ± 109 | 980 ± 73 | 923 ± 62 | 998 ± 72 |
| | | Cloudy | 1174 ± 107 | 969 ± 112 | 936 ± 116 | 992 ± 146 |
| Feed barley | RF (histogram) | Cloud-masked | 1121 ± 115 | 1099 ± 96 | 1104 ± 87 | 1107 ± 102 |
| | | Cloudy | 1117 ± 109 | 1103 ± 91 | 1108 ± 83 | 1108 ± 95 |
| | RF (median) | Cloud-masked | 1128 ± 134 | 1074 ± 102 | 1073 ± 90 | 1087 ± 121 |
| | | Cloudy | 1146 ± 122 | 1109 ± 108 | 1107 ± 101 | 1114 ± 122 |
| | TCN (histogram) | Cloud-masked | 1343 ± 168 | 1108 ± 102 | 1077 ± 105 | 1100 ± 121 |
| | | Cloudy | 1309 ± 147 | 1030 ± 91 | 988 ± 60 | 1043 ± 92 |
| | TCN (median) | Cloud-masked | 1176 ± 115 | 1007 ± 91 | 963 ± 72 | 997 ± 85 |
| | | Cloudy | 1250 ± 114 | 993 ± 64 | 960 ± 58 | 1007 ± 64 |
| | TCN (11-day histogram) | Cloud-masked | 1340 ± 263 | 1541 ± 642 | 1258 ± 286 | 1188 ± 228 |
| | | Cloudy | 1205 ± 138 | 1206 ± 147 | 1088 ± 138 | 1070 ± 136 |
| | TCN (11-day median) | Cloud-masked | 1194 ± 104 | 1939 ± 1397 | 1490 ± 643 | 1348 ± 460 |
| | | Cloudy | 1466 ± 575 | 2021 ± 1057 | 1917 ± 858 | 1684 ± 630 |
| Malting barley | RF (histogram) | Cloud-masked | 1253 ± 276 | 1217 ± 265 | 1229 ± 260 | 1234 ± 280 |
| | | Cloudy | 1268 ± 292 | 1214 ± 245 | 1219 ± 258 | 1220 ± 265 |
| | RF (median) | Cloud-masked | 1254 ± 289 | 1212 ± 272 | 1205 ± 262 | 1210 ± 288 |
| | | Cloudy | 1247 ± 294 | 1223 ± 292 | 1239 ± 322 | 1237 ± 330 |
| | TCN (histogram) | Cloud-masked | 1358 ± 182 | 1044 ± 101 | 1070 ± 80 | 1101 ± 110 |
| | | Cloudy | 1442 ± 186 | 1054 ± 112 | 1079 ± 80 | 1148 ± 146 |
| | TCN (median) | Cloud-masked | 1030 ± 76 | 1003 ± 101 | 1117 ± 195 | 1159 ± 232 |
| | | Cloudy | 1033 ± 65 | 1029 ± 101 | 1173 ± 246 | 1233 ± 272 |
| | TCN (11-day histogram) | Cloud-masked | 1399 ± 357 | 2267 ± 1111 | 1706 ± 634 | 1465 ± 440 |
| | | Cloudy | 1762 ± 650 | 1921 ± 1001 | 1834 ± 768 | 1717 ± 714 |
| | TCN (11-day median) | Cloud-masked | 1445 ± 346 | 1614 ± 489 | 1202 ± 223 | 1124 ± 163 |
| | | Cloudy | 1643 ± 761 | 1718 ± 993 | 1391 ± 551 | 1335 ± 468 |
| Oats | RF (histogram) | Cloud-masked | 1283 ± 49 | 1264 ± 27 | 1263 ± 24 | 1270 ± 30 |
| | | Cloudy | 1288 ± 42 | 1274 ± 30 | 1270 ± 30 | 1277 ± 35 |
| | RF (median) | Cloud-masked | 1264 ± 50 | 1244 ± 24 | 1230 ± 28 | 1239 ± 39 |
| | | Cloudy | 1294 ± 64 | 1283 ± 56 | 1273 ± 60 | 1273 ± 74 |
| | TCN (histogram) | Cloud-masked | 1381 ± 170 | 1140 ± 145 | 1069 ± 133 | 1090 ± 135 |
| | | Cloudy | 1450 ± 132 | 1147 ± 110 | 1071 ± 96 | 1092 ± 100 |
| | TCN (median) | Cloud-masked | 1203 ± 131 | 975 ± 81 | 904 ± 61 | 922 ± 71 |
| | | Cloudy | 1343 ± 175 | 1022 ± 142 | 978 ± 150 | 1025 ± 149 |
| | TCN (11-day histogram) | Cloud-masked | 1398 ± 170 | 1413 ± 389 | 1190 ± 205 | 1166 ± 165 |
| | | Cloudy | 1394 ± 190 | 1179 ± 151 | 1101 ± 87 | 1090 ± 99 |
| | TCN (11-day median) | Cloud-masked | 1487 ± 304 | 1702 ± 595 | 1422 ± 421 | 1370 ± 443 |
| | | Cloudy | 1726 ± 436 | 1855 ± 750 | 1482 ± 317 | 1453 ± 326 |
| Rye | RF (histogram) | Cloud-masked | 1850 ± 471 | 1884 ± 442 | 1923 ± 449 | 1919 ± 450 |
| | | Cloudy | 1853 ± 497 | 1876 ± 464 | 1919 ± 470 | 1911 ± 472 |
| | RF (median) | Cloud-masked | 1760 ± 386 | 1786 ± 370 | 1829 ± 375 | 1833 ± 390 |
| | | Cloudy | 1849 ± 423 | 1869 ± 422 | 1884 ± 433 | 1869 ± 462 |
| | TCN (histogram) | Cloud-masked | 1647 ± 199 | 1519 ± 123 | 1528 ± 116 | 1610 ± 223 |
| | | Cloudy | 1610 ± 222 | 1471 ± 234 | 1504 ± 165 | 1572 ± 218 |
| | TCN (median) | Cloud-masked | 1433 ± 291 | 1441 ± 289 | 1510 ± 291 | 1692 ± 332 |
| | | Cloudy | 1557 ± 323 | 1601 ± 336 | 1740 ± 333 | 1946 ± 357 |
| | TCN (11-day histogram) | Cloud-masked | 2103 ± 794 | 1431 ± 357 | 1332 ± 271 | 1344 ± 276 |
| | | Cloudy | 2586 ± 1586 | 2188 ± 1088 | 1761 ± 661 | 1702 ± 632 |
| | TCN (11-day median) | Cloud-masked | 1471 ± 287 | 1464 ± 361 | 1500 ± 345 | 1529 ± 412 |
| | | Cloudy | 1559 ± 407 | 1513 ± 363 | 1575 ± 504 | 1623 ± 527 |
| Spring wheat | RF (histogram) | Cloud-masked | 1310 ± 114 | 1284 ± 112 | 1316 ± 115 | 1317 ± 120 |
| | | Cloudy | 1316 ± 131 | 1290 ± 118 | 1324 ± 123 | 1329 ± 130 |
| | RF (median) | Cloud-masked | 1314 ± 121 | 1250 ± 114 | 1266 ± 106 | 1276 ± 118 |
| | | Cloudy | 1312 ± 145 | 1271 ± 122 | 1283 ± 124 | 1289 ± 138 |
| | TCN (histogram) | Cloud-masked | 1475 ± 193 | 1113 ± 96 | 1056 ± 83 | 1079 ± 92 |
| | | Cloudy | 1494 ± 228 | 1126 ± 140 | 1071 ± 113 | 1102 ± 156 |
| | TCN (median) | Cloud-masked | 1231 ± 171 | 1069 ± 119 | 1048 ± 157 | 1090 ± 196 |
| | | Cloudy | 1259 ± 174 | 1061 ± 157 | 1075 ± 156 | 1101 ± 189 |
| | TCN (11-day histogram) | Cloud-masked | 1843 ± 599 | 1960 ± 1021 | 1390 ± 334 | 1319 ± 261 |
| | | Cloudy | 1606 ± 252 | 1593 ± 575 | 1416 ± 305 | 1277 ± 213 |
| | TCN (11-day median) | Cloud-masked | 1990 ± 906 | 2155 ± 1254 | 1854 ± 702 | 1702 ± 588 |
| | | Cloudy | 1606 ± 348 | 1593 ± 586 | 1734 ± 482 | 1391 ± 231 |

References

  1. United Nations. Transforming Our World: The 2030 Agenda for Sustainable Development 2015. Available online: https://sustainabledevelopment.un.org/post2015/transformingourworld/publication (accessed on 4 February 2021).
  2. FAO; IFAD; UNICEF; WFP; WHO. The State of Food Security and Nutrition in the World 2021. Transforming Food Systems for Food Security, Improved Nutrition and Affordable Healthy Diets for All. 2021. Available online: https://www.fao.org/documents/card/en/c/cb4474en (accessed on 4 February 2021). [CrossRef]
  3. Wheeler, T.; von Braun, J. Climate change impacts on global food security. Science 2013, 341, 508–513. [Google Scholar] [CrossRef] [PubMed]
  4. Ray, D.K.; Sloat, L.L.; Garcia, A.S.; Davis, K.F.; Ali, T.; Xie, W. Crop harvests for direct food use insufficient to meet the UN’s food security goal. Nat. Food 2022, 3, 367–374. [Google Scholar] [CrossRef]
  5. Fritz, S.; See, L.; Bayas, J.C.L.; Waldner, F.; Jacques, D.; Becker-Reshef, I.; Whitcraft, A.; Baruth, B.; Bonifacio, R.; Crutchfield, J.; et al. A comparison of global agricultural monitoring systems and current gaps. Agric. Syst. 2019, 168, 258–272. [Google Scholar] [CrossRef]
  6. Wulder, M.A.; Loveland, T.R.; Roy, D.P.; Crawford, C.J.; Masek, J.G.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Belward, A.S.; Cohen, W.B.; et al. Current status of Landsat program, science, and applications. Remote Sens. Environ. 2019, 225, 127–147. [Google Scholar] [CrossRef]
  7. Aschbacher, J.; Milagro-Pérez, M.P. The European Earth monitoring (GMES) programme: Status and perspectives. Remote Sens. Environ. 2012, 120, 3–8. [Google Scholar] [CrossRef]
  8. Schauberger, B.; Jägermeyr, J.; Gornott, C. A systematic review of local to regional yield forecasting approaches and frequently used data resources. Eur. J. Agron. 2020, 120, 126153. [Google Scholar] [CrossRef]
  9. Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
  10. Rußwurm, M.; Körner, M. Self-attention for raw optical Satellite Time Series Classification. ISPRS J. Photogramm. Remote Sens. 2020, 169, 421–435. [Google Scholar] [CrossRef]
  11. Voormansik, K.; Zalite, K.; Sünter, I.; Tamm, T.; Koppel, K.; Verro, T.; Brauns, A.; Jakovels, D.; Praks, J. Separability of mowing and ploughing events on short temporal baseline Sentinel-1 coherence time series. Remote Sens. 2020, 12, 3784. [Google Scholar] [CrossRef]
  12. Garioud, A.; Valero, S.; Giordano, S.; Mallet, C. Recurrent-based regression of Sentinel time series for continuous vegetation monitoring. Remote Sens. Environ. 2021, 263, 112419. [Google Scholar] [CrossRef]
  13. De Vroey, M.; Radoux, J.; Defourny, P. Grassland mowing detection using Sentinel-1 time series: Potential and limitations. Remote Sens. 2021, 13, 348. [Google Scholar] [CrossRef]
  14. Planque, C.; Lucas, R.; Punalekar, S.; Chognard, S.; Hurford, C.; Owers, C.; Horton, C.; Guest, P.; King, S.; Williams, S.; et al. National Crop Mapping Using Sentinel-1 Time Series: A Knowledge-Based Descriptive Algorithm. Remote Sens. 2021, 13, 846. [Google Scholar] [CrossRef]
  15. You, J.; Li, X.; Low, M.; Lobell, D.B.; Ermon, S. Deep Gaussian Process for Crop Yield Prediction Based on Remote Sensing Data. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4559–4566. Available online: https://cs.stanford.edu/~ermon/papers/cropyield_AAAI17.pdf (accessed on 4 February 2022).
  16. Sun, J.; Di, L.; Sun, Z.; Shen, Y.; Lai, Z. County-level soybean yield prediction using deep CNN-LSTM model. Sensors 2019, 19, 4363. [Google Scholar] [CrossRef]
  17. Luotamo, M.; Yli-Heikkilä, M.; Klami, A. Density estimates as representations of agricultural fields for remote sensing-based monitoring of tillage and vegetation cover. Appl. Sci. 2022, 12, 679. [Google Scholar] [CrossRef]
  18. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. Available online: https://www.stat.berkeley.edu/~breiman/57randomforest2001.pdf (accessed on 4 February 2022). [CrossRef]
  19. Belgiu, M.; Drăgut, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  20. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  21. Saeed, U.; Dempewolf, J.; Becker-Reshef, I.; Khan, A.; Ahmad, A.; Wajid, S.A. Forecasting wheat yield from weather data and MODIS NDVI using Random Forests for Punjab province, Pakistan. Int. J. Remote Sens. 2017, 38, 4831–4854. [Google Scholar] [CrossRef]
  22. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  23. Beriaux, E.; Jago, A.; Lucau-Danila, C.; Planchon, V.; Defourny, P. Sentinel-1 Time Series for Crop Identification in the Framework of the Future CAP Monitoring. Remote Sens. 2021, 13, 2785. [Google Scholar] [CrossRef]
  24. Gómez, D.; Salvador, P.; Sanz, J.; Casanova, J.L. New spectral indicator Potato Productivity Index based on Sentinel-2 data to improve potato yield prediction: A machine learning approach. Int. J. Remote Sens. 2021, 42, 3426–3444. [Google Scholar] [CrossRef]
  25. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  26. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
  27. Zhang, X.; Lin, T.; Xu, J.; Luo, X.; Ying, Y. DeepSpectra: An end-to-end deep learning approach for quantitative spectral analysis. Anal. Chim. Acta 2019, 1058, 48–57. [Google Scholar] [CrossRef] [PubMed]
  28. Liu, J.; Wang, H.; Feng, Y. An end-to-end deep model with discriminative facial features for facial expression recognition. IEEE Access 2021, 9, 12158–12166. [Google Scholar] [CrossRef]
  29. Si, K.; Xue, Y.; Yu, X.; Zhu, X.; Li, Q.; Gong, W.; Liang, T.; Duan, S. Fully end-to-end deep-learning-based diagnosis of pancreatic tumors. Theranostics 2021, 11, 1982. [Google Scholar] [CrossRef] [PubMed]
  30. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  31. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  32. Pelletier, C.; Webb, G.I.; Petitjean, F. Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series. Remote Sens. 2019, 11, 523. [Google Scholar] [CrossRef]
  33. Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990. [Google Scholar] [CrossRef]
  34. Mazzia, V.; Khaliq, A.; Chiaberge, M. Improvement in Land Cover and Crop Classification based on Temporal Features Learning from Sentinel-2 Data Using Recurrent-Convolutional Neural Network (R-CNN). Appl. Sci. 2019, 10, 238. [Google Scholar] [CrossRef]
  35. Garnot, V.S.F.; Landrieu, L. Lightweight Temporal Self-attention for Classifying Satellite Images Time Series. In Proceedings of the Advanced Analytics and Learning on Temporal Data; Lemaire, V., Malinowski, S., Bagnall, A., Guyet, T., Tavenard, R., Ifrim, G., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 171–181. [Google Scholar] [CrossRef]
  36. Lea, C.; Flynn, M.D.; Vidal, R.; Reiter, A.; Hager, G.D. Temporal Convolutional Networks for Action Segmentation and Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1003–1012. [Google Scholar] [CrossRef]
  37. Bai, S.; Kolter, J.Z.; Koltun, V. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  38. Račič, M.; Oštir, K.; Peressutti, D.; Zupanc, A.; Čehovin Zajc, L. Application of Temporal Convolutional Neural Network for the Classification of Crops on SENTINEL-2 Time Series. ISPRS-Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2020, 43B2, 1337–1342. [Google Scholar] [CrossRef]
  39. Gong, L.; Yu, M.; Jiang, S.; Cutsuridis, V.; Pearson, S. Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN. Sensors 2021, 21, 4537. [Google Scholar] [CrossRef]
  40. Tang, P.; Du, P.; Xia, J.; Zhang, P.; Zhang, W. Channel Attention-Based Temporal Convolutional Network for Satellite Image Time Series Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  41. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. In Proceedings of the Image and Signal Processing for Remote Sensing XXIII. International Society for Optics and Photonics, Warsaw, Poland, 11–13 September 2017; Volume 10427, p. 1042704. [Google Scholar] [CrossRef]
  42. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  43. Tarrio, K.; Tang, X.; Masek, J.G.; Claverie, M.; Ju, J.; Qiu, S.; Zhu, Z.; Woodcock, C.E. Comparison of cloud detection algorithms for Sentinel-2 imagery. Sci. Remote Sens. 2020, 2, 100010. [Google Scholar] [CrossRef]
  44. Mahajan, S.; Fataniya, B. Cloud detection methodologies: Variants and development—A review. Complex Intell. Syst. 2020, 6, 251–261. [Google Scholar] [CrossRef]
  45. Cihlar, J.; Howarth, J. Detection and removal of cloud contamination from AVHRR images. IEEE Trans. Geosci. Remote Sens. 1994, 32, 583–589. [Google Scholar] [CrossRef]
  46. Li, P.; Dong, L.; Xiao, H.; Xu, M. A cloud image detection method based on SVM vector machine. Neurocomputing 2015, 169, 34–42. [Google Scholar] [CrossRef]
  47. Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259. [Google Scholar] [CrossRef]
  48. Luotamo, M.; Metsämäki, S.; Klami, A. Multiscale cloud detection in remote sensing images using a dual convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4972–4983. Available online: https://arxiv.org/pdf/2006.00836 (accessed on 4 February 2022). [CrossRef]
  49. Bolton, D.K.; Friedl, M.A. Forecasting crop yield using remotely sensed vegetation indices and crop phenology metrics. Agric. For. Meteorol. 2013, 173, 74–84. [Google Scholar] [CrossRef]
  50. López-Lozano, R.; Duveiller, G.; Seguini, L.; Meroni, M.; García-Condado, S.; Hooker, J.; Leo, O.; Baruth, B. Towards regional grain yield forecasting with 1km-resolution EO biophysical products: Strengths and limitations at pan-European level. Agric. For. Meteorol. 2015, 206, 12–32. [Google Scholar] [CrossRef]
  51. Chen, Y.; Zhang, Z.; Tao, F. Improving regional winter wheat yield estimation through assimilation of phenology and leaf area index from remote sensing data. Eur. J. Agron. 2018, 101, 163–173. [Google Scholar] [CrossRef]
  52. Peng, B.; Guan, K.; Pan, M.; Li, Y. Benefits of seasonal climate prediction and satellite data for forecasting US maize yield. Geophys. Res. Lett. 2018, 45, 9662–9671. [Google Scholar] [CrossRef]
  53. Liu, J.; Shang, J.; Qian, B.; Huffman, T.; Zhang, Y.; Dong, T.; Jing, Q.; Martin, T. Crop Yield Estimation Using Time-Series MODIS Data and the Effects of Cropland Masks in Ontario, Canada. Remote Sens. 2019, 11, 2419. [Google Scholar] [CrossRef]
  54. Li, Y.; Guan, K.; Yu, A.; Peng, B.; Zhao, L.; Li, B.; Peng, J. Toward building a transparent statistical model for improving crop yield prediction: Modeling rainfed corn in the U.S. Field Crop. Res. 2019, 234, 55–65. [Google Scholar] [CrossRef]
  55. Ma, Y.; Zhang, Z.; Kang, Y.; Özdoğan, M. Corn yield prediction and uncertainty analysis based on remotely sensed variables using a Bayesian neural network approach. Remote Sens. Environ. 2021, 259, 112408. [Google Scholar] [CrossRef]
  56. Cao, J.; Wang, H.; Li, J.; Tian, Q.; Niyogi, D. Improving the Forecasting of Winter Wheat Yields in Northern China with Machine Learning—Dynamical Hybrid Subseasonal-to-Seasonal Ensemble Prediction. Remote Sens. 2022, 14, 1707. [Google Scholar] [CrossRef]
  57. Sartore, L.; Rosales, A.N.; Johnson, D.M.; Spiegelman, C.H. Assessing machine leaning algorithms on crop yield forecasts using functional covariates derived from remotely sensed data. Comput. Electron. Agric. 2022, 194, 106704. [Google Scholar] [CrossRef]
  58. Paudel, D.; Boogaard, H.; de Wit, A.; van der Velde, M.; Claverie, M.; Nisini, L.; Janssen, S.; Osinga, S.; Athanasiadis, I.N. Machine learning for regional crop yield forecasting in Europe. Field Crop. Res. 2022, 276, 108377. [Google Scholar] [CrossRef]
  59. Carletto, C.; Gourlay, S.; Winters, P. From guesstimates to GPStimates: Land area measurement and implications for agricultural analysis. J. Afr. Econ. 2015, 24, 593–628. [Google Scholar] [CrossRef]
  60. Braimoh, A.K.; Durieux, M.; Trant, M.; Riungu, C.; Gaye, D.; Balakrishnan, T.K.; Umali-Deininger, D. Capacity Needs Assessment for Improving Agricultural Statistics in Kenya; World Bank: Washington, DC, USA, 2018; Available online: http://documents.worldbank.org/curated/en/801111542740476532/pdf/Capacity-Needs-Assessment-for-Improving-Agricultural-Statistics-in-Kenya.pdf (accessed on 11 February 2022).
  61. Burke, M.; Driscoll, A.; Lobell, D.B.; Ermon, S. Using satellite imagery to understand and promote sustainable development. Science 2021, 371. [Google Scholar] [CrossRef]
  62. Peltonen-Sainio, P. Crop production in a northern climate. In Proceedings of the FAO/OECD Workshop: Building Resilience for Adaptation to Climate Change in the Agriculture Sector; Meybeck, A., Lankoski, J., Redfern, S., Azzu, N., Gitz, V., Eds.; FAO: Rome, Italy, 2012; pp. 183–216. Available online: https://www.fao.org/3/i3084e/i3084e15.pdf (accessed on 4 February 2022).
  63. Mølmann, J.A.; Dalmannsdottir, S.; Hykkerud, A.L.; Hytönen, T.; Samkumar, R.A.; Jaakola, L. Influence of Arctic light conditions on crop production and quality. Physiol. Plant. 2021, 172, 1931–1940. [Google Scholar] [CrossRef]
  64. Natural Resources Institute Finland. Crop production statistics: Use of arable land area by Year and Species. 2021. Available online: https://stat.luke.fi/en/crop-production-statistics (accessed on 27 December 2021).
  65. Natural Resources Institute Finland. Statistics on Utilised Agricultural Area; Sowing Dates. 2021. Available online: https://stat.luke.fi/sites/default/files/kevatkylvot_2000-2021_0.xls (accessed on 27 December 2021).
  66. Aalto, J.; Pirinen, P.; Kauppi, P.E.; Rantanen, M.; Lussana, C.; Lyytikäinen-Saarenmaa, P.; Gregow, H. High-resolution analysis of observed thermal growing season variability over northern Europe. Clim. Dyn. 2022, 58, 1477–1493. [Google Scholar] [CrossRef]
  67. Lilja, H.; Uusitalo, R.; Yli-Halla, M.; Nevalainen, R.; Väänänen, R.; Tamminen, P.; Tuhtar, J. Suomen maannostietokanta: Käyttöopas versio 1.1 (Finnish Soil Database: Manual, version 1.1). 2017. Economydoctor. Soil Class Information Service. Available online: http://www.luke.fi/economydoctor (accessed on 22 November 2021).
  68. Baruth, B.; Bassu, S.; Bussay, A.; Ceglar, A.; Cerrani, I.; Chemin, Y.; De Palma, P.; Fumagalli, D.; Lecerf, R.; Manfron, G.; et al. JRC MARS Bulletin—Crop monitoring in Europe. 2020. Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC120745 (accessed on 4 February 2022).
  69. Natural Resources Institute Finland. Crop Production Statistics: Advance Estimates of Annual Harvests. 2021. Available online: https://stat.luke.fi/en/crop-production-statistics (accessed on 1 April 2021).
  70. European Space Agency. Sentinel-2 User Handbook; European Space Agency: Paris, France, 2015; Available online: https://sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook (accessed on 27 November 2021).
  71. Wittke, S.; Fouilloux, A.; Lehti, P.; Varho, J.; Karjalainen, M.; Vaaja, M.; Puttonen, E. EODIE—Earth Observation Data Information Extractor. 2022. Available online: http://dx.doi.org/10.2139/ssrn.4067133 (accessed on 4 June 2022).
  72. Richter, R.; Louis, J.; Müller-Wilm, U. Sentinel-2 MSI-Level 2A products algorithm theoretical basis document. Eur. Space Agency Special Publ. ESA SP 2012, 49, 1–72. Available online: https://forum.step.esa.int/uploads/default/original/2X/f/f3aa9be5ad9aab427885b536d0a30a5d47f45202.pdf (accessed on 4 February 2022).
73. Louis, J. S2 MPC—Level 2A Product Format Specification; European Space Agency: Paris, France, 2017; Available online: https://sentinel.esa.int/documents/247904/685211/Sentinel-2-MSI-L2A-Product-Format-Specifications.pdf (accessed on 27 November 2021).
  74. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  75. Remy, P. Temporal Convolutional Networks for Keras. 2020. Available online: https://github.com/philipperemy/keras-tcn (accessed on 4 February 2022).
  76. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv 2015, arXiv:1603.04467. [Google Scholar]
  77. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980. [Google Scholar]
  78. Engen, M.; Sandø, E.; Sjølander, B.L.O.; Arenberg, S.; Gupta, R.; Goodwin, M. Farm-Scale Crop Yield Prediction from Multi-Temporal Data Using Deep Hybrid Neural Networks. Agronomy 2021, 11, 2576. [Google Scholar] [CrossRef]
  79. Sagan, V.; Maimaitijiang, M.; Bhadra, S.; Maimaitiyiming, M.; Brown, D.R.; Sidike, P.; Fritschi, F.B. Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 174, 265–281. [Google Scholar] [CrossRef]
  80. Peltonen-Sainio, P.; Kangas, A.; Salo, Y.; Jauhiainen, L. Grain number dominates grain weight in temperate cereal yield determination: Evidence based on 30 years of multi-location trials. Field Crop. Res. 2007, 100, 179–188. [Google Scholar] [CrossRef]
  81. Rajala, A.; Peltonen-Sainio, P.; Kauppila, R.; Wilhelmson, A.; Reinikainen, P.; Kleemola, J. Within-field variation in grain yield, yield components and quality traits of two-row barley. J. Agric. Sci. 2007, 145, 445–454. [Google Scholar] [CrossRef]
  82. Rajala, A.; Hakala, K.; Mäkelä, P.; Muurinen, S.; Peltonen-Sainio, P. Spring wheat response to timing of water deficit through sink and grain filling capacity. Field Crop. Res. 2009, 114, 263–271. [Google Scholar] [CrossRef]
  83. Johnson, D.M. An assessment of pre- and within-season remotely sensed variables for forecasting corn and soybean yields in the United States. Remote Sens. Environ. 2014, 141, 116–128. [Google Scholar] [CrossRef]
  84. Zhang, L.; Zhang, Z.; Luo, Y.; Cao, J.; Xie, R.; Li, S. Integrating satellite-derived climatic and vegetation indices to predict smallholder maize yield using deep learning. Agric. For. Meteorol. 2021, 311, 108666. [Google Scholar] [CrossRef]
  85. Guan, K.; Wu, J.; Kimball, J.S.; Anderson, M.C.; Frolking, S.; Li, B.; Hain, C.R.; Lobell, D.B. The shared and unique values of optical, fluorescence, thermal and microwave satellite data for estimating large-scale crop yields. Remote Sens. Environ. 2017, 199, 333–349. [Google Scholar] [CrossRef]
  86. Cai, Y.; Guan, K.; Lobell, D.; Potgieter, A.B.; Wang, S.; Peng, J.; Xu, T.; Asseng, S.; Zhang, Y.; You, L.; et al. Integrating satellite and climate data to predict wheat yield in Australia using machine learning approaches. Agric. For. Meteorol. 2019, 274, 144–159. [Google Scholar] [CrossRef]
  87. Baetens, L.; Desjardins, C.; Hagolle, O. Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sens. 2019, 11, 433. [Google Scholar] [CrossRef]
  88. Zekoll, V.; Main-Knorn, M.; Alonso, K.; Louis, J.; Frantz, D.; Richter, R.; Pflug, B. Comparison of Masking Algorithms for Sentinel-2 Imagery. Remote Sens. 2021, 13, 137. [Google Scholar] [CrossRef]
Figure 1. The study area is defined by 28 Sentinel-2 tiles. Selected regions (municipalities) are used to validate regional-level yield predictions. Note that multiple Sentinel-2 tiles on adjacent orbits can cover the same municipality.
Figure 2. The mean and 68% bootstrap confidence interval (dashed lines) of the regional crop yields in the study area according to the crop production statistics [64] in 2018–2020.
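The 68% bootstrap confidence interval shown in Figure 2 can be reproduced with a standard percentile bootstrap of the mean. A minimal sketch, in which the yield values, resample count, and seed are illustrative and not taken from the paper:

```python
import numpy as np

def bootstrap_mean_ci(values, n_boot=10_000, ci=0.68, seed=0):
    """Percentile bootstrap confidence interval for the mean."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo_q, hi_q = (1 - ci) / 2, 1 - (1 - ci) / 2
    return np.quantile(means, lo_q), np.quantile(means, hi_q)

# Hypothetical regional yields (kg/ha), not values from the study.
yields = [3400, 3900, 4100, 3600, 4300, 3800, 3500, 4000]
lo, hi = bootstrap_mean_ci(yields)
```

The percentile interval is one of several bootstrap variants; the paper does not specify which one was used, so this is only one plausible construction.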
Figure 3. Histograms of 10 Sentinel-2 bands without (a) and with (b) cloud masking. The data are from the growing season (May–August 2020) of all field parcels in the study sample. Note that the range of the surface reflectance values is set to [1, 6000]. The number of bins is 1000.
Figure 4. Displacement plots of surface reflectance histograms of three Sentinel-2 bands without cloud masking (top) and with cloud masking (bottom). The data are from 11 May to 31 August 2020. The range is set to [1, 6000]. The number of bins is 1000. (a) Band 4 (Red). (b) Band 8 (NIR). (c) Band 12 (SWIR).
Figure 5. Median values of 10 Sentinel-2 bands from spring wheat fields of a farm calculated from (a) cloudy and (b) cloud-masked surface reflectance.
Figure 6. Histograms of three Sentinel-2 bands from the spring wheat fields of a farm, calculated from (a) cloudy and (b) cloud-masked surface reflectance. The x-axis is time, and the y-axis is the reflectance value range of the histograms (low values at the bottom, high values at the top). The red rectangle marks one timepoint in early May, when the observed pixel values from the farm's spring wheat fields are condensed into a 32-bin histogram. The brightest yellow bins are the value ranges with the highest pixel counts.
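The histogram features illustrated in Figure 6 amount to binning each band's parcel pixels into a fixed-range histogram per observation date. A minimal sketch with synthetic pixel values, assuming 32 bins over the reflectance range [1, 6000] as in the figure; the normalization by pixel count is an assumption to make parcels of different sizes comparable:

```python
import numpy as np

# Illustrative: pixels from one farm's spring-wheat parcels on one
# observation date, across 10 spectral bands (values are synthetic).
rng = np.random.default_rng(42)
pixels = rng.integers(1, 6000, size=(500, 10))  # (n_pixels, n_bands)

N_BINS = 32
VALUE_RANGE = (1, 6000)  # fixed range so bins are comparable across dates

# One 32-bin histogram per band -> a (10, 32) feature matrix for this
# timepoint; stacking such matrices over dates yields the input sequence.
features = np.stack([
    np.histogram(pixels[:, b], bins=N_BINS, range=VALUE_RANGE)[0]
    / pixels.shape[0]  # share of pixels per bin
    for b in range(pixels.shape[1])
])
```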
Figure 7. Histograms of three Sentinel-2 bands from the spring wheat fields of a farm as 11-day composites, calculated from cloudy and cloud-masked surface reflectance.
Figure 8. Median values of 10 Sentinel-2 bands from the spring wheat fields of a farm as 11-day mean composites, calculated from (a) cloudy and (b) cloud-masked surface reflectance. The x-axis shows the 12 time windows of the growing season.
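The 11-day mean compositing behind Figures 7 and 8 can be sketched as grouping irregularly timed observations into fixed windows and averaging within each; windows with no observation stay missing. The season length (132 days, giving the 12 windows of Figure 8), day indices, and NIR values below are illustrative assumptions:

```python
import numpy as np

def compose_11day(day_of_season, values, season_len=132, win=11):
    """Mean-composite irregular observations into fixed 11-day windows.

    day_of_season : 1-based day index of each observation
    values        : one feature value per observation (e.g., a band median)
    Returns one mean per window; empty windows become NaN.
    """
    n_win = season_len // win
    out = np.full(n_win, np.nan)
    idx = (np.asarray(day_of_season) - 1) // win  # window index per obs.
    vals = np.asarray(values, dtype=float)
    for w in range(n_win):
        sel = vals[idx == w]
        if sel.size:
            out[w] = sel.mean()
    return out

# Hypothetical observation days and NIR medians for one parcel.
days = [3, 9, 25, 40, 41, 60, 75, 118]
nir = [900, 1100, 1500, 2600, 2700, 3900, 3600, 1200]
composites = compose_11day(days, nir)  # 12 windows, NaN where no pass
```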
Figure 9. Processing pipeline for predicting crop yield from satellite time series data. Six object representation schemes were applied as input to the temporal convolutional network and random forests. Each scheme was run with and without cloud masking. Models were trained at both the farm and regional levels for comparison. Regional predictions were either mean-aggregated from farm-level predictions or taken directly from regional-level models.
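The mean aggregation of farm-level predictions to the regional level in the pipeline of Figure 9 is a group-wise average. A sketch with hypothetical municipality codes and predicted yields:

```python
import numpy as np

# Hypothetical farm-level predictions (kg/ha), grouped by municipality code.
farm_region = np.array([1, 1, 1, 2, 2])
farm_pred = np.array([3500.0, 3700.0, 3600.0, 4100.0, 3900.0])

# Region-level prediction = mean of the farm-level predictions it contains.
region_pred = {
    r: farm_pred[farm_region == r].mean()
    for r in np.unique(farm_region)
}
```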
Figure 10. The normalized root-mean-square error (NRMSE), i.e., the root-mean-square error as a share of the mean crop-wise yield, and its 68% bootstrap confidence interval for farm-level predictions at four time steps within the growing season, based on cloud-masked data.
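The NRMSE of Figure 10 normalizes the root-mean-square error by the mean observed yield. A sketch with hypothetical farm-level yields:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error as a share of the mean observed yield."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rmse / y_true.mean()

# Hypothetical actual and predicted yields (kg/ha).
actual = [3000, 4000, 5000]
pred = [3300, 3800, 4600]
score = nrmse(actual, pred)  # fraction; multiply by 100 for percent
```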
Figure 11. An example of in-season TCN predictions of the spring wheat yield (kg/ha) of a farm and its surrounding region (municipality) in 2020, from cloud-masked data. Blue vertical lines denote observation dates. The region-level prediction is reported as the mean of all farm-level predictions with the standard deviation.
Figure 12. Scatter plots of farm- and region-level predictions vs. actual yields when using histogram-based features with the temporal convolutional network model, for spring wheat in 2018–2020, from cloud-masked data. The root-mean-square error (RMSE) is shown in kg/ha and as a percentage of the average actual test set yield. Point size is proportional to the area of the observation (farm or region) in hectares. Years are denoted by color. (a) Farm level. (b) Region level.
Figure 13. Bar plots of the mean farm-level predictions (TCN) and the published country-level forecasts from the JRC (MCYFS) and LUKE in 2018–2020. The horizontal red line shows the 3-year mean of national-level yields from the official crop statistics (LUKE). Black vertical lines show the minimum and maximum values of the three years. Note that the y-axis is zoomed to the range [2000, 5500].
Table 1. Number of farms in the sample sets for all studied crops in 2018–2020.

| Crop Type | Year 2018 | Year 2019 | Year 2020 | Total |
|---|---|---|---|---|
| Winter wheat (Triticum aestivum L.) | 217 | 547 | 392 | 1156 |
| Spring wheat (Triticum aestivum L.) | 1895 | 1453 | 1544 | 4892 |
| Rye (Secale cereale L.) | 394 | 559 | 342 | 1295 |
| Feed barley (Hordeum vulgare L.) | 2633 | 2765 | 2629 | 8027 |
| Malting barley (Hordeum vulgare L.) | 707 | 396 | 486 | 1589 |
| Oats (Avena sativa L.) | 3095 | 3138 | 3445 | 9678 |
| Total | 8941 | 8858 | 8838 | 26,637 |
Table 2. The mean of root-mean-square errors (RMSE), the share of the error from the mean regional-level true yields (RMSE in %), the mean absolute error (MAE), and the Pearson correlation coefficient ρ of regional-level predictions with the temporal convolutional network for 2018–2020. The best value per error metric is printed in boldface. The number (#) of tiles indicates whether the dataset is drawn from municipalities overlaid by a single Sentinel-2 tile (1) or by multiple (1–6) tiles.

| Level | Feature Type | Cloud-Masked | # of Tiles | Temporal | RMSE (kg/ha) | RMSE (%) | MAE | ρ |
|---|---|---|---|---|---|---|---|---|
| Farm | Histogram |  | 1–6 | 121 days | **617** | **17** | **494** | 0.54 |
| Farm | Histogram |  | 1 | 121 days | 642 | 18 | 540 | 0.67 |
| Farm | Histogram | x | 1–6 | 121 days | 709 | 20 | 585 | 0.56 |
| Farm | Median | x | 1–6 | 121 days | 728 | 20 | 570 | 0.50 |
| Farm | Median |  | 1–6 | 121 days | 738 | 21 | 566 | 0.43 |
| Farm | Histogram | x | 1 | 121 days | 750 | 21 | 645 | 0.67 |
| Farm | Median | x | 1 | 121 days | 769 | 22 | 631 | 0.55 |
| Farm | Median |  | 1 | 121 days | 809 | 23 | 631 | 0.54 |
| Region | Histogram | x | 1–6 | 121 days | 809 | 23 | 650 | 0.13 |
| Region | Histogram | x | 1 | 121 days | 841 | 24 | 692 | 0.10 |
| Region | Histogram |  | 1–6 | 121 days | 862 | 24 | 676 | 0.11 |
| Region | Histogram |  | 1 | 121 days | 909 | 26 | 739 | 0.06 |
| Farm | Histogram |  | 1–6 | 11-day | 1006 | 28 | 809 | 0.62 |
| Farm | Median |  | 1–6 | 11-day | 1006 | 28 | 744 | 0.45 |
| Region | Histogram |  | 1–6 | 11-day | 1035 | 29 | 829 | 0.07 |
| Region | Histogram | x | 1–6 | 11-day | 1038 | 29 | 819 | 0.12 |
| Farm | Median |  | 1 | 11-day | 1043 | 29 | 765 | 0.58 |
| Farm | Histogram |  | 1 | 11-day | 1060 | 30 | 848 | 0.71 |
| Region | Histogram |  | 1 | 11-day | 1094 | 31 | 885 | 0.00 |
| Region | Histogram | x | 1 | 11-day | 1126 | 32 | 904 | 0.07 |
| Farm | Histogram | x | 1–6 | 11-day | 1132 | 32 | 923 | 0.64 |
| Farm | Median | x | 1 | 11-day | 1161 | 33 | 903 | 0.58 |
| Farm | Histogram | x | 1 | 11-day | 1180 | 33 | 963 | **0.75** |
| Farm | Median | x | 1–6 | 11-day | 1193 | 34 | 938 | 0.50 |
Table 3. The deviation (%) of published country-level forecasts and TCN predictions from the actual crop yield. The lowest deviation (crop-wise) at each forecast time is printed in boldface. NaN means no forecast is available.

| Crop | Month | LUKE (%) | MCYFS (%) | TCN (%) |
|---|---|---|---|---|
| Spring wheat | June | NaN | NaN | **1.0** |
|  | July | **−0.5** | NaN | 1.0 |
|  | August | **0.7** | NaN | 1.2 |
| Barley | June | NaN | **−1.6** | −5.1 |
|  | July | −9.2 | −8.3 | **−4.9** |
|  | August | −8.0 | −8.7 | **−1.8** |
| Feed barley | June | NaN | NaN | **−6.0** |
|  | July | NaN | NaN | **−3.9** |
|  | August | NaN | NaN | **−1.4** |
| Malting barley | June | NaN | NaN | **1.3** |
|  | July | NaN | NaN | **3.0** |
|  | August | NaN | NaN | **0.9** |
| Oats | June | NaN | NaN | **−4.4** |
|  | July | −3.7 | NaN | **−2.1** |
|  | August | −7.2 | NaN | **1.2** |
| Winter wheat | June | NaN | NaN | **0.9** |
|  | July | **−5.8** | NaN | 7.9 |
|  | August | −4.9 | NaN | **−0.8** |
| Rye | June | NaN | −5.9 | **−1.0** |
|  | July | −9.6 | −4.4 | **2.2** |
|  | August | −5.9 | **−4.0** | −10.7 |
| Absolute mean deviation |  | 5.5 | 5.5 | **3.0** |
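The deviations in Table 3 are signed percentage differences between a forecast and the realized yield, and the bottom row averages their absolute values over the available (non-NaN) forecasts. A sketch that reproduces the absolute mean deviation of the TCN column (the 4100/4000 example is hypothetical):

```python
def deviation_pct(forecast, actual):
    """Signed deviation of a forecast from the realized yield, in percent."""
    return 100 * (forecast - actual) / actual

# Example: a 4100 kg/ha forecast against a 4000 kg/ha realized yield
# deviates by +2.5%.
example = deviation_pct(4100, 4000)

# TCN deviations (%) from Table 3; the absolute mean deviation is ~3.0.
tcn = [1.0, 1.0, 1.2, -5.1, -4.9, -1.8, -6.0, -3.9, -1.4, 1.3, 3.0,
       0.9, -4.4, -2.1, 1.2, 0.9, 7.9, -0.8, -1.0, 2.2, -10.7]
abs_mean_dev = sum(abs(d) for d in tcn) / len(tcn)
```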
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
