Article

An Open-Source Package for Thermal and Multispectral Image Analysis for Plants in Glasshouse

1 Agriculture Victoria, Grains Innovation Park, 110 Natimuk Rd, Horsham, VIC 3400, Australia
2 AgriBio, Centre for AgriBioscience, Agriculture Victoria, 5 Ring Road, Melbourne, VIC 3083, Australia
3 School of Applied Systems Biology, La Trobe University, Melbourne, VIC 3083, Australia
* Author to whom correspondence should be addressed.
Plants 2023, 12(2), 317; https://doi.org/10.3390/plants12020317
Submission received: 13 November 2022 / Revised: 3 January 2023 / Accepted: 6 January 2023 / Published: 9 January 2023

Abstract

Advanced plant phenotyping techniques to measure biophysical traits of crops are helping to deliver improved crop varieties faster. Phenotyping of plants using different sensors for image acquisition, combined with novel computational algorithms for analysis, is increasingly being adopted to measure plant traits. Thermal and multispectral imagery provides novel opportunities to reliably phenotype crop genotypes tested for biotic and abiotic stresses under glasshouse conditions. However, image acquisition, pre-processing, and analysis must be optimised to correct for optical distortion and to perform image co-registration, radiometric rescaling, and illumination correction. This study provides a computational pipeline that addresses these issues and synchronises image acquisition from thermal and multispectral sensors. The image processing pipeline produces a processed stacked image comprising RGB, green, red, NIR, red edge, and thermal bands, containing only the pixels present in the object of interest, e.g., the plant canopy. These multimodal outputs from thermal and multispectral imagery of the plants can be analysed jointly to provide complementary insights and to develop vegetation indices effectively. This study offers a digital platform and analytics to monitor early symptoms of biotic and abiotic stresses and to screen a large number of genotypes for improved growth and productivity. The pipeline is packaged as open source and is hosted online so that it can be utilised by researchers working with similar sensors for crop phenotyping.

1. Introduction

Plant phenotyping characterises growth and biophysical traits at different plant development stages [1]. Conventional phenotyping methods (e.g., visual observations and destructive sampling) are time-consuming, prone to operator bias, and often destructive [2,3]. Image-based high-throughput plant phenotyping is a promising alternative to conventional phenotyping and has been widely used to measure plant morphological and agronomic traits. Imaging systems, including thermal and multispectral sensors, provide a non-invasive, non-destructive method for detecting emitted and reflected electromagnetic radiation from plant canopies to study plant traits such as growth, biomass accumulation, and stress symptoms [4,5].
Optical images can be used to measure different plant traits, including leaf area, height, canopy biomass, and yield [6]. Multispectral imaging is widely used to extract detailed information about crop attributes by capturing spectral data cubes consisting of two-dimensional images at different wavelengths [7]. Mathematical combinations of spectral wavebands are often used to create vegetation indices (VIs), such as the Normalized Difference Vegetation Index (NDVI) [8] and Enhanced Vegetation Index (EVI) [9], which are used to estimate green biomass [10,11]. Similarly, the Normalized Difference Red-Edge index (NDRE) [12] and Red-edge Chlorophyll Index (RCI) [13,14] are used to predict chlorophyll concentration in plant tissues [15]. Multispectral image processing and machine learning techniques have also been used to detect abiotic and biotic stresses in plants, such as root water stress [16,17], and tomato spotted wilt virus and powdery mildew [18].
Thermal imaging can be used to monitor subtle changes in the temperature of plant canopies over different growth stages and in response to environmental conditions. Unlike multispectral imaging, thermal imaging measures the radiation emitted from plants and therefore requires no illumination source [19]; hence, thermal sensing can easily be employed at night to observe diurnal changes in plants, enabling the study of critical physiological processes such as diurnal water loss due to transpiration [20,21]. Moreover, thermal images can detect subtle variations in plant temperature due to water stress, and plant temperature has been shown to increase long before the appearance of chlorotic or necrotic patches in response to disease. Currently, thermal imagery is widely used for irrigation scheduling [22], and for detecting canopy stress due to pathogens [23], heat stress, and changes in stomatal conductance [24,25].
Vieira and Ferrarezi [26] used a handheld thermal camera to determine water stress and assess the water potential of citrus plants growing under glasshouse conditions. In another study by Hu, et al. [27], thermal imaging was combined with a back propagation neural network to compare predictions of Infrared Crop Water Stress Index (ICWSI) with yield. Grant, Chaves and Jones [24] applied thermal imaging under a controlled environment to study the reaction of plants (grapevines, beans, and lupins) under irrigated and non-irrigated conditions. The study observed a significant correlation between temperature and stomatal conductance; however, it also highlighted potential limitations of thermal imaging, such as inaccuracy in temperature values, time-consuming data analysis, and a lack of reliable references to calibrate temperature.
Thermal imaging paired with multispectral imaging adds a complementary modality for studying the physiological response of plants to stress [1]. For instance, thermal images can detect subtle temperature changes, while multispectral images provide complementary information on the presence of biotic stress (observed from colour change) and on crop biomass [28,29,30]. Further, specific technical constraints of unimodal datasets can be resolved through multimodal data fusion. For example, thermal imaging is sensitive to the temperature of in-scene background targets when measuring plant temperature; the fusion of thermal and spectral imaging enables segmentation algorithms to mask background thermal noise and extract pure thermal pixels from plants [1]. Leinonen and Jones [25] combined images from thermal, red, and NIR bands to separate the plants from the background soil. Their study suggested that the co-registration of visible and thermal images, followed by the classification of pixels in the visible spectrum, is essential for accurately profiling canopy temperature. Stutsel, et al. [31] used thermal cameras to observe variation in the temperature of tomato plants under salinity stress; pixel information from the green–red vegetation index, derived from RGB images, was used to outline individual plants from the soil pixels. Bai, et al. [32] calculated the Crop Water Stress Index (CWSI) and Growth Index (GI) using an image processing pipeline created for thermal and multispectral images, and advocated that the fusion of thermal and multispectral imaging in glasshouse conditions has the potential to efficiently phenotype wheat genotypes for drought tolerance. Cucho-Padin, et al. [33] fused the IR and RGB images from a thermal camera to develop software to calculate the CWSI and Green–Red Vegetative Index (GRVI). Bulanon, et al. [34] fused thermal and visible images for orange fruit detection using the Laplacian pyramid transform and fuzzy logic, highlighting the improved efficacy of image fusion for fruit detection compared with using thermal images alone. Despite these studies involving the fusion of thermal imaging with visible imaging in agricultural applications, wide-scale adoption is still limited, to a large extent because of the unavailability of image processing tools that make the data analytics straightforward.
Thermal and multispectral imaging are used in both aerial and handheld modes to study plant growth and response to treatments under field and glasshouse conditions [35]. Aerial imagery is beneficial for covering large areas in the field, while handheld sensors are advantageous for studying individual plants, as different sections of the canopy can be analysed with higher spatial resolution and accuracy [36]. However, both multispectral and thermal imaging are affected by various environmental factors (including air temperature, humidity, haze, and illumination intensity and direction) and, therefore, require correction of raw images before further analysis [4,37]. This study aimed to (i) develop an image processing pipeline to optimise thermal and multispectral imagery under glasshouse conditions; (ii) reduce processing time by incorporating batch processing routines, including image calibration, registration, illumination adjustment, temperature rescaling, segmentation, and extraction of vegetation indices and temperature profiles; and (iii) demonstrate the efficacy of the developed package in detecting early symptoms of heat stress in wheat plants.

2. Materials and Methods

2.1. Experiment Setup

The experiment was conducted in a glasshouse at the Grains Innovation Park, Horsham, Victoria, Australia. Wheat plants were grown in pots to test and optimise the thermal and multispectral imagery. The glasshouse was maintained at 24 °C during the day and 15 °C at night, with relative humidity ranging from 55% to 65%. Imaging was performed under natural light conditions.

2.2. Integrated Sensor Platform and Imaging Setups

The experiment used a thermal (FLIR T640, Teledyne FLIR LLC, Wilsonville, OR, USA) and a multispectral (Parrot Sequoia, Parrot SA, Paris, France) sensor to capture multimodal data. The FLIR T640 sensor provides a resolution of 640 × 480 pixels and can detect temperatures from −40 to 2000 °C, with an accuracy of ±2 °C and a thermal sensitivity of less than 0.03 °C at 30 °C. The Parrot Sequoia was used to capture multispectral images with four spectral channels, comprising green, red, red-edge, and near-infrared (NIR) bands, each with a resolution of 1280 × 960 pixels (1.2 Mpix). The central wavelengths and bandwidths of the channels are green: 550 ± 40 nm, red: 660 ± 40 nm, red-edge: 735 ± 10 nm, and NIR: 790 ± 40 nm [38]. Additionally, the multispectral sensor captures a standard red–green–blue (RGB) colour image with a resolution of 4608 × 3695 pixels.
A special arrangement was required to pair the thermal and multispectral sensors and provide a systematic overlap of respective field-of-views (FoVs) and pixel-to-pixel matching between the sensors. The thermal and multispectral cameras were integrated using a magnetic mount assembly (Figure 1). The physical pairing of the two sensors ensured a fixed relative orientation, with the multispectral sensor providing a larger FoV to envelop the thermal sensor’s FoV. The irradiance sensor of the multispectral camera was positioned over the thermal camera to capture variations in local illumination levels during imaging. Finally, a white background target with 80 percent reflectivity was placed behind the object plane to act as a radiometric calibration target. This provided a spectral contrast between the plant and background, which helped to digitally extract the plant and avoid background noise. The camera setup was kept stationary, and the image acquisition was triggered after manually placing a potted plant in front of the imaging setup.

2.3. Image Processing

A processing pipeline was developed to correct geometric distortions, perform image-to-image registration, apply radiometric/illumination corrections, and segment the captured images (Figure 2). The image processing pipeline aimed to correct irregularities in the images caused by intrinsic factors (e.g., camera distortion) and ambient factors (e.g., light and temperature variations), and to segment areas of interest (i.e., the plant canopies) from the background. The pipeline also processes images in batches to simplify processing. All code was written in MATLAB to produce a library package, which is available at https://github.com/SmartSense-iHub/Thermal-and-Multispectral-Image-Analysis-Processing-Pipeline.git (accessed on 12 November 2022). The MATLAB library Natural-Order Filename Sort (natsortfiles) [39] was used to load thermal and multispectral images in sequential order.
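As a minimal sketch of this batch-loading step (the data-folder layout and variable names are illustrative assumptions, not the released package's code), the captures can be listed and naturally sorted as follows:

```matlab
% Batch-load thermal and multispectral captures in natural (human) order
% using the Natural-Order Filename Sort library [39]; folder names are
% illustrative assumptions.
tDir = dir(fullfile('data', 'thermal', '*.jpg'));
mDir = dir(fullfile('data', 'multispectral', '*.tif'));
tFiles = natsortfiles(fullfile({tDir.folder}, {tDir.name})); % img2 before img10
mFiles = natsortfiles(fullfile({mDir.folder}, {mDir.name}));
for k = 1:numel(tFiles)
    thermal = imread(tFiles{k});  % processed by the steps described below
    % ... matching multispectral capture handled likewise
end
```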

2.3.1. Correction of Radial Optical Distortions in Multispectral Images

Multispectral sensors have significant radial barrel distortion and different spatial coverage among bands that lead to misregistration effects [40]. Radial distortion occurs when light rays bend more towards the edges of a lens than at its optical centre, and it is inversely proportional to the size of the lens; it arises when the FoV of the fore-optics lens is greater than the size of the image sensor. The inward or outward displacement of light rays from their ideal locations before hitting the sensor causes straight lines in the image to render as arcs [41,42]. Radial distortion is characterised by the transverse magnification $M = I_h / O_h$, i.e., the image height ($I_h$) divided by the object height ($O_h$), as a function of the off-axis image distance ($r$). An increase in M with r results in pincushion distortion, whereas barrel distortion is observed when M decreases with r [41] (Figure 3a).
Camera calibration involves determining the intrinsic, extrinsic, and distortion parameters. Extrinsic parameters transfer the 3D world coordinates $[X\ Y\ Z]$ to the 3D camera coordinates $[X_c\ Y_c\ Z_c]$ [43]. The extrinsic parameters consist of Rotation (R) and Translation (T) (Figure 3b).
Three intrinsic parameters—the principal point (optical centre), focal length, and skew coefficient—convert 3D camera coordinates to 2D pixel coordinates (x, y) [43,44]. The intrinsic parameters can be represented in a matrix as:

$$K = \begin{bmatrix} f_x & 0 & 0 \\ s & f_y & 0 \\ c_x & c_y & 1 \end{bmatrix} \quad (1)$$

where $[c_x\ c_y]$ is the optical centre in pixels; $[f_x\ f_y]$ is the focal length in pixels, $f = F/P$, where $F$ is the focal length in world units (mm) and $P$ is the pixel size in world units; and the skew is $s = f_x \tan\alpha$, where $\alpha$ is the angle between the image axes.
However, after applying the extrinsic and intrinsic transformations, radial distortion causes the camera to capture distorted pixel coordinates $(x_d, y_d)$ instead of the ideal points (x, y). The relation between the distorted and undistorted points is:

$$x_d = x \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) \quad (2)$$
$$y_d = y \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) \quad (3)$$

where x and y are the 2D undistorted pixel coordinates after the application of the intrinsic projection, $r^2 = x^2 + y^2$, and $k_1, k_2, k_3$ are the radial distortion coefficients of the lens.
This study utilises the traditional checkerboard corner detection method [45] to remove distortion. A 5 × 8 grid checkerboard with 50 mm squares was used for this process. For distortion correction, images of the checkerboard were taken from different distances and angles using the multispectral camera. The world points were detected from the corners to determine the extrinsic parameters for each band (Figure 4a,b).
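A minimal sketch of this calibration step for a single band, using MATLAB Computer Vision Toolbox functions, is shown below; the folder layout and variable names are illustrative assumptions, not the released package's code:

```matlab
% Estimate camera parameters from checkerboard images and undistort a band.
d = dir(fullfile('calib', 'green', '*.tif'));        % assumed folder layout
files = natsortfiles(fullfile({d.folder}, {d.name}));
[imagePoints, boardSize] = detectCheckerboardPoints(files);
worldPoints = generateCheckerboardPoints(boardSize, 50);  % 50 mm squares
I = imread(files{1});
params = estimateCameraParameters(imagePoints, worldPoints, ...
    'ImageSize', [size(I, 1), size(I, 2)]);
fprintf('Mean reprojection error: %.2f px\n', params.MeanReprojectionError);
undistorted = undistortImage(I, params);  % apply per image, per band
```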

2.3.2. Registration of Optical, Multispectral, and Thermal Images

Image registration was used to align multiple images with geometric shifts to create a composite view and improve the signal-to-noise ratio. Image registration matches two or more images acquired from different viewpoints, sensors, times, or FoVs to extract valuable information otherwise unobtainable from the individual images. The purpose of image registration in this research was to align and stack the images from the multispectral and thermal cameras, which have different FoVs (Figure 5a,b) and sensor plane offsets. The images were aligned for analysis using a two-step image registration process: a coarse registration followed by a fine registration.
Coarse registration: The coarse registration process involves identifying discernible features using key point descriptors, filtering the matched features using a robust estimator such as M-estimator Sample Consensus (MSAC) [46], computing a geometric transformation from the filtered image features, and applying the geometric transformation to the image pair for registration [47]. In this implementation, image feature-based registration [48] was applied to coarsely register the optical (RGB), multispectral, and thermal images. Image features are discernible points that are common among the two or more images to be aligned.
Typically, checkerboards are used so that automated corner detection (using key point descriptors) and matching algorithms (using geometric transformation) can operate on optical input images, e.g., to correct shifts between different spectral bands [11]. However, it is difficult to detect black-and-white checkerboards in thermal images, as the temperature of the white and black marks remains the same. Thus, a set of geometric shapes cut out of a corflute sheet (20 × 20 cm) was used to provide discernible, common reference feature points. The cut-out sheet was placed in front of a back wall at a higher temperature, and the setup was arranged so that the geometric corners could be detected as image features across the optical, multispectral, and thermal images: the temperature difference renders the corners of the geometric shapes in the thermal image, and the colour difference renders them in the optical and multispectral images (Figure 5a,b).
The coarse registration step was implemented to provide rigid transformation, which varies with the orientation of the cameras and the distance of imaging. The rigid transformation matrix includes translation and rotation, and nonrigid matrix includes shear and scale with the matrix representations, as shown in Table 1.
Translation, scale, and shear measure the displacement, scale factor, and shear along the x- and y-axes, respectively, and the angle of rotation about the origin is denoted by θ. These parameters are combined, depending on the position and orientation of the two image pairs, to create a geometric transformation matrix. Figure 6a shows how a moving image is projected into the FoV of the fixed image. The matched points of the fixed and moving image pairs were used to determine the translation, shear, scale, and rotation angle between the two images. These four matrices are concatenated by matrix multiplication to obtain a geometric transformation matrix, which was used to project the moving image into the frame of the fixed image. The coarse registration resulted in fixed and projected image pairs that nearly overlap, with minute shifts represented as distances in pixels (Figure 6b).
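A hedged sketch of the coarse step is given below, using MATLAB's MSAC-based geometric transform estimator; the corner detector, file names, and the choice of a 'similarity' (rigid plus scale) transform are illustrative assumptions:

```matlab
% Coarse, feature-based registration: detect corner features at the
% geometric cut-outs, match them, and fit a transform to the inlier pairs.
fixed  = im2gray(imread('rgb_band.png'));      % fixed image (larger FoV)
moving = im2gray(imread('thermal_band.png'));  % moving image
ptsF = detectHarrisFeatures(fixed);
ptsM = detectHarrisFeatures(moving);
[featF, validF] = extractFeatures(fixed, ptsF);
[featM, validM] = extractFeatures(moving, ptsM);
pairs = matchFeatures(featM, featF);
matchedM = validM(pairs(:, 1));
matchedF = validF(pairs(:, 2));
% estimateGeometricTransform filters outlier matches with MSAC [46]
tform = estimateGeometricTransform(matchedM, matchedF, 'similarity');
coarse = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
```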
Fine registration: A fine registration was applied using intensity-based registration [49,50] to fix the residual misalignment after coarse registration. The misalignment may be caused by the difference in image capture times between the two sensors combined with movement of the plant's canopy. Intensity-based registration aligns images based on their pixel intensity levels, overcoming local structural differences. An image similarity metric (the Mattes mutual information algorithm) [51] and a one-plus-one evolutionary optimiser [52] were used for fine registration. The similarity metric quantifies the statistical closeness of the pixel-level intensity information of two images, and the optimiser iteratively adjusts the transformation to optimise this metric, thereby maximising overlap (Figure 7b). The fine registration step uses a nonrigid geometric transformation model involving shear (Sh) and scale (S) in addition to Translation (T) and Rotation (R), as in Table 1. This nonrigid transformation model enables an organic, fine-level alignment of plant tissues such as stems and leaves, and to a large extent can adjust for minor shaking of the plant due to air movement.
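The fine step can be sketched with MATLAB's multimodal registration configuration, which pairs exactly the Mattes mutual information metric and one-plus-one evolutionary optimiser named above; the iteration count is an illustrative assumption:

```matlab
% Fine, intensity-based registration refining the coarsely aligned image.
[optimizer, metric] = imregconfig('multimodal');  % one-plus-one + Mattes MI
optimizer.MaximumIterations = 300;                % assumed setting
fineTform = imregtform(coarse, fixed, 'affine', optimizer, metric); % shear+scale
registered = imwarp(coarse, fineTform, 'OutputView', imref2d(size(fixed)));
```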

2.3.3. Radiometric Rescaling of Thermal Images

Radiometric rescaling is used to convert Digital Numbers (DN) to corresponding parametric values. In captured thermal images, temperature values remain scaled as 8-bit DN equivalents in a range of 0–255 (Figure 8a). The first step in converting the DNs to temperature values was to extract the maximum (maxT) and minimum (minT) temperatures, which are embedded at the top and bottom, respectively, of the temperature scale in the captured thermal images; an Optical Character Recognition (OCR) algorithm was used to extract the maxT and minT levels. Secondly, the maximum (maxDN) and minimum (minDN) DN values were determined from each image. Finally, a standard radiometric rescaling model (Equation (4)) was used to convert the DN values to temperature values (Figure 8b).

$$T = \text{minT} + \frac{\text{maxT} - \text{minT}}{\text{maxDN} - \text{minDN}} \times (\text{DN} - \text{minDN}) \quad (4)$$
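A sketch of this step using MATLAB's ocr function is shown below; the regions of interest over the scale bar are hypothetical and depend on where the camera overlays the temperature scale on the saved image:

```matlab
% Read scale limits via OCR, then rescale 8-bit DNs to temperatures (Eq. (4)).
I = im2gray(imread('thermal_0001.jpg'));
resMax = ocr(I, [600, 15, 38, 22]);   % ROI over the scale maximum (assumed)
resMin = ocr(I, [600, 440, 38, 22]);  % ROI over the scale minimum (assumed)
maxT = str2double(resMax.Text);
minT = str2double(resMin.Text);
DN = double(I);
minDN = min(DN(:)); maxDN = max(DN(:));
T = minT + (maxT - minT) ./ (maxDN - minDN) .* (DN - minDN);  % Equation (4)
```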

2.3.4. Gradient Removal and Illumination Correction of Multispectral Images

Illumination variation is caused by (i) non-uniformity in the spatial distribution of radiation on the object plane due to the direction of the incident radiation, and (ii) changes in the intensity of the incident radiation through time, i.e., for images taken at different time points [53,54]. Inaccurate retrieval of reflectance images and inaccuracies in image analysis and segmentation are among the main problems associated with illumination variation [55].
Traditional ways of correcting illumination involve creating an imaging chamber with perfect light conditions, which is expensive, time-consuming, and not feasible in all glasshouse settings. The method used in this research is suitable for a glasshouse environment, easy to replicate, and cost-effective. It was carried out in two steps: gradient correction followed by illumination correction. A gradient variation is a change in the colour or intensity of an image in a certain direction; it is caused by a directional light source during image acquisition and can produce significant variation in pixel values.
To correct the gradient in the image, first, an interpolated gradient reference image ($G_{ref}$) was created. A 4 × 4 pixel Region of Interest (ROI) was selected from each of the four corners of the original image, and the average illumination of these four corners was used to interpolate $G_{ref}$ [54]. The difference between $G_{ref}$ and the minimum pixel value of $G_{ref}$ was calculated as $G_{dif}$ (Equation (5)). Finally, the corrected gradient ($G_{cor}$) was determined by subtracting $G_{dif}$ from the original image $I_{org}$ (Equation (6)).

$$G_{dif} = G_{ref} - \min(G_{ref}) \quad (5)$$
$$G_{cor} = I_{org} - G_{dif} \quad (6)$$
After the gradient correction, a correction for temporal variation in illumination levels was applied to the corrected image. A dark reference image ($I_{dark}$) was taken by covering the lenses of the multispectral camera. An interpolated image was created from the four corners of the gradient-corrected image ($G_{cor}$) to serve as the white reference image ($I_{white}$). The final corrected image $I_{cor}$ was derived from Equation (7): the dark reference was subtracted from the gradient-corrected data, the result was divided by the difference between the white and dark reference images, and the output was multiplied by the spectral reflectance factor (ref) of the background band, which was 80%. Figure 9a,b show the results after gradient correction and illumination correction; the output after illumination correction facilitates the segmentation process.
$$I_{cor} = \frac{G_{cor} - I_{dark}}{I_{white} - I_{dark}} \times ref \quad (7)$$
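The two-step correction can be sketched as follows (Equations (5)–(7)); the file names are illustrative assumptions, while the 4 × 4 corner ROIs and 80% reflectance factor follow the text:

```matlab
% Corner-interpolated gradient removal, then dark/white illumination correction.
I = im2double(imread('band_red.tif'));
[h, w] = size(I);
cornerMean = @(img, r, c) mean(img(r:r+3, c:c+3), 'all');  % 4x4-pixel corner ROI
corners = [cornerMean(I, 1, 1),   cornerMean(I, 1, w-3); ...
           cornerMean(I, h-3, 1), cornerMean(I, h-3, w-3)];
Gref = imresize(corners, [h, w], 'bilinear');  % interpolated gradient reference
Gdif = Gref - min(Gref(:));                    % Equation (5)
Gcor = I - Gdif;                               % Equation (6)
Idark = im2double(imread('dark_reference.tif'));  % lens-covered capture
cornersW = [cornerMean(Gcor, 1, 1),   cornerMean(Gcor, 1, w-3); ...
            cornerMean(Gcor, h-3, 1), cornerMean(Gcor, h-3, w-3)];
Iwhite = imresize(cornersW, [h, w], 'bilinear');    % white reference image
Icor = (Gcor - Idark) ./ (Iwhite - Idark) .* 0.80;  % Equation (7), ref = 80%
```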

2.3.5. Segmentation to Separate the Plant from the Background

Segmentation is a crucial precursor in image analysis for plants. Image segmentation helps remove the nonvegetative parts, which improves the extraction of spectral and temperature profiles of different parts of plants, such as leaves, stems, and heads [56,57].
For this experiment, since the spatial resolution of the RGB image was higher than that of the thermal image, the RGB image was used to create a foreground mask for the plant. This segmentation mask was used to extract the plant from the background in the thermal and multispectral images [58]. The image segmentation was carried out using an adaptive thresholding method, which has an advantage over fixed thresholding in that it provides an optimal threshold for each pixel based on the intensity of its neighbouring pixels. Additionally, adaptive thresholding solves the issue of shadow pixels, which are incorrectly considered parts of a plant during segmentation in a global thresholding approach [59]. Figure 10a shows the foreground mask and Figure 10b the segmented output image.
The image processing pipeline's output was an eight-band stacked image (Figure 11). The bands were stacked in the sequence RGB, green, red, NIR, red-edge, and thermal. Each stacked image retains only the pixels associated with the plant canopy; the remaining pixels were set to Not a Number (NaN), representing undefined pixel values. The final eight-band image facilitates the simultaneous analysis and comparison of plant canopies under different biotic and abiotic stresses at different levels.
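A minimal sketch of the masking and stacking steps is given below, assuming the registered, corrected single-band images are held in variables green, red, nir, rededge, and thermalT, and that the plant appears darker than the 80%-reflective white board; all names are illustrative:

```matlab
% Adaptive-threshold segmentation of the RGB image, then NaN masking of the
% eight-band stack so only canopy pixels remain.
rgb = imread('registered_rgb.png');
gray = im2gray(rgb);
Tloc = adaptthresh(gray, 0.5, 'ForegroundPolarity', 'dark');
mask = ~imbinarize(gray, Tloc);  % 1 = plant canopy, 0 = white background
mask = bwareaopen(mask, 50);     % drop small noise regions
stack = cat(3, im2double(rgb), green, red, nir, rededge, thermalT);
stack(repmat(~mask, [1, 1, size(stack, 3)])) = NaN;  % keep canopy pixels only
```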

2.3.6. Vegetation Indices

Vegetation indices were calculated from the eight-band stacked image by using mathematical combinations of spectral bands to maximise the information obtained from the vegetation while minimising noise caused by atmospheric effects or reflectance from soil [60]. Biotic and abiotic stresses can strongly affect the biophysical properties of plants and are correlated with the VIs of crops [61]. NDVI is one of the most commonly used indices, combining reflectance from NIR and red light; a low NDVI value indicates the presence of stress in plants caused by biotic or abiotic factors [62,63]. The value of NDVI ranges from −1 to 1, where higher values represent healthy, dense vegetation [64] (Figure 12a). The red-edge Chlorophyll Index (CIre) is calculated from the NIR and red-edge wavelengths (Figure 12b); it is useful for observing small variations in chlorophyll content, since a linear relationship exists between the reflectance of NIR and the inverse of the red-edge band [65]. Other VIs that can be associated with the presence of pathogens relate to plant water content and chlorophyll pigmentation. The Triangular Vegetation Index (TVI) [66], which estimates the radiant energy absorbed by chlorophyll, has been used to classify healthy and unhealthy crops. NDRE is calculated similarly to NDVI but uses the red-edge instead of the red band [67]; it is used to identify healthy plants during the mid to late stages of plant growth. Red-edge-modified indices improve on the standard vegetation indices, since red-edge light is highly sensitive to mid and high chlorophyll contents [30]. The package generates the VIs listed in Table 2; however, users can generate any VI composed of the green, red, red-edge, and NIR wavelengths.
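As a sketch of how index maps follow from the masked stack (band positions follow the order stated above; NaN background pixels propagate through the arithmetic, so the indices cover only the canopy):

```matlab
% Example vegetation-index maps from the masked eight-band stack.
green   = stack(:, :, 4);  red     = stack(:, :, 5);
nir     = stack(:, :, 6);  rededge = stack(:, :, 7);
ndvi = (nir - red) ./ (nir + red);          % NDVI [8]
ndre = (nir - rededge) ./ (nir + rededge);  % NDRE [12]
cire = nir ./ rededge - 1;                  % red-edge chlorophyll index [13,14]
meanNDVI = mean(ndvi, 'all', 'omitnan');    % per-plant summary statistic
```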

3. Results

The results of the study are split into three subsections, showing the performance of the three main methods used in the image processing pipeline: radial distortion correction, image registration, and segmentation.

3.1. Correction of Radial Optical Distortion

The distortion parameters were used to correct the optical distortion of the crop images. The reprojected points (Figure 13) were translated with an overall mean error of 0.21 pixels, with the highest reprojection error of 0.6 pixels for an individual image (NIR band). The reprojection error quantifies the accuracy of the undistorted image; it is the distance between a pattern key point detected in an undistorted image and the corresponding reference point projected onto that image. The mean reprojection error for each band is listed in Table 3.

3.2. Image Registration

Mattes Mutual Information (MMI) [51] was calculated to determine the accuracy of the registration process. The MMI measures how related one set of image pixels is to another; a higher MMI implies lower joint entropy and better alignment. In Figure 14, the bars represent MMI, with the image numbers shown on the x-axis. The MMI values were greater than 0.9, indicating the images were well aligned.

3.3. Segmentation

Since there were no ground-truth data for the images, the Root Mean Square Error (RMSE) and Structural Similarity Index Measure (SSIM) [71] were calculated to assess the accuracy of the image segmentation process. Figure 15a presents the RMSE values; the average RMSE between the segmented and original images was below 0.8.
The SSIM map was also used to validate the accuracy of the segmentation process; regions with large local SSIM values appear as bright pixels, representing the common regions between the two images (Figure 15b).
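A sketch of these quality checks using MATLAB's built-in ssim function is shown below; the file names are illustrative, and the images are assumed to be registered and scaled to [0, 1]:

```matlab
% RMSE and structural similarity between the original and segmented images.
A = im2double(im2gray(imread('original_rgb.png')));
B = im2double(im2gray(imread('segmented_rgb.png')));
rmseVal = sqrt(mean((A(:) - B(:)).^2));  % root-mean-square error
[ssimVal, ssimMap] = ssim(B, A);         % global score and local map
figure; imshow(ssimMap);
title(sprintf('SSIM map (global SSIM = %.3f)', ssimVal));
```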

4. Discussion

Digital imaging is pivotal in high-throughput plant phenotyping to characterise morpho-physiological traits reliably and efficiently [1]. For imaging, thermal and multispectral sensors provide essential modalities, i.e., thermal cameras detect the infrared energy emitted from the object to generate a digital image, whereas multispectral cameras convert the light reflected from the object to a visual image. The different modalities of these two cameras can be used in conjunction to add dimensionality to the information [4,5].
An objective of this study was to develop an image processing tool to fuse information obtained from thermal and multispectral images for application in plant research. For image fusion, special care should be taken that the two sensors are aligned correctly to provide uniformity in the FoV of the images. Data correction is often one of the most mundane but critical tasks in image processing, and it is essential to correct the various discrepancies in images (distortion, illumination, contrast, etc.) for accurate and swift analysis [72]. This study reports a sequential image processing pipeline to help researchers effectively utilise thermal and multispectral cameras for plant phenotyping. The steps involved in the pipeline are correction of optical distortion, image co-registration, thermal radiometric scaling, background illumination correction, and segmentation.
Radial barrel distortion is a common issue, especially with sensors having a small focal length [73], and is observed in the green, red, NIR, and red-edge images taken by the multispectral camera [40]. The distortion correction step resolved this so that the corrected images could be used in the co-registration and segmentation processes. In this study, a 5 × 8 grid checkerboard pattern with a 50 mm box size was used to calculate the intrinsic and extrinsic parameters of the multispectral camera; the reprojection error of all bands was below 0.29 pixels except for the red-edge band (0.6). A similar approach was applied by Das Choudhury, et al. [74] to correct images taken from a Parrot Sequoia camera, using a 28 × 28 mm box size and achieving an average error of less than 0.3 pixels.
Image co-registration was used to fuse the thermal and multispectral images, the idea being to extract information from both cameras. The co-registration was carried out in two steps: feature-based and intensity-based. The feature-based transformation was used to scale and transform the thermal image to the RGB image using a transformation matrix. However, the feature-based transformation is rigid and varies with the distance between the target and the camera [75]. Hence, the transformation matrix was calculated at variable distances and used in the coarse registration step for each imaging setup. To avoid image misalignment caused by movement of the plant canopy, the imaging was performed in an area of the glasshouse without significant air movement, and image acquisition between the thermal and multispectral cameras was closely synchronised.
Radiometric scaling of the thermal images was performed because the pixel values are stored as DN values rather than temperature values. Most studies have not explained how the DN values in thermal images were converted into actual temperature values [33]. The only temperature values provided in the thermal image are the minimum and maximum temperatures recorded for the entire image, displayed at the right side of the image. In this study, the maximum and minimum temperature values were extracted with a text recognition algorithm, Optical Character Recognition (OCR) [76], followed by a radiometric scaling formula to derive each pixel's temperature from its DN value. In a recent study, measures were taken to correct canopy temperature that may be affected by emission from the surroundings [33]. This approach was not applied in our experiment, since it was carried out in a controlled environment, with the FoV of the thermal camera on the canopy, and a fair distance was maintained between the background and the plants. The thermal camera used here captures a radiometric thermogram and saves the file as a radiometric JPG image (RJPG), which allows adjustment of the object distance, reflected temperature, emissivity, and surrounding temperature within the camera settings [77].
Segmentation of an object from the background can be challenging due to noise in the image beyond the object of interest [78,79]. A white background was used to help remove background noise and facilitate the segmentation process. Since the intensity and resolution of the RGB image were greater than those of the thermal image and other bands, the RGB image was used to create a foreground mask that was applied to the remaining images. In a study to identify water stress, Leinonen and Jones [25] likewise used visible images to classify vegetative from nonvegetative pixels and extract temperature values of only the vegetation. Adaptive (local) thresholding was utilised for segmentation; this method is useful under nonuniform lighting conditions and solves the problem of shadowing. In another study, a local thresholding method was also utilised to segment the maize canopy from a white background [74].
The image processing pipeline was designed specifically for controlled environment conditions, since, for field conditions, image acquisition has moved from handheld cameras to sensors mounted on unmanned aerial vehicles. Other segmentation methods such as Mask R-CNN [80] and semantic segmentation [81] can be implemented if the setup is to be used in field conditions. Additionally, measures can be applied to correct temperature that may be affected by the thermal emissions and reflections from surrounding objects [33].
Although FLIR thermal cameras come with licensed software for scaling thermal images, users are limited to canopy temperature values alone. A recent study fused RGB and thermal images from a FLIR E60 thermal camera to inform water management in potato [33]; however, it was limited to calculating CWSI from the thermal sensor and GRVI from the RGB sensor. Researchers are keen to use multiple sensors to study crop phenological changes during stress; however, there is a shortage of open-source packages that help users correct and combine information from thermal and multispectral images at the pixel level. Our study provides a package that combines the information from thermal and multispectral sensors. The package generates stacked images with eight bands in the order RGB (1st–3rd bands), green (4th), red (5th), NIR (6th), red-edge (7th), and thermal (8th), containing only the pixels of the plant, which makes it easy to compare bands and compute different VIs. Importantly, the image processing pipeline allows batch processing of images to save computational time and effort. The package enables users to create well-known indices from the multispectral bands and to create new indices combining both multispectral and thermal imagery.

5. Conclusions

An image processing pipeline was established and packaged to analyse multispectral and thermal images captured in a glasshouse environment. The automated pipeline fixes radial distortion in multispectral images, co-registers the thermal and multispectral images, normalises variation in illumination across the multispectral images, and classifies canopy pixels against background noise. The final output of the pipeline is an eight-band stacked image retaining only the canopy pixels in each band, which can be used to create vegetation indices. The process is efficient, as images are processed and analysed in batches across all bands. The image processing pipeline will be helpful for researchers working with thermal and multispectral imaging in glasshouse conditions.

Author Contributions

N.S., B.P.B. and S.K. conceived the experiment. N.S. conducted the experiments, performed the data analysis, and wrote the original draft. S.K. and M.H. provided project administration, supervision, and funding acquisition. S.K., B.P.B. and M.H. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are freely shared on Google Drive and can be accessed via the following link: https://drive.google.com/file/d/1VSqRu5CUZhyd3MF23kdRjqrtRke7sbJU/view?usp=share_link.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, L.; Zhang, Q.; Huang, D. A Review of Imaging Techniques for Plant Phenotyping. Sensors 2014, 14, 20078–20111.
2. Riaz, M.W.; Yang, L.; Yousaf, M.I.; Sami, A.; Mei, X.D.; Shah, L.; Rehman, S.; Xue, L.; Si, H.; Ma, C. Effects of Heat Stress on Growth, Physiology of Plants, Yield and Grain Quality of Different Spring Wheat (Triticum aestivum L.) Genotypes. Sustainability 2021, 13, 2972.
3. Mohanty, N. Photosynthetic characteristics and enzymatic antioxidant capacity of flag leaf and the grain yield in two cultivars of Triticum aestivum (L.) exposed to warmer growth conditions. J. Plant Physiol. 2003, 160, 71–74.
4. Pineda, M.; Barón, M.; Pérez-Bueno, M. Thermal Imaging for Plant Stress Detection and Phenotyping. Remote Sens. 2020, 13, 68.
5. Xu, R.; Li, C.; Paterson, A. Multispectral imaging and unmanned aerial systems for cotton plant phenotyping. PLoS ONE 2019, 14, e0205083.
6. Waiphara, P.; Bourgenot, C.; Compton, L.J.; Prashar, A. Optical Imaging Resources for Crop Phenotyping and Stress Detection. Methods Mol. Biol. 2022, 2494, 255–265.
7. Bian, L.; Suo, J.; Situ, G.; Li, Z.; Fan, J.; Chen, F.; Dai, Q. Multispectral imaging using a single bucket detector. Sci. Rep. 2016, 6, 24752.
8. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
9. Huete, A.R.; Liu, H.Q.; Batchily, K.; Van Leeuwen, W. A comparison of vegetation indices over a global set of TM images for EOS-MODIS. Remote Sens. Environ. 1997, 59, 440–451.
10. Khaple, A.K.; Devagiri, G.M.; Veerabhadraswamy, N.; Babu, S.; Mishra, S.B. Chapter 6—Vegetation biomass and carbon stock assessment using geospatial approach. In Forest Resources Resilience and Conflicts; Kumar Shit, P., Pourghasemi, H.R., Adhikary, P.P., Bhunia, G.S., Sati, V.P., Eds.; Elsevier: Amsterdam, The Netherlands, 2021; pp. 77–91.
11. Banerjee, B.P.; Spangenberg, G.; Kant, S. Fusion of Spectral and Structural Information from Aerial Images for Improved Biomass Estimation. Remote Sens. 2020, 12, 3164.
12. Gitelson, A.; Merzlyak, M.N. Quantitative estimation of chlorophyll-a using reflectance spectra: Experiments with autumn chestnut and maple leaves. J. Photochem. Photobiol. B Biol. 1994, 22, 247–252.
13. Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30, 1248.
14. Gitelson, A.A.; Keydan, G.P.; Merzlyak, M.N. Three-band model for noninvasive estimation of chlorophyll, carotenoids, and anthocyanin contents in higher plant leaves. Geophys. Res. Lett. 2006, 33, L11402.
15. Ju, C.-H.; Tian, Y.-C.; Yao, X.; Cao, W.-X.; Zhu, Y.; Hannaway, D. Estimating Leaf Chlorophyll Content Using Red Edge Parameters. Pedosphere 2010, 20, 633–644.
16. Melo, L.L.d.; Melo, V.G.M.L.d.; Marques, P.A.A.; Frizzone, J.A.; Coelho, R.D.; Romero, R.A.F.; Barros, T.H.d.S. Deep learning for identification of water deficits in sugarcane based on thermal images. Agric. Water Manag. 2022, 272, 107820.
17. Chandel, N.; Chakraborty, S.; Rajwade, Y.; Dubey, K.; Tiwari, M.K.; Jat, D. Identifying crop water stress using deep learning models. Neural Comput. Appl. 2021, 33, 5353–5367.
18. Schor, N.; Berman, S.; Dombrovsky, A.; Elad, Y.; Ignat, T.; Bechar, A. Development of a robotic detection system for greenhouse pepper plant diseases. Precis. Agric. 2017, 18, 394–409.
19. Ishimwe, R.; Abutaleb, K.; Ahmed, F. Applications of Thermal Imaging in Agriculture—A Review. Adv. Remote Sens. 2014, 3, 128–140.
20. Fricke, W. Night-Time Transpiration—Favouring Growth? Trends Plant Sci. 2019, 24, 311–317.
21. Gil-Pérez, B.; Zarco-Tejada, P.; Correa-Guimaraes, A.; Relea-Gangas, E.; Gracia, L.M.; Hernández-Navarro, S.; Sanz Requena, J.F.; Berjón, A.; Martín-Gil, J. Remote sensing detection of nutrient uptake in vineyards using narrow-band hyperspectral imagery. Vitis 2010, 49, 167–173.
22. Parihar, G.; Saha, S.; Giri, L.I. Application of infrared thermography for irrigation scheduling of horticulture plants. Smart Agric. Technol. 2021, 1, 100021.
23. Mutka, A.M.; Bart, R.S. Image-based phenotyping of plant disease symptoms. Front. Plant Sci. 2015, 5, 734.
24. Grant, O.; Chaves, M.; Jones, H. Optimizing thermal imaging as a technique for detecting stomatal closure induced by drought stress under greenhouse conditions. Physiol. Plant. 2006, 127, 507–518.
25. Leinonen, I.; Jones, H.G. Combining thermal and visible imagery for estimating canopy temperature and identifying plant stress. J. Exp. Bot. 2004, 55, 1423–1431.
26. Vieira, G.H.S.; Ferrarezi, R.S. Use of Thermal Imaging to Assess Water Status in Citrus Plants in Greenhouses. Horticulturae 2021, 7, 249.
27. Hu, Z.; Wang, Y.; Shamaila, Z.; Zeng, A.; Song, J.; Liu, Y.; Wolfram, S.; Joachim, M.; He, X. Application of BP Neural Network in Predicting Winter Wheat Yield Based on Thermography Technology. Spectrosc. Spectr. Anal. 2013, 33, 1587–1592.
28. Bhandari, M.; Xue, Q.; Liu, S.; Stewart, B.A.; Rudd, J.C.; Pokhrel, P.; Blaser, B.; Jessup, K.; Baker, J. Thermal imaging to evaluate wheat genotypes under dryland conditions. Agrosystems Geosci. Environ. 2021, 4, e20152.
29. Berger, K.; Machwitz, M.; Kycko, M.; Kefauver, S.C.; Van Wittenberghe, S.; Gerhards, M.; Verrelst, J.; Atzberger, C.; van der Tol, C.; Damm, A.; et al. Multi-sensor spectral synergies for crop stress detection and monitoring in the optical domain: A review. Remote Sens. Environ. 2022, 280, 113198.
30. Galieni, A.; D'Ascenzo, N.; Stagnari, F.; Pagnani, G.; Xie, Q.; Pisante, M. Past and Future of Plant Stress Detection: An Overview From Remote Sensing to Positron Emission Tomography. Front. Plant Sci. 2021, 11, 1975.
31. Stutsel, B.; Johansen, K.; Malbéteau, Y.M.; McCabe, M.F. Detecting Plant Stress Using Thermal and Optical Imagery From an Unoccupied Aerial Vehicle. Front. Plant Sci. 2021, 12, 2225.
32. Bai, G.F.; Blecha, S.; Ge, Y.; Walia, H.; Phansak, P. Characterizing Wheat Response to Water Limitation Using Multispectral and Thermal Imaging. Trans. ASABE 2017, 60, 1457–1466.
33. Cucho-Padin, G.; Rinza Díaz, J.; Ninanya Tantavilca, J.; Loayza, H.; Roberto, Q.; Ramirez, D. Development of an Open-Source Thermal Image Processing Software for Improving Irrigation Management in Potato Crops (Solanum tuberosum L.). Sensors 2020, 20, 472.
34. Bulanon, D.M.; Burks, T.F.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng. 2009, 103, 12–22.
35. Rosenqvist, E.; Großkinsky, D.K.; Ottosen, C.-O.; van de Zedde, R. The Phenotyping Dilemma—The Challenges of a Diversified Phenotyping Community. Front. Plant Sci. 2019, 10, 163.
36. Jiménez-Bello, M.; Ballester, C.; Castel, J.; Intrigliolo, D. Development and validation of an automatic thermal imaging process for assessing plant water status. Agric. Water Manag. 2011, 98, 1497–1504.
37. Kelcey, J.; Lucieer, A. Sensor Correction of a 6-Band Multispectral Imaging Sensor for UAV Remote Sensing. Remote Sens. 2012, 4, 1462–1493.
38. Lu, H.; Fan, T.; Ghimire, P.; Deng, L. Experimental Evaluation and Consistency Comparison of UAV Multispectral Minisensors. Remote Sens. 2020, 12, 2542.
39. Stephen23. Natural-Order Filename Sort. MATLAB Central File Exchange. 2022. Available online: https://www.mathworks.com/matlabcentral/fileexchange/47434-natural-order-filename-sort (accessed on 6 November 2022).
40. Jhan, J.-P.; Rau, J.; Haala, N.; Cramer, M. Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 157–163.
41. Choi, K.; Lam, E.; Wong, K. Automatic source camera identification using the intrinsic lens radial distortion. Opt. Express 2006, 14, 11551–11565.
42. Wu, F.; Wang, X. Correction of image radial distortion based on division model. Opt. Eng. 2017, 56, 013108.
43. Heikkilä, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1106–1112.
44. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
45. Wang, G.; Zheng, H.; Zhang, X. A Robust Checkerboard Corner Detection Method for Camera Calibration Based on Improved YOLOX. Front. Phys. 2022, 9, 828.
46. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
47. Banerjee, B.; Raval, S.; Cullen, P.J. Alignment of UAV-hyperspectral bands using keypoint descriptors in a spectrally complex environment. Remote Sens. Lett. 2018, 9, 524–533.
48. Chui, H.; Win, L.; Schultz, R.; Duncan, J.S.; Rangarajan, A. A unified non-rigid feature registration method for brain mapping. Med. Image Anal. 2003, 7, 113–130.
49. Myronenko, A.; Song, X. Intensity-based image registration by minimizing residual complexity. IEEE Trans. Med. Imaging 2010, 29, 1882–1891.
50. Aghajani, K.; Yousefpour, R.; Shirpour, M.; Manzuri, M.T. Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes. Biomed. Signal Process. Control 2016, 25, 35–45.
51. Aylward, S.; Jomier, J.; Barre, S.; Davis, B.; Ibanez, L. Optimizing ITK's Registration Methods for Multi-processor, Shared-Memory Systems. Insight J. 2007.
52. Keikhosravi, A.; Li, B.; Liu, Y.; Eliceiri, K.W. Intensity-based registration of bright-field and second-harmonic generation images of histopathology tissue sections. Biomed. Opt. Express 2020, 11, 160–173.
53. Dey, N. Uneven Illumination Correction of Digital Images: A Survey of the State-of-the-Art. Optik 2019, 183, 483–495.
54. Banerjee, B.P.; Joshi, S.; Thoday-Kennedy, E.; Pasam, R.K.; Tibbits, J.; Hayden, M.; Spangenberg, G.; Kant, S. High-throughput phenotyping using digital and hyperspectral imaging-derived biomarkers for genotypic nitrogen response. J. Exp. Bot. 2020, 71, 4604–4615.
55. Mishra, P.; Lohumi, S.; Ahmad Khan, H.; Nordon, A. Close-range hyperspectral imaging of whole plants for digital phenotyping: Recent applications and illumination correction approaches. Comput. Electron. Agric. 2020, 178, 105780.
56. Roth, L.; Aasen, H.; Walter, A.; Liebisch, F. Extracting leaf area index using viewing geometry effects—A new perspective on high-resolution unmanned aerial system photography. ISPRS J. Photogramm. Remote Sens. 2018, 141, 161–175.
57. Weyler, J.; Magistri, F.; Seitz, P.; Behley, J.; Stachniss, C. In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 2968–2977.
58. Mario, S.; Madec, S.; David, E.; Velumani, K.; Lozano, R.; Weiss, M.; Frederic, B. SegVeg: Segmenting RGB images into green and senescent vegetation by combining deep and shallow methods. Plant Phenomics 2022, 2022, 9803570.
59. Thanh, D.; Thanh, L.; Dvoenko, S.; Prasath, S.; San, N. Adaptive Thresholding Segmentation Method for Skin Lesion with Normalized Color Channels of NTSC and YCbCr. In Proceedings of the 14th International Conference on Pattern Recognition and Information Processing (PRIP'2019), Minsk, Belarus, 21–23 May 2019; Springer: Berlin/Heidelberg, Germany, 2019.
60. Fang, H.; Liang, S. Leaf Area Index Models. In Reference Module in Earth Systems and Environmental Sciences; Elsevier: Amsterdam, The Netherlands, 2014.
61. Zhao, H.; Yang, C.; Guo, W.; Zhang, L.; Zhang, D. Automatic Estimation of Crop Disease Severity Levels Based on Vegetation Index Normalization. Remote Sens. 2020, 12, 1930.
62. Morales, A.; Guerra Hernández, R.; Horstrand, P.; Diaz, M.; Jimenez, A.; Melián, J.; Lopez, S.; Lopez, J. A Multispectral Camera Development: From the Prototype Assembly until Its Use in a UAV System. Sensors 2020, 20, 6129.
63. Abdulridha, J.; Ampatzidis, Y.; Kakarla, S.; Roberts, P. Detection of target spot and bacterial spot diseases in tomato using UAV-based and benchtop-based hyperspectral imaging techniques. Precis. Agric. 2020, 21, 955–978.
64. Drisya, J.; Kumar, D.S.; Roshni, T. Chapter 27—Spatiotemporal Variability of Soil Moisture and Drought Estimation Using a Distributed Hydrological Model. In Integrating Disaster Science and Management; Samui, P., Kim, D., Ghosh, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 451–460.
65. He, J.; Zhang, N.; Xi, S.; Lu, J.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Estimating Leaf Area Index with a New Vegetation Index Considering the Influence of Rice Panicles. Remote Sens. 2019, 11, 1809.
66. Borges, M.V.V.; de Oliveira Garcia, J.; Batista, T.S.; Silva, A.N.M.; Baio, F.H.R.; da Silva Junior, C.A.; de Azevedo, G.B.; de Oliveira Sousa Azevedo, G.T.; Teodoro, L.P.R.; Teodoro, P.E. High-throughput phenotyping of two plant-size traits of Eucalyptus species using neural networks. J. For. Res. 2022, 33, 591–599.
67. Boiarskii, B.; Hasegawa, H. Comparison of NDVI and NDRE Indices to Detect Differences in Vegetation and Chlorophyll Content. J. Mech. Contin. Math. Sci. 2019, 4, 20–29.
68. Roujean, J.-L.; Breon, F.-M. Estimating PAR absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384.
69. Vincini, M.; Frazzi, E.; D'Alessio, P. A broad-band leaf chlorophyll vegetation index at the canopy scale. Precis. Agric. 2008, 9, 303–319.
70. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282.
71. Renieblas, G.P.; Nogués, A.T.; González, A.M.; Gómez-Leon, N.; Del Castillo, E.G. Structural similarity index family for image quality assessment in radiological images. J. Med. Imaging 2017, 4, 035501.
72. Grande, J.C. Principles of Image Analysis. Metallogr. Microstruct. Anal. 2012, 1, 227–243.
73. Drap, P.; Lefèvre, J. An Exact Formula for Calculating Inverse Radial Lens Distortions. Sensors 2016, 16, 807.
74. Das Choudhury, S.; Bashyam, S.; Qiu, Y.; Samal, A.; Awada, T. Holistic and component plant phenotyping using temporal image sequence. Plant Methods 2018, 14, 35.
75. Saleh, Z.H.; Apte, A.P.; Sharp, G.C.; Shusharina, N.P.; Wang, Y.; Veeraraghavan, H.; Thor, M.; Muren, L.P.; Rao, S.S.; Lee, N.Y.; et al. The distance discordance metric-a novel approach to quantifying spatial uncertainties in intra- and inter-patient deformable image registration. Phys. Med. Biol. 2014, 59, 733–746.
76. Memon, J.; Sami, M.; Khan, R.A.; Uddin, M. Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR). IEEE Access 2020, 8, 142642–142668.
77. Pereyra Irujo, G. IRimage: Open source software for processing images from infrared thermal cameras. PeerJ Comput. Sci. 2022, 8, e977.
78. Wang, S.; Sun, G.; Zheng, B.; Du, Y. A Crop Image Segmentation and Extraction Algorithm Based on Mask RCNN. Entropy 2021, 23, 1160.
79. Sodjinou, S.G.; Mohammadi, V.; Sanda Mahama, A.T.; Gouton, P. A deep semantic segmentation-based algorithm to segment crops and weeds in agronomic color images. Inf. Process. Agric. 2022, 9, 355–364.
80. Zhang, W.; Chen, X.; Qi, J.; Yang, S. Automatic instance segmentation of orchard canopy in unmanned aerial vehicle imagery using deep learning. Front. Plant Sci. 2022, 13, 1041791.
81. Munz, S.; Reiser, D. Approach for Image-Based Semantic Segmentation of Canopy Cover in Pea-Oat Intercropping. Agriculture 2020, 10, 354.
Figure 1. Integrated multimodal imaging setup. A multispectral camera was mounted on top of a thermal camera using a magnetic mount assembly to provide a uniform field-of-view (FoV). The irradiance sensor was mounted on top. During imaging, a radiometric calibration target with 80% reflectivity was placed behind the plants.
Figure 2. The image processing pipeline. The steps include image acquisition by thermal and multispectral cameras, and image processing for distortion correction of multispectral images, image registration (coarse and fine), radiometric scaling of thermal images, and illumination correction of multispectral images.
Figure 3. Process of capturing images and radial optical distortion. (a) An image without distortion (top) and an image with radial barrel distortion (bottom), where r is the off-axis image distance, which increases with distortion. (b) Extrinsic parameters (rotation (R) and translation (T)) convert the 3D world plane coordinates ($O_w$) to 3D camera plane coordinates ($O_c$), which are then converted to 2D image coordinates ($O_i$) using the intrinsic parameters; $O_p$ represents the pixel plane.
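The radial barrel distortion in Figure 3a is commonly described by the two-term Brown polynomial, in which the distortion grows with the off-axis distance r. The sketch below is a minimal numpy illustration of that model, not code from the package; the coefficients k1 and k2 are made-up example values, not those of the cameras used here.

```python
import numpy as np

def apply_radial_distortion(x, y, k1=-0.3, k2=0.1):
    """Map undistorted normalised coordinates (x, y) to their radially
    distorted position with the two-term Brown polynomial; k1 < 0
    produces the barrel distortion sketched in Figure 3a."""
    r2 = x ** 2 + y ** 2                    # squared off-axis distance r^2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2   # radial scaling term
    return x * factor, y * factor

# A point near the image corner moves towards the optical axis.
print(apply_radial_distortion(0.8, 0.6))   # -> (0.64, 0.48)
```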
Figure 4. (a) A distorted image of a checkerboard pattern from a multispectral camera and (b) visualisation of the estimated extrinsic parameters. Images were taken from different angles and distances to calculate the extrinsic parameters and minimise radial barrel distortion.
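A checkerboard calibration of this kind can be reproduced with OpenCV; the sketch below is a generic illustration rather than the package's actual code, and the pattern size, folder path, and file names are assumptions.

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9x6 inner corners; world coordinates lie on the Z = 0 plane.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.tif"):          # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics (K), distortion coefficients, and per-image
# extrinsics (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort one multispectral band with the estimated parameters.
band = cv2.imread("nir_band.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical file
undistorted = cv2.undistort(band, K, dist)
```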
Figure 5. Setup for coarse image registration: (a) FoV of the RGB image; (b) FoV of the thermal image. A white corflute sheet with different geometric cut-outs was placed in front of a black background that had a higher surface temperature than the corflute. The cut-outs are visible in the optical (RGB), multispectral, and thermal image bands.
Figure 6. (a) Feature detection between RGB and thermal images; the green crosses and red circles represent common features (corners) for the thermal and multispectral images, respectively. (b) Working principle of the projection of the moving image into the FoV of a fixed image using a geometric transformation matrix. The red circle, blue cross, and green cross represent the features of the fixed image, moving image, and projected image, respectively.
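Coarse registration of this kind, matching common corner features and projecting the moving image into the fixed FoV, can be expressed with OpenCV's homography estimation. The coordinates and file names below are hypothetical; this is a sketch of the technique, not the pipeline's exact implementation.

```python
import cv2
import numpy as np

# Hypothetical (x, y) positions of four matched cut-out corners detected
# in the thermal (moving) and multispectral (fixed) images.
moving_pts = np.array([[52, 40], [588, 44], [580, 430], [60, 425]], np.float32)
fixed_pts = np.array([[10, 12], [630, 18], [622, 468], [18, 462]], np.float32)

# Estimate the projective transform mapping moving -> fixed coordinates.
H, _ = cv2.findHomography(moving_pts, fixed_pts)

# Warp the thermal image into the fixed (multispectral) field of view.
thermal = cv2.imread("thermal.tif", cv2.IMREAD_UNCHANGED)  # hypothetical file
registered = cv2.warpPerspective(thermal, H, (640, 480))   # (width, height)
```

With more (and noisier) matches, passing cv2.RANSAC to findHomography discards outlier correspondences.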
Figure 7. Image registration output after (a) coarse registration and (b) fine registration. Pink represents misalignment between the thermal and optical images, which was significantly reduced after fine registration.
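Intensity-based fine registration driven by the Mattes mutual information metric (reported in Figure 14) is available in libraries such as SimpleITK. The sketch below shows one plausible configuration; the file names, optimizer settings, and choice of a rigid 2D transform are assumptions, not the pipeline's published parameters.

```python
import SimpleITK as sitk

# Hypothetical inputs: a multispectral reference band and the coarsely
# registered thermal image, both as single-channel float images.
fixed = sitk.ReadImage("ms_band.tif", sitk.sitkFloat32)
moving = sitk.ReadImage("thermal_coarse.tif", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform()))
reg.SetInterpolator(sitk.sitkLinear)

# Optimise a small rigid correction on top of the coarse alignment.
transform = reg.Execute(fixed, moving)
fine = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```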
Figure 8. Pixel values of images: (a) before radiometric rescaling—the pixel values are stored as Digital Numbers (DNs); (b) after radiometric rescaling—the DNs are converted to temperature values. The maximum and minimum temperature values are recorded on the right of the thermal image.
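Radiometric rescaling is typically a linear map from Digital Numbers to temperature. The sketch below uses the 0.04 K-per-count scale and Kelvin-to-Celsius offset common to many radiometric thermal formats; these constants are assumptions here and must be taken from the sensor's own documentation.

```python
import numpy as np

def dn_to_celsius(dn, scale=0.04, offset=-273.15):
    """Linear radiometric rescaling from 16-bit Digital Numbers to degrees
    Celsius. The scale and offset are typical of many radiometric thermal
    formats but are assumptions, not universal constants."""
    return dn.astype(np.float32) * scale + offset

dn = np.array([[7400, 7500], [7600, 7700]], dtype=np.uint16)
temperature = dn_to_celsius(dn)
print(temperature.min(), temperature.max())   # about 22.85 and 34.85 degC
```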
Figure 9. Output of RGB images after (a) gradient correction and (b) illumination correction.
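One common way to remove a smooth illumination gradient is a flat-field-style division by a heavily blurred copy of the image. The sketch below illustrates that idea only; the Gaussian sigma is an arbitrary assumption, and this is not necessarily the exact correction used in the pipeline.

```python
import cv2
import numpy as np

def correct_illumination(gray, sigma=101):
    """Flat-field-style correction: estimate the slowly varying
    illumination gradient with a large Gaussian blur, divide it out,
    and restore the original mean brightness."""
    img = gray.astype(np.float32)
    background = cv2.GaussianBlur(img, (0, 0), sigma)   # smooth gradient
    corrected = img / (background + 1e-6) * background.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)

gray = cv2.imread("rgb_gray.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical file
even = correct_illumination(gray)
```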
Figure 10. Image segmentation: (a) foreground mask after adaptive thresholding and (b) segmented RGB image after application of the foreground mask. The non-canopy pixel values are converted to zero.
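Adaptive thresholding compares each pixel with a statistic of its local neighbourhood, which tolerates the uneven lighting typical of glasshouses. A minimal OpenCV sketch follows; the file name, block size, and offset are tuning assumptions rather than the pipeline's published settings.

```python
import cv2

rgb = cv2.imread("plant.png")                     # hypothetical file
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

# Threshold each pixel against the Gaussian-weighted mean of its local
# neighbourhood; blockSize and C are assumptions to tune per setup.
mask = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY, blockSize=51, C=-5)

# Apply the foreground mask so non-canopy pixels become zero in all bands.
segmented = cv2.bitwise_and(rgb, rgb, mask=mask)
```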
Figure 11. An eight-band stacked image representing only the canopy pixels, in the following order: RGB, green, red, NIR, red-edge, and thermal. Each pixel location corresponds to the same position in every band, as indicated by the red squares in the images.
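Once all bands are undistorted, co-registered, and masked, stacking reduces to concatenating pixel-aligned arrays along a third axis. A minimal numpy sketch with placeholder arrays:

```python
import numpy as np

h, w = 480, 640   # placeholder image size

# Hypothetical co-registered, masked single-band arrays of equal shape:
# three RGB channels plus green, red, NIR, red-edge, and thermal.
red_rgb, green_rgb, blue_rgb, green, red, nir, red_edge, thermal = (
    np.zeros((h, w), np.float32) for _ in range(8))

# Concatenate along a third axis; every (row, col) indexes the same
# physical point in all eight bands.
stack = np.dstack([red_rgb, green_rgb, blue_rgb,
                   green, red, nir, red_edge, thermal])
assert stack.shape == (h, w, 8)
```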
Figure 12. Vegetative indices: (a) Normalized Difference VI (NDVI) and (b) Chlorophyll Index red edge (CIre).
Figure 13. Reprojection error between the distorted and undistorted images for the NIR band. The x- and y-axes represent the image number and the mean error in pixels, respectively.
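Per-image mean reprojection error of this kind can be computed by reprojecting the calibration target points with the estimated camera parameters, as in the standard OpenCV calibration tutorial. The sketch below assumes the outputs of the earlier calibration sketch as inputs.

```python
import cv2

def mean_reprojection_errors(obj_points, img_points, K, dist, rvecs, tvecs):
    """Per-image mean reprojection error in pixels; inputs are the
    outputs of the calibration sketch shown earlier."""
    errors = []
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        # Project the 3D checkerboard corners back into the image plane.
        projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        errors.append(cv2.norm(imgp, projected, cv2.NORM_L2) / len(projected))
    return errors
```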
Figure 14. Mattes mutual information between multispectral and thermal images after registration.
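Alignment quality can also be scored directly by evaluating the Mattes mutual information metric on an already-registered image pair. A short SimpleITK sketch with hypothetical file names:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("ms_band.tif", sitk.sitkFloat32)            # hypothetical
moving = sitk.ReadImage("thermal_registered.tif", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetInitialTransform(sitk.TranslationTransform(2))   # identity transform

# SimpleITK reports the negated mutual information, so lower (more
# negative) values indicate better alignment.
print("Mattes MI:", reg.MetricEvaluate(fixed, moving))
```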
Figure 15. Error metrics for segmentation: (a) Root Mean Square Error (RMSE) and (b) Structural Similarity (SSIM) map between the original and segmented images.
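Both metrics are available in scikit-image; the sketch below computes RMSE and a full SSIM map for a toy image pair standing in for the original and segmented images.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

# Toy stand-ins for the original and segmented grayscale images in [0, 1].
original = np.random.rand(240, 320).astype(np.float32)
segmented = original * (original > 0.5)

rmse = np.sqrt(mean_squared_error(original, segmented))
ssim, ssim_map = structural_similarity(
    original, segmented, data_range=1.0, full=True)
print(f"RMSE = {rmse:.3f}, mean SSIM = {ssim:.3f}")
```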
Table 1. Rigid and nonrigid transformation matrices.

Rigid:
- Translation: $T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ t_x & t_y & 1 \end{bmatrix}$
- Rotation: $R = \begin{bmatrix} \cos(q) & \sin(q) & 0 \\ -\sin(q) & \cos(q) & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Nonrigid:
- Shear: $\mathit{Sh} = \begin{bmatrix} 1 & sh_y & 0 \\ sh_x & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
- Scale: $S = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}$
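In the row-vector convention of Table 1 (note the translation components in the bottom row), transforms compose by right-multiplication. A small numpy sketch with arbitrary example parameter values:

```python
import numpy as np

q = np.deg2rad(2.0)        # rotation angle (example value)
tx, ty = 4.0, -3.0         # translation in pixels (example values)
sx, sy = 1.01, 0.99        # scale factors (example values)

# Row-vector convention matching Table 1: a point [x, y, 1] is
# transformed as p @ M.
T = np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], float)
R = np.array([[np.cos(q), np.sin(q), 0],
              [-np.sin(q), np.cos(q), 0],
              [0, 0, 1]], float)
S = np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

M = S @ R @ T                        # scale, then rotate, then translate
p = np.array([100.0, 200.0, 1.0])    # homogeneous pixel coordinate
print(p @ M)                         # transformed coordinate
```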
Table 2. Vegetation indices generated from the image processing pipeline.

- Normalized Difference Vegetation Index (NDVI): $\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{RED}}{\mathrm{NIR} + \mathrm{RED}}$ [8]
- Normalized Difference Red Edge (NDRE): $\mathrm{NDRE} = \frac{\mathrm{NIR} - \mathrm{RED\_EDGE}}{\mathrm{NIR} + \mathrm{RED\_EDGE}}$ [12]
- Chlorophyll Index red edge (CIre): $\mathrm{CIre} = \frac{\mathrm{NIR}}{\mathrm{RED\_EDGE}} - 1$ [14]
- Triangle Vegetation Index (TVI): $\mathrm{TVI} = 0.5\left(120(\mathrm{NIR} - \mathrm{GREEN}) - 200(\mathrm{RED} - \mathrm{GREEN})\right)$ [66]
- Renormalized Difference Vegetation Index (RDVI): $\mathrm{RDVI} = \frac{\mathrm{NIR} - \mathrm{RED}}{\sqrt{\mathrm{NIR} + \mathrm{RED}}}$ [68]
- Chlorophyll Vegetation Index (CVI): $\mathrm{CVI} = \frac{\mathrm{NIR} \cdot \mathrm{RED}}{\mathrm{GREEN}^2}$ [69]
- Chlorophyll Index green (CIg): $\mathrm{CIg} = \frac{\mathrm{NIR}}{\mathrm{GREEN}} - 1$ [70]
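Applied to the masked, stacked bands, the Table 2 indices are element-wise array expressions. A minimal numpy sketch; the errstate guard exists because the zero-valued background pixels make some denominators zero, yielding NaN there instead of warnings.

```python
import numpy as np

def vegetation_indices(nir, red, green, red_edge):
    """Element-wise computation of the Table 2 indices from reflectance
    arrays; zero-valued background pixels produce NaN."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return {
            "NDVI": (nir - red) / (nir + red),
            "NDRE": (nir - red_edge) / (nir + red_edge),
            "CIre": nir / red_edge - 1,
            "TVI": 0.5 * (120 * (nir - green) - 200 * (red - green)),
            "RDVI": (nir - red) / np.sqrt(nir + red),
            "CVI": nir * red / green ** 2,
            "CIg": nir / green - 1,
        }
```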
Table 3. Mean reprojection error between the distorted and undistorted image for different multispectral bands.
Band                              Red    Red-Edge    NIR    Green
Mean reprojection error (pixels)  0.29   0.60        0.21   0.20
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
