1. Introduction
The performance parameters of fragments primarily involve their number [1], material composition, and velocity. Identifying explosive fragments is crucial for assessing explosive power and designing effective explosion-proof measures. Current research focuses primarily on evaluating target damage effectiveness [2] and implementing optical imaging techniques [3]. The dispersion [4], shape [5], average mass [6], and other characteristics of explosive fragments hold significant importance in performance evaluations. However, existing fragment detection methods provide only one-dimensional information about fragments, such as mass, spatial detail, or spectral information, typically captured in a single band such as the visible or infrared. These methods overlook the varying degrees of light absorption by fragments and background materials at different wavelengths and fail to extract multidimensional information about the fragments. Consequently, incorporating multidimensional information into fragment analysis is essential for improving their structure and performance.
Hyperspectral imaging technology combines the advantages of traditional imaging and spectral technology [7]. It can obtain both the spatial information and the spectral information of continuous bands in an image [8]. Hyperspectral images provide reference images of spatial positions and supply multidimensional information for target classification. In the realm of object detection, hyperspectral imaging technology offers substantial advantages in enhancing classification accuracy [9]. Over the course of several decades, this technology has found applications in civilian sectors, including agricultural production [10], landscape characterization [11], environmental monitoring [12], and food ingredient identification [13]. In military domains, hyperspectral detection technology has been successfully employed in the recognition of maritime and shallow-water targets [14], battlefield camouflage [15], and land targets [16], showcasing its extensive range of potential applications.
The small size and irregular shape of the generated fragments pose challenges for conventional imaging methods in accurately identifying fragments on the ground. Fragments and background materials have different molecular structures, leading to variations in their light reflection characteristics across wavelengths. Hyperspectral images, on the other hand, provide a contiguous spectral signature for each pixel, reflecting the physical and chemical properties of objects [17]. Hyperspectral imaging can therefore capture more detailed information across different spectral bands [18], making it highly suitable for distinguishing between fragments and background materials. In our previous work, the recognition of iron fragments with traditional methods was studied to verify the feasibility of hyperspectral detection for fragment recognition [19].
In comparison with traditional classification methods, deep learning offers the advantage of automated feature extraction and selection within the model, eliminating the need for manual feature construction [20]. At present, various deep-learning methods have been proposed for the classification of hyperspectral images [18,21]. Among them, convolutional neural networks (CNNs) are widely applied to hyperspectral image classification because of their excellent spectral feature representation. Hu et al. first applied a CNN to hyperspectral image classification, using only a one-dimensional (1D) convolution kernel and focusing on the spectral features of hyperspectral imagery [22]. Although CNNs have achieved promising results in most cases [23,24], they still have the following limitations:
- (1) A CNN is a vector-based approach that treats the input hyperspectral image as a collection of pixel vectors [25]. Treating all channels equally can make a CNN insensitive to spectral sequence information [26], so the resulting feature maps cannot effectively represent the spectral sequence information of hyperspectral images;
- (2) A 1D CNN considers only the spectral information of hyperspectral images and ignores the spatial relationships between pixels.
However, the spectral information of the sample points in hyperspectral images is typical sequence data: a change in the reflectance intensity of adjacent bands usually changes the overall trend of the reflectance curve. While CNNs are known for their automatic feature-extraction capabilities [27], recurrent neural networks (RNNs) are more adept at handling sequential data [28]. In particular, bidirectional long short-term memory networks (BiLSTM), a type of RNN, are well suited to processing sequence data and effectively capture the relationships between the current data point and its preceding and subsequent points [29]. Taking advantage of both CNNs and RNNs, we adopted a combined model, a convolutional neural network–bidirectional long short-term memory network (CNN-BiLSTM), for spectral information classification. The CNN-BiLSTM model offers robust feature extraction while simultaneously considering the preceding and succeeding bands of each sample point's spectral reflectance sequence [30]. This fusion of CNN and BiLSTM allows for comprehensive analysis and improves the classification performance of hyperspectral data.
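To make the architecture concrete, the following is a minimal PyTorch sketch of such a CNN-BiLSTM spectral classifier; the layer sizes, band count, and other hyperparameters are illustrative assumptions rather than the exact configuration used in this study.

```python
# Minimal sketch of a 1D CNN-BiLSTM spectral classifier (illustrative
# hyperparameters, not the exact configuration used in this study).
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        # 1D convolutions extract local spectral features from each pixel's
        # reflectance curve (input shape: [batch, 1, n_bands]).
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
        )
        # The BiLSTM models the band-to-band sequence in both directions.
        self.bilstm = nn.LSTM(input_size=32, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                # x: [batch, 1, n_bands]
        feats = self.cnn(x)              # [batch, 32, n_bands // 4]
        feats = feats.permute(0, 2, 1)   # [batch, seq_len, 32]
        out, _ = self.bilstm(feats)      # [batch, seq_len, 128]
        return self.fc(out[:, -1, :])    # class logits

# Example: 10 classes (5 fragment types + 5 background materials),
# spectra resampled to 256 bands (an assumed figure).
model = CNNBiLSTM(n_bands=256, n_classes=10)
logits = model(torch.randn(4, 1, 256))
```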
The application of deep-learning technology to image segmentation has recently attracted extensive attention. Ronneberger et al. proposed the U-shaped network (U-Net) at the MICCAI conference in 2015 [31]. U-Net has distinct advantages, including its ability to achieve high accuracy with little training data and its fast training speed [32]. To extract the spatial relationships between pixels in hyperspectral images, this study trained a semantic segmentation model based on U-Net. The trained U-Net model can suppress the irrelevant regions in an input image and highlight the salient features suited to a specific task [33], which helps avoid passing irrelevant background regions to the cascaded CNN.
On this basis, this study developed a spatial–spectral combination algorithm in which the majority class of the pixels within a region determines the class assigned to the entire region. After U-Net segments the image into regions, the CNN-BiLSTM model classifies fragments and background pixel by pixel; the class with the most pixels in each region is then assigned to that region, accurately separating the target fragments from the background (a minimal sketch of this fusion step is given below). The method increases the precision of the model while retaining the spatial information obtained from 2D segmentation. By combining spatial and spectral information, the method effectively leverages the strengths of both techniques.
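The fusion step itself reduces to a per-region majority vote over the pixel-wise CNN-BiLSTM predictions. A minimal sketch, assuming a binary U-Net mask and hypothetical function and variable names, is:

```python
# Per-region majority vote over pixel-wise class predictions.
# Names are illustrative, not taken from the original code.
import numpy as np
from scipy import ndimage

def fuse_regions(region_mask: np.ndarray, pixel_labels: np.ndarray) -> np.ndarray:
    """region_mask: binary U-Net output (1 = candidate fragment region).
    pixel_labels: per-pixel class map from the CNN-BiLSTM classifier.
    Returns a label map in which every connected region carries one class."""
    regions, n_regions = ndimage.label(region_mask)   # connected components
    fused = np.zeros_like(pixel_labels)
    for r in range(1, n_regions + 1):
        in_region = regions == r
        classes, counts = np.unique(pixel_labels[in_region], return_counts=True)
        fused[in_region] = classes[np.argmax(counts)]  # majority vote
    return fused
```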
The main technical contributions of this work are summarized as follows:
- (1) Creation of datasets: We collected hyperspectral images containing five types of explosive fragments and five common background materials. We preprocessed these data, including black-and-white correction and region of interest (ROI) extraction (a calibration sketch follows this list). These datasets include both spectral and spatial information, providing valuable resources for further analysis;
- (2) Spectral information classification model: To classify the spectral information accurately, we adopted the CNN-BiLSTM model. This approach combines a CNN and a BiLSTM to achieve high classification accuracy, even with limited training samples;
- (3) Spatial–spectral combination algorithm: We designed a spatial–spectral combination algorithm that successfully classifies targets in hyperspectral images. By combining U-Net with CNN-BiLSTM, we effectively merge spatial and spectral information, enhancing the overall classification performance;
- (4) Experimental verification: To validate the effectiveness of our hyperspectral imaging detection method for the identification of explosive fragments, we conducted classification experiments on the collected datasets. The results demonstrate the feasibility and potential of our approach.
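For reference, the black-and-white correction mentioned in contribution (1) follows the standard reflectance calibration R = (raw − dark) / (white − dark); the sketch below assumes hypothetical variable names and cube layout.

```python
# Standard black-and-white (reflectance) calibration for a hyperspectral
# cube; variable names and cube layout are illustrative assumptions.
import numpy as np

def reflectance_calibration(raw: np.ndarray,
                            white: np.ndarray,
                            dark: np.ndarray,
                            eps: float = 1e-6) -> np.ndarray:
    """Convert a raw cube (H, W, bands) to relative reflectance using
    white-reference and dark-current cubes of the same shape."""
    return (raw - dark) / (white - dark + eps)
```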
This study used hyperspectral imaging technology for fragment classification based on the differences in spectral reflectance between explosion fragments and background materials. The research could effectively distinguish fragments from the background in a laboratory environment and visualize the materials and positions of different fragments and background materials. The purpose of this study is to verify the feasibility of using hyperspectral imaging to identify explosive fragments. At the same time, it provides guidance for the future intelligent identification and recovery of fragments in the field and has practical significance for evaluating the power of fragments.
4. Discussion
This study used hyperspectral imaging technology for fragment classification based on the differences in spectral reflectance between explosion fragments and their backgrounds. We collected ten types of samples: tungsten fragments, iron fragments, copper fragments, aluminum fragments, composite fragments, soil, stones, leaves, bark, and foam. In a darkroom environment, full-band hyperspectral images in the range of 400.00 to 1000.00 nm were collected for the ten types of samples and for fragment-scattering simulation scenes. After preprocessing, a deep-learning method was used to build a spatial–spectral combination approach for fragment classification with hyperspectral imaging, aiming to address the requirements of fragment identification and spatial dispersion analysis in explosive fields. Some further discussion is presented below.
In our previous research, there was a problem of low accuracy in using decision trees to identify stones and iron fragments [19]. This was because the decision tree model did not extract appropriate features and was susceptible to noise and outliers. From Table 4, we can see that the classification accuracies of the deep-learning-based CNN, BiLSTM, and CNN-BiLSTM are significantly higher than those of the decision tree and SVM. This is because deep-learning algorithms can automatically capture the correlations between spectral features and learn the intrinsic characteristics of explosive fragments. The accuracy of the CNN was slightly lower than that of the BiLSTM because the spectral information of the sampling points in hyperspectral images is typical sequence data, and the BiLSTM handles sequence data better than the CNN. The CNN-BiLSTM combined the advantages of the CNN and the BiLSTM, with an accuracy of up to 93.8%. This proved that the method is beneficial for identifying explosive fragments from hyperspectral images.
In the application of hyperspectral image segmentation, most studies have treated segmentation only as a preprocessing step and have not explored it in depth [43]. Currently, most studies use machine-learning-based segmentation methods [44,45]. The segmentation method we used in our previous research was based on morphology [19]. Both kinds of methods require human involvement to some extent. Therefore, using the idea of transfer learning, we trained a U-Net model that automatically extracts explosion fragment features to segment fragments from background objects. The trained U-Net network achieved an accuracy of 93.5% in segmenting the sample images. The results indicate that U-Net can effectively separate fragments from other background objects and extract the spatial features of the samples.
This study used the spatial–spectral combination method to identify explosive fragments, which could classify and display targets in the scattering field with a recognition rate of better than 94.9% for each category. Compared with using the CNN-BiLSTM model alone to classify each pixel in the scattering field, the spatial–spectral combination algorithm significantly improved the fragment recognition rate. Compared with our previous studies, the algorithm used in this article greatly reduces human involvement [19]. In recent years, many studies have focused on fully utilizing the spatial and spectral information contained in hyperspectral images and have proposed many novel algorithms, such as the random patches network (RPNet) [46]. However, these algorithms are usually evaluated on publicly available hyperspectral datasets and are rarely applied in practical fields. We will study the relevant algorithms in the future and apply such novel algorithms to the field of explosive fragment recognition.
In summary, the fragment classification method used in this article can automatically extract features in both the spatial and spectral domains and perform spatial–spectral combination through a cascading approach. Experimental verification showed that it is feasible to use the spatial–spectral combination method with hyperspectral imaging technology for the classification of explosive fragments.