Article

Research on Multi-Scale Fusion Method for Ancient Bronze Ware X-ray Images in NSST Domain

1
School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
2
Institute for Interdisciplinary and Innovative Research, Xi’an University of Architecture and Technology, Xi’an 710055, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(10), 4166; https://doi.org/10.3390/app14104166
Submission received: 15 April 2024 / Revised: 12 May 2024 / Accepted: 13 May 2024 / Published: 14 May 2024

Abstract:
X-ray imaging is a valuable non-destructive tool for examining bronze wares, but the complexity of the coverings on bronze wares and the limitations of single-energy imaging often obscure critical details such as lesions and ornamentation. Multiple exposures are therefore required to capture all of the key information about an artifact, which scatters that information across several images and increases the difficulty of analysis and interpretation. Fusing X-ray images acquired at different energies into a single image with high-performance image fusion technology can effectively solve this problem; however, no specialized method currently exists for fusing images of bronze artifacts. Considering the special requirements of bronze restoration and the existing fusion frameworks, this paper proposes a novel method that combines a multi-scale morphological gradient with locally topological coupled neural P systems in the Non-Subsampled Shearlet Transform domain. The proposed method is compared with eight high-performance fusion methods and validated using six evaluation metrics. The results demonstrate its significant theoretical and practical potential for advancing the analysis and preservation of cultural heritage artifacts.

1. Introduction

Bronze wares are remains and monuments of human social activity with crucial historical, artistic, and scientific value, making them vital objects of study in archaeology [1]. Due to natural and human causes, many unearthed bronze wares exhibit mutilations, fractures, and coverings of foreign material, along with deteriorated decorations and inscriptions. This not only seriously damages the information they convey but also threatens the material survival of the bronze wares themselves, creating difficulties for later study and use. Identifying and extracting key historical information from bronze wares is therefore a critical issue for archaeologists. In traditional cultural relics preservation, the cleaning and information extraction of damaged bronze wares is frequently approached through gradual, in-depth peeling. However, this technique is not only time-consuming but also risks damaging the bronze wares.
With the widespread expansion of scientific and technological testing applications, an increasing number of testing equipment and methods are being employed in the protection of bronze wares [2]. This trend has led to enhanced operational security and precision. Non-destructive examination technology utilizes various non-destructive imaging techniques such as X-ray and computed tomography to conduct contactless scanning and testing of bronze wares. It has important practical significance for the research of bronze wares.
This paper focuses on the X-ray imaging of Han Dynasty bronze mirrors. During the X-ray imaging process, the varying transmittance of X-rays in distinct areas of the bronze mirrors, specifically the decoration and rim areas, results in differences in brightness and darkness in the final image. Additionally, the optimal diffraction energy needed varies between the center and rim of the mirror due to their differing thicknesses. To acquire more comprehensive information about the mirror, it is necessary to capture images of different areas using varying levels of diffraction energy. As a result, multiple X-ray images of a mirror are often produced, which poses a challenge for heritage workers in analyzing and extracting information about the mirror. To improve the efficiency of cultural relics protection work, fusing the multiple bronze mirror X-ray images with image fusion technology can present the information of the bronze mirror more clearly and comprehensively.
Image fusion technology is widely utilized in cultural relic protection and research. Its primary principle is to enhance texture clarity, information reproduction, and detail richness of the target image by merging pixel data from the source image with the corresponding positions of the target image [3]. This process generates a more precise and realistic fused image, making it an ideal fit for most bronze ware pixel-level image fusion applications.
Due to the unique nature of bronze wares, their digitized information is often not as extensive as in other fields. As a result, traditional methods predominate in the image fusion of bronze wares, with multi-scale transform (MST) being the most prevalent approach in recent years [4]. An MST fusion method decomposes the image into high- and low-frequency subbands with a transform, merges them under specific fusion rules, and finally obtains the fused image by inverse transformation. Commonly used MST methods include the Wavelet Transform (WT) [5], Shearlet Transform (ST) [6], Discrete Wavelet Transform (DWT) [5], Non-Subsampled Contourlet Transform (NSCT) [7], and Non-Subsampled Shearlet Transform (NSST) [8].
However, these methods are mostly applied in the fields of medical image fusion and infrared and visible image fusion. At present, there is limited research on bronze X-ray image fusion, with only the use of basic fusion frameworks. This paper draws on medical image fusion, as well as infrared and visible light image fusion techniques to process the bronze X-ray images.
To observe all the information present on bronze mirrors simultaneously, it is necessary to enhance the expression of the mirror decorations and highlight the defective areas through image fusion. However, achieving such an effect with the traditional MST methods proves challenging, so some scholars have improved the fusion rules. Zhu et al. [9] proposed an NSCT-based fusion technique that incorporates the local Laplace operator and phase consistency for enhanced fusion performance, but it loses some fusion details. Liu et al. [10] proposed an image fusion method based on multi-decomposition LatLRR, which fuses images after decomposing them several times with LatLRR, improving brightness while preserving detail; in some cases, however, the fused images exhibit excessive brightness. Mei et al. [11] proposed a fusion method combining NSCT and adaptive PCNN, whose fusion rules use the sum of directional and global gradients, improving the method’s timeliness; a potential disadvantage is that some fused images exhibit subtle blurring. Vanitha et al. [12] proposed a spatial-frequency-excited PA-PCNN fusion method in the NSST domain that applies a maxima strategy to the low-frequency subband coefficients; a potential drawback is insufficient contrast in the resulting image, leading to a loss of energy and detail information. Chinmaya et al. [13] proposed the NSST-based PAULPCNN fusion method to fuse significant complementary details of grayscale images with pseudo-color images; it extracts rich information and is effective in medical image fusion. Nevertheless, these methods still fall short in image quality and detail preservation for bronze mirrors.
To solve this problem, this paper introduces a multi-scale morphological gradient operator combined with the WSEML operator in the NSST domain for high-frequency information fusion. The Laplace operator, a second-order differential operator, is highly sensitive to grayscale variations in images, making it ideal for accentuating areas of rapid change. Building on this, the multi-scale morphological gradient operator adjusts weight coefficients to refine the geometric relationships among the eight neighboring pixels, thereby enhancing edge-detection accuracy and mitigating the impact of noise. For the low-frequency components, fusion is achieved through a locally topological coupled neural P system that leverages nonlinear modulation and dynamic thresholds to preserve the image’s overall integrity and detail. The results show the best performance on the Average Gradient, Mutual Information, Gradient-based Fusion Performance, and Visual Information Fidelity metrics, with an average improvement of about 4%; Visual Information Fidelity improves most prominently, by 6.2%.

2. Materials and Methods

In this paper, a new MST-based fusion framework for X-ray images of bronze wares is proposed. As illustrated in Figure 1, the proposed method entails three key steps: NSST decomposition, the fusion of high-pass and low-pass subbands, and NSST inversion.
A and B are properly aligned X-ray source images of bronze wares. These images undergo decomposition into high-pass subbands and low-pass subbands, using NSST to describe the detailed and structured information of the corresponding X-ray images. The high-pass and low-pass subbands are individually fused according to different fusion rules. High-pass subbands are fused using rules based on multi-scale morphological gradients, whereas low-pass subbands are fused using rules based on localized topological CNP systems. Finally, the fused image is obtained by inverse transforming the fused high-pass and low-pass subbands using NSST inverse transform.

2.1. Non-Subsampled Shearlet Transform

In 2005, Colonna et al. [14] introduced a synthetic dilation affine system in their research. Using multi-resolution and geometric analysis, they constructed the shearlet transform and developed the basic framework for its continuous and discrete forms. On this basis, Easley et al. [15] proposed the Non-Subsampled Shearlet Transform (NSST). NSST has numerous advantages in image processing, making it one of the most frequently used methods: it inherits the flexibility of shearlets while offering a better choice of spatial orientation than other methods. This means that NSST can efficiently capture and describe the intricate details of an image, enhancing the quality of the fused image and significantly reducing the probability of the Gibbs effect. NSST also achieves a near-optimal sparse representation of an image, which is highly important in image processing: it can accurately represent image features and reduce redundancy by selectively representing the crucial information in the image.
First, the high-frequency subbands and the low-frequency subbands of the image are obtained by multi-scale decomposition of the source image using a non-subsampled pyramid filter (NSPF). Then, the high frequency subbands of the image are decomposed in multiple directions using a shearlet filter (SF). Finally, the image is reconstructed by inverse NSST operation after completing the processing of the corresponding subbands. This process is shown in Figure 2, which presents a schematic diagram of the three-level NSST decomposition.
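A full NSST implementation requires shearlet filter banks, but the non-subsampled multi-scale idea of the NSPF stage can be illustrated with a simple à trous (undecimated) pyramid. The Python sketch below is only a stand-in: the Gaussian smoothing scales and the omission of the directional shearlet filtering are simplifications, not the paper's actual filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def atrous_decompose(img, levels=3):
    """Undecimated (a trous) multi-scale decomposition.

    A simplified stand-in for the non-subsampled pyramid filter (NSPF)
    stage of NSST: each level separates one band of detail (high-pass)
    from a progressively smoother approximation (low-pass). The
    directional shearlet-filter stage is omitted in this sketch.
    """
    highs, low = [], img.astype(np.float64)
    for lvl in range(levels):
        smooth = gaussian_filter(low, sigma=2 ** lvl)
        highs.append(low - smooth)   # detail (high-pass) subband
        low = smooth                 # approximation passed to next level
    return highs, low

def atrous_reconstruct(highs, low):
    """Perfect reconstruction: the detail bands telescope back onto the base."""
    return low + sum(highs)

img = np.random.rand(64, 64)
highs, low = atrous_decompose(img, levels=3)
assert np.allclose(atrous_reconstruct(highs, low), img)  # exactly invertible
```

As in NSST, no downsampling occurs, so every subband keeps the source resolution and the decomposition is exactly invertible.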

2.2. MSMG-Based High-Pass Subband Fusion Rule

High-pass subband fusion is an image processing technique that extracts precise details such as edges and textures from the source image through spectral analysis and filtering. Its primary objective is to preserve the intricate information of the source image and produce a fused image that is superior in clarity, richness, and depiction of detailed attributes. In high-pass subband fusion, the essential factor is to make the best use of all the detailed characteristics of the source image. The multi-scale morphological gradient (MSMG) [16] shares similar functionality with common edge-detection operators (e.g., Prewitt, Sobel, Laplacian) in detecting image edges. However, common edge-detection operators amplify image noise and degrade image quality when extracting edges [17], and accurate boundary detection is very important in the fusion of bronze mirror X-ray images [18]. The MSMG is an operator that extracts gradient information from an image, representing the contrast between a pixel and its neighboring pixels; consequently, it is frequently used for edge detection and image segmentation [17]. It keeps the noise enhancement rate low while extracting image edges, effectively decreasing the impairment of the edge-detection results by noise. To enhance the informational value of the high-pass subband image and create a fused image with detailed edges and textures, this study combines the MSMG with the weighted sum of the eight-neighborhood-based modified Laplacian (WSEML) [9]. This approach significantly enhances the clarity of contour edges in the fused image, preserves detail information, and improves the fusion of the high-frequency subbands.
The single-scale morphological gradient is defined in Equation (1):
$$G_l(x, y) = f(x, y) \oplus g_l(x, y) - f(x, y) \ominus g_l(x, y)$$
where $(x, y)$ denotes the pixel position, $l$ stands for the number of multi-scale levels, $f(x, y)$ represents the source image, $g_l(x, y)$ refers to the structuring element at scale level $l$, $G_l(x, y)$ represents the structural information at scale level $l$, and $\oplus$ and $\ominus$ denote the dilation and erosion operations, respectively.
In summary, the MSMG is calculated as in Equation (2):
$$MSMG(x, y) = \sum_{q=1}^{n} w_q \times G_q(x, y)$$
where $w_q$ denotes the gradient weight at scale $q$, which can be expressed as in Equation (3):
$$w_q = \frac{1}{2q + 1}$$
The multi-scale morphological gradients of the high-frequency subbands $I_{l,k}^{A}$ and $I_{l,k}^{B}$ in each layer are computed as follows:
$$MG_{l,k}^{A} = MSMG\left(\mathrm{abs}\left(I_{l,k}^{A}\right), n\right), \qquad MG_{l,k}^{B} = MSMG\left(\mathrm{abs}\left(I_{l,k}^{B}\right), n\right)$$
In Equation (4), $MG_{l,k}^{A}$ and $MG_{l,k}^{B}$ are the multi-scale morphological gradients of the high-frequency subband in layer $l$ and direction $k$ of the bronze mirror X-ray images A and B, respectively.
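As an illustration of Equations (1)–(4), the following Python sketch computes the MSMG with `scipy.ndimage`. The flat square structuring elements of size $(2q+1) \times (2q+1)$ are an assumption of this sketch; the paper does not print the exact element shapes.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def msmg(band, n=3):
    """Multi-scale morphological gradient (Eqs. 1-3).

    At each scale q, a flat square structuring element of side 2q+1
    (an assumption of this sketch) yields the single-scale gradient
    G_q = dilation - erosion; the scales are combined with the
    weights w_q = 1 / (2q + 1) of Eq. (3). The abs() mirrors Eq. (4),
    which applies MSMG to the absolute subband values.
    """
    a = np.abs(band.astype(np.float64))
    out = np.zeros_like(a)
    for q in range(1, n + 1):
        size = 2 * q + 1                       # structuring element g_q
        g = grey_dilation(a, size=size) - grey_erosion(a, size=size)
        out += g / (2 * q + 1)                 # weight w_q = 1/(2q+1)
    return out
```

On a step image the response concentrates along the edge and vanishes in flat regions, which is what makes MSMG useful as an edge-aware activity measure.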
$WSEML$ is the metric used for extracting details and is defined by the following equation:
$$WSEML_{l}^{r}(i, j) = \sum_{m=-r}^{r} \sum_{n=-r}^{r} w(m + r + 1,\, n + r + 1) \times EML_{l}^{r}(i + m,\, j + n)$$
where $w$ is a weighting matrix of size $(2r+1) \times (2r+1)$, and $EML$ is defined as follows:
$$\begin{aligned} EML_{l}^{r}(i,j) ={} & \left|2C_{l}^{r}(i,j) - C_{l}^{r}(i-1,j) - C_{l}^{r}(i+1,j)\right| \\ & + \left|2C_{l}^{r}(i,j) - C_{l}^{r}(i,j-1) - C_{l}^{r}(i,j+1)\right| \\ & + \frac{1}{\sqrt{2}}\left|2C_{l}^{r}(i,j) - C_{l}^{r}(i-1,j-1) - C_{l}^{r}(i+1,j+1)\right| \\ & + \frac{1}{\sqrt{2}}\left|2C_{l}^{r}(i,j) - C_{l}^{r}(i-1,j+1) - C_{l}^{r}(i+1,j-1)\right| \end{aligned}$$
where $C_{l}^{r}(i,j)$ is the high-frequency subband coefficient located at $(i,j)$ in layer $l$ and direction $r$.
Let $MSMG\_WSEML_{l}^{r,A}(i,j) = WSEML_{l}^{r,A}(i,j) \times MSMG_{l}^{r,A}(i,j)$ and $MSMG\_WSEML_{l}^{r,B}(i,j) = WSEML_{l}^{r,B}(i,j) \times MSMG_{l}^{r,B}(i,j)$ be the activity measures associated with the bronze mirror X-ray images A and B. The fusion rule for the high-pass subbands based on $MSMG$ and $WSEML$ is then defined as follows:
$$C_{l}^{r,F}(i,j) = \begin{cases} C_{l}^{r,A}(i,j), & \text{if } MSMG\_WSEML_{l}^{r,A}(i,j) \geq MSMG\_WSEML_{l}^{r,B}(i,j) \\ C_{l}^{r,B}(i,j), & \text{if } MSMG\_WSEML_{l}^{r,A}(i,j) < MSMG\_WSEML_{l}^{r,B}(i,j) \end{cases}$$
where $C_{l}^{r,A}(i,j)$ and $C_{l}^{r,B}(i,j)$ are the high-frequency subband coefficients of the two bronze mirror X-ray images at position $(i,j)$ in layer $l$ and direction $r$, respectively, and $C_{l}^{r,F}(i,j)$ is the corresponding high-frequency subband coefficient of the fused image F.
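The high-pass rule above can be sketched in Python. Two items are assumptions of this sketch rather than values printed in the paper: the window weight matrix `w` in `wseml` is a simple centre-weighted kernel, and `msmg` uses flat square structuring elements.

```python
import numpy as np
from scipy.ndimage import convolve, grey_dilation, grey_erosion

def msmg(band, n=3):
    """Multi-scale morphological gradient (Eqs. 1-3); square elements assumed."""
    a = np.abs(band.astype(np.float64))
    return sum((grey_dilation(a, size=2*q+1) - grey_erosion(a, size=2*q+1))
               / (2*q+1) for q in range(1, n+1))

def eml(c):
    """Eight-neighbourhood modified Laplacian (the EML of the paper)."""
    p = np.pad(c.astype(np.float64), 1, mode='edge')
    ctr = p[1:-1, 1:-1]
    h  = np.abs(2*ctr - p[1:-1, :-2] - p[1:-1, 2:])   # horizontal term
    v  = np.abs(2*ctr - p[:-2, 1:-1] - p[2:, 1:-1])   # vertical term
    d1 = np.abs(2*ctr - p[:-2, :-2] - p[2:, 2:])      # main diagonal
    d2 = np.abs(2*ctr - p[:-2, 2:] - p[2:, :-2])      # anti-diagonal
    return h + v + (d1 + d2) / np.sqrt(2)

def wseml(c, r=1):
    """Weighted sum of EML over a (2r+1)x(2r+1) window.
    The weight matrix here is an assumed centre-weighted kernel."""
    size = 2 * r + 1
    w = np.ones((size, size)); w[r, r] = 2.0
    return convolve(eml(c), w / w.sum(), mode='nearest')

def fuse_highpass(cA, cB, r=1):
    """Choose-max rule: keep the coefficient whose MSMG x WSEML
    activity measure is larger (ties go to image A)."""
    actA = msmg(cA) * wseml(cA, r)
    actB = msmg(cB) * wseml(cB, r)
    return np.where(actA >= actB, cA, cB)
```

If one subband carries a strong edge and the other is flat, the rule keeps the edge coefficients, which is exactly the behaviour the activity measure is designed to produce.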

2.3. CNP-Based Low-Pass Subband Fusion Rule

In general, the fusion strategy for the low-pass subbands has a significant impact on the final fused image. These subbands carry most of the energy of the source image and are essential during fusion for maintaining the overall structure, detail, and color balance. Selecting a suitable low-pass subband fusion approach is therefore essential to achieving superior fused images. To fully preserve this energy while extracting as much detail as possible, this paper uses a rule based on a locally topological CNP system for the fusion of the low-pass subband components in the NSST domain.
Inspired by the synchronized pulse bursts in the mammalian visual cortex, H. Peng et al. [19] proposed a similar neural P system called the coupled neural P system (CNP). The CNP system is a computational model consisting of multiple inter-coupled neurons, each containing receptive-field, modulation, and output modules. These neurons form a directed graph structure among themselves, resembling a spiking neural P system, utilized for distributed parallel computation. The CNP system has two distinctive characteristics. Firstly, it has a nonlinear coupling modulation property: the interactions between neurons are nonlinear and the coupling strength can be modulated, which gives the CNP system enhanced computational ability for complex tasks. Secondly, it has a dynamic thresholding mechanism: the output of a neuron depends on its input as well as on a dynamically adjusted threshold, so the CNP can adapt to various environments and task demands, enhancing the flexibility and adaptability of the computation. To better handle the image fusion problem, B. Li et al. [20] designed the CNP system as a neural array with local topology, i.e., a CNP system with local topology.
A CNP system with an $m \times n$ local topology is defined as follows:
$$\Pi = \left( O,\ \sigma_{11}, \sigma_{12}, \ldots, \sigma_{1n}, \ldots, \sigma_{m1}, \sigma_{m2}, \ldots, \sigma_{mn},\ syn \right)$$
where $O = \{a\}$ is an alphabet and the object $a$ is called a spike. In this model, spikes play the role of action potentials in neurons: they generate and propagate pulse signals that transfer and process information in the computational model, and the connections and coupling between neurons form the network structure for parallel computing through their interactions. $\sigma_{11}, \sigma_{12}, \ldots, \sigma_{mn}$ is an $m \times n$ array of coupled neurons of the specific form $\sigma_{ij} = (u_{ij}, v_{ij}, \tau_{ij}, R_{ij})$, where $1 \leq i \leq m$, $1 \leq j \leq n$, and
$$syn = \left\{ \left((i,j),(k,l)\right) \,\middle|\, 1 \leq i \leq m,\ 1 \leq j \leq n,\ |k-i| \leq r,\ |l-j| \leq r,\ (i,j) \neq (k,l) \right\}$$
where r is the neighborhood radius.
According to the spiking mechanism, the state equations of neuron $\sigma_{ij}$ are as follows:
$$u_{ij}(t+1) = \begin{cases} u_{ij}(t)\,e^{-\alpha_u} + C_{ij} + \sum_{\sigma_{kl} \in \delta_r} \omega_{kl}\, p_{kl}(t), & \text{if } \sigma_{ij} \text{ fires} \\ u_{ij}(t) + C_{ij} + \sum_{\sigma_{kl} \in \delta_r} \omega_{kl}\, p_{kl}(t), & \text{otherwise} \end{cases}$$
$$v_{ij}(t+1) = \begin{cases} v_{ij}(t)\,e^{-\alpha_v} + \sum_{\sigma_{kl} \in \delta_r} \omega_{kl}\, p_{kl}(t), & \text{if } \sigma_{ij} \text{ fires} \\ v_{ij}(t) + \sum_{\sigma_{kl} \in \delta_r} \omega_{kl}\, p_{kl}(t), & \text{otherwise} \end{cases}$$
$$\tau_{ij}(t+1) = \begin{cases} \tau_{ij}(t)\,e^{-\alpha_\tau} + p, & \text{if } \sigma_{ij} \text{ fires} \\ \tau_{ij}(t), & \text{otherwise} \end{cases}$$
where $p_{kl}(t)$ is the spike received by neuron $\sigma_{ij}$ from the neighboring neuron $\sigma_{kl}$, $\omega_{kl}$ is the corresponding local weight, $\delta_r$ is the neighborhood of radius $r$, $C_{ij}$ is the external stimulus, and $p$ is the spike generated when neuron $\sigma_{ij}$ is excited [20].
If the X-ray images of bronze mirrors A and B correspond to the systems $\Pi_A$ and $\Pi_B$, respectively, the low-pass subbands of the two bronze mirror X-ray images serve as the external stimuli of $\Pi_A$ and $\Pi_B$. The two CNP systems run from their initial state until the number of iterations reaches $t_{max}$. Let $T^A = \left(t_{ij}^A\right)_{m \times n}$ and $T^B = \left(t_{ij}^B\right)_{m \times n}$ denote the firing matrices of $\Pi_A$ and $\Pi_B$, where $t_{ij}^A$ ($t_{ij}^B$) denotes the firing frequency of neuron $\sigma_{ij}$ in $\Pi_A$ ($\Pi_B$). The fusion rule for the low-pass subbands is then as follows:
$$C_{l0}^{F}(i,j) = \begin{cases} C_{l0}^{A}(i,j), & \text{if } t_{ij}^{A} \geq t_{ij}^{B} \\ C_{l0}^{B}(i,j), & \text{if } t_{ij}^{A} < t_{ij}^{B} \end{cases}$$
where $C_{l0}^{A}(i,j)$ and $C_{l0}^{B}(i,j)$ represent the low-frequency subband coefficients of the decomposed source images, and $C_{l0}^{F}(i,j)$ represents the low-frequency coefficient of the fused image F located at $(i,j)$ $(1 \leq i \leq m,\ 1 \leq j \leq n)$.
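The firing-count idea behind this rule can be sketched in Python with a deliberately simplified, PCNN-style neuron: the exponential decays and the exact coupling of the state equations above are replaced by a reset-and-raise threshold, and only the fire-counting mechanism is reproduced. The default values echo the $\tau_0 = 0.3$ and $p = 1$ settings reported in the experiments.

```python
import numpy as np

def firing_counts(stim, t_max=110, tau0=0.3, p=1.0):
    """Count how often each neuron fires under a PCNN-style dynamic.

    Simplified stand-in for the locally topological CNP system: each
    neuron integrates its external stimulus plus the spikes of its 3x3
    neighbourhood, fires when its potential exceeds a dynamic threshold,
    is reset, and has its threshold raised by p. The paper's exact
    state equations (with exponential decays) differ in detail.
    """
    u = np.zeros_like(stim, dtype=np.float64)     # membrane potential
    tau = np.full_like(u, tau0)                   # dynamic threshold
    counts = np.zeros_like(u)
    spikes = np.zeros_like(u)
    for _ in range(t_max):
        # spike input from the 3x3 neighbourhood (wrap-around borders)
        nb = sum(np.roll(np.roll(spikes, di, 0), dj, 1)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)) - spikes
        u = u + stim + 0.1 * nb
        spikes = (u > tau).astype(np.float64)
        counts += spikes
        u = np.where(spikes > 0, 0.0, u)          # reset fired neurons
        tau = np.where(spikes > 0, tau + p, tau)  # raise their threshold
    return counts

def fuse_lowpass(lA, lB, **kw):
    """Keep the low-pass coefficient whose neuron fired more often."""
    tA, tB = firing_counts(lA, **kw), firing_counts(lB, **kw)
    return np.where(tA >= tB, lA, lB)
```

Regions with stronger stimulus (more local energy) drive their neurons over the threshold more often, so the choose-max on firing counts favours the source image that is locally more informative.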

2.4. NSST Reconstruction

The fused image F is reconstructed by the inverse NSST from the fused high-pass coefficients $C_{l}^{r,AB}(i,j)$ and the fused low-pass coefficients $C_{l0}^{AB}(i,j)$:
$$F = nsst\_re\left( C_{l}^{r,AB}(i,j),\ C_{l0}^{AB}(i,j) \right)$$
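Putting the pieces together, a heavily simplified end-to-end sketch of the decompose–fuse–reconstruct pipeline is shown below in Python. Every component is a stand-in for the paper's operator: an à trous pyramid replaces NSST, a morphological-gradient choose-max replaces the MSMG/WSEML high-pass rule, and a plain average replaces the CNP-based low-pass rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation, grey_erosion

def decompose(img, levels=3):
    """Undecimated pyramid: a simplified stand-in for NSST analysis."""
    highs, low = [], img.astype(np.float64)
    for lvl in range(levels):
        s = gaussian_filter(low, sigma=2 ** lvl)
        highs.append(low - s)
        low = s
    return highs, low

def activity(band, n=3):
    """MSMG-style activity measure for the high-pass bands."""
    a = np.abs(band)
    return sum((grey_dilation(a, size=2*q+1) - grey_erosion(a, size=2*q+1))
               / (2*q+1) for q in range(1, n+1))

def fuse(imgA, imgB, levels=3):
    hA, lA = decompose(imgA, levels)
    hB, lB = decompose(imgB, levels)
    # high-pass: choose-max on morphological-gradient activity
    highs = [np.where(activity(a) >= activity(b), a, b)
             for a, b in zip(hA, hB)]
    # low-pass: plain average as a stand-in for the CNP firing rule
    low = (lA + lB) / 2
    return low + sum(highs)   # inverse transform (sum of subbands)
```

Because the pyramid is undecimated, feeding the same image in twice returns that image unchanged, which is a useful sanity check on any fusion pipeline of this shape.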

3. Results

To evaluate the effectiveness of the proposed fusion method, this study used four sets of eight registered Han Dynasty bronze mirror X-ray images for fusion performance testing. To ensure consistency, both the high- and low-energy X-ray images of the same bronze mirror must have the same size. The resolution of the images is set at 300 × 300. The image data provided for this study were obtained from the Shaanxi Provincial Institute of Cultural Relics Protection. The X-ray exposure of bronze mirrors was conducted using an ART-GIL350/6 fixed flaw detector manufactured by GILARDONI, Milan, Italy. This detector operates within a voltage range of 95 to 350 kV and has a maximum current of 5 mA. Figure 3 illustrates the ART-GIL350/6 fixed flaw detector.
In this study, the proposed method is compared with eight other multi-scale image fusion techniques, using both subjective visual evaluation and six objective metrics, to validate its efficacy. The eight multi-scale fusion methods are LDR [21], MDLatLRR [22], F-PCNN [23], NMP [24], IVFusion [25], PL-NSCT [9], MMIF [26], and IFE [27], each with the same parameters as in its original paper. The objective evaluation metrics are entropy (EN) [28], average gradient (AG) [28], mutual information (MI) [29], gradient-based fusion performance (QAB/F) [30], peak signal-to-noise ratio (PSNR) [31], and visual information fidelity (VIF) [32], for a total of six metrics. The proposed method uses the following parameters: the NSST decomposition has four layers with 16, 16, 8, and 8 decomposition directions, respectively; the MSMG operator uses three scales; and $t_{max} = 110$, $\tau_0 = 0.3$, $r = 7$, $p = 1$ in the CNP system. The fusion performance tests were implemented in the MATLAB 2018B environment on an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz with 16.00 GB of RAM, and the compared fusion methods were implemented on the same platform using open-source code.

3.1. Qualitative Analysis

The Han Dynasty bronze mirrors are two-dimensional flat cultural relics with unique painting styles and artistic values, so it is important to subjectively analyze the effects of image fusion. In this study, a detailed subjective analysis of the fused images of the bronze mirrors is shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. Specifically, in the fusion result of the bronze mirror images, blue and yellow rectangles represent crack areas with significant differences in the effectiveness of each method, while red rectangles represent texture areas with significant differences in the effectiveness of each method.
In Figure 4 and Figure 5, the method used in this study, as well as the other eight methods, incorporates the edge information and the ornamentation and inscription information of the bronze mirror. However, the LRD, NMP, F-PCNN, and PL-NSCT methods do not present the crack information in the upper part of the bronze mirror clearly. The fusion performance of the F-PCNN, LatLRR, IVFusion, PL-NSCT, and MMIF methods is weak in preserving the ornamentation information: these methods do not clearly present the ornamentation and inscriptions on the bronze mirror, and the information appears blurred and cluttered in their fused images. The LRD, F-PCNN, LatLRR, PL-NSCT, MMIF, and IFE methods also struggle to clearly present the uneven rusting and breakage information on the mirror rim, owing to overexposure of the rim portion during fusion. Overall, the fusion quality of the LRD, NMP, and IFE methods is good, but some of the subtle lesion and ornamentation information is not completely preserved. In contrast, the method proposed in this paper achieves a clearer fusion of the bronze mirrors. It particularly excels in preserving detail such as disease information on the mirror rim, as well as breakage and inscriptions in the ornamentation area. These details are more prominently displayed in the fused images produced by the proposed method; Figure 6, Figure 7 and Figure 8 show a comparison of specific details.
Because the third and fourth sets of bronze mirrors are more severely damaged, it is better to highlight the differences in the fusion methods. As shown in Figure 9 and Figure 10, the LRD, NMP, and IFE methods have better fusion quality in terms of ornamentation information. The pattern of the bronze mirror can be clearly seen using these methods. In contrast, the other methods exhibit poor fusion quality in terms of ornamentation information, with blurred ornamentation areas and insufficiently prominent internal cracks.
For the mirror edge region, the LRD, NMP, F-PCNN, LatLRR, PL-NSCT, and MMIF methods did not clearly reflect the rust disease information in the mirror edge portion. Furthermore, the IFE method showed a black shadow in the mirror edge region, covering the disease information in that area. As shown in Figure 11, specific effect comparisons are presented.
The F-PCNN, LatLRR, IVFusion, PL-NSCT, and MMIF methods exhibit significant blurring in the fragmented portion of the bronze mirrors, failing to accurately represent the varying levels of fragmentation, as well as the transformation in thickness information, as shown in Figure 12 and Figure 13. The method used in this study demonstrates high contrast in the third and fourth sets of bronze mirror fusion. The decoration information is clearly visible, and the information regarding bronze mirror diseases, as well as the level of fragmentation in the bronze mirror fragments, is more prominently displayed. It is evident that the method utilized in this study enhances the visibility of the fused result in the fusion of bronze mirrors. This method enables clear visualization of detailed information, such as ornamentation, while effectively highlighting cracks, edge details, and the transformation of bronze mirror levels. As a result, the fused image of the bronze mirror exhibits improved information richness.

3.2. Quantitative Analysis

Table 1, Table 2, Table 3 and Table 4, respectively, show the evaluation results of the first, second, third, and fourth sets of bronze mirrors on six objective evaluation indicators. Table 5 displays the average values for six evaluation indices from 36 fusion result charts across four groups of eight bronze mirrors.
The method used in this paper outperforms the other comparative methods on four of these metrics, and on the remaining two metrics it is better than most of the comparative methods. EN reflects the richness of the image information: generally, the larger the EN value, the richer the information. The AG value indicates the sharpness of the fused image, with a larger value indicating a sharper image. MI represents how much information of the source images is acquired by the fused image; a larger value indicates that the fused result retains more information from the source images. QAB/F reflects the quality of the visual information obtained from the source images by the fused result, with larger values indicating that more of the original information is retained after fusion. PSNR reflects whether the image is distorted; ideally, a larger value indicates better image quality. VIF is an evaluation index that incorporates the quality of human visual perception and correlates strongly with human judgments of visual quality. In cultural relics protection, the specific nature of the work emphasizes the significance of subjective human judgment; hence, introducing VIF allows a comprehensive assessment of visual quality, aligning with the aim of effectively preserving cultural relics.
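Two of these metrics are simple enough to compute directly; the Python sketch below shows common variants of EN and AG (exact normalizations vary across papers, so these are illustrative forms, not necessarily those of [28]).

```python
import numpy as np

def entropy(img, bins=256):
    """EN: Shannon entropy of the grey-level histogram, in bits.
    Assumes 8-bit-style intensities in [0, 256)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 log 0 := 0
    return float(-np.sum(p * np.log2(p)))

def avg_gradient(img):
    """AG: mean magnitude of the horizontal/vertical finite differences,
    a common sharpness measure (one of several published variants)."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```

A flat image has zero entropy and zero average gradient, while an image with varied grey levels and strong transitions scores higher on both, matching the interpretation given above.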
The method employed in this study demonstrates the highest values in all four of the evaluation indexes AG, MI, QAB/F, and VIF. Notably, it excels particularly in the VIF index, suggesting that this method is adept at capturing more intricate details from the source image during the fusion of X-ray images of bronze mirrors. As a result, the fused image exhibits a richer texture, aligning more closely with human visual perception.
The method of this study is ranked at a medium level on the PSNR index, which indicates that some noise is present in the fusion process. However, due to the specificity of PSNR, the score cannot be completely aligned with the visual quality perceived by the human eye. The sensitivity of human vision to errors is not absolute, and its perception can be influenced by several factors and variations. In the context of the fusion of bronze mirrors, it is important to focus on the visual perception of the human eye. Therefore, the PSNR index ranking can be considered acceptable.
Figure 14 visualizes the six evaluation indicators for the first, second, third, and fourth sets of bronze mirrors under the different fusion methods. Although the method can still be improved to a certain extent, for example in its PSNR value, it demonstrates significant advantages in bronze mirror image fusion. Notably, given the large differences in some of the data, the values of QAB/F, PSNR, and VIF shown in Figure 14 are proportionally scaled for analysis.
In addition, this paper compares the computational costs of the proposed method and the comparison methods. For a fair evaluation, all methods were programmed in MATLAB 2018B and executed on a desktop equipped with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and 16.00 GB of RAM. The average running time of each method over the entire set of bronze mirror images was then measured and is listed in Table 6.
This table shows that the proposed method is competitive with the recent image fusion methods in terms of computational efficiency. Specifically, the running time of the proposed method is comparable to that of the PL-NSCT method and significantly better than those of the LRD, NMP, F-PCNN, LatLRR, IVFusion, and MMIF methods. The IFE method results in the shortest execution time but does not yield competitive results.
In summary, the method employed in this paper effectively preserves both the decorative and disease information present in the source images of bronze mirrors. It achieves a visual quality that closely resembles human perception and surpasses the performance of the other eight methods in terms of overall fusion results. This significant outcome holds great potential for advancing research and conservation efforts concerning bronze mirrors.

4. Conclusions and Discussion

This study presents a method for fusing X-ray images of bronze wares using pixel-level image fusion, combining a multi-scale morphological gradient in the NSST domain with a local topology CNP system. The proposed method was developed specifically for the X-ray imaging of bronze wares and takes into account the distinct characteristics and requirements of cultural relics research and preservation. By tailoring the fusion approach to the particularities of bronze wares, it effectively enhances the quality of X-ray images and assists in the analysis, research, and conservation of these valuable cultural relics.
In high-frequency information fusion, MSMG, which characterizes the contrast strength between each pixel and its neighboring pixels, is combined with WSEML. This combination ensures that the detail information of the source images is fully retained, which suits bronze ware research and conservation work by enabling scientific and comprehensive analysis. For the fusion of low-frequency information, a local topology CNP system is employed. This system draws inspiration from the impulse discharge mechanism of coupled neurons: rich complementary information stimulates more local neuron discharges, yielding enhanced clarity in the corresponding regions of the fused image. In this study, a comparative analysis was conducted on four groups of Han Dynasty bronze mirror X-ray images (eight images in total) against eight multi-scale fusion methods as the control group. In the experiments, the AG, MI, QAB/F, and VIF indicators increased by an average of 4%, effectively retaining the maximum amount of information on the mirror rim disease and mirror center decoration.
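As an illustration of the high-frequency activity measure, a multi-scale morphological gradient can be computed as a weighted sum of dilation-minus-erosion differences over square structuring elements of growing size. The following pure-NumPy sketch uses illustrative scale weights (1/(2k+1), favoring small scales) and is not the paper's implementation:

```python
import numpy as np

def morph_gradient(img, radius):
    """Morphological gradient with a (2r+1)x(2r+1) square structuring
    element: local maximum (dilation) minus local minimum (erosion)."""
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = win.max() - win.min()
    return out

def msmg(img, num_scales=3):
    """Multi-scale morphological gradient: weighted sum of single-scale
    gradients, with smaller structuring elements weighted more heavily."""
    img = np.asarray(img, dtype=np.float64)
    acc = np.zeros_like(img)
    for k in range(1, num_scales + 1):
        acc += morph_gradient(img, k) / (2 * k + 1)
    return acc
```

A flat region yields zero response at every scale, while edges and fine ornamentation produce large values, which is why such a measure is suitable for selecting high-frequency coefficients.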
This method fulfills the observation and protection requirements for the detailed features and condition of bronze mirror ornamentation. It enhances the edge and detail information within the image, thereby fully demonstrating the research value of bronze mirrors. It also reduces the difficulty of analyzing X-ray non-destructive testing images of bronze mirrors and offers a solution to the problems in such testing, helping to study and preserve the characteristics of bronze mirrors in cultural relic research and protection.
As the research progresses, a meaningful direction for further work has been identified. The image evaluation indicators used in this paper are general-purpose metrics, and the bronze mirror types and thicknesses studied here differ relatively little. To accommodate the analysis of more complex, three-dimensional, large-scale bronze artifacts, depth-aware operators should be incorporated into the evaluation metrics. Through in-depth research in this direction, the methods proposed in this paper can provide more valuable assistance for the protection and repair of bronze mirrors and even large bronze artifacts.

Author Contributions

Conceptualization, M.W.; methodology, M.W. and L.Y.; software, L.Y.; validation, R.C.; writing—original draft, L.Y.; writing—review and editing, M.W.; supervision, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61701388), the Cross-disciplinary Fund of Xi’an University of Architecture and Technology (Nos. X2022082 and X20230085), and the Fund of the Ministry of Housing and Urban-Rural Development (No. Z20230826).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data were obtained from Shaanxi History Museum and are available from the authors with the permission of Shaanxi History Museum. The data are not publicly available due to privacy.

Acknowledgments

We are grateful for the assistance provided by Jiankai Xiang from the Shaanxi Institute for the Preservation of Cultural Heritage, and Qianwen Zhang from the School of Information and Control Engineering at Xi’an University of Architecture and Technology. Xiang supplied us with a set of X-ray images of bronze mirrors, and Zhang assisted us in data visualization. We sincerely appreciate their valuable contributions.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Flowchart of the fusion method. A and B are the input images; F is the fused result image.
Figure 2. The schematic diagram of the three-level NSST decomposition.
Figure 3. Bronze mirror X-ray source images. (a) Clear X-ray image of the rim area; (b) clear X-ray image of the decorative area. (1)–(4) Bronze mirror images for the first to fourth groups, respectively.
Figure 4. Fusion results of the first set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 5. Fusion results of the second set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 6. Comparison of methods for unclear presentation of bronze mirror crack information and texture information. (a) the first set of bronze mirrors; (b) the second set of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 7. Comparison of methods for unclear presentation of crack information in the first group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 8. Comparison of methods for unclear presentation of texture information in the second group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 9. Fusion results of the third set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 10. Fusion results of the fourth set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 11. Comparison of methods for unclear presentation of crack information in the third group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 12. Comparison of methods for unclear presentation of crack information in the fourth group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 13. Comparison of methods for unclear presentation of texture information in the fourth group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 14. Visualization of six evaluation indicators for four sets of bronze mirror images across different fusion methods. (a) The first set of bronze mirror images; (b) the second set of bronze mirror images; (c) the third set of bronze mirror images; (d) the fourth set of bronze mirror images.
Table 1. The six evaluation results of the first set of bronze mirror fusion result graphs.

Method           EN      AG       MI      QAB/F   PSNR     VIF
LRD              4.5581  8.5712   3.5207  0.6997  64.3976  0.9010
NMP              4.8758  8.6371   3.3371  0.7238  65.9869  0.8979
F-PCNN           4.3725  8.3092   2.7447  0.6857  66.0080  0.8765
LatLRR           4.9789  7.1978   3.8001  0.6769  66.0513  0.8769
IVFusion         4.3124  9.3490   2.2765  0.5001  57.9493  0.8195
PL-NSCT          4.3624  8.3570   2.8639  0.6679  65.4475  0.9141
MMIF             4.7962  8.2818   3.0071  0.6199  65.4340  0.8704
IFE              4.6811  9.4547   4.2365  0.7381  62.6001  0.9237
Proposed method  4.8323  9.9722   4.5274  0.7617  65.8743  0.9797
Table 2. The six evaluation results of the second set of bronze mirror fusion result graphs.

Method           EN      AG      MI      QAB/F   PSNR     VIF
LRD              4.3628  8.6100  3.5340  0.7140  64.4138  0.9103
NMP              5.0672  8.7101  3.3504  0.7331  66.0203  0.9106
F-PCNN           4.4178  8.3208  2.7536  0.6904  66.1032  0.8859
LatLRR           5.1190  7.2113  3.8345  0.6886  66.0713  0.9023
IVFusion         4.5630  9.3981  2.3213  0.5143  57.9608  0.8209
PL-NSCT          4.4992  8.3862  2.9104  0.6731  65.4509  0.9275
MMIF             4.9005  8.3057  3.1346  0.6307  65.4408  0.8865
IFE              4.7209  9.4783  4.2633  0.7400  62.6172  0.9345
Proposed method  5.1031  9.8763  4.5406  0.7703  65.9549  0.9884
Table 3. The six evaluation results of the third set of bronze mirror fusion result graphs.

Method           EN      AG       MI      QAB/F   PSNR     VIF
LRD              4.7363  8.5674   3.5116  0.7018  64.3992  0.8996
NMP              4.9137  8.5399   3.3463  0.7194  65.9793  0.8846
F-PCNN           4.3255  8.2711   2.7491  0.6833  66.0695  0.8524
LatLRR           4.9931  7.1796   3.7994  0.6743  66.0584  0.8867
IVFusion         4.6186  9.3566   2.2805  0.5029  57.9520  0.8154
PL-NSCT          4.3971  8.3602   2.8599  0.6598  65.4397  0.9006
MMIF             5.1362  8.2784   3.0218  0.6204  65.4217  0.8653
IFE              4.5993  9.4596   4.2301  0.7283  62.6021  0.9222
Proposed method  4.7987  10.0025  4.5218  0.7594  65.9081  0.9832
Table 4. The six evaluation results of the fourth set of bronze mirror fusion result graphs.

Method           EN      AG      MI      QAB/F   PSNR     VIF
LRD              4.4968  8.5354  3.5225  0.7053  64.3978  0.8919
NMP              4.6341  8.7789  3.3494  0.7201  66.0031  0.8993
F-PCNN           4.2082  8.3824  2.7530  0.6882  66.1009  0.8800
LatLRR           5.3102  7.2188  3.8320  0.6730  66.0698  0.8901
IVFusion         4.4284  9.3535  2.3034  0.5143  57.9519  0.8130
PL-NSCT          4.3869  8.3718  2.8370  0.6720  65.4490  0.9070
MMIF             4.7183  8.3001  2.9949  0.6138  65.4335  0.8698
IFE              4.8363  9.4582  4.2589  0.7224  62.5987  0.9200
Proposed method  4.9647  9.9622  4.5390  0.7598  65.9207  0.9883
Table 5. Mean values of six evaluation indexes for 36 fused images.

Method           EN      AG      MI      QAB/F   PSNR     VIF
LRD              4.5385  8.5701  3.5222  0.7052  64.4021  0.9007
NMP              4.8727  8.6665  3.3458  0.7241  65.9974  0.8981
F-PCNN           4.3310  8.3193  2.7501  0.6869  66.0704  0.8739
LatLRR           5.1003  7.2020  3.8165  0.6782  66.0627  0.8890
IVFusion         4.4806  9.3643  2.2952  0.5079  57.9535  0.8172
PL-NSCT          4.4114  8.3688  2.8678  0.6682  65.4488  0.9123
MMIF             4.8872  8.2915  3.0396  0.6212  65.4325  0.8730
IFE              4.7079  9.4627  4.2427  0.7322  62.6034  0.9251
Proposed method  4.9274  9.9533  4.5322  0.7628  65.9145  0.9849
Table 6. Average running time (in seconds) across different methods.

Method           Avg. runtime (s)
LRD              386.1675
NMP              105.6395
F-PCNN           466.0509
LatLRR           42.1136
IVFusion         99.241
PL-NSCT          10.6795
MMIF             16.5562
IFE              0.5872
Proposed method  11.2355
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
