Article

Development of a New Non-Destructive Analysis Method in Cultural Heritage with Artificial Intelligence

1 Department of Conservation and Restoration of Cultural Properties, Ankara Hacı Bayram Veli University, Ankara 06830, Turkey
2 Department of Computer Engineering, Ankara University, Ankara 06830, Turkey
3 Faculty of Artificial Intelligence and Data Engineering, Ankara University, Ankara 06830, Turkey
4 Department of Physics Engineering, Ankara University, Ankara 06830, Turkey
5 Faculty of Medicine and Health Technology, Tampere University, 33720 Tampere, Finland
6 VTT Technical Research Centre of Finland, 33101 Tampere, Finland
* Author to whom correspondence should be addressed.
Electronics 2024, 13(20), 4039; https://doi.org/10.3390/electronics13204039
Submission received: 13 September 2024 / Revised: 2 October 2024 / Accepted: 11 October 2024 / Published: 14 October 2024

Abstract

Cultural assets are all movable and immovable assets that have been the subject of social life in historical periods, have unique scientific and cultural value, and are located above ground, underground or underwater. Today, the fact that most of the analyses conducted to understand the technologies of these assets require sampling, while non-destructive methods that allow analysis without sampling are costly, is a problem for cultural heritage workers. This study, prepared to find solutions to these national and international problems, aimed to develop a non-destructive, cost-minimizing and easy-to-use analysis method. Since the article aimed to develop a methodology, the materials were prepared for preliminary research purposes and were therefore limited to four primary colors: red and yellow ochre, green earth, Egyptian blue and ultramarine blue. These pigments were used with different binders. The produced paints were photographed in natural and artificial light at different light intensities, resized to 256 × 256 pixels and then used to train support vector machine (SVM), convolutional neural network (CNN), densely connected convolutional network (DenseNet), residual network 50 (ResNet50) and visual geometry group 19 (VGG19) models. The trained VGG19 model was then asked to classify, by their real identities, the paints used in archaeological and artistic works that had been analyzed with instrumental methods in the literature. In the test, the model classified the paints in artworks from photographs non-destructively with a 99% success rate, consistent with the result of the McNemar test.

1. Introduction

Cultural assets are all movable and immovable assets on the ground, underground or underwater that are related to science, culture, religion and fine arts from historical periods or that have been the subject of social life in prehistoric or historical periods and have original scientific and cultural value. Immovable cultural assets consist of rock cemeteries, inscribed, illustrated and embossed rocks, illustrated caves, mounds, tumuli, archaeological sites, acropolises and necropolises; castles, churches, synagogues, mosques, basilicas, monasteries, social complexes, old monuments and wall ruins; and frescoes, embossments, mosaics, fairy chimneys and similar immovable assets. Movable cultural assets consist of tiles, ceramics, sculptures, figurines, tablets, papyrus, parchment or documents written or depicted on metal, ornaments or jewelry, coins, stamped or inscribed tablets, manuscript or illuminated books, miniatures, engravings, oil or watercolor paintings of artistic value, fabrics and similar items.
Today, these assets are usually investigated by characterization methods that cause minimal destruction to the cultural asset, such as Raman Spectroscopy (RS), X-Ray Diffraction (XRD), X-Ray Fluorescence Spectroscopy (XRF), Fourier-Transform Infrared Spectroscopy (FTIR), Laser-Induced Breakdown Spectroscopy (LIBS), Inductively Coupled Plasma Mass Spectrometry (ICP-MS), Scanning Electron Microscopy Energy-Dispersive X-Ray Spectrometry (SEM-EDX) and Atomic Force Microscopy (AFM), and by non-destructive methods such as Portable X-Ray Fluorescence Spectroscopy (pXRF), Macro X-Ray Fluorescence (MA-XRF), Portable Raman Spectrometry (Portable Raman), Portable Fourier-Transform Infrared Spectroscopy (Portable FTIR), Reflectance Spectroscopy and Multispectral/Hyperspectral Imaging [1,2,3,4,5,6].
Since these analyses performed on cultural assets enable the recognition and distinction of a period, culture or artist, they provide information about the originality of the work to professionals including art historians, archaeologists and others. For the conservator-restorer, knowing the material chemistry of the work is also important for the conservation-restoration work to be carried out. Nevertheless, the necessity of employing destructive methods to obtain samples from the artwork, which is subject to legal permission, leads to hesitation among the individuals and organizations tasked with its preservation. Generally, destructive characterization methods are employed on samples obtained from areas that do not impact the aesthetic value of the cultural asset, on shed fragments and on amorphous groups. As a result, they offer information that reflects only a limited part of the entire work. Non-destructive methods provide ease of analysis for portable cultural assets; however, since material analyses of immovable cultural assets are limited to the area that the analyst can reach, works such as mosaics, tiles, wall paintings and hand-drawn works on a dome and its transition elements cannot be analyzed. Even when the work can be reached, a power source is required to operate some devices. Working directly on the work is not always safe with some devices, such as a Raman spectrometer, which tends to burn dark colors, so preliminary tests are required. For these reasons, analyses conducted to obtain information about a period or artist become a problem for art historians, archaeologists, conservators and restorers.
In addition to the cost, there is a need for a system that everyone can easily use to document period materials in archaeology, art history, conservation and restoration science, where analysis and interpretation are also lacking. This shortcoming means that information cannot be obtained about many cultural assets that are the common heritage of humanity, a situation that has recently been addressed in cultural heritage rights and human rights law [7]. In this study, prepared to find solutions to the problems experienced in the national and international arenas, the aim was to develop a non-destructive, cost-minimizing and easy-to-use analytical method for cultural assets by classifying digital (RGB) images of pigments used in all areas of cultural heritage. Harth investigated how pigments in cultural heritage could be studied using machine learning and found that a recent trend in the field is toward spectral imaging techniques for the chemical mapping of paint surfaces. Although this type of examination is favored in the literature because it costs less than non-destructive spectroscopic methods, it still requires budgeting. Classification and identification studies with deep learning architectures do exist in the literature on pigment and color science as a non-destructive approach, but they remain costly because they rely on data from multispectral/hyperspectral cameras and reflectance spectrometers [8,9,10,11,12,13].
In a study by Andronache and colleagues, spectral data of 45 pigments painted on canvas and wood were acquired with a Mu.S.I.S. NIR camera in 30 bands equally spaced in the range of 400–1000 nm. These data were processed with statistical hierarchical methods, fractal algorithms and complexity measurements. PCA combined with clustering methods allowed the spectral data to be referenced with the Mahalanobis connection distance and highlighted clusters directly related to the intensity differences in the NIR range for the segmented spectral cubes of each panel.
Thanks to this research, it was found that the spectral cube of a painting in the spectral range of 420–1000 nm could be identified with the closest example (plain or overpainted) of the painting’s surface database, and the combination of colors or pigments that made up the color could be identified. However, this method, which allows for the non-destructive identification of pigment, is again costly as it requires a multispectral camera [14].
In a study conducted by Mandal et al., pigments imaged hyperspectrally in the near infrared were classified using a CNN as well as Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), Spectral Information Divergence (SID) and Spectral Similarity Scale (SSS) algorithms and the hybrid combinations SID-SAM and SID-SCM. The study determined that the CNN performed better than the other machine learning algorithms [15].
In a study conducted by Pouyet et al., it was determined that material characterization was achieved when the hyperspectral reflections of historical pigments were examined with shortwave infrared (SWIR). Within the scope of the study, a new spectral database was developed using a deep neural network (DNN) to eliminate the complexity in the data of pigment references. When the historical image was examined with the created database, it was observed that the model showed good performance in identifying and mapping pigments in complex materials (spectrum matching) for unknown mixtures or multi-layered systems [16].
In a study conducted by Chen et al., the structures of pure (not having a mixture of two dyes) pigments in images were determined by the XRF method and their reflections were recorded with a hyperspectral camera. In the study, which allowed the analysis of a hyperspectral image using a combination of convolutional neural networks and the SCM spectral metric function, image segmentation was performed based on a database of pure elementary reflections. The results obtained produced accurate results that were verified by the use of analytical techniques, namely XRF analysis [17].
Beyond historical pigments, similar classification methods have been used to identify pigments in the food field. In a study conducted by Prilianti et al., digital images of three main photosynthetic pigments (anthocyanin, chlorophyll and carotenoid) found in plant leaves were taken with a multispectral camera, and a convolutional neural network (CNN) model was developed to provide a real-time analytical system. The input of the system is a multispectral digital image of a plant leaf and the output is an estimate of the pigment content. Of the three CNN models tested (ShallowNet, AlexNet and VGGNet), the ShallowNet-based architecture proved best for photosynthetic pigment estimation, achieving a satisfactory in-sample MSE of 0.0037 and an out-of-sample MSE of 0.0060 on real data ranging from −0.1 to 2.2 [18].
In a classification made by Kazdal on black teas of two different qualities, a CNN was used along with algorithms such as SVM and Naive Bayes. Classification was performed with datasets consisting of features obtained from RGB, HSV and YCbCr color spaces of images for SVM and Naive Bayes classifiers. The SVM algorithm showed a high accuracy rate of 99% in the test with features obtained from the YCbCr color space of two different quality black teas. In addition, the CNN made a classification with 98.52% accuracy on training images and 98.56% accuracy on images used for verification without the need for any feature extraction process on the images due to its structure [19].
In the classification made by Büyükarıkan and Ülker on fruits, a CNN was again used. From the ALOI-COL dataset of 1000 classes, fruit images of 29 classes captured at 12 different color temperatures were classified using the CNN architectures AlexNet, VGG16 and VGG19. The images in the dataset were enriched with image processing techniques, yielding 51 images per class. The study examined two training-test splits, 80–20% and 60–40%. After 50 epochs, the test data were classified with 100% accuracy by the AlexNet (80–20%) and VGG16 (60–40%) architectures and 86.49% accuracy by the VGG19 (80–20%) architecture [20]. Flachot, who investigated the effect of light on color, used the ResNet, ConvNet and DeepCC deep neural networks (DNNs) to identify the correct Munsell chip serving as the surface reflectance of an object in scenes of three-dimensional objects in a room, rendered under 278 different natural lighting conditions. In his research, he found that ResNet and ConvNet performed well, while DeepCC represented colors along the three color dimensions of human color vision [21].
In a study conducted by Bianco et al. on color constancy and illumination estimation, a CNN was used, and in a study conducted by Choi et al., a deep convolutional neural network (DCNN) with a ResNet18 architecture was used [22,23]. In a study conducted by Huang et al. to estimate pigment mixtures in watercolors, a CNN with a loss function designed to minimize perceptual color differences achieved an average ΔELab of 2.29, with 88.7% of predictions at ΔELab < 5 [24].
Color, an important criterion in image classification, was also used by Viscaino et al. in a diagnostic system for middle and outer ear diseases. They developed a computer-aided diagnosis (CAD) system with an F1 score of 96% by training a CNN on RGB images of the eardrum for each ear disease [25]. A study by Sáez-Hernández and colleagues used a chemometric support vector classifier to estimate CIELAB values from the RGB values of images taken with a colorimetrically characterized smartphone in order to distinguish between different historical inorganic pigments used in murals. The study showed that RGB images taken with a smartphone can be used for color classification [26].
In the research conducted by Al-Omaisi Asia et al., a CNN, a deep learning application, was used on fundus photographs to distinguish the stages of diabetic retinopathy (DR), taking advantage of different residual neural network (ResNet) structures, namely ResNet-101, ResNet-50 and VggNet-16 [27]. In a study conducted by Kwiek and Jakubowska, vitamin C was detected from images by standardizing color in chemical solutions using deep learning [28]. Singh, who empirically investigated the importance of color in object recognition for CNNs, examined five different datasets with different architectures (MobileNet-v2, DenseNet-121, ResNet-50, BagNet-91, BagNet-9) and found that the architectures exhibited similar behavior in terms of color importance across the datasets. Singh's study provided empirical evidence highlighting the high impact of color for CNNs [29]. In a study conducted by Rachmadi and Purnama, a CNN architecture was used to develop a vehicle color recognition system, trained on data from the Chen et al. study. The dataset contained 15,601 vehicle images in eight vehicle color classes, namely black, blue, cyan, gray, green, red, white and yellow. Each sample was resized to 256 × 256 × 3, and four different color spaces, namely RGB, CIE Lab, CIE XYZ and HSV, were tested; the best accuracy was obtained with the RGB color space. The resulting convolutional neural network recognized vehicle color with 94.47% accuracy [30].
Based on the findings of these researchers, deep learning and convolutional neural networks were used in this study because they were successful in image classification [31,32].

2. Materials

Since the purpose of this article was to develop a new non-destructive method with artificial intelligence, the materials were prepared as a preliminary study and were therefore limited to four primary colors: red and yellow ochre, green earth, Egyptian blue and ultramarine blue, the most widely used pigments in history. Two blue pigments were chosen in order to see how a CNN performed in pigment classification.
In order to achieve the purpose of this study, paint was prepared with ultramarine, Egyptian blue, green earth, yellow and red ochre pigments, and different binders.
These binders consisted of materials used in fresco, secco, tempera, tempera grassa, oil paintings and watercolor techniques [33,34,35]. The reason for using different binders in paint is that the pigment volume concentration ratio changes according to the binder.
This makes the paint matte or shiny, causing it to reflect light differently [36,37]. This feature in the paint causes the color to be perceived differently.

2.1. Fresco-Secco

Fresco and secco examples were made in a wooden frame 1 cm thick, 7.5 cm wide and 15 cm long. A 3:1 mixture of washed river sand and Kaymak lime was used for the bottom layer of the wooden frame, and a 1:1 Kaymak lime and marble dust mortar was used on the surface of the dried ground. Before the mortar was applied to the ground, the fresco paints, consisting of a mixture of calcium hydroxide, pure water and pigment, were prepared. The prepared paints were applied on the wet lime plaster in 2.5 cm squares; the other binder type in the fresco technique was pure water. When the lime plaster dried, the secco paints, consisting of a mixture of egg yolk and pigment, were applied to the other squares. The construction phase of this sample is given in Figure 1.

2.2. Tempera-Tempera Grassa

Tempera and tempera grassa samples were made on two pieces of plywood 1 cm thick, 7.5 cm wide and 20 cm long. A stucco consisting of Bologna plaster and rabbit glue in a 1:17 ratio was applied to the surface in 10 layers, alternating horizontally and vertically. After the surface of the dried plaster was smoothed with sandpaper, the templates where the paints would be applied were drawn with a pencil. In the tempera technique, egg yolk and egg white were used to bind the pigment to the surface in 2.5 cm squares. In tempera grassa, known as oil tempera, pigments were mixed with egg yolk (YS) and drying oils such as linseed oil (KTY), walnut oil (CY) and poppy seed oil (HY); the same combinations were prepared with egg white (YA). The paints were applied to the surface with a number 2 brush. The construction phase of this sample is given in Figure 2.

2.3. Oil Paint

Oil painting samples were made on a template drawn with a pencil on canvas with dimensions of 7.5 cm in width and 15 cm in length. Yellow and red ochre, green earth, Egyptian blue, and ultramarine were used in the pigment of the paint, and cold-pressed linseed oil, walnut oil and poppy oil were used in the binder. Drying oil was added to some pigment on the palette and mixed with a spatula. The pigment mixed with the spatula was ground with a muller until it became smooth. The paint prepared was applied on canvas with a number 2 oil paint brush. The construction phase of this sample is given in Figure 3.

2.4. Watercolor

The samples were made on a 2.5 cm wide and 15 cm long stencil on watercolor paper. The materials used in the watercolor were gum arabic and pigment. The gum arabic and pigment were mixed with a spatula and smoothed with a muller. The prepared paint was applied to the stencil drawn on the watercolor paper with a number 2 oil paint brush. The construction phase of this sample is given in Figure 4.

2.5. Tone Scale

The lightness and darkness levels of the pigments according to their color tones were prepared on watercolor paper with gum arabic and a pure water binder. In the tone scale, the lightest and darkest tone values of the pigment were created according to the color density. Since the aim was to train deep learning on the pure tone of the pigment, the tone scale was not made with black and white tube paints. Instead, the color density was varied by applying the paint in one, two, three, four and five layers in the first template and by increasing the pigment density in the second template. The construction phase of this sample is given in Figure 5.

2.6. Chemical Structure of Materials

The following spectroscopic analyses were performed on the prepared samples to form a dataset for training the artificial intelligence and to determine whether the samples had content similar to the dyes used in cultural assets.

2.6.1. X-ray Fluorescence Spectroscopy

X-Ray Fluorescence Spectrometry (XRF) is a non-destructive analytical technique that uses the interaction of X-rays with a material to determine its elemental composition for quantitative and qualitative analysis. The chemistry of a sample is obtained by measuring the fluorescent (or secondary) X-rays emitted from the sample following its excitation by a primary X-ray source. In this study, the technique was used to obtain information about the chemical content of the pigments. The SPECTRO X-Lab 2000 PEDX spectrometer used in the X-Ray Fluorescence analysis operates as a Polarized Energy-Dispersive (PED-XRF) system. The spectrometer, which can analyze elements from sodium (Na, atomic number 11) to uranium (U, atomic number 92), has a sensitivity of 0.5 ppm for heavy elements and 10 ppm for light elements. Information about the analyzed pigments is given in Table 1, and the analytical results are given in Table 2. When the XRF results were examined, it was seen that the yellow ochre pigment of the sample coded S1 had an FeO(OH)·nH2O chemistry, and the sample coded Y1 contained the celadonite and glauconite minerals K[(Al,Fe3+),(Fe2+,Mg)](AlSi3,Si4)O10(OH)2.
The sample coded K1 had the chemical structure of the red ochre pigment, α-Fe2O3; the sample coded MM1 had the compounds of Egyptian blue (CaCuSi4O10, i.e., CaO·CuO·(SiO2)4); and the sample coded UM1 had the composition of the ultramarine pigment, Na7Al6Si6O24S3. The pigments from which the samples were created were therefore directly compatible with the structure of the pigments used in historical painting, and the pigments introduced to the deep learning models were comparable to those in historical paintings.

2.6.2. Confocal Raman Microprobe Spectroscopy

Raman spectroscopy is a non-destructive chemical analysis technique that uses the interaction of monochromatic light usually from a laser (in the visible, near infrared or near ultraviolet ranges) with matter to determine vibrational modes of molecules.
This technique is based upon the interaction of light with the chemical bonds within a material, measuring the light scattered at a specific angle following irradiation of the sample with a powerful laser source. In this study, the method was used to identify the chemical fingerprints of the binders used in the paints. The spectrometer used in this research was a Thermo Fisher (Waltham, MA, USA) DXR Raman with an Olympus microscope. The wavelength of the laser source was 633 nm. The analyses were performed in the range of 100–3300 cm−1. The results of the analyses performed on the binders are given below in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.
Polysaccharides were seen in the weak bands of the Raman spectrum between 1500 and 1800 cm−1. Different gums are distinguished by the spectral features of their polysaccharides; gum arabic is identified by COC sugar ring vibrations at 1000 and 800 cm−1 [38]. When the Raman band of the gum arabic in Figure 6 was examined, COC vibrations were seen at 844.34 and 1080.93 cm−1.
In Figure 7, egg yolk gave Raman peaks at 441.32, 1625.90, 2330.78 and 2378.10 cm−1, while egg white in Figure 8 gave Raman peaks at 1004.24, 1235.94, 1448.05, 1656.91 and 2941.02 cm−1. Cold-pressed linseed oil in Figure 9 showed a strong peak at 996.09 cm−1 and a weak peak at 1655.27 cm−1, while cold-pressed walnut oil in Figure 10 showed strong peaks at 966.72 and 1015.67 cm−1 and weak peaks at 1656.91, 2854.54 and 2896.96 cm−1. In Figure 11, cold-pressed poppy seed oil gave strong peaks at 943.87, 994.45 and 1017.30 cm−1 and weak peaks at 2846.65 and 2910.02 cm−1. The strong peak seen between 900 and 1000 cm−1 in all three oils represents the CH=CH bond, while the peak near 1600 cm−1 represents the C=C double bond, which promotes the formation of solid films in drying oils [39].

3. Methods

A dataset was created by taking photographs of the samples. An image is a visual representation of something, formed as a result of light events [40]. As the wavelength of light changes, the pixel values in each color channel of the image captured by a camera change, because different wavelengths of light affect the light reflected from the object's surface [41].
Light is energy from a source that reaches our eyes as electromagnetic waves. According to Lambert's law, when light reaches the surface of an object, some portion of it is reflected, some is transmitted and the rest is absorbed. Since the amounts of reflected, transmitted and absorbed light vary according to the properties of the object's surface, its bulk structure and the wavelength of the light, the perception of color also changes [40,42,43]. The different colors in electromagnetic waves are due to the different wavelengths (frequencies) and vibrations of these waves; in other words, each color sends us vibrations of different wavelengths. Wavelength values according to light colors are given in Table 3 [43]. The whiteness of a light source is characterized by its correlated color temperature (CCT) and is defined in Kelvin. White light sources illuminating an object are divided into three groups according to their color temperatures; this grouping is given in Table 4. The table shows that as the color temperature decreases, the image becomes more reddish, and as it increases, the image becomes more bluish [41].
In natural light, the color temperature changes according to the weather and the movement of the sun, so the perception of color changes. The color temperature of sunlight filtered by the atmosphere is 5600 K; this is the color temperature of noon on a cloudless day. In the morning and evening hours, this value drops below 4000 K, while under a clear blue sky it increases to 10,000 K or above. The color temperature of a tungsten incandescent lamp used in homes is 2700 K. Although the amount of light emitted from a lamp is not related to the color of the lamp, it affects a viewer's ability to see the object. Another factor that affects the viewer is the background of the surface where the paint is placed, because light is not absorbed or reflected only by the paint. For example, since a black background reflects little light, the colors on it appear clearer, more vivid and lighter, while a white background reflects almost all light and the colors on it appear darker [43].
For this reason, the images of the samples in the dataset were taken on black and white backgrounds using different light sources and intensities. A Nikon D7100 digital camera (Nikon, Tokyo, Japan) was used for the photo shoots. In natural light shots, the ISO setting was 100, the aperture 8 and the shutter speed 1/60. In artificial light shots, the ISO was set to 200 to make better use of the light, with the same aperture of 8 and shutter speed of 1/60. The intensity of the light was measured with an Illuminance UV Recorder TR-74Ui lux meter. The photos were taken at a size of 6000 × 4000 pixels. No flash or color calibration card was used, because the artificial intelligence would be trained on the pigment type, not on the brightness of the light. In other words, if the artificial intelligence had been asked to estimate the color tone of the pigment or the light used, the calibration card could have served as a reference while creating the data; here, however, the artificial intelligence was asked to recognize the pigment. For this reason, the pigment was photographed under different light sources and light intensities, and a lux meter was used as a more objective means of recording the change in light on the pigment. In the photo shoots, the fresco-secco, tempera, tempera grassa, oil paint, watercolor and tone scale samples, the pigments used in the production of the paints, and another sample prepared to observe the color of the pigment in light and shadow were used. This last sample was prepared on watercolor paper with a mixture of gum arabic and pigment and was folded in four to create light and shadow, because when painters create their works, they determine the colors of objects according to the light source in the work.
For this reason, while a pigment appeared in its color under daylight in some pictures, it appeared darker or lighter in situations where the light varied. This is related to the dominant light source in the picture and how this source illuminates the environment. The artist achieved this change in the pigment by darkening the paint with black or lightening it with white or by making the color more green, reddish or orangey. Within the scope of this project, only light was used as a source to determine this change in the pigment. The pigment used in the photo shoots and the sample created to see the color tone based on light and shadows are given in Figure 12 and Figure 13.
Natural Light: Since the appearance of the samples in natural light changed according to the angle of incidence of the sun's rays, photographs were taken outdoors at sunrise, noon and sunset on black and white backgrounds. According to the meter data, the light was measured at 3415 K at sunrise; 1390, 8185 and 7591 K at noon; and 5962, 5317, 4651 and 3821 K toward sunset. The photo shooting environment is given in Figure 14 and Figure 15.
Artificial Light: Artificial light shots were taken under white and yellow light. LED was used as the white light source and an incandescent lamp was used as the yellow light source. LED light shots were taken in the Life of Photo product shooting tent. There were 120 LED lamps with a power of approximately 50 W on the upper part of the shooting tent. In this tent environment, samples were photographed under two different conditions. The first of these was adjusting the LED lights in the tent from the lowest to the highest level, and the second was keeping the LED lights constant at the highest light intensity and selecting different color temperatures on the camera. Under both conditions, photos were taken using black and white backgrounds. An example of the photo shoots is given in Figure 16.
The light values recorded for the photographs taken under the first condition were the following: 1029 K, 1239 K, 1325 K, 1895 K, 2214 K, 3031 K, 3546 K, 4024 K, 5015 K and 6512 K. The photographs taken under this condition are given in Table 5. Under the second condition, the LED was set to 5000 K in accordance with daylight, and the photographs were taken at the different color temperature options in the camera's white balance setting. These photographs are given in Table 6. The camera was set to 2700 K for the warm white image, 4000 K for the natural white image and 5600 K for the cold white image. Since a blue tone was observed in the pigments in the photographs taken at 2700 K, they were not used in the dataset.
The second light source in artificial lighting, the incandescent lamp, was used to see the colors of the pigments in a dim yellow light environment. The light intensity of the incandescent lamp was measured as 2888 K in the lux meter. The photo shooting environment is shown in Figure 17.
In addition to natural and artificial light, photographs of the pigments in their wet and dry states were used as a dataset, because the pigments could be observed to lighten or darken in color during the wetting and drying processes. These data obtained while creating the samples are given in Figure 18.

3.1. Dataset Creation

The photographed samples were cropped into squares using the Image Cropper Pro application, and the images of each pigment were saved, numbered sequentially, in their own folder. The name K1 was used for the red ochre pigment, MM1 for Egyptian blue, SO1 for yellow ochre, YTP1 for green earth and UM1 for ultramarine blue. An example of the pigment folders is given in Figure 19. A total of 8332 images were created: 1643 of Egyptian blue, 1620 of red ochre, 1691 of yellow ochre, 1682 of ultramarine and 1696 of the green earth pigment. In creating the images, care was taken to keep the classes balanced and close to each other in number. The images were converted to 256 × 256 pixels in the FastStone Photo Resizer application and prepared for the working environment.
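As an illustration, a dataset with this folder structure can be loaded and split directly in Keras. The sketch below is a minimal example that assumes one subfolder per pigment code (K1, MM1, SO1, YTP1, UM1) under an assumed "pigments/" directory and an arbitrary seed; the paper does not specify how the 70/20/10 split was implemented.

```python
# Minimal sketch (assumed details: directory name, seed and split mechanics).
# Loads the per-pigment folders of 256 x 256 images and splits them 70/20/10.
import tensorflow as tf

# 30% is held out first, then divided into test (20%) and validation (10%).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pigments/",            # one subfolder per pigment class
    validation_split=0.3,
    subset="training",
    seed=42,                # assumed seed for reproducibility
    image_size=(256, 256),
    batch_size=32,
)
holdout_ds = tf.keras.utils.image_dataset_from_directory(
    "pigments/",
    validation_split=0.3,
    subset="validation",
    seed=42,
    image_size=(256, 256),
    batch_size=32,
)
# Split the 30% holdout into test (2/3 of it) and validation (1/3 of it).
n_holdout = holdout_ds.cardinality().numpy()
test_ds = holdout_ds.take(2 * n_holdout // 3)
val_ds = holdout_ds.skip(2 * n_holdout // 3)
```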

3.2. Working Environment

Visual Studio Code 1.87.2 was used as the working environment and Python (version 3.9) as the programming language. Training and testing of the artificial intelligence algorithms were performed on an AMD Ryzen 5 3600 6-core processor and an NVIDIA GeForce GTX 1080 Ti (11 GB) graphics card. Unlike the deep learning models, the SVM machine learning algorithm was run on the processor using the Python scikit-learn library. The TensorFlow library developed by Google was used for the deep learning algorithms, and neural network modelling was performed with Keras, which was preferred because it works with TensorFlow and makes neural network modelling more practical. Since models built with libraries such as TensorFlow run more efficiently on graphics cards, the models developed and tested in this structure were processed on the graphics card; processing data on a graphics card is less costly than on a processor.
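A quick way to confirm that TensorFlow will use the graphics card rather than the processor is to list the visible GPU devices. This is a generic check, not a step reported in the paper:

```python
# Check that TensorFlow sees the GPU (e.g., the GTX 1080 Ti) so that
# training runs on the graphics card rather than falling back to the CPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)  # an empty list means CPU-only execution
```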

3.3. Classification Models

A support vector machine (SVM), convolutional neural network (CNN), DenseNet, VGG19 and ResNet50 were used to classify the data. The difference between the CNN and the other models was that its architecture was specially designed for the classification of historical pigments, while the other models were pre-trained. The reason for using more than one model was to compare the CNN's performance against the pre-trained models and to find the model that gave the best result in the classification of historical pigments.
In the context of color classification, CNN, SVM, ResNet50, DenseNet and VGG19 exhibit unique advantages and limitations. CNNs are naturally well suited for capturing local patterns like color gradients and texture, making them effective in color-based tasks. However, their performance can degrade on more complex color spaces due to the reliance on fixed filter sizes, which may miss subtle color variations. SVMs, although highly efficient for smaller, well-defined feature spaces, often struggle with the intricate relationships in color data, especially when subtle hues or overlapping shades must be distinguished. SVMs excel in linear separability but require extensive feature engineering to match the flexibility of CNNs.
ResNet50, with its residual connections, is particularly adept at capturing color relationships in deeper networks, allowing it to excel in tasks that require understanding finer color distinctions. The added depth, however, can make ResNet50 overcomplicated for simple color classification tasks, leading to unnecessary computational overhead. DenseNet’s dense connections, which aggregate color features from multiple layers, offer a more compact solution, potentially improving accuracy by reusing learned color features. This reduces the need for redundant computations, but the network’s structure demands higher memory, which can be limiting in resource-constrained environments.
VGG19, while effective in basic color classification, tends to suffer from inefficiency due to its deep, sequential architecture and large parametric size. Though it can handle the hierarchical structure of colors, it lacks the innovations seen in more modern architectures like ResNet50 and DenseNet, making it slower and more prone to overfitting, particularly in smaller color datasets. Therefore, while all the models have their utility in color classification, deeper architectures like ResNet50 and DenseNet offer a more nuanced handling of complex color data, while a SVM and CNN may suffice for simpler color tasks with fewer computational demands.

3.3.1. Support Vector Machine

Support vector machines are a machine learning method developed by Vladimir Vapnik and Alexey Chervonenkis in the 1960s on the basis of statistical learning theory. The method is used in data mining for classification problems in datasets where the patterns between variables are unknown [44]. An SVM separates data with a line in two-dimensional space, a plane in three-dimensional space and a hyperplane in higher-dimensional space. The best hyperplane for an SVM is the one with the largest margin between the two classes, and the points closest to the hyperplane are called support vectors. The training data are used to find the most suitable hyperplane, and the test data are classified according to the side of the hyperplane on which they fall [45].

3.3.2. Convolutional Neural Network

Convolutional neural network architecture is a supervised machine learning algorithm and a type of artificial neural network used to analyze high-dimensional data. It is a multi-layer feedforward neural network formed by stacking many hidden layers sequentially [46]. A CNN basically consists of three parts: an input layer, hidden layers and an output layer. The hidden layers consist of convolutional layers, pooling layers and fully connected layers [47].
CNNs are designed to learn spatial hierarchies of features automatically and adaptively through backpropagation, using multiple building blocks such as those in the hidden layers [46]. They are used to extract the features of an image and produce a result in line with the purpose for which they are used [47]. The CNN designed for the classification of historical pigments had four convolutional layers, four pooling layers, one flattening layer and one fully connected layer. A summary of the CNN model is shown in Table 7 and its architectural structure is shown in Figure 20.
The convolutional layer of this architectural structure was used to perform feature extraction by applying a filter to the image and to detect colors. The pooling layer was used to reduce the dimensions of the input and make the feature extraction process more accurate [47].
The third layer, which was a fully connected layer, served to provide the output necessary to classify the extracted features. The convolutional layer served as a combination of linear and nonlinear operations. The pooling layer provided a downsampling operation that reduced the in-plane dimensionality of the feature maps to add translational invariance to small distortions and reduce the number of subsequent learnable parameters. The output feature maps of the last convolutional or pooling layer are typically flattened and connected to one or more fully connected layers, also known as dense layers [46].
In all convolutional layers of the CNN model designed for the classification of the historical pigments, the ReLU activation function was used; 32 filters were applied in the first convolutional layer, 64 in the second, and 128 in the third and fourth. A pooling layer was applied after each convolutional layer. Classification was performed using a flattening layer, which turned the outputs into a one-dimensional vector, and a fully connected layer. A total of 833,733 parameters were used. The pigment images were cropped using the Image Cropper Pro application and, after being resized to 256 × 256 with the FastStone Photo Resizer application, were sent to the CNN model for classification without any separate feature extraction.
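A minimal Keras sketch of this architecture is given below. The kernel and pooling sizes (3 × 3 and 2 × 2) and the single softmax head are assumptions, since the paper reports only the filter counts and the 833,733-parameter total; the real configuration may differ in these details.

```python
from tensorflow.keras import layers, models

# Sketch of the custom CNN: four conv layers (32, 64, 128, 128 filters,
# ReLU), each followed by max pooling, then a flattening layer and a
# softmax head for the five pigment classes. Kernel/pool sizes are
# assumed; the paper's exact head that yields 833,733 parameters may
# differ slightly.
model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(5, activation="softmax"),  # five pigment classes
])
model.summary()  # prints the per-layer shapes and parameter counts
```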

3.3.3. Densely Connected Convolutional Network

DenseNet is a type of CNN in which all layers are directly connected to each other (with matching feature map sizes) through dense blocks. Each layer receives additional inputs from all previous layers and passes its own feature maps to all subsequent layers [48]; in other words, each layer is connected to the others in a feedforward manner [49]. Using a structure similar to the ResNet architecture, the features produced by a layer in DenseNet are given as input to all subsequent layers [50].

3.3.4. Residual Network 50

ResNet is a type of neural network introduced in 2015 by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun in the article “Deep Residual Learning for Image Recognition” to facilitate the training of significantly deeper networks. Designed with reference to the VGG-19 architecture, its 34-layer plain baseline uses fewer filters and has lower complexity than VGG networks. The innovative aspect of ResNet is its block-based solution to the vanishing gradient problem that arises with increasing depth, one of the fundamental problems of deep learning architectures. As the network depth increases, the convolutional blocks in the upper layers are connected at certain intervals to the outputs of the convolutional blocks in the lower layers, preventing the gradient from vanishing. The network takes input images of 256 × 256 in size and contains approximately 25 million parameters. Having fewer trainable parameters than the VGG architecture, it is widely used in image classification due to its high performance [50].
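As an illustration of how such a pre-trained backbone is adapted to the five pigment classes, the following sketch loads ResNet50 with ImageNet weights and replaces the classification head. The pooling layer and head are assumptions; the exact fine-tuning setup used in the study is not specified.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch (assumed setup): pre-trained ResNet50 backbone, fed 256 x 256
# RGB images, with a new five-class softmax head for the pigments.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(256, 256, 3)
)
model = models.Sequential([
    base,                              # residual feature extractor
    layers.GlobalAveragePooling2D(),   # collapse spatial dimensions
    layers.Dense(5, activation="softmax"),
])
```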

3.3.5. Visual Geometry Group 19

VGG is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of Oxford University in 2014; it achieved successful results on ImageNet, a dataset containing more than 14 million images in 1000 classes, in the ILSVRC-2014 competition. The network was created by improving on the AlexNet convolutional neural network model and takes inputs of 224 × 224. The architecture has approximately 143 million parameters and consists of 16 convolutional layers and three fully connected layers [50].

4. Experiments and Results

In this section, the training experiments and the results obtained from the five models used in this study are discussed in detail.

4.1. Training

The CNN, DenseNet, VGG19, Resnet50 and SVM models were applied for the detection of historical pigments. Before applying the models, the dataset, consisting of 8332 images of 256 × 256 pixels in total, was divided into 70% training, 20% testing and 10% validation. A total of 1665 images were used for testing in the five models.
For deep learning algorithms, it is generally recommended to decrease the learning rate as training progresses. In the four deep learning models, the initial learning rate was 0.001 and it was decreased every 10 epochs. The momentum value was set to 0.9 and the batch size to 32. Each model was run for 20 epochs and the weights with the highest validation value were recorded.
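A sketch of this training configuration in Keras follows. The decay factor applied every 10 epochs is assumed to be 0.1 (only the interval is reported), the checkpoint file name is illustrative, and `model`, `train_ds` and `val_ds` are assumed to come from the sketches above.

```python
import tensorflow as tf

# Sketch of the reported schedule: SGD with momentum 0.9, initial learning
# rate 0.001 reduced every 10 epochs (decay factor assumed), batch size 32
# (set when the datasets were built), 20 epochs, keeping the weights with
# the best validation accuracy.
def schedule(epoch, lr):
    # Multiply the learning rate by 0.1 at epochs 10, 20, ... (assumption).
    return lr * 0.1 if epoch > 0 and epoch % 10 == 0 else lr

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=20,
    callbacks=[
        tf.keras.callbacks.LearningRateScheduler(schedule),
        tf.keras.callbacks.ModelCheckpoint(
            "best_weights.keras",            # illustrative file name
            monitor="val_accuracy",
            save_best_only=True,
        ),
    ],
)
```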
For the SVM, a classifier with a linear kernel was used to address the classification task. The linear kernel was chosen for its effectiveness in linearly separating data and its computational efficiency in high-dimensional feature spaces. Specifically, the SVM model was implemented using the SVC class from the scikit-learn library, with the hyperparameter C set to 1.0 and a fixed random state of 42 to ensure reproducibility. The regularization parameter C controls the trade-off between maximizing the margin and minimizing the classification error; a value of 1.0 was selected as a balance between these objectives. The classification performance of the models was evaluated with the accuracy, precision, recall and F1 score measures. The results of the models are given below.
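The SVM setup described above can be reproduced in scikit-learn roughly as follows. The flattened-pixel feature representation and the placeholder data are assumptions for illustration; the paper does not state what features were fed to the SVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Sketch matching the stated setup: linear-kernel SVC with C=1.0 and
# random_state=42. Random placeholder arrays stand in for the 256 x 256
# pigment images; flattened pixels are an assumed feature representation.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 256, 256, 3), dtype=np.uint8)
labels = rng.integers(0, 5, size=100)  # five pigment classes

X = images.reshape(len(images), -1) / 255.0  # flatten and scale to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42
)
clf = SVC(kernel="linear", C=1.0, random_state=42)
clf.fit(X_train, y_train)
# Accuracy, precision, recall and F1 per class, as reported in the paper.
print(classification_report(y_test, clf.predict(X_test)))
```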

4.2. Results

The SVM, CNN, DenseNet, ResNet50 and VGG19 classification results are presented below in Table 8, Table 9, Table 10, Table 11 and Table 12, respectively. According to the SVM result report, the accuracy over all pigments was 97%, and the F1 scores were 97% for green earth, 100% for red ochre, 96% for ultramarine, 98% for yellow ochre and 95% for Egyptian blue. According to the CNN classification result report, the accuracy over all pigments was 97%, and the F1 scores were 98% for green earth, 100% for red ochre, 96% for ultramarine, 98% for yellow ochre and 95% for Egyptian blue.
Conforming to the DenseNet classification result report, the accuracy rate of all pigments was 100%, and the F1 scores were 100% for the green earth pigment, 100% for red ochre, 99% for ultramarine, 100% for yellow ochre and 99% for Egyptian blue. As reported by the ResNet50 classification result report, the accuracy rate of all pigments was 100%, and the F1 scores were 100% for the green earth pigment, 100% for red ochre, 100% for ultramarine, 100% for yellow ochre and 100% for Egyptian blue. For the VGG19 classification result report, the accuracy rate of all pigments was 99%, and the F1 scores were 99% for the green earth pigment, 100% for red ochre, 99% for ultramarine, 100% for yellow ochre and 99% for Egyptian blue.
A summary of the results of the models used in the classification of historical pigments is given in Table 13. McNemar’s test was performed to see the success performance of the models.

4.3. McNemar Test Results

The McNemar test is a variant of the χ2 test and is a nonparametric test used to analyze matched data pairs; in machine learning, it is used to determine the strengths and weaknesses of each learning algorithm [51]. The test was conducted to compare the models used in the classification of historical pigments and to find the most successful model. In the test, the PA value of each of the 1665 images was calculated, and the PA values of the models were then compared with each other for each image. In this comparison, the model with the higher PA value was assigned 1 and the model with the lower value 0; equal PA values were not included in the calculation. The z value was then calculated according to the formula, allowing the PA values over the 1665 test images to be compared [47]. Sample values for comparison are shown in Figure 21.
All PA values were compared as in the examples above. For each image, A(1,0) was taken as 1 if the first model's PA value was higher than the second's, and A(0,1) was taken as 1 if the second model's PA value was higher. The 1 values from all comparisons were summed and the z values were calculated [47]. The calculated z values are shown in Table 14. The directions of the arrows (←, ↑) used in the comparison of the models indicate the more successful model, while the gray areas show that the comparison was already made with a previous model. According to Table 14, the DenseNet, ResNet50 and VGG19 models were more successful than the CNN, and the CNN was more successful than the SVM. DenseNet was more successful than the SVM and VGG19, while ResNet50 was more successful than DenseNet, VGG19 and the SVM. The VGG19 model was more successful than the SVM.
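In its standard form, the statistic compares the counts of test images that exactly one of the two models classified correctly: z = (|n01 − n10| − 1) / √(n01 + n10). The sketch below shows this computation; the paper's PA-based bookkeeping follows the same idea, and the example arrays are hypothetical.

```python
import numpy as np

# Standard-form McNemar comparison of two classifiers: for each test image,
# record whether each model classified it correctly, count the images on
# which exactly one model was correct, and compute the z statistic with
# continuity correction.
def mcnemar_z(correct_a, correct_b):
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    n01 = np.sum(~correct_a & correct_b)  # only model B correct
    n10 = np.sum(correct_a & ~correct_b)  # only model A correct
    return (abs(int(n01) - int(n10)) - 1) / np.sqrt(n01 + n10)

# Hypothetical per-image correctness vectors for two models.
z = mcnemar_z([1, 1, 0, 1, 1, 1, 1, 1],
              [1, 0, 1, 0, 0, 1, 0, 1])
print(z)  # |z| > 1.96 indicates a significant difference at the 5% level
```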
When all models were evaluated, the most successful model was ResNet50, followed by DenseNet, VGG19, the SVM and the CNN. The results obtained in the McNemar test were consistent with the models' classification results.

4.4. Field Work on the Created Artificial Intelligence System

The aim of this research was to develop a new analytical method that did not take samples from painted historical artifacts and had a minimal cost. In line with this purpose, whether the trained artificial intelligence could make correct pigment estimations by using images of cultural assets whose pigment chemistry was determined by instrumental methods in the literature was examined. Within the scope of this study, images were taken from the area where there was no paint mixture and where a single tone was dominant. The sample area description is given in Figure 22.
In selecting the examples, no attention was paid to the lighting conditions under which the photographs were taken, because the artificial intelligence had been trained with pigment photographs under different light sources and intensities. For this reason, it did not matter whether the photograph of the work was taken in daylight (morning, noon, evening) or under artificial light; the dataset had been created with these light variables in mind. The photographs of the artworks to be tested were cropped into squares in the Image Cropper Pro application and then converted to 256 × 256 pixels in the FastStone Photo Resizer application. This process is shown in Figure 23. In the test dataset created for each pigment type, images of artworks, as well as images of paints from different companies and paints with different tonal contents, were used. The pigments were tested for real class versus predicted class in the VGG19 model, which had a 99% success rate. The data used in the test and the VGG19 test results are given below.
In the images loaded into the VGG19 model for testing purposes, the model did not know which pigment was used for the image. The data owner knew which pigment the relevant images had according to the analyses made with instrumental methods to date. Therefore, whether the VGG19 model, which had been trained on pigments, could correctly classify the pigments used in archaeological and artistic works was tested. Test data are given in Table 15. Visuals of the artworks in Table 15 are given in Figure 24.
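A minimal inference sketch of this workflow is shown below. The model file name, the image path and the alphabetical class ordering are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Sketch of the field test: load the trained VGG19 weights (file name
# assumed), resize a square crop of the artwork photograph to 256 x 256
# and predict the pigment class.
CLASSES = ["K1", "MM1", "SO1", "UM1", "YTP1"]  # red ochre, Egyptian blue,
                                               # yellow ochre, ultramarine,
                                               # green earth (assumed order)

model = tf.keras.models.load_model("vgg19_pigments.keras")  # assumed path
img = tf.keras.utils.load_img("artwork_crop.jpg", target_size=(256, 256))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]  # add batch axis
pred = model.predict(x)
print(CLASSES[int(np.argmax(pred))])  # most probable pigment class
```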
The VGG19 model predicted nine yellow ochre pigments with 9/9 accuracy, 11 green earth pigments with 11/11 accuracy, five red ochre pigments with 5/5 accuracy, 11 Egyptian blue pigments with 11/11 accuracy and seven ultramarine pigments with 4/7 accuracy. The model correctly classified the pigment used in the artworks as ultramarine but classified the ultramarine colors produced by different paint companies as Egyptian blue. When this result was examined, the VGG19 prediction proved reasonable: there was no color of a similar tone in the model's ultramarine training data, and the ultramarine colors produced by different paint companies were closer to the Egyptian blue the model had learned. The findings of this determination are given in Table 16. For this reason, had the model been trained with the ultramarine colors produced by different paint companies, the prediction would likely have reached 100% success like the other pigments.
This was clearly shown by the fact that the model distinguished the Egyptian blue and ultramarine colors, which are close to each other, in the works of art: Egyptian blue is the first synthetic pigment, produced by the Ancient Egyptians to obtain a color similar to lapis lazuli rock, while ultramarine is a pigment obtained from lapis lazuli rock itself. That the model distinguished these two pigments, which have different chemical structures but similar colors, showed that deep learning models are extremely successful. The test findings are consistent with the model results and the McNemar test, and the paints in the works of art could be classified non-destructively through photographs with a 99% success rate.

5. Conclusions

Cultural assets carry value belonging to humanity's past. Today, destructive and non-destructive analytical methods are used to understand the technology behind this value. Since destructive methods require the examination of a cultural asset under laboratory conditions, they require taking samples from the work, and since these analyses are subject to legal permission, they create difficulties for art historians and archaeologists.
Non-destructive analytical methods provide the opportunity to work in the field. However, since these analyses are limited to the area that the analyst can reach, they mainly offer ease of analysis for movable cultural assets. When both approaches are weighed with their advantages and disadvantages, cost remains a drawback for cultural heritage workers. This study, prepared to find solutions to the problems experienced in the national and international arenas, aimed to develop a non-destructive, cost-minimizing and easy-to-use analytical method for cultural assets.
For this purpose, historical paints were produced using different painting techniques, and artificial intelligence was trained on photographs of the produced paints. In this way, the paints used in historical artifacts can be examined easily and non-destructively in the created system, at no analytical cost, simply by taking photographs. Since the aim of this article was to develop a new non-destructive method with artificial intelligence, the materials were prepared as a preliminary study and were therefore limited to four main colors: red and yellow ochre, green earth, Egyptian blue and ultramarine blue, the most commonly used paints in history. Two blue pigments were used to see how a CNN performed in pigment classification. The pigments were used in the fresco-secco, tempera, tempera grassa, oil painting and watercolor techniques; different binders were used because they reflect light differently. The produced paints were photographed under natural and artificial light at different light intensities, converted to 256 × 256 pixels, and used to train the SVM, CNN, ResNet50, DenseNet and VGG19 models. In the trained models, the SVM and CNN showed 97% accuracy, VGG19 99% accuracy, and ResNet50 and DenseNet 100% accuracy. The performance of the models was examined with the McNemar test, which identified ResNet50 as the most successful model, followed by DenseNet, VGG19, the SVM and the CNN; these results were consistent with the classification results. The trained VGG19 model was then asked whether it could classify, by their real identities, the paints used in archaeological and artistic works analyzed with instrumental methods in the literature. The VGG19 test classified the paints in the photographed works of art with a 99% success rate.
Also, during the VGG19 test, ultramarine blue produced by different companies was predicted as Egyptian blue. The ultramarine blue in the training set did not have the same blue tone as the paints from different companies, and the closest color to them was Egyptian blue. Although it is normal for a model to predict a blue color as Egyptian blue when that tone is absent from the training set, this result clearly showed that the model should also be trained with paints produced by different companies, because companies produce paints with different raw materials or different chemical processes. For example, although celadonite and glauconite minerals are found in green earth, the percentages of the major elements vary regionally; therefore, the same pigment found in France and in Russia differs in tone.
This article investigated whether pigment photographs could be classified using artificial intelligence and is therefore a preliminary study. Nevertheless, the 99% success rate of the system is promising, as it provides a new non-destructive analytical method for the cultural heritage field. To push the results toward 100% and to carry this work further, the authors believe the following future studies would be useful:
  • Using pigments from different regions;
  • Using pigments produced by different companies;
  • Using paints produced by different companies and their tonal variations (dark, light, etc.);
  • Increasing the variety of binders (for example, boiled linseed oil);
  • Using varnish varieties in samples;
  • Creating a tone scale with white and black pigments of different chemical compositions (including white and black paints produced by different companies);
  • Aging produced paints in climate chambers.
With this research, it was shown that information about the paints used in cultural assets can be obtained with artificial intelligence. Since the model was trained on photographs, each paint in a work of art can be evaluated quickly and practically, without analytical costs. This evaluation is valid only for areas where a single paint tone is dominant; mixtures of more than one paint were not covered. For example, the color transitions a painter applies to facial tones were outside the scope of the research.
In addition, the model did not provide information about the chemical content of the paint; it only estimated which paint was used in the picture, with a 99% accuracy rate. For this reason, it is important to note that spectroscopic methods are still needed for information such as the trace elements that reveal the region a paint came from or which regional paints an artist used. However, if the system is trained with data (photographs, spectra, etc.) from different civilizations, artists and craftsmen, it has the potential to be used in authenticity studies.
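As a concrete illustration of this photograph-based evaluation, a minimal inference sketch follows. The model file and image name are hypothetical; the cropped patch should come from an area where a single paint tone dominates, as discussed above.

```python
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["Egyptian blue", "Green earth", "Red ochre",
               "Ultramarine", "Yellow ochre"]  # alphabetical folder order assumed

# Load a previously trained classifier (hypothetical file name).
model = tf.keras.models.load_model("pigment_vgg19.keras")

# Crop of an artwork photograph where one paint dominates, resized to 256x256.
img = tf.keras.utils.load_img("artwork_patch.jpg", target_size=(256, 256))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]  # shape (1, 256, 256, 3)

probs = model.predict(x)[0]
print(CLASS_NAMES[int(np.argmax(probs))], f"confidence = {probs.max():.3f}")
```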
In the field of paint science, a system can be trained with different pigment types and transformed into an application with a user interface, as sketched below. Such an application could be used in the field by conservator-restorers, archaeologists, art historians, museum curators, archaeometrists and materials scientists. For this reason, in future studies, the dataset used here will be expanded with greater material diversity, and its continuity will be ensured across different projects.
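One hedged way to realize such an interfaced application is shown below with Gradio; this is an assumption about the deployment, not the authors’ tool, and the classifier file is hypothetical (preprocessing is assumed to live inside the saved model).

```python
import gradio as gr
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["Egyptian blue", "Green earth", "Red ochre",
               "Ultramarine", "Yellow ochre"]
model = tf.keras.models.load_model("pigment_vgg19.keras")  # hypothetical file

def classify(image: np.ndarray) -> dict:
    """Resize an uploaded photograph and return per-class confidences."""
    x = tf.expand_dims(tf.image.resize(image, (256, 256)), 0)
    probs = model.predict(x.numpy())[0]
    return {name: float(p) for name, p in zip(CLASS_NAMES, probs)}

gr.Interface(fn=classify,
             inputs=gr.Image(),                    # photograph of a paint area
             outputs=gr.Label(num_top_classes=3),  # top pigment predictions
             title="Historical Pigment Classifier (prototype)").launch()
```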

Author Contributions

Conceptualization, B.B.G. and H.E.; methodology, E.B., B.E., B.C.E., H.E., K.A., D.K. and T.A.; software, E.B., H.E., K.A. and T.A.; validation, E.B., H.E., K.A. and T.A.; formal analysis, B.B.G., H.E. and T.A.; investigation, B.B.G. and H.E.; resources, B.B.G.; data curation, H.E.; writing—original draft preparation, B.B.G.; writing—review and editing, B.B.G., T.A., E.B., H.E., D.K., B.E. and B.C.E.; visualization, B.B.G.; supervision, E.B., K.A., D.K., H.E. and T.A.; project administration, E.B.; funding acquisition, E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the TÜBİTAK 3005 project.

Data Availability Statement

The raw data supporting the conclusions of this article can be made available by the authors on request.

Acknowledgments

This research was conducted within the scope of TÜBİTAK 3005 Innovative Solutions in Social and Human Areas, with the project number 123K163, titled “Detection of Historical Pigments with Artificial Intelligence”. The researchers thank TÜBİTAK for this support.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Fresco-Secco Construction Stages.
Figure 2. Tempera-Tempera Grassa Construction Stages.
Figure 3. Oil Paint Construction Stages.
Figure 4. Watercolor Construction Stages.
Figure 5. Tone Scale Construction.
Figure 6. Raman Spectrum of Gum Arabic.
Figure 7. Raman Spectrum of Egg Yolk.
Figure 8. Raman Spectrum of Egg White.
Figure 9. Raman Spectrum of Linseed Oil.
Figure 10. Raman Spectrum of Walnut Oil.
Figure 11. Raman Spectrum of Poppy Seed Oil.
Figure 12. (A) Yellow Ochre on a White Background, (B) Yellow Ochre on a Black Background.
Figure 13. Sample Created to See the Light and Shadow Tones of the Pigment, Photographed in Natural Light.
Figure 14. Photo Shooting in Natural Light: (A) Sample on a White Background, (B) Sample on a Black Background.
Figure 15. Sample Appearance According to Solar Movement: (A) Sunrise, (B) Noon, (C) Sunset.
Figure 16. (A) Sample Shooting on a White Background, (B) Photo Shooting on a Black Background.
Figure 17. (A) Photographing with an Incandescent Lamp, (B) Appearance of Tempera and Tempera Grassa under the Yellow Light of an Incandescent Lamp.
Figure 18. Photographing the Wet and Dry States of Paints: (A) Paint Belonging to the Secco Technique, (B) Area Where the Paint Was Applied.
Figure 19. An Example of the Pigment Folders Used in the Dataset.
Figure 20. CNN Architectural Structure.
Figure 21. PA Values of the Models and Comparison Examples for Each Test Image.
Figure 22. (A) Area with More Than One Paint Mixture, So No Samples Were Taken; (B) Area Where One Paint Dominates, So Samples Were Taken.
Figure 23. Ultramarine Blue Sample, Sassoferrato, “The Virgin in Prayer”, The National Gallery.
Figure 24. Artifacts in the Field Work.
Table 1. Color Codes of Pigments Used in Samples.
Pigment | Proper Name | Company Color Code | Analysis Code
Yellow Ochre | Kremer Yellow Moroccan Ochre < 120 μ | #116420 | S1
Green Earth | Kremer Verona Green Earth 0–120 μ, genuine earth pigment | #11000 | Y1
Red Ochre | Kremer Red Moroccan Ochre < 120 μ | #116430 | K1
Egyptian Blue | Kremer Egyptian Blue, blue copper silicate, < 120 μ | #10060 | MM1
Ultramarine | Kremer Ultramarine Blue, light, synthetic mineral pigment | #45080 | UM1
Table 2. XRF Results of Pigments.
Element | S1 | Y1 | K1 | MM1 | UM1
Major elements (%)
Na | 0.042 | 0.032 | 0.045 | 1.35 | 5.76
Mg | 0.82 | 6.484 | 0.869 | 0.139 | 0.054
Al | 9.86 | 13.782 | 10.9 | 0.0057 | 6.797
Si | 16.03 | 15.15 | 18.86 | 26.25 | 10.11
P | 0.2566 | 0.2115 | 0.218 | 0.0139 | 0.0551
S | 0.2637 | 0.03765 | 0.1568 | 0.03807 | 4.641
Cl | 0.358 | 0.004 | 0.1465 | 0.1644 | 0.04812
K | 1.501 | 1.09 | 1.69 | 0.1162 | 0.5634
Ca | 0.7804 | 7.944 | 0.5909 | 9.477 | 20.7
Ti | 0.4546 | 1.216 | 0.5077 | 0.01195 | 0.0822
V | 0.0433 | 0.0175 | 0.0455 | 0.00087 | 0.00086
Cr | 0.00744 | 0.01769 | 0.01291 | 0.00012 | 0.0024
Mn | 0.207 | 0.1285 | 0.159 | 0.00344 | 0.01474
Fe | 14.43 | 7.365 | 15.26 | 0.0369 | 0.1495
Trace elements (ppm)
Co | 126 | 59.4 | 145 | 3.9 | 11.5
Ni | 208.7 | 171.3 | 228.4 | 5.1 | 3.8
Cu | 160.4 | 62.3 | 204 | 152,800 | 4.5
Zn | 437.5 | 78.6 | 455.1 | 11 | 15
Ga | 26.3 | 13.9 | 28.8 | 11.7 | 14.4
Ge | 0.6 | 1 | 0.5 | 1.2 | 0.5
As | 2 | 80.6 | 29.7 | 2.1 | 0.6
Se | 0.4 | 0.3 | 0.4 | 0.6 | 0.3
Br | 5.6 | 0.2 | 1 | 0.7 | 2.3
Rb | 71.5 | 19.8 | 78 | 0.8 | 59.7
Sr | 128.7 | 349.9 | 101.8 | 49.1 | 413.7
Y | 43.4 | 20.9 | 37.3 | 0.8 | 16.2
Zr | 204.4 | 158.1 | 206.5 | 61.2 | 98.9
Nb | 4.7 | 38.1 | 23.9 | 5.7 | 10.2
Mo | 16.4 | 4.2 | 26.2 | 6.7 | 3.6
Cd | 1.1 | 1.5 | 2.4 | 2.3 | 0.9
In | 1.1 | 0.9 | 1.3 | 1.4 | 2.1
Sn | 3.2 | 1 | 1.4 | 2.3 | 13.2
Sb | 1 | 0.9 | 1 | 1.4 | 1.4
Te | 1.3 | 1.2 | 1.5 | 1.6 | 1.2
I | 2.6 | 2.1 | 2.5 | 2.6 | 2
Cs | 4.1 | 3.5 | 4.1 | 4.1 | 6.5
Ba | 1529 | 343.7 | 994.2 | 813 | 172.6
La | 30.3 | 33 | 32.2 | 7.9 | 30.7
Ce | 110.1 | 64.2 | 88.3 | 11 | 25.2
Hf | 7.4 | 4.3 | 8.5 | 220 | 6.7
Ta | 8.9 | 5 | 10 | 280 | 1.7
W | 5.9 | 3.6 | 6.1 | 25 | 2.1
Hg | 1 | 0.9 | 4.4 | 1.9 | 0.7
Tl | 1.3 | 1 | 1.5 | 1.9 | 0.8
Pb | 17.9 | 5.6 | 19.8 | 51.7 | 19.7
Bi | 0.9 | 0.6 | 0.9 | 29.6 | 7.3
Th | 1.3 | 2.1 | 1.1 | 2.2 | 1.2
U | 1.8 | 1.8 | 1.2 | 1.8 | 1.3
Table 3. Wavelength and Frequency Values of Different Colors.
Color | Wavelength (nm) | Frequency (THz)
Red | 720–630 | 400–470
Orange | 610–590 | 470–520
Yellow | 590–570 | 520–590
Green | 550–510 | 590–650
Blue | 480–450 | 650–700
Indigo Blue | 450–430 | 700–760
Purple | 430–380 | 760–800
Table 4. Relationship Between Color Temperature and Image Color.
CCT (K) | Image Color
< 3300 | Hot white
3300–5300 | Warm white
> 5300 | Cold white
Table 5. Photographs Taken Under the First Condition in the Product Shooting Tent: Tone Scale 1. Color temperatures: 1029 K, 1239 K, 1325 K, 1895 K, 2214 K, 3031 K, 3546 K, 4024 K, 5015 K and 6512 K.
Table 6. Photographs Taken Under the Second Condition in the Product Shooting Tent: Tone Scale 1. Hot white: 2700 K; warm white: 4000 K; cold white: 5600 K.
Table 7. CNN Model Summary Information.
Layer (Type) | Output Shape | Parameter No.
input_1 (InputLayer) | (None, 256, 256, 3) | 0
conv2d (Conv2D) | (None, 254, 254, 32) | 896
max_pooling2d (MaxPooling2D) | (None, 127, 127, 32) | 0
conv2d_1 (Conv2D) | (None, 125, 125, 64) | 18,496
max_pooling2d_1 (MaxPooling2D) | (None, 41, 41, 64) | 0
conv2d_2 (Conv2D) | (None, 39, 39, 128) | 73,856
max_pooling2d_2 (MaxPooling2D) | (None, 13, 13, 128) | 0
conv2d_3 (Conv2D) | (None, 11, 11, 128) | 147,584
max_pooling2d_3 (MaxPooling2D) | (None, 3, 3, 128) | 0
flatten (Flatten) | (None, 1152) | 0
dropout (Dropout) | (None, 1152) | 0
dense (Dense) | (None, 512) | 590,336
dense_1 (Dense) | (None, 5) | 2565
Table 8. SVM Classification Result Report.
Class | Precision | Recall | F1 Score | Support
Green Earth | 0.96 | 0.97 | 0.97 | 328
Red Ochre | 1.00 | 1.00 | 1.00 | 324
Ultramarine | 0.96 | 0.97 | 0.96 | 338
Yellow Ochre | 0.99 | 0.96 | 0.98 | 336
Egyptian Blue | 0.95 | 0.96 | 0.95 | 339
Accuracy | | | 0.97 | 1665
Macro average | 0.97 | 0.97 | 0.97 | 1665
Weighted average | 0.97 | 0.97 | 0.97 | 1665
Table 9. VGG19 Classification Result Report.
Class | Precision | Recall | F1 Score | Support
Green Earth | 0.99 | 1.00 | 0.99 | 339
Red Ochre | 1.00 | 1.00 | 1.00 | 324
Ultramarine | 0.99 | 0.99 | 0.99 | 336
Yellow Ochre | 1.00 | 0.99 | 1.00 | 338
Egyptian Blue | 0.98 | 0.99 | 0.99 | 328
Accuracy | | | 0.99 | 1665
Macro average | 0.99 | 0.99 | 0.99 | 1665
Weighted average | 0.99 | 0.99 | 0.99 | 1665
Overall precision | 0.9934
Overall recall | 0.9993
Overall F1 score | 0.9993
Table 10. DenseNet Classification Result Report.
Class | Precision | Recall | F1 Score | Support
Green Earth | 1.00 | 1.00 | 1.00 | 339
Red Ochre | 1.00 | 1.00 | 1.00 | 324
Ultramarine | 0.99 | 1.00 | 0.99 | 336
Yellow Ochre | 1.00 | 1.00 | 1.00 | 338
Egyptian Blue | 0.99 | 0.99 | 0.99 | 328
Accuracy | | | 1.00 | 1665
Macro average | 1.00 | 1.00 | 1.00 | 1665
Weighted average | 1.00 | 1.00 | 1.00 | 1665
Overall precision | 0.9970
Overall recall | 0.9969
Overall F1 score | 0.9969
Table 11. ResNet50 Classification Result Report.
Class | Precision | Recall | F1 Score | Support
Green Earth | 1.00 | 1.00 | 1.00 | 339
Red Ochre | 1.00 | 1.00 | 1.00 | 324
Ultramarine | 1.00 | 1.00 | 1.00 | 336
Yellow Ochre | 1.00 | 1.00 | 1.00 | 338
Egyptian Blue | 1.00 | 1.00 | 1.00 | 328
Accuracy | | | 1.00 | 1665
Macro average | 1.00 | 1.00 | 1.00 | 1665
Weighted average | 1.00 | 1.00 | 1.00 | 1665
Overall precision | 0.9994
Overall recall | 0.9993
Overall F1 score | 0.9993
Table 12. CNN Classification Result Report.
Class | Precision | Recall | F1 Score | Support
Green Earth | 0.95 | 1.00 | 0.98 | 339
Red Ochre | 1.00 | 1.00 | 1.00 | 324
Ultramarine | 0.98 | 0.93 | 0.96 | 336
Yellow Ochre | 1.00 | 0.97 | 0.98 | 338
Egyptian Blue | 0.93 | 0.96 | 0.95 | 328
Accuracy | | | 0.97 | 1665
Macro average | 0.97 | 0.97 | 0.97 | 1665
Weighted average | 0.97 | 0.97 | 0.97 | 1665
Table 13. Summary of the Results of the Models Used in Classifying Historical Pigments.
Model Name | Training Accuracy | Training Loss | Validation Accuracy | Validation Loss | Test Accuracy | Test Loss
SVM | 1 | | 0.9816 | | 0.9779 |
CNN | 0.9722 | 0.086 | 0.9785 | 0.0731 | 0.9729 | 0.0822
DenseNet | 0.9926 | 0.0287 | 0.9988 | 0.0077 | 0.9969 | 0.0108
ResNet50 | 0.9961 | 0.0096 | 0.9988 | 0.025 | 0.9993 | 0.0076
VGG19 | 0.9892 | 0.0281 | 0.9916 | 0.0243 | 0.9933 | 0.0201
Table 14. McNemar Test Results Applied to All Models.
Model | CNN | DenseNet | ResNet50 | VGG19 | SVM
CNN | | 6.448 | 6.482 | 5.092 | 0.138
DenseNet | | | 1.5 | 1.336 | 5.917
ResNet50 | | | | 2.846 | 6.635
VGG19 | | | | | 5.160
SVM | | | | |
Table 15. VGG19 Test Result.
Name of the Work and Different Types of Paint | Estimated | Real
Bosch, The Haywain/Prado 1 | Yellow ochre | Yellow ochre
Bosch, The Haywain/Prado 2 | Yellow ochre | Yellow ochre
Bosch, The Haywain/Prado 3 | Yellow ochre | Yellow ochre
Bosch, The Haywain/Prado 4 | Yellow ochre | Yellow ochre
Bosch, The Haywain/Prado 5 | Yellow ochre | Yellow ochre
Bosch, The Haywain/Prado 6 | Yellow ochre | Yellow ochre
RCP04OXB920 Yellow 920 Ocres de France | Yellow ochre | Yellow ochre
Kremer 40214 Gold Ochre DD | Yellow ochre | Yellow ochre
Kama PS-MI0015 Spanish Gold Ochre PY42h | Yellow ochre | Yellow ochre
Michelangelo, The Manchester Madonna 1 | Green earth | Green earth
Michelangelo, The Manchester Madonna 2 | Green earth | Green earth
Michelangelo, The Manchester Madonna 3 | Green earth | Green earth
Couleurs Leroux Pigments Purs, Natural Green Earth Clear | Green earth | Green earth
11111 Russian Green Earth, extra fine, Kremer Pigment | Green earth | Green earth
423-19 Tavush Green Earth, Natural Pigments | Green earth | Green earth
40810 Bohemian Green Earth, Kremer Pigment | Green earth | Green earth
40830 France Green Earth, Kremer Pigment | Green earth | Green earth
41750 Vagone Green Earth, Kremer Pigment | Green earth | Green earth
25001 Antique Green Earth, Jackson’s Artist Pigment | Green earth | Green earth
41800 Bohemian Green Earth, imitation, Kremer Pigment | Green earth | Green earth
Ice Age Megafauna Rock Art in the Colombian Amazon 1 | Red ochre | Red ochre
Ice Age Megafauna Rock Art in the Colombian Amazon 2 | Red ochre | Red ochre
Cueva de las Manos, Argentina 1 | Red ochre | Red ochre
Cueva de las Manos, Argentina 2 | Red ochre | Red ochre
Winsor & Newton Red Ochre | Red ochre | Red ochre
Mummy Mask, National Museum of Natural History 1 | Egyptian blue | Egyptian blue
Mummy Mask, National Museum of Natural History 2 | Egyptian blue | Egyptian blue
Hawara Mummy Mask, Egypt, Manchester Museum 1 | Egyptian blue | Egyptian blue
Hawara Mummy Mask, Egypt, Manchester Museum 2 | Egyptian blue | Egyptian blue
Egyptian Mummy Mask, The British Museum 1 | Egyptian blue | Egyptian blue
Egyptian Mummy Mask, The British Museum 2 | Egyptian blue | Egyptian blue
Egyptian Mummy Mask, The British Museum 3 | Egyptian blue | Egyptian blue
Shabti 1 | Egyptian blue | Egyptian blue
Shabti 2 | Egyptian blue | Egyptian blue
Mummy of the Priest Hornedjitef, The British Museum 1 | Egyptian blue | Egyptian blue
Mummy of the Priest Hornedjitef, The British Museum 2 | Egyptian blue | Egyptian blue
The Wilton Diptych, The National Gallery 1 | Ultramarine | Ultramarine
The Wilton Diptych, The National Gallery 2 | Ultramarine | Ultramarine
Sassoferrato, “The Virgin in Prayer”, The National Gallery 1 | Ultramarine | Ultramarine
Sassoferrato, “The Virgin in Prayer”, The National Gallery 2 | Ultramarine | Ultramarine
Turner Artists’ Water Colour 060 Ultramarine Deep | Egyptian blue | Ultramarine
Winsor & Newton Professional Water Colour 667 Ultramarine | Egyptian blue | Ultramarine
Da Vinci Watercolors 284 Ultramarine Blue | Egyptian blue | Ultramarine
Table 16. Ultramarine Color Produced by Different Paint Companies and the VGG19 Egyptian Blue Set.
Company Paint | Compared Against
Turner Artists’ Water Colour 060 Ultramarine Deep | VGG19 Egyptian Blue Training Set
Winsor & Newton Water Colour 667 Ultramarine | VGG19 Egyptian Blue Training Set
Da Vinci Watercolors 284 Ultramarine Blue | VGG19 Egyptian Blue Training Set