Article

HyTexiLa: High Resolution Visible and Near Infrared Hyperspectral Texture Images

by
Haris Ahmad Khan
1,2,*,†,
Sofiane Mihoubi
3,†,
Benjamin Mathon
3,
Jean-Baptiste Thomas
1,2 and
Jon Yngve Hardeberg
1
1
The Norwegian Colour and Visual Computing Laboratory, NTNU–Norwegian University of Science and Technology, 2815 Gjøvik, Norway
2
Le2i, FRE CNRS 2005, Université Bourgogne Franche-Comté, 21000 Dijon, France
3
Univ. Lille, CNRS, Centrale Lille, UMR 9189—CRIStAL, Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
*
Author to whom correspondence should be addressed.
†
These authors contributed equally to this work.
Sensors 2018, 18(7), 2045; https://doi.org/10.3390/s18072045
Submission received: 23 May 2018 / Revised: 20 June 2018 / Accepted: 22 June 2018 / Published: 26 June 2018

Abstract

We present a dataset of close range hyperspectral images of materials that span the visible and near infrared spectra: HyTexiLa (Hyperspectral Texture images acquired in Laboratory). The data is intended to provide high spectral and spatial resolution reflectance images of 112 materials to study spatial and spectral textures. In this paper, we discuss the calibration of the data and the method for addressing the distortions that occur during image acquisition. We provide a spectral analysis based on non-negative matrix factorization to quantify the spectral complexity of the samples and extend local binary pattern operators to the hyperspectral texture analysis. The results demonstrate that although the spectral complexity of each of the textures is generally low, increasing the number of bands permits better texture classification, with the opponent band local binary pattern feature giving the best performance.

1. Introduction

When electromagnetic waves are incident on a material’s surface, part of the energy is absorbed and part is reflected back. The electromagnetic waves reflected from a surface depend on its material composition, physical and chemical properties, surface roughness and the geometry of the incident and reflected waves. The behavior of the reflected waves from a surface in a particular wavelength region is known as the spectral signature of that material. To obtain the spectral signature of a surface, devices like spectrophotometers and hyperspectral cameras are used. The advantage of hyperspectral devices is the acquisition of high resolution spectral and spatial information of a material or scene, whereas a spectrophotometer acquires spectral information from only one point of the selected surface.
For the acquisition of hyperspectral images, push broom cameras have been extensively used [1]. In the push broom imaging technique, line scanning of the scene is performed and the spectrogram of each scanned line is recorded on a charge-coupled device sensor. Another method for the acquisition of hyperspectral images is the use of a liquid crystal tunable filter (LCTF) [2] or MEMS tunable Fabry-Pérot filters [3] on top of a monochrome sensor. Such filters provide direct imaging ability, so line scanning of the scene is no longer required. Their drawback is the low speed of spectral sampling and the lower number of channels compared to push broom cameras. Recently, low cost hyperspectral imaging devices have been introduced that use off-the-shelf components to convert an imaging sensor into a hyperspectral camera [4].
The analysis of hyperspectral images is performed for image data exploration, classification and quantification tasks. Various multivariate analysis techniques are applied to hyperspectral images for data decomposition, pre-processing and regression or classification analyses. Methods for hyperspectral data analysis include principal component analysis [5], linear discriminant analysis [6], partial least squares discriminant analysis [7], the K-nearest neighbor algorithm [8], support vector machines [9], least-squares support vector machines [10], artificial neural networks [11], deep recurrent neural networks [12] and deep convolutional neural networks [13]. Hyperspectral imaging is widely used in a number of applications, e.g., remote sensing [14], medical diagnosis [15], cultural heritage [16,17], face recognition [18], food quality control [19], and color management and printing [20,21]. A number of applications have been developed through the multivariate analysis of hyperspectral images, including the investigation of dried blood spots [22], the measurement of oil and protein contents in peanut varieties [23], the classification of muscle foods [24] and the detection of defects on peaches [25], among numerous others.
Hyperspectral images contain more spectral information than color images. The high dimensionality of hyperspectral image data is an open challenge, and a trade-off between high information content and practical handling is often required. To evaluate this compromise and assess its performance, high quality data with high spatial and spectral resolutions are required. Creating such a dataset takes time and effort, but it is necessary for the research community.
Considering these needs, we present a hyperspectral image dataset of 112 textured objects falling into five different categories. The images are reflectance data that span the visible and near infrared (NIR) parts of the electromagnetic spectrum. The data is referred to as HyTexiLa (Hyperspectral Texture images acquired in Laboratory). The idea behind creating this dataset is to provide a benchmark for the analysis of various applications and processing tasks. The data can be used in image processing, computational imaging and computer vision, for tasks such as surface identification, spatio-spectral analysis of textured surfaces, image sensor simulation, color reproduction, image relighting and so on. The availability of a high spectral and spatial resolution dataset will provide easy access for the evaluation of different techniques and will allow the results to be compared. In this paper, we present that dataset and analyze its spatial and spectral properties. We describe the image acquisition protocol, the distortions that occur during the acquisition of the objects, the method used to correct these distortions and the effect of the corrections. Our focus in this paper is to present a hyperspectral dataset to the community and to provide an analysis of this dataset as a benchmark for further research. We provide the methodology for pre-processing the raw hyperspectral data before using it for the intended applications. This paper is organized as follows: first, in Section 2, we review the existing hyperspectral datasets; then, in Section 3, we discuss the hyperspectral image acquisition protocol and the processing of the spectral data to address the distortions that occur during acquisition. In Section 4, a spectral analysis of the data is provided. Section 5 consists of the texture analysis of the object surfaces in our dataset, followed by the conclusions.

2. Comparison of HyTexiLa with Existing Hyperspectral Datasets

Several hyperspectral image datasets are publicly available. In this section, we provide a brief review of the existing hyperspectral image datasets that consist of close range images, including natural scenes and indoor images of objects, and compare them with the properties of our proposed dataset. We do not include remote sensing datasets, because remote sensing does not assume the same imaging model. We also exclude reflectance datasets obtained from pointwise spectrometers, since they do not contain any spatial relationship between spectra, as well as multispectral image datasets with fewer than 10 channels.
Table 1 gives an overview of the existing hyperspectral image datasets. The datasets in [26,27,28,29,30,31,32,33,34] consist of outdoor scenes which include vegetation and buildings. The wavelength range in [26,30,34,35,36,37,38] is from 400 nm to 700 nm, while the datasets [27,30,31] have spectral ranges from 400 nm to 720 nm. The spectral sampling in [26,27,29,30,31,32,33,35,36,39] is 10 nm. The datasets in [28,32,40,41] have a spectral range of 400 nm to 1000 nm. The datasets in [35,36] consist of images of objects taken indoors. Some datasets consist of a specific category of objects, like paintings [42], leaves from apple trees [43], textiles [39], honey [41] and wood samples [44]. Most of the available datasets are sampled or re-sampled at 10 nm intervals, while some of them have different sampling intervals; e.g., [28] has a 1.25 nm sampling step, while the datasets in [41,43] have spectral sampling intervals of 6 nm and 4.7 nm, respectively.
In this paper, we provide a new dataset of hyperspectral images that has been made publicly available. This dataset consists of samples from five different categories of materials. The spectral range of this dataset is from 405.37 nm to 995.83 nm. As identified above, there are only a few datasets that span both the visible and NIR regions. The information in the NIR is valuable for material classification [45,46,47], and for the identification of textile fibers [48] and minerals [49]. Although some of the hyperspectral datasets provide information in the NIR region, they are either for specific samples, as in [41,42,43], or consist of outdoor scenes with many objects, as in [28,32]. Each image in our dataset consists of one specific, rather flat, object. The spectral resolution of the data is also an important aspect to be considered. In most of the hyperspectral image datasets mentioned in Table 1, the spectral sampling interval is 10 nm, which is considered enough to represent the spectra [50]. A spectral sampling of 5 nm was recommended for Munsell spectra in [50], but it is also worth noting that more closely sampled data may be required when spectra are less smooth. One such case is the presence of spiky illumination spectral power distributions, where finer spectral sampling is required. The availability of high resolution spectral and spatial data will allow the development and testing of advanced resampling methods. There is an existing dataset of textile samples [39], but it contains spectral information only in the visible region, with a 5 nm sampling interval between the bands. Some datasets consist of food items [41,45] but do not provide high spatial resolution. Similarly, the datasets containing images of urban and natural scenes do not provide high spatial resolution for each material. The CAVE dataset [35] contains detailed images of various materials that are captured from close range, but only in the visible region.
Our dataset was spectrally sampled at about 3.19 nm and can be used in simulations of most fluorescent tube and light emitting diode-based illumination sources. The advantage of our dataset, compared with already available datasets, is the availability of high resolution spectral and spatial data. The spectral range of our dataset covers both the visible and NIR regions. Each image in the dataset contains a unique material, and it can be used as ground-truth spectral data for surface identification applications. Our dataset is available for public use in the form of reflectance data in which each image is cropped to a uniform size. The raw data consists of radiance images captured along with the white diffuser and gray reflectance patch. Since the raw data is large (about 442 GB), we do not provide it publicly, but it can be provided upon specific request to the authors.

3. Image Acquisition and Processing

In this section, we present and discuss the objects being imaged and the hyperspectral image acquisition setup. The issues faced during image acquisition and protocols to correct for those issues are also part of this section. We describe how we correct for distortions inherent to the system and how we compute reflectance images from the input radiance data.

3.1. Notations

We first list the notations and conventions used in this article. Functions are usually denoted in roman font, sets in calligraphic font and scalar variables in italic font. All indexes start from 0, and $[A]$ denotes the set of integers $\{0, 1, \ldots, A-1\}$. $\mathcal{B}$ is the set of wavelengths $\{\lambda_k\}_{k \in [K]}$, which ranges from 405.37 nm to 995.83 nm at 3.19 nm intervals. Hyperspectral images are denoted in bold font and spectral channels in italic font. $\mathbf{D}$, $\mathbf{I}$ and $\mathbf{R}$, respectively, denote the uncorrected (distorted) radiance image, the radiance image and the reflectance image. These images are made of $K$ channels of $M \times N$ pixels associated with $K$ spectral bands centered at wavelengths $\lambda_k \in \mathcal{B}$. $D^k$, $I^k$ and $R^k$ denote the $k$-th channel of the respective hyperspectral images. A pixel coordinate $p$ of an image is denoted by a couple of bounded integer values: $p = (x, y) \in [M] \times [N] \subset \mathbb{Z}_+^2$. $\mathbf{D}_p$ (resp. $\mathbf{I}_p$, $\mathbf{R}_p$) and $\mathbf{D}(x, y)$ (resp. $\mathbf{I}(x, y)$, $\mathbf{R}(x, y)$) denote the vector (spectral) value of $\mathbf{D}$ (resp. $\mathbf{I}$, $\mathbf{R}$) at pixel $p = (x, y)$. Similarly, $D_p^k$ (resp. $I_p^k$, $R_p^k$) and $D^k(x, y)$ (resp. $I^k(x, y)$, $R^k(x, y)$) denote the values of channel $D^k$ (resp. $I^k$, $R^k$) at pixel $p = (x, y)$. $\mathbf{S} = \{S(\lambda_k)\}_{\lambda_k \in \mathcal{B}}$ is the vector of values of the spectral power distribution of the scene illumination $S$, sampled at each $\lambda_k \in \mathcal{B}$. Finally, $\lfloor \cdot \rceil$ denotes the nearest integer function, $\lceil \cdot \rceil$ the ceiling function, $\langle \cdot, \cdot \rangle$ the inner product and $\|\cdot\|$ the Euclidean norm.

3.2. Objects in the Dataset

The different objects chosen for the dataset were divided into five classes: textile, wood, vegetation, food and stone. These samples are a set of textured object surfaces that exhibit different properties while remaining fairly flat, so that the blurring effect due to distance variations from the camera was minimal during the scanning process. After the completion of image acquisition, we analyzed each image thoroughly and discarded a few samples which were too blurry. We also excluded materials that exhibited high gloss in the considered viewing conditions, in order to maintain a simple imaging model, as in Equation (1), which allows easy computation of reflectance values:
$\forall k \in [K],\ \forall (x, y) \in [M] \times [N], \quad I^k(x, y) = R^k(x, y) \cdot S(\lambda_k).$ (1)
Finally, for this dataset we provide 65 textile samples of various types and colors. They were selected among textile samples provided by the humanitarian association Don De Soie located in Templeuve in France. In the wood category, there are 18 different samples of wood that can be found in the area around Mjøsa lake in Norway. There are 15 images for the vegetation category, which consist of leaves, flowers and one mushroom. The food category consists of 10 samples of tea, coffee and spices, and there are four samples in the stone category. Image labels include the material category, the acquisition sequence number and a specific name if the material has been precisely identified. For the textile, wood, and vegetation samples, the labels also include the color of the material (suffix _colorname if there are at least two samples of the same type but with different colors) and/or the acquisition side (suffix _back if the back of the material has been acquired together with the front). We provide the sRGB rendering of each image along with the dataset for natural visualization of the imaged object. The sRGB values were computed from colorimetric computation on the spectral data (see Section 3.7).

3.3. Acquisition Setup

We used a HySpex VNIR-1800 hyperspectral camera [51], manufactured by Norsk Elektro Optikk AS. This is a line-scan camera, consisting of a focusing mirror that allows only a narrow line of light in the scene to pass. This line is projected on a collimating mirror and then passed through a transmission grating. This grating separates the incoming light into different wavelengths, and each wavelength is focused onto a detector array (sensor). The camera is coupled with a close-up lens with a working distance of 30 cm. The output from that array consists of a slice of the hyperspectral image, with spatial information in one direction and spectral information in the other. A combination of these line scans provides a full resolution hyperspectral image of the object or scene. The process of line scanning is illustrated in Figure 1.
The output of the HySpex VNIR-1800 camera consists of a radiance image, D , made of K = 186 channels associated with the spectral bands, centered at wavelengths λ k B . The width of the acquired image is M = 1800 which corresponds to the number of photodetectors along the spatial dimension of the detector array of the camera (x-axis), while the height, N, is the number of pixels in the scanning direction (y-axis) and is physically bounded by the size of the scene and the translation stage. During image acquisition, we observed spatial distortions in the output radiance images, which are addressed in the next section.

3.4. Corrections of Spatial Distortions

We observed a cross-track distortion on the x-axis and a shear distortion on the y-axis of the output images. These distortions were observed during the acquisition of test targets (grid images containing perfect squares, a Macbeth ColorChecker, rulers). The correction of these distortions can be modeled by two functions, $h : [M] \times [N] \to [M]$ and $v : [M] \times [N] \to [N]$, such that the value of pixel $(h(x, y), v(x, y))$ of the distorted image is assigned to pixel $(x, y)$ of the corrected image:
$\forall (x, y) \in [M] \times [N], \quad \mathbf{I}(x, y) = \mathbf{D}(h(x, y), v(x, y)).$ (2)
The function h considers cross-track distortion and is called sensor correction since the distortion only depends on the physical properties of the sensor of the camera. This distortion is pointed out by the manufacturer (see Section 3.4.1). The function v considers the shear distortion. This distortion is due to a nonorthogonality between the sensor and the scan direction given by the translation stage. The function v is called affine correction, since the shear distortion can be modeled by an affine transformation (see Section 3.4.2).

3.4.1. Cross-Track Distortion

The acquired images present a distortion across the scan direction called cross-track distortion. This distortion is due to the fact that each pixel is associated with a surface element whose size varies along the line; this variation is the same for every line of a channel and for every channel. Figure 1 shows the whole field of view of the camera for a given line of pixels (yellow triangle). Point O (red cross) denotes the intersection of the optical axis of the camera and the translation stage. The camera associates 850 pixels to the left side of O and 949 pixels to its right side, as can be seen on the sensor of the camera. However, the surface area observed on each side of O is the same. Consequently, the pixels on the left side are associated with larger surface elements (compression) than the pixels on the right side (expansion). To overcome this distortion, we use the sensor model function $\theta$ provided by the camera manufacturer, which represents the field of view of each pixel in an acquired line of pixels. The whole field of view of the camera can be deduced as $\theta(1800) - \theta(0) = 0.29$ rad. We use $\theta(x)$ to compute the width of the surface element $P_s(x)$ (in mm) observed by each pixel as
$\forall x \in [M], \quad P_s(x) = H \cdot \left[ \tan\!\left(\theta(x) + \frac{\beta(x)}{2}\right) - \tan\!\left(\theta(x) - \frac{\beta(x)}{2}\right) \right],$ (3)
where $H = 340$ mm denotes the distance between the camera lens and O, and $\beta(x)$ represents the angular pixel size. The pixel size is undistorted at O ($x = 850$), and $\beta(850)$ is equal to $1.6 \times 10^{-4}$ rad (data provided by the camera manufacturer). The angular pixel sizes $\beta(x)$ can be computed using the following recurrence:
$\forall x \in [M], \quad \beta(x) = \begin{cases} 1.6 \times 10^{-4} & \text{if } x = 850, \\ 2\theta(x+1) - 2\theta(x) - \beta(x+1) & \text{if } x < 850, \\ 2\theta(x) - 2\theta(x-1) - \beta(x-1) & \text{if } x > 850. \end{cases}$ (4)
Figure 2 shows the pixel size $P_s$ w.r.t. the x coordinate. Note that $\sum_{x=0}^{1799} P_s(x) = 98.6$ mm is the width observed by our camera.
To correct this distortion on acquired images, we measure the cumulative ratio $C_r(x)$ with respect to $P_s(850)$ as
$\forall x \in [M], \quad C_r(x) = \sum_{i=0}^{x-1} \frac{P_s(i)}{P_s(850)}.$ (5)
Finally, the sensor correction $h$ is given by
$\forall (x, y) \in [M] \times [N], \quad h(x, y) = \lfloor C_r(x) \rceil.$ (6)
The pixel coordinate is rounded to the nearest integer to avoid artifacts due to the interpolation of pixel values.
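To make the computation concrete, the following Python sketch implements the quantities of Equations (3)-(5) and the sensor-correction lookup table, under stated assumptions: the array theta stands for the manufacturer's sensor model θ sampled at M + 1 pixel positions (it is not distributed with the dataset), and the final rounding follows the nearest-integer convention described above.

import numpy as np

M = 1800           # photodetectors along the x-axis
H = 340.0          # lens-to-stage distance in mm
BETA_850 = 1.6e-4  # undistorted angular pixel size in rad

def cross_track_lut(theta):
    # theta: assumed array of M + 1 sensor-model angles (rad), with theta[1800] - theta[0] = 0.29
    beta = np.empty(M)
    beta[850] = BETA_850
    for x in range(849, -1, -1):          # x < 850, Equation (4)
        beta[x] = 2 * theta[x + 1] - 2 * theta[x] - beta[x + 1]
    for x in range(851, M):               # x > 850, Equation (4)
        beta[x] = 2 * theta[x] - 2 * theta[x - 1] - beta[x - 1]
    # Width of the surface element seen by each pixel, Equation (3)
    ps = H * (np.tan(theta[:M] + beta / 2) - np.tan(theta[:M] - beta / 2))
    # Cumulative ratio with respect to the undistorted pixel, Equation (5)
    cr = np.concatenate(([0.0], np.cumsum(ps / ps[850])[:-1]))
    # Sensor correction h(x, .), rounded to the nearest integer
    return np.clip(np.rint(cr).astype(int), 0, M - 1)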

3.4.2. Shear Distortion

After the sensor correction, the image still exhibits a distortion on the y-axis due to the geometry of the set-up. Indeed, the direction in which the material moves and the line sensor are not exactly perpendicular. This implies that a rectangle in the scene becomes a parallelogram in the acquired image (shear distortion). This can be modeled as an affine transform, leading to the correction
$\forall (x, y) \in [M] \times [N], \quad v(x, y) = \lfloor a x + b y \rceil,$ (7)
and the transform only depends on two parameters: a and b. The pixel coordinate is again rounded to the nearest integer to avoid interpolation artifacts. These parameters are experimentally estimated using the following method: we acquire 10 images of a grid with different rotations (the rotation angle ranges from 0 to $\pi/4$). Note that the grid is originally made of perfect squares: four equal sides and four equal angles ($\pi/2$ rad). Then, we apply the transform given by Equation (7) to the grid images after sensor correction for different values of a and b. Finally, we measure the mean of the angles, $\gamma$, between the “horizontal” and “vertical” lines obtained after Otsu thresholding [52] and Hough transform [53] for all sensor-corrected grid images and three spectral channels, and retain the values of a and b that minimize $|\gamma - \pi/2|$ (i.e., the values of a and b that create right angles on the grid images). We use the Otsu and Hough techniques here since they require few parameters and are adequate for the estimation of lines in these simple grid images, which are just made of dark straight lines on a light background. The angular resolution for the Hough transform is set to $\arctan(2/1800) = 1.2 \times 10^{-3}$ rad and represents the smallest angle between two lines of pixels in an image acquired by our camera. The optimal values of a and b give
$\forall (x, y) \in [M] \times [N], \quad v(x, y) = \lfloor 0.021\, x + 1.032\, y \rceil.$ (8)
Table 2 shows the mean and standard deviation of $|\gamma - \pi/2|$ before and after applying the correction of Equation (8). It can be seen that, after the correction, the mean $\gamma$ of the angles of the squares of the grid images is close to $\pi/2$, with a difference of $6 \times 10^{-4}$ rad, which is lower than the angular resolution ($1.2 \times 10^{-3}$ rad).
Figure 3 represents the vector field of the correction on an image of size $1800 \times 2000$ pixels. The origin of each vector is $(h(x, y), v(x, y))$, and its end is $(x, y)$. Both corrections are translations along the x-axis (Equation (6)) or the y-axis (Equation (8)), so they can be applied jointly or separately. In the end, we obtain, for each acquisition, a corrected radiance image $\mathbf{I}$ made of $K = 186$ channels.
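In practice, both corrections reduce to a per-pixel lookup into the distorted cube, as in Equation (2). The sketch below assumes a radiance cube stored as a NumPy array of shape (K, N, M) indexed as D[k, y, x], the hypothetical cross_track_lut helper from the previous sketch, and the shear parameters of Equation (8).

import numpy as np

def correct_image(D, h_lut, a=0.021, b=1.032):
    # D: distorted radiance cube of shape (K, N, M); h_lut: sensor-correction table of length M
    K, N, M = D.shape
    xx, yy = np.meshgrid(np.arange(M), np.arange(N))              # pixel grids of shape (N, M)
    hx = h_lut[xx]                                                # h(x, y), independent of y
    vy = np.clip(np.rint(a * xx + b * yy).astype(int), 0, N - 1)  # v(x, y), Equation (8)
    return D[:, vy, hx]                                           # corrected radiance image I, Equation (2)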

3.5. Impact of the Corrections on Pixel Resolution

To quantify the impact of the sensor and affine corrections on the pixel resolution, we acquire an image of a ruler with equally spaced markings of 1 mm along the x-axis. Then, we measure the pixel resolution in pixels·mm⁻¹ from the spacing between the markings in a single-channel image (obtained by averaging 19 spectral channels) in three configurations: without corrections, after sensor correction, and after sensor and affine correction. Table 3 shows the means and standard deviations of these resolutions.
It can be seen that the pixel resolutions are similar, on average, before and after applying the sensor correction (without corrections, pixel widths are larger to the left of the intersection O between the translation stage and the optical axis than to the right, see Section 3.4.1), but the standard deviation along the x-axis is reduced by a factor of 2, which means that the sensor correction effectively decreases the cross-track distortion. Moreover, we notice that the pixel resolution, on average, is close to the theoretical one given by $1/P_s(850) = 18.38$ pixels·mm⁻¹ (see Equation (3)), and that the affine correction does not impact the pixel resolution along the x-axis.
To illustrate the impact of the corrections, images of the 49th channel (centered at 561.76 nm) of the acquisition of the grid made of perfect squares are shown in Figure 4 without corrections, after sensor correction, and after sensor and affine correction. It can be seen that without corrections, the pixel resolution increases w.r.t. the x coordinate (the sides of the squares appear longer on the right of the image than on the left). After sensor correction or both corrections, the pixel resolution is constant w.r.t. the x coordinate (the sides of the squares remain constant), which means that the cross-track distortion has been reduced. Moreover, we observe on the image after both corrections that the shear distortion has been reduced (the squares have right angles).

3.6. Reflectance Computation

In this section, we describe how we obtain the reflectance image $\mathbf{R}$ of a texture from the corrected radiance image $\mathbf{I}$. To properly estimate the reflectance on each surface element of a texture, we first estimate the spectrum $\mathbf{S} = \{S(\lambda_k)\}_{\lambda_k \in \mathcal{B}}$ of the illumination system provided with the HySpex VNIR-1800. This illumination system (3900e DC Regulated ER Lightsource) is manufactured by Illumination Technologies, Inc. [54]. For the estimation of the spectral power distribution of this illumination, we use the SG-3051 SphereOptics Diffuse Reflectance Tile [55], which provides a reflectivity of 99% in the range of [400, 1000] nm. We then acquire each texture simultaneously with the SG-3051 tile to produce uncorrected radiance images $\mathbf{D}$; then we correct them using Equation (2) to obtain radiance images $\mathbf{I}$. We noticed that the SG-3051 tile has a rough texture that produces shaded areas in the acquired images. To produce a robust estimation of the illumination, we adopted the following strategy for each acquired texture: we select a sub-image $\mathbf{P} = \{P^k\}_{k \in [K]}$ of $550 \times 550$ pixels centered in the SG-3051 tile in the radiance image $\mathbf{I}$; then we compute the summation image $P$ of all channels, $P = \sum_{k \in [K]} P^k$, and retain the set $\mathcal{Q}$ of pixel coordinates $(x, y)$ whose values in $P$ are greater than the 95th percentile. Finally, the estimation of the illumination is given by taking, for each channel $P^k$, the median value of the pixels in $\mathcal{Q}$ (to avoid the impact of saturated values):
$\forall k \in [K], \quad S(\lambda_k) = \underset{(x, y) \in \mathcal{Q}}{\mathrm{Median}} \; P^k(x, y).$ (9)
Now, we can compute the reflectance image $\mathbf{R}$ by dividing each channel of the radiance image $\mathbf{I}$ by its corresponding value in $\mathbf{S}$, inverting Equation (1):
$\forall k \in [K],\ \forall (x, y) \in [M] \times [N], \quad R^k(x, y) = \frac{I^k(x, y)}{S(\lambda_k)}.$ (10)
Reflectance images are cropped to $1024 \times 1024$ pixels, which corresponds to the smallest of the relevant regions of interest (ROIs) of the 112 selected textures (ROIs that actually contain texture elements). Figure 5 shows an sRGB rendering of two radiance images. For each radiance image $\mathbf{I}$, the subimage $\mathbf{P}$ selected for illumination estimation is displayed as a red dashed square, and the texture region that we retain as the reflectance image support $\mathbf{R}$ is displayed as a green solid square. Note that pixel values that have undergone specular reflection of the light in the radiance image can be higher than those of the SG-3051 tile and produce values above one in the reflectance image. We decided to keep them unchanged in the final dataset so that the original corrected radiance image can be retrieved by multiplication with the original illumination spectrum (which is provided in the metadata of the reflectance image file, see Section 3.7).
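The illumination estimation of Equation (9) and the division of Equation (10) can be sketched as follows; the tile position and the half-size of the 550 × 550 sub-image are assumptions of the sketch, since they depend on each acquisition.

import numpy as np

def estimate_illumination(I, tile_center, half=275):
    # I: corrected radiance cube (K, N, M); tile_center: assumed (y, x) centre of the SG-3051 tile
    cy, cx = tile_center
    P = I[:, cy - half:cy + half, cx - half:cx + half]   # sub-image P of 550 x 550 pixels
    P_sum = P.sum(axis=0)                                # summation image over all channels
    Q = P_sum > np.percentile(P_sum, 95)                 # pixels above the 95th percentile
    return np.median(P[:, Q], axis=1)                    # per-channel median, Equation (9)

def reflectance(I, S):
    # Equation (10): divide each channel by the corresponding illumination value
    return I / S[:, None, None]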
As a reference, we also compute the reflectance image of a Macbeth ColorChecker. Figure 6 shows the reflectances of the 24 color patches, averaged over 200 × 200 pixels for each patch. As for the reflectance measurements in [56], the reflectance is mostly flat in the NIR.

3.7. Dataset Description

The dataset is available on the websites of the team/laboratory of the authors (http://color.univ-lille.fr/datasets and https://www.ntnu.edu/web/colourlab/software). Each of the 112 texture images is provided as both (i) a hyperspectral reflectance image in ENVI format and (ii) an sRGB reconstructed image in PPM format. (i) The ENVI format consists of two files: a binary file (extension .raw) of single-precision floating-point variables (32-bit) that stores the raw hyperspectral reflectance image in the BSQ (Band SeQuential) data organization (channel by channel), and a text file (extension .hdr) which contains the metadata of the image. The .hdr text file holds:
  • A description of camera parameters (i.e., camera ID, integration time, aperture size, etc.). These metadata were written by the HySpex acquisition software and we decided to keep them unchanged.
  • Required information to open the associated binary raw file (i.e., image size, number of bands, data type, etc.),
  • Default bands (65, 45, and 18) whose band centers match with the primary Red, Green and Blue of sRGB standards. Generally these are used to generate false color RGB images using three channels.
  • Wavelength array that contains the values of $\lambda_k \in \mathcal{B}$ in nanometers; for example, the center wavelength of channel $R^k$ is $\lambda_k$.
  • Illumination array that contains the values of the illumination $\{S(\lambda_k)\}_{\lambda_k \in \mathcal{B}}$ given by Equation (9). The illumination can be used to recover the corrected radiance channels $I^k$ by inverting Equation (10). The average value of this illumination over the 112 images is shown in Figure 7. It can be seen that the illumination around 400 nm is weak; consequently, the corresponding channels in the reflectance images are likely to be noisy [57]. We therefore measure the noise power by analyzing the standard deviation over 200 × 200 pixels on each gray patch of the reflectance image of the Macbeth ColorChecker in Figure 8. In addition to the illumination effect, the difficulty of obtaining a good signal at low wavelengths is classically due to the optics and the low sensor sensitivity in this area, where we are at the limit of the optical model being used. Nevertheless, we chose to provide these data and let the users decide whether to use them in their simulations.
    Moreover, we provide ImageJ plugins and Python code for opening and processing the data. Matlab users can call the following instruction to open one hyperspectral reflectance image:
    R = multibandread('<image_name>.raw', [1024,1024,186], 'float32', 0, 'bsq', 'ieee-be');
    These codes are available as Supplementary Materials.
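    For completeness, the same binary file can also be read without Matlab; the short Python sketch below assumes NumPy and the big-endian, band-sequential float32 layout described above.

    import numpy as np

    def read_hytexila(path, k=186, rows=1024, cols=1024):
        # Big-endian ('>f4') 32-bit floats stored band by band (BSQ)
        return np.fromfile(path, dtype='>f4').reshape(k, rows, cols)

    R = read_hytexila('<image_name>.raw')   # shape (186, 1024, 1024)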
(ii) The reflectance images are converted from the 186-channel domain to the sRGB color space. This transformation is performed using the CIE XYZ 2° standard observer functions. Then, the reference white D65 is used to convert values from XYZ to the sRGB color space. The resulting sRGB images are provided in PPM format. Figure 9 and Figure 10 show the 112 sRGB samples of textures with filenames.

4. Spectral Dimension Analysis

In hyperspectral images, the adjacent spectral bands are highly correlated [58]. Even though there are many good arguments for processing hyperspectral images at full spectral resolution [59], dimensionality reduction is often beneficial. It is performed by approximating the original hyperspectral data to a varying degree of precision with a smaller number of dimensions. The dimensionality or complexity of acquired reflectance spectra defines the dimension of the vector space that is required to describe a spectrum.
There are various methods used for the computation of the effective dimension of spectral data, which can mainly be divided into supervised and unsupervised techniques. Supervised methods aim to preserve the a priori information, while unsupervised methods do not assume any prior information about the input data. Among the unsupervised methods, principal component analysis (PCA) [60,61] and independent component analysis (ICA) [62] are the most popular. PCA aims to maximize the variance of the input data by projecting it onto an orthogonal subspace, while ICA looks for the sources of variation in the input data, which are assumed to be statistically independent of each other. Among the supervised methods, Fisher’s linear discriminant analysis (LDA) [63] and discriminative locality alignment (DLA) [64] are the prominent techniques, and many variants of these techniques have been proposed in the literature. A review of linear and non-linear dimensionality reduction methods can be found in [65].
Despite several previous studies, there is still no agreement on the number of effective dimensions required to represent spectral data. As an example, for the Munsell reflectance dataset, different authors have proposed effective representations ranging from 3 to 18 dimensions [66]. The required dimensionality mainly depends on the statistical properties of the spectra being considered and on the information that one wants to preserve or emphasize. A review of the work done on finding the dimension of spectral data is provided in [66].

4.1. Spectral Analysis of the Proposed Dataset

In this section we investigate the effective spectral dimension of the acquired hyperspectral images. We do not assume any a priori knowledge about the data and provide a benchmark analysis in the spectral domain. Having these goals in mind, we used the principal component analysis (PCA) technique, which is mainly used to reduce the dimensionality of data consisting of a large number of inter-related variables, while retaining the variance in data [61]. PCA has the ability to de-correlate the spectral data and provide variance information through simple computational methods. This ability makes it one of the most popular techniques for the computation of the effective spectral dimension in hyperspectral imaging [67].
For the computation of effective spectral dimensions, each image is processed individually. First, a covariance matrix is computed, and then eigen decomposition is applied on this matrix to obtain the eigenvalues and eigenvectors. The eigenvectors are arranged relative to their eigenvalues in descending order. These eigenvectors consist of the characteristic reflectances of the input hyperspectral image. Each eigenvalue has a corresponding eigenvector, and most of the variance is concentrated in the first few eigenvectors. If the reflectance spectra were linearly independent, then the eigenvalues would all be significantly above zero, indicating that each wavelength contains independent, non-redundant information. However, in the case of hyperspectral imaging, typically only a few eigenvalues are significantly high, while the remaining values are close to zero, which indicates that their corresponding eigenvectors contain little variance and are mostly redundant. For an effective representation of the spectral data, we select only the first few eigenvalues, and the projection of the spectra onto the corresponding eigenvectors provides an adequate representation of the acquired data. This means that if we select the principal components containing 95% of the variance, the rest of the principal components can be discarded with a loss of 5% of the information. The decision to keep a certain percentage of the variance depends on the application for which a reduced set of representative data is required. For this analysis, we computed, for each hyperspectral image, the effective dimension that retains 99% and 95% of the total variance [66].
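A minimal sketch of this per-image computation, assuming the reflectance cube is stored as a (K, N, M) NumPy array, is given below.

import numpy as np

def effective_dimension(R, variance=0.99):
    # Number of principal components needed to retain the given fraction of variance
    K = R.shape[0]
    X = R.reshape(K, -1).T                      # one K-dimensional spectrum per pixel
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending eigenvalues
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, variance) + 1)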
Figure 11a shows the number of components required to retain 99% of the variance, and Figure 12 gives information about the effective spectral dimensions of the whole dataset and of each of the five categories of materials. It can be observed that most of the hyperspectral images have an effective dimension of two or three. All of the “stone” category images have an effective dimension of two, while most of the “food” category show an effective dimension of three. Most of the “textile” category images have an effective dimension between two and five, while some of the images have an effective spectral dimension beyond seven. Among them, three images show effective dimensions of 33, 47 and 58. We further examined the image with the highest number of dimensions (the textile_05 image) and found that 82.55% of the variance is retained by the first principal component and 12.02% by the second. The remaining components add very little information; for example, the third component contains 0.7% of the variance in this image. If the desired value of retained variance is reduced to 95%, then most of the images in our dataset can be described with one or two principal components, while six images from the “textile” category require a third component. Figure 12b represents the number of images that require a certain number of dimensions to retain 95% of the variance.
From these figures, it can be observed that the number of effective dimensions depends on the nature of the materials being imaged and may be used as a feature to estimate the category of the imaged object. Some images require a higher dimension to describe their spectral properties because of anomalies, for example, a tiny region with specular reflection, which causes the effective dimension to rise. Figure 11 shows that by reducing the threshold for retained variance, the number of dimensions required to represent the spectral data is reduced.

4.2. Interpretation of the Effective Dimension

The effective dimension analysis of our dataset on the basis of PCA, as described in Section 4.1, provided a number indicating the number of dimensions required to retain a certain percentage of variance. However, it did not provide a physical meaning to the retained spectra since the principal components contain both positive and negative values. In order to obtain the information about the retained spectra, we used the non-negative matrix factorization (NMF) [68]-based decomposition on the hyperspectral images in our dataset. NMF is used in remote sensing hyperspectral images for spectral unmixing of the acquired data [69,70]. The advantage of NMF over PCA is that it extracts sparse and meaningful features from the given data, and the non-negativity constraint ensures the output is interpretable in terms of its physical meaning.
NMF is able to compute the spectral signatures of the end-members in a given hyperspectral image and the presence of each end-member at each pixel in the form of abundance maps. The assumption behind this computation is the linear mixing model, which states that the spectral signature of each pixel is a linear combination of the spectral signatures of the end-members, weighted by their abundances at that spatial location.
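A sketch of this decomposition with scikit-learn is given below; the (K, N, M) array layout and the initialization settings are assumptions of the sketch, and the number of components is the one suggested by the PCA analysis (three in the examples that follow).

import numpy as np
from sklearn.decomposition import NMF

def unmix(R, n_components=3):
    # Linear mixing model: each pixel spectrum is a non-negative combination of end-members
    K, N, M = R.shape
    X = np.clip(R.reshape(K, -1).T, 0, None)       # pixels as non-negative spectra
    model = NMF(n_components=n_components, init='nndsvd', max_iter=500)
    abundances = model.fit_transform(X)            # per-pixel abundances
    spectra = model.components_                    # end-member spectra, shape (n_components, K)
    return spectra, abundances.T.reshape(n_components, N, M)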
In NMF-based decomposition, the number of components to be retained is provided by the user, mostly based on prior knowledge of the end-members. For our dataset, we used PCA to compute the required number of components. To illustrate the NMF-based decomposition results, we provide two examples in Figure 13 and Figure 14, containing samples of a leaf and a textile, respectively. Figure 13b is the color rendering of the vegetation_leaf_03_green image from our dataset, and Figure 13c shows spectra from randomly selected pixels in this hyperspectral image. We decomposed the image into three components, and the spectral signatures of the computed components are shown in Figure 13c,e,g. The corresponding abundance maps in Figure 13d,f,h show the presence of the extracted components in the image. The first component in Figure 13c is strongly reflective in the near-infrared region, while showing a lower response in the visible region. This spectrum suggests that this component carries information about the chlorophyll in the leaf, and the abundance map in Figure 13d shows the areas where the presence of chlorophyll is prominent. The second component is formed by the veins and the lower left part of the leaf, as can be seen in Figure 13f. The green color of the leaf corresponds to the third component, as seen in Figure 13g, and the abundance map in Figure 13h provides information about the distribution of this component in the leaf.
Figure 14 shows the decomposition results of the textile_22_cyan image. The first and second components contain information about the reflectance from the textile surface which causes its particular color appearance. It is interesting to note that although the spectral signatures of first two components are almost the same in terms of shape in the visible region, they have different intensities in the near-infrared region. Upon closer inspection of the abundance maps in Figure 14d,f, it can be observed that the first component consists of directly illuminated regions while the second component is composed of regions in shadows. There is a third component in this sample which corresponds to a line in the abundance map (Figure 14h) and is probably caused by some component in the textile which is not easily visible in a color image.
Although we have provided only two examples of the NMF-based decomposition of the spectral data, the results indicate that such an analysis can provide interesting information about the spectral composition of the materials present in an image, for example, the amount of chlorophyll in vegetation, the pigments and dyes in a textile, or the quality of food materials. We used PCA to obtain information about the spectral dimension before decomposing the data through NMF. From the comparison of spectral behavior and effective dimension, it can be seen that the effective spectral dimension of most of the objects in our dataset is between two and eight. However, this dimension should not be directly equated with the number of spectral channels needed for acquisition. Although some studies have aimed to find the number of filters that best capture the maximum spectral information [71], there is no agreement on the optimum number of channels since it depends on the nature of the surface spectral properties. Nevertheless, knowledge of the effective spectral dimension is useful for dimensionality reduction and for obtaining the components of the spectra by decomposing them. In the next section, we study the impact of dimensionality reduction on classification performance on the hyperspectral dataset using Local Binary Pattern (LBP)-based descriptors.

5. Texture Classification

A texture image is a characterization of the spatial and spectral properties of the physical structure of a material or an object. A texture analysis can be seen as a set of measures that quantify and provide information about the spatial arrangement of the spectral responses of an object. During texture classification, the goal is to assign an unknown sample texture image to a known texture class. For this purpose, discriminant texture features are extracted from test images and compared to those extracted from training images whose classes are known. To extract a feature from an image, a descriptor is needed. The Local Binary Pattern (LBP) method is one of the most robust feature descriptors, and characterizes the local level variation in a neighborhood of each pixel. Due to its discrimination power and computational efficiency, LBP has become a popular approach for various applications [72]. LBP-based texture classification was first performed on gray-level images, since the original operator only used the spatial information of texture [73]. Later, Palm [74] showed that classification based on a color analysis outperforms the single use of intensity spatial information. Thus, texture feature extraction is extended to the color domain by taking both spatial and color textural information into account. Here, we propose to extend the color LBP-based descriptors to any K-channel radiance image, I . Then, we propose to study the discriminative power of these descriptors on our dataset with respect to an increasing number of channels.

5.1. Texture Features Based on Local Binary Patterns

The LBP operator [73] characterizes the level variation in the neighborhood $\mathcal{N}_{P,d}$, made of $P$ pixels at a spatial distance $d$ from each pixel $p$ of a gray-level image. The outputs of the LBP operator over the multi-channel image are summarized as a concatenated histogram to form a texture feature. Here, we extend four color LBP operators to hyperspectral LBP operators, so that the LBP operator is applied on a $K$-channel radiance image $\mathbf{I}$. This radiance image $\mathbf{I}$ is rendered using both a reflectance image $\mathbf{R}$, for which values above 1 (corresponding to specular surfaces) are clipped to 1 and quantized on 8 bits, and the standard illuminant E, which lights the whole spectrum uniformly. Note that by using the illuminant E we retain only the reflectance information, which makes our study independent of the illumination properties. Moreover, the data are quantized on 8 bits since the channels of digital radiance and color images are often quantized on integer values.
We now detail each LBP operator, i.e., how the radiance value $I_p^k$ of a channel $k$ at pixel $p$ is compared with the values in its neighborhood $\mathcal{N}_{P,d}$. A minimal code sketch of the marginal and opponent band operators is given after the following list.
  • Marginal LBP (LBP): The basic LBP operator is applied marginally to each channel $I^k$ at each pixel $p$ as [75]
    $\mathrm{LBP}^k[\mathbf{I}](p) = \sum_{q \in \mathcal{N}_{P,d}} s(I_q^k, I_p^k) \cdot 2^{\epsilon(q)},$ (11)
    where $\epsilon(q) \in [P]$ is the index of each neighboring pixel $q$ of $p$ in $\mathcal{N}_{P,d}$, and the sign function $s$ is defined as $s(\alpha, \beta) = 1$ if $\alpha \geq \beta$, and 0 otherwise. The final texture feature results from the concatenation of the $K$ $2^P$-bin un-normalized histograms of $\{\mathrm{LBP}^k[\mathbf{I}]\}_{k \in [K]}$, and its size is $K \cdot 2^P$.
  • The Opponent Band LBP (OBLBP): Mäenpää et al. [76] improved the LBP operator by taking the inter-channel correlation into account. For this purpose, they considered the opponent band (OBLBP) operator of each pair of channels $(I^k, I^l)$, $(k, l) \in [K]^2$:
    $\mathrm{OBLBP}^{(k,l)}[\mathbf{I}](p) = \sum_{q \in \mathcal{N}_{P,d}} s(I_q^l, I_p^k) \cdot 2^{\epsilon(q)}.$ (12)
    The final texture feature results from the concatenation of the $K^2$ histograms of $\{\mathrm{OBLBP}^{(k,l)}[\mathbf{I}]\}_{k,l \in [K]}$, and its size is $K^2 \cdot 2^P$.
  • Luminance-Local Color Contrast LBP (L-LCCLBP): This approach considers an image to have both spatial information of luminance and inter-channel information of different bands. The spatial information of the luminance results from the LBP operator being applied to the pseudo panchromatic image (PPI) $I^{PPI}$, which is computed as the average value over all channels at each pixel [77]:
    $I^{PPI} = \frac{1}{K} \sum_{k \in [K]} I^k.$ (13)
    Regarding the inter-channel content, Cusano et al. [78] define the local color contrast (LCC) operator that depends on the angle $\Theta$ between the value of a pixel $p$ and the average value $\bar{\mathbf{I}}_p = \frac{1}{P} \sum_{q \in \mathcal{N}_{P,d}} \mathbf{I}_q$ of its neighbors in the spectral domain:
    $\Theta[\mathbf{I}](p) = \arccos \frac{\langle \mathbf{I}_p, \bar{\mathbf{I}}_p \rangle}{\|\mathbf{I}_p\| \cdot \|\bar{\mathbf{I}}_p\|}.$ (14)
    The LCC operator is then given by
    $\mathrm{LCC}[\mathbf{I}](p) = \begin{cases} \frac{2 \cdot 255}{\pi} \, \Theta[\mathbf{I}](p) & \text{if } 0 \leq \Theta[\mathbf{I}](p) \leq \pi/2, \\ 255 & \text{otherwise}. \end{cases}$ (15)
    The final texture feature is the concatenation of the histogram of $\mathrm{LBP}[I^{PPI}]$ and the histogram of $\mathrm{LCC}[\mathbf{I}]$, and its size is $2 \cdot 2^P$.
  • Luminance-Opponent Band Angles LBP (L-OBALBP): As for L-LCCLBP, this approach first applies the LBP operator to the PPI $I^{PPI}$, and then, following Lee et al. [79], it considers the angle between each pair of bands $(k, l) \in [K]^2$, $k \neq l$, as
    $\mathrm{OBALBP}^{(k,l)}[\mathbf{I}](p) = \sum_{q \in \mathcal{N}_{P,d}} s\!\left( \arctan \frac{I_q^k}{I_q^l + \eta},\ \arctan \frac{I_p^k}{I_p^l + \eta} \right) \cdot 2^{\epsilon(q)},$ (16)
    where $\eta$ is a constant of small value. The final texture feature is the concatenation of the histogram of $\mathrm{LBP}[I^{PPI}]$ and the $K(K-1)$ histograms of $\{\mathrm{OBALBP}^{(k,l)}[\mathbf{I}]\}_{k \neq l \in [K]}$. Its size is $(1 + K(K-1)) \cdot 2^P$.
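The following sketch illustrates the marginal LBP and OBLBP features of Equations (11) and (12) for P = 8 and d = 1; the wrap-around behaviour at the image borders and the bit-index convention are simplifications of the sketch.

import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]  # neighbour index epsilon(q)

def lbp_codes(ref, cmp_):
    # Bit epsilon(q) is set when the neighbour value taken from cmp_ is >= the centre value in ref
    codes = np.zeros(ref.shape, dtype=np.int32)
    for eps, (dy, dx) in enumerate(OFFSETS):
        neighbour = np.roll(cmp_, shift=(-dy, -dx), axis=(0, 1))  # wraps at the borders
        codes |= (neighbour >= ref).astype(np.int32) << eps
    return codes

def marginal_lbp_feature(I):
    # Concatenation of the K 256-bin histograms of LBP^k[I], Equation (11)
    return np.concatenate([np.bincount(lbp_codes(ch, ch).ravel(), minlength=256) for ch in I])

def oblbp_feature(I):
    # Concatenation of the K^2 histograms of OBLBP^(k,l)[I], Equation (12)
    return np.concatenate([np.bincount(lbp_codes(I[k], I[l]).ravel(), minlength=256)
                           for k in range(len(I)) for l in range(len(I))])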

5.2. Assessment on Our Proposed Dataset

5.2.1. Covariance Analysis

Since 186 channels are too many to compute the proposed LBP descriptors on, we spectrally downsample the 112 radiance images. We selected, for each radiance image $\mathbf{I}$, $L$ channels whose indexes are uniformly spread in $[K]$. The choice of a uniform distribution of channels was made to comply with the datasets and hyperspectral devices presented in Table 1, which provide spectral channels whose associated bands are uniformly distributed in their respective spectral range. The output is a spectrally sampled radiance image $\mathbf{I}_L = \{I^k\}_{k \in \mathcal{C}_L}$, where $\mathcal{C}_L = \left\{ \left\lceil \frac{(l+1)(K-1)}{L+1} \right\rceil \right\}_{l \in [L]}$. Note that the respective band centers $\{\lambda_k\}_{k \in \mathcal{C}_L}$ range uniformly in ]405.37, 995.82[ nm.
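A short sketch of this uniform channel selection, following the definition of C_L above, is given below.

import numpy as np

def select_channels(K=186, L=10):
    # Indices C_L of L channels uniformly spread over the K available bands
    l = np.arange(L)
    return np.ceil((l + 1) * (K - 1) / (L + 1)).astype(int)

# Example: I_L = I[select_channels(K=186, L=10)] for a radiance cube I of shape (K, N, M)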
In order to determine the minimum number of channels, L, required to provide the best discrimination between channels, we studied the normalized covariance matrix between each pair of channels over the whole dataset, computed as in Section 4. Figure 15 shows that channels with close band center wavelengths are more correlated than distant ones and that infrared channels are highly correlated. The area inside the two red lines represents the covariance between channels separated by fewer than seven channels. In this area, the inter-channel covariance is above 95%. Given this, we consider that a spacing of seven channels (22.33 nm) is enough, so in our study we considered a number of channels, L, ranging from 1 to 27. In colorimetry and in the colour imaging communities, it has been accepted that sampling spectra at 10 nm provides enough discrimination [80]. It is interesting to note that in our case, such large steps would provide a good description.

5.2.2. Classification Scheme

A texture feature assessment was performed on our dataset by considering the 112 texture images as 112 different classes. Each 8-bit hyperspectral radiance texture image $\mathbf{I}_L$ of size $1024 \times 1024 \times L$ was divided into 25 subimages of size $204 \times 204 \times L$ (without overlap), among which 12 were randomly selected as training images and the 13 others as test images. The supports of the subimages for this random sampling, together with the corresponding classes, are available as Supplementary Material in two text files (train.txt and test.txt). In a learning phase, LBP histogram features were extracted from each training image. Then, to assign a test image to one of the classes, the same features were extracted from each test image and compared to those of each training image. This comparison was performed using a similarity/dissimilarity measure between test and train features. Finally, each test image was assigned to the class of the training image with the best match using a decision algorithm. The performance of a classification algorithm is measured by the rate of correctly classified test images and depends on three main parts of the classification, namely, the choice of discriminating textural features, the feature similarity measure and the decision algorithm. In order to determine the most discriminant texture features with respect to the number of channels, we retain the 1-Nearest Neighbor decision algorithm coupled with the similarity measure based on the intersection between histograms [81], since this classification scheme requires no additional parameter. Moreover, for simplicity, we use the LBP neighborhood $\mathcal{N}_{P,d}$ composed of $P = 8$ pixels at a spatial distance of $d = 1$.
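The classification scheme can be sketched as follows, assuming that the feature vectors have already been extracted with one of the operators of Section 5.1.

import numpy as np

def histogram_intersection(h1, h2):
    # Similarity between two un-normalized histogram features [81]
    return np.minimum(h1, h2).sum()

def classify_1nn(test_features, train_features, train_labels):
    # Assign each test feature to the class of its most similar training feature
    predictions = []
    for f in test_features:
        scores = [histogram_intersection(f, g) for g in train_features]
        predictions.append(train_labels[int(np.argmax(scores))])
    return predictions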

5.2.3. Classification Accuracy

Figure 16 shows the classification performance reached by each texture feature presented in Section 5.1 with respect to the number of selected channels, L (see Section 5.2.1). The results show that, except for L-LCCLBP, increasing the number of channels improves the performance, and after 10 channels, the performance stabilizes. The marginal LBP operator, which takes only spatial correlation into account, reached about 92% accuracy after 10 channels. By taking spectral correlation into account, using information from opponent bands, the OBLBP and L-OBALBP operators reached about 98% accuracy after 10 channels. Regarding the L-LCCLBP operator, the performance was lower, since instead of using spatial and spectral correlations for each channel, this operator uses only the information projected onto the panchromatic intensity. Overall, the results improved from 79.40% (LBP, OBLBP, L-LCCLBP and L-OBALBP with L = 1) to 98.76% (OBLBP with L = 18). In order to highlight the interest of hyperspectral classification compared with color classification, we simulated color images using the three spectral sensitivities of the tri-CCD Basler L301kc color camera [82] and E illumination. The results show that, for three operators, the selection of three narrow spectral bands (two in the visible domain and one in the infrared domain) improves the performance compared to the selection of three wide bands in the visible domain.
We conclude that by increasing the number of channels, LBP descriptors provide more discriminative information about spatial and spectral correlations at the expense of feature size and computation time, and except for L-LCCLBP, hyperspectral imaging improves the discriminatory power of descriptors compared to color imaging.
In order to study the influence of channel distribution, we selected 10 sets of 10 random channels. All of these provided performances that were less than or close to performances obtained with a uniform distribution of 10 channels for the marginal LBP operator. We also considered the 10 channels with the highest variance among our dataset. They provided weak classification performances compared to uniform or random distributions, since all the related band centers are in the near infrared domain and consequently, do not take into account the textural information observed in the visible range.
In order to study the discriminative power of PCA for hyperspectral texture classification, we apply the marginal LBP operator over the projections of the 186-channel hyperspectral images on the principal axes of the covariance matrix (denoted as PCA channels in the following). Note that the OBLBP, L-LCCLBP and L-OBALBP operators are not relevant for this study since they use spectral correlation which, by definition, does not exist between PCA channels. PCA channels are computed for each image of the dataset by projecting the 186-channel reflectance image onto each principal axis, such that the first PCA channel is associated with the eigenvector with the highest eigenvalue, and so on. For a fair comparison with the previous operators, PCA channels are then quantized on 8 bits using the same quantization function for all PCA channels of all 112 images. The LBP-PCA accuracy is shown in Figure 16, where L is the number of sorted (according to decreasing eigenvalues) PCA channels that are considered for the LBP operator. By taking only the PCA channel with the highest variance (1st channel), classification performances are lower than by taking one real channel (the 93rd, centered at 699 nm). This can be explained by the specificity of our dataset. Since PCA is a projection of all channels into one channel, two similar textures with different spectral properties are barely discriminated using the PCA. For illustration purposes, Figure 17 shows the sRGB renderings, the 93rd channels and the 1st PCA channels of two textures that are spatially similar but spectrally different (textile_14_orange and textile_14_blue). As can be seen, the images of the 93rd channel of the two textures have different spatial properties and intensities, so they can easily be discriminated, whereas the 1st PCA channels of both textures have similar spatial properties and intensities, so they can hardly be discriminated. By increasing the number of PCA bands, useful information from all channels is used, so that it becomes more discriminative than taking each channel separately. In Figure 16, we can see that after five PCA channels, the performances exceed those of the marginal LBP, and after 10 PCA channels, the performances stabilize at about 95% accuracy. To conclude, using the PCA allows us to improve the performances by using only spatial correlation. However, in contrast to OBLBP and L-OBALBP, which provide the best performances, it ignores the spectral correlation that allows the comparison of different channels and thus discriminates textures using their spectral properties.

6. Conclusions

In this paper, we introduced and described a hyperspectral reflectance dataset of close range textured object surfaces. The spectral range of this dataset is from 405 to 995 nm, thus providing spectral information in both visible and NIR regions. There are five classes of objects, and a total of 112 images are provided. We discussed the hyperspectral image acquisition protocol and the corrections that were applied to the data.
We performed a spectral analysis to quantify the spectral complexity of the samples and showed that a limited number of components permits a good description of their reflectance. Furthermore, we extended local binary pattern operators to hyperspectral texture analysis. We observed that increasing the number of bands permits better texture classification, and we showed that the opponent band local binary pattern performed best among the tested texture descriptors. Indeed, this descriptor exploits inter-channel correlation to discriminate textures by their spectral properties.
This dataset is available for scientific use and simulations, and it provides a benchmark to test various computer vision algorithms that are related to object classification and material identification. This dataset will also help in designing optimal spectral sensors, computational imaging systems and spectral reconstruction algorithms.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/18/7/2045/s1: (i) the hyperspectral images (which can also be downloaded from the websites of the authors' teams/laboratories: http://color.univ-lille.fr/datasets and https://www.ntnu.edu/web/colourlab/software); (ii) two text files (train.txt and test.txt) which contain the supports of the subimages used in the random sampling for the supervised classification of Section 5.2.2 and the corresponding classes; (iii) ImageJ plugins (MSTools-v18-05.zip) and Python code (falseColors.py) for opening and processing the data.

Author Contributions

Definition of the protocol: H.A.K. (20%), S.M. (20%), B.M. (20%), J.-B.T. (20%), J.Y.H. (20%). Texture acquisitions with VNIR-1800 acquisition system in Colourlab: H.A.K. (50%), S.M. (50%). Abstract, Introduction, Section 2 and Conclusion: H.A.K. (60%), S.M. (10%), B.M. (10%), J.-B.T. (10%) and J.Y.H. (10%). Section 3: S.M. (60%), B.M. (25%), H.A.K. (5%), J.-B.T. (5%) and J.Y.H. (5%). Section 4: H.A.K. (70%), J.-B.T. (10%), J.Y.H. (10%) and B.M. (10%). Section 5: S.M. (65%), B.M. (25%), J.-B.T. (5%), J.Y.H. (5%).

Acknowledgments

This work was supported (in part) by the National French project CNRS-INS2I-JCJC-2017 MOSAIC, the HyPerCept project funded by the Research Council of Norway and the Conseil Régional Bourgogne-Franche-Comté, France. We warmly thank Olivier Losson and Ludovic Macaire from the CRIStAL laboratory for their advice on reflectance computation and LBP-based classification.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wolfe, W.L. Introduction to Imaging Spectrometers; SPIE Press: Bellingham, WA, USA, 1997; Volume 25. [Google Scholar]
  2. Miller, P.J. Use of Tunable Liquid Crystal Filters to Link Radiometric and Photometric Standards. Metrologia 1991, 28, 145. [Google Scholar] [CrossRef]
  3. Neumann, N.; Ebermann, M.; Hiller, K.; Seifert, M.; Meinig, M.; Kurth, S. MEMS Tunable Fabry-Pérot Filters for Infrared Microspectrometer Applications. Opt. Soc. Am. 2016. [Google Scholar] [CrossRef]
  4. Sigernes, F.; Syrjäsuo, M.; Storvold, R.; Ao Fortuna, J.; Grøtte, M.E.; Johansen, T.A. Do it yourself hyperspectral imager for handheld to airborne operations. Opt. Express 2018, 26, 6021–6035. [Google Scholar] [CrossRef] [PubMed]
  5. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-Based Edge-Preserving Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
  6. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  7. Amigo, J.M.; Ravn, C.; Gallagher, N.B.; Bro, R. A comparison of a common approach to partial least squares-discriminant analysis and classical least squares in hyperspectral imaging. Int. J. Pharm. 2009, 373, 179–182. [Google Scholar] [CrossRef] [PubMed]
  8. Song, W.; Li, S.; Kang, X.; Huang, K. Hyperspectral image classification based on KNN sparse representation. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016. [Google Scholar]
  9. Mercier, G.; Lennon, M. Support Vector Machines for Hyperspectral Image Classification with Spectral-Based Kernels. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003. [Google Scholar]
  10. Liu, F.; Ye, X.; He, Y.; Wang, L. Application of visible/near infrared spectroscopy and chemometric calibrations for variety discrimination of instant milk teas. J. Food Eng. 2009, 93, 127–133. [Google Scholar] [CrossRef]
  11. Merényi, E.; Farrand, W.H.; Taranik, J.V.; Minor, T.B. Classification of hyperspectral imagery with neural networks: comparison to conventional tools. EURASIP J. Adv. Signal Process. 2014, 2014, 71. [Google Scholar] [CrossRef] [Green Version]
  12. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  13. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2017. [Google Scholar] [CrossRef]
  14. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  15. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  16. Fischer, C.; Kakoulli, I. Multispectral and hyperspectral imaging technologies in conservation: Current research and potential applications. Stud. Conserv. 2006, 51, 3–16. [Google Scholar] [CrossRef]
  17. Hardeberg, J.Y.; George, S.; Deger, F.; Baarstad, I.; Palacios, J.E.H. Spectral Scream: Hyperspectral Image Acquisition and Analysis of a Masterpiece. In Public Paintings by Edvard Munch and His Contemporaries: Change and Conservation Challenges; Frøysaker, T., Streeton, N., Kutzke, H., Hanssen-Bauer, F., Topalova-Casadiego, B., Eds.; Archetype Publications: London, UK, 2015. [Google Scholar]
  18. Pan, Z.; Healey, G.; Prasad, M.; Tromberg, B. Face recognition in hyperspectral images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1552–1560. [Google Scholar] [CrossRef] [Green Version]
  19. Gowen, A.; O’Donnell, C.; Cullen, P.; Downey, G.; Frias, J. Hyperspectral imaging–An emerging process analytical tool for food quality and safety control. Trends Food Sci. Technol. 2007, 18, 590–598. [Google Scholar] [CrossRef]
  20. Eckhard, T.; Klammer, M.; Valero, E.M.; Hernández-Andrés, J. Improved Spectral Density Measurement from Estimated Reflectance Data with Kernel Ridge Regression. In Image and Signal Processing; Springer International Publishing: Cham, Switzerland, 2014; pp. 79–86. [Google Scholar]
  21. Coppel, L.G.; Le Moan, S.; Elıas, P.Z.; Slavuj, R.; Hardeberg, J.Y. Next generation printing—Towards spectral proofing. Adv. Print. Media Technol. 2014, 41, 19–24. [Google Scholar]
  22. Majda, A.; Wietecha-Posłuszny, R.; Mendys, A.; Wójtowicz, A.; Łydżba-Kopczyńska, B. Hyperspectral imaging and multivariate analysis in the dried blood spots investigations. Appl. Phys. A 2018, 124, 312. [Google Scholar] [CrossRef] [Green Version]
  23. Cheng, J.H.; Jin, H.; Xu, Z.; Zheng, F. NIR hyperspectral imaging with multivariate analysis for measurement of oil and protein contents in peanut varieties. Anal. Methods 2017, 9, 6148–6154. [Google Scholar] [CrossRef]
  24. Cheng, J.H.; Nicolai, B.; Sun, D.W. Hyperspectral imaging with multivariate analysis for technological parameters prediction and classification of muscle foods: A review. Meat Sci. 2017, 123, 182–191. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, B.; Li, J.; Fan, S.; Huang, W.; Zhao, C.; Liu, C.; Huang, D. Hyperspectral imaging combined with multivariate analysis and band math for detection of common defects on peaches (Prunus persica). Comput. Electron. Agric. 2015, 114, 14–24. [Google Scholar] [CrossRef]
  26. Brelstaff, G.J.; Parraga, A.; Troscianko, T.; Carr, D. Hyperspectral camera system: Acquisition and analysis. In Proceedings of Geographic Information Systems, Photogrammetry, and Geological/Geophysical Remote Sensing, Paris, France, 17 November 1995. [Google Scholar]
  27. Foster, D.H.; Amano, K.; Nascimento, S.M.C. Time-lapse ratios of cone excitations in natural scenes. Vis. Res. 2016, 120, 45–60. [Google Scholar] [CrossRef] [PubMed]
  28. Arad, B.; Ben-Shahar, O. Sparse Recovery of Hyperspectral Signal from Natural RGB Images. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  29. Nascimento, S.M.C.; Ferreira, F.P.; Foster, D.H. Statistics of spatial cone-excitation ratios in natural scenes. J. Opt. Soc. Am. A 2002, 19, 1484–1490. [Google Scholar] [CrossRef]
  30. Foster, D.H.; Amano, K.; Nascimento, S.M.C.; Foster, M.J. Frequency of metamerism in natural scenes. J. Opt. Soc. Am. A 2006, 23, 2359–2372. [Google Scholar] [CrossRef]
  31. Nascimento, S.M.C.; Amano, K.; Foster, D.H. Spatial distributions of local illumination color in natural scenes. Vis. Res. 2016, 120, 39–44. [Google Scholar] [CrossRef] [PubMed]
  32. Eckhard, J.; Eckhard, T.; Valero, E.M.; Nieves, J.L.; Contreras, E.G. Outdoor scene reflectance measurements using a Bragg-grating-based hyperspectral imager. Appl. Opt. 2015, 54, D15–D24. [Google Scholar] [CrossRef]
  33. Chakrabarti, A.; Zickler, T. Statistics of Real-World Hyperspectral Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’11), Cambridge, MA, USA, 6–13 November 2011. [Google Scholar]
  34. Nguyen, R.M.H.; Prasad, D.K.; Brown, M.S. Training-Based Spectral Reconstruction from a Single RGB Image. In Computer Vision–ECCV 2014(ECCV’14); Springer-Verlag: Zürich, Switzerland, 2014; pp. 186–201. [Google Scholar]
  35. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Hordley, S.; Finalyson, G.; Morovic, P. A Multi-Spectral Image Database and its Application to Image Rendering Across Illumination. In Proceedings of the 3rd International Conference on Image and Graphics (ICIG’04), Hong Kong, China, 18–20 December 2004. [Google Scholar]
  37. Brainard, D.H. Hyperspectral Image Data. Available online: http://color.psych.upenn.edu/hyperspectral (accessed on 25 June 2018).
  38. Hyperspectral Images of Illustrated Manuscripts. Available online: http://personalpages.manchester.ac.uk/staff/d.h.foster/Hyperspectral_Images_of_Illustrated_Manuscripts.html (accessed on 26 April 2018).
  39. Mirhashemi, A. Introducing spectral moment features in analyzing the SpecTex hyperspectral texture database. Mach. Vis. Appl. 2018, 29, 415–432. [Google Scholar] [CrossRef]
  40. Le Moan, S.; George, S.T.; Pedersen, M.; Blahová, J.; Hardeberg, J.Y. A database for spectral image quality. In Proceedings of Image Quality and System Performance XII, San Francisco, CA, USA, 8 February 2015. [Google Scholar]
  41. Noviyanto, A.; Abdullah, W.H. Honey Dataset Standard Using Hyperspectral Imaging for Machine Learning Problems. In Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 August–2 September 2017. [Google Scholar]
  42. Zacharopoulos, A.; Hatzigiannakis, K.; Karamaoynas, P.; Papadakis, V.M.; Andrianakis, M.; Melessanaki, K.; Zabulis, X. A method for the registration of spectral images of paintings and its evaluation. J. Cult. Heritage 2018, 29, 10–18. [Google Scholar] [CrossRef]
  43. Nouri, M.; Gorretta, N.; Vaysse, P.; Giraud, M.; Germain, C.; Keresztes, B.; Roger, J.M. Near infrared hyperspectral dataset of healthy and infected apple tree leaves images for the early detection of apple scab disease. Data Brief 2018, 16, 967–971. [Google Scholar] [CrossRef] [PubMed]
  44. Hirvonen, T.; Orava, J.; Penttinen, N.; Luostarinen, K.; Hauta-Kasari, M.; Sorjonen, M.; Peiponen, K.E. Spectral image database for observing the quality of Nordic sawn timbers. Wood Sci. Technol. 2014, 48, 995–1003. [Google Scholar] [CrossRef]
  45. Skauli, T.; Farrell, J. A Collection of Hyperspectral Images for Imaging Systems Research. In Proceedings of the Digital Photography IX, Burlingame, CA, USA, 4 February 2013. [Google Scholar]
  46. Shin, H.; Reyes, N.H.; Barczak, A.L.; Chan, C.S. Colour Object Classification Using the Fusion of Visible and Near-Infrared Spectra. In PRICAI: Trends in Artificial Intelligence; Zhang, B.T., Orgun, M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 498–509. [Google Scholar]
  47. Steiner, H.; Schwaneberg, O.; Jung, N. Advances in Active Near-Infrared Sensor Systems for Material Classification. In Imaging and Applied Optics Technical Papers; Optical Society of America: Washington, DC, USA, 2012; p. ITu2C.2. [Google Scholar]
  48. Guifang, W.; Hai, M.; Xin, P. Identification of Varieties of Natural Textile Fiber Based on Vis/NIR Spectroscopy Technology. In Proceedings of the IEEE Advanced Information Technology, Electronic and Automation Control Conference, Chongqing, China, 19–20 December 2015. [Google Scholar]
  49. Horgan, B.H.; Cloutis, E.A.; Mann, P.; Bell, J.F. Near-infrared spectra of ferrous mineral mixtures and methods for their identification in planetary surface spectra. Icarus 2014, 234, 132–154. [Google Scholar] [CrossRef]
  50. Lehtonen, J.; Parkkinen, J.; Jaaskelainen, T. Optimal sampling of color spectra. J. Opt. Soc. Am. A 2006, 23, 2983–2988. [Google Scholar] [CrossRef]
  51. HySpex VNIR-1800. Available online: https://www.hyspex.no/products/vnir_1800.php (accessed on 26 April 2018).
  52. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  53. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef] [Green Version]
  54. Illumination Technologies Inc. 3900e DC Regulated ER Lightsource. Available online: http://bit.ly/IT3900e (accessed on 21 May 2018).
  55. SG-3051 SphereOptics Diffuse Reflectance Tile. Available online: http://sphereoptics.de/wp-content/uploads/2014/03/Zenith_Product-Brochure.pdf (accessed on 21 May 2018).
  56. Lapray, P.J.; Thomas, J.B.; Gouton, P. A Database of Spectral Filter Array Images that Combine Visible and NIR. In Computational Color Imaging Workshop; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10213, pp. 187–196. [Google Scholar]
  57. Deger, F.; Mansouri, A.; Pedersen, M.; Hardeberg, J.Y.; Voisin, Y. A sensor-data-based denoising framework for hyperspectral images. Opt. Express 2015, 23, 1938–1950. [Google Scholar] [CrossRef] [PubMed]
  58. Gillis, D.; Bowles, J.H.; Winter, M.E. Dimensionality Reduction in Hyperspectral Imagery. In Proceedings of Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, Orlando, FL, USA, 23 September 2003; Volume 5093. [Google Scholar]
  59. Deborah, H.; Richard, N.; Hardeberg, J.Y. A Comprehensive Evaluation of Spectral Distance Functions and Metrics for Hyperspectral Image Processing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3224–3234. [Google Scholar] [CrossRef]
  60. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417. [Google Scholar] [CrossRef]
  61. Jolliffe, I. Principal Component Analysis. In Wiley StatsRef: Statistics Reference Online; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2014. [Google Scholar]
  62. Wang, J.; Chang, C.I. Independent component analysis-based dimensionality reduction with applications in hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1586–1600. [Google Scholar] [CrossRef] [Green Version]
  63. Zhang, L.; Zhong, Y.; Huang, B.; Gong, J.; Li, P. Dimensionality Reduction Based on Clonal Selection for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4172–4186. [Google Scholar] [CrossRef]
  64. Zhang, T.; Tao, D.; Yang, J. Discriminative Locality Alignment. In Proceedings of the 10th European Conference on Computer Vision, Marseille, France, 12–18 October 2008. [Google Scholar]
  65. Khodr, J.; Younes, R. Dimensionality Reduction on Hyperspectral Images: A Comparative Review Based on Artificial Datas. In Proceedings of the 4th International Congress on Image and Signal Processing, Shanghai, China, 15–17 October 2011. [Google Scholar]
  66. Hardeberg, J.Y. On the Spectral Dimensionality of Object Colours. In Conference on Colour in Graphics, Imaging, and Vision; Society for Imaging Science and Technology: Springfield, VA, USA, 2002; pp. 480–485. [Google Scholar]
  67. Dusselaar, R.; Paul, M. Hyperspectral image compression approaches: Opportunities, challenges, and future directions: discussion. J. Opt. Soc. Am. A 2017, 34, 2170–2180. [Google Scholar] [CrossRef] [PubMed]
  68. Berry, M.W.; Browne, M.; Langville, A.N.; Pauca, V.P.; Plemmons, R.J. Algorithms and applications for approximate nonnegative matrix factorization. Comput. Stat. Data Anal. 2007, 52, 155–173. [Google Scholar] [CrossRef] [Green Version]
  69. Li, J.; Bioucas-Dias, J.M.; Plaza, A.; Liu, L. Robust Collaborative Nonnegative Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6076–6090. [Google Scholar] [CrossRef] [Green Version]
  70. Bao, W.; Li, Q.; Xin, L.; Qu, K. Hyperspectral Unmixing Algorithm Based on Nonnegative Matrix Factorization. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016. [Google Scholar]
  71. Alsam, A.; Connah, D.; Hardeberg, J. Multispectral Imaging: How Many Sensors Do We Need? J. Imaging Sci. Technol. 2006, 50, 45–52. [Google Scholar]
  72. Verma, A.; Tyagi, D.; Sharma, S. Recent advancement of LBP techniques: A survey. In Proceedings of the 2016 International Conference on Computing, Communication and Automation (ICCCA), Noida, India, 29–30 April 2016. [Google Scholar]
  73. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef] [Green Version]
  74. Palm, C. Color texture classification by integrative Co-occurrence matrices. Pattern Recogn. 2004, 37, 965–976. [Google Scholar] [CrossRef]
  75. Ojala, T.; Pietikainen, M.; Harwood, D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994. [Google Scholar]
  76. Mäenpää, T.; Pietikainen, M.; Viertola, J. Separating Color and Pattern Information for Color Texture Discrimination. In Proceedings of the Object Recognition Supported by User Interaction for Service Robots, Quebec City, QC, Canada, 11–15 August 2002. [Google Scholar]
  77. Mihoubi, S.; Losson, O.; Mathon, B.; Macaire, L. Multispectral Demosaicing Using Pseudo-Panchromatic Image. IEEE Trans. Comput. Imaging 2017, 3, 982–995. [Google Scholar] [CrossRef] [Green Version]
  78. Cusano, C.; Napoletano, P.; Schettini, R. Combining local binary patterns and local color contrast for texture classification under varying illumination. J. Opt. Soc. Am. A 2014, 31, 1453–1461. [Google Scholar] [CrossRef] [PubMed]
  79. Lee, S.H.; Choi, J.Y.; Ro, Y.M.; Plataniotis, K.N. Local Color Vector Binary Patterns From Multichannel Face Images for Face Recognition. IEEE Trans. Image Process. 2012, 21, 2347–2353. [Google Scholar] [CrossRef] [PubMed]
  80. Trussell, H.J.; Kulkarni, M.S. Sampling and processing of color signals. IEEE Trans. Image Process. 1996, 5, 677–681. [Google Scholar] [CrossRef] [PubMed]
  81. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [Google Scholar] [CrossRef]
  82. L301kc-Basler L300. Available online: https://www.baslerweb.com/en/products/cameras/line-scan-cameras/l300/l301kc/ (accessed on 14 May 2018).
Figure 1. HySpex VNIR-1800 line scan acquisition process and illustration of the cross-track distortion along the x-axis. The camera acquires hyperspectral lines of pixels (x-axis). The hyperspectral image is obtained by translating the object along the y-axis.
Figure 2. Size of the surface element P_s(x) (in mm) observed by each pixel of coordinate x. The values were computed using Equation (3), and the values of θ(x) were provided by the camera manufacturer.
Figure 3. Vector field of both the x-axis and y-axis corrections. The origin of each vector is (h(x, y), v(x, y)), and its end is (x, y). For clarity, only vectors whose final coordinates (x, y) are multiples of 100 are shown here.
Figure 4. Images of the 49th channel of the acquisition of the grid made of perfect squares without corrections (a), after sensor correction (b) and after sensor and affine correction (c). Without corrections, the pixel resolution increases w.r.t. the x coordinate (the sides of the squares are longer on the right of the image than on the left). After corrections, the pixel resolution is constant w.r.t. the x coordinate (the sides of the squares remain constant). Moreover, the shear distortion is reduced after sensor and affine correction, since the squares have right angles. The red arrows and blue rectangles have been added as overlays to give a better view of the distances and angles. Note that they are identical (same lengths) from one image to another.
Figure 5. sRGB rendering of two corrected radiance images. The subimage (red dashed square) inside the SG-3051 reflectance tile is selected for illumination estimation and the texture region (green solid square) is retained as final reflectance image support.
Figure 6. Spectral reflectances of the 24 patches of a Macbeth ColorChecker computed from the data acquired by the HySpex VNIR-1800 (averaged over 200 × 200 pixels for each patch). The color of each curve is the sRGB color of the corresponding patch in the ColorChecker.
Figure 7. Average illumination computed over the 112 radiance images.
Figure 8. Standard deviations of spectral reflectances for 200 × 200 pixels of each gray patch of a Macbeth ColorChecker.
Figure 9. Reconstructed sRGB images from the food, stone, vegetation and wood categories of the dataset.
Figure 10. Reconstructed sRGB images from the textile category of the dataset.
Figure 11. Results of the principal component analysis (PCA) on each image from the five categories of the hyperspectral dataset. The x-axis represents the effective dimension, while the y-axis shows the number of images with that effective dimension. Most of the spectral data in our dataset can be represented by 1, 2 or 3 dimensions while retaining 95% of the variance.
Figure 12. Number of components (x-axis) required to retain the variance percentage (y-axis). (a) shows the results for the whole dataset, while (b) shows the results for each of the five categories in the dataset. “Food” category images require more dimensions for their representation due to the variety of objects in that category, while those in the “stone” category require the least number of dimensions since the surfaces exhibit similar spectral properties.
Figure 13. NMF-based decomposition of the vegetation_03_green image into three components. Each component reflects the spectral composition of the leaf, showing the spectra associated with the chlorophyll, the veins and the color of the leaf.
Figure 14. NMF-based decomposition of the textile_22_cyan image into three components. The extracted components correspond to areas under direct illumination, shadowed areas and the different materials present in the textile sample.
Figure 15. Normalized covariance between each couple of channels over the whole dataset. Values range between 31 % (black) and 100 % (white). The two red lines are separated by seven channels, and inside the red lines, the covariance is above 95%.
Figure 16. Classification accuracy reached by each Local Binary Pattern (LBP) descriptor with respect to the number of channels, L. LBP [73] (Equation (11)), Opponent Band LBP (OBLBP) [76] (Equation (12)), Luminance-Opponent Band Angle LBP (L-LCCLBP) [78] (Equation (15)), L-OBALBP [79] (Equation (16)). LBP-PCA refers to the marginal LBP operator applied to the PCA channels. Classification performances were stabilized after L = 10 channels: about 92 % accuracy for LBP, 98 % accuracy for OBLBP and L-OBALBP, 81 % accuracy for L-LCCLBP and 95 % accuracy for LBP-PCA. The best performance was given by OBLBP with a 98.76 % accuracy for L = 18 channels. The markers refer to the classification accuracies of the descriptors applied on the L = 3 channels of simulated RGB images from the three spectral sensitivities of the tri-CCD Basler L301kc color camera [82].
Figure 17. Texture image samples: textile_14_orange (top) and textile_14_blue (bottom). From left to right: sRGB rendering, 93rd channel (centered at 699 nm) and 1st PCA channel. The 93rd channels of the two textures have different spatial properties and intensities, so the textures are easily discriminated, whereas their 1st PCA channels have similar spatial properties and intensities, so they can barely be discriminated.
Table 1. Summary of the main characteristics of existing hyperspectral image datasets and of our proposed dataset (N/A means the information is not available from the dataset description).
Dataset | Nature of Images | Camera | No. of Images | Spatial Resolution | Spectral Range (nm) | No. of Channels
Bristol (1994) [26] | Outdoor scenes of vegetation | Pasecon integrating camera tube | 29 | 256 × 256 | 400–700 | 31
Natural scenes (2002) [29] | Urban and rural scenes | Pulnix TM-1010 with VariSpec tunable birefringent filter | 8 | 1024 × 1024 | 410–710 | 31
Natural scenes (2004) [30] | Reflectance images of natural scenes | Hamamatsu C4742-95-12ER with VariSpec liquid crystal tunable filter | 8 | 1344 × 1024 | 400–720 | 33
Natural scenes (2015) [31] | Radiance images of natural scenes | Hamamatsu C4742-95-12ER with VariSpec liquid crystal tunable filter | 30 | 1344 × 1024 | 400–720 | 33
Time-Lapse (2015) [27] | Images of 5 natural scenes taken at different times | Hamamatsu C4742-95-12ER with VariSpec liquid crystal tunable filter | 33 | 1344 × 1024 | 400–720 | 33
ICVL (2016) [28] | Urban and rural scenes | Specim PS Kappa DX4 | 201 | 1392 × 1300 | 400–1000 | 519
Harvard (2011) [33] | Indoor and outdoor images | Nuance FX, CRI Inc. | 50 | 1392 × 1040 | 420–720 | 31
UGR (2015) [32] | Outdoor scenes | Photon V-EOS | 14 | 1392 × 1040 | 400–1000 | 61
CAVE (2008) [35] | Materials and objects | Apogee Alta U260 with VariSpec liquid crystal tunable filter | 32 | 512 × 512 | 400–700 | 31
East Anglia (2004) [36] | Everyday objects placed in a viewing booth | Applied Spectral Imaging Spectracube camera | 22 | Various resolutions | 400–700 | 31
SIDQ (2015) [40] | Pseudo-flat objects | HySpex VNIR-1600 | 9 | 500 × 500 | 400–1000 | 160
Brainard (1998) [37] | Indoor scenes | Kodak KA4200 CCD with Optical Thin Films filter | 9 | 2000 × 2000 | 400–700 | 31
Nordic sawn timbers (2014) [44] | Wood samples | N/A | 107 | 320 × 800 | 300–2500 | 440
Scien (2012) [45] | Various objects, scenes and faces | N/A | 106 | Various resolutions | Various | Various
Paintings (2017) [42] | Paintings | IRIS II filter wheel camera | 23 | 2560 × 2048 | 360–1150 | 23
Ancient manuscripts (2012) [38] | Printed documents | Hamamatsu C4742-95-12ER with VariSpec liquid crystal tunable filter | 3 | 1344 × 1024 | 400–700 | 33
Apple tree leaves (2018) [43] | Near infrared images of healthy and infected leaves | HySpex SWIR-320m-e | N/A | Various resolutions | 960–2490 | 256
SpecTex (2017) [39] | Textiles | ImSpector V8 | 60 | 640 × 640 | 400–780 | 39
Honey (2017) [41] | Honey samples | Surface Optic Corporation SOC710-VP | 32 | 520 × 696 | 400–1000 | 126
Singapore (2014) [34] | Outdoor images of natural objects, man-made objects, buildings | Specim PFD-CL-65-V10E | 66 | Various resolutions | 400–700 | 31
HyTexiLa (our proposed dataset) | Textured materials from 5 different categories | HySpex VNIR-1800 | 112 | 1024 × 1024 | 400–1000 | 186
Table 2. Mean and standard deviation of |γ − π/2| (in radians) on 10 grid images with different rotations, before and after applying the affine correction of Equation (8), where γ is the angle between a “horizontal” and a “vertical” line. It can be seen that the difference between γ and π/2 is strongly reduced (i.e., γ is close to π/2) after the affine correction and is lower than the angular resolution (1.2 × 10⁻³ rad).
Correction | Value of |γ − π/2|
After sensor correction | 0.0167 (0.0098)
After sensor and affine correction | 0.0006 (0.0003)
Table 3. Means and standard deviations of pixel resolutions (in pixels·mm⁻¹) measured between each marking of a ruler, for a single-channel image resulting from an average of 19 spectral channels, without corrections, after sensor correction, and after sensor and affine corrections. It can be seen that the pixel resolutions are similar, on average, before and after applying the sensor correction, but the standard deviation along the x-axis is reduced by a factor of 2, which means that the sensor correction effectively decreases the cross-track distortion. Note that the affine correction does not impact the pixel resolution along the x-axis.
Correction | Ruler along x-axis
Without corrections | 18.30 (1.21)
After sensor correction | 18.28 (0.60)
After sensor and affine correction | 18.31 (0.61)
