Article

Toward Practical Spectral Imaging beyond a Laboratory Context

Munsell Color Science Laboratory, Program of Color Science, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY 14623-5604, USA
* Author to whom correspondence should be addressed.
Heritage 2022, 5(4), 4140-4160; https://doi.org/10.3390/heritage5040214
Submission received: 31 October 2022 / Revised: 4 December 2022 / Accepted: 7 December 2022 / Published: 13 December 2022

Abstract

A portable, user-friendly multispectral imaging system assembled almost entirely of common photography equipment and open-source software has been developed. The system serves as an outreach and educational tool for demonstrating and promoting scientific imaging as a more routine practice in the contexts of cultural heritage digitization and photography. These efforts are aimed primarily at institutions where advanced imaging technologies are not already found, and where funding and expertise may limit access to commercial, bespoke multispectral imaging solutions that are currently available. The background and theory that were shared in tutorials given during the system’s initial testing campaign are detailed here. Testing was carried out in one-day on-site visits to six cooperating institutions of different sizes and collection types in the northeast USA. During these visits, the imaging system was presented, and the benefit of collecting spectral data using low barrier-to-entry capture and processing methods relative to conventional imaging methods was discussed. Imaging was conducted on site on selected collections objects to showcase the current capabilities of the system and to inform ongoing improvements to the setup and processing. This paper is a written companion piece to the visits, as a source of further detail and context for the two-light imaging system that was described and demonstrated.

1. Introduction

1.1. Spectral Imaging in a Laboratory Context

Since its adoption by the heritage science community several decades ago [1], spectral imaging has matured into an analytical technique regularly utilized in noninvasive scientific studies of cultural heritage objects. Spectral imaging is differentiated from conventional color imaging by the spectral resolution of the imaging system, where spectral imaging techniques are those that collect more bands—as few as five or six, to hundreds—than typical three-channel RGB imaging. Spectral imaging techniques may be further described as multispectral or hyperspectral according to their relative spectral resolution (tens versus hundreds of image bands), which is largely determined by the image band selection method (e.g., filter- or illumination-based band selection versus diffraction grating wavelength selection). The use of specific terms is fluid and often context-dependent [2]. Taken together, spectral imaging techniques have been proven to be highly effective means of performing noninvasive, spatially resolved reflectance spectroscopy in the UV-VIS-IR range, which is particularly useful for identifying and mapping the distribution of artists’ materials [3,4]. This information is often used to build a better understanding of artists’ working methods, material degradation, and prior restoration campaigns, which together provide an understanding of an object’s physical construction and history.
Despite their impressive capabilities, the cost and complexity of these techniques mean that spectral imaging remains largely siloed in institutions with the monetary resources and technical expertise necessary to support analytical imaging. Presently, spectral imaging is carried out mainly with specialized instruments. Many systems that have been developed for lab-based research or in situ field studies are experimental [5,6,7]. They are typically applied in one-off technical studies, and are not intended or appropriate for adoption within routine imaging workflows. Additionally, such systems are not necessarily designed with user-friendliness in mind, because they are built to purpose, and their design and operation are aligned with the lab-based context of their development. They more closely resemble analytical equipment than familiar cameras, and therefore sacrifice intuitive handling in favor of technical utility. Finally, these complex instruments are expensive to build, maintain, and operate, necessitating monetary support that smaller institutions simply cannot provide.

1.2. Toward Spectral Imaging in a Studio Workflow

These barriers to widespread adoption motivated an interest in testing practical measures to put this technology into the hands of more users. The priority was physically demonstrating the feasibility of adopting studio-practical spectral imaging strategies during on-site visits to several institutions. The institutions were of diverse size and geographic location, and each had different experience with, approaches to, and goals for imaging collections. Dialogue with the smaller, more resource-strapped institutions was of particular interest, as their feedback is the most critical for determining how best to bring this technology to more users moving forward.
More specifically, these visits were aimed at making both the concept and practice of multispectral imaging more approachable for non-experts. It was introduced as an advanced imaging technique, rather than a scientific analysis, and its advantages over conventional color capture for more routine photography and digitization projects were emphasized. These advantages, particularly for color-accurate reproduction [1,8,9], have been recognized for some time and have been summarized previously [10,11]. They include:
  • the elimination of the need for subjective visual editing in post-production,
  • the expansion of archives beyond a single set of viewing, illuminating, and observer conditions (CIE illuminant D50 and 1931 standard observer for ICC color managed archives),
  • the ability to re-render an image under any desired lighting condition to inform curation, exhibition, scholarship, and conservation, and
  • the prevention of undesirable metameric matches of materials used in conservation treatments.
These advantages were conveyed during the course of the visits, which loosely consisted of
  • a tutorial about the theory and practice of conventional color imaging versus the proposed two-light spectral imaging method,
  • a demonstration of two-light imaging, showing how it can be carried out using mainly cameras and equipment that are commonly found in photography studios and are already familiar to cultural heritage imaging professionals, and
  • discussion and questions around these activities.
These efforts to bring awareness and access to spectral imaging are the culmination of recent research focused on developing more user-friendly, affordable strategies for spectral capture and processing [12,13,14,15,16,17]. They are based on a history of developments for spectral imaging and color-accurate archiving of cultural heritage that have come out of the RIT Munsell Color Science Lab’s Studio for Scientific Imaging and Archiving of Cultural Heritage. The capture method of two-light imaging, to be introduced herein, is an LED-based spin on the original filter-based dual-RGB imaging technique [18,19]. With respect to other spectral imaging techniques, dual-RGB and two-light imaging are notable for their efficiency and low complexity. Imaging with either technique is fast, because three spectral channels are collected per capture, and it is also accessible, because it can be performed with familiar, commercially available DSLR and mirrorless cameras that are already found in photography studios.
The means of processing the spectral data collected via two-light imaging had to be equally accessible and intuitive for the prospect of adopting this technique to be compelling. Toward this end, custom image processing software was developed alongside two-light imaging. The software, called Beyond RGB, is a cross-platform-compatible, open-source, freely available, stand-alone application that is fully operable through an intuitive graphical interface. It is designed to make processing the image sets as simple as possible. A large portion of the on-site visits was dedicated to demonstrating the capabilities of the first release of Beyond RGB and soliciting advice for improvements for future releases. Its high-level functionality and role in the overall workflow will be described below, while a more complete discussion can be found elsewhere [15].

1.3. Obstacles and Opportunities

The spectral image sets that are collected with two-light imaging are built up differently than those collected using more traditional multispectral imaging processes, which capture a single channel at a time sequentially over the wavelength range of interest. Conveying the differences between these two strategies and clearly describing the theory and practice of two-light imaging was a significant challenge. Furthermore, the most well-known application of spectral imaging is deriving reflectance properties of materials and mapping their distribution. In contrast, the emphasis in this work is on its color accuracy benefits. Color calibration based on multispectral information is both a less-common and less-intuitive use for spectral image data.
The details of two-light capture for highly color-accurate reproduction have been discussed elsewhere [16], and will be further described in the Background section below. Together with the Methods section that follows, these provide the information given in the on-site tutorials: the benefits of two-light imaging with respect to conventional color imaging, and an introduction to the setup, capture, and processing involved in the two-light imaging workflow. The Results and Discussion section summarizes the general experience of these real-world tests, impressions and feedback gathered from conversations with the host institutions, and actionable suggestions that will be implemented to improve the utility of the system moving forward.

2. Background

2.1. Dual-RGB Imaging

Given the time-consuming processes and expensive equipment necessary for analytical-level spectral imaging, an alternative, less complex approach is desirable in the context of studio photography. By the early 2000s, the Munsell Color Science Lab had been experimenting with spectral-based color reproduction for a few years. After a number of studies and design iterations to optimize filter choice and characteristics [20,21], the research culminated in the introduction of a spectral imaging method in which a set of two optimized filters is paired with a three-channel RGB camera, enabling the capture of five spectral channels between two filtered RGB images [22]. This spectral imaging strategy came to be called dual-RGB imaging, and was later commercialized in the Sinar Color To Match (CTM) system [23].
At the time of its conception, the average color accuracy of dual-RGB, 0.9 ΔE00 for both calibration and verification data, was superior to that of other contemporary, more complex approaches [22]. These included a system utilizing a 31-band liquid-crystal tunable filter and monochrome sensor (average ΔE00 of 1.5) [24] and a 13-band system pairing interference filters with a monochrome sensor (average ΔE00 of 1.5) [25]. This was an important result for the relevance of dual-RGB within studio photography, where obtaining color-accurate images objectively and efficiently, thus overcoming the need for subjective, time-consuming visual editing in post-processing, is an attractive selling point.
A high-level description of the capture and calibration strategies behind the dual-RGB technique is given below. A more detailed account of the history of its development and nuances in its implementation is available elsewhere [19], as well as image quality considerations related to color transformations [26].
Dual-RGB imaging is so named for the practice of capturing two images taken through a blue-green filter and a yellow filter placed in front of the lens of an RGB color filter array (CFA) camera. The spectral transmittances of such a pair of colored glass filters are plotted in Figure 1. To increase throughput in the long red region of the visible spectrum above ~650 nm, improving spectral estimation accuracy in this region, the internal IR cut filter of commercial RGB CFA cameras can be removed. A third, visible bandpass filter can then also be used to tune the spectral transmission more desirably while still limiting it to the visible range (Figure 1, black line).
Figure 2 illustrates the result of pairing the blue-green and yellow filters with an IR-modified RGB CFA camera. The spectral sensitivities of the red, green, and blue channels are tuned differently by each colored filter, resulting in modified red, green, and blue channel sensitivities that differ between the two captures and can be combined to create a six-channel spectral image stack.
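As a concrete illustration, a minimal Python sketch of this step is shown below; it is not the authors' implementation, and it assumes the two filtered captures have already been dark-corrected, flat-fielded, and registered as linear RGB arrays, so that building the stack amounts to a channel-wise concatenation.

```python
import numpy as np

def stack_dual_rgb(img_bg: np.ndarray, img_y: np.ndarray) -> np.ndarray:
    """Combine two registered, linear RGB captures (H x W x 3) taken through
    the blue-green and yellow filters into a single six-channel image stack."""
    if img_bg.shape != img_y.shape:
        raise ValueError("The two captures must be registered to the same size.")
    # Channel order: R_bg, G_bg, B_bg, R_y, G_y, B_y
    return np.concatenate([img_bg, img_y], axis=-1)
```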
The dual-RGB image stack can be calibrated for both colorimetric and spectral reflectance characterization via mathematical transformations that relate the six-channel spectral information ($R_{bg}$, $G_{bg}$, $B_{bg}$, $R_{y}$, $G_{y}$, $B_{y}$) of an imaged calibration target to its measured reference values. For colorimetric calibration, these reference values are the measured CIEXYZ tristimulus values of the patches ($X_{ref}$, $Y_{ref}$, $Z_{ref}$), and for spectral reflectance calibration, they are the measured spectral reflectance curves of the patches ($R_{\lambda_1}$ to $R_{\lambda_n}$).
Equation (1) shows the relationship between the dual-RGB camera signals and CIEXYZ reference values. It is defined by the transformation matrix $M_{color}$, which, written out in full, is a 3-by-6 matrix, as illustrated in Equation (2). The coefficients of $M_{color}$ are defined through iterative optimization, with the goal of minimizing the average CIEDE2000 color difference [27] between the colors estimated from the dual-RGB camera signals (averaged over a region inside each patch of the calibration target) and the target’s measured reference values.
$$\begin{bmatrix} X_{ref} \\ Y_{ref} \\ Z_{ref} \end{bmatrix} = M_{color} \begin{bmatrix} R_{bg} \\ G_{bg} \\ B_{bg} \\ R_{y} \\ G_{y} \\ B_{y} \end{bmatrix} \quad (1)$$
where
$$M_{color} = \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & m_{1,4} & m_{1,5} & m_{1,6} \\ m_{2,1} & m_{2,2} & m_{2,3} & m_{2,4} & m_{2,5} & m_{2,6} \\ m_{3,1} & m_{3,2} & m_{3,3} & m_{3,4} & m_{3,5} & m_{3,6} \end{bmatrix} \quad (2)$$
After optimizing the transformation between six-channel camera signals and reference target values, colorimetric calibration is complete. To obtain the color-calibrated image, the transformation matrix is applied to the six-channel image stack, followed by the desired linear color space matrix (e.g., ProPhotoRGB) and nonlinear encoding function, which completes the rendering of a highly color-accurate image from the dual-RGB image stack.
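The colorimetric optimization can be sketched in Python as follows. This is a minimal illustration rather than the authors' implementation: it assumes per-patch mean camera signals and measured CIEXYZ values are already available as arrays (hypothetical inputs), uses the colour-science and SciPy packages, and lets a Nelder-Mead refinement stand in for whichever iterative optimizer was actually used.

```python
import numpy as np
from scipy.optimize import minimize
import colour  # colour-science package, used for Lab conversion and CIEDE2000

def fit_color_matrix(cam6: np.ndarray, xyz_ref: np.ndarray) -> np.ndarray:
    """Fit the 3x6 colorimetric matrix M_color by minimizing the mean CIEDE2000
    difference over the calibration target.

    cam6    : (n_patches, 6) mean dual-RGB camera signals per patch
    xyz_ref : (n_patches, 3) measured CIEXYZ reference values per patch
    """
    # Assumes XYZ on a 0-100 scale; illuminant handling is omitted for brevity.
    lab_ref = colour.XYZ_to_Lab(xyz_ref / 100.0)

    def mean_de00(m_flat):
        M = m_flat.reshape(3, 6)
        xyz_est = cam6 @ M.T
        lab_est = colour.XYZ_to_Lab(np.clip(xyz_est, 0, None) / 100.0)
        return np.mean(colour.delta_E(lab_ref, lab_est, method="CIE 2000"))

    # Seed with a least-squares solution, then refine with the perceptual metric.
    M0, *_ = np.linalg.lstsq(cam6, xyz_ref, rcond=None)  # (6, 3)
    result = minimize(mean_de00, M0.T.ravel(), method="Nelder-Mead",
                      options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
    return result.x.reshape(3, 6)
```

The fitted matrix would then be applied pixel by pixel to the six-channel stack before the color space and encoding transforms described above.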
The parallel process of spectral reflectance calibration based on the six-channel image stack follows similar logic. The dual-RGB camera signals of the imaged calibration target are related to measured spectral reflectance through the spectral reflectance transformation matrix $M_{S}$, as shown in Equation (3). $R$ is an n-by-1 vector, where n is determined by the sampling of the reference spectral reflectance data (36 is typical: 380 nm to 730 nm in 10 nm increments). It follows that $M_{S}$ has dimensions n-by-6. Again, iterative optimization is used to define the transformation matrix coefficients. In this case, the minimized value is the root-mean-square error (RMSE) between the measured spectral reflectance and that estimated from the average dual-RGB camera signals sampled from the calibration target.
$$\begin{bmatrix} R_{\lambda_1} \\ \vdots \\ R_{\lambda_n} \end{bmatrix} = M_{S} \begin{bmatrix} R_{bg} \\ G_{bg} \\ B_{bg} \\ R_{y} \\ G_{y} \\ B_{y} \end{bmatrix} \quad (3)$$
where
$$M_{S} = \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & m_{1,4} & m_{1,5} & m_{1,6} \\ \vdots & & & & & \vdots \\ m_{n,1} & m_{n,2} & m_{n,3} & m_{n,4} & m_{n,5} & m_{n,6} \end{bmatrix}$$
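A corresponding sketch of the spectral reflectance calibration is given below. Here an unconstrained least-squares fit is used, which minimizes the same RMSE objective in closed form, whereas the text above describes an iterative optimization; the inputs are again hypothetical per-patch arrays.

```python
import numpy as np

def fit_spectral_matrix(cam6: np.ndarray, refl_ref: np.ndarray) -> np.ndarray:
    """Fit the n x 6 spectral matrix M_S relating dual-RGB camera signals to
    measured reflectance (e.g., n = 36 samples, 380-730 nm in 10 nm steps).

    cam6     : (n_patches, 6) mean camera signals per target patch
    refl_ref : (n_patches, n) measured spectral reflectance per patch
    """
    # Least squares minimizes the RMSE objective directly for a linear map.
    M_S, *_ = np.linalg.lstsq(cam6, refl_ref, rcond=None)  # (6, n)
    return M_S.T  # (n, 6), so reflectance is estimated as M_S @ camera_signal

def estimate_reflectance(M_S: np.ndarray, pixel6: np.ndarray) -> np.ndarray:
    """Estimate a reflectance curve for a six-channel pixel or ROI mean."""
    return M_S @ pixel6
```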
Dual-RGB was developed to provide both accurate color reproduction and spectral estimation, and importantly, it can easily be carried out using familiar cameras, studio lighting, inexpensive filters, and common color targets. As such, it served as the main influence for developing two-light imaging, which similarly builds up a six-channel spectral image cube, but does so through the use of tuned LED lighting, rather than filtered illumination.

2.2. Two-Light Imaging

Tunable, multichannel LED light sources are now widely available and more affordable, and offer a number of advantages over filter-based wavelength selection. Forgoing the need to screw on, slide, or otherwise shift filters into place reduces physical movement of the system that leads to registration error between channels. Different filters may also have varying optical properties that affect registration, and/or lead to large differences in the optimal exposure between shots, which can affect image quality. For a small investment, computer-controlled multichannel LED lights offer an elegant and flexible solution that can be integrated into an automated capture routine. Furthermore, using narrowband LEDs minimizes the object’s exposure to heat and other extraneous radiation [28,29].
Tunable light-based spectral imaging strategies that have been demonstrated and are currently in use for cultural heritage imaging have mainly paired narrowband LEDs with a monochrome sensor [28,30,31], echoing the common monochrome sensor + filter multispectral imaging approach [32]. There have been some other recent studies in which a three-channel RGB sensor is used [33,34]. This is the approach used here, in the two-light imaging technique, in which a six-channel spectral stack is created from the combination of two differently illuminated RGB captures. With the end goal of more approachable, everyday access to this kind of advanced imaging, taking advantage of the inherent three-channel nature of RGB CFA cameras provides an opportunity to explore utilizing familiar professional-level cameras for more studio-friendly spectral imaging.
Early experiments in this research verified that dual-RGB imaging can be carried out effectively with a prosumer camera and inexpensive colored glass filters [12], and then confirmed that a mere two-fold increase in the number of image channels used for colorimetric calibration, from the three channels of conventional RGB to six spectral channels, significantly improves color rendering accuracy [13]. Other experiments have explored trade-offs between filter-based and light-based wavelength selection (i.e., dual-RGB versus two-light) [14,16], and the effects of camera and lens choice on color accuracy with the two-light technique [17].
The two-light spectral imaging method was developed using ten-channel tunable LED light sources. The spectral power distributions of the ten channels are plotted in Figure 3a. A pair of lighting conditions, each consisting of a combination of three LEDs from the overall set of ten, was created, and their spectral power distributions are plotted in Figure 3b. The LEDs that make up each lighting condition were selected through an exhaustive search optimization, in which the color accuracy of all possible pairs of three-LED combinations was assessed, and the optimal combination was computationally identified for a given camera and calibration target. The details of this process have been discussed elsewhere [16]. Note that the LED system currently utilized is a more advanced equipment option than is necessary based on the results of the optimization process; only a subset of the ten channels is needed to create the two lighting conditions. These findings are informing future research toward purpose-built, lower-cost lighting for two-light imaging.
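The exhaustive search can be illustrated with a short sketch. The scoring function, which would simulate camera signals under each candidate lighting pair and return the mean ΔE00 for the chosen camera and target, is left here as a hypothetical callable; the loop structure is the point of the example.

```python
from itertools import combinations

def search_lighting_pair(led_channels, score_pair):
    """Exhaustively search all pairs of three-LED mixtures drawn from the
    available channels and return the pair with the lowest score.

    led_channels : iterable of channel identifiers (e.g., the ten peak wavelengths)
    score_pair   : callable taking (combo_1, combo_2) and returning a simulated
                   mean delta-E00 for a given camera and calibration target
                   (hypothetical; stands in for the full spectral simulation)
    """
    triples = list(combinations(led_channels, 3))  # C(10, 3) = 120 mixtures
    best_pair, best_score = None, float("inf")
    for i, combo_1 in enumerate(triples):
        for combo_2 in triples[i + 1:]:  # unordered pairs of distinct mixtures
            score = score_pair((combo_1, combo_2))
            if score < best_score:
                best_pair, best_score = (combo_1, combo_2), score
    return best_pair, best_score
```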
The capture and calibration of two-light spectral image stacks follow those of dual-RGB imaging, where the objective is to modify the spectral sensitivity of the red, green, and blue channels differently under each lighting condition, resulting in sensitivities that are shifted relative to each other between the two captures and can be combined to make a six-channel spectral image stack. Figure 4 illustrates the six-channel sensitivity of the same IR-modified commercial camera as in Figure 2, but paired with the two lighting conditions from Figure 3b rather than the two filters.
Note that because the search optimization used for LED selection is defined around the color rendering accuracy of a calibration target, the optimal lighting conditions depend not only on the spectral characteristics of the camera sensitivity, but also on those of the specific calibration target chosen. In other words, the optimal lighting conditions will differ based on the camera and target used to define them. The RGB camera sensitivity shown in Figure 4 is that of an IR-modified Sony α7R III, which has been used as the model prosumer camera throughout this research. The ideal lighting conditions plotted in Figure 3 were created for it using a Digital Color Checker SG as the calibration target.
The high-level capture and calibration strategies for carrying out two-light imaging follow those of dual-RGB imaging described above, where colorimetric and spectral calibration are carried out on the six-channel spectral image stack that is the combination of RGB image data collected under each of the two optimized lighting conditions. The colorimetric and spectral reflectance transforms are determined using CIEDE2000- and RMSE-guided matrix optimization over a reference color target. The final image can then be transformed, encoded, and rendered to a high degree of color accuracy according to these calibrations.

2.3. The Spectral Advantage

The CIEDE2000 matrix optimization method of colorimetric calibration, described above, is a versatile and adaptable method of characterizing camera color with respect to human perception [35]. In conventional imaging workflows, this process is what may be more familiarly called color profiling. It can be extended to the calibration of imaging systems with more than three channels, as was done here, with the definition of a larger transformation matrix. The larger matrix includes coefficients that characterize the contribution of the signal captured in the additional channels in the estimation of the CIEXYZ values for a given color in the image. While it may seem more intuitive to build up this transformation between quantities of the same dimensions, i.e., RGB camera signals to trichromatic human vision, even in conventional three-channel imaging, a single camera channel does not map one-to-one to a single tristimulus value—hence the need to define a 2D transformation matrix. The signals in each camera channel contribute in different amounts to the estimation of each tristimulus value. The quantity of each channel’s contribution is defined by the corresponding matrix coefficients. Increasing camera channels increases the degrees of freedom in the transformation, which improves estimation accuracy.
It follows, then, that the six-channel two-light imaging color calibration outperforms conventional RGB color calibration because of the increased amount of information used to build the transformation matrix. The additional coefficients in the matrix provide a more nuanced characterization of the visible spectrum, and operate as a means of better tuning the estimated CIEXYZ values. This is illustrated in Figure 5, in which the color accuracy of the Digital Color Checker SG target is simulated as rendered from conventional RGB versus two-light imaging data captured with the same camera. The heat maps indicate the color-coded ΔE00 color difference between the measured and rendered color of each patch of the target. Upon visual inspection alone, the conventional RGB rendering contains more light-green- and yellow-coded patches, indicating that more patches are rendered with a larger color difference than in the two-light rendering. This is summarized well by comparing the mean and 90th percentile ΔE00 values across all the patches, reported beneath each map. Those of the two-light method are far smaller; the 90th percentile value of 0.4 ΔE00 is especially notable, as it indicates that the large majority of the color differences are below half of a ΔE00 unit, which is all but insignificant in the context of noticeable color differences in digital images [36].
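For reference, the summary statistics reported beneath the heat maps can be computed directly from the per-patch color differences, as in the small sketch below (assuming the ΔE00 values have already been calculated).

```python
import numpy as np

def summarize_de00(de00_per_patch: np.ndarray) -> dict:
    """Summarize per-patch CIEDE2000 values the way the target heat maps are
    reported: the mean and the 90th percentile across all patches."""
    return {
        "mean": float(np.mean(de00_per_patch)),
        "p90": float(np.percentile(de00_per_patch, 90)),
    }
```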
Finally, an extreme example illustrating the shortcomings of conventional color imaging is provided in Figure 6. A painting of the night sky that was imaged and rendered using both conventional (Figure 6a) and two-light (Figure 6b) techniques exhibits large differences in the appearance of a color found mainly along the horizon. To the eye, this paint looks blue and is not visually different from the rest of the dark blue sky. This is confirmed by renderings of the measured color of spots from both regions (Figure 6d). However, the measurements reveal the presence of different blue pigments in the two regions: cobalt blue at the horizon, and phthalo blue elsewhere. These two blue pigments, while visually similar, have strikingly different spectral reflectance shapes (Figure 6c). The six-channel sampling, particularly at longer visible wavelengths, leads to a large improvement in rendering this color accurately, reducing the 16.5 ΔE00 color difference between the measured and rendered color exhibited by conventional RGB imaging down to 3.0 ΔE00 (Figure 6d).

3. Capture and Image Processing Methods

3.1. Imaging System

3.1.1. Equipment

The main camera used throughout the development, testing, and demonstration of two-light imaging was the Sony α7R III, a 42 MP mirrorless digital camera equipped with pixel-shift multi-shot capabilities. This enables the direct capture of full-frame RGB images, thus bypassing the need for computational demosaicing of the CFA pattern. The particular camera used had its internal IR filter removed, as is evident in the plots of its spectral sensitivity in Figure 2a and Figure 4a, which show that the sensitivity of both the green and red channels extends above 700 nm. The camera was always controlled via computer tether with Sony’s Imaging Edge Remote.
The ten-channel tunable LED light sources paired with the camera were designed by LEDMotive [37], and have the spectral radiance characteristics described and plotted above (Figure 3a). The particular LEDs in the lights were previously chosen based on research into the optimal ten-channel set for a cultural heritage imaging system following a traditional sequential capture approach with a monochrome sensor [31]. Each light contains the LEDs and an internal integrating sphere for diffusion of the light within a small housing (16 × 12 × 12 cm). Externally, parabolic reflectors are attached around the port to shape the illumination and mimic studio strobe fixtures. The lights were also computer controlled using MATLAB scripts that enabled independent control of each LED channel.
Because this research focused on demonstrating the color accuracy of the two-light technique, color targets were a central part of the workflow for both calibration and verification purposes. Those used most commonly included the Digital Color Checker SG (X-Rite), the Next Generation Target V2 (Avian Rochester), and the Artist Paint Target (Image Science Associates). The Digital Color Checker SG (CCSG) is a familiar target that is already widely used in museum studio photography, and so it works well as an accessible tool in this workflow (Figure 5a). The Next Generation Target (NGT) was originally designed at the request of the Library of Congress to address concerns related to durability, sensitivity to lighting geometry, as well as more appropriate color gamut sampling for heritage materials [38] (Figure 7, left). Finally, the Artist Paint Target (APT), which was originally developed in the Munsell Color Science Lab, is particularly useful as a materially relevant target containing real artist paint mixtures [39] (Figure 7, right).

3.1.2. Setup

Imaging was carried out in a copy stand configuration, using 45°/0° illumination/detection geometry to mimic the bi-directional detection of the reference spectrophotometer and reduce geometric error in the transfer between spectral reflectance measurement and imaging. The typical image set collected included two-light image pairs of (1) the desired color calibration target(s), (2) a flat field, (3) the dark current, with the lens cap covering the lens, and (4) the object(s). The shutter speeds used for each of the two lighting conditions were set by sampling the histograms of the calibration target’s white patches and “exposing to the right” (ETTR) at the base ISO while avoiding clipping, in order to maximize use of the sensor’s dynamic range [40]. All images were captured as photometrically linear RAW files.
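As an informal illustration of the ETTR check, the sketch below inspects linear RAW values sampled from the target's white patch and reports how much of the sensor range is used; the saturation level and thresholds are camera-dependent assumptions, not values from this study.

```python
import numpy as np

def check_ettr(white_patch: np.ndarray, saturation: float, clip_margin: float = 0.99):
    """Check an exposure-to-the-right setting from the brightest (white) patch
    of the calibration target in a linear RAW capture.

    white_patch : linear pixel values sampled from the white patch
    saturation  : sensor clipping level for the RAW data (camera-dependent)
    Returns the fraction of the available range used and a suggestion string.
    """
    peak = np.percentile(white_patch, 99.9)  # robust near-maximum
    fraction = peak / saturation
    if fraction >= clip_margin:
        return fraction, "reduce exposure: white patch is at or near clipping"
    if fraction < 0.8:  # illustrative threshold
        return fraction, "increase exposure: sensor range is underused"
    return fraction, "exposure acceptable for ETTR"
```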
Because the only other physical components needed aside from the camera and lights are a tripod and light stands to mount them on, and a laptop for tethered computer control, the entire kit is easily packed into several Pelican cases for travel. This level of portability, as well as the simplicity of the setup, was key in enabling its demonstration in a range of different environments, including a small office, a classroom, a conservation lab, and a few imaging studios. A photo of the imaging setup is included in Figure 8.

3.2. Image Processing Software: Beyond RGB

Several software tools have been developed alongside the dual-RGB and other multispectral imaging projects in the Munsell Color Science Lab for processing these image data [41]. However, they found limited use outside the research lab environment due to concerns with stability and ease of use. This was the motivation to create a software application to accompany two-light imaging that would be more easily adopted elsewhere and more intuitive to use. A team of senior software engineering undergraduates was tasked with building such a solution, and the result was the development of Beyond RGB. It is an application that facilitates implementation of the two-light technique by providing a platform to process the resulting image data in a user-friendly way. Furthermore, it is cross-platform compatible and locally installable on both Windows and macOS systems, and it is open source and freely available at the project GitHub repository [42]. The form and function of Beyond RGB are summarized below; further details can be found elsewhere [15].
Beyond RGB carries out the colorimetric and spectral calibrations on two-light spectral image sets “under the hood” of a simple graphical user interface (Figure 9). It takes as input the RAW image set, including flat fields, darks, and the images of the targets and object. After the user provides information about the identity and spatial location of the color target, pre-processing and calibration proceed automatically. Pre-processing involves flat-fielding and dark current corrections to account for nonuniformities in the scene lighting and sensor [43] and to remove the black level of the camera, as well as spatial registration of the spectral channels to correct for chromatic aberration distortions between the two lighting conditions. These operations are all completed prior to carrying out the colorimetric and spectral calibration procedures outlined in the Background section above.
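A simplified sketch of the flat-fielding and dark-correction step is shown below. It is not the Beyond RGB implementation itself, and it assumes the object, flat-field, and dark frames were captured under identical settings.

```python
import numpy as np

def preprocess_channel(raw: np.ndarray, flat: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Apply dark-current subtraction and flat-field correction to one capture,
    mirroring (in simplified form) the pre-processing step described above.

    raw, flat, dark : linear images of the object, a uniform flat field, and
                      the dark frame, all captured under the same settings.
    """
    numerator = raw.astype(np.float64) - dark
    denominator = flat.astype(np.float64) - dark
    corrected = np.divide(numerator, denominator,
                          out=np.zeros_like(numerator),
                          where=denominator > 0)
    # Rescale by the mean of the dark-corrected flat field so that a uniformly
    # lit white area maps back to roughly its original signal level.
    return corrected * np.mean(denominator)
```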
There is a simple image viewer built into Beyond RGB, where the calibrated image appears when calibration is complete. The viewing window enables preliminary visual inspection of the results, and includes a summary of the colorimetric data that characterizes the accuracy of the calibration. Additionally, there is a built-in spectral picker that allows the user to select regions of interest from which to display and export estimated reflectance spectra based on the spectral reflectance transform (Figure 10). The main focus of the application is encouraging the capture of spectral master files and enabling the calibration and export of the color-managed RGB image. However, the ability to perform spectral estimation may be of more interest in future versions of the software that expand upon the more familiar applications of spectral imaging, such as pigment characterization and mapping. At present, the viewer and spectral picker are most useful for identifying regions that may warrant further study by complementary techniques.

4. Results and Discussion

4.1. Institutions Visited

This final section summarizes the general successes and lessons learned over the course of the demonstration and testing visits. The institutions visited differed in size, kinds of collections, and geographic location, and included
  • the Cary Graphic Arts Collection at the RIT Libraries (Rochester, NY, USA),
  • the National Cryptologic Museum (Annapolis Junction, MD, USA),
  • the US Army Heritage and Education Center (Carlisle, PA, USA),
  • the Museum of Modern Art (New York, NY, USA),
  • the Art Conservation Department at Buffalo State College (Buffalo, NY, USA), and
  • the George Eastman Museum (Rochester, NY, USA).
A majority of the visits included an introductory presentation on the technique and its advantages over conventional color imaging, followed by a demonstration of the system while imaging some collections objects.

4.2. Example Results

The series of images below (Figure 11, Figure 12, Figure 13 and Figure 14) are representative selections from those captured of collections objects during the visits. They were chosen to show examples of the range of materials and colors imaged. They include the marbled-paper inside cover and the title page of an embroidered book from the Cary Graphic Arts Collection, a painted flag and a felt uniform patch, both from the US Army Heritage and Education Center collection, and a platinotype photograph from a study collection at RIT that was included in a previous art reproduction case study [44]. All images were calibrated using the Digital Color Checker SG. The flag, patch, and photograph were captured with both conventional color imaging and two-light imaging to enable qualitative comparison between the two. For the book, only the two-light image rendering is available; it is included as an example of the diverse material that was imaged. Additionally, spot measurements made with a handheld spectrophotometer were collected on a few regions of interest on each object to provide ground truth against which the rendered colors are compared quantitatively. For each spot, the measured, conventional color imaging (where available), and two-light imaging CIELAB values are reported, as well as the corresponding ΔE00 color difference between the measured and rendered colors.
Unsurprisingly, two-light imaging outperformed conventional color imaging across the board, showing smaller color differences with respect to the true color in the sampled regions. The largest of these color differences are visually obvious when comparing the less accurate to the more accurate rendering. The images of the flag were included as an example of one of these most noticeably different renderings, in the blue background and stars. The Red 1 patch is included because the conservators commented that the olive green color is one that they have noticed in particular that does not photograph well. Two-light imaging appears to have reduced this problem. The platinotype photograph is included because it proved to be a particularly difficult object to reproduce well by museum imaging methods that were in place in a study carried out a decade ago [44]. The photograph has a very flat reflectance curve across the visible range that is not well characterized by three-channel sampling, but two-light imaging better samples this curve shape, and produces comparatively less of the pinkish cast evident in the conventional color rendering.
It is worth noting again that all of these images were calibrated using the Digital Color Checker SG. For the two-light images, the color-calibrated mean ΔE00 across the target did not exceed 1 ΔE00. A typical ΔE00 target heat map result for one of the calibrations is provided in Figure 15, in which the mean ΔE00 is 0.8. However, this mean level of color accuracy of 0.8 ΔE00 is not achieved in the regions of interest checked with spot measurements. This is a somewhat expected result when using a commercial target that is neither made of the same materials nor a close representation of the gamut of colors in the real objects. This shows the value that custom, materially specific, color-curated targets can add to a workflow.
As an aside, the cause of the single outlier value of 5.0 ΔE00 for patch C6 in Figure 15 is unknown. There is a possibility that this particular patch of the semi-glossy target caught a glare during capture, throwing off its calibration. The presence of outliers like these illustrates the value of reporting the mean and 90th percentile ΔE00 values as more representative statistics describing the reproduction of the vast majority of the patches in the set.

4.3. Feedback and Future Work

The demonstrated adherence to copy stand lighting geometry was a common source of concern. This setup provides consistency with 45°/0° spectrophotometric target measurement geometry. While it results in technical accuracy, this approach does not leave room for using more complicated illumination setups that better highlight the character of, for example, a painting’s surface texture, nor does it reproduce what one might expect an artwork to look like under gallery lighting conditions. A previous case study found that for conventional color imaging, as long as exposure is set correctly, straying from copy stand lighting geometry had little effect on final color accuracy [45]. This remains to be verified for two-light imaging, but opens the door to possibilities for more creative lighting setups, or even integration with multi-light 2.5D imaging techniques for simultaneously recording color and surface texture [46].
Among the most valuable discussions were those related to improvements and additional features and functions in Beyond RGB. This was a particularly important aspect to gather feedback on, as the project timeline for the development and release of the first version of the software did not allow much time for user testing of the software ahead of the visits. Positive impressions of the software included the simplicity of interacting with it through a graphical user interface, the rendered image viewer and spectral picker as a first-pass inspection tool, and the open-source, free-to-use nature of the project. There were many helpful comments made about small ways to refine the graphical user interface to improve functionality and flow, like drag-and-drop file import, auto-populating fields based on expected naming conventions, and automated target detection. Some of the most important, big picture suggestions for improvements included:
  • Batch processing. At present, a single calibration run of the software handles a single object image at a time, requiring time-consuming resetting of the calibration parameters for each run. Batch processing would enable the user to calibrate an entire set of images captured under the same calibration conditions much more efficiently.
  • A project website. Currently, the project is hosted entirely on its GitHub repository. This can be difficult to navigate for novice users who only need access to the installation packages, Wiki, and user guide. These distributables would be better housed on a separate, dedicated website.
  • An Adobe Camera Raw (ACR) RAW to TIFF workflow. The current version of Beyond RGB was tested with and supports Canon, Fujifilm, Nikon, and Sony RAW file formats. For cases where input images are not in one of these formats, it also supports uncompressed, unprocessed linear TIFFs created from RAW files. As ACR is a popular tool for working with RAW images, it was requested that specific guidelines for setting the correct parameters to obtain unprocessed TIFFs from RAWs using ACR be provided.
  • Material mapping. The ability to estimate pixel-by-pixel reflectance spectra is a feature that already exists in Beyond RGB (Figure 10). Building in the capability to group and visualize the distribution of reflectance spectra having similar spectral features would be the first steps toward exploring the applicability of two-light imaging to the more typical tasks of spectral imaging, like pigment characterization and mapping.
These suggestions and many more, both big and small, were compiled into a master wish list of features that are currently at the center of ongoing updates to Beyond RGB. They will be available in the next version of the software and documentation, which is anticipated in Spring 2023.
The feedback and impressions from the visits were overall positive. The needs and priorities for imaging at each institution differ, and conveying the benefits that two-light imaging could offer in these varied contexts was successful. The fact that two-light imaging involves a specific set of capture and processing procedures is useful in and of itself. For instance, this could aid in providing a more structured approach to documentation imaging in conservation labs in which current practice is less rigorously controlled and outdated cameras are used. Similarly, in institutions without dedicated imaging personnel or space, this is an efficient, portable means of capturing accurate color that largely removes time-consuming, subjective post-production corrections. Two-light imaging is also a useful teaching tool for introducing not just the concepts and practice of spectral imaging, but also human vision, color appearance, and quality control at academic and teaching institutions.
The system that was demonstrated is a proof-of-concept prototype. Presently, it still utilizes LED light sources that are likely out of budget for all but larger institutions. These visits were critical to emphasize to the community that, with growing interest, there is potential for future development of simplified lighting solutions that would be a less costly option for carrying out two-light imaging. Alternatively, tunable broadband LED lamps, such as the Broncolor F160, are becoming a more common studio lighting option. It is not outside the realm of possibility that a dual-purpose fixture, integrating narrowband channels into such an existing LED system to meet the needs of both conventional and two-light imaging, could be an attractive option. Regardless, the interest expressed in pursuing such ideas was encouraging, and shows that there is momentum for continuing to improve access to these advanced imaging practices across more institutions.

5. Conclusions

The two-light imaging technique has been demonstrated as an effective capture method for color-accurate rendering and spectral archiving that is practical for integration with routine studio photography workflows. The descriptions of the influential history and development of dual-RGB imaging and the method of transforming six-channel spectral data to a calibrated color-accurate rendering provide context for two-light imaging, and further supplement the information from the on-site demonstrations. Traveling with the system to several institutions with varied imaging capabilities and goals was the first true test of its flexibility beyond the research and development environment. It also provided the opportunity to describe and demonstrate the advantages of the two-light imaging technique to diverse audiences, and to gather impressions about how the system might fit into existing imaging workflows. This feedback is informing updates and improvements to the system moving forward, which will focus especially on adding features to the Beyond RGB software and developing lower-cost lighting solutions optimized specifically for two-light imaging. These will also influence continued efforts to collaborate with institutions to provide accessible education around spectral imaging, and to communicate the ways it might enhance current frameworks of cultural heritage digitization and archiving.

Author Contributions

Conceptualization, O.R.K. and S.P.F.; methodology, O.R.K. and S.P.F.; software, O.R.K.; validation, O.R.K.; formal analysis, O.R.K.; investigation, O.R.K.; resources, S.P.F.; data curation, O.R.K.; writing—original draft preparation, O.R.K.; writing—review and editing, O.R.K. and S.P.F.; visualization, O.R.K.; supervision, S.P.F.; project administration, S.P.F.; funding acquisition, S.P.F. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this research was provided by the Max Saltzmann Endowed Fellowship in the Color Science of Cultural Heritage at RIT.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank their generous hosts at each of the institutions visited for demonstration and testing: Steve Galbraith, RIT Cary Graphic Arts Collection (Rochester, NY, USA); Rob Simpson, National Cryptologic Museum (Annapolis Junction, MD, USA); Jordan Ferraro, Cynthia Blechl, and Geoffrey Manglesdorf, US Army Heritage and Education Center (Carlisle, PA, USA); Robert Kastler, Denis Doorley, and Emile Askey, Museum of Modern Art (New York, NY, USA); Jiuan Jiuan Chen and Patrick Ravines, SUNY Buffalo Department of Art Conservation (Buffalo, NY, USA); and Elizabeth Chiang, George Eastman Museum (Rochester, NY, USA). Additional thanks to Yosi Pozeilov, Los Angeles County Museum of Art (Los Angeles, CA, USA), for testing and providing valuable feedback about using and improving Beyond RGB.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saunders, D.; Cupitt, J. Image Processing at the National Gallery: The VASARI Project. Natl. Gallery Tech. Bull. 1993, 14, 72–85. [Google Scholar]
  2. Striova, J.; Dal Fovo, A.; Fontana, R. Reflectance imaging spectroscopy in heritage science. Riv. Del Nuovo Cim. 2020, 43, 515–566. [Google Scholar] [CrossRef]
  3. Jung, A. Hyperspectral Imaging. In Digital Techniques for Documenting and Preserving Cultural Heritage; Bentkowska-Kafel, A., MacDonald, L., Eds.; Arc Humanities Press: Leeds, UK, 2018; pp. 217–219. [Google Scholar]
  4. Delaney, J.K.; Dooley, K.A.; van Loon, A.; Vandivere, A. Mapping the pigment distribution of Vermeer’s Girl with a Pearl Earring. Herit. Sci. 2020, 8, 1–16. [Google Scholar] [CrossRef] [Green Version]
  5. Liang, H.; Lucian, A.; Lange, R.; Cheung, C.; Su, B. Remote spectral imaging with simultaneous extraction of 3D topography for historical wall paintings. ISPRS J. Photogramm. Remote Sens. 2014, 95, 13–22. [Google Scholar] [CrossRef] [Green Version]
  6. Cucci, C.; Delaney, J.K.; Picollo, M. Reflectance Hyperspectral Imaging for Investigation of Works of Art: Old Master Paintings and Illuminated Manuscripts. Accounts Chem. Res. 2016, 49, 2070–2079. [Google Scholar] [CrossRef] [PubMed]
  7. George, S.; Hardeberg, J.Y.; Linhares, J.; Macdonald, L.; Montagner, C.; Nascimento, S.; Picollo, M.; Pillay, R.; Vitorino, T.; Webb, E.K. A Study of Spectral Imaging Acquisition and Processing for Cultural Heritage. In Digital Techniques for Documenting and Preserving Cultural Heritage; Bentkowska-Kafel, A., MacDonald, L., Eds.; Arc Humanities Press: Leeds, UK, 2018; Chapter 8; pp. 141–158. [Google Scholar] [CrossRef] [Green Version]
  8. Martinez, K.; Cupitt, J.; Saunders, D.R. High-resolution colorimetric imaging of paintings. In Proceedings of the Cameras, Scanners, and Image Acquisition Systems Conference, San Jose, CA, USA, 31 January–5 February 1993; Volume 1901, pp. 25–36. [Google Scholar] [CrossRef] [Green Version]
  9. Ribés, A.; Schmitt, F.; Pillay, R.; Lahanier, C. Calibration and spectral reconstruction for CRISATEL: An art painting multispectral acquisition system. J. Imaging Sci. Technol. 2005, 49, 563–573. [Google Scholar]
  10. Berns, R.S. Color-Accurate Image Archives Using Spectral Imaging. In Scientific Examination of Art: Modern Techniques in Conservation and Analysis; The National Academies Press: Washington, DC, USA, 2005; Chapter 8; pp. 105–119. [Google Scholar]
  11. Wyble, D.R. Spectral Imaging: A Non-technical Introduction to What It Is, and Why You Should Care. In Proceedings of the Archiving 2021 Conference (Short Course Notes), Online, 8–24 June 2021. [Google Scholar]
  12. Kuzio, O.R.; Berns, R.S. Color and Material Appearance Imaging and Archiving Using a Sony Alpha a7R III Camera; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2018. [Google Scholar]
  13. Kuzio, O.; Farnand, S. Color Accuracy-Guided Data Reduction for Practical LED-based Multispectral Imaging. In Proceedings of the Archiving 2021 Conference, Online, 8–24 June 2021; pp. 65–70. [Google Scholar] [CrossRef]
  14. Kuzio, O.; Farnand, S. LED-based versus Filter-based Multispectral Imaging Methods for Museum Studio Photography. In Proceedings of the International Colour Association Conference 2021, Online, 30 August–3 September 2021; pp. 639–644. [Google Scholar]
  15. Kuzio, O.R.; Farnand, S.P. Beyond RGB: A spectral image processing software application for cultural heritage studio photography. In Proceedings of the Archiving 2022 Conference, Online, 7–10 June 2022; pp. 95–100. [Google Scholar] [CrossRef]
  16. Kuzio, O.R.; Farnand, S.P. Comparing Practical Spectral Imaging Methods for Cultural Heritage Studio Photography. J. Comput. Cult. Herit. 2022; just accepted. [Google Scholar] [CrossRef]
  17. Kuzio, O.; Farnand, S. Simulating the Effect of Camera and Lens Choice for Color Accurate Spectral Imaging of Cultural Heritage Materials. In Proceedings of the International Colour Association (AIC) Conference 2022, Online, 13–16 June 2022; p. TBD. [Google Scholar]
  18. Chen, T.; Berns, R.S. Measuring the Total Appearance of Paintings Using a Linear Source, Studio Strobes, and a Dual-RGB Camera; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2012. [Google Scholar]
  19. Berns, R.S. Theory and Practice of Dual-RGB Imaging; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2016. [Google Scholar]
  20. Imai, F.H.; Taplin, L.A.; Day, E.A. Comparative Study of Spectral Reflectance Estimation Based on Broad-Band Imaging Systems; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2003. [Google Scholar]
  21. Berns, R.S.; Taplin, L.A.; Nezamabadi, M.; Zhao, Y. Modifications of a Sinarback 54 Digital Camera for Spectral and High-Accuracy Colorimetric Imaging: Simulations and Experiments; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2004. [Google Scholar]
  22. Berns, R.S.; Taplin, L.A.; Nezamabadi, M.; Mohammadi, M.; Zhao, Y. Spectral imaging using a commercial colour-filter array digital camera. In Proceedings of the Fourteenth Triennial ICOM-CC Meeting, The Hague, The Netherlands, 12–16 September 2005; pp. 743–750. [Google Scholar]
  23. Sinar. Color To Match. Available online: https://sinar.swiss/products/cameras/ctm/ (accessed on 8 October 2022).
  24. Berns, R.S.; Taplin, L.A.; Imai, F.H.; Day, E.A.; Day, D.C. A Comparison of Small-Aperture and Image-Based Spectrophotometry of Paintings. Stud. Conserv. 2005, 50, 253–266. [Google Scholar] [CrossRef]
  25. Liang, H.; Saunders, D.; Cupitt, J. A New Multispectral Imaging System for Examining Paintings. J. Imaging Sci. Technol. 2005, 49, 551–562. [Google Scholar]
  26. Berns, R.S. Image Quality Degradation Caused by Color Transformations in Multispectral Imaging—A Practical Review. In Proceedings of the Archiving 2020 Conference, Online, 18–21 May 2020; pp. 60–68. [Google Scholar] [CrossRef]
  27. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
  28. Christens-Barry, W.A.; Boydston, K.; France, F.G.; Knox, K.T.; Easton, R.L., Jr.; Toth, M.B. Camera system for multispectral imaging of documents. In Proceedings of the Sensors, Cameras, and Systems for Industrial/Scientific Applications Conference X, San Jose, CA, USA, 20–22 January 2009; Bodegom, E., Nguyen, V., Eds.; SPIE: Bellingham, WA, USA, 2009; Volume 7249, p. 724908. [Google Scholar] [CrossRef]
  29. France, F.G.; Toth, M.B.; Christens-Barry, W.A.; Boydston, K. Advanced Spectral Imaging for Microanalysis of Cultural Heritage. Microsc. Microanal. 2010, 16, 728–729. [Google Scholar] [CrossRef] [Green Version]
  30. Gibson, A.; Piquette, K.E.; Bergmann, U.; Christens-Barry, W.; Davis, G.; Endrizzi, M.; Fan, S.; Farsiu, S.; Fitzgerald, A.; Griffiths, J.; et al. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms. Herit. Sci. 2018, 6, 7. [Google Scholar] [CrossRef]
  31. Paray, J.N. LED Selection for Spectral (Multispectral) Imaging. Master’s Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2020. [Google Scholar]
  32. Berns, R.S. Digital color reconstructions of cultural heritage using color-managed imaging and small-aperture spectrophotometry. Color Res. Appl. 2019, 44, 531–546. [Google Scholar] [CrossRef]
  33. Shrestha, R.; Hardeberg, J.Y. An experimental study of fast multispectral imaging using LED illumination and an RGB camera. In Proceedings of the 23rd Color and Imaging Conference, Darmstadt, Germany, 19–23 December 2015; pp. 36–40. [Google Scholar]
  34. Shrestha, R.; Hardeberg, J.Y. Assessment of Two Fast Multispectral Systems for Imaging of a Cultural Heritage Artifact—A Russian Icon. In Proceedings of the 14th International Conference on Signal Image Technology and Internet-Based Systems, Las Palmas de Gran Canaria, Spain, 26–29 November 2018; pp. 645–650. [Google Scholar] [CrossRef]
  35. Fairchild, M.D.; Wyble, D.R.; Johnson, G.M. Matching image color from different cameras. In Proceedings of the Image Quality and System Performance Conference V, San Jose, CA, USA, 28–30 January 2008; Volume 6808. [Google Scholar] [CrossRef]
  36. Stokes, M.; Fairchild, M.D.; Berns, R.S. Precision Requirements for Digital Color Reproduction. ACM Trans. Graph. 1992, 11, 406–422. [Google Scholar] [CrossRef]
  37. Ledmotive Technologies. SPECTRA TUNE LAB: The Light Engine for Scientists. Available online: https://ledmotive.com/stlab/ (accessed on 23 February 2021).
  38. Wyble, D.R. Next generation camera calibration target for archiving. In Proceedings of the Archiving 2017 Conference, Riga, Latvia, 15–18 May 2017; pp. 127–132. [Google Scholar] [CrossRef]
  39. Berns, R.S. Artist Paint Target (APT): A Tool for Verifying Camera Performance; Technical Report; Rochester Institute of Technology: Rochester, NY, USA, 2014. [Google Scholar]
  40. Reichmann, M. Expose Right. 2003. Available online: https://luminous-landscape.com/expose-right/ (accessed on 8 October 2022).
  41. Studio for Scientific Imaging and Archiving of Cultural Heritage: Software. Available online: https://www.rit.edu/science/studio-scientific-imaging-and-archiving-cultural-heritage#software (accessed on 8 October 2022).
  42. Beyond RGB: Initial Releases. 2022. Available online: https://github.com/BeyondRGB/Imaging-Art-beyond-RGB/releases (accessed on 8 October 2022).
  43. Witwer, J.; Berns, R.S. Increasing the versatility of digitizations through post-camera flat-fielding. In Proceedings of the Archiving 2015 Conference, Los Angeles, CA, USA, 19–22 May 2015; pp. 110–113. [Google Scholar]
  44. Frey, F.S.; Farnand, S. Benchmarking Art Image Interchange Cycles; RIT School of Print Media: Rochester, NY, USA, 2011. [Google Scholar]
  45. Geffert, W.S. Transitioning to international imaging standards at the Metropolitan Museum of Art’s Photograph Studio: A case study. In Proceedings of the Archiving 2011 Conference, Salt Lake City, UT, USA, 16–19 May 2011; pp. 205–210. [Google Scholar]
  46. Cox, B.D.; Berns, R.S. Imaging artwork in a studio environment for computer graphics rendering. In Proceedings of the SPIE/IS&T Electronic Imaging Conference, San Francisco, CA, USA, 8–12 February 2015. [Google Scholar] [CrossRef]
Figure 1. Spectral transmittance of blue-green and yellow colored glass filters, and a visible bandpass filter.
Figure 2. (a) Spectral sensitivity of a commercial RGB CFA camera that has had its IR filter removed. (b) Spectral sensitivity of the camera when equipped with a blue-green filter. (c) Spectral sensitivity of the camera when equipped with a yellow filter.
Figure 3. (a) Spectral power distributions of the ten-channel LED lights, labeled by peak wavelength. (b) The spectral power distributions of the pair of lighting conditions, each a mixture of three of the LEDs plotted in (a). For this camera and target, the optimal pair combines the LEDs with peak wavelengths of (1) 450 nm, 525 nm, and 735 nm, and (2) 450 nm, 545 nm, and 735 nm.
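For readers curious how a pair of three-LED mixtures like the one in Figure 3b can be chosen, the sketch below illustrates an exhaustive search over candidate mixtures. It is not the authors' code: only the four peak wavelengths named in the caption come from the figure, the remaining entries in the list are placeholders, and the scoring function is a stand-in for a full simulation of capture, color reconstruction, and ΔE00 evaluation against a reference target.

```python
# Illustrative sketch of an exhaustive search over pairs of three-LED mixtures.
# Peaks other than 450, 525, 545, and 735 nm are placeholders, and the scorer
# below returns arbitrary values in place of a real capture/reconstruction
# simulation; only the search structure is the point here.
from itertools import combinations
import random

led_peaks = [450, 525, 545, 735, 400, 475, 505, 590, 630, 660]  # nm; last six are placeholders

def mean_dE00_for(triplet_1, triplet_2):
    """Placeholder score; a real search would simulate imaging a calibration
    target under both mixtures and return the mean CIEDE2000 error."""
    random.seed(hash((triplet_1, triplet_2)))
    return random.uniform(0.5, 5.0)

triplets = list(combinations(led_peaks, 3))  # 120 candidate three-LED mixtures

# Evaluate every unordered pair of distinct mixtures and keep the best score
best = min(
    ((mean_dE00_for(t1, t2), t1, t2) for t1, t2 in combinations(triplets, 2)),
    key=lambda item: item[0],
)
print(f"best mean dE00 {best[0]:.2f} with mixtures {best[1]} and {best[2]}")
```

With ten channels the search space is small enough (7140 pairs) that brute force is practical; the expensive part in practice is the scoring, not the enumeration.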
Figure 4. (a) Spectral sensitivity of a commercial RGB CFA camera that has had its IR filter removed. (b,c) Spectral sensitivity of the camera when imaging under lighting conditions 1 and 2 (shown in Figure 3b).
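The effective sensitivities in Figure 4b,c arise from weighting each camera channel's sensitivity, wavelength by wavelength, by the spectral power distribution of the LED mixture used for that exposure. A minimal sketch of that weighting is shown below; all curves are synthetic placeholders, since the measured data are not reproduced here.

```python
# Minimal sketch (not the authors' code): effective channel sensitivity under a
# given illumination is the elementwise product of the camera sensitivity and
# the illuminant SPD. All spectra below are synthetic, sampled every 10 nm.
import numpy as np

wavelengths = np.arange(380, 790, 10)  # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical R, G, B sensitivities of an IR-filter-removed camera, shape (3, n)
camera_sensitivity = np.stack([gaussian(600, 50),
                               gaussian(540, 45),
                               gaussian(465, 40)])

# Hypothetical SPD of lighting condition 1 (450 nm + 525 nm + 735 nm LEDs), shape (n,)
spd_condition_1 = gaussian(450, 12) + gaussian(525, 18) + gaussian(735, 15)

# Effective sensitivities under that illumination (elementwise product)
effective_sensitivity_1 = camera_sensitivity * spd_condition_1
print(effective_sensitivity_1.shape)  # (3, 41): three channels, 41 wavelengths
```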
Figure 5. (a) Digital ColorChecker SG. (b) A heat map that color codes the magnitude of the ΔE00 color difference between the measured color of each target patch and the color as rendered using conventional RGB imaging. The mean and 90th percentile ΔE00 values across all of the patches are given below the heat map. (c) The same as (b), but rendered using the two-light imaging technique.
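The summary statistics quoted beneath the heat maps can be reproduced from per-patch CIELAB data with a few lines of code. The sketch below assumes the open-source colour-science Python package and uses placeholder values in place of the actual target measurements.

```python
# Minimal sketch: per-patch CIEDE2000 error between measured (reference) CIELAB
# values and CIELAB values sampled from a rendered image of the target, plus the
# mean and 90th percentile summaries. The arrays here are placeholders.
import numpy as np
import colour

# Hypothetical CIELAB data: one row per target patch (L*, a*, b*)
lab_measured = np.array([[38.2, 13.5, 14.1],
                         [65.7, 18.2, 17.9],
                         [50.1, -4.9, -21.8]])
lab_rendered = np.array([[39.0, 12.8, 13.2],
                         [64.9, 19.0, 18.4],
                         [51.2, -5.5, -20.6]])

# CIEDE2000 color difference for each patch
dE00 = colour.delta_E(lab_measured, lab_rendered, method='CIE 2000')

print(f"mean dE00: {dE00.mean():.2f}")
print(f"90th percentile dE00: {np.percentile(dE00, 90):.2f}")
```

The same calculation underlies the spot-wise comparisons reported in the tables of Figures 11–14.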
Figure 6. A painting of the night sky over a field, imaged and rendered using conventional RGB capture (a) and two-light capture (b). (c) Spectral reflectance measured at the two spots indicated by the green circles in (a,b), indicating the use of two different pigments, cobalt blue (spot 1) and phthalo blue (spot 2), in painting the sky. (d) Comparison of the spot colors rendered from the measured spectral reflectance versus from the RGB and two-light image data, along with the ΔE00 color differences.
Figure 7. Next Generation Target V2 (left) and Artist Paint Target (right).
Figure 8. The two-light imaging setup in a classroom at the George Eastman Museum.
Figure 9. A screenshot of the Beyond RGB graphical user interface illustrating interactive target patch selection.
Figure 10. A screenshot showing the spectral picker built into the Beyond RGB calibrated image viewer.
Figure 11. An image of the marbled paper inside cover and first page of an embroidered book from the Cary Graphic Arts Collection rendered from a two-light spectral capture. A handheld spectrophotometer was used to measure the color at the three locations marked on the image. The CIELAB values from the measurement and the image at each location, and the ΔE00 between them, are reported in the table.
Figure 12. “Flag Mid-20th Century”, 1936–1965, U.S. Army Heritage and Education Center, Carlisle, PA. An image of a flag painted with the insignia of the Chairman of the Joint Chiefs of Staff rendered from a conventional color capture (left) and a two-light spectral capture (right). A handheld spectrophotometer was used to measure the color at the three locations marked on the conventional color rendering. The CIELAB values from the measurement and both images at each location, and the ΔE00 between them, are reported in the table.
Figure 13. “Insignia Mid-20th Century”, 1936–1965, U.S. Army Heritage and Education Center, Carlisle, PA. An image of a wool and felt 1st Infantry Division uniform patch rendered from a conventional color capture (left) and a two-light spectral capture (right). A handheld spectrophotometer was used to measure the color at the three locations marked on the conventional color rendering. The CIELAB values from the measurement and both images at each location, and the ΔE00 between them, are reported in the table.
Figure 14. An image of a historic platinotype photograph from a study collection at RIT rendered from a conventional color capture (left) and a two-light spectral capture (right). A handheld spectrophotometer was used to measure the color at the three locations marked on the conventional color rendering. The CIELAB values from the measurement and both images at each location, and the ΔE00 between them, are reported in the table.
Figure 15. An example of a green-to-red color-coded heat map visualization (left) of the magnitude of the ΔE00 color difference between the reference data and rendered image data for the CCSG (right) that was a typical result for a two-light spectral capture from the on-site visits.
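As an illustration of how such a visualization might be generated, the sketch below draws a green-to-red per-patch error map with matplotlib, assuming the per-patch ΔE00 values have already been computed and arranged in the CCSG's 10 × 14 patch layout; the values shown here are synthetic placeholders.

```python
# Minimal sketch (not the authors' code): render a green-to-red heat map of
# per-patch dE00 values for a 10 x 14 CCSG target layout. The values are
# synthetic placeholders drawn from a gamma distribution.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dE00_grid = rng.gamma(shape=2.0, scale=0.8, size=(10, 14))  # placeholder errors

fig, ax = plt.subplots(figsize=(7, 5))
im = ax.imshow(dE00_grid, cmap='RdYlGn_r', vmin=0, vmax=4)  # green = small error, red = large
fig.colorbar(im, ax=ax, label='dE00')
ax.set_title(f"mean dE00 = {dE00_grid.mean():.2f}, "
             f"90th percentile = {np.percentile(dE00_grid, 90):.2f}")
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```

Clipping the color scale (here at a ΔE00 of 4) keeps a few outlier patches from washing out the rest of the map.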