Article

Drone-Acquired Short-Wave Infrared (SWIR) Imagery in Landscape Archaeology: An Experimental Approach

1 Department of Anthropology, Dartmouth College, Hanover, NH 03755, USA
2 Spatial Archaeometry Lab (SPARCL), Dartmouth College, Hanover, NH 03755, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(10), 1671; https://doi.org/10.3390/rs16101671
Submission received: 8 March 2024 / Revised: 10 April 2024 / Accepted: 14 April 2024 / Published: 9 May 2024
(This article belongs to the Special Issue Applications of Remote Sensing in Landscape Archaeology)

Abstract

Many rocks, minerals, and soil types reflect short-wave infrared (SWIR) light (900–2500 nm) in distinct ways, and geologists have long relied on this property to aid in the mapping of differing surface lithologies. Although surface archaeological features including artifacts, anthrosols, or structural remains also likely reflect SWIR wavelengths of light in unique ways, archaeological applications of SWIR imagery are rare, largely due to the low spatial resolution and high acquisition costs of these data. Fortunately, a new generation of compact, drone-deployable sensors now enables the collection of ultra-high-resolution (<10 cm), hyperspectral (>100 bands) SWIR imagery using a consumer-grade drone, while the analysis of these complex datasets is now facilitated by powerful imagery-processing software packages. This paper presents an experimental effort to develop a methodology that allows archaeologists to collect SWIR imagery using a drone, locate surface artifacts in the resultant data, and identify different artifact types in the imagery based on their reflectance values across the 900–1700 nm spectrum. Our results illustrate both the potential of this novel approach to exploring the archaeological record, as we successfully locate and characterize many surface artifacts in our experimental study, and the challenges in successful data collection and analysis, largely related to current limitations in sensor and drone technology. These findings show that as the underlying hardware sees continued improvement in the coming years, drone-acquired SWIR imagery can become a powerful tool for the discovery, documentation, and analysis of archaeological landscapes.

1. Introduction

Although archaeology is popularly associated with excavation, most archaeological sites—the remnants of past settlements or other activities—are recognized by the presence of artifacts and other features on the ground surface [1,2]. Beyond simply indicating the location of buried remains, analysis of the distribution and type of the surface archaeological record can offer evidence for the various periods of settlement at a given site; reveal the ways in which space was utilized for agriculture, productive practices, or ritual; and suggest patterns in the movement of people, things, and ideas across the landscape [3,4,5,6,7,8]. Fundamentally, most investigations of archaeological landscapes rely on our ability to find, map, and interpret artifacts and features found on the surface [9,10]. Yet, while an emerging suite of technological advances has transformed almost every aspect of contemporary archaeological research, our strategies for locating and mapping humble surface fragments of the past remain largely unchanged. Ultimately, archaeologists must walk slowly while looking at the ground, noting the location of things observable to the human eye.
This paper presents our initial experiments to develop a new method for the documentation and analysis of surface archaeological materials using drone-acquired, hyperspectral, short-wave infrared (SWIR) imagery. Many rocks, minerals, and soils that cannot be differentiated based on how they reflect visible (450–700 nm) or near-infrared (700–900 nm) light can be easily distinguished based on how they reflect longer wavelengths of SWIR light (900–2500 nm). Relying on this principle, geologists commonly use multispectral SWIR satellite imagery to locate and map areas with distinct lithologies. Theoretically, the same approach could be used to identify and characterize surface archaeological features, from architectural remains to individual artifacts. However, the low spatial resolution of existing satellite sensors, combined with the large size and slow framerate of airborne SWIR sensors, has made such an approach impractical until recently.
A new generation of drone-deployable sensors now offers the possibility of collecting SWIR imagery with a spatial resolution of better than 10 cm across hundreds of spectral bands or channels, revealing detailed reflectance spectra of the ground surface [11]. Moreover, drone surveys allow for the collection of imagery at a relatively low cost and under optimal seasonal or ground-cover conditions. In principle, drone-acquired SWIR imagery could therefore be used to identify individual surface artifacts or other small features and to characterize these artifacts based on their reflectance properties, enabling us to map the distribution of artifacts and features over vast areas of the landscape with an efficiency and accuracy that has never before been possible. This paper presents our first efforts, supported by a NASA Space Archaeology grant, to develop this potentially transformative new approach to investigations of the archaeological landscape through a controlled experiment. Results demonstrate the possibilities and challenges of this exciting new technology, highlighting successes in locating and characterizing individual surface artifacts, while also pointing to key areas for continued development.

2. Background

Geologists and other earth scientists have long relied on SWIR imagery derived from the Landsat and ASTER satellite programs to identify and characterize different types of rocks, minerals, and soils across large areas of the landscape [12,13,14,15,16,17,18,19,20]. By collecting samples of different rocks or minerals within a study area, researchers can measure their SWIR reflectance values in a lab setting to create a “library” of spectral profiles, each offering a unique spectral fingerprint. Using remote-sensing classification tools, researchers can then train software to identify any pixels within an image that correspond to these reflectance values, with some limitations derived from the low spectral resolution of most public satellite sensors (Figure 1).
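The logic of spectral-library matching can be sketched in a few lines. The following Python sketch uses invented reflectance values and a simple Euclidean distance (operational tools typically use more robust metrics such as spectral angle or correlation) to illustrate how a pixel spectrum is compared against a library of reference fingerprints:

```python
import numpy as np

def classify_pixel(pixel_spectrum, library):
    """Return the name of the library entry whose spectrum best matches
    the pixel. `library` maps material name -> reference spectrum.
    Euclidean distance is used here purely for illustration."""
    best_name, best_dist = None, np.inf
    for name, ref in library.items():
        dist = np.linalg.norm(pixel_spectrum - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy 4-band "library" of reflectance profiles (values are invented).
library = {
    "limestone": np.array([0.60, 0.55, 0.50, 0.45]),
    "basalt":    np.array([0.10, 0.12, 0.11, 0.10]),
}
print(classify_pixel(np.array([0.58, 0.54, 0.52, 0.44]), library))  # limestone
```

In a real workflow, the same comparison would run over every pixel of the image, with per-class thresholds to leave poor matches unclassified.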
A handful of studies have similarly shown that despite its low spatial and spectral resolution, SWIR imagery from public satellite sensors can be used to detect archaeological sites and features due to the distinct ways in which anthropogenic soils or surface artifacts reflect SWIR wavelengths. For example, ASTER SWIR imagery at a 30 m resolution has been used to locate sites and roadways in northern Mesopotamia [21,22], map historical soil erosion in southern India [23], and model resource procurement in the Andes [24]. Somewhat higher-resolution (3.7 m) eight-band SWIR imagery from the commercial WorldView-3 satellite program has likewise been shown to be an effective tool for archaeological-site and feature detection in the Middle East [25] and on Rapanui/Easter Island [26], but these data are costly to acquire, and the effectiveness of this imagery is highly dependent on the timing of data collection as it relates to localized ground cover. The prospects for the discovery and mapping of archaeologically significant features using higher-resolution SWIR imagery have been suggested by a handful of studies that have utilized aircraft-acquired data [27,28] or used portable spectrometers to illustrate unique reflectance properties of archaeological soils and surface artifacts [29,30,31,32,33]. However, these sensors are too large to be mounted on drones, and conventional aircraft cannot fly low or slow enough to collect imagery at a sub-decimeter resolution.
Archaeology has been revolutionized in recent years by the continuous advancement of drone technology [3,34], and our research builds on these advances. Archaeologists now commonly use visible-light images collected with inexpensive consumer drones for mapping archaeological sites and landscapes [35,36,37], and they are increasingly experimenting with more sophisticated sensors to aid in site and feature detection, including multispectral near-infrared imagery, thermal imagery, and lidar [25,38,39,40,41,42,43,44]. However, until recently, commercially available SWIR sensors were too large to be carried by consumer-grade drones, preventing experimentation with this potentially powerful tool for imaging archaeological landscape features. This study is the first to experiment with the archaeological potential of ultra-high-resolution (<10 cm ground sampling distance [GSD]), hyperspectral (>100 bands) SWIR imagery collected using a consumer-grade drone.

3. Materials and Methods

This project presents our experimental research using a hyperspectral, drone-deployed SWIR sensor, undertaken as part of a larger project exploring applications of SWIR imagery in archaeology [25]. Our goal was to determine whether it would be possible, given current technological constraints, to collect SWIR imagery of sufficient spatial and spectral resolution to recognize and characterize individual artifacts, including ceramics, stone tools, and metals, as well as architectural and other surface remains. We further experimented with various approaches to image analysis and classification in order to automate the detection of individual artifacts. Below, we outline the technical details of the SWIR sensor, the drone and hardware, survey mission planning, our experimental design, and our approach to data processing.

3.1. Drone-Deployable SWIR Sensor

In developing our approach to this study, we evaluated all commercially available SWIR sensors with the potential for deployment on a drone. The best-quality airborne SWIR sensors available today are designed for deployment on piloted aircraft; they are far too heavy to be carried by a consumer-grade drone and too costly for the budgetary constraints of most archaeological research projects. Most lower-cost SWIR sensors are used primarily in stationary lab settings and are not designed for aerial deployment. We ultimately selected the Pika IR+, an aerial SWIR sensor developed by the Montana-based firm Resonon, which collects up to 326 spectral bands in the 900–1700 nm spectral range, with a spectral resolution of up to 5.6 nm [45]. We also considered the Headwall Micro-Hyperspec SWIR sensor, which collects data across a longer spectral range (900–2500 nm) than Resonon’s Pika IR+, but ultimately the significantly lower cost and more compact size of the Pika provided the best balance between cost, sensor size, and spectral resolution.
The Resonon Pika IR+ is a pushbroom sensor, as are most SWIR systems, collecting lines of data up to 640 pixels across. Because the image is formed by the forward movement of the sensor, the resolution of the imagery is a product of the pixel resolution of individual lines, the speed of the drone during data collection, the altitude of the sensor above the ground, and the lens configuration. For aerial deployment, the Pika IR+ sensor is mounted on an aluminum frame, connected to a small flight control computer, a solid-state hard drive for data collection, and a survey-grade Ellipse N IMU. The IMU receives GNSS (GPS) input from an external GPS puck, and the entire system is powered by an external LiPo battery.
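The geometry described above can be made concrete with a short calculation. In this illustrative Python sketch, the cross-track GSD follows from flight altitude and the lens field of view (the 30° value here is an assumption for illustration, not the Pika IR+ specification), while the along-track GSD is simply flight speed divided by framerate:

```python
import math

def pushbroom_gsd(altitude_m, speed_mps, framerate_hz,
                  fov_deg=30.0, pixels_across=640):
    """Approximate cross- and along-track ground sampling distance (m)
    for a line-scan (pushbroom) sensor. fov_deg is a hypothetical
    across-track field of view; consult the lens datasheet for real values."""
    swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    cross_track = swath_m / pixels_across   # meters per pixel across the line
    along_track = speed_mps / framerate_hz  # meters advanced per frame
    return cross_track, along_track

# e.g., 35 m AGL at 1 m/s and 50 frames/s:
ct, at = pushbroom_gsd(altitude_m=35, speed_mps=1.0, framerate_hz=50)
print(f"cross-track ~ {ct*100:.1f} cm, along-track ~ {at*100:.1f} cm")
```

Under these assumed parameters the pixels come out near the 3 cm target described below, which is why modest changes in speed or altitude have such a direct effect on image quality.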
Initially, we sought to collect imagery at a very fine resolution (<4 cm ground sampling distance [GSD]), which pushes the practical limits of most SWIR sensors available today. In order to collect high-quality data that are relatively free of noise or interruptions, the sensor requires maximal light reflection off the ground surface. SWIR wavelengths carry far less energy than shorter-wavelength visible or NIR light, which makes it difficult to produce high-resolution sensors that collect data at very high framerates. In order to achieve the highest resolution possible, SWIR surveys must be conducted under optimal lighting conditions, ideally near solar noon, on a day free of clouds or atmospheric haze. In field settings, these conditions cannot always be met, but we found that attempting data collection on fully overcast days did not result in useful data, as the low framerate required under those conditions necessitated a flight speed and height that were untenable for achieving high spatial resolution. As a rule, the greater the intensity of solar reflection off the ground surface, the higher the resolution and the cleaner the SWIR data that can be collected.
Resonon’s software interface enables users to customize both the framerate and the number of spectral bands that the sensor will collect; reducing either parameter increases the signal and reduces noise, thereby producing higher-quality spectral data. Choosing the best data collection parameters is a balance between framerate, drone altitude, and speed, and the optimal parameters vary depending on the field conditions. A slower framerate allows for a longer integration time (i.e., exposure), which increases the signal-to-noise ratio (SNR) of the data and is particularly important under lower illumination conditions [46]. However, a slower framerate also results in coarser along-track spatial resolution, which must be offset by a slower flight speed to maintain sufficient spatial resolution. Conversely, higher framerates result in finer along-track resolution, which then necessitates a lower flight altitude to decrease the cross-track resolution and maintain an aspect ratio of at least two [47]. Balancing the framerate, flight speed, and flight altitude is key to a successful mission. In our surveys, we sought to collect imagery with the highest spatial resolution possible, ideally at a 3 cm GSD, in order to differentiate individual surface artifacts. We experimented with many different settings and found that on a clear day under optimal field conditions, the most effective framerate is usually 40–60 frames/second, with the drone flying 30–40 m AGL at 0.5–1.5 m/s. However, the actual GSD often ends up coarser than predicted by the flight planning software due to noise in the data caused by flight instabilities and spectral noise.
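One reading of the "aspect ratio of at least two" guidance is that the along-track sampling should be at least twice as fine as the cross-track GSD; under that assumption (which should be checked against the sensor documentation), the maximum usable flight speed for a given framerate can be computed directly:

```python
def max_speed_for_oversampling(cross_track_gsd_m, framerate_hz, ratio=2.0):
    """Fastest drone speed (m/s) that keeps along-track sampling at least
    `ratio` times finer than the cross-track GSD (cross/along >= ratio).
    Interpreting the aspect-ratio guidance as 2x along-track oversampling
    is an assumption made for illustration."""
    max_along_track_m = cross_track_gsd_m / ratio
    return max_along_track_m * framerate_hz

# e.g., 3 cm cross-track pixels at 50 frames/s:
print(max_speed_for_oversampling(0.03, 50))  # 0.75 m/s
```

The resulting speed falls inside the 0.5–1.5 m/s envelope reported above, showing how quickly the constraints converge on very slow flight.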

3.2. Drone and Mission Planning

For our experimental surveys, we mounted the Resonon Pika IR+ on a DJI Matrice 600 Enterprise-grade hexacopter (Figure 2). The DJI M600 is an aging drone model, originally released to consumers in 2016, and as such, it lacks many of the performance, hardware, and software upgrades of more recent drones. It nonetheless remains one of the only consumer drones capable of lifting the Pika sensor package, which weighs 4.3 kg (9.47 lbs) with all batteries, cables, and peripherals (a newer model of the Pika sensor released in 2023 is only 2.7 kg and thus can be flown on newer drone models like the DJI M300 or M350). For our surveys, we secured the Pika’s aluminum frame to a Ronin gimbal mount that attaches to the M600’s payload bars. The flight control computer and sensor require an external LiPo battery, which we mounted in a custom box attached to the airframe. We also attached an extra GPS antenna mast to the top of the airframe and mounted a GPS puck that delivers location data to the Pika’s IMU.
Planning surveys for SWIR data collection using any currently available commercial sensor is considerably more complex than with sensors that are integrated into drone hardware. Users first plan survey areas of interest (AOIs) in a GIS or Google Earth and then upload a KMZ file with this AOI into the Pika IR+ flight control computer using a USB-cabled interface. In theory, the sensor is then programmed to begin data collection whenever it enters the AOI. In order to collect multiple, adjacent lines of data, the flight plan must include enough overshoot past the edge of the AOI so that the sensor does not record during turns. Because each transect is processed independently (see below), pushbroom SWIR surveys do not require significant sidelapping, so we usually plan on 10–15%. Using swath width estimates from Resonon’s flight planning software, users manually calculate the appropriate amount of overlap between transects. However, because the SWIR sensor is un-gimbaled, wind gusts can cause the drone to roll during transects, resulting in gaps in coverage if transects do not have sufficient sidelapping coverage.
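The manual overlap calculation described above is simple arithmetic. This sketch (with a hypothetical swath width) computes the line spacing for a given sidelap fraction and the number of transects needed to cover an AOI:

```python
import math

def transect_spacing(swath_width_m, sidelap_frac=0.125):
    """Distance between adjacent flight lines for a given sidelap fraction;
    the surveys described here plan on 10-15% sidelap, so 12.5% is a
    mid-range default."""
    return swath_width_m * (1 - sidelap_frac)

def n_transects(aoi_width_m, swath_width_m, sidelap_frac=0.125):
    """Number of parallel flight lines needed to span an AOI."""
    return math.ceil(aoi_width_m / transect_spacing(swath_width_m, sidelap_frac))

# Hypothetical 18.8 m swath over a 100 m wide AOI:
print(f"line spacing: {transect_spacing(18.8):.2f} m")
print(f"lines needed: {n_transects(100, 18.8)}")
```

In practice, as the text notes, it is prudent to plan spacing tighter than this calculation suggests, since roll from wind gusts on the un-gimbaled sensor can open gaps between transects.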
In order to achieve the desired image resolution of a 3–4 cm GSD with our 25 mm lens, we conducted surveys at 30–55 m above ground level, at around 1 m per second—a speed that can feel agonizingly slow. See Table 1 for the flight parameters used in the experiment. With the heavy payload of the Pika sensor, we found that the M600 could only fly for 13–15 min, depending on wind and weather conditions. In addition, prior to beginning a survey, the drone must perform a figure-eight series of turns in order to calibrate the IMU, which ideally should be conducted at faster airspeeds to conserve battery. In order to accommodate these complex mission planning requirements, we used the Universal Ground Control Station (UgCS) mission planning application, as it enables users to change the drone speed, orientation, turn type, and altitude at each waypoint [48]. However, to execute a flight in the field, the software requires both the drone remote controller and a laptop, connected by an external WiFi network—this is a challenge in field settings, as we found with previous lidar surveys using the M600 [41].

3.3. Experimental Design

In order to test the viability of ultra-high-resolution SWIR imaging in archaeology, we designed a simple experimental survey that would enable us to determine how effectively the Resonon Pika IR+ sensor could be used to locate and characterize artifacts. We first created faux artifacts intended to mimic common types of materials encountered on archaeological sites. These included (1) dark-grey/black dacite chert flakes, (2) white novaculite chert flakes, (3) red mahogany obsidian flakes, (4) black obsidian flakes, (5) glazed whiteware ceramic sherds from un-provenienced historical New England collections, (6) plain redware ceramic sherds produced from replica ancient pottery, (7) pure-copper sheet-metal squares, (8) galvanized-steel sheet-metal squares, and (9) fired brick fragments from a 19th-century building foundation (Figure 3). We created two size categories of faux artifacts: a larger version measuring 5–7 cm in diameter and a smaller version at ~3 cm in diameter.
We selected a local baseball field for our experimental surveys (Figure 4) as this would enable us to place artifacts on one area with grass and another area with an infield mix (a mixture of sand, silt, clay, and fine gravel). We then arranged the faux artifacts in lines, with two lines of two different sizes on each ground-cover type. We placed ground control targets around the edges of the survey area and recorded precise XYZ locations of the targets using an Emlid Reach RS2 RTK survey system. A custom ground calibration panel was placed in the AOI as a standard to convert the raw data into reflectance during data processing. We recorded additional ground control points at the corners of the panel. Finally, we collected high-resolution visible-light drone imagery of the survey area using a DJI Mavic 2 Pro and produced 1 cm resolution orthoimagery of the survey area using Agisoft Metashape. These data were exported to ArcGIS Pro for subsequent analysis.
We conducted surveys using the Pika sensor across numerous different days at this same site, repositioning artifacts and control points each time. This enabled us to experiment with many different imagery collection parameters, refining settings for the framerate and spectral channels, as well as varying settings for the speed, altitude, and transect spacing of the drone. Here we present the most effective parameters and arrangement.
We additionally explored the potential of a portable, low-cost SWIR spectrometer for developing a spectral library of artifact samples that might aid in image classification. We used a NirvaScan NIR-M-F1 (Allied Scientific Pro, Gatineau, QC, Canada), which has a spectral range of 900–1700 nm and 228 bands, to collect SWIR reflectance data for all the faux artifact types in our study. The samples were pressed against the sensor, which includes a built-in halogen lamp with an 8 mm shield that blocks ambient light and ensures a consistent distance between the sensor and the material. Each spectrum was an average of 6 scans, and 30 spectra were acquired for each sample, with the sensor being moved for each spectrum. The built-in reference spectrum for the sensor was used for all scans. After exporting the reflectance data from NirvaScan, all processing was completed in Stata, and the results were compared to spectral reflectance data from the Resonon aerial sensor.
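The acquisition protocol (each saved spectrum averaging 6 scans, 30 spectra per sample) can be simulated to show how averaging suppresses sensor noise; the 228-band profile and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def acquire_spectrum(true_profile, n_scans=6, noise=0.02):
    """Simulate one saved spectrum as the average of n_scans noisy reads,
    mirroring the handheld unit's 6-scan averaging."""
    scans = true_profile + rng.normal(0, noise, size=(n_scans, true_profile.size))
    return scans.mean(axis=0)

# 30 spectra per sample, as in the protocol, then a mean profile per material.
true_profile = np.linspace(0.2, 0.6, 228)  # 228 bands, invented shape
spectra = np.stack([acquire_spectrum(true_profile) for _ in range(30)])
mean_profile = spectra.mean(axis=0)
print(mean_profile.shape)
```

Averaging 30 spectra of 6 scans each reduces the per-band noise by roughly a factor of thirteen (the square root of 180), which is why the repeated-measurement protocol matters for low-reflectance materials.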

3.4. Data Processing and Classification

Once airborne SWIR data were successfully collected, we used Resonon’s Spectronon software to process and analyze the data [47]. Hyperspectral data are commonly recorded in the form of data cubes, which store each transect as a three-dimensional array (two spatial dimensions plus a spectral dimension), unlike a two-dimensional table or a conventional raster file. In Spectronon, the data are first converted to radiance and reflectance, then georectified, converted to first derivatives, and finally analyzed using several possible classification tools. Each flight transect is represented by a single data cube, with each data cube processed separately in Spectronon before they are mosaiced together in ArcGIS Pro as the final step.
First, data are converted to radiance using the provided tools based on standard conversion formulas and the imager calibration file that is specific to each sensor. Then the radiance data are converted to reflectance using the dual-shade calibration panel as a ground standard. The conversion to reflectance requires the precisely measured reference spectrum of the panel (provided by Resonon) and the selection of a region of interest (ROI) on the panel in the imagery, which allows the tool to compare the expected reflectance values to the actual reflectance values of the calibration panel and then apply the calculated corrections to the rest of the data cube. The last step before analysis is to convert the data to first-derivatives using a Savitzky–Golay filter [49], which helps to smooth some of the inherent noise while still maintaining the integrity of the data [47].
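The reflectance conversion and derivative steps can be approximated outside Spectronon. This sketch applies a simplified single-panel empirical correction (Spectronon's own tool handles the dual-shade panel geometry) and a Savitzky–Golay first derivative via SciPy; the cube and panel values are synthetic:

```python
import numpy as np
from scipy.signal import savgol_filter

def to_reflectance(cube, panel_roi_mean, panel_reference):
    """Single-panel empirical correction: scale each band so the calibration
    panel's measured values match its certified reference reflectance.
    A sketch of the idea, not Spectronon's exact algorithm."""
    gain = panel_reference / panel_roi_mean  # per-band correction factors
    return cube * gain                       # broadcasts over (rows, cols, bands)

def first_derivative(cube, window=11, poly=2):
    """Savitzky-Golay smoothed first derivative along the spectral axis."""
    return savgol_filter(cube, window_length=window, polyorder=poly,
                         deriv=1, axis=-1)

# Synthetic (rows, cols, bands) cube with a pretend panel ROI in one corner.
cube = np.random.default_rng(1).uniform(0.1, 0.9, size=(4, 5, 100))
panel_meas = cube[:2, :2, :].mean(axis=(0, 1))
refl = to_reflectance(cube, panel_meas, panel_reference=np.full(100, 0.5))
deriv = first_derivative(refl)
print(refl.shape, deriv.shape)
```

After correction, the panel ROI's mean spectrum matches the reference exactly, and the same per-band gains are carried across the rest of the cube, which is the essence of the ROI-based workflow described above.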
At any point in the processing workflow, the data can be geocorrected. By nature, the spatial accuracy of pushbroom data is highly sensitive to the pitch of the drone, the stability of the gimbal (or lack thereof), and the accuracy of the IMU [50]. Frequently, the data cubes are quite difficult to interpret visually until after geocorrection. Spectronon’s GeoRectify plugin uses IMU information from the survey, the mean elevation of the survey area, and calculated physical offsets between the IMU and imager to spatially correct the data cube. After georectification, the data cubes can then be exported as separate images and mosaiced in ArcGIS Pro, as the native mosaicing tools in Spectronon are relatively simple.
Before undertaking image classification, we generated spectral profiles of different features and experimented with different band combinations to determine which wavelengths most effectively distinguish between the faux artifacts and the environment. Spectronon includes a tool that allows the user to easily compare spectral profiles of different ROIs in the image. In our first analyses of the SWIR imagery, we also experimented with differing approaches to visualizing these complex datasets, assigning different spectral bands to RGB channels in order to create basic band combinations in which most artifacts are visible against both grass and gravel backgrounds (Figure 5).
After processing the data cubes, we undertook a supervised classification [51] of the drone-based SWIR imagery in which we manually selected training samples representing each artifact type as well as vegetation and soil, and then employed one of numerous algorithms to automate the detection of other pixels that share a set of similar reflectance values. In principle, this approach could be used on a large scale to locate and identify artifacts across a large study region. Spectronon offers many different classification algorithms, and other innovative approaches to improving classification results are appearing in the recent literature [52,53]. For an experiment at this scale and with a small number of training samples, we chose the spectral angle mapper (SAM) for its simplicity as well as its sensitivity to spectral shape and insensitivity to brightness, which is especially important in datasets with variable illumination [17]. First, we used it as an exploratory tool to determine which materials were most easily distinguished from the background, and then we performed a classification of the materials with the most spectrally distinct angles. The SAM treats spectra as vectors in an N-dimensional space, where N is equal to the number of bands, and calculates the angle between these vectors. A maximum angle threshold is specified for each class, and a pixel is left unclassified when its angle to every class exceeds that class’s threshold. Each classified pixel is assigned the class with the lowest score, where the score is the spectral angle divided by that class’s threshold; the pixel color in the output represents this best match. After running the classification, we adjusted the thresholds for each class to optimize the detection of the artifacts.
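The SAM scoring rule described above reduces to a few lines of code. This sketch (with invented four-band spectra and hypothetical thresholds) computes the spectral angle to each class, divides by the class threshold, and leaves a pixel unclassified when no score falls below one:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as N-dimensional vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, classes, thresholds):
    """Return the class with the smallest angle/threshold score, or None
    when every class angle exceeds its threshold (unclassified)."""
    best_name, best_score = None, 1.0
    for name, ref in classes.items():
        score = spectral_angle(pixel, ref) / thresholds[name]
        if score < best_score:
            best_name, best_score = name, score
    return best_name

# Invented 4-band reference spectra and per-class angle thresholds (radians).
classes = {
    "whiteware": np.array([0.7, 0.8, 0.6, 0.5]),
    "redware":   np.array([0.3, 0.4, 0.5, 0.6]),
}
thresholds = {"whiteware": 0.10, "redware": 0.10}
print(sam_classify(np.array([0.68, 0.79, 0.62, 0.49]), classes, thresholds))
```

Because the angle depends only on the direction of the vectors, uniform brightness changes (e.g., from uneven illumination) leave the score unchanged, which is the property that motivated choosing SAM here.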

4. Results

The results of our experimental efforts to locate artifacts are promising in many respects but also highlight the challenges of operationalizing our larger goal of employing this emerging technology in archaeological field settings.
Pushbroom sensors like the Pika IR+ are highly dependent on the stability of the aircraft, which can be a challenge even with a larger and more stable drone like the M600 [11,50]. We found that the accuracy of the Pika’s onboard IMU was insufficient for small survey areas and that Resonon’s relatively new RTK system for airborne deployment was necessary to obtain acceptable quality location data. The RTK system communicates via radios plugged into the Ellipse IMU on the aircraft and the Emlid Reach RS2 base station and proved to be an effective way to obtain higher-accuracy locations for data collection and georectification.
The effective deployment of the system to obtain high-resolution data hinges on balancing a good signal-to-noise ratio with drone flight parameters that yield both high spectral and spatial resolution [50]. Field conditions other than clear sunny days are particularly challenging, as a lower framerate (and thus a slower flight speed and higher flight altitude) is needed on overcast days. The presence of errant clouds can also completely invalidate a flight line and require a repeated survey of the same area. Furthermore, the framerate and flight settings should produce a cross-track to along-track aspect ratio of approximately two to counteract flight instabilities, which can require the user to make compromises in the spatial resolution of the imagery.
Hyperspectral data can be noisy, especially if collected with a less-than-ideal signal-to-noise ratio. We tried multiple approaches for dealing with noise, including taking the first derivative, removing noisy wavelengths prior to analysis, and collecting data at full spectral resolution before averaging the bands during processing. Furthermore, each scan line is processed separately, and we often observed significant differences in the reflectance of the same object across data cubes. This is due to the bidirectional reflectance distribution function of the surface (i.e., reflectance varies with the angle of the sun relative to the position of the imager). To account for reflectance differences across data cubes, we selected training samples from across all data cubes before performing any supervised classifications.
Despite these challenges, our results are promising for the prospects of drone-based SWIR imaging as an archaeological tool. The analysis of spectral plots for various artifact categories as well as samples of bare earth and grass shows that in some spectral ranges, there is a large degree of convergence in reflectance profiles, while other wavelengths offer greater discrimination. Figure 6 illustrates the mean reflectance values for all artifact categories in our experiment, in addition to green grass, dead grass, and bare earth backgrounds. It is evident in all spectral curves that there is significant noise at the low and high ends of the collected range (800–900 nm and >1650 nm), as well as around the 1350–1450 nm range. This results from the weaker illumination at the shortest and longest wavelengths, which degrades the signal, and from atmospheric absorption in the mid-spectrum [46]. White- and redware ceramics, as well as white chert, brick, and copper, all have quite distinct spectral profiles that enable us to discriminate them from one another with relative ease. By contrast, black and red obsidian and black chert all have very low reflectance values and appear similar to one another across the spectrum. These materials are also very close in shape and value to the dark gravel and soil of the infield mix, making these artifact types difficult to recognize in our data and likely problematic to differentiate in a supervised classification.
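In practice, noisy ranges like those noted above are often simply masked out before analysis. This sketch (with an illustrative 230-band wavelength axis, not the sensor's actual band centers) builds a boolean mask retaining only the usable bands:

```python
import numpy as np

# Wavelength axis for a 900-1700 nm sensor with 230 bands (illustrative).
wavelengths = np.linspace(900, 1700, 230)

def usable_band_mask(wl):
    """Mask out the noisy ranges the spectral curves show: the extremes of
    the sensor's range and the ~1350-1450 nm atmospheric absorption window.
    The 950 nm lower cutoff is an assumption chosen for illustration."""
    noisy = (wl < 950) | (wl > 1650) | ((wl >= 1350) & (wl <= 1450))
    return ~noisy

mask = usable_band_mask(wavelengths)
print(int(mask.sum()), "of", wavelengths.size, "bands retained")
```

Applying such a mask to the data cube (`cube[..., mask]`) before classification keeps the noisy bands from dominating distance or angle calculations.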
We experimented with many different band combinations designed to exploit the spectral ranges with the greatest differences among artifact categories, and between artifacts and background values, to create visualizations in which artifacts stand out. For example, whiteware ceramics show large spikes in reflectance values around 960, 1360, and 1680 nm, with gradually declining values elsewhere, whereas white chert, which appears similar in the visible spectrum, has more modest peaks at 1350, 1440, and 1680 nm and otherwise flat, lower values. These kinds of differences across artifacts enable us to enhance the visibility of different artifact types by toggling various band combinations, as illustrated in Figure 5B,C.
Ultimately, however, a better approach when dealing with such a complex dataset is to use supervised classification tools. Spectronon offers many different image-analysis functions to aid in processing these hyperspectral data, and we experimented with many of them. Processing data using a Savitzky–Golay filter [49] provides a smoothed, first-derivative image that reveals the largest number of artifact types while retaining much of the underlying spectral variability. We rely on this image as the basis for a supervised classification using Spectronon’s spectral angle mapper (SAM) classification tool (Figure 7). The results of this classification method successfully identified much of the plain redware (50%), glazed whiteware (50%), and brick samples (75%) in both gravel and grass backgrounds (Table 2). It was less successful at locating other artifact categories, finding only one of four copper samples and no samples of white chert. Red and black obsidian and black chert were not included in the classification as these materials were not visible in the image and thus training samples could not be created. Steel could not be distinguished from the soil/gravel class using a SAM approach, so it was not included in the final classification.
Although the classification accuracy for most artifact types was not high, the classification did distinguish artifacts from the grass and soil background reasonably well. The SAM classification located 41% of the artifacts in our study, a good result given that many of the artifacts are smaller than the final 4 cm GSD of the imagery. If we consider only those artifact types that we can readily recognize in SWIR data (white ceramics, red ceramics, brick, white chert, and copper), the SAM classification does even better, locating 70% of the artifacts. On the other hand, there are a number of false positives: areas of grass or soil that were misclassified as one of our artifact types. These could be larger pieces of gravel or stone, small pieces of trash, or simply areas of the ground where reflectance off the surface was not easily classified.
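The summary statistics reported for Table 2 (overall accuracy and kappa) follow from standard error-matrix arithmetic. As a generic sketch, not the software’s own routine, and run here on a tiny invented matrix rather than our actual results:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square error (confusion)
    matrix with rows = classified labels and columns = reference labels."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n                            # diagonal agreement
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # expected by chance
    return observed, (observed - chance) / (1.0 - chance)

oa, kappa = accuracy_and_kappa([[2, 0], [0, 2]])  # perfect 2-class agreement
```

Kappa discounts agreement expected by chance, which matters here because the abundant grass and soil classes dominate the matrix and inflate raw accuracy.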
As discussed above, we also attempted to collect SWIR spectral plots of all the artifact types in our experimental study using a handheld spectrometer, in hopes that these data could be used to train a classification algorithm (Figure 8). We found that the reflectance values collected using our inexpensive portable SWIR spectrometer diverged considerably from those recorded by the Resonon sensor during our experimental surveys, owing to the very different lighting conditions under which the data were collected as well as presumed differences in the hardware of the two sensors. Additionally, the extremely low and largely undifferentiated reflectance values across the entire 900–1700 nm range for both metals and the darker faux artifacts (black chert, obsidian, and brick) suggest that these data may be less useful for the analysis of some artifact types.
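One partial remedy for cross-sensor divergence is to compare spectral shape rather than absolute reflectance. A minimal sketch, assuming simple linear interpolation onto the airborne sensor’s band centers (the function name and example values are ours):

```python
import numpy as np

def shape_on_sensor_bands(wl_field, refl_field, wl_sensor):
    """Interpolate a field-spectrometer curve onto the airborne sensor's
    band-center wavelengths, then scale to unit length so that only
    spectral *shape* is compared; absolute reflectance levels differ
    between instruments and lighting conditions."""
    resampled = np.interp(wl_sensor, wl_field, refl_field)
    return resampled / np.linalg.norm(resampled)

# Invented two-point field spectrum resampled to three sensor band centers
shape = shape_on_sensor_bands([900.0, 1700.0], [0.0, 1.0],
                              [900.0, 1300.0, 1700.0])
```

Even with shape normalization, however, the nearly flat spectra of our dark artifacts would carry little discriminating information, consistent with the limitation noted above.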

5. Discussion

The results of this first experiment with drone-based SWIR imaging in archaeology show the potential of this technology to transform archaeological surveys and landscape-based investigations more broadly, while also highlighting several key challenges in the realization of this promise. Our experimental data indicate that several artifact types can be readily distinguished in SWIR imagery of adequate resolution and that a supervised classification algorithm can use training samples of known artifact types to plot the location and type of other similar artifacts within a survey area. This basic demonstration of the technology’s potential is highly significant and suggests that future experimentation and the continued development of this approach are warranted.
However, our experiment shows that the current nascent state of UAV-borne SWIR sensor technology makes it challenging to collect imagery at fine spatial resolution due to the tradeoff between the sensor’s framerate and the drone’s speed and altitude. We came close to this goal, achieving a 4 cm GSD that we successfully used to locate and characterize artifacts, but only by surveying under optimal lighting conditions at an airspeed of less than 1 m/s. Clearly, the short flights possible with the heavy Pika IR+ mounted on an M600 would make the collection of SWIR data at this resolution over a large area a logistical challenge. However, Resonon has already released an improved version of the Pika SWIR sensor, the IR-L+, which is around half the weight of the model used in this experiment, meaning it could be deployed on a drone with a smaller maximum payload and longer flight time than the M600. The IR-L+ also offers higher spectral resolution, with 470 bands in the 925–1700 nm range versus the 336 bands of the older IR+ model. As improvements in the underlying SWIR sensor technology continue, SWIR surveys at the resolution we desire for artifact identification will only become easier.
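The resolution tradeoff described here follows directly from pushbroom-scanner geometry: along-track pixel size is set by ground speed divided by framerate, and cross-track pixel size by swath width divided by pixel count. A worked example using the Table 1 parameters (the function names are ours; this is the standard relationship, not a formula from the sensor documentation):

```python
def along_track_gsd_cm(ground_speed_ms, frame_rate_hz):
    """Ground distance covered between successive line-scan frames."""
    return 100.0 * ground_speed_ms / frame_rate_hz

def cross_track_gsd_cm(swath_m, cross_track_pixels):
    """Swath width divided across the sensor's cross-track pixels."""
    return 100.0 * swath_m / cross_track_pixels

# Values from Table 1: 1 m/s at 60 Hz, 13 m swath over 640 pixels
along = along_track_gsd_cm(1.0, 60.0)   # ~1.67 cm, matching the theoretical GSD
cross = cross_track_gsd_cm(13.0, 640)   # ~2.03 cm
```

Doubling airspeed to 2 m/s would double the along-track GSD to ~3.3 cm, which is why our sub-4 cm imagery required flying at under 1 m/s.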
We should note, however, that even somewhat lower-resolution SWIR imagery, at a 10–20 cm GSD for example, would be in line with the spatial resolution of most aerial thermal data we have collected [40,42], as well as that of most terrestrial geophysics such as magnetometry, resistivity, or ground-penetrating radar [54]. While individual artifacts would be difficult to identify at this lower resolution, it would offer a potentially powerful prospection and mapping tool for other kinds of surface features, including architectural remains, field systems, roadways, pits, ditches, or earthworks. All these features can be manifested at the surface by differences in soil composition or by the incorporation of exotic stones and building materials, making them good candidates for documentation using SWIR imagery. Likewise, SWIR data could be used at lower resolutions to locate areas of differing artifact surface density, a key tool for mapping past occupation [21] or farming [55]. Even if we cannot identify individual artifacts, a sufficient concentration of artifacts within a larger pixel will impact the overall reflectance values of that area of the ground, enabling us to differentiate areas with greater or lesser artifact densities, as has been demonstrated experimentally for low-resolution thermal imagery [56].
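The density effect described here is usually formalized as a linear mixing model, in which a coarse pixel’s spectrum is the area-weighted average of its components. A minimal sketch with invented example spectra:

```python
import numpy as np

def mixed_pixel(artifact, background, fill_fraction):
    """Linear spectral mixing: a coarse pixel's reflectance is the
    area-weighted average of artifact and background spectra, so denser
    artifact scatter shifts the pixel spectrum toward the artifact's."""
    f = float(fill_fraction)
    return (f * np.asarray(artifact, dtype=float) +
            (1.0 - f) * np.asarray(background, dtype=float))

# A pixel whose area is 25% sherds over a dark soil background
mixed = mixed_pixel([0.8, 0.6], [0.2, 0.2], 0.25)
```

Under this model, two pixels with different sherd densities yield measurably different spectra even when no individual sherd is resolvable, which is the basis of the sub-pixel detection approach cited above [56].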
Our experiment also illustrates that some artifact categories, particularly those that are very dark in color, are more difficult to identify in the data than others. In our analysis, the dark-grey dacite chert and the black obsidian were consistently the most difficult to pick out visually in band combinations and were also the least likely to be successfully identified in a supervised classification. These materials tend to absorb more energy at all wavelengths, and the shiny, planar surfaces of lithic debitage likely cause greater variation in the bidirectional reflectance distribution function of the material. On the other hand, even artifacts with these issues of absorption and reflectance might be identifiable against a very light background, such as quartz sands or salt flats. It is also possible that a SWIR sensor with a broader spectral range, such as the 700–2500 nm range of the highest-end sensors, might offer better results for these low-reflectance objects. Nonetheless, no survey method will locate all archaeological materials with equal success, so simply being aware of the variability in the effectiveness of SWIR data for different material types will help to target the application of this powerful technology.
Our efforts to use a handheld spectrometer to measure the reflectance values of specific artifacts did not produce results that were directly comparable to the spectral profiles of the same artifacts recorded by the Resonon sensor. Geologists have employed handheld spectrometers to create spectral libraries of SWIR reflectance curves for particular types of rocks and minerals, subsequently using these data to interpret multispectral satellite imagery or even train a classification [16,18,57]. Archaeologists have similarly used handheld spectrometers for artifact analysis, primarily as a tool for sourcing or for identifying anthropogenic soils [29,30,31,32,33], and we hoped that this approach would enable the creation of a spectral library of the artifact types in our study. However, the lack of correspondence in reflectance values between the two SWIR sensors, combined with the very low and nearly flat curves for most dark-colored artifacts, makes the handheld spectrometer data difficult to utilize. It would instead be preferable to record the spectral profiles of known artifacts using the same sensor employed in a survey, which would likely produce data suitable for training a classification of artifacts across a site. For example, we could conduct a small surface collection following a SWIR survey, identify the key artifact types present on a given site, and then record their reflectance profiles either by mounting the SWIR sensor on a bench or by laying the artifacts out on a spectrally flat material and conducting a low-altitude survey with the sensor on the drone.

6. Conclusions

This paper has presented the initial results of an archaeological experiment designed to test the feasibility of using a drone-deployed SWIR sensor to locate and characterize surface artifacts and features. Although we encountered many challenges in executing the experiment and the results are not a panacea, our findings nonetheless demonstrate the potential for high-resolution, hyperspectral SWIR imaging in archaeology. Collecting imagery of sufficient spatial resolution over a large enough area and recognizing all artifact types against differing background values proved to be the greatest challenges. However, these core difficulties are both technological problems for which we can expect improvements to come soon. Resonon has already released a newer airborne SWIR sensor that offers potentially higher-resolution imagery and can be deployed on a more reliable drone with longer flight times, thereby improving both resolution and areal coverage. A key competitor, Headwall, has similarly released a more-compact version of their SWIR sensor, now small enough to mount on a consumer drone, offering spectral coverage from 900 to 2500 nm [58]. As these and other firms continue to develop underlying sensor technology and drone capabilities continue to improve, we can anticipate that deploying SWIR sensors in field settings will become easier and more productive in the near future.
The successful development and deployment of drone-based SWIR imaging could have wide-ranging impacts on archaeological research and cultural heritage management. In regions with high densities of surface artifacts, we could map their distribution with a precision and scale never before possible, offering unparalleled insights into the extent of ancient settlements, the differing densities of artifacts across space, and the variability in the types of artifacts as a proxy for the dating and characterization of occupation. At individual sites, we would be able to define habitation areas, refuse dumps, or areas of industrial production activities. For sites with surface architectural remains or other similar features, we could map their full extent rapidly, precisely, and non-destructively. At a larger landscape scale, the method could reveal differing intensities of past agricultural practice, possible sites for pastoral or nomadic activities, or specialized ritual centers. The results of SWIR imaging can also be combined with other datasets, including those of excavations, archaeological geophysics, or high-resolution topographic mapping to help us better understand the character of ancient sites.
Archaeologists face an unprecedented challenge today as we strive to document countless archaeological sites that are under enormous threats from the intensification of agricultural land use [59,60,61], climate change [62,63], and looting or other forms of intentional site destruction [64,65,66]. The speed and scope with which the archaeological record is being lost demands that we develop new tools and methods to discover and document sites and features more rapidly, both to preserve a record of the past and to support local stakeholder communities and heritage management professionals tasked with the preservation and protection of archaeological sites. SWIR imaging helps to address these challenges by offering a powerful new tool for mapping archaeological sites and features in a manner that has never before been possible, facilitating the discovery and documentation of surface archaeological features quickly, over large areas, non-destructively, and at modest cost.

Author Contributions

Conceptualization, Methodology, Analysis, Investigation, Writing, and Visualization, J.C. and C.F.; Project Administration and Funding Acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by a grant from the NASA Space Archaeology Program (19-IDS19-0023). Additional support for data processing costs was provided by a grant from the National Science Foundation’s Archaeometry Program (#1822107). Costs for instrument acquisition were provided by a grant from the Neukom Institute for Computational Science.

Data Availability Statement

All data produced by this project are curated at the Spatial Archaeometry Lab at Dartmouth College, USA, where data are publicly available. Researchers interested in accessing data should contact the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Banning, E.B. Archaeological Survey; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  2. Wilkinson, T.J. Surface Collection, Field Walking, Theory and Practice, Sampling Theories. In Introduction to Archaeological Science; Wiley: London, UK, 2001; pp. 529–541. [Google Scholar]
  3. Casana, J. Rethinking the Landscape: Emerging Approaches to Archaeological Remote Sensing. Annu. Rev. Anthropol. 2021, 50, 167–186. [Google Scholar] [CrossRef]
  4. Johnson, M.H. Phenomenological Approaches in Landscape Archaeology. Annu. Rev. Anthropol. 2012, 41, 269–284. [Google Scholar] [CrossRef]
  5. Markofsky, S.; Bevan, A. Directional Analysis of Surface Artefact Distributions: A Case Study from the Murghab Delta, Turkmenistan. J. Archaeol. Sci. 2012, 39, 428–439. [Google Scholar] [CrossRef]
  6. Wilkinson, T.J. Archaeological Landscapes of the Near East; University of Arizona Press: Tucson, AZ, USA, 2003; ISBN 978-0-8165-2173-9. [Google Scholar]
  7. Kosiba, S.; Bauer, A. Mapping the Political Landscape: Toward a GIS Analysis of Environmental and Social Difference. J. Archaeol. Method Theory 2013, 20, 61–101. [Google Scholar] [CrossRef]
  8. Scholnick, J.B.; Munson, J.L.; Macri, M.J. Positioning Power in a Multi-Relational Framework: A Social Network Analysis of Classic Maya Political Rhetoric. In Network Analysis in Archaeology: New Approaches to Regional Interaction; Oxford University Press: Oxford, UK, 2013; pp. 95–124. [Google Scholar]
  9. Kantner, J. The Archaeology of Regions: From Discrete Analytical Toolkit to Ubiquitous Spatial Perspective. J. Archaeol. Res. 2008, 16, 37–81. [Google Scholar] [CrossRef]
  10. Mattingly, D. Methods of Collection, Recording and Quantification. In Extracting Meaning from Ploughsoil Assemblages; The Archaeology of Mediterranean Landscapes; Oxbow: Oxford, UK, 2000; Volume 5. [Google Scholar]
  11. Angel, Y.; Turner, D.; Parkes, S.; Malbeteau, Y.; Lucieer, A.; McCabe, M. Automated Georectification and Mosaicking of UAV-Based Hyperspectral Imagery from Push-Broom Sensors. Remote Sens. 2019, 12, 34. [Google Scholar] [CrossRef]
  12. Askari, G.; Pradhan, B.; Sarfi, M.; Nazemnezhad, F. Band Ratios Matrix Transformation (BRMT): A Sedimentary Lithology Mapping Approach Using ASTER Satellite Sensor. Sensors 2018, 18, 3213. [Google Scholar] [CrossRef] [PubMed]
  13. Testa, F.; Villanueva, C.; Cooke, D.; Zhang, L.-J. Lithological and Hydrothermal Alteration Mapping of Epithermal, Porphyry and Tourmaline Breccia Districts in the Argentine Andes Using ASTER Imagery. Remote Sens. 2018, 10, 203. [Google Scholar] [CrossRef]
  14. Hewson, R.; Robson, D.; Carlton, A.; Gilmore, P. Geological Application of ASTER Remote Sensing within Sparsely Outcropping Terrain, Central New South Wales, Australia. Cogent Geosci. 2017, 3, 1319259. [Google Scholar] [CrossRef]
  15. Pour, A.B.; Hashim, M.; Hong, J.K. Application of Multispectral Satellite Data for Geological Mapping in Antarctic Environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLII-4/W1, 77–81. [Google Scholar] [CrossRef]
  16. Calvin, W.M.; Littlefield, E.F.; Kratt, C. Remote Sensing of Geothermal-Related Minerals for Resource Exploration in Nevada. Geothermics 2015, 53, 517–526. [Google Scholar] [CrossRef]
  17. Kruse, F.; Baugh, W.; Perry, S. Validation of DigitalGlobe WorldView-3 Earth Imaging Satellite Shortwave Infrared Bands for Mineral Mapping. J. Appl. Remote Sens. 2015, 9, 096044. [Google Scholar] [CrossRef]
  18. Gomez, C.; Viscarra Rossel, R.A.; McBratney, A.B. Soil Organic Carbon Prediction by Hyperspectral Remote Sensing and Field Vis-NIR Spectroscopy: An Australian Case Study. Geoderma 2008, 146, 403–411. [Google Scholar] [CrossRef]
  19. Watts, D.R.; Harris, N.B.W. Mapping Granite and Gneiss in Domes along the North Himalaya Antiform with ASTER SWIR Band Ratios. GSA Bull. 2005, 117, 879–886. [Google Scholar] [CrossRef]
  20. Cudahy, T.J.; Hewson, R.; Huntington, J.F.; Quigley, M.A.; Barry, P.S. The Performance of the Satellite-Borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No.01CH37217), Sydney, NSW, Australia, 9–13 July 2001; Volume 1, pp. 314–316. [Google Scholar] [CrossRef]
  21. Menze, B.; Ur, J. Mapping Patterns of Long-Term Settlement in Northern Mesopotamia at a Large Scale. Proc. Natl. Acad. Sci. USA 2012, 109, E778–E787. [Google Scholar] [CrossRef]
  22. Kalayci, T.; Lasaponara, R.; Wainwright, J.; Masini, N. Multispectral Contrast of Archaeological Features: A Quantitative Evaluation. Remote Sens. 2019, 11, 913. [Google Scholar] [CrossRef]
  23. Bauer, A. Impacts of Mid- to Late-Holocene Land Use on Residual Hill Geomorphology: A Remote Sensing and Archaeological Evaluation of Human-Related Soil Erosion in Central Karnataka, South India. Holocene 2013, 24, 3–14. [Google Scholar] [CrossRef]
  24. Vining, B. Reconstructions of Local Resource Procurement Networks at Cerro Baúl, Peru Using Multispectral ASTER Satellite Imagery and Geospatial Modeling. J. Archaeol. Sci. Rep. 2015, 2, 492–506. [Google Scholar] [CrossRef]
  25. Casana, J.; Ferwerda, C. Archaeological Prospection Using WorldView-3 Short-wave Infrared (SWIR) Satellite Imagery: Case Studies from the Fertile Crescent. Archaeol. Prospect. 2023, 30, 327–340. [Google Scholar] [CrossRef]
  26. Davis, D. The Applicability of Short-Wave Infrared (SWIR) Imagery for Archaeological Landscape Classification on Rapa Nui (Easter Island), Chile. Alpenglow 2017, 3, 3. [Google Scholar] [CrossRef]
  27. Rowlands, A.; Sarris, A. Detection of Exposed and Subsurface Archaeological Remains Using Multi-Sensor Remote Sensing. J. Archaeol. Sci. 2007, 34, 795–803. [Google Scholar] [CrossRef]
  28. Challis, K.; Kincey, M.; Howard, A.J. Airborne Remote Sensing of Valley Floor Geoarchaeology Using Daedalus ATM and CASI. Archaeol. Prospect. 2009, 16, 17–33. [Google Scholar] [CrossRef]
  29. Okyay, U.; Khan, S.; Lakshmikantha, M.R.; Sarmiento, S. Ground-Based Hyperspectral Image Analysis of the Lower Mississippian (Osagean) Reeds Spring Formation Rocks in Southwestern Missouri. Remote Sens. 2016, 8, 1018. [Google Scholar] [CrossRef]
  30. Matney, T.; Barrett, L.R.; Dawadi, M.B.; Maki, D.; Maxton, C.; Perry, D.S.; Roper, D.C.; Somers, L.; Whitman, L.G. In Situ Shallow Subsurface Reflectance Spectroscopy of Archaeological Soils and Features: A Case-Study of Two Native American Settlement Sites in Kansas. J. Archaeol. Sci. 2014, 43, 315–324. [Google Scholar] [CrossRef]
  31. Hassler, E.R.; Swihart, G.H.; Dye, D.H.; Li, Y.S. Non-Destructive Provenance Study of Chert Using Infrared Reflectance Microspectroscopy. J. Archaeol. Sci. 2013, 40, 2001–2006. [Google Scholar] [CrossRef]
  32. Parish, R. The Application of Visible/Near-Infrared Reflectance (VNIR) Spectroscopy to Chert: A Case Study from the Dover Quarry Sites, Tennessee. Geoarchaeology 2011, 26, 420–439. [Google Scholar] [CrossRef]
  33. Fishel, R.L.; Wisseman, S.U.; Hughes, R.E.; Emerson, T.E. Sourcing Red Pipestone Artifacts from Oneota Villages in the Little Sioux Valley of Northwest Iowa. Midcont. J. Archaeol. 2010, 35, 167–198. [Google Scholar] [CrossRef]
  34. Campana, S. Drones in Archaeology. State-of-the-Art and Future Perspectives. Archaeol. Prospect. 2017, 24, 275–296. [Google Scholar] [CrossRef]
  35. Orengo, H.A.; Garcia-Molsosa, A. A Brave New World for Archaeological Survey: Automated Machine Learning-Based Potsherd Detection Using High-Resolution Drone Imagery. J. Archaeol. Sci. 2019, 112, 105013. [Google Scholar] [CrossRef]
  36. Olson, K.G.; Rouse, L.M. A Beginner’s Guide to Mesoscale Survey with Quadrotor-UAV Systems. Adv. Archaeol. Pract. 2018, 6, 357–371. [Google Scholar] [CrossRef]
  37. Herrmann, J.; Glissmann, B.; Sconzo, P.; Pfälzner, P. Unmanned Aerial Vehicle (UAV) Survey with Commercial-Grade Instruments: A Case Study from the Eastern Ḫabur Archaeological Survey, Iraq. J. Field Archaeol. 2018, 43, 269–283. [Google Scholar] [CrossRef]
  38. Materazzi, F.; Pacifici, M. Archaeological Crop Marks Detection through Drone Multispectral Remote Sensing and Vegetation Indices: A New Approach Tested on the Italian Pre-Roman City of Veii. J. Archaeol. Sci. Rep. 2022, 41, 103235. [Google Scholar] [CrossRef]
  39. Casana, J.; Kantner, J.; Wiewel, A.; Cothren, J. Archaeological Aerial Thermography: A Case Study at the Chaco-Era Blue J Community, New Mexico. J. Archaeol. Sci. 2014, 45, 207–219. [Google Scholar] [CrossRef]
  40. Casana, J.; Wiewel, A.; Cool, A.; Hill, A.C.; Fisher, K.D.; Laugier, E.J. Archaeological Aerial Thermography in Theory and Practice. Adv. Archaeol. Pract. 2017, 5, 310–327. [Google Scholar] [CrossRef]
  41. Casana, J.; Laugier, E.J.; Hill, A.C.; Reese, K.M.; Ferwerda, C.; McCoy, M.D.; Ladefoged, T. Exploring Archaeological Landscapes Using Drone-Acquired Lidar: Case Studies from Hawai’i, Colorado, and New Hampshire, USA. J. Archaeol. Sci. Rep. 2021, 39, 103133. [Google Scholar] [CrossRef]
  42. Hill, A.; Laugier, E.; Casana, J. Archaeological Remote Sensing Using Multi-Temporal, Drone-Acquired Thermal and Near Infrared (NIR) Imagery: A Case Study at the Enfield Shaker Village, New Hampshire. Remote Sens. 2020, 12, 690. [Google Scholar] [CrossRef]
  43. VanValkenburgh, P.; Cushman, K.C.; Butters, L.J.C.; Vega, C.R.; Roberts, C.B.; Kepler, C.; Kellner, J. Lasers Without Lost Cities: Using Drone Lidar to Capture Architectural Complexity at Kuelap, Amazonas, Peru. J. Field Archaeol. 2020, 45, S75–S88. [Google Scholar] [CrossRef]
  44. McLeester, M.; Casana, J.; Schurr, M.; Hill, A.; Wheeler, J. Detecting Prehistoric Landscape Features Using Thermal, Multispectral, and Historical Imagery Analysis at Midewin National Tallgrass Prairie, Illinois. J. Archaeol. Sci. Rep. 2018, 21, 450–459. [Google Scholar] [CrossRef]
  45. Resonon Airborne Systems Data Sheet 2024. Available online: https://resonon.com/content/airborne-remote-system-sp/Resonon---Airborne-Remote-Sensing-Hyperspectral-Systems.pdf (accessed on 15 February 2024).
  46. Swanson, R. Signal-to-Noise Ratio (SNRs) for Resonon Cameras. Hyperspectral Imaging. 2023. Available online: https://resonon.com/blog-snr-in-hyperspectral-cameras (accessed on 1 March 2024).
  47. Resonon Airborne User Manual, Release 7.11 2024. Available online: https://docs.resonon.com/airborne/AirborneUserManual.pdf (accessed on 1 March 2024).
  48. UgCS UgCS Software Manual. Available online: https://manuals-ugcs.sphengineering.com/ (accessed on 1 March 2024).
  49. Savitzky, A.; Golay, M.J.E. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  50. Hruska, R.; Mitchell, J.; Anderson, M.; Glenn, N. Radiometric and Geometric Analysis of Hyperspectral Imagery Acquired from an Unmanned Aerial Vehicle. Remote Sens. 2012, 4, 2736–2752. [Google Scholar] [CrossRef]
  51. Tuia, D.; Volpi, M.; Copa, L.; Kanevski, M.; Munoz-Mari, J. A Survey of Active Learning Algorithms for Supervised Remote Sensing Image Classification. IEEE J. Sel. Top. Signal Process. 2011, 5, 606–617. [Google Scholar] [CrossRef]
  52. Aguilar, M.A.; Jiménez-Lao, R.; Aguilar, F. Evaluation of Object-Based Greenhouse Mapping Using WorldView-3 VNIR and SWIR Data: A Case Study from Almería (Spain). Remote Sens. 2021, 13, 2133. [Google Scholar] [CrossRef]
  53. Chen, X.; Zhu, G.; Liu, M. Remote Sensing Image Scene Classification with Self-Supervised Learning Based on Partially Unlabeled Datasets. Remote Sens. 2022, 14, 5838. [Google Scholar] [CrossRef]
  54. Ernenwein, E.G. Geophysical Survey Techniques; Wiley: Hoboken, NJ, USA, 2023; ISBN 978-1-119-59204-4. [Google Scholar]
  55. Cajigas, R.; Quade, J.; Rittenour, T. Multitechnique Dating of Earthen Irrigation Canals at the La Playa Site, Sonora, Mexico. Geoarchaeology 2020, 35, 834–855. [Google Scholar] [CrossRef]
  56. Buck, P.E.; Sabol, D.E.; Gillespie, A.R. Sub-Pixel Artifact Detection Using Remote Sensing. J. Archaeol. Sci. 2003, 30, 973–989. [Google Scholar] [CrossRef]
  57. Acosta, I.C.C.; Khodadadzadeh, M.; Tusa, L.; Ghamisi, P.; Gloaguen, R. A Machine Learning Framework for Drill-Core Mineral Mapping Using Hyperspectral and High-Resolution Mineralogical Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4829–4842. [Google Scholar] [CrossRef]
  58. Headwall Photonics Hyperspectral Products. Available online: https://headwallphotonics.com/products/ (accessed on 13 April 2024).
  59. Vogt, R.; Kretschmer, I. Archaeology and Agriculture: Conflicts and Solutions. EG Quat. Sci. J. 2019, 68, 47–51. [Google Scholar] [CrossRef]
  60. Rick, T.C.; Sandweiss, D.H. Archaeology, Climate, and Global Change in the Age of Humans. Proc. Natl. Acad. Sci. USA 2020, 117, 8250–8253. [Google Scholar] [CrossRef]
  61. Casana, J. Global-Scale Archaeological Prospection Using CORONA Satellite Imagery: Automated, Crowd-Sourced, and Expert-Led Approaches. J. Field Archaeol. 2020, 45, S89–S100. [Google Scholar] [CrossRef]
  62. Anderson, D.G.; Bissett, T.G.; Yerka, S.J.; Wells, J.J.; Kansa, E.C.; Kansa, S.W.; Myers, K.N.; DeMuth, R.C.; White, D.A. Sea-Level Rise and Archaeological Site Destruction: An Example from the Southeastern United States Using DINAA (Digital Index of North American Archaeology). PLoS ONE 2017, 12, e0188142. [Google Scholar] [CrossRef]
  63. Howey, M. Harnessing Remote Sensing Derived Sea Level Rise Models to Assess Cultural Heritage Vulnerability: A Case Study from the Northwest Atlantic Ocean. Sustainability 2020, 12, 9429. [Google Scholar] [CrossRef]
  64. Casana, J.; Laugier, E.J. Satellite Imagery-Based Monitoring of Archaeological Site Damage in the Syrian Civil War. PLoS ONE 2017, 12, e0188589. [Google Scholar] [CrossRef] [PubMed]
  65. Campana, S.; Sordini, M.; Berlioz, S.; Vidale, M.; Al-Lyla, R.; al-Araj, A.; Bianchi, A. Remote Sensing and Ground Survey of Archaeological Damage and Destruction at Nineveh during the ISIS Occupation. Antiquity 2022, 96, 436–454. [Google Scholar] [CrossRef]
  66. Barker, A.; Lazrus, P.K. All the King’s Horses: Essays on the Impact of Looting and the Illicit Antiquities Trade on Our Knowledge of the Past; University Press of Colorado: Boulder, CO, USA, 2012; ISBN 978-1-64642-511-2. [Google Scholar]
Figure 1. Ultra-violet, visible, and infrared light with approximate wavelengths in nanometers (nm). The spectral range for sensors on Landsat 8, ASTER, and WorldView-3 satellites are illustrated, as well as the range covered by the Resonon Pika IR+ used in this study.
Figure 2. Resonon Pika IR+ sensor mounted on a DJI Matrice 600 drone (left); drone in flight during a survey near Mesa Verde, Colorado (right). The SWIR sensor requires numerous custom modifications to the drone, and its large size and weight restrict flight times to 13–15 min.
Figure 3. Artifact types used in this experiment include (from upper left): (1) black/dark-grey dacite chert flakes, (2) white novaculite chert flakes, (3) red/mahogany obsidian flakes, (4) black obsidian flakes, (5) glazed whiteware ceramic sherds, (6) plain redware ceramic sherds, (7) copper sheet-metal squares, (8) steel sheet-metal squares, and (9) fired brick fragments from a 19th-century building foundation.
Figure 4. The experimental setup used in this study, as seen from a drone during data collection. Artifacts were laid out, alongside a control reflectance tarp and ground control points, on both gravel (infield) and grassy areas of a baseball diamond at Tenny Park, Hanover, NH.
Figure 5. Experimental drone SWIR imaging with artifacts laid out in ordered lines, with numbers in the figure corresponding to artifact type. (A) Visible light orthomosaic; (B,C) SWIR imagery in two different band combinations; (D) a smoothed, first-derivative image produced using a Savitzky–Golay filter.
Figure 6. Mean spectral reflectance profiles of all artifacts used in this study as well as background values for green grass, dead grass, and gravel/soil, as derived from AOIs on processed imagery from the Resonon sensor. Note that the flat area on all spectrograms in the range of 1350–1450 nm is due to atmospheric absorption.
Figure 7. A classified SWIR image of the experimental plot shown in Figure 5, with numbers in the figure corresponding to artifact type. In this case, we use a spectral angle mapper (SAM) classification on the first-derivative image (Figure 5D), successfully plotting the location of numerous artifact types including plain redware, glazed whiteware, white chert, and brick samples.
Figure 8. Mean spectral reflectance values for all artifacts used in this study, derived from measurements with a handheld spectrometer. Differences in lighting and sensor technology make direct comparisons between these data and those derived from aerial observations problematic.
Table 1. Flight parameters and sensor settings used during experimental data collection.

Parameter                      Flight Configuration
Elevation AGL (m)              28
Nominal Ground Speed (m/s)     1
Swath Width (m)                13
Side Distance (m)              5
Overshoot                      8 m at 4 m/s
Theoretical GSD (cm)           1.7
Wavelength Range (nm)          900–1700
Cross-Track Pixels             640
Number of Spectral Bands       336
Frame Rate (Hz)                60
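For a push-broom hyperspectral sensor, the two axes of the ground sample distance (GSD) are set by different parameters: along-track sampling is ground speed divided by frame rate, while cross-track sampling is swath width divided by the cross-track pixel count. A quick check with the Table 1 values shows that the stated 1.7 cm theoretical GSD matches the along-track calculation:

```python
# Push-broom sensors capture one image line per frame, so the two GSD
# axes are governed by different parameters (values from Table 1).
speed_m_s = 1.0       # nominal ground speed
frame_rate_hz = 60.0  # lines captured per second
swath_m = 13.0        # cross-track ground footprint at 28 m AGL
pixels = 640          # cross-track pixels per line

along_track_cm = 100 * speed_m_s / frame_rate_hz  # ~1.7 cm: Table 1's GSD
cross_track_cm = 100 * swath_m / pixels           # ~2.0 cm

print(f"{along_track_cm:.1f} cm along-track, {cross_track_cm:.1f} cm cross-track")
```

This coupling is why the table pairs a very low ground speed (1 m/s) with the 60 Hz frame rate: flying faster would stretch the along-track GSD and degrade resolution.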
Table 2. Error matrix for spectral angle mapper (SAM) classification (see Figure 7).
Reference Data

Classified Data   Copper   Brick   Red Ware   White Ware   White Chert   Grass   Soil   Black Obsidian   Red Obsidian   Black Chert   Steel
Copper1 32 1
Brick 211273
Red Ware 2 1 2
White Ware 3 1
White Chert 1 1
Grass311 191 1222
Soil 352221
Unclassified 1031
Column Total44444112464444
Overall Accuracy: 68.5%; Kappa: 0.515
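Both summary statistics in Table 2 follow directly from the error matrix: overall accuracy is the diagonal sum divided by the grand total, and Cohen's kappa discounts that agreement for what would be expected by chance. A minimal sketch for a square confusion matrix (Table 2 also carries an Unclassified row, which would need separate handling; the toy matrix in the example is hypothetical, not Table 2's data):

```python
import numpy as np

def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa for a square confusion matrix
    whose rows are classified labels and columns are reference labels."""
    m = np.asarray(matrix, dtype=float)
    total = m.sum()
    observed = np.trace(m) / total  # overall (diagonal) accuracy
    # Chance agreement: dot product of row and column marginals.
    expected = (m.sum(axis=1) @ m.sum(axis=0)) / total**2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa
```

For example, the 2x2 matrix [[8, 2], [1, 9]] yields an overall accuracy of 0.85 and a kappa of 0.70; kappa is always the lower of the two because it subtracts the agreement a random classifier would achieve.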
Share and Cite

Casana, J.; Ferwerda, C. Drone-Acquired Short-Wave Infrared (SWIR) Imagery in Landscape Archaeology: An Experimental Approach. Remote Sens. 2024, 16, 1671. https://doi.org/10.3390/rs16101671