Review

A Collection of Best Practices for the Collection and Analysis of Bioacoustic Data

1 Sea Mammal Research Unit, Scottish Oceans Institute, University of St Andrews, Fife KY16 8LB, UK
2 School of Aquatic and Fisheries Sciences, University of Washington, Seattle, WA 98105, USA
3 Biology Department, Carthage College, Kenosha, WI 53140, USA
4 Psychology Program, CLASS Department, New Mexico Institute of Mining and Technology, Socorro, NM 87801, USA
5 Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY 40506, USA
6 Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
7 Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, 8000 Aarhus C, Denmark
8 Department of Biology, University of Massachusetts Amherst, Amherst, MA 01003, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12046; https://doi.org/10.3390/app122312046
Submission received: 1 September 2022 / Revised: 20 November 2022 / Accepted: 22 November 2022 / Published: 25 November 2022
(This article belongs to the Special Issue The Analysis and Interpretation of Animal Vocalisations)

Abstract

The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species’ presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of “dos” and “don’ts”, we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the ‘who’, ‘where’ and ‘how many’ of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator–prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a broader push to standardize best practices across the subareas we have highlighted, in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.

1. Introduction

The majority of communication signals in nature are used by animals to mediate social interactions with other members of their species [1,2,3]. Acoustic signalling in particular enables conspecifics to coordinate their activities over long distances or where sight-lines are limited [4,5,6]. For example, noctules (Nyctalus noctula) produce loud, low frequency social calls to broadcast their position when roosting in tree cavities. This allows them to maintain group cohesion and spatial associations both within and between social groups over considerable distances [4]. Acoustic signals are also effective in that they can be modulated rapidly over time, thus allowing animals to transmit real-time information about changing states or contexts [1]. In some species, such as bats, dolphins, and shrews, acoustic signals (i.e., echolocation) are also used to perceive and map out the environment when vision provides limited input [7,8,9,10,11].
Increasingly, improvements in the technology available to record and process acoustic data have provided scientists with new methods to (1) resolve previously unanswered questions about population biology [12,13], ecology [14], and behavior [15,16], (2) document patterns of acoustic behavior over greater temporal and geographic scales and/or at higher resolutions [16,17,18], (3) improve the response time and accuracy of monitoring studies [19], and (4) reduce the level of disturbance to the target species or ecosystem during data collection. Acoustic data are relatively cost-effective to obtain compared with many other sampling methods, and advances in automated data processing are increasing the feasibility of gathering and analyzing large acoustic datasets. These advances, among others, have sparked a rapid increase in the number of studies performed each year using acoustic methods (Figure 1).
One major area of acoustic research focuses on variation within species, often with the goal of identifying the core vocal repertoire of a species or population, in terms of both the acoustic characteristics and the functions of vocalizations [20,21,22,23]. Vocalizations can vary with demographic features such as sex, age, and breeding status [24,25], behavioral states [26,27,28,29], social structures [30,31], and even species morphology [32,33,34,35]. More broadly, acoustic methods can be used to investigate topics related to population structure such as distribution and speciation across habitats or regions [12,36,37,38,39,40], and population densities [41,42,43,44].
A second major realm of recent interest in acoustics concerns the coordination of cooperative and competitive behavior, including feeding, aggression, and mating [1,3,15,45]. In these contexts, acoustic data can be used to investigate cognition [46,47], vocal learning [48,49,50,51], and the role of communication patterns and environmental pressures in shaping vocal signals and repertoires over evolutionary timescales [52,53,54,55].
Yet another major area of interest concerns acoustic interactions in the context of ecosystem biodiversity, which can be studied via passive acoustic monitoring [42,56,57,58,59]. Recently, inter-species variation in the structure of acoustic signals has been used to create biodiversity indices representing species richness [20,60,61,62,63], with one goal being to assess the impacts of environmental degradation on biodiversity [64]. As we develop the capacity to use acoustic data to monitor multiple species at once, sometimes combining passive and active acoustic methodologies, researchers have begun to use acoustic data to examine predator–prey interactions, both at broad and fine scales [65,66,67].
Our main aim in this paper is to bring together ideas and perspectives regarding the value and application of bioacoustic data and analysis for studies in population biology, ecology, and animal behavior. The authors represent a diverse group of bioacousticians in fields within the life and statistical sciences, and convened for a 2018 workshop on Bioacoustic Structure devised by Eric Archer and Shannon Rankin, sponsored by the National Institute for Mathematical and Biological Synthesis (NIMBioS). Of the many topics addressed in our working group, a point that emerged was that the field of bioacoustics is sprawling, cutting across many areas and taxa, and that even though we all work in bioacoustics, there are sub-areas with which each of us was unfamiliar, or of which we were entirely unaware. Commonalities emerged, however, as we discussed challenges inherent to bioacoustic research and considered best practices for addressing these challenges. This prompted us to take on the challenge of generating a single document that captures our excitement regarding the possibilities of bioacoustics, writ large.
Towards this end, in this paper we review a diverse set of case studies that we find particularly compelling, drawn from six representative topics commonly addressed using bioacoustic data. Our overarching goal is to demonstrate the utility of, and suggest some best practices for, using bioacoustic approaches to study topics broadly concerning the ecology, evolution, and behavior of vocalizing species. The first three topics address the use of passive acoustic data to describe and quantify vocal species diversity at scales ranging from subpopulation to ecosystem: species identification [58], population structure [68], and ecosystem biodiversity [69]. The next two topics concern the evolution of signal structure, addressing cultural [52] and ecological [70,71] factors that shape acoustic signals. The final topic covers the use of acoustic data to examine broad-scale predator–prey interactions [72]. Within each topic, we identify best practices for collecting, analyzing, and interpreting acoustic data, and discuss methods to avoid common pitfalls that can hamper acoustic data interpretation.

2. Species Identification

Monitoring animal species in the wild, which is critical to many studies in biology, is undertaken with increasing frequency using passive acoustic methods (sampling animals by recording their sounds, without further engagement). Some taxa in which such methods have been applied include birds [42,73], marine mammals [43,58,74], bats [59,75,76], amphibians [56,77], insects [78], and primates [57]. A first step in analyzing passive acoustic recordings is to identify which species produced each sound detected. When sounds cannot be readily identified visually (using spectrograms or other representations of sound) or aurally, researchers use statistical classifiers or machine learning algorithms [79,80,81,82,83]. A growing number of bioacoustic analysis and passive acoustic monitoring software tools now integrate machine learning methods directly into the processes of call classification and species identification [83,84,85].
Species classification algorithms are trained using datasets of acoustic recordings in which the identities of the species producing the recorded sounds are known. Many species have calls that are difficult to distinguish from those of other species, so decisions about species identity must often be validated with visual observations [86]. This can be difficult to do, particularly when recorders are left in remote locations or species exhibit cryptic behavior. As a result, visual validation of species identities is sometimes not performed, which leads to uncertainty in the validity of some tools for species identification. When using a species classifier, it is crucial to ensure that the classifier was trained using data that can unequivocally be attributed to the species included in the classifier. In addition, when compiling training and testing datasets, a sufficiently large set of recordings should be collected to capture the breadth of temporal, environmental, geographic, and behavioral variability in the vocalizations of each species to be classified. Furthermore, care must be taken when selecting the samples used to train the classifier and those used to test its performance within a dataset. For example, including vocalizations produced by a given group of animals in both the training and testing datasets can skew the results due to similarities in vocalizations produced by particularly vocal individuals in the group. If such similarities exist, this approach will provide an unrealistically high estimate of classifier performance. Therefore, it is important to ensure independence of training and testing datasets by grouping all calls from any given individual into just one of the two datasets, when possible.
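As a concrete illustration of this kind of group-wise independence, the sketch below uses scikit-learn's GroupShuffleSplit to keep every call from a given group (e.g., a school or an individual) on only one side of a train/test split. The feature matrix, labels, and group identifiers are random placeholders, not data from any study discussed in this review.

```python
# Minimal sketch of a group-aware train/test split; all data are hypothetical.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))            # 600 calls x 10 acoustic features
y = rng.integers(0, 3, size=600)          # species label for each call
groups = rng.integers(0, 40, size=600)    # school or individual ID for each call

# GroupShuffleSplit assigns whole groups to either the training or the test set,
# so no school contributes calls to both.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))

assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```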
When using existing species classifiers on novel data, it is important to consider how and where the classifier training data were collected. Recorder characteristics such as sampling rate, sensitivity, and placement in the environment can all affect data quality and therefore impact classifier performance [87,88,89]. The same can be true of acoustic analysis parameters such as window length, window type, and step-size for spectrogram computation. Another factor that can impact classifier outcomes is geographic location. Many species exhibit geographic variation in their signals [90,91,92,93], and different locations often comprise different assemblages of species. Because of these factors, a classifier trained using data from one location will not necessarily be accurate when applied to data collected in a different location.
An example of a study that took these issues into consideration to produce a robust acoustic classifier is Rankin et al. [58]. These authors trained a random forest classifier to identify, based on acoustic parameters, five delphinid species that inhabit the California Current ecosystem. The training dataset for this classifier comprised vocalizations recorded during a 4.5-month shipboard visual and acoustic survey and included 153 different schools of dolphins recorded across a variety of behavioral states and times of day. The samples used for analysis included only schools that had visual confirmation of species identity, were composed of a single species, and had no other species within range of the hydrophone. A total of 1000 manually validated calls per species were extracted from these recordings, resulting in a large dataset to be used for training and testing the classifier. The recordings were all made using the same recording platform (a towed hydrophone array) to ensure sampling consistency, and a sampling rate (500 kHz) sufficiently high to capture each species’ entire range of vocal frequencies. To reduce the likelihood that vocalizations from given individuals would be included in both training and testing datasets, all vocalizations from each individual school were included in only one or the other dataset. This classifier performed well on the test dataset, classifying 84% of schools to the correct species. Because the training and test datasets were collected using consistent methods and drew from a sample of recordings large enough to capture the temporal and behavioral variability of all five species within the study area, the resulting species classifier can now be applied with confidence to document the presence and movements of these five species in the California Current. One caveat, though, is that the Rankin et al. [58] classifier was trained on data collected with a specific platform, a towed hydrophone array, which could in theory affect the structure of the training data samples. Thus, it is not yet clear whether the classifier could be applied reliably to different locations or to data collected using different sampling methods. Before doing so, it would be helpful to test the classifier on a visually validated dataset to determine whether its performance is consistent under different conditions.
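The general shape of such an analysis, i.e., a random forest trained on per-call acoustic measurements with a school-level decision made by majority vote over that school's calls, can be sketched as follows. This is not the actual pipeline, feature set, or data of Rankin et al. [58]; every array below is a synthetic placeholder.

```python
# Sketch only: per-call random forest classification aggregated to school level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_calls, n_features, n_species = 5000, 12, 5
X = rng.normal(size=(n_calls, n_features))    # per-call acoustic measurements
y = rng.integers(0, n_species, size=n_calls)  # visually confirmed species per call
school = rng.integers(0, 150, size=n_calls)   # school each call was recorded from

# Hold out whole schools for testing so no school appears in both datasets.
test_schools = rng.choice(np.unique(school), size=30, replace=False)
test = np.isin(school, test_schools)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X[~test], y[~test])

# Classify each held-out school by majority vote over its call-level predictions.
call_pred = clf.predict(X[test])
for s in test_schools:
    votes = call_pred[school[test] == s]
    predicted_species = np.bincount(votes).argmax()
    # ...compare predicted_species with the school's confirmed identity to
    # compute the proportion of schools classified correctly.
```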

3. Population Structure

A second realm in which acoustic methods are making useful contributions concerns the study of population structure. Population structure includes analyses of age structure, demography, how individuals in a population are distributed across given areas, and how multiple populations might isolate or mix through interbreeding [94]. A key component of population structure is gene flow among populations [95]. Traditionally, studies of population structure have relied upon direct observations of animal movement, combined with genetic analyses to document patterns of interbreeding. While there is scientific merit to genetic analyses, these methods are not suitable in all cases. For instance, species that are rare or live in remote habitats may not be accessible for collecting genetic samples. In other cases, a species may be highly mobile or cryptic, and therefore difficult to observe or trap. For each of these cases, alternate methods are needed for population sampling. With advances in acoustic methods, researchers have turned to passive acoustic surveys to assess population structure for soniferous species when traditional observational methods or genetic analyses are unavailable or inconclusive (Figure 2) [36,96,97,98].
Many potential pitfalls in acoustic analyses of population structure parallel those that apply to acoustic species classification. First, as pointed out in the previous section, species need to be identified reliably based on their signals. Furthermore, just as species classifiers rely on data that are visually validated, most methods of testing for population structure require that the recordings come from source populations that have been validated using other methods such as visual observations or genetic analysis. If the populations being tested are geographically segregated, the location of the recorders can provide this validation if it can be shown that there is enough distance between recorders that sounds from the adjacent population will not be recorded. However, if the populations being tested are sympatric, it is important to use independent means (e.g., visual scans) to identify the focal population or group being recorded and to ensure that individuals from other populations are not present or nearby during the recording (see, for example, [31,98]). In addition to these considerations, the parameters used in the recording and analysis of acoustic data should be consistent across the populations being tested to avoid introducing variability as an artifact of the experimental design. As with species classification, some parameters to hold consistent, or at least attempt to control for, include sampling rate, recorder sensitivity, signal-to-noise ratio (SNR), time and seasonality of recordings, behavioral states of the animals, call classification categories, FFT sample size, and length of the spectrogram window.
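One simple way to enforce this kind of consistency is to pin the recording and analysis settings in a single, shared configuration object that every processing script imports. The sketch below is purely illustrative; the field names and default values are hypothetical, not recommendations.

```python
# Illustrative only: one place to hold the settings applied to every population.
from dataclasses import dataclass

@dataclass(frozen=True)
class AnalysisConfig:
    sampling_rate_hz: int = 96_000
    min_snr_db: float = 10.0
    fft_size: int = 1024
    window: str = "hann"
    overlap: float = 0.5
    spectrogram_view_s: float = 3.0   # length of spectrogram window shown to analysts

CONFIG = AnalysisConfig()  # import and reuse so all populations are processed identically
```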
A recent passive acoustic survey of Bigg’s killer whales (Orcinus orca) within the Gulf of Alaska aimed to characterize aspects of this population’s structure [68], following a genetic study in this species that suggested the presence in that region of as many as five potential subpopulations [99]. Sharpe and colleagues collected and compared 1575 calls from Bigg’s killer whales over a nine-year period, drawn from 49–773 encounters per population per year. These samples were sufficient to capture temporal (i.e., monthly and yearly) and behavioral variability within the populations. Because the putative populations were sympatric in their distributions, the research team needed to visually verify the identities of the animals they recorded. Although they used slightly different sampling rates (48 kHz and 44.1 kHz) across years, both sampling rates appeared to be sufficient to capture the entirety of the vocalizations being examined, so this difference likely did not impact the outcome of the analysis. Spectrographic parameters such as FFT size, overlap, and window type, as well as spectrogram viewing length, were held constant throughout data processing. All calls were assessed for quality based on SNR, and preliminary analyses were conducted using only high-SNR calls. Two methods for call classification were applied—manual and random forest [100,101]—and their results were then combined in a way that circumvented classification errors known to exist in both methods [79,102,103] and that may occur in call types that change or drift over time [104]. Qualitative and quantitative (e.g., structural and time-frequency values) measures were used to identify a total of 36 call types across the study area. After mapping their acoustic results onto the genetic outcomes, Sharpe et al. [68] determined that this region supports at least three distinct populations. The robust data collection and statistical approaches used in this study made it possible to account for behavioral or temporal variability, analyst bias, and artifacts caused by sampling or methodological variability as potential confounding factors driving the patterns observed in the data, thus strengthening the authors’ confidence in their data interpretation.

4. Ecosystem Biodiversity

A third area of inquiry that is benefiting from advances in bioacoustic methods, related to the prior two, concerns sampling of species richness. In particular, the development of acoustic biodiversity indices that tally species richness [105,106,107] is allowing for relatively rapid assessment and monitoring of ecosystem biodiversity. Researchers can link acoustic parameters such as spectral amplitude, amplitude variability, and number of frequency bands with biotic parameters such as species diversity and abundance or number of distinct vocalizations. This approach allows for the quick tracking of changes in species community composition related to factors such as seasonality [108], anthropogenic noise [107], invasive species [105], habitat loss, or climate change [109].
As with studies of species identification and population structure (above), it is helpful in acoustic studies of ecosystem biodiversity to consider the type and location of one’s recording platform, and to ensure that methods used to process data are appropriate for the sample at hand. If, for example, when generating spectrograms, FFT size is too large in relation to the recording sampling rate, some temporal resolution will be lost, given the necessary trade-offs in sound analysis between time and frequency resolution [110]. Additionally, data collected should capture temporal, ecosystem, behavioral, or geographic variability as appropriate to the study question. If the study aims to compare multiple ecosystems, data collection and processing methods should be consistent across all target regions.
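The trade-off can be seen directly by computing a spectrogram of the same synthetic signal with two different FFT sizes, as in the sketch below (the signal and settings are arbitrary illustrative values).

```python
# Minimal sketch of the time/frequency resolution trade-off in spectrograms.
import numpy as np
from scipy.signal import spectrogram

fs = 48_000                               # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 4_000 * t)         # synthetic 4 kHz test tone

for nfft in (256, 4096):
    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=nfft, noverlap=nfft // 2)
    print(f"FFT size {nfft:4d}: frequency bins ~{f[1] - f[0]:6.1f} Hz apart, "
          f"time steps ~{(tt[1] - tt[0]) * 1000:5.1f} ms apart")
```

With the sampling rate fixed, the larger FFT narrows the frequency bins but widens the time steps, and vice versa.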
A caveat that is particularly relevant for the study of ecosystem biodiversity is that, by default, many acoustic biodiversity indices will integrate over all types of acoustic signals detected. If, however, a given species produces distinct signals in different behavioral states, these different signals may not be attributed to a single species and the resulting estimate of species diversity may be artificially elevated. In addition, many recordings include extraneous sounds, both biological (of other species) and ambient, which if not correctly classified could further inflate estimates of species diversity. For these reasons, verification of the vocal repertoires of individual species, only possible through direct field observations (see above) or captive studies, is critical for the valid application of bioacoustic data to species richness and abundance calculations.
A related challenge is to account for variation in probabilities of signal detection, given that some types of sounds will be harder to detect than others [111]. The behavioral and environmental parameters that impact detection probabilities are unknown for the signals of many species; they include the likelihood of occurrence of different types of vocalizations; vocal frequency, duration, and amplitude; the rate at which different signals attenuate; and the dynamic local environment effects on sound propagation. An accurate assessment of acoustic biodiversity in any ecosystem, and more specifically the confidence we can place on indices generated from acoustic data, will rely on our ability to reliably detect those species that are present.
In a study of avian biodiversity in a temperate woodland, Depraetere et al. [69] generated estimates of biodiversity using acoustic as well as traditional methods and determined whether results from the two sets of methods matched. Depraetere et al. also asked whether the acoustic indices could be used to track daily biodiversity variation. Towards these ends, the researchers collected acoustic recordings at three woodland sites over two three-hour periods at dawn and dusk for 73 consecutive days. Acoustic recorders were positioned at least 300 m from each other to avoid overlap in recordings, each at a height of 2 m with microphones pointed horizontally. Acoustic richness (AR), representing traditional alpha diversity (which measures the diversity within an area), was calculated as a rank statistic, by estimating the samples’ spectral and temporal entropy. Acoustic dissimilarity, representing traditional beta diversity (which measures differences between areas), was calculated by estimating the spectral and temporal dissimilarity among sites in samples recorded at the same time on the same day. Depraetere et al. [69] then compared these acoustic metric outcomes to those generated from more traditional biodiversity metrics estimated using species inventories based on aural identification. They found that the results of the AR analysis matched those obtained using aural identification of species in this habitat and that the two acoustic indices (AR and acoustic dissimilarity) provided complementary information. Based on these results, the authors concluded that acoustic indices can be used to rapidly assess relative spatial and diel variability in biodiversity in temperate woodlands.
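To make the idea of entropy-based indices concrete, the sketch below computes a simplified temporal entropy (evenness of the amplitude envelope) and spectral entropy (evenness of the spectrum) and combines them. This is in the spirit of, but not identical to, the AR and dissimilarity indices used by Depraetere et al. [69], and the “recording” here is synthetic noise used purely as a placeholder.

```python
# Simplified entropy-based acoustic index; not the exact formulation of [69].
import numpy as np
from scipy.signal import hilbert

def shannon_evenness(values):
    """Shannon entropy of a non-negative vector, scaled to [0, 1]."""
    p = values / values.sum()
    n = p.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(n))

fs = 44_100
x = np.random.default_rng(2).normal(size=fs * 10)   # placeholder 10 s recording

Ht = shannon_evenness(np.abs(hilbert(x)))           # temporal entropy of the envelope
Hf = shannon_evenness(np.abs(np.fft.rfft(x)))       # spectral entropy of the spectrum
H = Ht * Hf                                         # combined index in [0, 1]
print(f"Ht = {Ht:.3f}, Hf = {Hf:.3f}, H = {H:.3f}")
```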
Before drawing inferences based on their results, the authors of this study were careful to address sources of bias where possible, and to constrain their research questions to those that could be answered using the type of data collected. They controlled for instrument bias by using the same recorders at each site. Environmental bias was normalized across sites by placing the recorders at the same height and in the same position relative to each other, and by not using recordings from days when bad weather (wind, rain) degraded sound recording quality, which would have limited the ability to conduct accurate acoustic estimates of diversity. Non-signal-related noise was removed using both a bandpass and an SNR filter. Temporal variability was accounted for by collecting a large sample size for both morning and evening time frames.
These authors also offer recommendations for improving the accuracy of acoustic biodiversity metrics. Notably, acoustic samples will capture not just the focal species but also other types of sounds, both biotic and abiotic, which need to be accounted for before processing. Additionally, a trained observer may be able to recognize multiple call types belonging to a given species, and thus avoid artificially inflating the acoustic measure of species diversity. As an illustration of this issue, the authors mention that some amphibians were recorded during the evening at one site, thus spuriously inflating the inferred count of bird species diversity at that site. This observation highlights the importance of understanding the composition of an ecosystem before interpreting acoustic biodiversity analyses, to avoid unfounded inferences. It is also important to note that even trained observers cannot always identify all call types to species. In these cases, it can be important to employ a statistical or machine learning classifier as described in Section 2, Species Identification. In either case, one must quantify the reliability and accuracy of the methods used to identify species, and uncertainty in species identification should be built into the methods used to estimate biodiversity.
Finally, traditional analyses of species richness often account for varying species-specific probabilities of detection [112,113]. Acoustic detection probabilities are currently unknown for many species, and this will be a productive topic for further exploration. In particular, it will be useful to characterize the impacts of differential likelihoods of encountering different species, how often or systematically they produce acoustic signals, the structure of their different types of vocalizations, and dynamic local environment effects on sound attenuation or propagation. All of these parameters affect probabilities that any given species will be detected, and thus impact the acoustic assessment of biodiversity.

5. Signal Structure and Evolution: Cultural Effects

Bioacoustic analyses can also provide insights into questions about signal evolution, that is, the patterns and processes that drive changes in signal structure over time. Studies of acoustic signal evolution have examined a variety of factors. These include the role of morphology, environmental pressures, conspecific interactions, and neutral processes in shaping acoustic communication and have been conducted across a wide variety of taxa including birds, marine and terrestrial mammals, insects, and fishes (e.g., [114,115,116,117,118]). Disentangling the factors that drive acoustic signal evolution can be difficult but is important for understanding the drivers of cultural and ecological change within and among species (e.g., [119,120]).
One approach to studying the evolution of acoustic signals, in species that learn their vocalizations via imitation, focuses on cultural evolution: that is the interplay among use, learning, and transmission of signals over multiple generations (e.g., [121,122,123,124,125]). A major challenge in studying cultural evolution is to try to isolate its impact from other potential drivers of acoustic signal evolution (e.g., morphology, sexual selection, environmental factors, or random drift) [117,118,126,127,128,129]. To isolate the effect of any given driver, a researcher must either control for all other potential factors or gather a sample that is sufficiently robust to capture all of the variability in other potential factors. In particular, it can be difficult to account for pre-existing variability in the input signal (e.g., songs of tutors), and the effect of that variability on the output signal (e.g., songs of tutees). For experimental studies of song learning and cultural evolution, as in other fields, artifacts of experimental methodology can affect how results are interpreted, so one must ensure that the recording and signal processing parameters employed are appropriate to the question being asked and consistent across all experimental groups in the study.
In a set of experimental studies on song learning and cultural evolution conducted with zebra finches, Fehér et al. [52,130] ran laboratory experiments where all parameters were held constant except the song learning conditions. One advantage of this approach was the ability to control which input the first generation of learners received, thus allowing the researchers to systematically map the input signal to the end-product signal. In the 2009 study [130], first-generation birds were isolated and did not have a song tutor. As expected, these birds ended up producing aberrant songs. Aberrant-singing birds were then used as tutors for newly hatched, unrelated finches, and the procedure was repeated over several experimental generations. The authors found that juveniles imitated their tutor but changed some characteristics of their songs so that over three to four generations songs evolved towards the wild-type. Matching unrelated birds as tutors and tutees in this way enabled Fehér et al. [130] to isolate song learning from genetic inheritance, so that the resulting patterns of signal evolution could be attributed with confidence to the cultural transmission of signals across generations.
Following this, Fehér et al. [52] developed a set of experiments to quantify the effect of conspecific tutoring on song learning. In this study, vocal production was compared among three groups of zebra finches, all housed and recorded individually. Birds in an isolate group lacked a song tutor, and as described above for the 2009 study [130], developed songs that were atypical in structure. Birds in a wild-type group were tutored with recordings of wild-type adult songs. Finally, birds in a self-tutored group were not provided with tutors or external models (like the isolate birds) but were provided the opportunity to hear their own recently produced songs, played through speakers. Self-tutored birds developed songs that were much closer in structure to songs of wild-type birds than of isolate birds, suggesting that the experience of hearing songs, whether wild-type or improvised in the laboratory, favors the development of species-typical signals. There were, however, limits to the efficacy of self-tutoring; in several metrics, the structure of self-tutored birds’ songs was closer to that of isolate birds than of wild-type birds. In both of these studies, the experimental design employed (tests of song learning) allowed the researchers to isolate the potential power of cultural inheritance as a factor driving acoustic signal evolution.

6. Signal Structure and Evolution: Ecological Effects

Another productive line of research in acoustic communication focuses on how signals are influenced in structure and function by variation in ecological parameters such as resource availability, habitat structure, and conditions for rearing offspring [119,131,132,133]. Ecological effects on signal structure can be direct, such as when they favor signals that transmit effectively through particular habitats [111], or indirect, such as when patterns in resource availability impact social, sexual, and life history parameters involved in communication [134,135]. When considering acoustic signals such as echolocation clicks that are produced for navigation, object detection and localization, the environment will typically impose particularly strong effects on the characteristics of produced sounds [136,137]. For example, in bats, calls produced by different species while in flight resemble each other, as do calls while tree-climbing or marauding over a water surface [137].
In these types of studies, as above for cultural effects, it can be tempting to ascribe observed variations in acoustic signals to external ecological causes without truly isolating the hypothesized driver. A firm demonstration of systematic variation in signal structure driven by particular ecological traits requires attention to within-treatment or within-location variation, with sample sizes sufficient to document this variation with confidence. In other words, as discussed previously, the data collected should capture the variability in all other potential environmental or ecological parameters in order to truly isolate the target parameter. True variation is thus revealed by relatively limited within-treatment variation in the acoustic signal across all other ecological parameters, as compared to between-treatment variation. Variation in signals can emerge as non-functional, indirect by-products of other processes such as cultural evolution or random drift in signal traits [55], which can be confused with the direct effects of environment-specific selective regimes. It is typically not feasible to control for all confounding factors within the scope of a single study, and in such cases, it is important to address those caveats and consider potential future work that would clarify the factors that drive signal evolution.
In addition to these issues, field studies on acoustic behavior are uniquely susceptible to oversights about the context and intentions of the signaller. Understanding these things requires carefully planned observational studies with large sample sizes or controlled experimental conditions, and even these can be open to interpretation. Capturing the full variability of signals being produced often requires multiple recorders to be placed at various systematic locations throughout the study area. It is also important to record extensive metadata about the acoustic context, including parameters describing the foraging space and annotation of the prey types being hunted [1]. This, however, can be difficult to achieve in cases where the signallers cannot be directly observed.
Some recent papers on ecological effects on signal structure have focused on signal differences related to highly distinct habitats and foraging spaces. For example, in a study of mountain chickadees, Branch and Pravosudov [70] tested whether song structure diverged among locations and whether females preferred local males, a pattern that would help parents produce offspring with locally adapted genes. The authors found that these birds’ songs indeed differed in structure among locally adapted populations, both by elevation and regionally across two mountain slopes [70]. A second, conceptually similar study in bats examined changes in echolocation characteristics at different flight altitudes [71] and showed that echolocation clicks became longer and lower in frequency, with narrower bandwidths, at higher altitudes. The authors hypothesized that this pattern might reflect altitude-specific variations in factors affecting acoustic attenuation or the density of prey items.
The authors of these studies standardized their recording equipment, the placement of their recorders, and the timing of their recordings across locations. Gillam et al. [71] collected metadata about the data collection environment that could be used to remove non-target data. In both studies, the sample size was large enough to capture variability within each treatment (elevation and altitude, respectively), so that differences among the treatment groups could be interpreted as a result of elevation and altitude, rather than of other environmental or behavioral factors. As in other studies highlighted here, data processing included removing low-quality signals from the final dataset based on SNR, and spectrogram parameters, sample sizes, and acoustic metrics were standardized across groups. In their discussion of the drivers of changes in echolocation characteristics at altitude, Gillam et al. [71] included the caveat that several potential drivers of acoustic divergence could not be excluded based on their dataset (e.g., height-specific wind speed and direction), and suggested that future studies undertake more comprehensive altitudinal profiles of environmental conditions during the study period. Similarly, while Branch and Pravosudov’s [70] data on mountain chickadee song divergence were consistent with local adaptation, they recognized that the patterns observed in their study could also have been impacted by additional factors, such as the structure of vocal geographic variation by elevation (clinal gradients versus discrete dialects) and neutral drift caused by vocal learning inaccuracies. The researchers thus recommended that future studies examine relationships between gene flow and acoustic signals across a gradient of elevations, rather than simply between a high and a low elevation.

7. Predator–Prey Relationships

One last application of bioacoustic analyses that we now consider concerns relationships between predators and prey. Some recent progress in this area has built on the integration of acoustic analyses with complementary data types to explore research questions and/or ecosystem dynamics that were previously beyond the scope of what could be studied. Predator–prey relationships have long been a central topic of ecological study, but in many environments these relationships have remained uncharacterized due to the difficulty of observing them with available technology (e.g., deep ocean, aerial, or extremely remote/densely vegetated environments), especially as predator–prey interactions are often rapid and occur at unpredictable times and locations. In many cases, however, acoustic analyses can be combined with other remote sensing technologies, such as satellite tagging or active acoustic monitoring (in which a signal is transmitted and the echo is recorded and analyzed), to allow scientists to monitor predator and prey behavior simultaneously and thus potentially document predation events.
In one recent example, Lawrence et al. [72] applied a combination of passive and active acoustic methods to describe the large-scale distributional relationships between harbor porpoise and their prey in an enclosed sea off the coast of the United Kingdom. Harbor porpoise are difficult to see at the surface of the ocean and are therefore difficult to survey using visual methods [138]. However, they vocalize frequently [136], producing clicks that are well documented and easily distinguished from those of other cetaceans [139,140]. This makes them well suited for an analysis of their distribution using passive acoustic data. In the passive acoustic portion of the study, Lawrence et al. [72] surveyed the Clyde Sea using a towed array of two hydrophones. They ensured that their survey design covered the entire ecosystem in a representative way and surveyed only during daylight hours to minimize the effect of diel variability in the acoustic behavior or depth of the animals. Using two hydrophones allowed them to determine the bearing angle to the sound source, which aided in discriminating among echolocating individuals. Omnidirectional hydrophones were used, which reduced biases related to the direction of the sound source. Hydrophone sensitivity was calibrated to reduce equipment bias, and because harbor porpoise vocalize using only high-frequency echolocation clicks, a high-pass filter was used to reduce bias from noise in the lower frequency bands.
A significant challenge in combining passive and active acoustic monitoring techniques is that signals emitted by an active acoustic source will be detected by passive acoustic recorders. To address this challenge, Lawrence et al. [72] developed a classification algorithm using the open-source PAMGuard software [141] to distinguish between harbor porpoise clicks and the narrow-band 38, 50, 120, and 200 kHz pulses emitted at regular intervals by the active acoustic systems used by the researchers. Signals classified as harbor porpoise clicks were subject to an additional manual review based on their waveforms, power spectra, and Wigner plots to ensure they were classified correctly. This two-step classification and review process reduced potential false detections caused by signals emitted by the active acoustic source or other sources of impulsive sound. For the purposes of this study, the authors devised a method to rate their confidence in the classification of a click train as harbor porpoise, and only used click trains rated as “certain click trains” to indicate the species’ actual presence. Individual encounters were defined when more than 90 s passed between click trains, which is the time it took the ship to travel 300 m, the maximum known distance a harbor porpoise echolocation click will travel before it fully attenuates. Encounters were also separated if they occurred within 90 s of each other but at different bearing angles.
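A schematic version of this encounter definition, i.e., splitting a sequence of classified click trains whenever more than 90 s elapses or the bearing angle changes markedly, is sketched below. The 20-degree bearing threshold is an arbitrary illustrative value, not one reported by Lawrence et al. [72], and the detection times are hypothetical.

```python
# Sketch of grouping click trains into encounters; thresholds are illustrative.
def split_encounters(times_s, bearings_deg, max_gap_s=90.0, max_bearing_diff_deg=20.0):
    """Return a list of encounters, each a list of click-train indices."""
    encounters, current = [], [0]
    for i in range(1, len(times_s)):
        gap = times_s[i] - times_s[i - 1]
        turn = abs(bearings_deg[i] - bearings_deg[i - 1])
        if gap > max_gap_s or turn > max_bearing_diff_deg:
            encounters.append(current)
            current = [i]
        else:
            current.append(i)
    encounters.append(current)
    return encounters

# Hypothetical detections: seconds since survey start and bearing to source (deg).
times = [10, 40, 75, 300, 330, 345, 360]
bearings = [95, 97, 96, 40, 42, 41, 43]
print(split_encounters(times, bearings))   # -> [[0, 1, 2], [3, 4, 5, 6]]
```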
Using these methods, Lawrence et al. [72] found a positive relationship between porpoise density and the density of their pelagic prey at larger scales (5+ km), but this relationship was not significant at smaller scales. By planning a comprehensive spatial survey, ensuring data were collected during periods reflecting consistent biological behaviors, and adhering to consistent parameters for acoustic data collection and processing, the authors of this study ensured that passive acoustic data were collected and interpreted in an accurate and representative manner. It is interesting to note that in choosing to use only “certain click trains” in their downstream analyses of the distribution of harbor porpoise, i.e., those signals they were most confident were true porpoise click trains, the authors probably erred on the side of underestimating the presence of harbor porpoise in the study area. This is often considered the “conservative” approach for academic studies; however, it is worth considering that for certain applications, for example when management or conservation policy is being developed based on the presence or absence of a species in the region, it may be beneficial to use an approach that errs toward overestimating presence. Relatedly, the authors might have improved their study by using three or more hydrophones, so as to triangulate the absolute positions of echolocating animals. This would remove some of the uncertainty in using acoustic data to distinguish among individuals based on the time difference or bearing angle between click trains.

8. Discussion

Bioacousticians benefit from adopting best practices in study design and data collection to rise to a variety of challenges in interpreting animal acoustic communication. In the preceding sections we have discussed six different studies that focused on a variety of questions that can be investigated using acoustic data. While the studies we have included address a range of questions using a range of methodologies, some commonalities emerge as best practices that should be followed as closely as possible in the collection, processing, and interpretation of bioacoustic data (Figure 3).
Several important themes emerged related to the sampling design and data collection phases of an acoustic study. In each of our case studies, we have pointed out the importance of collecting data that adequately capture variability in the acoustic behavior of the species of interest. Some species produce different types of sounds in different behavioral states [6,142,143,144], at different times of the day or year [145,146], or in different locations [38,147,148]. If these differences are not taken into account, the results of acoustic analyses can be misinterpreted. For example, as we pointed out in Section 4, Ecosystem Biodiversity, failure to account for differences in calls produced by a single species at different times could lead to over-estimates of species diversity. Sample sizes should also be carefully considered at this stage. This is particularly important for acoustic repertoires with high variability, such as the whistle repertoires of short- and long-beaked common dolphins, which comprise hundreds of different whistle types [149]. Sample sizes should be large enough to capture as much variability as possible, and to represent the different signal types adequately. Statistical analyses such as discovery curves can be useful tools in determining whether a given sample is sufficiently large to capture the variability in a target population [150,151].
Barriers to collecting sufficient acoustic data include challenges and logistics of carrying out field work (especially for expensive sampling protocols such as making recordings in remote locations or field work involving large ships), limitations on battery life of recording equipment, data storage space, and resources for analyzing large datasets. One strategy for overcoming these challenges is to use duty-cycling when making recordings; that is, to record for only certain periods during a given time window. For example, a duty-cycled recording regime could record for 1 min every 5 min, for 10 min every hour, or for 1 h every 4 h. However, such an approach can compromise the final sample in several ways including missed detections of rare sounds and incomplete representation of sound occurrence patterns. In addition, the effects of using duty cycles can vary by species due to differences in acoustic behavior, characteristics of the signals being produced, and propagation characteristics of the environment. Therefore, it is not always appropriate to use the same duty cycle for different species, in different habitats, or to answer different questions [152,153,154,155], a point that should be considered especially if a given recording sample will be used to examine multiple questions spanning multiple species. Before selecting a duty cycle it is advisable to examine a dataset of continuous recordings to evaluate the effect that different duty cycles will have on the detection of vocalizations of interest and thus the questions that can be answered using duty cycled data.
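The sketch below illustrates one way to run such an evaluation: take detection times from a continuously recorded dataset and ask what fraction would fall within the “on” portion of each candidate duty cycle. The detection times here are uniform random placeholders; real detections are typically clustered in time, which is exactly why this check matters.

```python
# Sketch of evaluating candidate duty cycles against continuous-recording detections.
import numpy as np

rng = np.random.default_rng(3)
day_s = 24 * 3600
detections = np.sort(rng.uniform(0, day_s, size=2000))   # detection times (s) over one day

# Candidate duty cycles: (recording-on duration, full cycle length), in seconds.
cycles = {
    "1 min every 5 min": (60, 300),
    "10 min every hour": (600, 3600),
    "1 h every 4 h": (3600, 4 * 3600),
}

for name, (on_s, period_s) in cycles.items():
    retained = np.mod(detections, period_s) < on_s   # detection falls in the "on" window
    print(f"{name:>18}: {retained.mean():.0%} of detections retained")
```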
Placement of microphones or hydrophones (transducers) is another important consideration. For example, if conducting a small-scale study of a targeted population [156,157,158], transducers should be placed in locations where they can adequately capture sounds produced by a representative sample of the population, and at appropriate times. If the population of interest is only in a certain location at some times of the day or year, one must consider whether placing a recorder in that location will bias results. On the other hand, if an acoustic study is more broad-scale, one must decide how many recorders are necessary. This decision should be based on the distance over which sounds of interest can be detected, which in turn will be influenced by the sounds produced (frequency, source level, etc.), the environment through which those sounds will travel, and the sensitivity of the recording equipment being used. Hand-in-hand with this is the need for balanced geographic coverage across study strata. If the study covers a large area, the researcher must consider how to achieve balanced coverage and/or whether certain areas within the larger area are more important to monitor than others. Finally, one should consider transducer placement with regard to the species of interest; for example, placing transducers at different heights in a tree or depths underwater will affect which species’ signals are recorded and what types of noise interference or propagation effects may result. The resulting placement should optimize the signal-to-noise ratio for the sounds of interest.
Another factor that needs consideration is that acoustic studies often involve the use of multiple transducers placed in different locations and/or at different times. If multiple transducers are being used, the extent of overlap among transducers must be evaluated. For example, in Section 4, Ecosystem Biodiversity, Depraetere et al. [69] positioned microphones at least 300 m from each other to limit the probability of detecting given sounds on multiple microphones. In other situations, for example when acoustic localization is desired, transducers should be placed so that target sounds are detected on multiple transducers. In one such example, in Section 7, Predator–Prey Relationships, Lawrence et al. [72] used hydrophones spaced 30 cm apart, which allowed acoustic localization of harbor porpoise clicks. When using multiple transducers in a study, it is important to ensure that they are time-synchronized and that they use the same sensitivities, frequency responses, high-/low-pass filters, and sampling rates.
Yet another consideration involves the frequency response of the transducers and the frequency range of the signals of interest. When making decisions about recording equipment, one must be familiar with the sounds of interest to ensure that the equipment is capable of capturing the entire frequency range required. For example, when recording the narrow-band high-frequency clicks produced by species such as harbor porpoises, it is necessary to use hydrophones with frequency responses extending to at least 150 kHz, and to record using a sampling rate of at least 300 kHz. Any signal with frequencies above one-half of the sampling rate, commonly called the Nyquist frequency [110], will be distorted by aliasing; therefore, for sounds to be accurately represented in recordings, a sampling rate that is at least twice the highest frequency of interest must be used. If equipment limitations necessitate a lower-than-optimal sampling rate, an analog anti-aliasing filter needs to be implemented to avoid aliasing from uncaptured higher-frequency components.
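The arithmetic behind these numbers is simple and worth making explicit; the helpers below are hypothetical utilities written for illustration, not part of any recording software.

```python
# Sketch of the Nyquist rule: sample at >= twice the highest frequency of interest.
def min_sampling_rate_hz(max_signal_freq_hz: float) -> float:
    """Lowest sampling rate that represents signals up to this frequency without aliasing."""
    return 2.0 * max_signal_freq_hz

def aliased_frequency_hz(f_signal_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a tone recorded at a sampling rate that is too low."""
    nyquist = fs_hz / 2.0
    folded = f_signal_hz % fs_hz
    return folded if folded <= nyquist else fs_hz - folded

print(min_sampling_rate_hz(150_000))           # clicks up to 150 kHz need >= 300 kHz sampling
print(aliased_frequency_hz(140_000, 192_000))  # a 140 kHz component appears at 52 kHz
```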
Once decisions have been made regarding the type and placement of recording equipment and the recording schedules, one must give careful thought to the supplemental data and observations (e.g., metadata) that will be collected along with the acoustic recordings. There are many factors that can affect the quality of acoustic recordings, the sounds that are detected, and the interpretation of the data collected, and it is crucial to document these factors along with all recordings where possible. The type of supplemental data required will vary with the aims and scope of each study. Supplemental data commonly includes information such as time and location of recordings, equipment and settings used, weather conditions, species present, and behavior exhibited. This is by no means an exhaustive list and prior to collecting acoustic data, researchers should spend time carefully considering the types of supplemental data required, how these data will be documented, and ways to best organize these data and link them to specific acoustic recordings [159].
Another recurring challenge when working with bioacoustic data collected in the field is the ability to identify species in the recordings with confidence. The gold-standard method for accomplishing this is to pair acoustic recordings with visual observations of the vocalizing animals [160,161]. However, in many cases it is not possible to obtain visual observations of all (or even any) sound producers during the recording period. In such cases, species must be identified based on their vocalizations. Some species produce calls that are distinctive and thus easily recognized (such as the fixed frequency relationships among notes in Black-capped Chickadees [146]), and for others, detailed call catalogs are available (e.g., killer whales [162,163]). In many cases, however, it can be helpful to utilize machine learning classifiers (see Section 2, Species Identification, and Section 3, Population Structure). When using machine learning methods for classification or detection, it is essential to design models and handle data carefully to avoid over-fitting and thus ensure that outcomes will be indicative of expected performance on future novel data. This requires separation of data into training, testing, and validation datasets. Models are trained on the training data, with the testing data used for evaluation during system design to adjust model parameters. Validation data should be reserved for a single final evaluation. To avoid over-fitting, so that results on the reserved validation data will be in line with the testing performance during model development, model complexity and the number of parameters should be selected carefully in accordance with the size of the training dataset.
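A sketch of such a three-way, group-aware partition is shown below, reusing the group-wise splitting idea from Section 2; the data, group labels, and split proportions are hypothetical placeholders.

```python
# Sketch of a group-aware train/test/validation split; all values are placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(4)
X = rng.normal(size=(3000, 8))
y = rng.integers(0, 4, size=3000)
groups = rng.integers(0, 60, size=3000)   # e.g., recording site or individual ID

# First reserve a validation set for a single final evaluation, then split the
# remaining data into the training and testing sets used during model development.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
dev_idx, val_idx = next(outer.split(X, y, groups))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
tr_sub, te_sub = next(inner.split(X[dev_idx], y[dev_idx], groups[dev_idx]))
train_idx, test_idx = dev_idx[tr_sub], dev_idx[te_sub]

for name, idx in (("train", train_idx), ("test", test_idx), ("validation", val_idx)):
    print(f"{name:>10}: {len(idx)} calls from {len(np.unique(groups[idx]))} groups")
```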
Once an acoustic dataset has been collected, effective data processing also requires application of best practices. Many data processing workflows include examination of spectrograms. The settings used to create spectrograms (e.g., FFT size, window length, windowing function) can have significant impacts on the visual appearance of sounds and thus affect how they are evaluated. For example, increasing FFT size results in a gain in frequency resolution but a loss in time resolution, and vice versa [110]. The optimal settings for a given project will depend on the frequencies being examined, the sample rate of the recordings, and the questions being asked. It is important to maintain consistent settings across an analysis and/or to carefully document when and why settings are changed.
In many cases, acoustic recordings are processed before analysis to minimize noise and/or exclude low-quality calls (e.g., calls that have low SNR, are masked by other sounds, overlap with each other, etc.). It is helpful to maintain consistency in these methods across recordings, as well as to consider the effects of pre-processing on the results of the analyses to be performed. For example, before removing low-quality calls from a dataset, one should consider whether this might skew the results. This could occur if lower-amplitude calls are produced by certain group members, or by individuals in certain behavioral states or locations relative to the transducer. Additionally, clear rules for evaluating the quality of a call should be developed and applied. For example, in Section 3, Population Structure, and Section 6, Signal Structure and Evolution: Ecological Effects, the authors of the case studies presented all clearly state that they did not include low-quality calls in their analyses [68,70,71]. Knowing which calls were omitted and how quality control decisions were made can affect how the results of any bioacoustic study are evaluated.
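A minimal version of such a rule might estimate SNR from a call clip and a nearby noise-only clip, keep only calls above a chosen threshold, and record what was dropped so the effect of the filter can be reported. The RMS-based SNR estimate and the 10 dB threshold below are illustrative conventions, not values taken from the cited studies.

```python
# Sketch of an SNR-based call quality filter; threshold and SNR convention are illustrative.
import numpy as np

def _rms(a: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(a))))

def snr_db(call: np.ndarray, noise: np.ndarray) -> float:
    """Estimate SNR (dB) from a call clip and a nearby noise-only clip of the recording."""
    return 20.0 * np.log10(_rms(call) / _rms(noise))

def filter_calls(calls, noise_clips, threshold_db=10.0):
    """Return indices of calls kept and dropped, so the filtering can be documented."""
    kept, dropped = [], []
    for i, (c, n) in enumerate(zip(calls, noise_clips)):
        (kept if snr_db(c, n) >= threshold_db else dropped).append(i)
    return kept, dropped

rng = np.random.default_rng(5)
noise_clips = [rng.normal(scale=1.0, size=4800) for _ in range(3)]
calls = [rng.normal(scale=s, size=4800) for s in (0.5, 5.0, 20.0)]
print(filter_calls(calls, noise_clips))   # low-SNR first call is dropped: ([1, 2], [0])
```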
Finally, when interpreting the results of an acoustic study, one would ideally rule out all sources of variability beyond the primary sources being investigated. In reality, this is usually not possible, and so it is critical to identify remaining sources of variability, evaluate their possible effects on the results obtained, and discuss the alternative data interpretations that these could lead to. For example, as discussed in Section 6, Signal Structure and Evolution: Ecological Effects, Branch and Pravosudov [70] state that the acoustic differentiation they found and attributed to local adaptation to environmental conditions could also have been impacted by additional factors such as cultural drift and the geographic structure of vocal variation, and they proposed further studies towards these ends. Openly presenting and discussing alternative hypotheses can lead to fruitful collaborations and productive lines of further investigation.

9. Conclusions

Throughout this manuscript, we have endeavored to present studies that exhibit best practices in particular aspects of bioacoustic research, and to discuss the specific impacts of these practices on the collection and interpretation of bioacoustic data. The studies included here were chosen in part because each clearly described its methodology. Careful and comprehensive documentation of all of the steps, choices, and experimental parameters in any bioacoustic study is important because it allows readers to understand the basis for the study's conclusions, and consistent methods for data collection allow studies to be compared across geographic regions, ecological niches, and time periods.
Rather than providing a comprehensive literature review, we hope to have presented a taste of the kinds of issues involved in setting best practices for data collection, analysis, and interpretation. We have focused mainly on studies that use passive acoustic sampling, that is, recording without adding sounds to the environment or manipulating the animals being recorded. We did not address the interpretation of active acoustic data, and touched only briefly on playback, laboratory, and experimental studies, as these approaches involve an entirely different set of considerations, some of which have been reviewed elsewhere (e.g., [164,165]). Additionally, with the exception of developing machine learning classifiers, we have not addressed the statistical analysis of acoustic data, as this is highly dependent on the questions being asked and the type of data collected. The use of appropriate statistics is another crucial piece of the puzzle, and many resources are available to guide decisions about which approaches to use (e.g., [166,167,168,169,170,171,172,173]).
In conclusion, bioacoustic approaches can provide a wealth of unique and valuable insights into the diversity of populations, species, and ecosystems, and their interactions. The time and resources required to collect, analyze, and interpret acoustic datasets are often extensive; the process can be fraught with methodological challenges, technological difficulties, and inconsistencies that can introduce bias across studies; and determining which sounds are biologically meaningful can be difficult. For these reasons, the value of applying best practices cannot be overstated. We hope that the best practices discussed in this manuscript serve as a useful guide for bioacoustic investigations to come.

Author Contributions

All authors contributed to conceptualization and writing of this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Acknowledgments

We would like to thank Shannon Rankin and Eric Archer for organizing and the National Institute for Mathematical and Biological Synthesis (NIMBioS) for sponsoring and hosting the 2018 workshop on Bioacoustic Structure. We are also grateful to Vincent Janik and two anonymous reviewers for their insightful and helpful comments on drafts of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bradbury, J.W.; Vehrencamp, S.L. Principles of Animal Communication, 2nd ed.; Oxford University Press: Sunderland, MA, USA, 2011. [Google Scholar]
  2. Lammers, M.O.; Oswald, J.N. Analyzing the Acoustic Communication of Dolphins. In Dolphin Communication and Cognition: Past, Present, and Future; MIT Press: Cambridge, MA, USA, 2015; pp. 107–137. [Google Scholar]
  3. Searcy, W.A.; Nowicki, S. The Evolution of Animal Communication: Reliability and Deception in Signaling Systems; Princeton University Press: Princeton, NJ, USA, 2005. [Google Scholar]
  4. Furmankiewicz, J.; Ruczynksi, I.; Urban, R.; Jones, G. Social Calls Provide Tree-Dwelling Bats with Information about the Location of Conspecifics at Roosts. Ethology 2011, 117, 480–489. [Google Scholar] [CrossRef]
  5. Mouterde, S.C. From Vocal to Neural Encoding: A Transversal Investigation of Information Transmission at Long Distance in Birds. In Coding Strategies in Vertebrate Acoustic Communication; Aubin, T., Mathevon, N., Eds.; Springer: Cham, Switzerland, 2020; pp. 203–229. [Google Scholar]
  6. Zuberbühler, K.; Noë, R.; Seyfarth, R.M. Diana Monkey Long-Distance Calls: Messages for Conspecifics and Predators. Anim. Behav. 1997, 53, 589–604. [Google Scholar] [CrossRef] [Green Version]
  7. Denzinger, A.; Tschapka, M.; Schnitzler, H.U. The Role of Echolocation Strategies for Niche Differentiation in Bats. Can. J. Zool. 2018, 96, 171–181. [Google Scholar] [CrossRef]
  8. Forsman, K.A.; Malmquist, M.G. Evidence for Echolocation in the Common Shrew, Sorex Araneus. J. Zool. 1988, 216, 655–662. [Google Scholar] [CrossRef]
  9. Jensen, M.E.; Moss, C.F.; Surlykke, A. Echolocating Bats Can Use Acoustic Landmarks for Spatial Orientation. J. Exp. Biol. 2005, 208, 4399–4410. [Google Scholar] [CrossRef] [Green Version]
  10. Johnson, M.; Madsen, P.T.; Zimmer, W.M.; de Soto, N.A.; Tyack, P.L. Beaked Whales Echolocate on Prey. Proc. R. Soc. Lond. Ser. B Biol. Sci. 2004, 271 (Suppl. S6), S383–S386. [Google Scholar] [CrossRef]
  11. Moss, C.F.; Surlykke, A. Probing the Natural Scene by Echolocation in Bats. Front. Behav. Neurosci. 2010, 4, 33. [Google Scholar] [CrossRef] [Green Version]
  12. Garland, E.C.; Goldizen, A.W.; Lilley, M.S.; Rekdahl, M.L.; Garrigue, C.; Constantine, R.; Hauser, N.D.; Poole, M.M.; Robbins, J.; Noad, M.J. Population Structure of Humpback Whales in the Western and Central South Pacific Ocean as Determined by Vocal Exchange among Populations. Conserv. Biol. 2015, 29, 1198–1207. [Google Scholar] [CrossRef] [Green Version]
  13. Pérez-Granados, C.; Traba, J. Estimating Bird Density Using Passive Acoustic Monitoring: A Review of Methods and Suggestions for Further Research. Ibis 2021, 163, 765–783. [Google Scholar] [CrossRef]
  14. Dos Santos Protázio, A.; Albuquerque, R.L.; Falkenberg, L.M.; Mesquita, D.O. Acoustic Ecology of an Anuran Assemblage in the Arid Caatinga of Northeastern Brazil. J. Nat. Hist. 2015, 49, 957–976. [Google Scholar] [CrossRef]
  15. Moore, B.L.; Connor, R.C.; Allen, S.J.; Krützen, M.; King, S.L. Acoustic Coordination by Allied Male Dolphins in a Cooperative Context. Proc. R. Soc. B 2020, 287, 20192944. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Szymański, P.; Olszowiak, K.; Wheeldon, A.; Budka, M.; Osiejuk, T.S. Passive Acoustic Monitoring Gives New Insight into Year-Round Duetting Behaviour of a Tropical Songbird. Ecol. Indic. 2021, 122, 107271. [Google Scholar] [CrossRef]
  17. Caiger, P.E.; Dean, M.J.; DeAngelis, A.I.; Hatch, L.T.; Rice, A.N.; Stanley, J.A.; Tholke, C.; Zemeckis, D.R.; Van Parijs, S.M. A Decade of Monitoring Atlantic Cod Gadus Morhua Spawning Aggregations in Massachusetts Bay Using Passive Acoustics. Mar. Ecol. Prog. Ser. 2020, 635, 89–103. [Google Scholar] [CrossRef] [Green Version]
  18. Pérez-Granados, C.; Schuchmann, K.L. Passive Acoustic Monitoring of the Diel and Annual Vocal Behavior of the Black and Gold Howler Monkey. Am. J. Primatol. 2021, 83, e23241. [Google Scholar] [CrossRef] [PubMed]
  19. Picciulin, M.; Kéver, L.; Parmentier, E.; Bolgan, M. Listening to the Unseen: Passive Acoustic Monitoring Reveals the Presence of a Cryptic Fish Species. Aquat. Conserv. Mar. Freshw. Ecosyst. 2019, 29, 202–210. [Google Scholar] [CrossRef]
  20. Gasco, A.; Ferro, H.F.; Monticelli, P.F. The Communicative Life of a Social Carnivore: Acoustic Repertoire of the Ring-Tailed Coati (Nasua Nasua). Bioacoustics 2019, 28, 459–487. [Google Scholar] [CrossRef]
  21. Kershenbaum, A.; Freeberg, T.M.; Gammon, D.E. Estimating Vocal Repertoire Size Is Like Collecting Coupons: A Theoretical Framework with Heterogeneity in Signal Abundance. J. Theor. Biol. 2015, 373, 1–11. [Google Scholar] [CrossRef]
  22. Moron, J.R.; Alves, L.C.P.; de Assis, C.V.; Garcia, F.C.; Andriolo, A. Clymene Dolphin (Stenella Clymene) Whistles in the Southwest Atlantic Ocean. J. Acoust. Soc. Am. 2018, 144, 1952. [Google Scholar] [CrossRef]
  23. Tanimoto, A.M.; Hart, P.J.; Pack, A.A.; Switzer, R. Vocal Repertoire and Signal Characteristics of ‘Alalā, the Hawaiian Crow (Corvus Hawaiiensis). Wilson J. Ornithol. 2017, 129, 25–35. [Google Scholar] [CrossRef]
  24. Cowlishaw, G. Song Function in Gibbons. Behaviour 1992, 121, 131–153. [Google Scholar] [CrossRef]
  25. Umeed, R.; Niemeyer Attademo, F.L.; Bezerra, B. The Influence of Age and Sex on the Vocal Repertoire of the Antillean Manatee (Trichechus Manatus Manatus) and Their Responses to Call Playback. Mar. Mammal Sci. 2018, 34, 577–594. [Google Scholar] [CrossRef]
  26. Bradley, D.W.; Mennill, D.J. Strong Ungraded Responses to Playback of Solos, Duets and Choruses in a Cooperatively Breeding Neotropical Songbird. Anim. Behav. 2009, 77, 1321–1327. [Google Scholar] [CrossRef]
  27. Coye, C.; Ouattara, K.; Arlet, M.E.; Lemasson, A.; Zuberbühler, K. Flexible Use of Simple and Combined Calls in Female Campbell’s Monkeys. Anim. Behav. 2018, 141, 171–181. [Google Scholar] [CrossRef] [Green Version]
  28. Hetrick, S.A.; Sieving, K.E. Antipredator Calls of Tufted Titmice and Interspecific Transfer of Encoded Threat Information. Behav. Ecol. 2011, 23, 83–92. [Google Scholar] [CrossRef] [Green Version]
  29. Raemaekers, J.J.; Raemaekers, P.M.; Haimoff, E.H. Loud Calls of the Gibbon (Hylobates Lar): Repertoire, Organisation and Context. Behaviour 1984, 91, 146–189. [Google Scholar] [CrossRef]
  30. Konrad, C.M.; Gero, S.; Frasier, T.; Whitehead, H. Kinship Influences Sperm Whale Social Organization within, but Generally Not among, Social Units. R. Soc. Open Sci. 2018, 5, 180914. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Van Cise, A.; Mahaffy, S.; Baird, R.; Mooney, T.; Barlow, J. Song of My People: Dialect Differences among Sympatric Social Groups of Short-Finned Pilot Whales in Hawai’i. Behav. Ecol. Sociobiol. 2018, 72, 193. [Google Scholar] [CrossRef]
  32. Charlton, B.D.; Reby, D. The Evolution of Acoustic Size Exaggeration in Terrestrial Mammals. Nat. Comm. 2016, 7, 12739. [Google Scholar] [CrossRef] [Green Version]
  33. Garcia, M.; Herbst, C.T.; Bowling, D.L.; Dunn, J.C.; Fitch, W.T. Acoustic Allometry Revisited: Morphological Determinants of Fundamental Frequency in Primate Vocal Production. Sci. Rep. 2017, 7, 10450. [Google Scholar] [CrossRef] [Green Version]
  34. Gillooly, J.F.; Ophir, A.G. The Energetic Basis of Acoustic Communication. Proc. R. Soc. B 2010, 277, 1325–1331. [Google Scholar] [CrossRef]
  35. Jensen, F.H.; Johnson, M.; Ladegaard, M.; Wisniewska, D.M.; Madsen, P.T. Narrow Acoustic Field of View Drives Frequency Scaling in Toothed Whale Biosonar. Curr. Biol. 2018, 28, 3878–3885.e3873. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Balcazar, N.E.; Tripovich, J.S.; Klinck, H.; Nieukirk, S.L.; Mellinger, D.K.; Dziak, R.P.; Rogers, T.L. Calls Reveal Population Structure of Blue Whales across the Southeast Indian Ocean and the Southwest Pacific Ocean. J. Mammal. 2015, 96, 1184–1193. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. De la Torre, S.; Snowdon, C.T. Dialects in Pygmy Marmosets? Population Variation in Call Structure. Am. J. Primatol. 2009, 71, 333–342. [Google Scholar] [CrossRef]
  38. Nicholls, J.A.; Austin, J.J.; Moritz, C.; Goldizen, A.W. Genetic Population Structure and Call Variation in a Passerine Bird, the Satin Bowerbird, Ptilonorhynchus Violaceus. Evolution 2006, 60, 1279–1290. [Google Scholar] [CrossRef] [PubMed]
  39. Pavlova, A.; Amos, J.N.; Goretskaia, M.I.; Beme, I.R.; Buchanan, K.L.; Takeuchi, N.; Radford, J.Q.; Sunnucks, P. Genes and Song: Genetic and Social Connections in Fragmented Habitat in a Woodland Bird with Limited Dispersal. Ecology 2012, 93, 1717–1727. [Google Scholar] [CrossRef]
  40. Podos, J. Discrimination of Geographical Song Variants by Darwin’s Finches. Anim. Behav. 2007, 73, 833–844. [Google Scholar] [CrossRef]
  41. Davis, G.E.; Baumgartner, M.F.; Corkeron, P.J.; Bell, J.; Berchok, C.; Bonnell, J.M.; Bort Thornton, J.; Brault, S.; Buchanan, G.A.; Cholewiak, D.M.; et al. Exploring Movement Patterns and Changing Distributions of Baleen Whales in the Western North Atlantic Using a Decade of Passive Acoustic Data. Glob. Chang. Biol. 2020, 26, 4812–4840. [Google Scholar] [CrossRef]
  42. Dawson, D.K.; Efford, M.G. Bird Population Density Estimated from Acoustic Signals. J. Appl. Ecol. 2009, 46, 1201–1209. [Google Scholar] [CrossRef]
  43. Marques, T.A.; Munger, L.; Thomas, L.; Wiggins, S.; Hildebrand, J.A. Estimating North Pacific Right Whale Eubalaena Japonica Density Using Passive Acoustic Cue Counting. Endanger. Species Res. 2011, 13, 163–172. [Google Scholar] [CrossRef]
  44. Marques, T.A.; Thomas, L.; Martin, S.W.; Mellinger, D.K.; Ward, J.A.; Moretti, D.J.; Harris, D.; Tyack, P.L. Estimating Animal Population Density Using Passive Acoustics. Biol. Rev. 2013, 88, 287–309. [Google Scholar] [CrossRef]
  45. Lau, A.R.; Zafar, M.; Ahmad, A.H.; Clink, D.J. Investigating Temporal Coordination in the Duet Contributions of a Pair-Living Small Ape. Behav. Ecol. Sociobiol. 2022, 76, 91. [Google Scholar] [CrossRef]
  46. Dunn, J.C.; Smaers, J.B. Neural Correlates of Vocal Repertoire in Primates. Front. Neurosci. 2018, 12, 534. [Google Scholar] [CrossRef]
  47. Snowdon, C.T. Cognitive Components of Vocal Communication: A Case Study. Animals 2018, 8, 126. [Google Scholar] [CrossRef] [Green Version]
  48. Crance, J.L.; Bowles, A.E.; Garver, A. Evidence for Vocal Learning in Juvenile Male Killer Whales, Orcinus Orca, from an Adventitious Cross-Socializing Experiment. J. Exp. Biol. 2014, 217, 1229–1237. [Google Scholar] [CrossRef] [Green Version]
  49. Favaro, L.; Neves, S.; Furlati, S.; Pessani, D.; Martin, V.; Janik, V.M. Evidence Suggests Vocal Production Learning in a Cross-Fostered Risso’s Dolphin (Grampus Griseus). Anim. Cogn. 2016, 19, 847–853. [Google Scholar] [CrossRef] [Green Version]
  50. Prat, Y.; Taub, M.; Yovel, Y. Vocal Learning in a Social Mammal: Demonstrated by Isolation and Playback Experiments in Bats. Sci. Adv. 2015, 1, e1500019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Vernes, S.C.; Janik, V.M.; Fitch, W.T.; Slater, P.J.B. Vocal Learning in Animals and Humans. Philos. Trans. R. Soc. B 2021, 376, 20200234. [Google Scholar] [CrossRef]
  52. Fehér, O.; Ljubičić, I.; Suzuki, K.; Okanoya, K.; Tchernichovski, O. Statistical Learning in Songbirds: From Self-Tutoring to Song Culture. Philos. Trans. R. Soc. B 2017, 372, 20160053. [Google Scholar] [CrossRef] [Green Version]
  53. Goutte, S.; Dubois, A.; Howard, S.D.; Márquez, R.; Rowley, J.J.L.; Dehling, J.M.; Grandcolas, P.; Xiong, R.C.; Legendre, F. How the Environment Shapes Animal Signals: A Test of the Acoustic Adaptation Hypothesis in Frogs. J. Evol. Biol. 2018, 31, 148–158. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Kyhn, L.A.; Tougaard, J.; Beedholm, K.; Jensen, F.H.; Ashe, E.; Williams, R.; Madsen, P.T. Clicking in a Killer Whale Habitat: Narrow-Band, High-Frequency Biosonar Clicks of Harbour Porpoise (Phocoena Phocoena) and Dall’s Porpoise (Phocoenoides Dalli). PLoS ONE 2013, 8, e63763. [Google Scholar] [CrossRef] [PubMed]
  55. Podos, J.; Warren, P.S. The Evolution of Geographic Variation in Birdsong. In Advances in the Study of Behavior; Academic Press: Cambridge, MA, USA, 2007; pp. 403–458. [Google Scholar]
  56. Crouch, W.B.; Paton, P.W.C. Assessing the Use of Call Surveys to Monitor Breeding Anurans in Rhode Island. J. Herpetol. 2002, 36, 185–192. [Google Scholar] [CrossRef]
  57. Heinicke, S.; Kalan, A.K.; Wagner, O.J.J.; Mundry, R.; Lukashevich, H.; Kühl, H.S. Assessing the Performance of a Semi-Automated Acoustic Monitoring System for Primates. Methods Ecol. Evol. 2015, 6, 753–763. [Google Scholar] [CrossRef]
  58. Rankin, S.; Archer, F.; Keating, J.; Oswald, J.N.; Oswald, M.; Curtis, A.; Barlow, J. Acoustic Classification of Dolphins in the California Current Using Whistles, Echolocation Clicks, and Burst Pulses. Mar. Mammal Sci. 2017, 33, 520–540. [Google Scholar] [CrossRef]
  59. Russo, D.; Voigt, C. The Use of Automated Identification of Bat Echolocation Calls in Acoustic Monitoring: A Cautionary Note for a Sound Analysis. Ecol. Indic. 2016, 66, 598–602. [Google Scholar] [CrossRef]
  60. Gage, S.H.; Napoletano, B.M.; Cooper, M.C. Assessment of Ecosystem Biodiversity by Acoustic Diversity Indices. J. Acoust. Soc. Am. 2001, 109, 2430. [Google Scholar] [CrossRef]
  61. Mooney, T.A.; Di Iorio, L.; Lammers, M.; Lin, T.-H.; Nedelec, S.L.; Parsons, M.; Radford, C.; Urban, E.; Stanley, J. Listening Forward: Approaching Marine Biodiversity Assessments Using Acoustic Methods. R. Soc. Open Sci. 2020, 7, 201287. [Google Scholar] [CrossRef]
  62. Parks, S.E.; Miksis-Olds, J.L.; Denes, S.L. Assessing Marine Ecosystem Acoustic Diversity across Ocean Basins. Ecol. Inform. 2014, 21, 81–88. [Google Scholar] [CrossRef]
  63. Sueur, J.; Farina, A.; Gasc, A.; Pieretti, N.; Pavoine, S. Acoustic Indices for Biodiversity Assessment and Landscape Investigation. Acta Acust. 2014, 100, 772–781. [Google Scholar] [CrossRef] [Green Version]
  64. Francis, C.D.; Newman, P.; Taff, B.D.; White, C.; Monz, C.A.; Levenhagen, M.; Petrelli, A.R.; Abbott, L.C.; Newton, J.; Burson, S.; et al. Acoustic Environments Matter: Synergistic Benefits to Humans and Ecological Communities. J. Environ. Manag. 2017, 203, 245–254. [Google Scholar] [CrossRef]
  65. Benoit-Bird, K.J.; Southall, B.; Moline, M.A. Using Acoustics to Examine Odontocete Foraging Ecology: Predator–Prey Dynamics in the Mesopelagic. J. Acoust. Soc. Am. 2016, 140, 3130. [Google Scholar] [CrossRef]
  66. Berejikian, B.A.; Moore, M.E.; Jeffries, S.J. Predator–Prey Interactions between Harbor Seals and Migrating Steelhead Trout Smolts Revealed by Acoustic Telemetry. Mar. Ecol. Prog. Ser. 2016, 543, 21–35. [Google Scholar] [CrossRef] [Green Version]
  67. Parsons, M.H.; Apfelbach, R.; Banks, P.B.; Cameron, E.Z.; Dickman, C.R.; Frank, A.S.K.; Jones, M.E.; McGregor, I.S.; McLean, S.; Müller-Schwarze, D.; et al. Biologically Meaningful Scents: A Framework for Understanding Predator–Prey Research across Disciplines. Biol. Rev. 2018, 93, 98–114. [Google Scholar] [CrossRef] [PubMed]
  68. Sharpe, D.; Castellote, M.; Wade, P.; Cornick, L. Call Types of Bigg’s Killer Whales (Orcinus Orca) in Western Alaska: Using Vocal Dialects to Assess Population Structure. Bioacoustics 2019, 28, 74–99. [Google Scholar] [CrossRef]
  69. Depraetere, M.; Pavoine, S.; Jiguet, F.; Gasc, A.; Duvail, S.; Sueur, J. Monitoring Animal Diversity Using Acoustic Indices: Implementation in a Temperate Woodland. Ecol. Indic. 2012, 13, 46–54. [Google Scholar] [CrossRef]
  70. Branch, C.L.; Pravosudov, V.V. Mountain Chickadees from Different Elevations Sing Different Songs: Acoustic Adaptation, Temporal Drift or Signal of Local Adaptation? R. Soc. Open Sci. 2015, 2, 150019. [Google Scholar] [CrossRef]
  71. Gillam, E.H.; McCracken, G.F.; Westbrook, J.K.; Lee, Y.; Jensen, M.L.; Balsley, B.B. Bats Aloft: Variability in Echolocation Call Structure at High Altitudes. Behav. Ecol. Sociobiol. 2009, 64, 69–79. [Google Scholar] [CrossRef]
  72. Lawrence, J.M.; Armstrong, E.; Gordon, J.; Lusseau, S.M.; Fernandes, P.G. Passive and Active, Predator and Prey: Using Acoustics to Study Interactions between Cetaceans and Forage Fish. ICES J. Mar. Sci. 2016, 73, 2075–2084. [Google Scholar] [CrossRef] [Green Version]
  73. Celis-Murillo, A.; Deppe, J.L.; Allen, M.F. Using Soundscape Recordings to Estimate Bird Species Abundance, Richness, and Composition. J. Field Ornithol. 2009, 80, 64–78. [Google Scholar] [CrossRef]
  74. Hannay, D.E.; Delarue, J.; Mouy, X.; Martin, B.S.; Leary, D.; Oswald, J.N.; Vallarta, J. Marine Mammal Acoustic Detections in the Northeastern Chukchi Sea, September 2007–July 2011. Cont. Shelf Res. 2013, 67, 127–146. [Google Scholar] [CrossRef] [Green Version]
  75. Fukui, D.; Agetsuma, N.; Hill, D.A. Acoustic Identification of Eight Species of Bat (Mammalia: Chiroptera) Inhabiting Forests of Southern Hokkaido, Japan: Potential for Conservation Monitoring. Zool. Sci. 2004, 21, 947–955. [Google Scholar] [CrossRef]
  76. Vaughan, N.; Jones, G.; Harris, S. Identification of British Bat Species by Multivariate Analysis of Echolocation Call Parameters. Bioacoustics 1997, 7, 189–207. [Google Scholar] [CrossRef]
  77. Bridges, A.S.; Dorcas, M.E. Temporal Variation in Anuran Calling Behavior: Implications for Surveys and Monitoring Programs. Copeia 2000, 2000, 587–592. [Google Scholar] [CrossRef]
  78. Riede, K. Diversity of Sound-Producing Insects in a Bornean Lowland Rain Forest. In Tropical Rainforest Research—Current Issues: Proceedings of the Conference Held in Bandar Seri Begawan, April 1993; Edwards, D.S., Booth, W.E., Choy, S.C., Eds.; Springer: Dordrecht, The Netherlands, 1996; pp. 77–84. [Google Scholar]
  79. Armitage, D.W.; Ober, H.K. A Comparison of Supervised Learning Techniques in the Classification of Bat Echolocation Calls. Ecol. Inform. 2010, 5, 465–473. [Google Scholar] [CrossRef]
  80. Baumgartner, M.F.; Mussoline, S.E. A Generalized Baleen Whale Call Detection and Classification System. J. Acoust. Soc. Am. 2011, 129, 2889–2902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  81. Brandes, T.S. Automated Sound Recording and Analysis Techniques for Bird Surveys and Conservation. Bird Conserv. Int. 2008, 18, S163–S173. [Google Scholar] [CrossRef] [Green Version]
  82. Jarvis, S.; DiMarzio, N.; Morrissey, R.; Moretti, D. Automated Classification of Beaked Whales and Other Small Odontocetes in the Tongue of the Ocean, Bahamas. Presented at the OCEANS 2006, Boston, MA, USA, 18–21 September 2006. [Google Scholar]
  83. Oswald, J.N.; Rankin, S.; Barlow, J.; Lammers, M.O. A Tool for Real-Time Acoustic Species Identification of Delphinid Whistles. J. Acoust. Soc. Am. 2007, 122, 587–595. [Google Scholar] [CrossRef] [Green Version]
  84. Coffey, K.R.; Marx, R.G.; Neumaier, J.F. Deepsqueak: A Deep Learning-Based System for Detection and Analysis of Ultrasonic Vocalizations. Neuropsychopharmacology 2019, 44, 859–868. [Google Scholar] [CrossRef] [Green Version]
  85. Fukuzawa, Y.; Webb, W.H.; Pawley, M.D.M.; Roper, M.M.; Marsland, S.; Brunton, D.H.; Gilman, A. Koe: Web-Based Software to Classify Acoustic Units and Analyse Sequence Structure in Animal Vocalizations. Methods Ecol. Evol. 2020, 11, 431–441. [Google Scholar] [CrossRef] [Green Version]
  86. Oswald, J.N.; Barlow, J.; Norris, T.F. Acoustic Identification of Nine Delphinid Species in the Eastern Tropical Pacific Ocean. Mar. Mammal Sci. 2003, 19, 20–37. [Google Scholar] [CrossRef] [Green Version]
  87. Jensen, F.H.; Wahlberg, M.; Bejder, L.; Madsen, P.T. Noise Levels and Masking Potential of Small Whale-Watching and Research Vessels around Two Delphinid Species. Bioacoustics 2008, 17, 166–168. [Google Scholar] [CrossRef]
  88. Larom, D.; Garstang, M.; Payne, K.; Raspet, R.; Lindeque, M. The Influence of Surface Atmospheric Conditions on the Range and Area Reached by Animal Vocalizations. J. Exp. Biol. 1997, 200, 421–431. [Google Scholar] [CrossRef]
  89. Oswald, J.N.; Rankin, S.; Barlow, J. The Effect of Recording and Analysis Bandwidth on Acoustic Identification of Delphinid Species. J. Acoust. Soc. Am. 2004, 116, 3178–3185. [Google Scholar] [CrossRef] [Green Version]
  90. Lee, J.-H.; Podos, J.; Sung, H.-C. Distinct Patterns of Geographic Variation for Different Song Components in Daurian Redstarts (Phoenicurus Auroreus). Bird Study 2019, 66, 73–82. [Google Scholar] [CrossRef]
  91. Lima, I.M.S.; Venuto, R.; Menchaca, C.; Hoffmann, L.S.; Dalla Rosa, L.; Genoves, R.; Fruet, P.F.; Milanelli, A.; Laporta, P.; Tassino, B.; et al. Geographic Variation in the Whistles of Bottlenose Dolphins (Tursiops Spp.) in the Southwestern Atlantic Ocean. Mar. Mammal Sci. 2020, 36, 1058–1067. [Google Scholar] [CrossRef]
  92. Papale, E.; Azzolin, M.; Cascão, I.; Gannier, A.; Lammers, M.O.; Martin, V.M.; Oswald, J.; Perez-Gil, M.; Prieto, R.; Silva, M.A.; et al. Geographic Variability in the Acoustic Parameters of Striped Dolphin’s (Stenella Coeruleoalba) Whistles. J. Acoust. Soc. Am. 2013, 133, 1126–1134. [Google Scholar] [CrossRef] [Green Version]
  93. Tamura, N.; Boonkhaw, P.; Prayoon, U.; Phan, Q.T.; Yu, P.; Liu, X.; Hayashi, F. Geographical Variation in Squirrel Mating Calls and Their Recognition Limits in the Widely Distributed Species Complex. Behav. Ecol. Sociobiol. 2021, 75, 97. [Google Scholar] [CrossRef]
  94. Pritchard, J.K.; Stephens, M.; Donnelly, P. Inference of Population Structure Using Multilocus Genotype Data. Genetics 2000, 155, 945–959. [Google Scholar] [CrossRef]
  95. Slatkin, M. Gene Flow and the Geographic Structure of Natural Populations. Science 1987, 236, 787–792. [Google Scholar] [CrossRef]
  96. Irwin, D.E.; Thimgan, M.P.; Irwin, J.H. Call Divergence Is Correlated with Geographic and Genetic Distance in Greenish Warblers (Phylloscopus Trochiloides): A Strong Role for Stochasticity in Signal Evolution? J. Evol. Biol. 2008, 21, 435–448. [Google Scholar] [CrossRef] [PubMed]
  97. Laiolo, P.; Tella, J.L. Landscape Bioacoustics Allow Detection of the Effects of Habitat Patchiness on Population Structure. Ecology 2006, 87, 1203–1214. [Google Scholar] [CrossRef]
  98. Van Cise, A.; Roch, M.A.; Baird, R.W.; Mooney, T.A.; Barlow, J. Acoustic Differentiation of Shiho- and Naisa-Type Short-Finned Pilot Whales in the Pacific Ocean. J. Acoust. Soc. Am. 2017, 141, 737–748. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Parsons, K.M.; Durban, J.W.; Burdin, A.M.; Burkanov, V.N.; Pitman, R.L.; Barlow, J.; Barrett-Lennard, L.G.; LeDuc, R.G.; Robertson, K.M.; Matkin, C.O.; et al. Geographic Patterns of Genetic Differentiation among Killer Whales in the Northern North Pacific. J. Hered. 2013, 104, 737–754. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  101. Strobl, C.; Malley, J.; Tutz, G. An Introduction to Recursive Partitioning: Rationale, Application, and Characteristics of Classification and Regression Trees, Bagging, and Random Forests. Psychol. Methods 2009, 14, 323–348. [Google Scholar] [CrossRef] [Green Version]
  102. Garland, E.C.; Castellote, M.; Berchok, C.L. Beluga Whale (Delphinapterus Leucas) Vocalizations and Call Classification from the Eastern Beaufort Sea Population. J. Acoust. Soc. Am. 2015, 137, 3054–3067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  103. Keen, S.; Ross, J.C.; Griffiths, E.T.; Lanzone, M.; Farnsworth, A. A Comparison of Similarity-Based Approaches in the Classification of Flight Calls of Four Species of North American Wood-Warblers (Parulidae). Ecol. Inform. 2014, 21, 25–33. [Google Scholar] [CrossRef]
  104. Deecke, V.B.; Ford, J.K.B.; Spong, P. Dialect Change in Resident Killer Whales: Implications for Vocal Learning and Cultural Transmission. Anim. Behav. 2000, 60, 629–638. [Google Scholar] [CrossRef] [Green Version]
  105. Boelman, N.T.; Asner, G.P.; Hart, P.J.; Martin, R.E. Multi-Trophic Invasion Resistance in Hawaii: Bioacoustics, Field Surveys, and Airborne Remote Sensing. Ecol. Appl. 2007, 17, 2137–2144. [Google Scholar] [CrossRef]
  106. Joo, W. Environmental Acoustics as an Ecological Variable to Understand the Dynamics of Ecosystems. Ph.D. Thesis, Michigan State University, Ann Arbor, MI, USA, 2009. [Google Scholar]
  107. Pieretti, N.; Farina, A. Application of a Recently Introduced Index for Acoustic Complexity to an Avian Soundscape with Traffic Noise. J. Acoust. Soc. Am. 2013, 134, 891–900. [Google Scholar] [CrossRef]
  108. Krause, B.; Gage, S.H.; Joo, W. Measuring and Interpreting the Temporal Variability in the Soundscape at Four Places in Sequoia National Park. Landsc. Ecol. 2011, 26, 1247. [Google Scholar] [CrossRef]
  109. Krause, B.; Farina, A. Using Ecoacoustic Methods to Survey the Impacts of Climate Change on Biodiversity. Biol. Conserv. 2016, 195, 245–254. [Google Scholar] [CrossRef]
  110. Oppenheim, A.V.; Schafer, R.W. Discrete-Time Signal Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2001; Volume 2. [Google Scholar]
  111. Wiley, R.H. Noise Matters: The Evolution of Communication; Harvard University Press: Cambridge, MA, USA, 2015. [Google Scholar]
  112. Kery, M.; Plattner, M. Species Richness Estimation and Determinants of Species Detectability in Butterfly Monitoring Programmes. Ecol. Entomol. 2007, 32, 53–61. [Google Scholar] [CrossRef]
  113. Meyer, C.F.J.; Aguiar, L.M.S.; Aguirre, L.F.; Baumgarten, J.; Clarke, F.M.; Cosson, J.-F.; Villegas, S.E.; Fahr, J.; Faria, D.; Furey, N.; et al. Accounting for Detectability Improves Estimates of Species Richness in Tropical Bat Surveys. J. Appl. Ecol. 2011, 48, 777–787. [Google Scholar] [CrossRef]
  114. Green, S. Dialects in Japanese Monkeys: Vocal Learning and Cultural Transmission of Locale-Specific Vocal Behavior? Z. Tierpsychol. 1975, 38, 304–314. [Google Scholar] [CrossRef]
  115. Greenfield, M.D. Evolution of Acoustic Communication in Insects. In Insect Hearing; Pollack, G.S., Mason, A.C., Popper, A.N., Fay, R.R., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 17–47. [Google Scholar]
  116. Picq, S.; Alda, F.; Bermingham, E.; Krahe, R. Drift-Driven Evolution of Electric Signals in a Neotropical Knifefish. Evolution 2016, 70, 2134–2144. [Google Scholar] [CrossRef]
  117. Podos, J.; Huber, S.K.; Taft, B. Bird Song: The Interface of Evolution and Mechanism. Annu. Rev. Ecol. Evol. Syst. 2004, 35, 55–87. [Google Scholar] [CrossRef] [Green Version]
  118. Wilkins, M.R.; Seddon, N.; Safran, R.J. Evolutionary Divergence in Acoustic Signals: Causes and Consequences. Trends Ecol. Evol. 2013, 28, 156–166. [Google Scholar] [CrossRef] [PubMed]
  119. Derryberry, E.P.; Seddon, N.; Derryberry, G.E.; Claramunt, S.; Seeholzer, G.F.; Brumfield, R.T.; Tobias, J.A. Ecological Drivers of Song Evolution in Birds: Disentangling the Effects of Habitat and Morphology. Ecol. Evol. 2018, 8, 1890–1905. [Google Scholar] [CrossRef] [PubMed]
  120. Slabbekoorn, H.; Smith, T.B. Bird Song, Ecology and Speciation. Philos. Trans. R. Soc. B 2002, 357, 493–503. [Google Scholar] [CrossRef] [Green Version]
  121. Aoki, K.; Feldman, M.W. Toward a Theory for the Evolution of Cultural Communication: Coevolution of Signal Transmission and Reception. Proc. Natl. Acad. Sci. USA 1987, 84, 7164–7168. [Google Scholar] [CrossRef]
  122. Byers, B.E.; Belinsky, K.L.; Bentley, R.A. Independent Cultural Evolution of Two Song Traditions in the Chestnut-Sided Warbler. Am. Nat. 2010, 176, 476–489. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  123. Garland, E.C.; Rendell, L.; Lamoni, L.; Poole, M.M.; Noad, M.J. Song Hybridization Events during Revolutionary Song Change Provide Insights into Cultural Transmission in Humpback Whales. Proc. Natl. Acad. Sci. USA 2017, 114, 7822–7829. [Google Scholar] [CrossRef] [Green Version]
  124. Lachlan, R.F.; Feldman, M.W. Evolution of Cultural Communication Systems: The Coevolution of Cultural Signals and Genes Encoding Learning Preferences. J. Evol. Biol. 2003, 16, 1084–1095. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Yurk, H.; Barrett-Lennard, L.; Ford, J.K.B.; Matkin, C.O. Cultural Transmission within Maternal Lineages: Vocal Clans in Resident Killer Whales in Southern Alaska. Anim. Behav. 2002, 63, 1103–1119. [Google Scholar] [CrossRef] [Green Version]
  126. Derryberry, E.P. Ecology Shapes Birdsong Evolution: Variation in Morphology and Habitat Explains Variation in White-Crowned Sparrow Song. Am. Nat. 2009, 174, 24–33. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  127. Green, S.; Marler, P. The Analysis of Animal Communication. In Social Behavior and Communication; Marler, P., Vandenbergh, J.G., Eds.; Springer: Boston, MA, USA, 1979; pp. 73–158. [Google Scholar]
  128. Scherberich, J.; Hummel, J.; Schöneich, S.; Nowotny, M. Functional Basis of the Sexual Dimorphism in the Auditory Fovea of the Duetting Bushcricket (Ancylecha Fenestrata). Proc. R. Soc. B 2017, 284, 20171426. [Google Scholar] [CrossRef] [Green Version]
  129. Slabbekoorn, H.; Ripmeester, E.A. Birdsong and Anthropogenic Noise: Implications and Applications for Conservation. Mol. Ecol. 2008, 17, 72–83. [Google Scholar] [CrossRef]
  130. Fehér, O.; Wang, H.; Saar, S.; Mitra, P.P.; Tchernichovski, O. De Novo Establishment of Wild-Type Song Culture in the Zebra Finch. Nature 2009, 459, 564–568. [Google Scholar] [CrossRef] [Green Version]
  131. Endler, J.A. Signals, Signal Conditions, and the Direction of Evolution. Am. Nat. 1992, 139, S125–S153. [Google Scholar] [CrossRef] [Green Version]
  132. Perez, E.C.; Elie, J.E.; Soulage, C.O.; Soula, H.A.; Mathevon, N.; Vignal, C. The Acoustic Expression of Stress in a Songbird: Does Corticosterone Drive Isolation-Induced Modifications of Zebra Finch Calls? Horm. Behav. 2012, 61, 573–581. [Google Scholar] [CrossRef]
  133. Sheldon, E.L.; Ironside, J.E.; de Vere, N.; Marshall, R.C. Singing under Glass: Rapid Effects of Anthropogenic Habitat Modification on Song and Response Behaviours in an Isolated House Sparrow Passer Domesticus Population. J. Avian Biol. 2020, 51, 1–8. [Google Scholar] [CrossRef]
  134. Emlen, S.T.; Oring, L.W. Ecology, Sexual Selection, and the Evolution of Mating Systems. Science 1977, 197, 215–223. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. West-Eberhard, M.J. Sexual Selection, Competitive Communication and Species Specific Signals in Insects. In Insect Communication, Proceedings of the 12th Symposium of the Royal Entomological Society of London, London, UK, 7–9 September 1983; Academic Press: Cambridge, MA, USA, 1984. [Google Scholar]
  136. Akamatsu, T.; Teilmann, J.; Miller, L.A.; Tougaard, J.; Dietz, R.; Wang, D.; Wang, K.; Siebert, U.; Naito, Y. Comparison of Echolocation Behaviour between Coastal and Riverine Porpoises. Deep Sea Res. Part II Top. Stud. Oceanogr. 2007, 54, 290–297. [Google Scholar] [CrossRef]
  137. Neuweiler, G. Foraging Ecology and Audition in Echolocating Bats. Trends Ecol. Evol. 1989, 4, 160–166. [Google Scholar] [CrossRef] [PubMed]
  138. Teilmann, J. Influence of Sea State on Density Estimates of Harbour Porpoises (Phocoena Phocoena). J. Cetacean Res. Manag. 2003, 5, 85–92. [Google Scholar]
  139. Møhl, B.; Andersen, S. Echolocation: High-Frequency Component in the Click of the Harbour Porpoise (Phocoena Ph. L.). J. Acoust. Soc. Am. 1973, 54, 1368–1372. [Google Scholar] [CrossRef]
  140. Teilmann, J.; Miller, L.; Kirketerp, T.; Kastelein, R.; Madsen, P.T.; Nielsen, B.K.; Au, W.W.L. Characteristics of Echolocation Signals Used by a Harbour Porpoise (Phocoena Phocoena) in a Target Detection Experiment. Aquat. Mamm. 2002, 28, 275–284. [Google Scholar]
  141. Gillespie, D.; Mellinger, D.K.; Gordon, J.; McLaren, D.; Redmond, P.; McHugh, R.; Trinder, P.; Deng, X.; Thode, A. Pamguard: Semiautomated, Open Source Software for Real-Time Acoustic Detection and Localization of Cetaceans. J. Acoust. Soc. Am. 2009, 125, 2547. [Google Scholar] [CrossRef]
  142. Brudzynski, S.M. Communication of Adult Rats by Ultrasonic Vocalization: Biological, Sociobiological, and Neuroscience Approaches. ILAR J. 2009, 50, 43–50. [Google Scholar] [CrossRef] [Green Version]
  143. Seyfarth, R.; Cheney, D. Meaning and Emotion in Animal Vocalizations. Ann. N. Y. Acad. Sci. 2004, 1000, 32–55. [Google Scholar] [CrossRef]
  144. Janik, V.M.; Slater, P.J.B. Context-Specific Use Suggests That Bottlenose Dolphin Signature Whistles Are Cohesion Calls. Anim. Behav. 1998, 56, 829–838. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  145. Derrickson, K.C. Yearly and Situational Changes in the Estimate of Repertoire Size in Northern Mockingbirds (Mimus Polyglottos). Auk 1987, 104, 198–207. [Google Scholar] [CrossRef]
  146. Sekulic, R. Daily and Seasonal Patterns of Roaring and Spacing in Four Red Howler Alouatta Seniculus Troops. Folia Primatol. 1982, 39, 22–48. [Google Scholar] [CrossRef]
  147. Colombelli-Négrel, D.; Smale, R. Habitat Explained Microgeographic Variation in Little Penguin Agonistic Calls. Auk 2017, 135, 44–59. [Google Scholar] [CrossRef] [Green Version]
  148. Sementili-Cardoso, G.; Rodrigues, F.G.; Martins, R.M.; Gerotti, R.W.; Vianna, R.M.; Donatelli, R.J. Variation among Vocalizations of Taraba Major (Aves: Thamnophilidae) Subspecies. Stud. Neotrop. Fauna Environ. 2018, 53, 120–131. [Google Scholar] [CrossRef] [Green Version]
  149. Oswald, J.N.; Walmsley, S.F.; Casey, C.; Fregosi, S.; Southall, B.; Janik, V.M. Species Information in Whistle Frequency Modulation Patterns of Common Dolphins. Philos. Trans. R. Soc. B 2021, 376, 20210046. [Google Scholar] [CrossRef] [PubMed]
  150. Chao, A.; Shen, T.-J. Nonparametric Prediction in Species Sampling. J. Agric. Biol. Environ. Stat. 2004, 9, 253–269. [Google Scholar] [CrossRef] [Green Version]
  151. Fisher, R.A.; Corbet, A.S.; Williams, C.B. The Relation between the Number of Species and the Number of Individuals in a Random Sample of an Animal Population. J. Anim. Ecol. 1943, 12, 42–58. [Google Scholar] [CrossRef]
  152. Metcalf, O.C.; Barlow, J.; Marsden, S.; de Moura, N.G.; Berenguer, E.; Ferreira, J.; Lees, A.C. Optimizing Tropical Forest Bird Surveys Using Passive Acoustic Monitoring and High Temporal Resolution Sampling. Remote Sens. Ecol. Conserv. 2022, 8, 45–56. [Google Scholar] [CrossRef]
  153. Rand, Z.R.; Wood, J.D.; Oswald, J.N. Effects of Duty Cycles on Passive Acoustic Monitoring of Southern Resident Killer Whale (Orcinus Orca) Occurrence and Behavior. J. Acoust. Soc. Am. 2022, 151, 1651–1660. [Google Scholar] [CrossRef]
  154. Stanistreet, J.E.; Nowacek, D.P.; Read, A.J.; Baumann-Pickering, S.; Moors-Murphy, H.B.; Van Parijs, S.M. Effects of Duty-Cycled Passive Acoustic Recordings on Detecting the Presence of Beaked Whales in the Northwest Atlantic. J. Acoust. Soc. Am. 2016, 140, EL31–EL37. [Google Scholar] [CrossRef] [Green Version]
  155. Thomisch, K.; Boebel, O.; Zitterbart, D.P.; Samaran, F.; Van Parijs, S.; Van Opzeeland, I. Effects of Subsampling of Passive Acoustic Recordings on Acoustic Metrics. J. Acoust. Soc. Am. 2015, 138, 267–278. [Google Scholar] [CrossRef]
  156. Blumstein, D.T.; Mennill, D.J.; Clemins, P.; Girod, L.; Yao, K.; Patricelli, G.; Deppe, J.L.; Krakauer, A.H.; Clark, C.; Cortopassi, K.A.; et al. Acoustic Monitoring in Terrestrial Environments Using Microphone Arrays: Applications, Technological Considerations and Prospectus. J. Appl. Ecol. 2011, 48, 758–767. [Google Scholar] [CrossRef]
  157. Verreycken, E.; Simon, R.; Quirk-Royal, B.; Daems, W.; Barber, J.R.; Steckel, J. Bio-Acoustic Tracking and Localization Using Heterogeneous, Scalable Microphone Arrays. Commun. Biol. 2021, 4, 1–11. [Google Scholar] [CrossRef] [PubMed]
  158. Williams, E.M.; O’Donnell, C.F.J.; Armstrong, D.P. Cost-Benefit Analysis of Acoustic Recorders as a Solution to Sampling Challenges Experienced Monitoring Cryptic Species. Ecol. Evol. 2018, 8, 6839–6848. [Google Scholar] [CrossRef] [PubMed]
  159. Roch, M.A.; Batchelor, H.; Baumann-Pickering, S.; Berchok, C.L.; Cholewiak, D.; Fujioka, E.; Garland, E.C.; Herbert, S.; Hildebrand, J.A.; Oleson, E.M.; et al. Management of Acoustic Metadata for Bioacoustics. Ecol. Inform. 2016, 31, 122–136. [Google Scholar] [CrossRef] [Green Version]
  160. Darras, K.; Batáry, P.; Furnas, B.J.; Grass, I.; Mulyani, Y.A.; Tscharntke, T. Autonomous Sound Recording Outperforms Human Observation for Sampling Birds: A Systematic Map and User Guide. Ecol. Appl. 2019, 29, e01954. [Google Scholar] [CrossRef] [Green Version]
  161. Oswald, J.N.; Rankin, S.; Barlow, J.; Oswald, M.; Lammers, M. Real-Time Odontocete Call Classification Algorithm (Rocca): Software for Species Identification of Delphinid Whistles. In Detection, Classification and Localization of Marine Mammals Using Passive Acoustics, 2003–2013: 10 Years of International Research; DIRAC NGO: Paris, France, 2013; pp. 245–266. [Google Scholar]
  162. Filatova, O.A.; Deecke, V.B.; Ford, J.K.B.; Matkin, C.O.; Barrett-Lennard, L.G.; Guzeev, M.A.; Burdin, A.M.; Hoyt, E. Call Diversity in the North Pacific Killer Whale Populations: Implications for Dialect Evolution and Population History. Anim. Behav. 2012, 83, 595–603. [Google Scholar] [CrossRef] [Green Version]
  163. Ford, J.K.B. Vocal Traditions among Resident Killer Whales (Orcinus Orca) in Coastal Waters of British Columbia. Can. J. Zool. 1991, 69, 1454–1483. [Google Scholar] [CrossRef]
  164. Fitch, W.T.; Suthers, R.A. Vertebrate Vocal Production: An Introductory Overview. In Vertebrate Sound Production and Acoustic Communication; Suthers, R.A., Fitch, W.T., Fay, R.R., Popper, A.N., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1–18. [Google Scholar]
  165. Gentry, K.E.; Lewis, R.N.; Glanz, H.; Simões, P.I.; Nyári, Á.S.; Reichert, M.S. Bioacoustics in Cognitive Research: Applications, Considerations, and Recommendations. WIREs Cogn. Sci. 2020, 11, e1538. [Google Scholar] [CrossRef]
  166. Baker, M.C.; Logue, D.M. Population Differentiation in a Complex Bird Sound: A Comparison of Three Bioacoustical Analysis Procedures. Ethology 2003, 109, 223–242. [Google Scholar] [CrossRef] [Green Version]
  167. Deecke, V.B.; Janik, V.M. Automated Categorization of Bioacoustic Signals: Avoiding Perceptual Pitfalls. J. Acoust. Soc. Am. 2006, 119, 645–653. [Google Scholar] [CrossRef] [PubMed]
  168. Fischer, J.; Noser, R.; Hammerschmidt, K. Bioacoustic Field Research: A Primer to Acoustic Analyses and Playback Experiments with Primates. Am. J. Primatol. 2013, 75, 643–663. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  169. Odom, K.J.; Araya-Salas, M.; Morano, J.L.; Ligon, R.A.; Leighton, G.M.; Taff, C.C.; Dalziell, A.H.; Billings, A.C.; Germain, R.R.; Pardo, M.; et al. Comparative Bioacoustics: A Roadmap for Quantifying and Comparing Animal Sounds across Diverse Taxa. Biol. Rev. 2021, 96, 1135–1159. [Google Scholar] [CrossRef] [PubMed]
  170. Ren, Y.; Johnson, M.T.; Clemins, P.J.; Darre, M.; Glaeser, S.S.; Osiejuk, T.S.; Out-Nyarko, E. A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models. Algorithms 2009, 2, 1410–1428. [Google Scholar] [CrossRef] [Green Version]
  171. Stowell, D. Computational Bioacoustic Scene Analysis. In Computational Analysis of Sound Scenes and Events; Virtanen, T., Plumbley, M.D., Ellis, D., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 303–333. [Google Scholar]
  172. Wood, C.M.; Popescu, V.D.; Klinck, H.; Keane, J.J.; Gutiérrez, R.J.; Sawyer, S.C.; Peery, M.Z. Detecting Small Changes in Populations at Landscape Scales: A Bioacoustic Site-Occupancy Framework. Ecol. Indic. 2019, 98, 492–507. [Google Scholar] [CrossRef]
  173. Oswald, J.N.; Erbe, C.; Gannon, W.L.; Madhusudhana, S.; Thomas, J.A. Detection and Classification Methods for Animal Sounds. In Exploring Animal Behavior Through Sound: Volume 1; Erbe, C., Thomas, J.A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 269–317. [Google Scholar]
Figure 1. Number of studies performed each year using passive acoustic data. Data were obtained by searching Google Scholar using “bioacoustics” + “category description” and then filtered per year.
Figure 2. Linking sound structure to population structure. (a) Animals are recorded, with or without consideration for non-acoustic socio-environmental variables. (b) A range of acoustic features are extracted from each sound sample, and samples are clustered along these features. (c) Acoustic structure is mapped to potential features of population structure (e.g., learning from nearby conspecifics, dialects). Numbers refer to different individuals. (d) Population structure is summarized and visualized, providing a statistical prior for inference about non-acoustic socio-environmental variables (see point a).
Figure 3. Summary of target best practices in the collection and analysis of bioacoustic data.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
