Let’s agree to disagree: Comparing auto-acoustic identification programs for northeastern bats
Abstract
With declines in abundance and changing distributions of white-nose syndrome–affected bat species, increased reliance on acoustic monitoring has become the new “normal.” As such, the ability to accurately identify individual bat species with acoustic identification programs is increasingly important. We assessed rates of disagreement between the three U.S. Fish and Wildlife Service–approved acoustic identification software programs (Kaleidoscope Pro 4.2.0, Echoclass 3.1, and Bat Call Identification 2.7d) and manual visual identification, using acoustic data collected during summers from 2003 to 2017 at Fort Drum, New York. We assessed the percentage of agreement between programs through pairwise comparisons at the total nightly count level, the individual file level (i.e., individual echolocation pass call files), and the grouped maximum likelihood estimate level (i.e., probability values that a species is misclassified as present when in fact it is absent), using preplanned contrasts, Akaike Information Criterion, and annual confusion matrices. Interprogram agreement at the individual file level was low, as measured by Cohen's kappa (0.2–0.6). However, pairwise comparisons at the site-night level indicated higher program agreement (40–90%) using single-season occupancy metrics. In comparing analytical outcomes of our different datasets (i.e., how comparable the programs and visual identification are regarding the relationship between environmental conditions and bat activity), we found high congruency in the relative rankings of the models as well as in the relative level of support for each individual model. This indicates that, when analyzing bat calls, the individual software packages support consistent ecological inference beyond the file-by-file level at the scales used by managers. Depending on their objectives, we believe our results can help users choose automated software and maximum likelihood estimate thresholds appropriate for their needs and allow better cross-comparison of studies that use different automated acoustic software.
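As a minimal illustration of the file-level agreement metric the abstract refers to, the sketch below computes Cohen's kappa from per-file species assignments made by two programs. The species codes and label sequences are hypothetical examples, not data from the study, and the function is a generic implementation rather than the authors' analysis code.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two classifiers' labels over the same set of call files."""
    assert len(labels_a) == len(labels_b) and labels_a, "need equal, non-empty label lists"
    n = len(labels_a)
    # Observed agreement: fraction of files where both programs assign the same species.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each program's marginal species frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[s] * freq_b[s] for s in set(labels_a) | set(labels_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical per-file species IDs from two programs (illustrative species codes only).
ids_program_1 = ["MYLU", "EPFU", "MYLU", "LABO", "EPFU", "MYLU"]
ids_program_2 = ["MYLU", "MYLU", "MYLU", "LABO", "EPFU", "EPFU"]
print(f"kappa = {cohens_kappa(ids_program_1, ids_program_2):.2f}")
```

Values near 0 indicate agreement no better than chance; the 0.2–0.6 range reported above corresponds to fair-to-moderate file-level agreement between programs.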
| Publication type | Article |
| --- | --- |
| Publication Subtype | Journal Article |
| Title | Let’s agree to disagree: Comparing auto-acoustic identification programs for northeastern bats |
| Series title | Journal of Fish and Wildlife Management |
| DOI | 10.3996/102018-JFWM-090 |
| Volume | 10 |
| Issue | 2 |
| Year Published | 2019 |
| Language | English |
| Publisher | Allen Press |
| Contributing office(s) | Coop Res Unit Leetown |
| Description | 16 p. |
| First page | 346 |
| Last page | 361 |