Abstract
Stemming from the traditional use of field observers to score states and events, the study of animal behaviour often relies on analyses of discrete behavioural categories. Many studies of acoustic communication record sequences of animal sounds, classify vocalizations, and then examine how call categories are used relative to behavioural states and events. However, acoustic parameters can also convey information independent of call type, offering complementary study approaches to call classifications. Animal-attached tags can continuously sample high-resolution behavioural data on sounds and movements, which enables testing how acoustic parameters of signals relate to parameters of animal motion. Here, we present this approach through case studies on wild common bottlenose dolphins (Tursiops truncatus). Using data from sound-and-movement recording tags deployed in Sarasota (FL), we parameterized dolphin vocalizations and motion to investigate how senders and receivers modified movement parameters (including vectorial dynamic body acceleration, “VeDBA”, a proxy for activity intensity) as a function of signal parameters. We show that (1) VeDBA of one female during consortships had a negative relationship with centroid frequency of male calls, matching predictions about agonistic interactions based on motivation-structural rules; (2) VeDBA of four males had a positive relationship with modulation rate of their pulsed vocalizations, confirming predictions that click-repetition rate of these calls increases with agonism intensity. Tags offer opportunities to study animal behaviour through analyses of continuously sampled quantitative parameters, which can complement traditional methods and facilitate research replication. Our case studies illustrate the value of this approach to investigate communicative roles of acoustic parameter changes.
Significance statement
Studies of animal behaviour have traditionally relied on classification of behavioural patterns and analyses of discrete behavioural categories. Today, technologies such as animal-attached tags enable novel approaches, facilitating the use of quantitative metrics to characterize behaviour. In the field of acoustic communication, researchers typically classify vocalizations and examine usage of call categories. Through case studies of bottlenose dolphin social interactions, we present here a novel tag-based complementary approach. We used high-resolution tag data to parameterize dolphin sounds and motion, and we applied continuously sampled parameters to examine how individual dolphins responded to conspecifics’ signals and moved while producing sounds. Activity intensity of senders and receivers changed with specific call parameters, matching our predictions and illustrating the value of our approach to test communicative roles of acoustic parameter changes. Parametric approaches can complement traditional methods for animal behaviour and facilitate research replication.
Introduction
Studies of animal behaviour have traditionally relied upon an observer watching animals when they are in view and categorizing their behaviours into discrete states and events (Altmann 1974). Limitations with this approach can create challenges that are common in ethology. Researchers can seldom follow wild animals continuously while reliably assessing subtle details of their behaviour, and direct observation is not possible for many taxa. Behavioural patterns are highly stereotyped in some species, but clear states and events are difficult to define in some taxa and in a variety of contexts (e.g. distinguishing movement states such as travelling and foraging-related searching in wide-ranging species, or unambiguously defining and scoring types of social behaviour within complex social systems). Finally, discretization of the flow of behaviour into units often raises questions about whether information is lost, and whether the categories identified are biologically relevant for the study species (Altmann 1974; Martin and Bateson 2007). Today, animal-attached recording devices offer opportunities to monitor a continuous stream of behaviour from individual animals in natural conditions, thus overcoming multiple limitations of traditional observational methods and opening novel possibilities for behavioural research (Ropert-Coudert and Wilson 2005).
In the research field of acoustic communication, the dominant study approach is to classify a species’ repertoire on the basis of acoustic properties of calls, and to examine usage and function of different call types (Hopp et al. 1998; Bradbury and Vehrencamp 2011). Various techniques can be used to classify animal sounds, including visual examination of spectrograms combined with aural impressions and/or comparisons to a call library (Winter et al. 1966; Fossey 1972; Ficken et al. 1978; Ford 1989; Herzing 1996; Rendell and Whitehead 2003; Ouattara et al. 2009a, b; Luís et al. 2021), multivariate analyses of acoustic parameters (Nowicki and Nelson 1990; Rendell and Whitehead 2003; Wadewitz et al. 2015), and automated methods such as machine learning (Molnár et al. 2008; Brown and Smaragdis 2009; Trawicki et al. 2005; Pandeya et al. 2018; Bermant et al. 2019; Nanni et al. 2020). Different techniques can yield contrasting results (Janik 1999), which emphasizes the importance of testing whether the study species categorizes sounds the same way as the researchers do. Sound categories can be validated through studies of categorical perception (Ehret 1987; Harnad 1987; Nelson and Marler 1989; Fischer 1998; Baugh et al. 2008; Green et al. 2020) or by demonstrating a specific differential usage of calls relative to surrounding conditions (e.g. Seyfarth et al. 1980) or features of the sender (e.g. Janik 1999). Perceptual studies are particularly important in the case of graded sounds, which could be perceived either continuously with no discontinuity or categorically on the basis of acoustic boundaries that may not be predictable a priori (Marler 1976). However, these validation studies are usually carried out in controlled settings or in the laboratory, and are not feasible for all species.
Another approach in bioacoustics research is to characterize signals through one or more acoustic metrics and to analyse these acoustic features in their own right, without an end goal of defining classes of calls. Acoustic parameters can convey information independent of what call type is used (Marler 1976; Morton 1977, 1982; Taylor and Reby 2010; Bowling et al. 2017). In many mammals, for example, formant frequencies correlate with body size of the sender (Fitch and Hauser 2002), and fundamental frequency can be a reliable indicator of features such as sex, age, size, dominance rank and individual identity (Taylor and Reby 2010; Bowling et al. 2017). Motivational states of senders of sounds can also be associated with specific acoustic properties (Owings and Morton 1998; Scherer et al. 2003; Laukka et al. 2005). An established hypothesis describing these relationships is represented by the motivation-structural rules (Morton 1977), validated in various species (Ohala 1984; Compton et al. 2001; Yin 2002) and applied in broad inter-specific studies (Morton 1982; August and Anderson 1987; Gouzoules and Gouzoules 2000). This framework postulates the existence of call acoustic features typical of aggressive versus friendly social contexts in mammals and birds, with aggressive calls being relatively low-pitched and high bandwidth, and friendly calls being high-pitched with low bandwidth (Morton 1977). Other acoustic metrics such as call amplitude, rate and duration can encode information about levels of urgency and arousal of the caller (Taylor and Reby 2010), as well as about the intensity of specific emotional states (Scherer et al. 2003; Briefer 2012, 2018). Thus, parameterization of sounds can offer valuable insights into functional attributes of acoustic signals.
Similar to acoustic signals, animal movements are also frequently resolved by researchers into discrete classes, but it can be useful to quantify motion parameters as well. Measures of dynamic body acceleration can be used as proxies for activity level and energy expenditure (Qasem et al. 2012; Wilson et al. 2006; Halsey et al. 2009). Animal acceleration, speed and orientation can be used to infer aspects of body condition (Miller et al. 2004a) and internal physiological state (Wilson et al. 2014), while the speed of specific movement patterns can correlate with individual fighting ability (Baird 2013). In studies of acoustic communication, movement metrics derived from participants in a social interaction can also help characterize the visual component of multimodal displays (Ota et al. 2015; Ronald et al. 2017), and quantify features of the social context (e.g. intensity of aggressive behaviour during agonistic interactions; Briffa 2013; Earley and Hsu 2013) or of behavioural responses to signals (e.g. approach or avoidance). Sound-and-movement recording tags enable sampling of high-resolution quantitative data on behaviour and offer opportunities to test how acoustic parameters of signals relate to patterns of animal motion. In this paper, we explore the use of animal-attached tags to derive continuous parameters for sounds and motion from senders and receivers of signals for the study of acoustic communication.
The advent of bio-logging devices in recent decades has opened new frontiers for research in a variety of animal taxa (Ropert-Coudert and Wilson 2005; Rutz and Hays 2009; Hussey et al. 2015; Kays et al. 2015). These instruments have made important contributions in many fields (e.g. movement ecology, foraging behaviour, physiology, conservation biology; Hussey et al. 2015; Kays et al. 2015), but have been less used for studies of communication (but see Jensen et al. 2011; Jensen and Tyack 2013; Stimpert et al. 2015; Arranz et al. 2016). The bottlenose dolphin (Tursiops spp.) is a good candidate to illustrate the use of tags to study acoustic communication, owing to its heavy reliance on acoustic signals (Janik 2013) and the limitations of other sensory cues in its underwater environment (Tyack 1998). These animals display a sophisticated communication system and complex forms of social interactions, but detailed information on their social behaviour is still limited compared to many terrestrial taxa, due to the inherent difficulties of behavioural research on wide-ranging aquatic organisms.
The acoustic repertoire of both bottlenose dolphin species, the common bottlenose dolphin (Tursiops truncatus) and the Indo-Pacific bottlenose dolphin (Tursiops aduncus), is traditionally classified into three broad categories: whistles, clicks and burst pulses (Janik 2009). The communicative function of whistles, especially signature whistles, has been studied extensively (Caldwell and Caldwell 1965; Tyack 1986; Janik et al. 2006; Janik and Sayigh 2013; King and Janik 2013; King et al. 2018; Jones et al. 2020), as has the role of clicks for echolocation (Au 1993; Jensen et al. 2009; Wahlberg et al. 2011). Less studied are burst pulses, which generally refer to pulsed calls with a high repetition rate, although some researchers apply this label to all sounds that are not whistles or echolocation clicks (Janik 2009). Beyond these generic broad classes, several additional call types have also been identified (Jones et al. 2019; Luís et al. 2021), but a consistent classification of these dolphins’ repertoire is still missing across different field sites and populations (Jones et al. 2019). In addition to studies based on call classification, some work has focussed on the communicative function of specific acoustic parameters of signals. Studies of dolphins in captivity reported that features of burst pulses (i.e. call duration and repetition rate of clicks) can correlate with levels of aggression (i.e. occurrence and rate of agonistic displays) during agonistic interactions (Overstrom 1983; Blomqvist and Amundin 2004; Blomqvist et al. 2005). Links between acoustic features of whistles and the behavioural context of sound production have also been documented (Janik et al. 1994; Esch et al. 2009).
Sounds produced by males during “consortships” with females (Connor et al. 1992; Smolker et al. 1992), when cycling/oestrus is likely (Wallen et al. 2017), represent an interesting case study to analyse the relationship between acoustic features of calls and the intensity of agonistic behaviour. In the mating system of bottlenose dolphins, receptive females are dispersed and asynchronous, and males often form long-term stable alliances in pairs or trios to compete for reproductive opportunities (Wells et al. 1987; Wells 2014; Connor et al. 2000; Connor and Krützen 2015). Consortships between a female and a male alliance vary in duration from a few minutes to months, and are usually interpreted as forms of sexual coercion, mate guarding or pre-emption of female choice (Connor et al. 2000, 1996; Connor and Vollmer 2009). Males can direct agonistic signals towards a consorted female, including threat calls that induce the female to remain close to them (i.e. “pops”; Connor and Smolker 1996; Vollmer et al. 2015; King et al. 2019). Interactions can escalate and involve physical aggression (e.g. chases, coordinated flanking from allied males), to which females can respond with avoidance behaviours and by bolting away attempting to escape (Connor et al. 2000; Connor and Vollmer 2009; Connor and Krützen 2015). Several sound types can be used by males in complex vocal sequences and as agonistic signals (Connor et al. 2000; Sayigh et al. 2017; King et al. 2019; Moore et al. 2020), but a precise comparison of call categories is still missing across field sites, and the detailed function of some of these signals is unclear.
Here, we use data from animal-borne sound-and-movement recording tags (Dtags: Johnson and Tyack 2003) to present a parametric approach for the study of acoustic communication. We illustrate this approach through case studies of social interactions in wild common bottlenose dolphins. In a first case study, we examine relationships between acoustic parameters of male calls and movements of one tagged female as signal receiver during consortship interactions, and we test a hypothesis based on motivation-structural rules: if increased levels of aggression are associated with a lower frequency of male calls, then we predicted a negative relationship between female movement intensity and the frequency of male sounds. In a second case study, we use four tagged males to analyse relationships between call acoustic parameters and movements of senders of sounds, and we test the relationship between click repetition rate of pulsed calls and levels of aggression, reported in captivity for the species (Overstrom 1983; Blomqvist and Amundin 2004). To test hypotheses, we derive a measure of activity intensity from tag data and apply it as a proxy for the intensity of dolphin agonistic behaviour during social interactions.
Methods
Data
Sound-and-movement recording tags were deployed from 2011 to 2019 in Sarasota Bay, FL during periodic capture-release operations for health assessment of a long-term resident bottlenose dolphin population (for details on capture-release and health assessment, see protocols described in Wells et al. 1987, 2004). Version 3 Dtags (www.soundtags.org) were deployed on dolphins of both sexes and different age classes after handling for health assessment, and were attached by hand to the dorsal surface, halfway between the blowhole and the dorsal fin, by means of suction cups. Pairs of allied males or mother-calf pairs were typically released at the same time, and we often tagged both members of the pair in such cases; overall, 67 dolphins (35 males and 32 females) were instrumented with a Dtag over 9 years, for a total of 92 individual tag deployments. Tags sampled stereo sound at 240 kHz (with in-built 80 kHz low-pass filter and 200 Hz high-pass filter), as well as depth, tri-axial accelerometer and tri-axial magnetometer data at 200 or 250 Hz; they were programmed to remain attached to the dolphins for periods of up to 24 h. After release, dolphins were followed and observed from a 7-m motorboat during the tagging period while daylight and weather conditions permitted. Behavioural observations followed a 3-min point sampling protocol (Mann 1999; McHugh et al. 2011), and ad libitum notes were taken continuously on visible behavioural events (Mann 1999); during 83 out of 92 tag deployments, concomitant focal-follow data were collected, using one of the tagged dolphins as the focal individual. Dolphins in sight were photo-identified relying on the long-term catalogue of individuals in the population. Follows of individual tagged dolphins provided contextual information to be matched with tag data, and confirmed the occurrence of social interactions along with the identity of interacting individuals.
A VHF tracking system (Cushcraft, USA; Lkarts, Norway) facilitated following the tagged animals and enabled recovery of the tags once they released from the dolphins. To minimize observer bias, blinded methods were used when behavioural data were recorded and/or analysed: observers were blind to the multivariate tag data being collected autonomously during the follows.
Data processing
Selection of social interactions
Data offloaded from tags were processed and analysed using custom functions in Matlab R2013b (Mathworks, USA; www.mathworks.com). In order to visualize tag data, we used a custom tool that allowed inspection of the sound spectrogram (60 kHz down-sampled data, FFT size of 512 points, Hamming window with 50% overlap), sound intensity envelope and depth profile of animals, synchronously in consecutive 10-s intervals (www.soundtags.org). In selecting periods of social interactions for the case studies, we focussed on sections of tag data with intense vocal activity. These sections were chosen as being particularly relevant for illustrating the analysis of continuous parameters of vocal and motion behaviour. Only periods of tag data with simultaneous focal-follow information were considered. To identify dolphins involved in interactions, we relied on focal-follow data and included all individuals within the monitored “social group”, defined as animals moving in the same general direction, engaged in similar or coordinated activities, within a distance of approximately 100 m from each other (Wells et al. 1987).
We explored two possible applications of our analytical approach: in the first case study, we examined relationships between vocal behaviour of senders of sounds and movements of a signal receiver, while in the second case study, we looked at relationships between vocal behaviour and movements of senders of calls. In order to assess whether sounds recorded by a Dtag were produced by the tagged dolphin or by another individual, we examined the angles of arrival of vocalizations (AOA, obtained from time-of-arrival differences between the two hydrophones on each tag; Johnson et al. 2006) and their root-mean-squared (RMS) received levels. Sounds were attributed to a tagged individual if they (1) had angles of arrival (throughout the call duration) consistently within ± 20 degrees of the mean AOA of echolocation clicks measured throughout that animal’s deployment (this AOA average corresponds to the orientation of the tag on the tagged individual; Johnson et al. 2009; Marrero Pérez et al. 2016), and (2) had a higher received level on that animal’s tag compared to tag recordings from another animal in the group (in cases where multiple tags were deployed simultaneously; Marrero Pérez et al. 2016). Sounds that did not meet these two criteria were attributed to non-tagged individuals. The use of AOA for determining which individual produced calls is widespread in cetacean biologging (Johnson et al. 2009). While a single AOA measurement does not provide a conclusive discrimination between the tagged individual and another individual that is positioned exactly on-axis relative to the tagged one, the assessment of multiple angles (and of their variability) throughout entire vocalizations provides stronger evidence.
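The two attribution criteria above can be illustrated with a small helper (a hypothetical Python/NumPy sketch; the function name, arguments and exact decision logic are our assumptions, not the authors' code):

```python
import numpy as np

def attribute_call(call_aoas_deg, tag_axis_aoa_deg, rms_on_this_tag,
                   rms_on_other_tag=None, tol_deg=20.0):
    """Return True if a call is attributed to the tagged animal.

    call_aoas_deg: angle-of-arrival estimates measured throughout the call.
    tag_axis_aoa_deg: mean AOA of the animal's own echolocation clicks over
        the deployment, i.e. the orientation of the tag on the body.
    rms_on_this_tag / rms_on_other_tag: RMS received levels of the same call
        on this tag and on a simultaneously deployed tag (if any).
    """
    aoas = np.asarray(call_aoas_deg, dtype=float)
    # Criterion 1: AOA stays within +/- 20 degrees of the tag axis
    # throughout the call duration.
    within_axis = np.all(np.abs(aoas - tag_axis_aoa_deg) <= tol_deg)
    # Criterion 2: higher received level on this tag than on the other
    # animal's tag (trivially satisfied when no second tag was deployed).
    louder_here = (rms_on_other_tag is None) or (rms_on_this_tag > rms_on_other_tag)
    return bool(within_axis and louder_here)
```

Sounds failing either check would be attributed to non-tagged individuals, per the criteria in the text.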
First case study, relations between male calls and female movements
Two interactions from a tag deployment on the 17-year-old female FB123 from 11 May 2015 were used for the first case study. The two interactions involved the tagged female, her 3-year-old calf (FB286, which was also tagged) and a pair of allied adult males (a different pair in each interaction). This tag deployment was selected because physiological measurements from the health assessment prior to tagging revealed high estradiol concentrations for FB123, indicating that she might have been ovulating. This suggested that the males associated with her for reproductive purposes (Connor et al. 1996; Wallen et al. 2017) and helped identify probable consortship interactions. Both interactions occurred when a pair of males encountered the mother-calf pair and joined to form a single social group. The first male alliance remained associated with the mother-calf pair for approximately 9 min, and the group spread was generally loose (21–50 m). The second male alliance remained associated with the mother-calf pair for at least 48 min (after which the focal-follow was interrupted because it became too dark), and the group spread was generally moderate (11–20 m) but at times also tight (1–10 m) or loose; the individually distinctive signature whistles (Janik and Sayigh 2013) of both allied males were recorded on Dtags in multiple separate bouts during a period of 8 h after the end of the focal-follow, suggesting a prolonged association between the males and the mother-calf pair during the night.
Further evidence suggesting a consortship context for these two interactions included the following aspects: (1) in both interactions, males rapidly approached the mother-calf pair upon joining; (2) flanking behaviour of the males towards the female was observed in the first interaction; (3) in both interactions, males produced low-frequency narrow-band pulses (Supplementary Material; spectrogram settings: 512 FFT, Blackman-Harris window; 44.1 kHz down-sampled data) that share acoustic features (e.g. frequency features, production pattern) with “pop” calls, reported to be associated with consortship behaviour in T. aduncus (for comparison, see Fig. 4 and 5 in Connor and Smolker 1996, Fig. 3A in King et al. 2019, Fig. 1 in Moore et al. 2020); (4) two days after the interactions, the female was sighted with a third different male alliance, strengthening the possibility that she was ovulating at the time (Connor et al. 1996; Connor and Krützen 2015; Wallen et al. 2017).
Since both the female and her calf were tagged, vocalizations produced by the males could be identified by exclusion. First, sounds produced by the female and the calf were identified using call AOA and received level measured from the two tags (Fig. 1). Once these calls were excluded, all remaining sounds were assumed to be from the males, since no other dolphin was present in the focal group or sighted in the area. This allowed assigning calls to the males collectively (in both consortships, the large majority of calls was from the males). At times, distinct AOA trajectories (e.g. Figure 1) indicated that both males were vocalizing. In the second interaction, vocal matching sequences of low-frequency sounds were observed between males (Moore et al. 2020).
Second case study, relations between calls and movements of vocalizing dolphins
Four interactions with different group compositions from four tag deployments on adult males were used for the second case study. Interactions occurred when the tagged male and his alliance partner encountered and joined other dolphins. This case study focussed on the relationship between sounds and concurrent movements of the tagged dolphin. In two of the interactions (those from May 2014 and May 2017; Table 1), the tagged animal was observed chasing one of the newly encountered individuals, suggesting an aggressive context. In the other two interactions, the type of social context could not be conclusively identified by the focal-follow data (for the interaction from 9 May 2012 possible avoidance behaviour was scored for one of the non-tagged dolphins; chases were not observed from tagged or non-tagged individuals). Table 1 provides details of group composition during the social interactions used for both case studies. All interactions except the one from 9 May 2012 comprised tag data collected more than 30 min after the animal release, reducing potential effects of capture procedures on the sampled dolphin behaviour.
Labelling and classification of dolphin acoustic signals
Before measuring acoustic parameters from dolphin calls, acoustic tag data were manually labelled for presence of dolphin acoustic signals, using the visualization tool previously described. Manual labelling was performed to (1) locate signals in the acoustic time series and check that they were dolphin sounds, and (2) classify signals into call types, which were used to compare the parametric approach developed in this paper with discrete call classification. Signals were classified through visual inspection of spectrograms aided with comparisons to a call library (Supplementary Methods; spectrogram settings used for plots of the call library: Hamming window with window size 2048 samples, 85% overlap, 4096 DFT; 240 kHz sampling rate) into nine categories routinely utilized for the Sarasota population: whistle, chirp, click series, buzz, burst, rasp, crack, quack, indeterminate (Sayigh et al. 2017; see Supplementary Methods for definitions of categories). Since detailed comparisons are still missing for many call categories across bottlenose dolphin species and populations, and considering the difficulties of recognizing the same sound types across sites if a mixture of acoustic and usage criteria is adopted for definitions (Jones et al. 2019), we did not re-use specific call types identified in other field sites for our classification.
Labelled sounds from periods of social interactions were retained and used for extraction of parameters if they met the following criteria: (1) they did not overlap with strong ambient noise (a Signal-to-Noise ratio, SNR, criterion was applied subsequently, but these sounds were excluded at this stage through human assessment to prevent strong ambient noise from causing erroneous estimates of high SNR that could affect acoustic measurements); (2) they were ascribed to the sound sender(s) of interest in the analyses, in each interaction; (3) they did not overlap with calls from dolphins that were not the sound sender(s) used in the analyses.
Extracting acoustic parameters from dolphin signals
We developed a method to extract acoustic parameters as time series from labelled dolphin signals in tag acoustic recordings. Dolphin vocalizations were parameterized by measuring acoustic metrics on consecutive non-overlapping 0.1-s windows throughout calls. This window length was selected to be broadly similar to the integration time of mammalian hearing (Manley et al. 2004; Tougaard et al. 2015) and the acoustic response time measured for the species (Ridgway 2011). In selecting parameters, we focussed on two simple acoustic metrics that are relevant for the hypotheses tested within the case studies: centroid frequency and amplitude modulation rate. The first is a standard and robust measure of call frequency, which is a central acoustic feature of the motivation-structural rules framework (Morton 1977). The second relates to click repetition rate in the case of dolphin pulsed calls (where signal amplitude and its modulation depend mostly on the rate with which clicks are produced), which is reported to convey information about levels of aggression (Overstrom 1983; Blomqvist and Amundin 2004). Additionally, we selected two parameters useful to separate tonal and pulsed sounds: we adopted a measure of signal occupancy in the frequency domain (15 dB bandwidth from the peak frequency), and a measure of signal occupancy in the time domain (a duty cycle measure similar to that used by Murray et al. 1998).
All sound samples used for analyses were high-pass-filtered at 400 Hz (4-pole Butterworth filter) in order to remove strong low-frequency noise (e.g. flow noise) often present in on-animal recordings. This threshold is below the frequency range of most bottlenose dolphin vocalizations (Jones et al. 2019), and from preliminary analyses with unfiltered data, it also appeared to be below the centroid frequency of low-frequency dolphin vocalizations in our dataset.
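This pre-filtering step can be sketched as follows (a minimal Python/SciPy sketch; the paper does not specify the filter implementation, so the zero-phase `sosfiltfilt` application here is an assumption):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass_400(x, fs):
    """4-pole Butterworth high-pass at 400 Hz, applied zero-phase, to strip
    low-frequency noise (e.g. flow noise) from on-animal recordings."""
    sos = butter(4, 400.0, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)
```

A zero-phase (forward-backward) pass doubles the effective filter order but avoids shifting call onsets in time, which matters when sounds are later matched to movement intervals.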
The first step for the extraction of parameters was calculation of SNR. For each call, the RMS level was measured on consecutive sound segments of 1024 points each (4.3 ms, 50% overlap between successive segments) and was divided by the RMS ambient noise immediately preceding the call, as an estimate of the time-varying SNR throughout the sound (see Supplementary Methods for details: 0.1 s of ambient noise, the same window length used for measuring acoustic parameters, was selected for each call, and the 90th percentile of the RMS levels measured on 1024-point segments of this noise was used for SNR calculations). A 6 dB SNR criterion was used to (1) exclude low-SNR calls from the analysis and (2), for high-SNR calls, precisely trim the length of a labelled sound so as to contain the dolphin vocalization (Supplementary Methods). The resulting high-SNR calls were divided into consecutive non-overlapping time windows of 0.1 s for which acoustic parameters were measured (Fig. 2). For each sound, the number of windows was equal to the sound duration in seconds divided by 0.1 s and rounded up to the nearest integer.
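The SNR estimation and windowing described above might be implemented roughly as follows (Python/NumPy; a simplified reading of the pipeline, with function names of our own invention rather than the authors' Matlab code):

```python
import numpy as np

def segment_rms(x, seglen=1024, overlap=0.5):
    """RMS level of consecutive (50%-overlapping) 1024-point segments."""
    step = int(seglen * (1 - overlap))
    starts = range(0, len(x) - seglen + 1, step)
    return np.array([np.sqrt(np.mean(x[s:s + seglen] ** 2)) for s in starts])

def call_snr_db(call, noise, seglen=1024):
    """Time-varying SNR (dB) through a call: per-segment RMS divided by the
    90th percentile of segment RMS levels in the preceding 0.1 s of noise."""
    noise_ref = np.percentile(segment_rms(noise, seglen), 90)
    return 20 * np.log10(segment_rms(call, seglen) / noise_ref)

def n_windows(duration_s, win_s=0.1):
    """Number of 0.1-s analysis windows: duration / 0.1, rounded up."""
    return int(np.ceil(duration_s / win_s))
```

Segments with SNR below the 6 dB criterion would then be excluded, and each retained call divided into `n_windows` consecutive 0.1-s analysis windows.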
Before measuring frequency parameters in the identified 0.1-s windows, 1024-point segments with low SNR were discarded. This was done to minimise the impact of noise on the spectrum of low duty cycle signals, such as clicks. The same SNR cut-off used previously was applied to the 1024-point segments of each window (50% overlap between successive segments), and the high-SNR segments were used to calculate a power spectrum (i.e. the average power in each 1024-point FFT of these remaining segments) from which the centroid frequency and spectral occupancy (i.e. 15 dB bandwidth from the peak frequency) were measured. If none of the 1024-point segments within a window presented a high SNR, the window was considered empty and acoustic measurements were not taken (such windows could occur, e.g. in case of low duty cycle and low repetition rate signals).
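The frequency measurements taken on the surviving high-SNR segments of each window can be sketched like this (Python/NumPy; the 15 dB bandwidth is taken here as the span of all spectral bins within 15 dB of the peak, which is our assumption and may differ from the authors' exact definition):

```python
import numpy as np

def spectral_params(segments, fs, nfft=1024):
    """Centroid frequency and 15-dB bandwidth from the average power
    spectrum of the high-SNR 1024-point segments of one 0.1-s window."""
    # Average power across the retained segments, per FFT bin
    spec = np.mean([np.abs(np.fft.rfft(s, nfft)) ** 2 for s in segments], axis=0)
    freqs = np.fft.rfftfreq(nfft, d=1 / fs)
    # Power-weighted mean frequency
    centroid = np.sum(freqs * spec) / np.sum(spec)
    # Spectral occupancy: width of the band within 15 dB of the peak
    thresh = spec.max() / 10 ** 1.5
    above = freqs[spec >= thresh]
    bw15 = above.max() - above.min()
    return centroid, bw15
```

If a window has no high-SNR segment, it is marked empty and no values are returned, as described in the text.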
Entire 0.1-s windows were used for calculating amplitude modulation rate and temporal occupancy, since the sound duty cycle is relevant for measuring these metrics. However, if no 1024-point segment of a window presented a high SNR, the window was considered empty and parameters were not measured. To measure amplitude modulation rate, the envelope of each window was calculated using the Hilbert transform (Au 1993) and then low-pass filtered at 4 kHz (4-pole Butterworth filter). The modulation rate was then calculated as the centroid frequency of the spectrum of this filtered envelope (FFT of 24,000 points, equal to the duration of the 0.1-s window). Temporal occupancy was calculated as the average of the unfiltered envelope divided by the envelope maximum, for each window.
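The envelope-based measures above can be sketched as follows (Python/SciPy; the paper's FFT spanned the full 0.1-s window at 240 kHz, i.e. 24,000 points, whereas this sketch spans the window at whatever sampling rate is supplied; subtracting the envelope mean before the FFT is our assumption, added to keep the DC component from dominating the centroid):

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def modulation_rate(window, fs):
    """Amplitude modulation rate of one 0.1-s window: centroid frequency
    of the spectrum of the Hilbert envelope, low-pass filtered at 4 kHz."""
    env = np.abs(hilbert(window))
    sos = butter(4, 4000.0, btype="lowpass", fs=fs, output="sos")
    env_lp = sosfiltfilt(sos, env - np.mean(env))  # assumed DC removal
    spec = np.abs(np.fft.rfft(env_lp)) ** 2        # FFT over the whole window
    freqs = np.fft.rfftfreq(len(env_lp), d=1 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

def temporal_occupancy(window):
    """Duty-cycle proxy: mean of the unfiltered envelope over its maximum."""
    env = np.abs(hilbert(window))
    return np.mean(env) / np.max(env)
```

For a pulsed call, the envelope spectrum is dominated by the click repetition rate, so this metric tracks how fast clicks are produced.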
Extracting movement parameters
Numerous methods have been described for calculating motion and posture parameters from high-resolution tag data, and we used three standard metrics to parameterize dolphin motion from Dtag data. To quantify activity levels, the RMS of tri-axial high-pass-filtered acceleration was used (this is equivalent to vectorial dynamic body acceleration (Qasem et al. 2012), here obtained using a custom FIR filter, and is hereafter called simply VeDBA). Calculations used data with a decimated sampling rate of 25 Hz and a high-pass filter of 0.5 Hz; the 0.5-Hz filter cut-off was about 70% of the dominant stroke frequency over entire tag deployments, a typical threshold for dynamic body acceleration (which allows accounting for regular swimming as well as faster motions; Martin López et al. 2015, 2016, 2021). This parameter was averaged over 2-s intervals and used as a proxy for the intensity of dolphin agonistic behaviour during social interactions. In addition to VeDBA, two parameters quantifying changes in orientation were measured. The Euler angles of pitch, roll and heading were estimated following Johnson and Tyack (2003) and were used to calculate absolute changes in pointing angle (i.e. changes in orientation of the longitudinal axis of an animal; Miller et al. 2004b) and absolute changes in roll, over 2-s intervals. Before estimating Euler angles, sensor data were corrected for orientation of the tag in case it had slid from its original position, and decimated accelerometer and magnetometer data (0.5 Hz sampling rate) were used for calculations. Each of our three movement parameters was calculated on consecutive non-overlapping intervals from the start of each social interaction.
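The VeDBA calculation and 2-s averaging can be sketched as follows (Python/SciPy; the paper's custom FIR filter is not specified, so the `firwin` design and tap count below are our assumptions):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def vedba(acc, fs=25, fc=0.5, numtaps=None):
    """VeDBA proxy: per-sample norm of tri-axial high-pass-filtered
    acceleration. acc: (n, 3) array in g, already decimated to 25 Hz."""
    numtaps = numtaps or int(4 * fs / fc) | 1      # odd-length FIR (assumed design)
    taps = firwin(numtaps, fc, fs=fs, pass_zero=False)
    dyn = filtfilt(taps, [1.0], acc, axis=0)       # removes static (gravity) component
    return np.sqrt(np.sum(dyn ** 2, axis=1))

def mean_over_intervals(x, fs, interval_s=2.0):
    """Average a per-sample series over consecutive non-overlapping intervals."""
    n = int(fs * interval_s)
    m = len(x) // n
    return x[: m * n].reshape(m, n).mean(axis=1)
```

High-pass filtering above 0.5 Hz leaves stroking and faster motions in the dynamic component while the near-constant gravity vector is removed, so the resulting norm tracks activity intensity.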
Statistical analysis
Statistical analyses were performed in RStudio (RStudio, USA; www.rstudio.com). To examine relationships between the acoustic and movement parameters, each movement parameter was modelled separately as a function of the acoustic parameters of concomitant vocalizations. Specifically, acoustic measurements from 0.1-s windows were averaged over the same time intervals that were used for calculating movement parameters (any time interval without call acoustic measurements yielded a “not-available” (NA) data-point for each of the acoustic metrics). Therefore, measures of different parameters were matched on the basis of the time interval over which they had been calculated, and time series of acoustic and movement metrics with equal resolution were used for the modelling. Based on the relation between call frequency and social context postulated by motivation-structural rules (Morton 1977), for our first case study, we predicted a negative relationship between female VeDBA and centroid frequency of male calls: as the intensity of agonistic signalling increased, we expected female activity levels to rise (e.g. reflecting increased female avoidance) and the frequency of male calls to fall. Based on earlier results on the usage of pulsed calls in bottlenose dolphin social interactions, with higher click repetition rates associated with higher levels of arousal (as observed by Overstrom 1983 and postulated by Blomqvist and Amundin 2004), for the second case study, we predicted a positive relationship between VeDBA and amplitude modulation rate of these signals. In the case of changes in pointing angle and changes in roll, we did not have specific predictions for how these dynamics of postural change might relate to call parameters, and these two metrics were used for exploratory analyses to illustrate possible applications of our approach.
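The matching of acoustic and movement time series can be sketched as follows (an illustrative helper; the function name and the use of NaN for “NA” intervals are ours):

```python
import numpy as np

def align_acoustic(times, values, interval=2.0, n_intervals=None):
    """Average per-window acoustic measurements (taken at the given times,
    in seconds from the start of the interaction) onto fixed intervals;
    intervals with no call measurements become NaN ("NA")."""
    times = np.asarray(times, float)
    values = np.asarray(values, float)
    idx = (times // interval).astype(int)
    if n_intervals is None:
        n_intervals = idx.max() + 1
    out = np.full(n_intervals, np.nan)
    for k in range(n_intervals):
        sel = idx == k
        if sel.any():
            out[k] = values[sel].mean()
    return out
```

The resulting series has the same 2-s resolution as the movement metrics and can be paired with them directly in the models.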
To model movement parameters, Generalized Estimating Equations (GEEs) were used in order to account for serial correlation (geepack R package). GEEs allow the temporal autocorrelation within clustered data to be modelled explicitly, and they estimate population averages for model predictors (Hardin and Hilbe 2002). Call acoustic parameters and a categorical social interaction-ID variable were used as predictors. To account for the possibility of non-linear relationships between the response variable and predictors, model formulas were chosen during exploratory analysis in order to best approximate the shape of relationships (by setting the link function, either as identity or log link, and/or by log-transforming explanatory variables; Gaussian error distribution was used). In order to assess autocorrelation and define a suitable correlation structure, Generalized Linear Models (GLMs) were fit to the data (with the same model formula chosen for GEEs) and residuals were examined with the autocorrelation function. The length of the clustering unit in the GEEs was set according to the number of time lags with significant autocorrelation in GLM residuals, and AR-1 autocorrelation was assumed within each unit (Zuur et al. 2009). Model selection was performed using QIC values (the AIC-type criterion for GEEs; Pan 2001) and applying the parsimony principle for choosing among models with different numbers of predictors and small differences in QIC (i.e. ∆QIC < 2).
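The choice of the cluster length can be illustrated as below (a sketch under one common reading of the criterion: we count consecutive leading lags of the residual autocorrelation function exceeding the approximate 95% white-noise bound):

```python
import numpy as np

def significant_acf_lags(resid, max_lag=20):
    """Count consecutive leading lags whose sample autocorrelation exceeds
    the approximate 95% significance bound 1.96/sqrt(n); this count would
    then set the length of the clustering unit in the GEEs."""
    r = np.asarray(resid, float)
    r = r - r.mean()
    n = r.size
    denom = (r ** 2).sum()
    bound = 1.96 / np.sqrt(n)
    count = 0
    for k in range(1, max_lag + 1):
        acf_k = (r[:-k] * r[k:]).sum() / denom  # biased ACF estimator
        if abs(acf_k) <= bound:
            break
        count += 1
    return count
```

Applied to residuals of a GLM with the same formula, strongly autocorrelated residuals yield long clustering units, while near-white residuals yield units of length zero or one.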
GEEs were fit again using qualitative presence-absence of call categories as explanatory variables (including interaction-ID as a predictor) instead of acoustic parameters, to compare the use of parameters and discrete call types to predict the selected movement parameters. For each movement metric, models with parametric and categorical predictors were compared using information criteria.
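The presence-absence coding can be sketched as follows (an illustrative helper; the category names are those observed in our first case study):

```python
import numpy as np

CALL_TYPES = ["click series", "whistle", "chirp", "quack", "burst", "indeterminate"]

def presence_absence(interval_calls):
    """Binary predictors for call categories: one column per category,
    since a single 2-s interval can contain multiple call types.
    interval_calls: list (one entry per interval) of lists of call types."""
    out = np.zeros((len(interval_calls), len(CALL_TYPES)), dtype=int)
    for i, calls in enumerate(interval_calls):
        for call in calls:
            out[i, CALL_TYPES.index(call)] = 1
    return out
```

Each column then enters the GEE as a separate binary explanatory variable, alongside interaction-ID.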
Results
Male–female interactions from tag deployment on a female
Each of the two consortship interactions analysed from the tag deployment on the female FB123 involved two allied males. The males produced signals in concentrated bouts at the onset of encounters, lasting respectively 47 and 205 s from the first to the last sound in the two interactions. Calls could be assigned to the males collectively (see the Methods section for details) but not always to a specific non-tagged individual, so sounds from allied males were merged and kept as a single data stream in the analyses; this is consistent with the cooperative behaviour of males during reproductive encounters (Connor et al. 1996, 2000; Connor and Krützen 2015; Moore et al. 2020), which suggests that a female would respond to their combined signalling in these contexts. No evidence of foraging behaviour was found from focal-follows or tag acoustic data (i.e. occurrence of “buzz” signals; Supplementary Methods) during the periods analysed, which suggests that movements of the tagged female were not related to feeding.
Each movement parameter measured from the tagged female was modelled with GEEs as a function of the selected four acoustic parameters of concomitant male calls, including all male signals regardless of call categories. In the two interactions, male calls corresponded respectively to 283 and 868 0.1-s sound windows with high SNR. Acoustic measurements from those windows were averaged over 2-s intervals, thus obtaining time series with the same resolution as the movement metrics (Fig. 3; Supplementary Results). The sample size was 100 data-points overall for the two interactions (corresponding to 200 s of tag data). The best model of female VeDBA retained only the centroid frequency of male sounds as a predictor (effect size of − 0.02 g/log(Hz); Table 2), showing a significant negative relation with acceleration (Fig. 4). Change in pointing angle was also best explained by centroid frequency (effect size of 0.03 log(radians)/kHz), while the best model of change in roll retained spectral occupancy (effect size of 0.02 log(radians)/occupancy) and temporal occupancy (effect size of 0.08 log(radians)/occupancy) of calls (Table 2).
GEEs were fit again explaining movement parameters as a function of call categories of male vocalizations. Since 42% of the 2-s intervals used for calculating movement metrics comprised multiple call types, a single explanatory variable was not suitable, and a binary presence-absence variable was calculated for each of the six call categories observed (click series, whistle, chirp, quack, burst and indeterminate) and was included in the model. The best model of VeDBA retained two call types, quacks and chirps, as significant predictors (quacks, effect size of 0.07 g if present; chirps, effect size of − 0.06 g if present), and offered a similar goodness of fit compared to the model calculated with centroid frequency as a predictor (∆QIC = 0.4). Change in female roll was better explained when using parameters as predictors (∆QIC = 2.5), with a negative effect of indeterminate sounds observed when using categorical variables (effect size of − 0.69 log(radians) if present). Finally, the best model of change in female pointing angle retained the occurrence of indeterminate sounds and quacks, with a slightly lower QIC compared to the use of quantitative model predictors (∆QIC = 0.7; Table 2).
Interactions from male tag deployments
Four interactions were analysed from four tag deployments on adult males. In all interactions, the tagged male produced primarily pulsed calls, including the “burst” and “rasp” categories utilized in this paper (both falling within the broad group of “burst pulses”; Janik 2009), as well as stereotyped whistles in one interaction. Given the low diversity of call types observed and the prevalence of burst pulses, these interactions were particularly suited to test communicative functions of these signals, and analyses focussed on the relationship between acoustic parameters of these calls and movements of the vocalizing animal. In the four interactions, burst pulses of the tagged dolphin corresponded respectively to 196, 225, 125 and 205 0.1-s sound windows with high SNR, spread over a period of 216, 252, 169 and 572 s from the first to the last signal. No evidence of foraging from the tagged animal was found from focal-follow or tag data concomitant to the vocalizations analysed.
GEEs were calculated for VeDBA, change in pointing angle and change in roll as a function of modulation rate and temporal occupancy of burst pulses (centroid frequency and spectral occupancy were not analysed in this case study because these frequency parameters, particularly in the case of pulsed calls, can be strongly altered in sounds of a tagged animal as recorded on its own tag; Johnson et al. 2009). The sample size was 93 data-points overall for the four social interactions. For male VeDBA and change in pointing angle, model selection retained modulation rate and the interaction-ID variable, with modulation rate showing a significant positive relationship with both VeDBA (effect size of 0.25 g/kHz; p < 0.001) and change in pointing angle (effect size of 0.51 radians/kHz; p = 0.03). For change in roll, modulation rate and temporal occupancy were retained, respectively with an effect size of 0.26 radians/kHz (p < 0.001) and 0.03 radians/occupancy (p < 0.001).
The same analyses were also performed using a categorical variable for male vocalizations as predictor. Unlike in the previous case study, a single predictor was sufficient to express the occurrence of the “burst” versus the “rasp” category in each time interval. Call type was rejected during model selection for both VeDBA and change in pointing angle. These movement parameters were better explained when modelled as a function of acoustic parameters rather than as a function of call type (∆QIC of 14 and 5.1 respectively). Call type was retained when modelling change in roll, with larger changes in roll observed when bursts were produced (effect size of 0.19 radians; p < 0.0001). However, this model offered a lower goodness of fit compared to the best model of change in roll obtained with parametric predictors (∆QIC of 6.6).
To examine whether the relationship between movement metrics and modulation rate of burst pulses changed between interactions with and without a documented aggressive context, models were fit again using modulation rate and occurrence of a chase (categorical variable) as predictors, including an interaction term between the two (interaction-ID was not included due to lack of sufficient data-points to estimate all model parameters). Adding chase as a predictor improved goodness of fit only in the case of VeDBA (∆QIC = 10.4). The best model of VeDBA retained modulation rate (p < 0.0001) and the interaction term between explanatory variables (p = 0.006), estimating a positive effect of modulation rate on VeDBA equal to 0.37 g/kHz and 0.12 g/kHz respectively for interactions with and without a chase (Fig. 5).
Discussion
Many studies of animal behaviour rely on classification of behavioural patterns and analyses of discrete behavioural categories. However, modern technologies offer continuous sampling of fine-scale behavioural data, which enables novel analytical approaches that characterize behaviour through quantitative metrics. Continuously sampled behavioural parameters are not frequently used in some fields (e.g. acoustic communication studies), but they can bring multiple advantages: (1) they avoid challenges of defining a complete set of discrete behavioural categories that are biologically relevant for the study species; (2) they facilitate comparison and replication of studies, which can be problematic in the case of categories defined solely by human observers. In practice, analyses of parameters may require contextual information about the behaviour being sampled and studied; this is most readily obtained via discrete classes, so parametric and categorical approaches can work synergistically. The case studies presented here exemplify how analysis of continuous parameters can efficiently integrate with behavioural categories and observations, thus complementing and extending traditional research methods for animal behaviour.
In this study, we first developed a method to extract call acoustic parameters as time series from tag data. We then applied acoustic and movement parameters to examine how receivers move concurrent with acoustic signals, and how signallers modify movements along with their own vocalizations. In the following, we discuss the selected case studies of dolphin social interactions, and details of our methodology adopted to parameterize tag acoustic data. Finally, we discuss the broader relevance of the proposed approach for studies of acoustic communication and animal behaviour.
Social interactions
Testing motivation-structural rules and functions of burst pulses
To illustrate the utility of our parametric approach, we used bottlenose dolphin social interactions to test hypotheses about acoustic communication that are framed in terms of parameters. In the first case study, two probable consortships from a tag deployment on a female were used to test motivation-structural rules, by examining the relation between female movements and male sounds. Existing research indicates that females are the intended signal receivers of some male calls in these contexts (Connor and Smolker 1996; King et al. 2019). Additionally, it seems likely that females would attend to sounds directed by one male to his alliance partner, given that males cooperate to herd females (Wells et al. 1987; Connor et al. 1996; Connor and Krützen 2015), and that females can face considerable individual costs if they do not respond rapidly to male calls (Connor and Smolker 1996). To test the relations predicted by Morton (1977), we used VeDBA (Qasem et al. 2012) calculated from the tagged female as a proxy for female activity intensity during interactions. Bottlenose dolphin consortships constitute forms of mate guarding and sexual coercion, and can often include an agonistic component between males and the consorted female (Connor and Krützen 2015). Multiple behavioural patterns observed in these interactions underpin the link between female activity levels and intensity of agonistic signalling that we expect in these contexts and rely on to test Morton’s rule. Females, for example, often show avoidance behaviour and may bolt away in an attempt to escape, to which males have been shown to respond with chases and an increase in threat displays (Connor and Smolker 1996). In response to a sequence of threat displays initiated by males, females can submissively respond by quickly moving closer (Connor and Smolker 1996; Moore et al. 2020) or show avoidance and even physical aggression responses (Connor and Smolker 1996; Connor et al. 1996).
In all these cases, female movement intensity is expected to positively correlate with the intensity of agonism and with the intensity of male agonistic signalling within the 2-s intervals used here for analyses.
Of the four acoustic parameters examined, the centroid frequency of male vocalizations was the single best predictor of female VeDBA, showing a significant negative relationship with activity levels. This matches predictions of motivation-structural rules on the relation between agonistic behaviour and acoustic features of calls (Morton 1977): intensity of agonistic behaviour was higher when males produced low-frequency sounds and decreased as the frequency of male calls increased. The study of motivation-structural rules has traditionally relied on comparisons of call acoustic parameters between different behavioural contexts (August and Anderson 1987). Here, we investigated Morton’s hypothesis within specific social interactions, using a continuous estimate of the intensity of agonistic behaviour. Our approach requires identifying a metric measurable with tags to use as proxy for the intensity of the “motivational system” (McFarland 1985) of interest. This may require restricting analyses to data from specific social contexts, as we did for consortships. While this still implies the use of observer judgement, identifying general types of social interactions can be much easier and less arbitrary compared to defining a discrete set of behavioural contexts and ranking them in order of intensity for a motivational system. Measures of dynamic body acceleration, such as VeDBA, are valuable proxies of activity level and energy expenditure (Halsey et al. 2009). If measured longitudinally during agonistic interactions, they may approximate well the intensity of agonism, especially for animals where agonistic behaviour involves high-energy movements and displays (such as bottlenose dolphins; Connor et al. 2000).
In this case study, we used acoustic parameters as model predictors. Female bottlenose dolphins are known to respond to male calls during consortships, but males adjust their signalling to female movements too (Connor and Smolker 1996). Methods such as time-lag analyses could offer insights into the contribution of these concurrent bidirectional links and into causality relations. However, these aspects are beyond the main focus of motivation-structural rules, which postulate correlations. Given the rapid responses of females to males and vice versa during dolphin consortships, we expect the 2-s windows used in the present analyses to be long enough to capture concurrent links between female and male behaviour.
In the second case study, we used tag data from males to investigate the function of pulsed vocalizations. Based on existing results (Overstrom 1983; Blomqvist and Amundin 2004), we hypothesized that the intensity of threat communicated by these calls increases with their modulation rate. We predicted that this acoustic parameter would positively correlate with the intensity of agonistic behaviour, approximated by VeDBA of call senders during agonistic interactions. A chase from the tagged dolphin towards another individual was observed during two social interactions, and was taken as indicative of an agonistic context. Chases are often part of agonistic interactions (Janik 2015) and have been used to recognize aggressive behaviour in the species (Overstrom 1983; Connor et al. 1996, 2000). We found a significant positive relationship between VeDBA of the tagged dolphins and the modulation rate of pulsed calls that they produced, both for interactions where chasing was observed (stronger effect size) and for the remaining two interactions. Results from interactions with a chase match our predictions: dolphins increased the modulation rate of pulsed calls as the intensity of their agonistic behaviour increased. The relationship found for the remaining interactions may indicate that pulsed calls encode information about arousal levels in different social contexts. This case study focussed on call categories within the burst pulses group, exemplifying again how parametric approaches can be used in combination with categories of behaviour (e.g. for testing hypotheses about a specific context or signal type). A link between modulation rate of pulsed sounds and agonistic behaviour has been documented in other cetaceans as well (such as harbour porpoises, Phocoena phocoena, which modify click-repetition rate of calls across behavioural contexts; Clausen et al. 2010).
Grading of acoustic parameters and graded vocalizations are likely to play important roles in the communication systems of cetaceans (Janik et al. 1994; Murray et al. 1998; Rehn et al. 2007; Jones et al. 2019). A tag-based parametric approach opens valuable possibilities for research on these wide-ranging elusive species, even though to this day the tagging of free-ranging small delphinids remains challenging (Andrews et al. 2019) and capture-release procedures are not feasible in many sites.
Analyses of posture changes
When modelling female movements as a function of male calls, we observed a positive relationship between centroid frequency and changes in pointing angle, and a positive relationship of spectral and temporal occupancy with changes in roll. Interpreting these patterns of postural changes is difficult without detailed data on the position of senders relative to the receiver over time, and the main goal of these analyses was to illustrate a possible application of our parametric approach. Changes in pointing angle of a female during consortships might relate to avoidance behaviours or might reflect responses to threat calls adopted to remain close to the males (Connor and Smolker 1996). Changes in roll may reflect social displays or might also be forms of avoidance (for example if a female rolls her ventrum away from a flanking male). Future studies could investigate the biological significance of these movements by monitoring the location of senders and receivers, and by relating this to the motion parameters examined.
Analysis of tag deployments on males showed a positive relation between modulation rate of pulsed calls and changes in pointing angle of the vocalizing animal, as well as a positive relation of modulation rate and temporal occupancy with changes in roll. This is consistent with a usage of these sounds as part of multimodal displays, where a visual component of displays correlates with an acoustic one. Pulsed calls can indeed be produced during agonistic interactions simultaneously with aggressive visual displays (Overstrom 1983; Blomqvist and Amundin 2004).
Acoustic parameters and call categories as predictors of movement parameters
In both case studies, we compared acoustic parameters and presence-absence of call categories as model predictors of movement metrics. The parametric predictors offered a similar goodness of fit (2 out of 6 model fits; ∆QIC < 2) or a better goodness of fit (4 out of 6 model fits; ∆QIC > 2). These results suggest that, in bottlenose dolphins, (1) receivers of acoustic signals can modify their movements on the basis of quantitative acoustic properties of calls, rather than only on the basis of discrete call categories, and (2) senders can make movements that correlate in some features with acoustic parameters of their signals, either reflecting movement patterns associated with calls, or visual components of graded multimodal displays. Our chosen acoustic parameters appear to encode information that is relevant in the communication system of the species, and may provide important additional information to call categories. This highlights that, while there are open questions on how to categorize the full bottlenose dolphin call repertoire (Jones et al. 2019), patterns of acoustic communication can fruitfully be studied by analysing acoustic parameters in their own right.
Parameterization of acoustic tag data
In developing a method to extract selected acoustic parameters as time series from vocalizations, a first challenge is to precisely locate sounds (including all call types) within acoustic recordings. We used manual labelling to identify dolphin signals, and we applied a SNR criterion to exclude weak calls that could be inaccurately parameterized. The manual processing of tag acoustic recordings is time-consuming, and risks introducing subjectivity. However, no automatic method is currently available to recognize all bottlenose dolphin sounds in varying conditions of ambient noise, nor to fully automate the exclusion of signals overlapping with strong ambient noise. Although methods exist for automatic detection of cetacean sounds (Mellinger et al. 2007; Zimmer 2011), these are typically restricted to specific call categories of interest. The development of new techniques (e.g. methods based on machine learning; Molnár et al. 2008; Bermant et al. 2019) may improve scope and reliability of automatic labelling.
A second challenge of parameterizing calls is to ensure consistency when extracting parameters from sounds with different structures, particularly from continuous as well as pulsed calls. Again, methods exist for extracting acoustic parameters from cetacean vocalizations, either as single values per call (Gillespie 2004) or as multiple values measured within a call (Deecke et al. 1999; Gillespie et al. 2013), but they are usually restricted to specific call types (although see the recent software Luscinia (rflachlan.github.io/Luscinia/), which offers novel opportunities for sound parameterization). The ability to consistently measure parameters from any given call is a key feature of our approach. For this aspect, we relied again on a SNR criterion, discarding low-SNR portions of recordings before measuring frequency parameters, and thus avoiding biased measurements in the case of low duty cycle signals.
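The SNR screening can be sketched as follows (the 10-dB threshold and the assumption of a known noise-floor RMS are placeholders, not the values used in the study; the segment length follows the 1024-point segments of the Methods):

```python
import numpy as np

def high_snr_segments(window, noise_rms, seg_len=1024, snr_db=10.0):
    """Boolean mask over consecutive seg_len-sample segments of a window,
    True where segment RMS exceeds the noise floor by snr_db decibels;
    a window with no True segment would be treated as empty."""
    n_seg = window.size // seg_len
    segs = window[: n_seg * seg_len].reshape(n_seg, seg_len)
    seg_rms = np.sqrt((segs ** 2).mean(axis=1))
    snr = 20.0 * np.log10(seg_rms / noise_rms)
    return snr > snr_db
```

Only segments passing the mask would then contribute to frequency-parameter measurements, avoiding biased estimates for low duty cycle signals.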
An additional consideration regards the sampling interval. Assessing call features by simply measuring parameters on entire vocalizations is appropriate for many studies, but it does not generate time series with a constant resolution that are desirable for some analyses. Here, we used a 0.1-s time window to measure acoustic parameters (relevant duration for auditory processing in mammals and in dolphins; Ridgway 2011; Tougaard et al. 2015), followed by averaging over longer intervals to match the measurement interval of movement parameters. A window length of 0.1 s is too short to capture relevant patterns of posture change and to quantify activity levels of locomotion, which led us to choose a 2-s sampling interval for movement metrics (a duration deemed suitable to examine dolphin interactions, and compatible with the dominant stroke frequency in the deployments analysed).
While these challenges arise in any study aiming to extract time series of acoustic parameters from vocalizations, the method specifics used here (e.g. chosen SNR criterion and window length) are fine-tuned solutions suitable for our study system. Future applications of our approach should tailor methods for extracting acoustic metrics to the features of the study species and to the scientific questions being asked. In the case of tag data, choices of parameters should also consider the impact of the tag location on the body on some metrics (e.g. due to tissue transmission of low frequency components and directionality of high frequencies; Johnson et al. 2009).
Significance and applications of the parametric approach
Parameterization of animal behaviour is well suited to test hypotheses that are based on quantitative variables. However, hypotheses in ethology are often framed in terms of behavioural categories, which reflects the traditional methods used to collect data in this field (Altmann 1974; Martin and Bateson 2007). With the advent of new instruments that facilitate sampling quantitative data on the stream of behaviour, ethologists may pose more questions that are framed in terms of behavioural metrics, and that treat animal behaviour as a multivariate quantitative pattern (Dial et al. 2008; Brown and de Bivort 2018). This will offer valuable complementary study approaches to traditional categorical analyses.
Examples of technologies that enable parameterization of behaviour include animal-attached tags and remote-sensing devices such as stationary high-speed video cameras, which present different advantages and limitations. Tags allow sampling data from a consistent position on the tagged individual in natural conditions, including data from multiple sensors, but they require effective methods of attachment for a given species and a design for minimal impact on the study animals. Fixed cameras and acoustic monitors can collect behavioural data from multiple animals at the same time, such as data on location, fine-scale movements, posture, visual communication signals, physical social interactions, and infrared imagery of temperature in the case of cameras (e.g. Noldus et al. 2001; Fusani et al. 2007; Geberl et al. 2015; Ota et al. 2015; Mathis et al. 2018), and data on vocal behaviour as well as location information in the case of acoustic monitors (Van Parijs et al. 2009; Zimmer 2011; Gibb et al. 2019). However, sampling is often limited to one sensor type, cameras require predictable locations of individuals, and acoustic monitors may not allow characterizing the behaviour of specific individuals within groups. Data collected with tags and remote-sensing devices initially require contextual information in order to make inferences about the type of behaviours being sampled (similar to the case studies presented here). This is one example of how observational methods can be efficiently integrated with these instruments, and of how categorical contextual data and continuously sampled parameters can complement each other. If reliable signatures associated with specific behaviours or behavioural contexts are identified in the parameters analysed (e.g. Ydesen et al. 2014; Tennessen et al. 2019), it is then possible to extend analyses to periods when observational data are not available. 
Among other benefits, quantitative behavioural parameters can offer advantages from a statistical standpoint, since the use of categorical instead of continuous data (as well as the categorization of continuous variables) often results in the reduction of statistical power (Donner and Eliasziw 1994; Irwin and McClelland 2003).
In the field of acoustic communication, the use of behavioural parameters offers opportunities to investigate the communicative role of acoustic parameter changes, across categories or embedded within call type. Acoustic parameters can convey a wide range of information in their own right, including information on body size, sex, age, individual identity, dominance, fighting ability, intentions, social context, environment, arousal, and motivational and emotional states (Owings and Morton 1998; Sousa-Lima et al. 2002, 2008; Taylor and Reby 2010; Bradbury and Vehrencamp 2011). The first features listed change gradually over the lifetime of an individual or may even be fixed, and so vary mainly across individuals. By contrast, intentions, arousal, surrounding context and motivational states can change quickly and within social interactions. Our parametric approach can examine parameter changes that convey these latter types of information, which are relevant in many kinds of interactions and can be difficult to study through observational methods alone. Animal contests, agonistic and mating interactions are examples of research areas where this approach could be particularly useful: information about motivation, intention and arousal is often critical during agonistic and mating encounters, and interactions can evolve rapidly and be difficult for observers to discretize into categories. Our approach could also be valuable to investigate the direct links between motor patterns associated with vocal production and acoustic properties of resulting vocalizations (Suthers and Goller 1997; Scherer et al. 2003). Sound-and-movement recording tags stand out as effective tools to study acoustic communication by means of behavioural parameters, since they provide continuous multivariate data on both vocal and non-vocal behaviour sampled from the tagged individual.
However, standard acoustic recorders combined with high-resolution cameras could be just as effective in some settings, and would be applicable to a wide range of species for which tagging is challenging but continuous visual and acoustic monitoring is possible for extended periods of time.
If studies of acoustic communication focus exclusively on call categories, important communicative roles of signal parameters may be overlooked. An example of this from a different communication modality is the well-known honeybee dance described by von Frisch (1967), used by bees to signal direction and distance of a new food source to their conspecifics. If von Frisch had limited his work to describing the existence of the waggle dance and the round dance, without quantifying and studying specific parameters of these movements, a Nobel-winning discovery of a sophisticated communication mechanism might not have been made.
Data availability
The datasets analysed during the current study are available in the figshare repository at https://doi.org/10.6084/m9.figshare.14040218 and https://doi.org/10.6084/m9.figshare.14130338.
Code availability
Not applicable.
References
Altmann J (1974) Observational study of behaviour: sampling methods. Behaviour 49:227–266
Andrews RD, Baird RW, Calambokidis J et al (2019) Best practice guidelines for cetacean tagging. J Cetacean Res Manage 20:27–66
Arranz P, DeRuiter SL, Stimpert AK, Neves S, Friedlaender AS, Goldbogen JA, Visser F, Calambokidis J, Southall BL, Tyack PL (2016) Discrimination of fast click-series produced by tagged Risso’s dolphins (Grampus griseus) for echolocation or communication. J Exp Biol 219:2898–2907
Au WWL (1993) The sonar of dolphins. Springer, New York
August PV, Anderson JG (1987) Mammal sounds and motivation-structural rules: a test of the hypothesis. J Mammal 68:1–9
Baird TA (2013) Lizards and other reptiles as model systems for the study of contest behaviour. In: Hardy CW, Briffa M (eds) Animal contests. Cambridge University Press, New York, pp 258–286
Baugh AT, Akre KL, Ryan MJ (2008) Categorical perception of a natural, multivariate signal: mating call recognition in túngara frogs. Proc Natl Acad Sci U S A 105:8985–8988
Bermant PC, Bronstein MM, Wood RJ, Gero S, Gruber DF (2019) Deep machine learning techniques for the detection and classification of sperm whale bioacoustics. Sci Rep 9:12588
Blomqvist C, Amundin M (2004) High-frequency burst-pulse sounds in agonistic/aggressive interactions in bottlenose dolphins, Tursiops truncatus. In: Thomas J, Moss C, Vater M (eds) Echolocation in Bats and Dolphins. University of Chicago Press, Chicago, pp 425–431
Blomqvist C, Mello I, Amundin M (2005) An acoustic play-fight signal in bottlenose dolphins (Tursiops truncatus) in human care. Aquat Mamm 31:187–194
Bowling DL, Garcia M, Dunn JC, Ruprecht R, Stewart A, Frommolt KH, Fitch WT (2017) Body size and vocalization in primates and carnivores. Sci Rep 7:41070
Bradbury JW, Vehrencamp SL (2011) Principles of animal communication, 2nd edn. Sinauer Associate Inc, Sunderland
Briefer EF (2012) Vocal expression of emotions in mammals: mechanisms of production and evidence. J Zool 288:1–20
Briefer EF (2018) Vocal contagion of emotions in non-human animals. Proc R Soc B 285:20172783
Briffa M (2013) Contests in crustaceans: assessments, decisions and their underlying mechanisms. In: Hardy CW, Briffa M (eds) Animal contests. Cambridge University Press, New York, pp 86–112
Brown AEX, de Bivort B (2018) Ethology as a physical science. Nat Phys 14:653–657
Brown JC, Smaragdis P (2009) Hidden Markov and Gaussian mixture models for automatic call classification. J Acoust Soc Am 125:EL221–EL224
Caldwell DK, Caldwell MC (1970) Etiology of the chirp sounds emitted by the Atlantic bottlenosed dolphin: a controversial issue. Underw Nat 6:6–8
Caldwell MC, Caldwell DK (1965) Individualized whistle contours in bottlenosed dolphins (Tursiops truncatus). Nature 207:434–435
Caldwell MC, Haugen RM, Caldwell DK (1962) High-energy sound associated with fright in the dolphin. Science 138:907–908
Clausen KT, Wahlberg M, Beedholm K, DeRuiter SL, Madsen PT (2010) Click communication in harbour porpoises Phocoena phocoena. Bioacoustics 20:1–28
Compton LA, Clarke JA, Seidensticker J, Ingrisano DR (2001) Acoustic characteristics of white-nosed coati vocalizations: a test of motivation-structural rules. J Mammal 82:1054–1058
Connor RC, Krützen M (2015) Male dolphin alliances in Shark Bay: changing perspectives in a 30-year study. Anim Behav 103:223–235
Connor RC, Richards AF, Smolker RA, Mann J (1996) Patterns of female attractiveness in Indian Ocean bottlenose dolphins. Behaviour 133:37–69
Connor RC, Smolker RA (1996) ‘Pop’ goes the dolphin: a vocalization male bottlenose dolphins produce during consortships. Behaviour 133:643–662
Connor RC, Smolker RA, Richards AF (1992) Two levels of alliance formation among male bottlenose dolphins (Tursiops sp.). Proc Natl Acad Sci U S A 89:987–990
Connor RC, Vollmer N (2009) Sexual coercion in dolphin consortships: a comparison with chimpanzees. In: Muller MN, Wrangham RW (eds) Sexual coercion in primates: an evolutionary perspective on male aggression against females. Harvard University Press, Cambridge, pp 218–243
Connor RC, Wells RS, Mann J, Read AJ (2000) The bottlenose dolphin: social relationships in a fission-fusion society. In: Mann J, Connor RC, Tyack PL, Whitehead H (eds) Cetacean societies: field studies of dolphins and whales. University of Chicago Press, Chicago, pp 91–126
Deecke VB, Ford JK, Spong P (1999) Quantifying complex patterns of bioacoustic variation: use of a neural network to compare killer whale (Orcinus orca) dialects. J Acoust Soc Am 105:2499–2507
Dial KP, Greene E, Irschick DJ (2008) Allometry of behavior. Trends Ecol Evol 23:394–401
Donner A, Eliasziw M (1994) Statistical implications of the choice between a dichotomous or continuous trait in studies of interobserver agreement. Biometrics 50:550–555
Earley RL, Hsu Y (2013) Contest behaviour in fishes. In: Hardy CW, Briffa M (eds) Animal contests. Cambridge University Press, New York, pp 199–227
Ehret G (1987) Categorical perception of sound signals: facts and hypotheses from animal studies. In: Harnad S (ed) Categorical perception: the groundwork of cognition. Cambridge University Press, New York, pp 301–331
Esch HC, Sayigh LS, Blum JE, Wells RS (2009) Whistles as potential indicators of stress in bottlenose dolphins (Tursiops truncatus). J Mammal 90:638–650
Ficken MS, Ficken RW, Witkin SR (1978) Vocal repertoire of the black-capped chickadee. Auk 95:34–48
Fischer J (1998) Barbary macaques categorize shrill barks into two call types. Anim Behav 55:799–807
Fitch WT, Hauser MD (2002) Unpacking “honesty”: vertebrate vocal production and the evolution of acoustic signals. In: Simmons AM, Popper AN, Fay RR (eds) Acoustic communication. Springer, New York, pp 65–137
Ford JK (1989) Acoustic behaviour of resident killer whales (Orcinus orca) off Vancouver Island, British Columbia. Can J Zool 67:727–745
Fossey D (1972) Vocalizations of the mountain gorilla (Gorilla gorilla beringei). Anim Behav 20:36–53
Fusani L, Giordano M, Day LB, Schlinger BA (2007) High-speed video analysis reveals individual variability in the courtship displays of male golden-collared manakins. Ethology 113:964–972
Geberl C, Brinkløv S, Wiegrebe L, Surlykke A (2015) Fast sensory-motor reactions in echolocating bats to sudden changes during the final buzz and prey intercept. Proc Natl Acad Sci U S A 112:4122–4127
Gibb R, Browning E, Glover-Kapfer P, Jones KE (2019) Emerging opportunities and challenges for passive acoustics in ecological assessment and monitoring. Methods Ecol Evol 10:169–185
Gillespie D (2004) Detection and classification of right whale calls using an ‘edge’ detector operating on a smoothed spectrogram. Can Acoust 2:39–47
Gillespie D, Caillat M, Gordon J, White P (2013) Automatic detection and classification of odontocete whistles. J Acoust Soc Am 134:2427–2437
Gouzoules H, Gouzoules S (2000) Agonistic screams differ among four species of macaques: the significance of motivation-structural rules. Anim Behav 59:501–512
Green PA, Brandley NC, Nowicki S (2020) Categorical perception in animal communication and decision-making. Behav Ecol 31:859–867
Halsey LG, Shepard ELC, Quintana F, Laich AG, Green JA, Wilson RP (2009) The relationship between oxygen consumption and body acceleration in a range of species. Comp Biochem Physiol A 152:197–202
Hardin JW, Hilbe JM (2002) Generalized estimating equations. Chapman and Hall/CRC, New York
Harnad SR (1987) Categorical perception: the groundwork of cognition. Cambridge University Press, Cambridge
Herzing DL (1996) Vocalizations and associated underwater behavior of free-ranging Atlantic spotted dolphins, Stenella frontalis and bottlenose dolphins, Tursiops truncatus. Aquat Mamm 22:61–80
Hopp SL, Owren MJ, Evans CS (1998) Animal acoustic communication: sound analysis and research methods. Springer, Berlin
Hussey NE, Kessel ST, Aarestrup K et al (2015) Aquatic animal telemetry: a panoramic window into the underwater world. Science 348:1255642
Irwin JR, McClelland GH (2003) Negative consequences of dichotomizing continuous predictor variables. J Mark Res 40:366–371
Janik VM (1999) Pitfalls in the categorization of behaviour: a comparison of dolphin whistle classification methods. Anim Behav 57:133–143
Janik VM (2009) Acoustic communication in delphinids. Adv Stud Behav 40:123–157
Janik VM (2013) Cognitive skills in bottlenose dolphin communication. Trends Cogn Sci 17:157–159
Janik VM (2015) Play in dolphins. Curr Biol 25:R7–R8
Janik VM, Sayigh LS (2013) Communication in bottlenose dolphins: 50 years of signature whistle research. J Comp Physiol A 199:479–489
Janik VM, Sayigh LS, Wells RS (2006) Signature whistle shape conveys identity information to bottlenose dolphins. Proc Natl Acad Sci U S A 103:8293–8297
Janik VM, Todt D, Dehnhardt G (1994) Signature whistle variations in a bottlenosed dolphin, Tursiops truncatus. Behav Ecol Sociobiol 35:243–248
Jensen FH, Bejder L, Wahlberg M, Madsen PT (2009) Biosonar adjustments to target range of echolocating bottlenose dolphins (Tursiops sp.) in the wild. J Exp Biol 212:1078–1086
Jensen FH, Marrero Pérez J, Johnson M, Aguilar de Soto N, Madsen PT (2011) Calling under pressure: short-finned pilot whales make social calls during deep foraging dives. Proc R Soc Lond B 278:3017–3025
Jensen FH, Tyack PL (2013) Studying acoustic communication in pilot whale social groups. J Acoust Soc Am 134:4006–4006
Johnson M, Aguilar de Soto N, Madsen PT (2009) Studying the behaviour and sensory ecology of marine mammals using acoustic recording tags: a review. Mar Ecol Prog Ser 395:55–73
Johnson M, Madsen PT, Zimmer WM, Aguilar de Soto N, Tyack PL (2006) Foraging Blainville’s beaked whales (Mesoplodon densirostris) produce distinct click types matched to different phases of echolocation. J Exp Biol 209:5038–5050
Johnson MP, Tyack PL (2003) A digital acoustic recording tag for measuring the response of wild marine mammals to sound. IEEE J Ocean Eng 28:3–12
Jones B, Zapetis M, Samuelson MM, Ridgway S (2019) Sounds produced by bottlenose dolphins (Tursiops): a review of the defining characteristics and acoustic criteria of the dolphin vocal repertoire. Bioacoustics 18:1–42
Jones BL, Daniels R, Tufano S, Ridgway S (2020) Five members of a mixed-sex group of bottlenose dolphins share a stereotyped whistle contour in addition to maintaining their individually distinctive signature whistles. PLoS One 15:e0233658
Kays R, Crofoot MC, Jetz W, Wikelski M (2015) Terrestrial animal tracking as an eye on life and planet. Science 348:aaa2478
King SL, Allen SJ, Krützen M, Connor RC (2019) Vocal behaviour of allied male dolphins during cooperative mate guarding. Anim Cogn 22:991–1000
King SL, Friedman WR, Allen SJ, Gerber L, Jensen FH, Wittwer S, Connor RC, Krützen M (2018) Bottlenose dolphins retain individual vocal labels in multi-level alliances. Curr Biol 28:1993–1999
King SL, Janik VM (2013) Bottlenose dolphins can use learned vocal labels to address each other. Proc Natl Acad Sci U S A 110:13216–13221
Laukka P, Juslin P, Bresin R (2005) A dimensional approach to vocal expression of emotion. Cogn Emot 19:633–653
Luís AR, May-Collado LJ, Rako-Gospić N, Gridley T, Papale E, Azevedo A, Silva MA, Buscaino G, Herzing D, Dos Santos ME (2021) Vocal universals and geographic variations in the acoustic repertoire of the common bottlenose dolphin. Sci Rep 11:11847
Manley GA, Popper AN, Fay RR (2004) Evolution of the vertebrate auditory system. Springer, New York
Mann J (1999) Behavioral sampling methods for cetaceans: a review and critique. Mar Mammal Sci 15:102–122
Marler P (1976) Social organization, communication and graded signals: the chimpanzee and the gorilla. In: Bateson PPG, Hinde RA (eds) Growing points in ethology. Cambridge University Press, Cambridge, pp 239–280
Marrero Pérez J, Jensen FH, Rojano-Doñate L, Aguilar de Soto N (2016) Different modes of acoustic communication in deep-diving short-finned pilot whales (Globicephala macrorhynchus). Mar Mammal Sci 33:59–79
Martin B, Bateson P (2007) Measuring behaviour: an introductory guide. Cambridge University Press, Cambridge
Martin López LM, Aguilar de Soto N, Miller PJ, Johnson M (2016) Tracking the kinematics of caudal-oscillatory swimming: a comparison of two on-animal sensing methods. J Exp Biol 219:2103–2109
Martin López LM, Madsen PT, Aguilar de Soto N, Johnson M (2021) Overall dynamic body acceleration measures activity differently on large vs small aquatic animals. Methods Ecol Evol 13:447–448
Martin López LM, Miller PJ, Aguilar de Soto N, Johnson M (2015) Gait switches in deep-diving beaked whales: biomechanical strategies for long-duration dives. J Exp Biol 218:1325–1338
Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, Bethge M (2018) DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 21:1281–1289
McFarland D (1985) Animal Behaviour. Benjamin Cummings, Menlo Park
McHugh KA, Allen JB, Barleycorn AA, Wells RS (2011) Natal philopatry, ranging behavior, and habitat selection of juvenile bottlenose dolphins in Sarasota Bay, Florida. J Mammal 92:1298–1313
Mellinger DK, Stafford KM, Moore SE, Dziak RP, Matsumoto H (2007) An overview of fixed passive acoustic observation methods for cetaceans. Oceanography 20:36–45
Miller LA, Pristed J, Møhl B, Surlykke A (1995) The click-sounds of narwhals (Monodon monoceros) in Inglefield Bay, Northwest Greenland. Mar Mammal Sci 11:491–502
Miller PJ, Johnson MP, Tyack PL (2004a) Sperm whale behaviour indicates the use of echolocation click buzzes ‘creaks’ in prey capture. Proc R Soc Lond B 271:2239–2247
Miller PJ, Johnson MP, Tyack PL, Terray EA (2004b) Swimming gaits, passive drag and buoyancy of diving sperm whales Physeter macrocephalus. J Exp Biol 207:1953–1967
Molnár C, Kaplan F, Roy P, Pachet F, Pongrácz P, Dóka A, Miklósi Á (2008) Classification of dog barks: a machine learning approach. Anim Cogn 11:389–400
Moore BL, Connor RC, Allen SJ, Krützen M, King SL (2020) Acoustic coordination by allied male dolphins in a cooperative context. Proc R Soc B 287:20192944
Morton ES (1977) On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. Am Nat 111:855–869
Morton ES (1982) Grading, discreteness, redundancy, and motivation-structural rules. In: Kroodsma DE, Miller EH (eds) Acoustic Communication in Birds. Academic Press, New York, pp 183–212
Murray SO, Mercado E, Roitblat HL (1998) Characterizing the graded structure of false killer whale (Pseudorca crassidens) vocalizations. J Acoust Soc Am 104:1679–1688
Nanni L, Brahnam S, Lumini A, Maguolo G (2020) Animal sound classification using dissimilarity spaces. Appl Sci 10:8578
Nelson DA, Marler P (1989) Categorical perception of a natural stimulus continuum: birdsong. Science 244:976–978
Noldus LP, Spink AJ, Tegelenbosch RA (2001) EthoVision: a versatile video tracking system for automation of behavioral experiments. Behav Res Methods Instrum Comput 33:398–414
Nowicki S, Nelson DA (1990) Defining natural categories in acoustic signals: comparison of three methods applied to ‘chick-a-dee’ call notes. Ethology 86:89–101
Ohala JJ (1984) An ethological perspective on common cross-language utilization of F0 of voice. Phonetica 41:1–16
Ota N, Gahr M, Soma M (2015) Tap dancing birds: the multimodal mutual courtship display of males and females in a socially monogamous songbird. Sci Rep 5:16614
Ouattara K, Lemasson A, Zuberbühler K (2009a) Campbell’s monkeys concatenate vocalizations into context-specific call sequences. Proc Natl Acad Sci U S A 106:22026–22031
Ouattara K, Zuberbühler K, N’goran EK, Gombert JE, Lemasson A (2009b) The alarm call system of female Campbell’s monkeys. Anim Behav 78:35–44
Overstrom NA (1983) Association between burst-pulse sounds and aggressive behavior in captive Atlantic bottlenosed dolphins (Tursiops truncatus). Zoo Biol 2:93–103
Owings DH, Morton ES (1998) Animal vocal communication: a new approach. Cambridge University Press, Cambridge
Pan W (2001) Akaike’s information criterion in generalized estimating equations. Biometrics 57:120–125
Pandeya YR, Kim D, Lee J (2018) Domestic cat sound classification using learned features from deep neural nets. Appl Sci 8:1949
Pirotta E, Matthiopoulos J, MacKenzie M, Scott-Hayward L, Rendell L (2011) Modelling sperm whale habitat preference: a novel approach combining transect and follow data. Mar Ecol Prog Ser 436:257–272
Qasem L, Cardew A, Wilson A, Griffiths I, Halsey LG, Shepard EL, Gleiss AC, Wilson R (2012) Tri-axial dynamic acceleration as a proxy for animal energy expenditure; should we be summing values or calculating the vector? PLoS One 7:e31187
Rehn N, Teichert S, Thomsen F (2007) Structural and temporal emission patterns of variable pulsed calls in free-ranging killer whales (Orcinus orca). Behaviour 144:307–329
Rendell LE, Whitehead H (2003) Comparing repertoires of sperm whale codas: a multiple methods approach. Bioacoustics 14:61–81
Ridgway SH (2011) Neural time and movement time in choice of whistle or pulse burst responses to different auditory stimuli by dolphins. J Acoust Soc Am 129:1073–1080
Ronald KL, Zeng R, White DJ, Fernández-Juricic E, Lucas JR (2017) What makes a multimodal signal attractive? A preference function approach. Behav Ecol 28:677–687
Ropert-Coudert Y, Wilson RP (2005) Trends and perspectives in animal-attached remote sensing. Front Ecol Environ 3:437–444
Rutz C, Hays GC (2009) New frontiers in biologging science. Biol Lett 5:289–292
Sayigh L, Dziki A, Janik VM, Kim E, McHugh K, Tyack PL, Wells RS, Jensen FH (2017) Non-whistle sounds used in bottlenose dolphin aggressive interactions recorded on DTAGs. Poster session presentation at the meeting of the European Cetacean Society, 31st annual conference, Middelfart, Denmark
Scherer KR, Johnstone T, Klasmeyer G (2003) Vocal expression of emotion. In: Davidson RJ, Scherer KR, Goldsmith HH (eds) Handbook of affective sciences. Oxford University Press, New York, pp 433–456
Seyfarth RM, Cheney DL, Marler P (1980) Vervet monkey alarm calls: semantic communication in a free-ranging primate. Anim Behav 28:1070–1094
Smolker RA, Richards AF, Connor RC, Pepper J (1992) Association patterns among bottlenose dolphins in Shark Bay, Western Australia. Behaviour 123:38–69
Sousa-Lima RS, Paglia AP, da Fonseca GA (2002) Signature information and individual recognition in the isolation calls of Amazonian manatees, Trichechus inunguis (Mammalia: Sirenia). Anim Behav 63:301–310
Sousa-Lima RS, Paglia AP, da Fonseca GA (2008) Gender, age, and identity in the isolation calls of Antillean manatees (Trichechus manatus manatus). Aquat Mamm 34:109–122
Stimpert AK, DeRuiter SL, Falcone EA et al (2015) Sound production and associated behavior of tagged fin whales (Balaenoptera physalus) in the Southern California Bight. Anim Biotelemetry 3:23
Suthers RA, Goller F (1997) Motor correlates of vocal diversity in songbirds. In: Nolan V, Ketterson ED, Thompson CF (eds) Current ornithology. Plenum Press, New York, pp 235–288
Taylor AM, Reby D (2010) The contribution of source–filter theory to mammal vocal communication research. J Zool 280:221–236
Tennessen JB, Holt MM, Hanson MB, Emmons CK, Giles DA, Hogan JT (2019) Kinematic signatures of prey capture from archival tags reveal sex differences in killer whale foraging activity. J Exp Biol 222:jeb191874
Tougaard J, Wright AJ, Madsen PT (2015) Cetacean noise criteria revisited in the light of proposed exposure limits for harbour porpoises. Mar Pollut Bull 90:196–208
Trawicki MB, Johnson MT, Osiejuk TS (2005) Automatic song-type classification and speaker identification of Norwegian Ortolan Bunting (Emberiza hortulana) vocalizations. In: 2005 IEEE Workshop on Machine Learning for Signal Processing, pp 277–282
Tyack PL (1986) Whistle repertoires of two bottlenosed dolphins, Tursiops truncatus: mimicry of signature whistles? Behav Ecol Sociobiol 18:251–257
Tyack PL (1998) Acoustic communication under the sea. In: Hopp SL, Owren MJ, Evans CS (eds) Animal Acoustic Communication: Sound Analysis and Research Methods. Springer, Berlin, pp 163–220
Van Parijs SM, Clark CW, Sousa-Lima RS, Parks SE, Rankin S, Risch D, Van Opzeeland IC (2009) Management and research applications of real-time and archival passive acoustic sensors over varying temporal and spatial scales. Mar Ecol Prog Ser 395:21–36
Vollmer NL, Hayek LAC, Heithaus MR, Connor RC (2015) Further evidence of a context-specific agonistic signal in bottlenose dolphins: the influence of consortships and group size on the pop vocalization. Behaviour 152:1979–2000
von Frisch K (1967) The dance language and orientation of bees. Harvard University Press, Cambridge
Wadewitz P, Hammerschmidt K, Battaglia D, Witt A, Wolf F, Fischer J (2015) Characterizing vocal repertoires—hard vs. soft classification approaches. PLoS One 10:e0125785
Wahlberg M, Jensen FH, Aguilar Soto N, Beedholm K, Bejder L, Oliveira C, Rasmussen M, Simon M, Villadsgaard A, Madsen PT (2011) Source parameters of echolocation clicks from wild bottlenose dolphins (Tursiops aduncus and Tursiops truncatus). J Acoust Soc Am 130:2263–2274
Wallen MM, Krzyszczyk E, Mann J (2017) Mating in a bisexually philopatric society: bottlenose dolphin females associate with adult males but not adult sons during estrous. Behav Ecol Sociobiol 71:153
Wells RS (2014) Social structure and life history of bottlenose dolphins near Sarasota Bay, Florida: insights from four decades and five generations. In: Yamagiwa J, Karczmarski L (eds) Primates and cetaceans: field research and conservation of complex mammalian societies. Springer, Tokyo, pp 149–172
Wells RS, Rhinehart HL, Hansen LJ, Sweeney JC, Townsend FI, Stone R, Casper DR, Scott MD, Hohn AA, Rowles TK (2004) Bottlenose dolphins as marine ecosystem sentinels: developing a health monitoring system. EcoHealth 1:246–254
Wells RS, Scott MD, Irvine AB (1987) The social structure of free-ranging bottlenose dolphins. In: Genoways HH (ed) Current mammalogy. Springer, New York, pp 247–305
Wilson RP, Grundy E, Massy R et al (2014) Wild state secrets: ultra-sensitive measurement of micro-movement can reveal internal processes in animals. Front Ecol Environ 12:582–587
Wilson RP, White CR, Quintana F, Halsey LG, Liebsch N, Martin GR, Butler PJ (2006) Moving towards acceleration for estimates of activity-specific metabolic rate in free-living animals: the case of the cormorant. J Anim Ecol 75:1081–1090
Winter P, Ploog D, Latta J (1966) Vocal repertoire of the squirrel monkey (Saimiri sciureus), its analysis and significance. Exp Brain Res 1:359–384
Ydesen KS, Wisniewska DM, Hansen JD, Beedholm K, Johnson M, Madsen PT (2014) What a jerk: prey engulfment revealed by high-rate, super-cranial accelerometry on a harbour seal (Phoca vitulina). J Exp Biol 217:2239–2243
Yin S (2002) A new perspective on barking in dogs (Canis familiaris). J Comp Psychol 116:189–193
Zimmer WM (2011) Passive acoustic monitoring of cetaceans. Cambridge University Press, Cambridge
Zuur AF, Ieno EN, Walker NJ, Saveliev AA, Smith GM (2009) Mixed effects models and extensions in ecology with R. Springer, New York
Acknowledgements
We thank the staff of the Chicago Zoological Society’s Sarasota Dolphin Research Program for their assistance with capture-release operations, and for the long-term research monitoring in Sarasota Bay that provided essential data for this study. We also thank the many volunteers who helped during the fieldwork throughout the years. Thanks to Vincent Janik, Frants Jensen and Laela Sayigh for their help with the data collection and for their comments on this manuscript. Thanks to Lindesay Scott-Hayward for advice on the statistical analysis. Thanks to the Associate Editor, Janet Mann, and to two anonymous reviewers for their comments and suggestions, which considerably improved this work.
Funding
We are grateful for financial support received from our funders. Dolphin Quest, Inc. provided core funding for tagging opportunities during dolphin health assessments. MC was supported by the School of Biology of the University of St Andrews, the Scottish Universities Life Sciences Alliance (SULSA), and the Strategic Environmental Research and Development Program (grant RC-20-1097). PLT was supported by ONR grants N00014-18-1-2062 and N00014-21-1-2096, the Strategic Environmental Research and Development Program grants RC-20-1097, RC-20-7188, and RC-21-3091, and the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland). MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. MJ was supported by the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 754513 and the Aarhus University Research Foundation, as well as by the MASTS pooling initiative.
Author information
Contributions
Conceptualization and study design, MC, MJ, and PLT; fieldwork and data collection, MC, KAM, RSW, and PLT; development of the methodology, MC, MJ, and PLT; data analysis and writing of the original draft, MC; review and editing of the original draft, MC, MJ, and PLT; review and editing of the final draft, all authors.
Ethics declarations
Ethics approval
The study received the following permits and approvals: National Marine Fisheries Service Scientific Research Permit No. 20455 & No. 15543; Florida Fish and Wildlife Conservation Commission, Manatee Protection Zone Permit No. MPZ04-0004–17; Mote Marine Laboratory IACUC approval No. 17–10-RW1; University of St Andrews, Biology School Ethics Committee approval, Ref SEC18008. The study adhered to the ASAB/ABS Guidelines for the Use of Animals in Research.
Consent for publication
All authors consent to the current publication.
Conflict of interest
The authors declare no competing interests.
Additional information
Communicated by J. Mann
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Casoli, M., Johnson, M., McHugh, K.A. et al. Parameterizing animal sounds and motion with animal-attached tags to study acoustic communication. Behav Ecol Sociobiol 76, 59 (2022). https://doi.org/10.1007/s00265-022-03154-0