Which aspects of musical sound are important for how we perceive timing and rhythm at the micro level? And how does this vary with cultural background and learning?
About the project
The Timing and Sound in Musical Micro-rhythm (TIME) project investigates microrhythmic relations in four rhythmic genres: jazz, electronic dance music, R&B/hip-hop, and Scandinavian folk music.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks. In the same vein, research on rhythm processing in music supports the notion that perceptual timing precision varies systematically with the sharpness of acoustic onset edges, as conceptualized in the beat bin hypothesis: increased envelope sharpness induces increased precision in localizing a sound in time. Despite this tight relationship between envelope shape and temporal processing, it is currently unknown how the brain uses predictive information about envelope features to optimize temporal perception. With the current EEG study, we show that the predicted sharpness of the amplitude envelope is encoded by pre-target neural activity in the beta band (15–25 Hz) and has an impact on the temporal perception of target sounds. We used probabilistic sound cues in a timing judgment task to inform participants about the sharpness of the amplitude envelope of an upcoming target sound embedded in a beat sequence. The predictive information about the envelope shape modulated task performance and pre-target beta power. Interestingly, these conditional beta-power modulations correlated positively with behavioral performance in the timing judgment task and with perceptual temporal precision in a click-alignment task. This study provides new insight into the neural processes underlying prediction of the sharpness of the amplitude envelope during beat perception, which modulate the temporal perception of sounds.
This finding could reflect a process that is involved in temporal prediction, exerting top-down control on neural entrainment via the prediction of acoustic edges in the auditory stream.
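The beat bin hypothesis ties localization precision to the steepness of a sound's onset edge. As a rough illustration of how that steepness might be quantified (here as the maximum rise rate of a synthetic linear-attack envelope; the envelope model and sample rate are illustrative assumptions, not the study's analysis pipeline):

```python
def linear_attack_envelope(attack_ms, dur_ms=200, sr=1000):
    """Synthetic amplitude envelope: linear ramp 0 -> 1 over
    `attack_ms`, followed by a sustained plateau (sr = samples/s)."""
    n_attack = max(1, int(sr * attack_ms / 1000))
    n_total = int(sr * dur_ms / 1000)
    return [min(1.0, i / n_attack) for i in range(n_total)]

def envelope_sharpness(env, sr):
    """Onset sharpness: maximum rise of the envelope per second."""
    return max((b - a) * sr for a, b in zip(env, env[1:]))

# A 5 ms attack yields a far steeper acoustic edge than an 80 ms attack,
# and should therefore be easier to localize precisely in time.
sharp = envelope_sharpness(linear_attack_envelope(5), 1000)
smooth = envelope_sharpness(linear_attack_envelope(80), 1000)
```

On this toy measure the 5 ms attack scores 200 amplitude units per second against 12.5 for the 80 ms attack, mirroring the sharp/smooth contrast the cueing experiment manipulates.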
Danielsen, Anne; Paulsrud, Thea Sørli & London, Justin (2025). The influence of vocal expertise on the perception of microrhythm in song and speech. Attention, Perception & Psychophysics, ISSN 1943-3921, 87, pp. 1750–1770. doi: 10.3758/s13414-025-03057-y.
Musical expertise affects the perception of the temporal location of a musical sound, often to a significant degree. In a study involving jazz musicians, electronic dance music (EDM) producers, and Norwegian folk musicians, Danielsen et al. (Attention, Perception & Psychophysics, 84, 599–615, 2022) found significant between-group differences regarding the mean location, as well as the variability around that mean, for a set of control and genre-specific musical sounds. The present study extends these findings to singers who are experts in jazz versus classical music genres. In two experiments, participants were presented with sung (Exp. 1) and sung and spoken (Exp. 2) syllables. In both, the task was to synchronize either taps or a click track with a looped target sound. In both experiments, we found that classical participants tended to place the mean location later (relative to the acoustic onset) than jazz participants, and likewise showed greater variability. Interestingly, and contrary to our hypothesis, this between-group difference persisted even when the stimuli were spoken rather than sung. The current study gives further insight into how musical expertise impacts the low-level processing of musical sounds and provides a window into the interaction between top-down and bottom-up aspects of music and auditory perception more generally. In addition, it provides insight into how musicians approach musical and quasi-musical tasks, as well as the way they perceive the acoustic aspects of sound both as a musical object in its own right and as a cue for perception-action coupling with their fellow musicians.
Haugen, Mari Romarheim (2024). Musical Meter as Shape: An Embodied Perspective on Metrical Trajectories and Curves. In Jensenius, Alexander Refsum (Ed.), Sonic Design: Explorations Between Art and Science. Springer Nature. ISBN 9783031578922, pp. 37–48. doi: 10.1007/978-3-031-57892-2_3.
The perception of musical rhythm includes not only the sonic rhythm but also the endogenous reference structures, such as meter. Musical meter is often described and understood as points in time or durations between such points. In this chapter, I argue that musical meter also has a shape. I propose that we perceive and make sense of musical meter based on our previous musical experiences involving meter-related bodily motion. In other words, the meter-related motion is integral to the perceived meter—they are the same. Meter thus has a shape that relates to the embodied sensations of these movements. Also crucial is the notion that musical meter is conditioned by musical culture. This perspective on meter as shape is highly influenced by Godøy's motor-mimetic perspective on music perception and musical shape cognition and concurs with the multimodal approach to sonic design that acknowledges motion as intrinsic to music performance and perception.
The TIME project: Timing and Sound in Musical Microrhythm (2017–2022) studied microrhythm; that is, how dynamic envelope, timbre, and center frequency, as well as the microtiming of a variety of sounds, affect their perceived rhythmic properties. The project involved theoretical work regarding the basic aspects of microrhythm; experimental studies of microrhythm perception, exploring both stimulus features and the participants’ enculturated expertise; observational studies of how musicians produce particular microrhythms; and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that: (a) altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs), (b) there are systematic effects of core acoustic factors (duration, attack) on microrhythmic perception, (c) microrhythmic features in longer and more complex sounds can give rise to different perceptions of the same sound, and (d) musicians are highly aware of microrhythms and have developed vocabularies for describing them. In addition, our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm. Our use of multiple, interdisciplinary methodologies enabled us to uncover the complexity of microrhythm perception and production in both laboratory and real-world musical contexts.
Oddekalv, Kjell Andreas (2024). Rap as Composite Auditory Streams: Techniques and Approaches for Chimericity Through Layered Vocal Production in Hip-Hop, and their Aesthetic Implications. In Gullö, Jan-Olof; Hepworth-Sawyer, Russ; Paterson, Justin; Toulson, Rob & Marrington, Mark (Eds.), Innovation in Music: Cultures and Contexts. Routledge. ISBN 9781032611167.
Danielsen, Anne & Jacobsen, Eirik (2023). “Hard” or “Soft”: Shaping Microtiming through Sonic Features in Jazz-Related Groove Performance. Journal of Jazz Studies (JJS), 14(2). doi: 10.14713/jjs.v14i2.258.
Recent research has shown that the shape, timbre, and intensity of a sound influence the perception of its timing at the micro level of rhythm. In this case study of contemporary Norwegian jazz, we investigate to what extent jazz musicians intentionally use sonic features to shape the micro level of rhythm in their performances. First, we provide an overview of existing research into microrhythm in jazz and auditory perception studies. Then we present results from interviews with five expert jazz musicians about how playing techniques and sound qualities of specific instruments are utilized to influence the perception of timing. We also analyze two selected performances by the musicians to explore the effects of interaction between sonic features and timing in a musical context. The article concludes by discussing the results in the context of findings from empirical research into microlevel auditory perception regarding the perceptual center of sounds (P-center) and auditory stream segregation. The study demonstrates the need to move related research beyond the temporal domain, and to cultivate a more holistic approach to what constitutes groove in jazz and related genres.
Lartillot, Olivier; Johansson, Mats Sigvard; Elowsson, Anders; Monstad, Lars Løberg & Cyvin, Mattias Storås (2023). A Dataset of Norwegian Hardanger Fiddle Recordings with Precise Annotation of Note and Beat Onsets. Transactions of the International Society for Music Information Retrieval, 6(1), pp. 186–202. doi: 10.5334/TISMIR.139.
Danielsen, Anne; Johansson, Mats Sigvard; Brøvig, Ragnhild; Sandvik, Bjørnar Ersland & Bøhler, Kjetil Klette (2023). Shaping rhythm: timing and sound in five groove-based genres. Popular Music, ISSN 0261-1430, 39(1). doi: 10.1017/S0261143023000041.
Shaping events at the microlevel of rhythm is an important aspect of many groove-based musics. In the present study, we explore the interconnectedness of musical parameters such as timing, attack shape, timbre and relative intensity in creating groove through investigating musicians' and producers' discourse in five genres (jazz, samba, electronic dance music, hip-hop and traditional Scandinavian fiddle music). Through semi-structured interviews, we found both genre-specific accounts of how such musical features interact at the microlevel of rhythm and a cross-generic focus on inducing movement by shaping sound and generating rhythmic friction. The study empirically substantiates the multiparameter nature of musical performance and experience, and shows that particular genre-typical configurations of temporal and sonic features are needed to create the experience of groove. It thereby adds to the scholarly discourse on groove, which has often taken a more general and time-oriented view of rhythm.
Câmara, Guilherme Schmidt; Sioros, George; Nymoen, Kristian; Haugen, Mari Romarheim & Danielsen, Anne (2023). Sound-producing actions in guitar performance of groove-based microrhythm. Empirical Musicology Review, 18(1), pp. 21–36. doi: 10.18061/emr.v18i1.9124.
This paper reports on an experiment that investigated how guitarists signal the intended timing of a rhythmic event in a groove-based context via three different features related to sound-producing motions of impulsive chord strokes (striking velocity, movement duration and fretboard position). Twenty-one expert electric guitarists were instructed to perform a simple rhythmic pattern in three different timing styles—“laid-back,” “on-the-beat,” and “pushed”—in tandem with a metronome. Results revealed systematic differences across participants in the striking velocity and movement duration of chords in the different timing styles. In general, laid-back strokes were played with lower striking velocity and longer movement duration relative to on-the-beat and pushed strokes. No differences in the fretboard striking position were found (neither closer to the “bridge” [bottom] nor to the “neck” [head]). Correlations with previously reported audio features of the guitar strokes were also investigated: lower velocity and longer movement duration generally corresponded with longer acoustic attack duration (signal onset to offset).
Danielsen, Anne; Johansson, Mats Sigvard & Stover, Chris (2023). Bins, Spans, Tolerance: Three Theories of Microtiming Behavior. Music Theory Spectrum, ISSN 0195-6167, 45(2). doi: 10.1093/mts/mtad005.
This study compares three recent theories of expressive microtiming in music. While each theory was originally designed to engage a particular musical genre—Anne Danielsen’s beat bins for funk, Neo-Soul, and other contemporary Black musical expressions, Chris Stover’s beat span for “timeline musics” from Africa and the African diaspora, and Mats Johansson’s rhythmic tolerance for Scandinavian fiddle music—we consider how they can productively coexist in a shared music-analytic space, each revealing aspects of musical structure and process in mutually reinforcing ways. In order to explore these possibilities, we bring all three theories to bear on a recording of Thelonious Monk’s “Monk’s Dream,” focusing on Monk’s piano gestures as well as the relationship between saxophonist Charlie Rouse’s improvised solo and Monk’s and bassist John Ore’s accompaniments.
Haugen, Mari Romarheim; Câmara, Guilherme Schmidt; Nymoen, Kristian & Danielsen, Anne (2023). Instructed timing and body posture in guitar and bass playing in groove performance. Musicae Scientiae, ISSN 1029-8649. doi: 10.1177/10298649231182039.
Body movements play a crucial role in music performance and perception, and they do so well beyond those devoted to sound production itself. Various movements related to the performer’s emotional intentions or structural aspects of the music are also part of the performance and crucial to the listening experience. In the present study, we investigated the effect of instructed timing on such non-sound producing body movements, focusing on musicians’ body posture. We used an infrared motion-capture system to record the movements of skilled guitarists and bassists while they were playing electric guitar and electric bass, respectively. We instructed the musicians to perform under three different timing-style conditions: laid-back (behind), on-the-beat, and pushed (ahead). We also conducted short semistructured interviews to gain further insight into their movement strategies. The results show that performers generally leaned forward when instructed to play systematically slightly ahead of the pulse. We suggest that this change is related to an alteration in the performer’s experience of the feel of the music. The results support the view that musicians’ non-sound-producing body movements are not random, but integral to the performance, and that they are closely related to the music’s microrhythmic feel.
Danielsen, Anne (2023). Shaping the beat bin in computer-based grooves. In Wöllner, Clemens & London, Justin (Eds.), Performing Time: Synchrony and Temporal Flow in Music and Dance. Oxford University Press. ISBN 9780192896254.
In musical genres such as neo-soul and hip-hop, beats often have a temporal shape that makes them difficult to locate relative to a single point in time. Often this comes as a consequence of digital sound processing that obscures the exact location of beats in pulse-carrying rhythmic layers. The resulting beat shape may produce an internal reference structure of “beat bins,” with bin here understood as the temporal width and shape of the internal beat: sound onsets falling within the bin will be heard as merging into one beat, whereas onsets falling outside of it will be heard as belonging to another metrical unit (Danielsen 2010). In the present chapter I discuss the affordances for synchronization that different beat bin widths provide, and how the beat bin shape may affect the feel of a rhythmic groove.
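The bin idea reduces to a simple membership test: an onset inside the bin merges into the beat, an onset outside it belongs to another metrical unit. A minimal sketch (the bin center and width values below are illustrative assumptions, not measurements from the chapter):

```python
def in_beat_bin(onset_ms, bin_center_ms, bin_width_ms):
    """True if an onset falls inside the beat bin, i.e. it will be
    heard as merging into this beat rather than as a separate event."""
    return abs(onset_ms - bin_center_ms) <= bin_width_ms / 2.0

# A wide bin (here 80 ms) tolerates more microtiming spread between
# pulse-carrying layers than a narrow one (here 20 ms).
wide = in_beat_bin(530.0, 500.0, 80.0)    # 30 ms late, still "the beat"
narrow = in_beat_bin(530.0, 500.0, 20.0)  # outside a narrow bin
```

The same 30 ms displacement thus merges into a wide bin but falls outside a narrow one, which is the affordance-for-synchronization contrast the chapter discusses.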
Haugen, Mari Romarheim (2023). An embodied perspective on rhythm in music-dance genres. In Wöllner, Clemens & London, Justin (Eds.), Performing Time: Synchrony and Temporal Flow in Music and Dance. Oxford University Press. ISBN 9780192896254.
Oddekalv, Kjell Andreas (2022). Surrender to the Flow – Metre on Metre or Verse in Verses? Lineation through Rhyme in Rap Flows. In Sykäri, Venla & Fabb, Nigel (Eds.), Rhyme and Rhyming in Verbal Art, Language, and Song. Studia Fennica. ISBN 9789518585872, pp. 229–245.
Sioros, Georgios; Madison, Guy; Cocharro, Diogo; Danielsen, Anne & Gouyon, Fabien (2022). Syncopation and Groove in Polyphonic Music: Patterns Matter. Music Perception, ISSN 0730-7829, 39(5), pp. 503–531. doi: 10.1525/mp.2022.39.5.503.
Music often evokes a regular beat and a pleasurable sensation of wanting to move to that beat called groove. Recent studies show that a rhythmic pattern’s ability to evoke groove increases at moderate levels of syncopation, essentially, when some notes occur earlier than expected. We present two studies that investigate that effect of syncopation in more realistic polyphonic music examples. First, listeners rated their urge to move to music excerpts transcribed from funk and rock songs, and to algorithmically transformed versions of these excerpts: 1) with the original syncopation removed, and 2) with various levels of pseudorandom syncopation introduced. While the original excerpts were rated higher than the de-syncopated, the algorithmic syncopation was not as successful in evoking groove. Consequently, a moderate level of syncopation increases groove, but only for certain syncopation patterns. The second study provides detailed comparisons of the original and transformed rhythmic structures that revealed key differences between them in: 1) the distribution of syncopation across instruments and metrical positions, 2) the counter-meter figures formed by the syncopating notes, and 3) the number of pickup notes. On this basis, we form four concrete hypotheses about the function of syncopation in groove, to be tested in future experiments.
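The abstract's "levels of syncopation" can be made concrete with a metrical-weight measure in the spirit of Longuet-Higgins and Lee: a note on a weak position that rests through a stronger one contributes the weight difference. This is a common formalization in the groove literature, not necessarily the one used by the paper's transformations:

```python
import math

def metrical_weights(n):
    """Metrical weight of each position in an n-step grid (n a power
    of two): the downbeat is strongest, each subdivision level weaker."""
    weights = []
    for i in range(n):
        if i == 0:
            weights.append(int(math.log2(n)))
        else:
            k, depth = i, 0
            while k % 2 == 0:
                k //= 2
                depth += 1
            weights.append(depth)
    return weights

def syncopation_index(pattern):
    """Sum, over each onset that rests through a metrically stronger
    position before the next onset, of that weight difference
    (the pattern is treated as cyclic)."""
    n = len(pattern)
    w = metrical_weights(n)
    total = 0
    for i in range(n):
        if not pattern[i]:
            continue
        j = (i + 1) % n
        rest_weights = []
        while not pattern[j]:
            rest_weights.append(w[j])
            j = (j + 1) % n
        if rest_weights and max(rest_weights) > w[i]:
            total += max(rest_weights) - w[i]
    return total

on_beat = [1 if i % 4 == 0 else 0 for i in range(16)]  # quarter notes: no syncopation
anticipated = [0] * 16
anticipated[0], anticipated[3] = 1, 1  # onset just before beat 2, resting through it
```

The on-beat pattern scores zero, while the anticipated onset scores positively because it sounds early and rests through the strong positions it displaces.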
Groove, understood as an enjoyable compulsion to move to musical rhythms, typically varies along an inverted U-curve with increasing rhythmic complexity (e.g., syncopation, pickups). Predictive coding accounts posit that moderate complexity drives us to move to reduce sensory prediction errors and model the temporal structure. While musicologists generally distinguish the effects of pickups (anacruses) and syncopations, their difference remains unexplored in groove. We used pupillometry as an index of noradrenergic arousal while subjects listened to and rated drumbeats varying in rhythmic complexity. We replicated the inverted U-shaped relationship between rhythmic complexity and groove and showed this is modulated by musical ability, based on a psychoacoustic beat perception test. The pupil drift rates suggest that groovier rhythms hold attention longer than ones rated less groovy. Moreover, we found complementary effects of syncopations and pickups on groove ratings and pupil size, respectively, discovering a distinct predictive process related to pickups. We suggest that the brain deploys attention to pickups to sharpen subsequent strong beats, augmenting the predictive scaffolding's focus on beats that reduce syncopations' prediction errors. This interpretation is in accordance with groove envisioned as an embodied resolution of precision-weighted prediction error.
Markussen, Bjarne; Diesen, Even Igland & Oddekalv, Kjell Andreas (2022). Introduksjon: Kva er rap og flow? In Diesen, Even Igland; Markussen, Bjarne & Oddekalv, Kjell Andreas (Eds.), Flytsoner: Studiar i flow og rap-lyrikk. Scandinavian Academic Press. ISBN 9788230403433, pp. 9–31.
Brøvig, Ragnhild (2022). "It ain't but one kind of Blues": Kid Koala's Bluesy Embrace of the Fragmented ("1 bit Blues," 2012). In Moylan, William; Burns, Lori & Alleyne, Mike (Eds.), Analyzing Recorded Music: Collected Perspectives on Popular Music Tracks (1st ed.). Routledge. ISBN 9781003089926, pp. 248–258. doi: 10.4324/9781003089926.
This chapter analyzes “1 bit Blues” (12 bit Blues, 2012, Ninja Tune) by the Canadian turntablist and music producer Kid Koala (aka Eric San). Kid Koala is not a blues musician but is instead known for his eclectic selection of samples, his virtuosic turntable techniques, and his intriguingly playful music. 12 bit Blues was primarily created using the E-mu SP-1200 drum machine and sampler to manipulate and juxtapose samples from old blues records. The sonic result of that approach initiates a dialogue between two quite distinct aesthetic paradigms. The focus of this chapter, then, is on this particular dialogue, including the ways in which this track reimagines the blues by embracing the aesthetic potential of the temporally fragmented.
Oddekalv, Kjell Andreas (2022). Rytmiske bumerke. In Diesen, Even Igland; Markussen, Bjarne & Oddekalv, Kjell Andreas (Eds.), Flytsoner: Studiar i flow og rap-lyrikk. Scandinavian Academic Press. ISBN 9788230403433, pp. 33–72.
Brøvig, Ragnhild; Sandvik, Bjørnar Ersland & Aareskjold-Drecker, Jon Marius (2022). Influence du traitement de plage dynamique sur le rythme perçu dans l'EDM. In Couprie, Pierre; Gohon, Kévin & Parent, Emmanuel (Eds.), La musique et la machine: Penser l'interaction dans les musiques électroniques. Presses Universitaires de Rennes. ISBN 9782753586857, pp. 187–206.
Musique concrète, electroacoustic music, mixed music, live electronics, and, in their wake, the popular currents of disco, techno, rap, and EDM denote musical styles that have radically changed how music is made and listened to. By creating new situations of interaction between musicians, audiences, and the countless machines that populate their world, these repertoires have profoundly transformed the ontology and aesthetics of music-making. This book, which brings together contributions from French and international musicologists (USA, Norway, Australia, UK), thus offers a journey across art and popular electronic music, from Luigi Nono to David Guetta, from Philippe Manoury to Brain Damage. If music could once be conceived as the product of a single composer's activity, the mediation of electronics brings the profoundly collaborative and interactive nature of all music-making back into full view. Such is the hypothesis underlying the theoretical ambition of this volume.
Published with the support of the UR APP unit of Université Rennes 2, the RASM-CHCSC of the Université d'Évry Val d'Essonne, and RITMO at the University of Oslo.
Johansson, Mats Sigvard (2022). Timing-Sound Interactions: Groove-Forming Elements in Traditional Scandinavian Fiddle Music. Puls – journal for ethnomusicology and ethnochoreology, 7, pp. 53–72.
Câmara, Guilherme Schmidt; Sioros, George & Danielsen, Anne (2022). Mapping timing and intensity strategies in drum-kit performance of a simple back-beat pattern. Journal of New Music Research, ISSN 0929-8215. doi: 10.1080/09298215.2022.2150649.
In this article, we explore the various ways in which drummers express a simple “backbeat” pattern when asked to play with different timing styles (laid-back, on-beat, pushed) via manipulation of stroke onset and intensity features. Based on hierarchical clustering analyses and phylogenetic tree visualizations, we found three main strategies used to distinguish pushed/laid-back from on-the-beat performances: (1) strong “general earliness/lateness” strategies, where most instruments are consistently played earlier/later in time relative to a metrical grid; (2) subtler “early/late flam” strategies, where most instruments are played synchronously with the grid but at least one instrument is distinctively played as an early/late flam; and (3) even subtler “ambiguously early/late compound sound” strategies, where two instruments form a compound sound, but one is played synchronously with the grid while the other is played early/late. The majority of drummers used additional consistent intensity strategies, the most common being greater hi-hat or snare intensity, which might enhance the effect of laid-back and pushed rhythmic events. However, intensity was not used uniformly to exclusively distinguish laid-back/pushed from on-beat timing.
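The three strategies above can be caricatured as a decision rule over per-instrument onset deviations. This is a toy sketch, not the paper's clustering analysis; the 5 ms synchrony threshold and the input format are illustrative assumptions:

```python
def classify_strategy(deviations_ms, sync_thresh=5.0):
    """Classify a performance from the mean onset deviation (ms,
    relative to the metrical grid) of each drum-kit instrument."""
    displaced = [abs(d) > sync_thresh for d in deviations_ms.values()]
    if all(displaced):
        return "general earliness/lateness"  # whole kit shifted off the grid
    if sum(displaced) == 1:
        return "early/late flam"             # one instrument flams against the rest
    if any(displaced):
        return "compound sound"              # on-grid + displaced instruments paired
    return "on-the-beat"

laid_back = classify_strategy({"kick": 28.0, "snare": 25.0, "hihat": 31.0})
flam = classify_strategy({"kick": 1.0, "snare": -18.0, "hihat": 2.0})
```

A uniformly late kit classifies as "general earliness/lateness", while a lone displaced snare classifies as "early/late flam"; the actual study derived its categories bottom-up from clustering rather than from fixed thresholds.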
Danielsen, Anne & Paulsrud, Thea Sørli (2022). “Three and a Half Minutes of Attitude”: Vocal Delivery, Groove, and Production in Azealia Banks' “212” (2014). In Moylan, William; Burns, Lori & Alleyne, Mike (Eds.), Analyzing Recorded Music: Collected Perspectives on Popular Music Tracks (1st ed.). Routledge. ISBN 9781003089926. doi: 10.4324/9781003089926-21.
“212” is the first single from Azealia Banks’ debut album Broke with Expensive Taste (2014), co-written and produced by the Dutch electro-house duo Lazy Jay who first released its beat under the title Float My Boat (2009). The aim of this chapter is to unpack the secrets to success of “212” by focusing in particular on the track's groove, production, vocal delivery, and overall stylistic fusion of rap, reggaetón, pop, and house. We argue that Banks’ expressive range and seamless fusion of singing and rapping, her negotiations between confirming and protesting hip-hop norms, in combination with the style mix inherent in both Lazy Jay's original beat and her own rendition, were key to the track's success. An important aspect is the mix of musical forms typical of song and groove-directed repertoires, which allowed for many different interpretations and uses.
Lartillot, Olivier; Elowsson, Anders; Johansson, Mats Sigvard; Thedens, Hans-Hinrich & Monstad, Lars Alfred Løberg (2022). Segmentation, Transcription, Analysis and Visualisation of the Norwegian Folk Music Archive. In Pugin, Laurent (Ed.), DLfM '22: 9th International Conference on Digital Libraries for Musicology. Association for Computing Machinery (ACM). ISBN 9781450396684, pp. 1–9. doi: 10.1145/3543882.3543883.
We present an ongoing project dedicated to transforming a collection of field recordings of Norwegian folk music, established in the 1960s, into an easily accessible online catalogue augmented with advanced music technology and computational musicology tools. We focus in particular on a major highlight of this collection: Hardanger fiddle music. The studied corpus was available as a series of 600 tape recordings, each containing up to two hours of audio, with metadata indicating the approximate positions of pieces of music. We first needed to retrieve the individual recording associated with each tune, through a combination of automated pre-segmentation based on sound classification and audio analysis, and a subsequent manual verification and fine-tuning of the temporal positions using a purpose-built user interface.
Note detection is carried out by a deep learning method. To adapt the model to Hardanger fiddle music, musicians were asked to record themselves and annotate all played notes using a dedicated interface. Data augmentation techniques were designed to accelerate the process, in particular using the alignment of varied performances of the same tunes. Transcription also requires reconstruction of the metrical structure, which is particularly challenging in this style of music; here, too, we have collected ground-truth data and are designing a computational model.
The next step consists of carrying out detailed musical analysis of the transcriptions, in order to reveal, in particular, intertextuality within the corpus. A final direction of research aims at designing tools to visualise each tune and the whole catalogue, for both musicologists and the general public.
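One workhorse for aligning varied performances of the same tune is dynamic time warping. The sketch below is the bare quadratic dynamic program over two feature sequences, offered as an illustration of the technique the text mentions rather than the project's actual tooling:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two feature sequences,
    using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = cost of the best alignment of seq_a[:i] with seq_b[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # best of: insertion, deletion, match
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# Identical sequences align at zero cost; a slightly rushed variant of
# the same tune accrues only the timing differences along the best path.
same = dtw_distance([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
varied = dtw_distance([0.0, 0.5, 1.0], [0.0, 0.45, 1.0])
```

Because the warp absorbs local tempo fluctuations, annotations made on one performance can be projected onto another, which is the kind of data augmentation by alignment the abstract describes.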
A techno-cognitive look at how new technologies are shaping the future of musicking.
“Musicking” encapsulates both the making of and perception of music, so it includes both active and passive forms of musical engagement. But at its core, it is a relationship between actions and sounds, between human bodies and musical instruments. Viewing musicking through this lens and drawing on music cognition and music technology, Sound Actions proposes a model for understanding differences between traditional acoustic “sound makers” and new electro-acoustic “music makers.”
What is a musical instrument? How do new technologies change how we perform and perceive music? What happens when composers build instruments, performers write code, perceivers become producers, and instruments play themselves? The answers to these pivotal questions entail a meeting point between interactive music technology and embodied music cognition, what author Alexander Refsum Jensenius calls “embodied music technology.” Moving between objective description and subjective narrative of his own musical experiences, Jensenius explores why music makes people move, how the human body can be used in musical interaction, and how new technologies allow for active musical experiences. The development of new music technologies, he demonstrates, has fundamentally changed how music is performed and perceived.
Oddekalv, Kjell Andreas (2022). Side Brok – Høge Brelle – NABOK017 (Norske Albumklassikere). Falck Forlag. ISBN 9788293976172. 139 pp.
In musical ensembles most notes/chords are sounded by more than one instrument at the same time, and we hear them as simultaneous even when their onsets are not precisely so. Here we obtain estimates for the perceptual centers (P-centers) of such compound sounds when there are microtiming asynchronies between the instruments. In Experiment 1, three combinations of fast-attack instruments (acoustic kick drum/synthetic kick, kick/hi-hat, kick/bass) were presented with five levels of instrument asynchrony relative to the kick (−40, −20, 0, 20, and 40 ms); the ISI was 600 ms (100 bpm), and the task was to align a click with the compound sound. An RMANOVA showed main effects (p < .001) of asynchrony and instrument combination, and a U-shaped relationship between asynchrony and P-center, such that asynchrony in both directions relative to the kick (kick early and kick late) delays the P-center of the compound sound. In Experiment 2 we used combinations of fast- and slow-attack instruments. Ten combinations (three fast-attack–fast-attack, three slow-attack–slow-attack, four fast-attack–slow-attack) were presented with seven levels of instrument asynchrony: −80, −40, −20, 0, 20, 40, and 80 ms. RMANOVAs revealed different relationships between asynchrony and P-center (p < .001): fast/fast combinations replicated the U-shape of Experiment 1; in fast/slow combinations, the P-center followed the fast-attack instrument linearly; and in slow/slow combinations, P-centers followed the higher-pitched instrument. P-centers of compound sounds thus depend on both the asynchrony between the instruments and the shape of their attacks. Combinations of fast-attack instruments with extreme asynchrony produce bimodal distributions, indicating perceptual segregation of the two sounds. The findings align with studies showing that sharp sounds are used as landmarks for segmentation and timing in speech and music.
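A rough operationalization of why the fast-attack instrument dominates a compound sound: locate the P-center where the summed envelope first reaches a fraction of its peak, so the sharpest edge gets there first. The 50% threshold, the envelope model, and the timing values below are illustrative assumptions, not the paper's model:

```python
def pcenter_ms(env, sr, frac=0.5):
    """Toy P-center: time (ms) at which the envelope first reaches
    `frac` of its maximum."""
    threshold = frac * max(env)
    for i, v in enumerate(env):
        if v >= threshold:
            return 1000.0 * i / sr
    raise ValueError("empty envelope")

def ramp(attack_ms, onset_ms=0, dur_ms=300, sr=1000):
    """Envelope rising linearly 0 -> 1 over `attack_ms`, starting at
    `onset_ms`, then holding at the plateau."""
    n = int(sr * dur_ms / 1000)
    a = max(1, int(sr * attack_ms / 1000))
    o = int(sr * onset_ms / 1000)
    return [0.0 if i < o else min(1.0, (i - o) / a) for i in range(n)]

def compound(env1, env2):
    """Summed envelope of two simultaneously sounding instruments."""
    return [x + y for x, y in zip(env1, env2)]

# Fast-attack "kick" 20 ms ahead of a slow-attack "bass": the compound
# P-center sits at the kick's sharp edge, not between the instruments.
kick = ramp(attack_ms=5, onset_ms=0)
bass = ramp(attack_ms=80, onset_ms=20)
```

Under this toy measure the compound estimate tracks the fast-attack kick (5 ms) while the slow-attack bass alone lands much later (60 ms), echoing the finding that in fast/slow combinations the P-center follows the fast-attack instrument.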
Previous research (Danielsen et al. 2022) has shown that musical expertise affects
the perception of the temporal location (i.e., P-center) of an instrumental sound. Here
we extend this research to the context of vocal music. In two experiments expert
singers in jazz and classical genres were presented with a range of stimuli, including
neutral stimuli (e.g., noise bursts, clicks), vowel sounds sung by jazz and classical
singers, and spoken versions of the vowel sounds. As in our previous study, the neutral
stimuli produced largely the same responses in both participant groups, while a linear
mixed model showed that jazz participants placed their P-centers earlier (22 ms;
p=.044) and with lower variability (21 ms; p=.025) than classical participants. Contra
our hypothesis, the between-group differences for P-center location and variability
persisted in the context of spoken sounds. Why should this be so? Expert musicians
develop highly specific motor representations of their own actions and use them when
singing and playing. For singers, these models overlap with speech production
more generally. This could explain the carry-over to speech stimuli. Likewise, the
vocal stimuli presented our participants with not only acoustic cues for the P-center
location of the sounds themselves, but also cues for synchronizing individual actions
in performance (coordinating the behaviors that produce the sounds with others). This
indicates that the vestiges of joint action that remained in our experimental context
were enough to engage their top-down sensorimotor models, as would be used in an
actual singing or speaking context.
Oddekalv, Kjell Andreas
(2024).
“I’m sorry y’all, I often drift – I’m talking gift” Microrhythmic analysis of rap – categorization, malleability and structural bothness.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2024).
Vi skriv på tog, og vi skriv på tog - pitch.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2024).
The Sound of the crew in rap:
Rapping chimeras, illusory posses and other fantastical creatures summoned in the studio and cipher.
Fulltekst i vitenarkiv
Danielsen, Anne
(2024).
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm.
Fulltekst i vitenarkiv
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Danielsen, Anne
(2024).
There’s more to timing than time: shaping microrhythm in groove-based music.
Fulltekst i vitenarkiv
Brøvig, Ragnhild
(2024).
Boklansering: Parody in the Age of Remix: Mashups vs. the Takedown (MIT Press).
Fulltekst i vitenarkiv
Danielsen, Anne; Brøvig, Ragnhild; Câmara, Guilherme Schmidt; Haugen, Mari Romarheim; Johansson, Mats Sigvard & London, Justin
(2023).
There’s more to timing than time: Investigating sound–timing interaction across disciplines and cultures.
Fulltekst i vitenarkiv
Brøvig, Ragnhild & Stevenson, Alex
(2023).
Machine Aesthetics: An Analytical Framework.
Fulltekst i vitenarkiv
Brøvig, Ragnhild
(2023).
Wakeful Sleep and Sleepy wakefulness in EDM.
Fulltekst i vitenarkiv
Danielsen, Anne
(2023).
Ain’t that a groove! Musicological, philosophical and psychological perspectives on groove (keynote).
Fulltekst i vitenarkiv
The notion of groove is key to both musicians’ and academics’ discourses on musical rhythm. In this keynote, I will present groove’s historical grounding in African American musical practices and explore its current implications by addressing three distinct understandings of groove: as pattern and performance; as pleasure and “wanting to move”; and as a state of being. I will point out some musical features that seem to be shared among a wide range of groove-based styles, including syncopation and counterrhythm, swing and subdivision, and microrhythmic qualities. Ultimately, I will look at the ways in which the groove experience has been approached in different disciplines, drawing on examples from musicology / ethnomusicology, philosophy, psychology and neuroscience.
Câmara, Guilherme Schmidt; Sioros, Georgios; Danielsen, Anne; Nymoen, Kristian & Haugen, Mari Romarheim
(2023).
Sound-producing actions in guitar performance of groove-based microrhythm.
Fulltekst i vitenarkiv
This study reports on an experiment that investigated how guitarists signal the intended timing of a rhythmic event in a groove-based context via three different features related to sound-producing motions of impulsive chord strokes (striking velocity, movement duration and fretboard position). Twenty-one expert electric guitarists were instructed to perform a simple rhythmic pattern in three different timing styles—“laid-back,” “on-the-beat,” and “pushed”—in tandem with a metronome. Results revealed systematic differences across participants in the striking velocity and movement duration of chords in the different timing styles. In general, laid-back strokes were played with lower striking velocity and longer movement duration relative to on-the-beat and pushed strokes. No differences in fretboard striking position were found (neither closer to the “bridge” [bottom] nor to the “neck” [head]). Correlations with previously reported audio features of the guitar strokes were also investigated: lower velocity and longer movement duration generally corresponded with longer acoustic attack duration (signal onset to offset).
Leske, Sabine Liliana; Endestad, Tor; Volehaugen, Vegard; Foldal, Maja Dyhre; Blenkmann, Alejandro Omar & Solbakk, Anne-Kristin
[Vis alle 7 forfattere av denne artikkelen](2023).
Predicting the Beat Bin – Beta Oscillations Support Top-Down Prediction of The Temporal Precision of a Beat.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
En undersøkelse av flowen som kunstnerisk uttrykk (Tidens ånd).
[Internet].
Tidens ånd.
Fulltekst i vitenarkiv
Jones, Andy; Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2023).
Scientists explain why Seeb's remix of I Took A Pill In Ibiza has racked up 1.7 billion Spotify streams: it's all down to "micro-rhythmic nuances", hard work... and playing left-handed.
[Internet].
MusicRadar.com .
Fulltekst i vitenarkiv
Roberts, Rachel; Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2023).
Scientific research uncovers why Seeb’s I Took A Pill In Ibiza remix is hugely popular.
[Internet].
MusicTech.com.
Fulltekst i vitenarkiv
Danielsen, Anne & Aas, Endre Ugelstad
(2023).
Popen får pipestemme.
[Journal].
Klassekampen.
Fulltekst i vitenarkiv
Danielsen, Anne; Brøvig, Ragnhild; Câmara, Guilherme Schmidt; Haugen, Mari Romarheim; Johansson, Mats Sigvard & London, Justin
(2023).
Microrhythm depends on sound qualities: Investigating sound–timing interaction across disciplines and cultures.
Fulltekst i vitenarkiv
In musical genres such as neo-soul and hip-hop, beats often have a temporal shape that makes their placement in time difficult to locate relative to a single point in time. This is often due to ‘muddy’, processed sounds or asynchronies between events at beat-related metric positions. The beat bin theory suggests that the perceptual counterpart to such beat asynchronies or muddy beat shapes in a sounding groove is an internal (perceptual) reference structure of beat bins of considerable ‘width’ and a distinctive ‘shape’. I will start by presenting the theory and then focus on how various acoustic factors influence the beat bin, using examples from computer-based musical grooves. Ultimately, I argue that micro-level perception of, and synchronization to, sound is optimized for the task at hand, in line with the flexibility and dynamic nature of the human apparatus in perceiving, predicting, and processing rhythm.
Danielsen, Anne; Langer?d, Martin Torvik & London, Justin
(2023).
Where is the beat in that complex note? Effect of Instrument Asynchrony on the Perceived Timing of Compound Musical Sounds.
Fulltekst i vitenarkiv
As a prelude to Norway's Constitution Day, this special event celebrated the Norwegian folk music tradition, showcasing our new online archive and demonstrating the richness of Hardanger fiddle music, with a live performance. One aim of the project is to develop new technologies that allow users to better access, understand and appreciate Norwegian folk music.
At this event, we introduced a new online version of the Norwegian Folk Music Archive and discussed the underlying theoretical and technical challenges. A live concert/workshop, with the participation of Olav Luksengård Mjelva, offered a lively introduction to Hardanger fiddle music and its elaborate rhythm. The interests and challenges of automated transcription and analysis were discussed, with the public release of our new software Annotemus.
The symposium was organised in the context of the MIRAGE project (RITMO, in collaboration with the National Library of Norway's Digital Humanities Laboratory).
Oddekalv, Kjell Andreas
(2023).
Project: Chimera
Postdoctoral project – overview, examples, loose thoughts. HHRIG meeting presentation.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
On Analysing Hip-Hop/Rap: Doing Hip-Hop Scholarship in a hip-hop way.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
A Norwegian emcee/scholar – Theorizing rap flow from the outside and inside.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
Weak Alternatives …and their presence making shit dope.
Fulltekst i vitenarkiv
Câmara, Guilherme Schmidt; Danielsen, Anne & Oddekalv, Kjell Andreas
(2023).
Funky rhythms – broken beats! Kulturelle og estetiske perspektiver på groove-basert musikk.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
Flow, layering and rupture in composite auditory streams.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
Sounding Same/Sounding Other:
Creative, practical and aesthetic aspects of ad libs and ‘backtracks’ in rap.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2023).
'Them bars really ain't hittin' like a play fight': Analysing weak alternative lineations and ambiguous lineation in relation to metrical structure in rap flows.
Fulltekst i vitenarkiv
An introduction to the Phase Amplitude Coupling (PAC) measure and how it is applied to EEG data (example code in MATLAB). The caveats of the measure are covered, along with the sanity checks that might be necessary.
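The MATLAB example code itself is not reproduced here; as a rough Python analogue, a sketch of one common PAC measure (a Tort-style modulation index) on synthetic data might look like the following. The band choices, filter settings, and test signal are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth bandpass (second-order sections for stability).
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 60), n_bins=18):
    """Tort-style MI: KL divergence of the phase-binned amplitude
    distribution from uniform, normalised by log(n_bins)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)

# Sanity check on synthetic data: gamma amplitude modulated by theta phase
# should yield a clearly higher MI than an unmodulated control signal.
fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
uncoupled = theta + np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
print(modulation_index(coupled, fs) > modulation_index(uncoupled, fs))  # True
```

This synthetic-signal comparison is exactly the kind of sanity check the talk refers to: a PAC measure should separate a constructed coupled signal from an uncoupled control before being trusted on real EEG.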
Sandvik, Bj?rnar Ersland
(2022).
Lydens utseende: Fra usynlig til gjenkjennelig på skjermer vi alle går rundt med.
Musikkmagasinet Ballade.
ISSN 0805-5041.
Fulltekst i vitenarkiv
An introduction to the inter-trial coherence (ITC) measure and how it is applied to EEG data (with example code/scripts in MATLAB). Furthermore, caveats of the measure are discussed, along with its relation to phase-opposition measures.
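The MATLAB scripts are not shown here; a minimal Python analogue of the ITC computation, tried on synthetic phase-locked versus phase-jittered trials, might look like this (the signal parameters are assumptions):

```python
import numpy as np

def itc(trials):
    """Inter-trial coherence per FFT bin: magnitude of the mean unit-length
    phase vector across trials (0 = random phase, 1 = perfect phase lock)."""
    spec = np.fft.rfft(trials, axis=1)   # trials: (n_trials, n_samples)
    unit = spec / np.abs(spec)           # keep phase, discard amplitude
    return np.abs(unit.mean(axis=0))

# Synthetic check: a 10 Hz component with fixed phase across trials vs.
# the same component with a random phase per trial.
fs, n_trials = 100, 200
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
locked = np.stack([np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
                   for _ in range(n_trials)])
jittered = np.stack([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                     + rng.standard_normal(t.size) for _ in range(n_trials)])
bin10 = np.argmin(np.abs(np.fft.rfftfreq(t.size, 1 / fs) - 10))
print(itc(locked)[bin10] > itc(jittered)[bin10])  # True
```

One caveat the talk alludes to is visible here: with a finite number of trials, ITC for fully random phase is not zero but roughly 1/sqrt(n_trials), so a significance baseline is needed.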
Oddekalv, Kjell Andreas
(2022).
Dr.Kjell har disputert.
[Journal].
Møre-Nytt.
Fulltekst i vitenarkiv
An introduction to the Fourier transform and how it is applied to EEG data. The short-time Fourier transform (STFT) and the different measures (phase and amplitude) derived from it are explained.
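As an assumed Python illustration of the measures described (the talk itself is not code), a minimal Hann-windowed STFT from which amplitude and phase are derived could be sketched as:

```python
import numpy as np

def stft(x, fs, win_len=256, hop=64):
    """Hann-windowed short-time Fourier transform. Amplitude is the
    magnitude of the complex spectrogram; phase is its angle."""
    win = np.hanning(win_len)
    starts = np.arange(0, x.size - win_len + 1, hop)
    frames = np.stack([x[s:s + win_len] * win for s in starts])
    spec = np.fft.rfft(frames, axis=1)       # shape: (n_frames, n_freqs)
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    times = (starts + win_len / 2) / fs      # frame centres in seconds
    return freqs, times, spec

# A 10 Hz test tone (alpha range) should dominate the average amplitude.
fs = 256
t = np.arange(0, 4, 1 / fs)
freqs, times, spec = stft(np.sin(2 * np.pi * 10 * t), fs)
amplitude = np.abs(spec)    # amplitude measure
phase = np.angle(spec)      # phase measure
print(freqs[np.argmax(amplitude.mean(axis=0))])  # 10.0
```

With win_len equal to fs, the frequency resolution is 1 Hz, so the test tone falls exactly on a bin; real EEG analyses trade off this resolution against temporal resolution via the window length.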
Danielsen, Anne & Kristensen, Regine Lund
(2022).
Det rykker litt ekstra i dansefoten når dette skjer.
[Internet].
ung.forskning.no.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Rap music’s black cultural heritage: How does “pushing the limits” of dopeness relate to hip hop values of excellence and/as badness?
Fulltekst i vitenarkiv
Lartillot, Olivier; Elovsson, Anders; Johansson, Mats Sigvard; Thedens, Hans-Hinrich & Monstad, Lars Alfred L?berg
(2022).
Segmentation, Transcription, Analysis and Visualisation of the Norwegian Folk Music Archive.
Fulltekst i vitenarkiv
We present an ongoing project dedicated to the transmutation of a collection of field recordings of Norwegian folk music established in the 1960s into an easily accessible online catalogue augmented with advanced music technology and computer musicology tools. We focus in particular on a major highlight of this collection: Hardanger fiddle music. The studied corpus was available as a series of 600 tape recordings, each tape containing up to 2 hours of recordings, associated with metadata indicating approximate positions of pieces of music. We first need to retrieve the individual recording associated with each tune, through the combination of an automated pre-segmentation based on sound classification and audio analysis, and a subsequent manual verification and fine-tuning of the temporal positions, using a home-made user interface.
Note detection is carried out by a deep learning method. To adapt the model to Hardanger fiddle music, musicians were asked to record themselves and annotate all played notes, using a dedicated interface. Data augmentation techniques have been designed to accelerate the process, in particular using alignment of varied performances of the same tunes. The transcription also requires the reconstruction of the metrical structure, which is particularly challenging in this style of music. We have also collected ground-truth data, and are developing a computational model.
The next step consists in carrying out detailed music analysis of the transcriptions, in order to reveal in particular intertextuality within the corpus. A last direction of research is aimed at designing tools to visualise each tune and the whole catalogue, both for musicologists and general public.
Oddekalv, Kjell Andreas
(2022).
God kok med Side Brok på Stødt.
[Journal].
Møre-Nytt.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Skreiv bok om Høge Brelle.
[Journal].
Møre.
Fulltekst i vitenarkiv
Sandvik, Bj?rnar Ersland
(2022).
Sample, Slice, Stretch! Four Innovative Moments in the History of Waveform Representation.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Side Brok var soundtracket til Kjell sin ungdom – no har han skrive side bok.
[Journal].
Møre-Nytt.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Sjølvsagt måtte vi få til ei lansering i Ørsta!
[Journal].
Møre-Nytt.
Fulltekst i vitenarkiv
Poetry is the most popular and widespread of all poetic art forms – that is, song lyrics: poetry performed to music and disseminated through radio, gramophone records, CDs and streaming services. It surrounds us in daily life and is at the same time the oldest form of poetry we know of. In ancient Greece, poems were recited to the accompaniment of the lyre.
Despite this, song lyrics have been far less studied than written poetry, and theoretical and methodological perspectives have been in short supply. This book seeks to remedy that. It first discusses the fundamental similarities and differences between written and sung poetry – between art for the eye and art for the ear. It then considers methodological approaches to the study of song lyrics, with attention to the interplay between words and music. Finally, the book surveys a range of well-known song-lyric genres: ballads, broadside ballads, hymns, joik, viser, blues, rock, indie folk and rap.
The book is the first of its kind in Norway. It is aimed in particular at researchers, students in higher education, and teachers who want to work with song lyrics in schools. But anyone with an interest in the genres of song lyrics will find something to enjoy here.
Oddekalv, Kjell Andreas
(2022).
On Being a White Norwegian Analysing Rap.
Dansk Musikforskning Online.
DMO Special Issue 2022,
s. 115–122.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Categorical perception and quantisation in hip-hop practice and discourse.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Rap as composite auditory streams: Techniques and approaches for chimericity through layered vocal production in hip-hop, and their aesthetic implications.
Fulltekst i vitenarkiv
Danielsen, Anne & Leske, Sabine Liliana
(2022).
How the brain tracks the precision of a beat bin - musical, behavioral and neurophysiological perspectives.
Fulltekst i vitenarkiv
The internal beat or pulse in the listener is not a single point in time, but has a shape and a width and can be described via a probability distribution. This phenomenon has been conceptualized in the beat bin theory (Danielsen 2010). The internal beat bin of the listener varies systematically with the precision needed in the given musical or sonic context. Anne and Sabine will present behavioral evidence for this phenomenon and a first attempt to reveal the underlying neural mechanism behind the flexible adaptation to the precision of the current beat bin context. They will present effects of acoustic factors on the perceptual center and the beat bin, as well as preliminary results on how neural oscillatory activity might represent a neural mechanism behind this phenomenon.
Oddekalv, Kjell Andreas
(2022).
Hva gir hiphop flow? En norsk forsker mener han har funnet svaret.
[Journal].
Morgenbladet.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Intervju om rap flows - Studio 2, NRK P2.
[Radio].
NRK P2.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
KARPE KARPE KARPE - Aftenposten Forklart.
[Internet].
Aftenposten Forklart Podcast.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Intervju om musikkrettigheiter - NRK Nyhetsmorgen.
[TV].
NRK.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2022).
Public defense: Kjell Andreas Oddekalv.
Fulltekst i vitenarkiv
This paper develops the concept of groove politics to investigate how the rhythmic qualities of shared musical experiences influence participatory democracy. Groove Politics is grounded in an analysis of listening and draws on recent studies on how music grooves, creates pleasure, and produces affective communities. Groove Politics understands musical sounds as complex signs that operate thanks to an interplay between rhythm, melody, harmony, lyrics, and local cultural meanings in which political expressions gain affective force as they bring people together. I apply this lens to performances of the Cuban band Interactivo and their musical dialogues with political and cultural changes in Cuba over the last two decades. Interactivo has been among the most innovative, controversial, and popular bands in the country of late thanks to their unique mixture of timba, rumba, jazz, funk, trova, hip-hop and world music. The study illuminates how Interactivo’s grooves both nurture and contest people’s sense of revolutionary values thanks to particular organizations of musical sound.
While existing scholarship on the politics of music elaborates upon the ways in which music is “articulated,” “mediated,” or “embedded” in larger political contexts and discourses, few studies have shown how music shapes political experience. Groove Politics fills this lacuna by taking seriously music’s ability to move us and create affective communities of political expression. The paper questions the established truism within popular music studies that the political meaning of music cannot be found in “the music itself”. Instead, Groove Politics takes its cue from John Street’s remark that what is lacking in existing scholarship is a “musical theory of politics [that takes seriously] the political possibilities inherent in pleasure”. Conceptually, Groove Politics builds on arguments within political theory by Arendt and Rancière that underscore the importance of aesthetics in politics coupled with research on how music grooves. It uses this frame to study how grooves redefine community and political discourse. The paper adds to existing musicological scholarship on popular music by drawing attention to how music moves us politically and aesthetically, coupled with analysis of the artistic and ethical judgements that give rise to and result from such practices.
Danielsen, Anne; Paulsrud, Thea S?rli & London, Justin
(2022).
Where is the beat in that (sung) note?
Fulltekst i vitenarkiv
Lartillot, Olivier & Johansson, Mats Sigvard
(2021).
Automated beat tracking of Norwegian Hardanger fiddle music.
Fulltekst i vitenarkiv
Norwegian Hardanger fiddle music is typically played by a solo fiddler, without rhythmic accompaniment except for the musician’s discreet foot stomping. Some of its repertoire features an asymmetrical ternary meter, with an uneven proportion of durations between the three beats of each bar, and with varying degrees of fluctuation of those proportions throughout each piece. In addition, there is often no clear audible onset corresponding to the beat position. As a result, many listeners find it difficult to hear the beats without experience from playing or dancing, and the beat onsets cannot be properly tracked by state-of-the-art beat trackers.
The aim of this study is to develop a computational model of beat tracking of Hardanger fiddle music. Due to the rhythmic irregularity of the music, computational approaches relying on the detection of regular periodicities cannot be used. The proposed strategy adopts a cognitive perspective, modeling processes that progressively infer beats while scanning the music sequence chronologically. To each successive note is associated a tentative metrical position, which is determined based on a set of rules, using various input data such as (1) the ratio of the inter-onset interval (IOI) from the previous beat onset to the current note onset and the preceding inter-beat-onset interval and (2) the ratio of the IOI from the bar onset to the current note onset and the preceding inter-bar-onset interval. Successive repetition of eighth notes (as well as of eighth-note triplets) induce specific states that also guide the subsequent extension of the sequence. Multiple beat tracking scenarios can coexist at particular moments in the tune for very short periods. In particular, the very first notes at the beginning of the tune may initially imply conflicting metrical structures and tempi. The conflicting parallel beat tracking scenarios are progressively extended note after note in parallel. A scenario ends whenever it reaches a dead-end situation where the music is in total contradiction. Multiple scenarios are fused when they are continued exactly the same way, and only the scenario deemed the most congruent is retained.
One particularity of Hardanger fiddle music is that beat onsets are not precise points in time but rather diffuse temporal extensions, closely related to the notion of the beat bin (Danielsen, 2010). Sometimes, multiple successive notes can all be considered as possible onsets for a given beat (Johansson, 2010; Stover et al., 2021). This multiplicity of beat onsets has been integrated into the model.
Most of the analysis can be carried out using solely note onset time as input data, although more challenging cases occasionally require taking into account note duration or higher structure such as motivic repetition. This indicates that a proper beat tracker needs to be integrated as a module within a comprehensive music analysis framework, with bidirectional dependencies with the other modules of the framework. The model has so far been tuned and tested on a couple of tunes only. Its application to the automated analysis of a larger corpus is under investigation.
Danielsen, Anne (2010). “Here, there, and everywhere. Three accounts of pulse in D'Angelo's 'Left and Right’.” In A. Danielsen (Ed.), Musical Rhythm in the Age of Digital Reproduction. Farnham: Ashgate/Routledge, UK.
Johansson, Mats (2010). “The Concept of Rhythmic Tolerance – Examining Flexible Grooves in Scandinavian Folk-fiddling.” In A. Danielsen (Ed.), Musical Rhythm in the Age of Digital Reproduction. Farnham: Ashgate/Routledge, UK.
Stover, Chris; Danielsen, Anne & Johansson, Mats (2021). “Bins, Spans, Tolerance: Three Theories of Microtiming Behavior.” [under review in Music Theory Spectrum].
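A minimal sketch of input feature (1) of the beat tracker described above — the ratio of the IOI from the previous beat onset to the current note onset over the preceding inter-beat interval — might look as follows. The onset times are hypothetical, and the full rule set that assigns metrical positions is not reproduced:

```python
def beat_relative_position(note_onset, beat_onsets):
    """Feature (1): IOI from the previous tracked beat onset to the note
    onset, divided by the preceding inter-beat interval. Values near 0
    (or 1) suggest the note sits on or near a beat. Assumes the note
    falls after at least two tracked beats."""
    prev_idx = max(i for i, b in enumerate(beat_onsets) if b <= note_onset)
    inter_beat = beat_onsets[prev_idx] - beat_onsets[prev_idx - 1]
    return (note_onset - beat_onsets[prev_idx]) / inter_beat

# Hypothetical uneven ternary bar (asymmetrical beat durations), in seconds.
beats = [0.0, 0.45, 0.95, 1.30]
print(round(beat_relative_position(1.125, beats), 2))  # 0.35
```

In the model this ratio is one of several cues; the asymmetrical meter means the denominator (the local inter-beat interval) itself fluctuates, which is why fixed-period beat trackers fail on this repertoire.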
Oddekalv, Kjell Andreas
(2021).
Solveigs Speisa Musikk - med Kjell Andreas Oddekalv.
[Radio].
RadiOrakel.
Fulltekst i vitenarkiv
Oddekalv, Kjell Andreas
(2021).
Solveigs Speisa Musikk - Kjell Andreas Oddekalv igjen.
[Radio].
RadiOrakel.
Fulltekst i vitenarkiv
Danielsen, Anne; Stover, Chris & Oddekalv, Kjell Andreas
(2022).
What Makes the Shit Dope? The Techniques and Analysis of Rap Flows.
Universitetet i Oslo.
Fulltekst i vitenarkiv