Q & A With Psychological Scientist Daniel Levitin (Part 2)
Below is part 2 of Levitin’s Q & A:
How important is household upbringing to musical preference? For instance, if someone is raised in a home where gospel music is constantly played, do they develop a liking for that genre, even if it is not popular for the time period?
We don’t really know much about upbringing and genre-specific preferences, but we do know something about broader issues of tonality and musical syntax. There appears to be a critical period for acquiring musical syntax, as there is for acquiring speech syntax. That is, whether you’re raised listening to gospel, punk, country, heavy metal, jazz, or classical, the important point is that they’re all based on the same 12 notes and the same basic chords (what we know as the Western diatonic tonal system). That means your brain is configured to understand that system and to know what to expect in all of these musics; making a transition from classical to rock, for example, is easy in terms of musical syntax. Our brains function like statistical engines that have calculated the probabilities of chord sequences for the music we were raised with. This leads to expectations, and to the possibility of those expectations being either met or violated – the very basis of musical engagement.

Raised in a home like that, though, you’re not likely to understand Indian ragas, Chinese opera, or any of the world’s musics that conform to different tonal systems. It doesn’t mean you won’t like those musics, but you won’t have the same solid basis on which to understand them. The same is true in reverse: people raised within other tonal systems don’t necessarily understand Western tonal music.
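To make the “statistical engine” idea concrete, here is a toy sketch (an editor’s illustration, not Levitin’s model; the corpus and chord labels are invented for the example). It tallies chord-to-chord transition counts from a made-up “listening history” and scores how expected a new transition is under those learned probabilities:

```python
# Toy illustration of statistical learning of chord sequences.
# We tally first-order (chord -> next chord) transition counts from a
# hypothetical listening history, then estimate how expected a given
# transition is. High probability = met expectation; low = violation.
from collections import Counter, defaultdict

# Hypothetical corpus of Roman-numeral progressions a listener grew up with.
corpus = [
    ["I", "IV", "V", "I"],
    ["I", "vi", "IV", "V"],
    ["I", "IV", "I", "V", "I"],
    ["vi", "IV", "I", "V"],
]

# Count how often each chord is followed by each other chord.
transitions = defaultdict(Counter)
for progression in corpus:
    for prev, nxt in zip(progression, progression[1:]):
        transitions[prev][nxt] += 1

def transition_prob(prev, nxt):
    """Estimate P(next chord | previous chord) from the corpus."""
    total = sum(transitions[prev].values())
    return transitions[prev][nxt] / total if total else 0.0

print(transition_prob("V", "I"))   # high: the authentic cadence is common here
print(transition_prob("V", "vi"))  # low/zero: a deceptive move, a violated expectation
```

In this analogy, a listener’s internalized table of such probabilities is what makes a deceptive cadence (V moving to vi instead of I) feel like a surprise.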
That said, if you’re raised with a particular style, like gospel, your continued liking for it no doubt depends on associations you have with it, memories, reinforcement, and so on.
You mention in your article “Musical Behavior in a Neurogenetic Developmental Disorder: Evidence from Williams Syndrome” a connection between musicality and sociability in individuals with Williams syndrome. Have you seen any other interesting connections or dissociations in your work with atypical populations?
We find that individuals with Williams syndrome tend to be both more sociable and more involved with music than typically developing controls. On the other hand, individuals with autism spectrum disorders tend to be both less sociable and less involved with music. Both groups show a global processing deficit with visual stimuli – they tend not to see the forest for the trees; that is, they attend more to local structure than to global structure, as demonstrated in the Navon embedded-letters task. Another way of describing this is as an inability to integrate parts and wholes. In a study we have in press at Child Neuropsychology, we show that individuals with autism don’t have this difficulty in the auditory domain – they’re perfectly capable of constructing global musical sequences out of local ones. This was part of the doctoral dissertation of my Ph.D. student Eve-Marie Quintin. We’ve informally shown the same in Williams syndrome, but that work isn’t published yet.
Levitin, D. (2005). Musical behavior in a neurogenetic developmental disorder: Evidence from Williams syndrome. Annals of the New York Academy of Sciences, 1060, 1–10. doi:10.1196/annals.1360.027
Regarding your paper “Absolute Pitch — both a curse and a blessing.”
My question is: “I know a pair of siblings; one has perfect pitch and one does not. Both children began playing an instrument at an early age, and both probably learned and heard a lot of music at an early age from their mother who is a professional violinist. What would explain why one child developed perfect pitch and one did not? Could it be a genetic reason, something about the childhood environment, the personal involvement of the child, or a confluence of these factors?”
We don’t really know for sure. There are some studies that show a genetic contribution, others that argue against it. To answer your question directly, it certainly could be an environmental difference – we see all kinds of differences in skills and abilities between siblings raised in the same household. Being in the same household doesn’t at all guarantee the same environment, and there can be huge differences in the amount and type of attention or reinforcement two children receive. I was just reading about some remarkable differences in a book called “The Boy Who Was Raised as a Dog: And Other Stories from a Child Psychiatrist’s Notebook,” by Bruce Perry and Maia Szalavitz. Same parents, same home, yet one boy becomes a respected member of the community and his little brother a sociopathic murderer and rapist. Perry makes a convincing case that this is largely (though not exclusively) due to some key differences in how the family functioned during a critical few years in the younger son’s life.
I don’t dispute the possibility that genetics plays a role in absolute pitch – we know, for example, that people with it have a larger leftward asymmetry in the planum temporale, and there could be a genetic contribution to that, not to mention a genetic contribution to pitch memory and to conditional associative learning. But two things puzzle me about the genetic explanation.

First, what would be the evolutionary advantage of having absolute pitch? Most tasks that are important to us linguistically and musically require relative pitch, not absolute pitch. If we used absolute pitch to understand language, we wouldn’t be able to understand children, who speak an octave higher (on average) than adults.

Second, for a skill that seems so intricately bound up in culture and environment, how can we separate out the genetic contribution with current methods? Geneticists look for clusters of children within a household or family to make their case. But a child who grows up in a household that supports and nurtures absolute pitch is far more likely to develop it; one who grows up in a household without even a musical instrument is very unlikely to. Put another way: speaking French tends to cluster in families too, but no one would argue that there is a gene for speaking French. It clusters in families because it is taught in families.
How and why do we remember certain life events (ones we wouldn’t otherwise think about) when we hear a particular song?
The interesting thing about this is what it tells us about the nature and working of long-term memory. For the past fifty years or so, the commercialization of popular music has been such that we hear songs hundreds and hundreds of times for just a few weeks, while they’re hits, and then we often don’t hear them again for years and years. From a memory standpoint, this makes pop music an ideal retrieval cue – the song becomes wedded to a particular time, place, and set of circumstances. Songs that are always with us – Happy Birthday, the national anthem, children’s folk songs – don’t have this same power as retrieval cues of course because their ubiquity means they are not uniquely associated with a time and place. So it’s simply the distinctiveness of some songs and their association with certain events in our lives.
By the way, this is also what allows songs that are sung seasonally to carry so much power. When we hear Christmas songs – which are virtually never played during the other 11 months of the year – they immediately remind us of Christmas and put us in the mood of that holiday.
Is long-term memory, in terms of the recollection of hierarchical pitch-structure templates, non-temporal? (Specifically, in reference to Firmino et al., 2009.)
Firmino et al. (2009, Music Perception, 26(3)) conducted a fascinating study in which they played subjects pitch sequences that progressed to either a nearby or a distant key. All pitch sequences were 20 seconds long, but this wasn’t revealed to the subjects, who were asked to estimate how much time had passed. Subjects tended to think that less time had passed when the sequence modulated to a distant key. This would seem to contradict Korte’s third law of apparent motion, in which distances traversed – either real or virtual – tend to conform to a sort of constant speed; that is, an implicit physical or psychological velocity is assumed to be constant, and perceptual (or imagistic) events must be “squeezed into” the available time such that a speed/distance trade-off is manifest. It separately contradicts models of semantic networks, which imply a greater time course for accessing conceptual nodes that are farther from any given starting point.
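A schematic way to see the contradiction (an editor’s formalization, not Firmino et al.’s): if apparent motion, visual or tonal, proceeds at a roughly constant implied velocity $v$, then the time $t$ needed to traverse a distance $d$ satisfies

$$ v \approx \frac{d}{t} \approx \mathrm{const} \quad\Longrightarrow\quad t \propto d, $$

so modulating to a more distant key (larger $d$) should be accompanied by a longer felt duration, yet subjects reported shorter ones.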
Firmino et al. interpret their finding as converging evidence for the psychological instantiation of tonal hierarchies (as shown by Lerdahl, Krumhansl, and others). They argue that tonal hierarchies are non-temporal. I find the study and the topic fascinating, and I think this area is rich with possibilities for further research before we’ll have a definitive account of how and why this occurs.
Your 2010 study of emotion perception in music among adolescents with autism spectrum disorders found that participants with ASD could recognize emotionality in music but could not rate the degree of emotion conveyed as the control participants could. One explanation for this finding was that rating emotionality relies on the perception of discrete cues, which are learned, while recognition does not.
To begin with, I’d dispute this point. Recognition also must be learned – individuals from other cultures do not recognize “Western” musical emotion, and we tend not to recognize musical emotion in non-Western musics such as Indian ragas. At George Harrison’s Concert for Bangla Desh, an audience of Western listeners, mostly Beatle fans, famously (notoriously?) clapped as Ravi Shankar finished tuning. “If you appreciate the tuning so much,” he said, “I hope you will enjoy the playing more.”
Additionally, previous studies have shown that musical emotionality can be accurately identified across cultures, and that music played in a minor key, as opposed to a major key, is always recognized as sadder.
I take issue with this too. Minor and major are not universally sad and happy, respectively; they are cultural constructs. Consider klezmer music, which is almost all in minor keys but is by no means all sad.
All of this points to some fundamental, unlearned human ability to perceive various pitches, timbres, and tones as attached to specific emotions. What accounts for this common ability?
There are some correlations – mappings – between tempo, interval size, and emotion. Slow tempos with stepwise movement often (across cultures) indicate sad or contemplative music, and fast tempos with larger intervallic leaps indicate happy or surgent music. But this is not universal. Each culture develops its own musical language – syntax and semantics – just as it does for speech.
If music is hardwired into our brains, why is it difficult for us to multitask – to study, say – while listening to music with lyrics?
I suspect this is just an attentional bottleneck. Vision is hardwired into our brains too, but that doesn’t mean we can attend to lots of visual events at once.
Does music exist by itself or do we exist for music?
I would say neither – we create music in our brains, either out of sounds-in-the-world (through perceptual organization) or explicitly, when we compose and perform. Music is in the ear and the brain of the beholder.