The biology of music
The Economist, February 12th-18th 2000

Music may soothe the troubled breast. It might even be the food of love. But how does it cast its spell? Romantics can take comfort from the fact that science does not yet have all the answers. But it has some.
 
WHEN philosophers debate what it is that makes humans unique among animals, they often point to language. Other animals can communicate, of course. But despite the best efforts of biologists working with beasts as diverse as chimpanzees, dolphins and parrots, no other species has yet shown the subtleties of syntax that give human languages their power.

There is, however, another sonic medium that might be thought uniquely human, and that is music. Other species can sing (indeed, many birds do so better than a lot of people). But birdsong, and the song of animals such as whales, has a limited repertoire—and no other animal is known to have developed a musical instrument.

Music is strange stuff. It is clearly different from language. People can, nevertheless, use it to communicate things—especially their emotions. And when allied with speech in a song, it is one of the most powerful means of communication that humans have. But what, biologically speaking, is it?

Music to the ears

If music is truly distinct from speech, then it ought to have a distinct processing mechanism in the brain—one that keeps it separate from the interpretation of other sounds, including language. The evidence suggests that such a separate mechanism does, indeed, exist.

Scientific curiosity about the auditory system dates back to the mid-19th century. In 1861 Paul Broca, a French surgeon, observed that speech was impaired by damage to a particular part of the brain, now known as Broca’s area. In 1874 Carl Wernicke, a German neurologist, made a similar observation about another brain area, and was similarly immortalised. The location of different language-processing tasks in Broca’s area (in the brain’s left frontal lobe) and Wernicke’s area (in the left temporal lobe, more or less above the ear) was one of the first pieces of evidence that different bits of the brain are specialised to do different jobs.

People whose language-processing centres are damaged do not, however, automatically lose their musical abilities. For example, Vissarion Shebalin, a Russian composer who suffered a stroke in the left hemisphere of his brain in 1953, could neither understand speech nor speak after his illness—yet he retained his ability to compose music until his death ten years later. Conversely, there are one or two cases of people whose musical abilities have been destroyed without detriment to their speech. This double dissociation suggests that music and language are processed independently.

On top of this separation from the processing of language, the processing of music (like all other sensory abilities that have been investigated in any detail) is also broken down into a number of separate tasks, handled in different parts of the brain. As early as 1905, for example, a neurologist called Bonvicini described a brain-damaged individual who could identify the sounds of different musical instruments, and also detect wrong notes, but who could not recognise well-known tunes such as his own national anthem. A detailed examination of music processing, however, has taken place only in the past few years, with work such as that done by Catherine Liégeois-Chauvel of INSERM in Marseilles and Isabelle Peretz at the University of Montreal.

In the late 1990s Dr Liégeois-Chauvel and Dr Peretz examined 65 patients who had undergone a surgical procedure for epilepsy which involved the removal of part of one or other temporal lobe. That not only allowed the researchers to study whether music, like language, is processed predominantly on only one side of the brain, but also permitted them to investigate which bits of the temporal lobe are doing what.

Researchers divide melodies into at least six components. The first is the pitches of the notes in the melody (ie, the frequencies at which they make the air vibrate). The second is the musical intervals between the notes (the difference in pitch between one note and the next). The third is the key (the set of pitches from which the notes are drawn; western music divides each “octave” into 12 pitches, and a given key uses a subset of them). The fourth is contour (how the melody rises and falls). The fifth is rhythm (the relative lengths and spacings of the notes). The sixth is tempo (the speed at which a melody is played). Dr Liégeois-Chauvel and Dr Peretz asked each of their subjects to listen to a series of short melodies written especially for the project, in order to study some of these components separately.
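
To make these six components concrete, here is a minimal illustrative sketch of how a melody might be decomposed in code. It is not drawn from the studies described in this article, and all the names in it are hypothetical:

    # Illustrative only: a melody as (MIDI pitch, duration) pairs, with the
    # components described above derived from it. Python 3.9+.
    from dataclasses import dataclass

    @dataclass
    class Melody:
        notes: list[tuple[int, float]]  # (MIDI pitch number, length in beats)
        tempo_bpm: float                # tempo: how fast the beats go by

        def pitches(self):
            return [p for p, _ in self.notes]

        def intervals(self):
            # signed differences in semitones between successive notes
            ps = self.pitches()
            return [b - a for a, b in zip(ps, ps[1:])]

        def contour(self):
            # only the direction of each step: +1 up, -1 down, 0 level
            return [(i > 0) - (i < 0) for i in self.intervals()]

        def rhythm(self):
            # relative note lengths, independent of pitch and tempo
            return [d for _, d in self.notes]

    # The "key" is the set of pitch classes the notes are drawn from.
    C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

    def in_key(melody, key=C_MAJOR):
        return all(p % 12 in key for p in melody.pitches())

    tune = Melody([(60, 1), (62, 1), (64, 1), (60, 1)], tempo_bpm=120)
    print(tune.intervals())  # [2, 2, -4]
    print(tune.contour())    # [1, 1, -1]
    print(in_key(tune))      # True

Changing the key would alter the set from which the pitches are drawn while leaving the contour intact; changing the contour would alter the pattern of ups and downs—exactly the two manipulations used in the first set of experiments.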

The first set of experiments looked at the perception of key and contour. Each melody was played twice. On some occasions the second playing was identical with the first; on others, either the key or the contour was changed. Subjects had to judge whether the first and second playings were the same.

The results of these experiments showed that those people with right-temporal-lobe damage had difficulty processing both the key and the contour of a melody, while those with left-temporal-lobe damage suffered problems only with the key. This suggests that, like language, music is processed asymmetrically in the brain (although not to quite the same degree). It also suggests that if one hemisphere of the brain deserves to be called dominant for music, it is the right-hand one—the opposite of the case for language in most people.

The part of the lobe involved in the case of contour is a region known as the first, or superior, temporal gyrus, though the site of the key-processor was not identified. In addition, those subjects who had had another part of the lobe, known as Heschl’s gyrus, removed had difficulty identifying variations in pitch, regardless of whether it was the left or the right Heschl’s gyrus that was missing.

Dr Liégeois-Chauvel’s and Dr Peretz’s second set of experiments looked at the perception of rhythm. This time, the possible difference between the two presentations of a melody was that one might be in “marching” time (2/4, to music aficionados) while the other was in “waltz” time (3/4). Again, subjects were asked whether the two presentations differed. In this case, however, the surgery had no effect on the perception of rhythm in any subject. That suggests rhythm is analysed somewhere other than the temporal lobe.
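
The distinction between “marching” and “waltz” time is simply a matter of how the same stream of beats is grouped and stressed—a toy illustration in the same spirit as the sketch above (again hypothetical, not the researchers’ materials):

    # Illustrative only: six beats grouped in twos (2/4, a march) or in
    # threes (3/4, a waltz); S marks a stressed beat, w an unstressed one.
    march = ["S", "w"] * 3        # S w | S w | S w
    waltz = ["S", "w", "w"] * 2   # S w w | S w w
    print(" ".join(march))        # S w S w S w
    print(" ".join(waltz))        # S w w S w w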

 
Waltzing ahead

Dr Liégeois-Chauvel and Dr Peretz were, of course, using basically the same method as Broca and Wernicke—looking at damaged brains to see what they cannot do. But modern brain-scanning methods permit healthy brains to be interrogated, too. In 1999 Stefan Evers of the University of Münster and Jörn Dannert of the University of Dortmund used “functional transcranial Doppler sonography”, a technique that measures the speed at which blood flows through a particular artery, to study the response of blood flow to music. Their subjects were a mixture of musicians (defined as people who could play at least two musical instruments) and non-musicians (defined as people who had never played an instrument and did not listen regularly to music).

Once again, there was a bias towards the right hemisphere—at least among those with no musical training. In such non-musicians, blood flow to the right hemisphere increased on exposure to music with a lot of harmonic intervals. (The researchers picked a 16th-century madrigal whose words were in Latin, a language chosen because none of the participants spoke it, and so it would not activate their speech processing.) In musicians, however, the reverse was true: blood flow to their left hemispheres increased, suggesting that their training had changed the way they perceived harmony.

When the participants were exposed to music that was strongly rhythmical (a modern rock band) rather than harmonic, the response changed. Rock music produced an equal increase of blood flow in both hemispheres in both groups of subjects, confirming Dr Liégeois-Chauvel’s and Dr Peretz’s observation that pitch and rhythm are processed independently.

In France, Hervé Platel, Jean-Claude Baron and their team at the University of Caen have applied a second non-invasive technique, called positron-emission tomography, or PET, to focus more precisely on which bits of the brain are active when someone listens to a melody. Dr Platel and Dr Baron study people who are “musically illiterate”; that is, they cannot read musical notation. In these experiments the subjects were played tunes by recognised composers, such as Strauss’s “The Blue Danube”, rather than special compositions of the sort used by Dr Liégeois-Chauvel and Dr Peretz. Otherwise the method was similar—play a melody twice, and study the response to the differences—except that in this case each subject was inside a PET scanner at the time.

In general, the results from Caen matched those from Montreal and Marseilles, but because Dr Platel and Dr Baron were examining whole, healthy brains, they were able to extend the work done by Dr Liégeois-Chauvel and Dr Peretz. One of their most intriguing results came when they looked at the effect of changing the pitch of one or more of the notes in a melody. When they did this, they found that in addition to activity in the temporal lobes, parts of the visual cortex at the back of the brain lit up.

The zones involved (Brodmann areas 18 and 19) are better known as the site of the “mind’s eye”—the place where images are conjured up by the imagination alone. What that means is not yet clear, though another PET study, by Justine Sergent and her colleagues at McGill University in Montreal, has shown that the same two zones are active in the brains of pianists when they are playing their instruments. Dr Baron’s interpretation is that when the pitches of a sequence of notes are being analysed, the brain uses some sort of “symbolic” image to help it decipher each pitch, in rather the same way that the conductor of an orchestra lifts his arm to indicate what people think of as “high” pitches (those with high frequencies, and thus short wavelengths) and lowers it for “low” pitches (those with long wavelengths). That might help to explain why people perceive notes as high and low in the first place. (By comparison, most people have no sense that blue light is “higher” than red light, even though blue light has a shorter wavelength.)
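
The link between pitch and wavelength, incidentally, is simple arithmetic (a back-of-envelope illustration, not part of the studies): the wavelength of a sound is the speed of sound divided by its frequency,

    \[ \lambda = \frac{v}{f}, \qquad \frac{343\ \mathrm{m/s}}{440\ \mathrm{Hz}} \approx 0.78\ \mathrm{m}, \qquad \frac{343\ \mathrm{m/s}}{880\ \mathrm{Hz}} \approx 0.39\ \mathrm{m} \]

so concert A (440Hz) has a wavelength of about 0.78 metres in air, while the A an octave above it, at 880Hz, has a wavelength of about half that.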

A strange change from major to minor

Music’s effect on the outer layers of the brain—the temporal and even the visual cortex—is only half the story, however. These are the places in which the signal is being dissected and processed. The place where it is having its most profound effect is in the brain’s emotional core—the limbic system.

Music’s ability to trigger powerful emotions is well known anecdotally, of course. But science requires more than anecdote. So in 1995 Jaak Panksepp, a neuroscientist at Bowling Green State University, Ohio, decided to see if the anecdotes were true. He asked several hundred young men and women why they felt music to be important in their lives.

Emotion turned out to be not merely an answer. It was, more or less, the answer. Around 70% of both sexes said it was “because it elicits emotions and feelings”. “To alleviate boredom”, the next most popular response, came a very distant second.

That music does, indeed, elicit emotions—rather than merely expressing an emotion that the listener recognises—has been shown more directly by Carol Krumhansl, a psychologist at Cornell University. Dr Krumhansl addressed the question by looking at the physiological changes (in blood circulation, respiration, skin conductivity and body temperature) that occurred in a group of volunteers while they listened to different pieces of music.

The ways these bodily functions change in response to particular emotions are well known. Sadness leads to a slower pulse, raised blood pressure, a decrease in the skin’s conductivity and a drop in body temperature. Fear produces increased pulse rates. Happiness causes faster breathing. So, by playing pieces ranging from Mussorgsky’s “Night on the Bare Mountain” to Vivaldi’s “Spring” to her wired-up subjects, Dr Krumhansl was able to test musical conventions about which emotions are associated with which musical structures.

Most of the conventions survived. Music with a rapid tempo, written in a major key, reliably induced happiness. A slow tempo and a minor key induced sadness, and a rapid tempo combined with dissonance (the sort of harsh musical effect particularly favoured by Schoenberg) induced fear.

To get even closer to what is happening, Robert Zatorre and Anne Blood, who also work at McGill, have pursued the emotional effects of music into the middle of the brain, using PET scanning. They attacked the problem directly by composing a set of new melodies featuring explicitly consonant or dissonant patterns of notes, and playing them to volunteers who had agreed to be scanned.

When the individuals heard dissonance, areas of their limbic systems known to be responsible for unpleasant emotion lit up and, moreover, the volunteers used negative adjectives to describe their feelings. The consonant music, by contrast, stimulated parts of the limbic system associated with pleasure, and the subjects’ feelings were incontestably positive—a neurological affirmation of the opinions of those who dislike Schoenberg’s compositions.

But perhaps the most intriguing study so far of the fundamental nature of music’s effects on the emotions has been done by Dr Peretz. With the collaboration of Ms R, a woman who has suffered an unusual form of brain damage, she has shown that music’s emotional and conscious effects are completely separate.

Ms R sustained damage to both of her temporal lobes as a result of surgery undertaken to repair some of the blood vessels supplying her brain. While her speech and intellect remained unchanged after the operation, her ability to sing and to recognise once-familiar melodies disappeared. Remarkably, though, she claimed she could still enjoy music.

In Ms R’s case, the use of a PET scanner was impossible (her brain contains post-operative metallic clips, which would interfere with the equipment). Instead, Dr Peretz ran a test in which she compared her subject’s emotional reactions to music with those of a control group of women whose temporal lobes were intact.

As expected, Ms R failed to recognise any of the melodies played to her, however many times they were repeated. Nor could she consciously detect changes in pitch. But she could still feel emotion—a result confirmed by manipulating the pitch, the tempo and the major or minor nature of the key of the various pieces of music being played, and comparing her reactions to the altered tunes with those of the control group.

A lot has thus been discovered about how music works its magic. Why it does so is a different question. Geoffrey Miller, an evolutionary psychologist at University College, London, thinks that music really is the food of love. Because it is hard to do well, it is a way of demonstrating your fitness to be someone’s mate. Singing, or playing a musical instrument, requires fine muscular control. Remembering the notes demands a good memory. Getting those notes right once you have remembered them suggests a player’s hearing is in top condition. And the fact that much music is sung by a lover to his lass (or vice versa) suggests that it is, indeed, a way of showing off.

That does not, however, explain why music is so good at creating emotions. When assessing a mate, the last thing you should want is to have your feelings manipulated by the other side. So, while evolution should certainly build a fine, discriminating faculty for musical criticism into people, it is still unclear why particular combinations of noise should affect the emotions so profoundly. Stay tuned.

 
