Saturday, October 3, 2015

Hunting the Muse

Review of "This is Your Brain on Music", by Daniel Levitin.

Music is an odd intersection of math, science, and emotion. The notes are starkly digital, laid out in their logarithmic sequence on a keyboard or on other instruments. Our hearing of them is likewise mapped with pitch-precision up the windings of the cochlea. But once in the brain, all hell breaks loose, as we map a linear sound input into many dimensions: source location, source identification, speech interpretation, and emotional understanding. It becomes a rich sound-scape.

Dan Levitin's book is subtitled "The science of a human obsession." He is clearly obsessed himself, having gone into the academic study of sound cognition after a lengthy career as a music producer. And he writes an excellent story about what is currently known about music in a scientific sense, along with a very effective primer in music theory. But one question still, in my mind, eluded him, which is perhaps the most basic: why does music carry such a strong emotional impact?

Scholars of music make a great deal out of a theory of expectations: how musicians play with our expectations in all aspects of music (rhythm, pitch, timbre, etc.) to retain our interest, tell stories, move our bodies, and paint pictures in soundscapes. This is an important area of work, but I think it leaves out some very basic aspects of musical communication. What is the difference between a major chord and a blues chord? Just one note, but it makes a vast and instant difference in feeling. I think a major question in music theory has to be: what is that difference? Cultural and personal expectations may have a role, but the effect is so immediate that I think something deeper is going on.

While Levitin's book mentions the use of music and sound by other animals, it still seems to suffer a bit from species-ism, the idea that human music is of a different kind than that of other organisms, like a bird's song or a whale's call. He argues against Steven Pinker's extreme theory that music is "cheesecake," in the sense of being an intense, possibly even dangerous, derivative of some other, perhaps vestigial, evolutionary function (that function being language, which Pinker studies!). Levitin points out that rock stars have the kind of reproductive success, at least in potential terms, to put the lie to that theory in short order. Music is not vestigial at all.

But he does not seem to go the rest of the way, which would be to plumb the depths of the emotional code that music in the wide sense has been for animals from a rather early stage. While mammals have brought hearing to the highest possible pitch (sorry!), as exemplified by bats and dolphins, even insects bring song into the most momentous and doubtless emotional aspects of their lives. Who hasn't heard crickets chirping, or cicadas singing, or fruit flies courting? It may not sound like much to us, but to them it is the way to pursue their most cherished hope, the fulfillment of which must stir whatever emotion they are capable of. And while anthropomorphization has its dangers, it seems fair to me to understand emotion as a virtually universal system for the evaluation and expression of needs, little different in less complicated species than in ourselves. After all, the youngest human infant has towering emotions, despite virtually non-existent cognition. We should not confuse richness with intensity.

And it isn't just love. Hearing the Blue Jays and Hummingbirds fight it out in the yard, with terroristic screeches, shows that a widespread tone-language covers the emotional gamut. Human music is clearly a refinement and elaboration, and perhaps this is what Steven Pinker had in mind with his cheesecake analogy, but the underlying tonal language is not confined to humans at all. It also clearly precedes all kinds of explicit language, even though birds are known to have small languages as well. Think of cats purring, and mice chirping in their ultrasonic language, to their pups and to others. Such sounds express strong pleasure, just as our music can.

That lays the groundwork for the universality of sound communication, and of music-type communication particularly. From there it is a small step to recognize that, given the physics of sound, consonance and dissonance form natural poles of an emotional tone language. Blue Jays use dissonance to scare competitors; cats use consonance to express pleasure. Crickets chirp in tune and in rhythm because that is attractive to female crickets. The consonance/dissonance spectrum seems to have been encoded into emotion very deeply in evolution, somewhat like the flavors of sweet and bitter foods, or the attractiveness of pure colors, so strikingly encoded on the plumage of birds, versus drab camouflage at the other extreme. (Or on chameleons.)

The language of chords, then, is rather analogous to that of color mixtures, where shades of dissonance enter as more complex mixtures are made. Why does that mixing evoke particular emotions, and more importantly, why do we value these complex mixtures over the pure, major-chord tones? We seem, in common with whales, mice, and other complex creatures, to need to share our emotional states, which are rarely simple. Sharing emotion creates social coordination and bonding, essential to social species, which we all are. The importance of song in courting is the premier example, of course. James Brown eloquently expressed the vertiginous rollercoaster of pain and pleasure in love, and communicating it to potential partners gives them important messages about how much they are loved and needed, creating the basis of long-term cooperation.

Emotions are complicated, and while we have many modes for communicating them, from smell, to touch, to visual cues and badges, movements and gestures, and among humans now even explicit language, music has evidently been a pre-eminent mode for complex animals. It is there that we can find the reason why a Dmaj7 chord feels different from a plain D major triad or a bare octave. This is not a mechanistic explanation: brain scientists such as Levitin are busy figuring out how the connections between perception and emotion happen in the brain. But we know that it happens, and wonder, more crucially, about its evolutionary rationale. And that rationale, to paint it in extremely glib fashion, is to provide animals, humans included, a mode of emotional communication of exquisite expressiveness and sensitivity, which is open to anyone who hums a tune or coos to a child. It is closely related to language, which is typically strongly musical in part or whole, but is far more deeply and directly emotional.

Indeed, Levitin draws an interesting contrast late in the book between people with autism and people with Williams syndrome. The former are emotionally impaired and generally perplexed by music, while the latter are notoriously musically gifted and highly social and verbal. Those with Williams syndrome can read emotions well, as they can music; the two languages seem closely related on this genetic level as well.

  • Why so crazy? Russell Banks brings up the obvious racism of our Republican party. Also Krugman.
  • Prices per plate are going up ... how is this a democracy?
  • Murder and the corporation.
  • All the Republican tax plans are now out. Enough said.
  • The airline industry is no longer competitive.
  • The impact of low-skill immigration: "The absolute wage of high school dropouts in Miami dropped dramatically, as did the wage of high school dropouts relative to that of either high school graduates or college graduates."
  • Justified gun use in self-defense is rare. And gun control (aka gun-grabbing, for all you Freudians) works.
  • Image of the week ... the Taliban control maybe 1/4 of Afghanistan.


  1. Hello, I follow and like your blog. I usually don't comment; however, today I will... fully compensate ;)

    The way the brain processes music is one of my curiosities; glad that you brought up this subject. I haven’t read this particular book; however, I have studied a couple of other books on the subject.

    I guess you already know that the notes of the C-major chord (C:E:G, or Do:Mi:Sol) have frequencies proportional to 4:5:6, which makes them all harmonics of the C two octaves below: C3 = 4*C1, E3 = 5*C1, G3 = 6*C1. When you combine these frequencies (add and subtract them), you only get harmonics of C1. In the tempered scale these ratios are only approximated; however, this explains why we perceive the chord as consonant.
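    Working in units of C1, this is easy to check. A minimal sketch (the 4:5:6 ratios come from the comment above; the "harmonic numbers" are just relative frequency units, not Hz):

    ```python
    from itertools import combinations

    # Just-intonation C-major triad, measured in units of the low fundamental C1:
    # C3 = 4, E3 = 5, G3 = 6 (the 4:5:6 ratios mentioned above).
    triad = {"C3": 4, "E3": 5, "G3": 6}

    # Every pairwise sum and difference tone is an integer multiple of C1,
    # i.e. another harmonic of the same fundamental.
    for (n1, f1), (n2, f2) in combinations(triad.items(), 2):
        print(f"{n1}+{n2} -> harmonic {f1 + f2} of C1, "
              f"{n2}-{n1} -> harmonic {f2 - f1} of C1")
    ```

    The sums come out as harmonics 9, 10, and 11 of C1, and the differences as harmonics 1, 1, and 2; nothing falls outside the harmonic series of C1.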

    The other notes in the C-major scale are chosen so that they sound well enough with the fundamental chord C:E:G. For example, F# is not accepted in C major because C:F#, in the tempered scale, has a ratio of 1:sqrt(2), which makes F# create too many C-unrelated frequencies by addition and subtraction. Our brains/ears usually don’t like a cacophony of unrelated frequencies played at the same time. We prefer frequencies that are harmonics of the same fundamental, as C:E:G are. Not all music is like this, but there is something there...
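    As a rough illustration of this (the interval-to-ratio pairings are the standard just-intonation ones, my addition rather than the comment's): the consonant tempered intervals all sit within a few cents of a small-integer ratio, while the tritone lands exactly on the irrational sqrt(2):

    ```python
    import math

    SEMITONE = 2 ** (1 / 12)  # equal-temperament semitone step

    # Tempered interval sizes vs. the simple just-intonation ratios they
    # approximate (a standard pairing, given here for illustration).
    intervals = {
        "major third (4 semitones)": (4, 5 / 4),
        "perfect fourth (5 semitones)": (5, 4 / 3),
        "perfect fifth (7 semitones)": (7, 3 / 2),
    }

    for name, (semitones, just) in intervals.items():
        tempered = SEMITONE ** semitones
        cents_off = 1200 * math.log2(tempered / just)
        print(f"{name}: tempered {tempered:.4f} vs just {just:.4f} "
              f"({cents_off:+.1f} cents)")

    # The tritone (6 semitones) is exactly sqrt(2): no small-integer ratio nearby.
    print("tritone:", SEMITONE ** 6, "=", math.sqrt(2))
    ```

    The fourth and fifth are off by only about 2 cents, the major third by about 14; the tritone has no nearby simple ratio at all, which fits the "unrelated frequencies" point above.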

    The C minor scale replaces E with Eb (the same key as D# on a tempered instrument), which makes the chord less “stable”; this might explain the change in our perception of minor chords. At the same time, notes that sound bad with Eb are replaced with notes that sounded bad with E. The reality is more complicated; however, there seems to be a physical and mathematical logic to how we choose musical scales.
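    One way to put numbers on that "less stable" feeling (using the standard just-intonation triad ratios, which are my addition, not the comment's): the major triad C:E:G is 4:5:6, so its notes are the 4th, 5th, and 6th harmonics of a fundamental two octaves below the root, while the minor triad C:Eb:G is 10:12:15, anchoring it to a far more remote common fundamental:

    ```python
    from functools import reduce
    from math import gcd

    def harmonic_numbers(ratios):
        """Reduce a chord's frequency ratios so each note is the n-th
        harmonic of the largest common fundamental."""
        g = reduce(gcd, ratios)
        return [r // g for r in ratios]

    major = harmonic_numbers([4, 5, 6])      # C:E:G
    minor = harmonic_numbers([10, 12, 15])   # C:Eb:G

    # The major root is only the 4th harmonic of its implied fundamental
    # (two octaves below); the minor root is the 10th harmonic (more than
    # three octaves below), a much weaker anchor.
    print("major harmonics:", major, "-> root is harmonic", major[0])
    print("minor harmonics:", minor, "-> root is harmonic", minor[0])
    ```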

    This is about the physical constraints in music. I will come back with some ideas about the way melodies might interact with the brain. Disclaimer: I don’t even play an instrument; some of these ideas are from my reading and others are my own theoretical speculations.

  2. Me again… :)

    At this moment, I think that the musical notes in a melody are processed by the brain as patterns that bring a learning and accelerated-compression reward. The emotional dimension might be more related to the rhythm variations between notes. One argument is that you can easily make a computer program exactly reproduce the notes and durations; however, it takes an artist to create the high emotion of good music. Both play the same notes; but there are small differences in the durations of notes, and especially in the spaces between them, that make the art in music.
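    A toy sketch of that point (the score, jitter size, and function names here are all hypothetical illustrations, not anything from the comment): a program renders the score on an exact grid, while "feel" lives in small deviations of onset timing:

    ```python
    import random

    # A tiny made-up score: (pitch, duration in beats).
    score = [("C4", 0.5), ("E4", 0.5), ("G4", 0.5), ("C5", 1.0)]

    def render(score, jitter=0.0, seed=1):
        """Return (pitch, onset-time) pairs; jitter shifts each onset by up
        to +/- jitter beats, mimicking a player's expressive micro-timing."""
        rng = random.Random(seed)
        t, events = 0.0, []
        for pitch, dur in score:
            events.append((pitch, round(t + rng.uniform(-jitter, jitter), 3)))
            t += dur
        return events

    deadpan = render(score)                  # exact grid: the naive program
    humanized = render(score, jitter=0.03)   # small expressive timing shifts
    print(deadpan)
    print(humanized)
    ```

    Both renderings contain exactly the same notes; only the onset times differ, by a few hundredths of a beat.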

    Of course, you can recall the emotion of a certain melody by merely listening to the pattern of notes played by a computer; however, I would speculate that you cannot actually feel the music if you hear it for the first time from a computer: a flat, exact reproduction of the sheet music. It might seem like a nice pattern, but there will be no emotion.

    Somehow, the pattern of delays resonates with some internal brain patterns, triggering recalled emotions, or at least pleasant neural discharges. This makes me believe that the brain actually works using propagation-delay interactions, not only raw neuron activation.

    Of course, the note patterns matter too; for example, the same pattern repeated at another cadence might create the feeling of a distant memory recall. I saw this explanation from the great pianist Daniel Barenboim.

    I know, it was more like a post than a comment… I might reuse some of these comments in a future post on my own blog.

  3. Love it. Love all of it. Just wanted to say so.

  4. Thanks so much, Kelly- it is great to hear from you.