Are Music and Language Equally Neurologically Stimulating?

Music and language share brain areas - Image by everyone's idle

Are we naturally drawn to music, as well as language?

Chomsky’s theory that humans are innately attracted to the sounds of language is well known. Research has shown that when babies listen to speech, they track word patterns and gather information. According to multiple studies, however, humans are drawn not only to language sounds but also to musical sounds.

Music and Language Similarities

Although music may be more attractive than language sounds to some ears, the two expressive mediums share clear similarities. Both music and language have linear, coherent structures, and both use syntax (specific sequences of notes or chunks of language) to shape the meaning, or semantics, of the message being relayed. What’s more, they require equally complex, higher-order thinking processes and skills, including attention, categorization, and memory.

Besides similarities in structure and processing, several past studies show that musical and linguistic operations take place in similar areas of the brain, and when a musical structure is interrupted, activation takes place in brain areas associated with language structure processing.

Checking Shared Brain Locations

Neuroscientist and musician Dr. Daniel Levitin, as part of a group of researchers, decided to investigate the shared brain locations of music and language more deeply by examining whether we use distinct or shared neurological resources for processing the syntactic structures in music and speech. Their research tested the well-known shared syntactic integration resource hypothesis (SSIRH), which proposes that syntactic processing for language and music draws on a common set of neural resources in the pre-frontal cortex (PFC) area of the brain.

The scientists presented music and speech stimuli to 20 participants in order to examine brain functioning. They found that temporal manipulation of the music and speech produced fMRI (functional magnetic resonance imaging) signal changes of the same magnitude in the pre-frontal and temporal cortices of both brain hemispheres, a result that both supported and extended the SSIRH.


© Copyright 2012 Lesley Lanir, All rights Reserved. Written For: Decoded Science