Are we naturally drawn to music, as well as language?
Chomsky’s theory that humans are innately attuned to language is well known. Research has shown that when babies listen to speech, they track word patterns and gather information. But humans are drawn to more than language sounds: according to multiple studies, we are also drawn to the sounds of music.
Music and Language Similarities
Although music may be more attractive than language sounds to some ears, there are similarities between these two expressive mediums. Music and language both have linear, coherent structures, and both rely on syntax, that is, on specific sequences of notes or words, to shape the meaning of the message being conveyed. What’s more, they demand equally complex, higher-order thinking processes and skills, including attention, categorization, and memory.
Besides similarities in structure and processing, several past studies show that musical and linguistic operations take place in similar areas of the brain, and when a musical structure is interrupted, activation takes place in brain areas associated with language structure processing.
Checking Shared Brain Locations
Neuroscientist and musician Dr. Daniel Levitin, as part of a group of researchers, decided to investigate the shared brain locations of music and language more deeply by examining whether we use distinct or shared neurological resources for processing the syntactic structures in music and speech. Their research would test the influential shared syntactic integration resource hypothesis (SSIRH), which proposes that syntactic processing for language and music draws on a common set of neural resources in the prefrontal cortex (PFC).
The scientists presented music and speech stimuli to 20 participants in order to examine brain functioning. They found that the temporal manipulation in music and speech produced fMRI (functional magnetic resonance imaging) signal changes of the same magnitude in the prefrontal and temporal cortices of both brain hemispheres, results that both supported and extended the SSIRH.