Humans have an innate musicality: the ability to perceive and enjoy complex musical patterns appears to be culturally universal.2 In this respect, music can be compared with speech, the other cognitively interesting way we use sound. But whereas speech is essential for communicating concepts and propositions, music has no such primary function. What music can do is express emotions, moods, or affective mental states that enrich our quality of life.
This brings us to the question in this article’s title: Why do we love music? On the face of it, there is no obvious reason why a sequence or pattern of sounds with no specific propositional meaning should give us pleasure.3 Yet music is a well-known source of joy.
This question can be approached in many ways; a social scientist might answer it differently than a musicologist. Since I am a neuroscientist, I want to address it from that perspective, while recognizing that other views may also offer valuable insights. Neuroscience has the advantage of letting us link our answers to empirical findings, and we can draw on two particularly relevant domains: the neuroscience of auditory perception and the neuroscience of the reward system. The punch line of my article is that music derives its power from the interaction between these two systems. The first allows us to analyze sound patterns and form predictions about them; the second evaluates the outcomes of those predictions, generating positive or negative emotions depending on whether the expectation was fulfilled, exceeded, or violated.
The Auditory Perception System
It is remarkable to realize that all sounds, whether a clap of thunder, a baby’s cry, or the strains of a waltz, reach us as nothing more than vibrations of molecules in the air. Our experience of them is the product of an intricate perceptual system that transforms those vibrations into internal representations, what psychologists call percepts, which can in turn be linked to thoughts, memories, and emotions, to our knowledge of the world and to other sounds we have heard. The process begins with extracting the relevant acoustic features from the sound and encoding them in patterns of nerve firings.
Early stages of this process take place in the brainstem and thalamus. A cello string, for example, vibrates at a specific rate that depends on its material, length, and tension. If the lowest string on a cello is tuned to the musical note C, its entire length vibrates 65 times per second. Neurons in brainstem nuclei and in the cortex fire in synchrony with this vibration, producing a corresponding neural oscillation at 65 Hz.4 In this way, physical energy is transformed into a neural pattern that represents the frequency of the sound.
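To make the numbers concrete, here is a minimal sketch, not from the article, using the standard equal-temperament convention of A4 = 440 Hz, under which each semitone step multiplies frequency by the twelfth root of 2:

```python
# Assumes equal temperament with A4 = 440 Hz (a standard convention,
# not something specified in the article).
def note_frequency(semitones_from_a4: int, a4_hz: float = 440.0) -> float:
    """Frequency of a note a given number of semitones away from A4."""
    return a4_hz * 2 ** (semitones_from_a4 / 12)

# The cello's lowest string, C2, sits 33 semitones below A4.
print(round(note_frequency(-33), 1))  # 65.4, i.e. roughly 65 vibrations per second
```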
Research has shown that neurons in the auditory cortex, particularly in the right cerebral hemisphere, are essential for distinguishing fine gradations in frequency. This capacity is also necessary for identifying the relationships between pitches on which musical systems are built.
An introductory music theory course would include a description of musical intervals: the ratio between the frequencies of two tones, which determines the patterns we hear as melodies (when the tones are sequential) or harmonies (when they are simultaneous). Intervals are defined by the relations between pitches, independent of the absolute pitch values. A minor third, for example, corresponds roughly to a frequency ratio of 6 to 5, so any two tones standing in that relationship will be heard as a minor third.
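The ratio idea can be illustrated with a small sketch of my own (the interval names use just-intonation ratios; the tolerance value is an arbitrary choice):

```python
# Just-intonation frequency ratios for a few common intervals.
JUST_RATIOS = {
    "minor third": 6 / 5,
    "major third": 5 / 4,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,
}

def classify_interval(f_low: float, f_high: float, tolerance: float = 0.01):
    """Return the named interval whose ratio best matches f_high / f_low."""
    ratio = f_high / f_low
    for name, target in JUST_RATIOS.items():
        if abs(ratio - target) / target < tolerance:
            return name
    return None

# Any pair of frequencies in (roughly) a 6:5 relationship is a minor third,
# regardless of the absolute pitches involved.
print(classify_interval(220.0, 264.0))  # minor third (ratio 1.2)
print(classify_interval(440.0, 528.0))  # minor third again, an octave higher
```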
This property, called transposition, is what allows us to recognize a song as the same tune even when it is sung in a different key. Without it, familiar songs such as those from The Sound of Music would be unrecognizable from one singer to the next. Numerous studies have shown that the brain pathways enabling this kind of computation lie not in the auditory cortex itself but in regions connected to it, regions that are also involved in other kinds of sensory transformations.
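As a toy illustration of why transposition works (my construction, not a model from the article): if a melody is represented by its successive intervals rather than by absolute pitches, shifting every note by a constant leaves the representation unchanged.

```python
def interval_pattern(midi_notes):
    """Successive pitch differences in semitones; independent of key."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

# Opening notes of "Do-Re-Mi" in C major, and the same tune
# transposed up four semitones to E major.
in_c = [60, 62, 64, 60, 64, 60, 64]
in_e = [n + 4 for n in in_c]

# The absolute pitches differ, but the interval pattern, which is what
# we recognize as "the same song," is identical.
print(interval_pattern(in_c) == interval_pattern(in_e))  # True
```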
Another challenge is that sounds vanish from the environment almost instantly, unlike the objects in a visual scene. To determine pitch relationships and other properties, the brain must temporarily hold sounds in mind after they have ended. The same is true for speech: a sentence could not be understood if each word were forgotten the moment it was spoken. This ability depends on the faculty known as working memory, the capacity to retain and manipulate information over short periods.
The Prediction System
The foregoing is a quick and simplified overview of some of the machinery involved in computing relationships between tones, and it captures only a small part of the complexity of responding to musical sounds. One of the most crucial aspects of perception, and one critical to music, is the ability to predict future events based on past ones.
This ability is essential for survival because it allows an organism to plan and to prepare for what may come. Music, like language, is believed to have a rich statistical structure among its patterns of sounds. Every musical system has a syntax, a set of rules governing how sounds are related to one another. The auditory brain is exquisitely sensitive to these regularities and can quickly learn the statistical relationships simply through exposure to examples of the system (melodies and rhythms, or words and sentences). This is how babies learn the sound patterns of their environment.
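A minimal sketch of this kind of statistical learning, a simple bigram model of my own construction and far cruder than anything the brain does, shows how mere exposure to examples yields expectations about what comes next:

```python
from collections import Counter, defaultdict

def learn_transitions(melodies):
    """Estimate P(next note | current note) from example melodies."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# "Exposure": a handful of simple C-major phrases.
corpus = [["C", "D", "E", "F", "G"], ["G", "F", "E", "D", "C"],
          ["C", "E", "G", "E", "C"], ["B", "C", "D", "C", "B", "C"]]
model = learn_transitions(corpus)
print(model["B"])  # {'C': 1.0}: the leading tone comes to "expect" the tonic
```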
Researchers have developed procedures to probe the neural substrates of this ability. They present a set of sounds that conforms to standard, expected rules (e.g., a sequence of chords) and then introduce an item that does not follow from that context (e.g., an out-of-key chord). The violation elicits a characteristic brain response associated with thwarted expectancy.
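Continuing the toy model above, surprisal (negative log probability) is one crude, illustrative stand-in for such an expectancy-violation signal: an out-of-key continuation scores far higher than an expected one.

```python
import math

def surprisal(model, prev_note, next_note, floor=1e-6):
    """Bits of surprise for a continuation; floor handles unseen notes."""
    p = model.get(prev_note, {}).get(next_note, floor)
    return -math.log2(p)

print(surprisal(model, "B", "C"))   # 0.0 bits: the fully expected resolution
print(surprisal(model, "B", "F#"))  # ~19.9 bits: the "out-of-key" probe
```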
These results show that listeners not only encode the properties of sounds and the relationships among them but also predict what will come next (otherwise we would not find the out-of-key chord jarring). These predictions are based not only on what we have just heard but on our entire listening history. Someone who has never been exposed to the rules of another culture’s musical system will find it hard to make accurate predictions, and therefore hard to make sense of that culture’s music, just as with an unfamiliar language.