Music Glossary N
Welcome to the Music Glossary N section, where we explore terms that are essential to the world of music, all starting with the letter “N.” This section covers a variety of concepts, styles, techniques, and tools that are relevant whether you’re a musician, producer, or an avid music fan. From foundational ideas to niche terms, you’ll find plenty to expand your understanding and enhance your creative or listening experience.
In this section, you’ll encounter essential terms like notation and nuance, which play critical roles in both music composition and performance. Notation refers to the system of writing music down so that it can be performed, whether through traditional sheet music or modern digital methods. Nuance, on the other hand, is all about the subtle variations in dynamics, phrasing, and articulation that bring a piece of music to life, giving it character and emotional depth.
We’ll also delve into genres and cultural movements, such as new wave—a style that emerged in the late 1970s, blending punk rock’s energy with experimental sounds, synthesizers, and catchy melodies. Understanding its evolution and influence can offer valuable insights into the development of modern pop and alternative music.
For producers and audio engineers, we cover technical terms like noise gate and normalization. A noise gate is a tool used to reduce unwanted sounds in a recording, like background hiss, while normalization adjusts the overall volume of a track to a standard level, ensuring consistent playback.
Whether you’re deciphering the intricacies of musical notation or experimenting with production techniques, this glossary is your go-to resource. Clear definitions and practical examples make it accessible for learners of all levels. Explore the “N” terms and uncover new ways to deepen your connection with music.
Music Glossary N Terms
Narrative Song
A Narrative Song is a type of song that tells a story, often with a clear beginning, middle, and end. These songs focus on storytelling through their lyrics, conveying events, characters, and emotions in a structured or anecdotal format. They are a staple in genres like folk, country, blues, rock, and even hip-hop.
Key Characteristics
- Storytelling Lyrics: The primary focus is on the narrative, often involving characters, dialogue, and plot development.
- Linear Structure: Many narrative songs follow a chronological sequence of events, though some may use flashbacks or fragmented timelines.
- Themes: Common themes include love, heartbreak, adventure, tragedy, social issues, and personal experiences.
- Emotional Engagement: Narrative songs aim to evoke emotions, drawing listeners into the story and connecting them to its characters or events.
Examples of Narrative Songs
- Folk: “The Ballad of John and Yoko” by The Beatles, “The Wreck of the Edmund Fitzgerald” by Gordon Lightfoot.
- Country: “A Boy Named Sue” by Johnny Cash, “He Stopped Loving Her Today” by George Jones.
- Rock: “Hotel California” by Eagles, “Hurricane” by Bob Dylan.
- Hip-Hop: “Stan” by Eminem, “Children’s Story” by Slick Rick.
Role in Music History
Narrative songs have been a part of musical traditions for centuries, from medieval ballads to modern genres. They often serve as a way to preserve cultural stories, recount historical events, or share personal experiences. In some cases, they function as protest songs or vehicles for social commentary.
Writing a Narrative Song
- Focus on the Story: Develop a compelling plot or event to center the song around.
- Strong Characters: Create relatable or intriguing characters to drive the narrative.
- Clear Structure: Use verses to advance the story and the chorus to reflect its emotional core or theme.
Narrative songs transcend music alone, acting as both an art form and a storytelling medium. They offer a unique way to connect with listeners, blending the universality of music with the specificity of a tale.
Nashville Number System (NNS)
The Nashville Number System (NNS) is a shorthand method of writing chord progressions using numbers to represent the scale degrees of a key. Widely used in country, gospel, rock, and pop music, it simplifies communication between musicians by focusing on relationships between chords rather than specific chord names, making it highly versatile across keys.
How It Works
In the Nashville Number System:
- Chords are assigned numbers based on their position in the diatonic scale of the key.
- For example, in the key of C major:
- C = 1 (I)
- Dm = 2 (ii)
- Em = 3 (iii)
- F = 4 (IV)
- G = 5 (V)
- Am = 6 (vi)
- Bdim = 7 (vii°)
- In NNS charts, a number by itself is typically read as a major chord, while minor or diminished chords are marked with a suffix (e.g., 6m for minor, 7° for diminished); this differs from Roman numeral analysis, where uppercase (I) and lowercase (ii) numerals indicate chord quality.
- Modifications, such as extensions or altered chords, are added as annotations (e.g., 1sus4, 2maj7).
- Changes in rhythm or time can be notated above or below the numbers using slashes, dots, or other symbols.
Example Progression
A standard chord progression in C major:
- C – G – Am – F
Written in the NNS: 1 – 5 – 6m – 4
If this progression is transposed to G major:
- G – D – Em – C
It remains the same in the NNS: 1 – 5 – 6m – 4
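To make that transposability concrete, here is a minimal Python sketch that renders NNS numbers as chord names in any major key. The note list and function name are illustrative, not part of the NNS or any standard library.

```python
# A minimal sketch of rendering Nashville numbers as chord names in a major key.
# Chart symbols follow the convention described above (e.g. "6m" for a minor chord).

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of scale degrees 1-7

def nns_to_chord(number: str, key: str) -> str:
    """Convert an NNS symbol like '6m' into a chord name in the given major key."""
    degree = int(number[0])            # scale degree, 1-7
    suffix = number[1:]                # e.g. 'm', 'sus4', '7'
    root_index = (NOTES.index(key) + MAJOR_STEPS[degree - 1]) % 12
    return NOTES[root_index] + suffix

progression = ["1", "5", "6m", "4"]
print([nns_to_chord(n, "C") for n in progression])  # ['C', 'G', 'Am', 'F']
print([nns_to_chord(n, "G") for n in progression])  # ['G', 'D', 'Em', 'C']
```

The same chart renders correctly in both keys, which is exactly why session musicians favour the number system.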
Advantages
- Transposability: The system works in any key, allowing musicians to quickly adapt to key changes.
- Simplified Communication: Studio musicians can easily follow charts without needing to rewrite them for different keys.
- Efficiency: It reduces the need for lengthy sheet music or chord charts, ideal for collaborative and improvisational settings.
Uses
The Nashville Number System is commonly used in recording studios, live performances, and songwriting sessions. Its flexibility and simplicity make it an essential tool for professional musicians and producers.
Natural Minor Scale
The Natural Minor Scale is a diatonic scale that has a distinctive, darker, and more melancholic sound compared to the Major Scale. It is one of the three main types of minor scales (along with the harmonic and melodic minor scales) and is foundational in Western music theory and composition.
Structure
The natural minor scale follows this specific pattern of whole steps (W) and half steps (H):
W – H – W – W – H – W – W
For example, the A natural minor scale consists of these notes:
A – B – C – D – E – F – G – A
This scale can also be derived by starting on the sixth degree of its relative major scale. In the case of A minor, it shares the same notes as C major but begins on A.
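As a rough illustration, the following Python sketch builds a natural minor scale directly from the W – H pattern above. It uses a flat-based chromatic set, so some keys would need enharmonic respelling; the function name is illustrative.

```python
# A minimal sketch: spell a natural minor scale from the W-H-W-W-H-W-W pattern.

NOTES = ["A", "Bb", "B", "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab"]
PATTERN = [2, 1, 2, 2, 1, 2, 2]  # whole and half steps in semitones

def natural_minor(tonic: str) -> list[str]:
    index = NOTES.index(tonic)
    scale = [tonic]
    for step in PATTERN:
        index = (index + step) % 12
        scale.append(NOTES[index])
    return scale

print(natural_minor("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A']
```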
Characteristics
- Flat Third: The third note is a half step lower than in the major scale, giving the scale its minor quality.
- Flat Sixth and Seventh: Both the sixth and seventh degrees are also lowered by a half step compared to the major scale, contributing to its subdued and emotive character.
Applications
- Melody: Used to create introspective, sad, or mysterious melodies.
- Harmony: The basis for many chord progressions in minor keys.
- Genres: Common in classical music, jazz, folk, rock, and metal.
Comparison to Other Minor Scales
- Harmonic Minor: Raises the seventh note for a stronger resolution to the tonic.
- Melodic Minor: Raises both the sixth and seventh notes when ascending but often reverts to the natural minor when descending.
The natural minor scale is a versatile and essential tool for creating depth and emotional impact in music, making it a cornerstone for composers and musicians alike.
Neapolitan Chord
The Neapolitan Chord is a type of chord frequently used in classical and jazz music to add dramatic tension and color to a progression. It is typically a major chord built on the flattened second scale degree of a key. Often referred to as the Neapolitan Sixth, it is most commonly found in its first inversion.
Structure
- In a major or minor key, the Neapolitan chord is constructed on the second scale degree, which is lowered by a half step (♭II).
- It is usually voiced in first inversion, meaning the third of the chord is in the bass.
For example, in the key of C minor:
- The second scale degree (D) is lowered to D♭.
- The chord is built as D♭ major: D♭ – F – A♭.
- In first inversion, it would appear as F – A♭ – D♭.
Characteristics
- The Neapolitan chord has a rich, colorful, and tense sound due to its use of chromaticism and its unexpected position in the harmonic context.
- It is often used as a pre-dominant chord, resolving to the dominant (V) chord in cadences.
Usage
- Classical Music: The Neapolitan chord is a staple in tonal harmony, especially in minor keys, where it heightens emotional intensity. Composers like Beethoven, Chopin, and Mozart often used it to create dramatic shifts or poignant resolutions.
- Jazz and Popular Music: While less common, the Neapolitan chord occasionally appears as a color chord to add unexpected harmonic interest.
- Film and Theatre Music: Frequently used to evoke dramatic or heightened emotional moments.
Example Progression
In C minor:
- Neapolitan: F – A♭ – D♭ (♭II6)
- Resolves to Dominant: G – B – D (V)
- Then Tonic: C – E♭ – G (i)
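As a quick illustration of the construction described above, here is a Python sketch that spells the ♭II chord of any key and its first inversion. The note spelling is flat-based and the function name is illustrative.

```python
# A minimal sketch: spell the Neapolitan (bII) chord and its first inversion.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def neapolitan(tonic: str) -> list[str]:
    """Major triad built on the flattened second degree of the given key."""
    root = (NOTES.index(tonic) + 1) % 12                 # tonic raised a half step
    return [NOTES[(root + i) % 12] for i in (0, 4, 7)]   # root, major third, fifth

chord = neapolitan("C")                # ['Db', 'F', 'Ab'] in C minor
first_inversion = chord[1:] + chord[:1]
print(chord, first_inversion)          # ['Db', 'F', 'Ab'] ['F', 'Ab', 'Db']
```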
The Neapolitan chord’s distinctive sound makes it a powerful tool for composers looking to deepen harmonic tension or create striking moments of emotional impact.
Nearfield Monitors
Nearfield monitors are specially designed to be placed close to the listener, typically within 1 metre/40 inches.
The intention is that sound from the monitors will reach your ears first, so that what you hear is uncolored by interference, distortion, and any other inaccuracies caused by early room reflections.
Nearfield monitors are ideal for both tracking and mixing. Their sound tends to be accurate and clear, with a fairly flat frequency response, but because they are almost always small, some models can have a limited bass response.
Needle Drop
This is a digital recording created from a source analog recording. Specifically, where the source analog recording is a vinyl or acetate disc.
The resulting digital recording is likely to be an unlicensed copy.
Neighbouring Rights
Generally speaking, Neighbouring Rights apply to recordings, not to the published content. Neighbouring Rights do not apply to a song, but they could apply to a recording of a song.
Performers and broadcasters have Neighbouring Rights to recordings of their work.
As an example: When a performer performs on the recording of a song, the performer will have Neighbouring Rights to that recording. Unless they wrote the song, or they otherwise own the copyright on the song, they will have no rights relating to the song itself.
Neo Soul
Neo-Soul is a genre of music that emerged in the late 1990s, blending elements of traditional soul with modern influences from R&B, hip-hop, jazz, funk, and alternative music. Known for its smooth, sophisticated sound and introspective lyrics, neo-soul often pushes creative boundaries while staying rooted in the emotional depth of classic soul music.
Characteristics
- Rich Vocal Styles: Features expressive, often improvisational vocal delivery, drawing heavily from gospel and soul traditions.
- Organic Instrumentation: Incorporates live instruments, such as bass, guitar, keys, and horns, alongside electronic elements.
- Fusion of Genres: Combines the grooves of R&B, the rhythms of hip-hop, the complexity of jazz, and the raw emotion of soul.
- Introspective Lyrics: Themes often explore love, self-awareness, identity, and social issues, with a poetic and personal touch.
- Experimental Sounds: Artists frequently experiment with unconventional song structures, lush harmonies, and layered textures.
Historical Context
The term “neo-soul” was popularized by record label executive Kedar Massenburg in the late 1990s to describe artists like Erykah Badu and D’Angelo, who were blending contemporary and classic influences. The genre emerged as a response to the commercialization of mainstream R&B, aiming to reconnect with the authenticity and artistry of earlier soul music.
Notable Artists
- Erykah Badu: Often called the “Queen of Neo-Soul,” her debut album Baduizm (1997) defined the genre’s sound.
- D’Angelo: His album Brown Sugar (1995) and later Voodoo (2000) are landmarks of the neo-soul movement.
- Lauryn Hill: Blended neo-soul with hip-hop on her groundbreaking album The Miseducation of Lauryn Hill (1998).
- Jill Scott, Maxwell, India.Arie, and Raphael Saadiq are other prominent figures in the genre.
Legacy and Influence
Neo-soul continues to evolve, influencing modern R&B and alternative music. Artists like Anderson .Paak, H.E.R., and SZA draw from its blend of classic and contemporary styles, keeping the genre relevant and innovative.
Neo-soul stands as a genre that celebrates musical depth and emotional authenticity, bridging the past and present of soul music.
see Retroactive to Record One.
Net Publisher’s Share (NPS)
The gross income collected by, or credited to, a music publisher minus any royalties to be paid to writers, performers and others who are due a share of those royalties, such as co-publishers.
When buying a publishing Catalog (musical compositions), the NPS is used to help calculate the value of the Catalog.
Net Receipts
Net Receipts are usually defined within recording agreements as the gross income minus a number of deductions.
Gross Income is determined by the receipts relating to the exploitation of the Masters.
Deductions include the Label’s expenses:
- Collection costs
- Taxes, shipping, and insurance
- Fund contributions:
  - AFM Phonograph Record Manufacturers Special Payment Fund
  - AFM Music Performance Trust Fund
  - AFTRA Pension and Welfare Fund
  - Any similar fund created in relation to a collective bargaining agreement
- AFTRA Contingent Scale Payments
- Mechanical Royalties or other payments to Copyright owners, etc.
Net Sales
Net Sales are defined as relating to Records sold by the Label/Distributor to 3rd parties where payment has been made, or a credit line has been established, minus any Returns and any Reserves set against Returns.
New Technology Records
A catch-all term often found in recording agreements. With reference to Records, it is intended to cover all current and future forms of technology for the storage of music that do not, at the time, make up a substantial percentage of Record sales.
For example, DAT, DCC, and DVD Audio Records.
Record Labels nearly always pay a lower Artist Royalty in relation to New Technology Records. The Artist Royalty then remains low until the format becomes a significant percentage of sales.
For a long time, Digital Transmission sales were treated as New Technology Records. Most Record Labels have stopped treating Digital Transmission sales in this way.
New Wave
New Wave is a genre of music that emerged in the late 1970s and early 1980s, blending the raw energy of punk rock with experimental sounds, electronic elements, and pop sensibilities. It is characterized by its innovative approach to songwriting and production, often featuring synthesizers, angular guitar riffs, catchy melodies, and a playful or ironic aesthetic.
Key Characteristics
- Synthesizers and Electronics: New Wave embraced the use of synthesizers and drum machines, creating a futuristic and polished sound.
- Catchy Melodies: Songs often featured hooks and choruses that leaned toward pop accessibility.
- Quirky or Thematic Lyrics: Lyrics ranged from introspective to satirical, often reflecting themes of alienation, technology, and modern life.
- Fashion and Visual Style: New Wave artists often paired their music with distinctive fashion and imagery, including bold colors, asymmetrical designs, and avant-garde styles.
Historical Context
New Wave arose as punk rock began to lose its mainstream appeal, offering a more polished and commercially viable alternative. It drew from a variety of influences, including punk, glam rock, and disco, while incorporating new technology and production techniques.
Notable Artists
- Talking Heads: Known for their art-rock approach and innovative sound.
- Blondie: A band that blended punk roots with pop and disco elements.
- Depeche Mode: One of the pioneers of electronic-driven New Wave.
- The Police: Mixing reggae, punk, and pop, they became one of the genre’s most successful acts.
- Duran Duran: Synonymous with the visual and stylish side of New Wave, integrating music videos as a key promotional tool.
Influence
New Wave played a major role in shaping the sound and culture of the 1980s, paving the way for synthpop, indie rock, and electronic music. Its blend of innovation and accessibility continues to inspire modern music and fashion, making it a cornerstone of contemporary pop culture history.
Nocturne
A Nocturne is a type of musical composition inspired by or evocative of the night. Known for its lyrical, serene, and often introspective qualities, the nocturne typically features a flowing melody over a rich harmonic accompaniment, creating a dreamy, atmospheric effect.
Origin
- The term “nocturne” comes from the Latin word nocturnus, meaning “of the night.”
- It was popularized in the early 19th century by Irish composer John Field, who is credited with composing some of the first nocturnes.
- The form gained prominence through the works of Frédéric Chopin, whose nocturnes remain some of the most celebrated pieces in the classical piano repertoire.
Characteristics
- Lyrical Melody: The melody is often song-like and expressive, resembling a lullaby or a reflective inner monologue.
- Arpeggiated Accompaniment: A gentle, rolling bass line or harmonic texture supports the melody, adding depth and atmosphere.
- Free Form: While not strictly tied to specific structural rules, most nocturnes have a ternary (ABA) form, with contrasting middle sections.
- Emotional Tone: Ranges from tranquil and contemplative to melancholic or even mysterious.
Notable Examples
- Chopin: His Nocturne in E-flat Major, Op. 9, No. 2 is one of the most iconic examples, showcasing lyrical beauty and technical grace.
- Field: His Nocturnes laid the groundwork for the genre’s development.
- Debussy: Although less traditional, works like his orchestral Nocturnes explore the concept in impressionistic ways.
Modern Usage
While the nocturne originated in classical music, the term and its stylistic elements have inspired modern genres. Ambient, jazz, and even film scores often borrow from the nocturne’s calm, reflective qualities.
The nocturne remains a timeless form, capturing the tranquility and mystery of the night through music.
Nodal Point
A Nodal Point is a position on a vibrating object, such as a string or air column, where no movement occurs due to destructive interference of waves. In physics and music, nodal points are crucial in understanding how sound is produced and shaped by resonance and harmonic patterns.
How It Works
When an object vibrates, it produces standing waves that consist of alternating regions of maximum displacement (antinodes) and no displacement (nodes). Nodal points are the stationary points where the waves cancel each other out.
For example:
- On a vibrating string, nodal points occur at fixed ends and at intervals along the string, dividing it into equal sections for higher harmonics.
- In wind instruments, nodal points form within the air column at specific locations, shaping the instrument’s tone.
Key Concepts
- Fundamental Frequency: The lowest frequency at which a string or column vibrates, with nodes at both ends.
- Harmonics/Overtones: Higher frequencies where additional nodal points form along the vibrating object. Each harmonic introduces more nodes, dividing the string or column into smaller vibrating sections.
- Placement: Nodal points are determined by the length, tension, and density of the vibrating object, as well as the mode of vibration.
Applications in Music
- String Instruments: Players can manipulate nodal points to produce harmonics by lightly touching the string at specific locations.
- Wind Instruments: The positions of nodes within the air column define the pitch and tone quality of notes.
- Acoustic Design: Understanding nodal points helps in designing resonant spaces like concert halls or speakers to optimize sound quality.
Visualizing Nodal Points
Imagine plucking a guitar string:
- The string vibrates most intensely at its midpoint (an antinode).
- Nodes form at the fixed ends and, for harmonics, at evenly spaced intervals along the string.
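The following Python sketch illustrates this for an ideal string: the nth harmonic divides the string into n equal sections with n + 1 nodes, and its frequency is n times the fundamental. The 65 cm scale length and 110 Hz fundamental are illustrative values, not taken from the text above.

```python
# A minimal sketch of node positions and harmonic frequencies on an ideal string.

def node_positions(length_m: float, harmonic: int) -> list[float]:
    """Node positions (measured from one fixed end) for the nth harmonic."""
    return [round(k * length_m / harmonic, 3) for k in range(harmonic + 1)]

def harmonic_frequency(fundamental_hz: float, harmonic: int) -> float:
    """For an ideal string, the nth harmonic is n times the fundamental."""
    return fundamental_hz * harmonic

for n in (1, 2, 3):
    print(f"harmonic {n}: {harmonic_frequency(110.0, n):.0f} Hz, "
          f"nodes at {node_positions(0.65, n)} m")
# harmonic 1: 110 Hz, nodes at [0.0, 0.65] m
# harmonic 2: 220 Hz, nodes at [0.0, 0.325, 0.65] m
# harmonic 3: 330 Hz, nodes at [0.0, 0.217, 0.433, 0.65] m
```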
Nodal points are fundamental to the science of sound, providing insight into how musical instruments create their characteristic tones.
Noise Floor
The Noise Floor refers to the level of background noise present in an audio system, environment, or recording when no intentional sound signal is being produced. It represents the baseline level of noise generated by equipment, circuitry, or ambient environmental sounds.
Sources of Noise Floor
- Electronic Noise: Inherent noise from audio equipment such as amplifiers, mixers, or digital converters.
- Environmental Noise: Sounds from the recording environment, like air conditioning, traffic, or electrical hums.
- System Design: Poorly designed or maintained audio systems can contribute to a higher noise floor.
Measurement
The noise floor is measured in decibels (dB) relative to full scale (dBFS in digital systems) or dBu/dBV in analog systems. A lower noise floor indicates a cleaner, quieter recording or system.
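As a simple illustration, the Python sketch below estimates a noise floor in dBFS by taking the RMS level of a “silent” stretch of samples. Real metering applies weighting and windowing on top of this idea, and the sample values here are illustrative.

```python
# A minimal sketch: estimate a noise floor in dBFS from a silent stretch of audio.
# Samples are floats normalised to the range -1.0..1.0.
import math

def noise_floor_dbfs(silent_samples: list[float]) -> float:
    rms = math.sqrt(sum(s * s for s in silent_samples) / len(silent_samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Low-level hiss at roughly 0.001 of full scale, i.e. around -60 dBFS.
hiss = [0.001, -0.001, 0.0012, -0.0008] * 256
print(round(noise_floor_dbfs(hiss), 1))
```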
Impact on Audio Quality
- A high noise floor can mask quieter sounds, reducing the dynamic range and clarity of a recording.
- It can result in unwanted hiss, hum, or static that detracts from the audio signal.
Managing Noise Floor
- Use High-Quality Equipment: Better-designed gear often generates less noise.
- Optimize Gain Staging: Properly balancing input and output levels minimizes unnecessary amplification of noise.
- Reduce Environmental Noise: Record in treated spaces and eliminate sources of interference where possible.
- Noise Reduction Tools: Use noise gates, filters, or software plugins to clean up recordings.
Applications
Understanding and managing the noise floor is crucial in:
- Recording Studios: Ensuring clean, professional audio captures.
- Live Sound: Minimizing background hum or interference during performances.
- Broadcasting and Streaming: Maintaining clarity and consistency for listeners.
The noise floor is a key consideration for any audio engineer or producer aiming for high-quality sound production.
Noise Gate
A Noise Gate is an audio processing tool used to control the volume of a sound signal, allowing it to pass through only when it reaches a certain threshold. This tool is primarily used to reduce or eliminate unwanted noise, such as background hum, hiss, or bleed from other instruments, during recording or live performances.
How It Works
The noise gate functions like a gate that “opens” when the input signal is loud enough to exceed a set level (the threshold) and “closes” when the signal falls below that level. When closed, it reduces or mutes the sound entirely, preventing unwanted noise from being heard.
Key Parameters
- Threshold: Determines the volume level at which the gate opens. Signals below this level are reduced or muted.
- Attack: The time it takes for the gate to fully open after the signal exceeds the threshold.
- Release: The time it takes for the gate to close after the signal falls below the threshold.
- Hold: The time the gate remains open after the signal drops below the threshold, preventing abrupt cutoffs.
- Range: Controls how much the volume is reduced when the gate is closed, from slight attenuation to complete silence.
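Here is a deliberately simplified Python sketch of the threshold and range parameters in action. It gates sample by sample and omits attack, release, and hold smoothing, so it illustrates the concept rather than how a production gate is implemented.

```python
# A minimal sketch of threshold-based gating. Samples are floats in -1.0..1.0;
# parameter values are illustrative.

def noise_gate(samples, threshold=0.05, range_db=-60.0):
    """Attenuate samples whose absolute level falls below the threshold."""
    closed_gain = 10 ** (range_db / 20)  # -60 dB is roughly a gain of 0.001
    return [s if abs(s) >= threshold else s * closed_gain for s in samples]

quiet_hiss_and_snare = [0.01, -0.02, 0.8, 0.6, 0.01, -0.015]
print(noise_gate(quiet_hiss_and_snare))
# the hiss samples are attenuated heavily, the loud hits pass through unchanged
```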
Applications
- Recording: Eliminating low-level noise from microphones, amplifiers, or environmental sounds.
- Live Sound: Managing stage bleed from drums, guitars, or other instruments into microphones.
- Mixing: Cleaning up tracks to focus on desired audio elements, such as isolating a snare drum from cymbal bleed.
- Guitar Effects: Reducing noise from high-gain amplifiers or pedalboards.
Limitations
A noise gate can unintentionally cut off quieter parts of a desired signal if the threshold is set too high, so careful adjustment is needed to maintain natural sound dynamics.
Noise gates are invaluable for creating clean, professional recordings and performances when background noise is a concern.
See our article Noise Gates and Expanders for a more detailed description.
Noise Shaping
Noise Shaping is a digital audio processing technique used to control and redistribute quantization noise, typically during processes like bit-depth reduction or digital-to-analog conversion. By shifting noise to less perceptible frequency ranges, noise shaping enhances the perceived quality of audio, especially at lower bit depths.
How It Works
When audio is converted from a higher bit depth (e.g., 24-bit) to a lower one (e.g., 16-bit), quantization noise is introduced as a byproduct. Noise shaping works by analyzing this noise and redistributing it into frequency ranges where the human ear is less sensitive, such as higher frequencies. This makes the noise less noticeable while preserving the clarity and detail of the audio in the more critical frequency ranges.
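A minimal Python sketch of the idea, using first-order error feedback during bit-depth reduction: each sample’s quantization error is subtracted from the next sample, which pushes the error energy toward higher frequencies. Real converters combine this with dithering and higher-order filters; the step size shown assumes 16-bit output.

```python
# A minimal sketch of first-order error-feedback noise shaping during
# bit-depth reduction. Samples are floats in -1.0..1.0.

def requantize_with_noise_shaping(samples, step=1 / 2 ** 15):
    """Quantize to the given step size, feeding each sample's error into the next."""
    out, error = [], 0.0
    for s in samples:
        shaped = s - error                       # feed back the previous error
        quantized = round(shaped / step) * step  # coarse (16-bit) quantization
        error = quantized - shaped               # error pushed into the next sample
        out.append(quantized)
    return out

print(requantize_with_noise_shaping([0.100003, 0.100007, 0.100001]))
```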
Key Applications
- Bit-Depth Reduction: Commonly used when converting high-resolution audio (e.g., 24-bit recordings) to standard CD-quality (16-bit) without sacrificing perceived audio quality.
- Digital Audio Encoding: Improves the quality of compressed audio formats like MP3 or AAC by reducing audible artifacts.
- Dithering: Often paired with dithering techniques to minimize quantization distortion and improve the smoothness of audio.
- Playback Devices: Implemented in DACs (digital-to-analog converters) for higher fidelity during playback.
Benefits
- Improved Audio Perception: Reduces the audibility of noise in critical listening ranges.
- Efficient Compression: Helps maintain audio quality in data-reduced formats.
- Enhanced Mastering: Allows for professional-grade results even with format constraints like lower bit depths.
Challenges
- Frequency Trade-offs: Shifting noise into higher frequencies can be problematic for systems or listeners sensitive to those ranges.
- Complexity: Requires precise algorithms to achieve optimal results without introducing unwanted artifacts.
Why It Matters
Noise shaping is a critical tool in modern audio production, ensuring that audio retains its fidelity and nuance, even when constrained by format limitations. It is especially vital in mastering, audio streaming, and the production of consumer playback devices.
Nominal Pitch
Nominal Pitch refers to the designated pitch name of a musical note, instrument, or part, regardless of variations in tuning or transposition. It represents the theoretical or standard pitch that an instrument or notation is associated with, as opposed to the actual pitch produced.
Key Aspects
- Standard Assignment: Nominal pitch is the pitch ascribed to a note or instrument in a musical context, such as “C” for a note on sheet music or “B♭” for a trumpet.
- Actual Pitch vs. Nominal Pitch: The actual pitch is the sound frequency produced (e.g., A = 440 Hz), which may differ due to tuning adjustments, transposition, or the physical properties of the instrument.
- Transposing Instruments: Instruments like the clarinet or French horn are notated using nominal pitch. For example:
- A B♭ clarinet’s written “C” produces the actual pitch “B♭.”
- The nominal pitch allows for easier reading and uniformity across instruments.
Applications
- Notation for Transposing Instruments: Helps musicians read music consistently, regardless of the actual pitch produced.
- Instrument Designation: Categorizes instruments by their general pitch range or transposition. For example, “alto saxophone” nominally aligns with E♭ tuning.
- Tuning Standards: Used to define and compare pitch across different tuning systems and instruments.
Example
In an orchestra, a B♭ trumpet playing a written “C” will sound a B♭ in actual pitch. The “C” is its nominal pitch, which aligns with the notation system rather than the produced sound.
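As a simple illustration, this Python sketch maps a written (nominal) pitch to the concert pitch a transposing instrument sounds, ignoring octaves. The note list and function name are illustrative.

```python
# A minimal sketch: written (nominal) pitch to sounding (concert) pitch.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def concert_pitch(written: str, transposition_semitones: int) -> str:
    return NOTES[(NOTES.index(written) + transposition_semitones) % 12]

print(concert_pitch("C", -2))  # a Bb trumpet's written C sounds as Bb
print(concert_pitch("G", -2))  # its written G sounds as F
```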
Why It Matters
Nominal pitch simplifies the process of composing, arranging, and performing music, particularly in ensembles with transposing instruments. It provides a standardized reference point, ensuring clear communication between composers and performers.
Non-Destructive Editing
Non-Destructive Editing is a method of editing audio or video in which the original file remains unchanged. Instead of permanently altering the source material, edits are stored as instructions or metadata, allowing users to make adjustments, undo changes, or experiment freely without losing the original content.
How It Works
In non-destructive editing, software records changes like cuts, fades, effects, or pitch adjustments as separate data. These changes are applied in real-time during playback but do not modify the actual file. For example:
- Trimming a Clip: The edit defines a start and end point, but the underlying audio remains intact.
- Applying Effects: Reverb or EQ settings are layered over the original track without altering the raw file.
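Here is a toy Python sketch of the underlying idea: edits are stored as a list of instructions and only applied when rendering, so the source data is never modified. The edit format is invented for illustration and does not correspond to any particular DAW’s project format.

```python
# A minimal sketch of non-destructive editing: the source samples never change;
# edits are stored as instructions and applied only when rendering.

source = [0.2, 0.5, -0.3, 0.8, 0.1, -0.4]        # original samples, never changed
edits = [
    {"type": "trim", "start": 1, "end": 5},       # keep samples 1..4
    {"type": "gain", "factor": 0.5},              # halve the level
]

def render(samples, edit_list):
    out = list(samples)                           # work on a copy
    for edit in edit_list:
        if edit["type"] == "trim":
            out = out[edit["start"]:edit["end"]]
        elif edit["type"] == "gain":
            out = [s * edit["factor"] for s in out]
    return out

print(render(source, edits))   # [0.25, -0.15, 0.4, 0.05]
print(source)                  # the original is untouched
```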
Key Benefits
- Reversibility: Changes can be undone or modified without degrading the original material.
- Flexibility: Users can experiment with edits and effects without committing to them.
- Preservation: The original recording is always intact, maintaining a safeguard against errors.
- Efficiency: Workflows remain streamlined, as edits are applied in real-time without the need for duplicating files.
Applications
- Audio Editing: Used in digital audio workstations (DAWs) like Pro Tools, Logic Pro, and Ableton Live for mixing, mastering, and sound design.
- Video Editing: Employed in software like Adobe Premiere Pro or Final Cut Pro for cutting, color grading, and visual effects.
- Photography: Similar principles apply in tools like Adobe Photoshop or Lightroom, where non-destructive edits preserve the original image.
Non-Destructive vs. Destructive Editing
- Non-Destructive: Keeps the original file unchanged; ideal for flexibility and experimentation.
- Destructive: Permanently alters the file; often used for finalizing edits or saving disk space.
Why It Matters
Non-destructive editing allows creators to work with confidence, knowing they can explore creative possibilities without risking irreversible damage to their source material. It’s an essential feature for modern production workflows in music, video, and other creative industries.
Non-Diatonic
Non-Diatonic refers to any note, chord, or scale that does not belong to the standard seven-note diatonic scale of a given key. In other words, non-diatonic elements fall outside the key signature and include pitches that are not naturally part of the major or minor scale being used.
Key Characteristics
- Altered Notes: Notes that are raised or lowered compared to the diatonic scale (e.g., F♯ in the key of C major).
- Chromaticism: The use of notes that lie outside the diatonic framework, often to create tension or add color.
- Borrowed Chords: Chords borrowed from a parallel key or mode, such as a major chord in a minor key.
- Modulation: A shift to a different key often introduces non-diatonic notes and chords.
Examples
- Non-Diatonic Notes: A D♯ note played in a melody in the key of C major is non-diatonic.
- Non-Diatonic Chords: In the key of C major, a D major chord (D – F♯ – A) is non-diatonic because it contains F♯, which is not in the key.
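A small Python sketch of the underlying test: a note is diatonic to a major key if it belongs to that key’s seven-note scale, and non-diatonic otherwise. The note spelling and function name are illustrative.

```python
# A minimal sketch: test whether a note is diatonic to a given major key.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
SHARP_TO_FLAT = {"C#": "Db", "D#": "Eb", "F#": "Gb", "G#": "Ab", "A#": "Bb"}
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]

def is_diatonic(note: str, key: str) -> bool:
    note = SHARP_TO_FLAT.get(note, note)
    tonic = NOTES.index(key)
    scale = {NOTES[(tonic + step) % 12] for step in MAJOR_STEPS}
    return note in scale

print(is_diatonic("D#", "C"))  # False - non-diatonic in C major
print(is_diatonic("F#", "C"))  # False - which is why D major (D-F#-A) is non-diatonic
print(is_diatonic("E", "C"))   # True
```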
Applications
- Tension and Resolution: Non-diatonic notes or chords often create tension that resolves back to diatonic elements, adding drama and movement to music.
- Genre-Specific Uses:
- Jazz: Extensive use of non-diatonic chords and scales, such as altered dominants or chromatic passing tones.
- Classical: Modulations and chromatic passages frequently include non-diatonic elements.
- Pop and Rock: Non-diatonic chords are used for emotional or unexpected shifts, like the use of a ♭VII chord in a major key.
Related Terms
- Diatonic: Refers to notes and chords within the key signature.
- Chromatic: Specifically pertains to the use of all 12 notes of the chromatic scale.
- Modulation: A key change that introduces non-diatonic elements.
Non-diatonic elements are powerful tools for creating variety, complexity, and emotional depth in music, often taking a composition beyond the boundaries of its key.
Non-Linear Editing
Non-Linear Editing (NLE) is a method of editing audio, video, or other media where you can access and rearrange any part of the material freely, without needing to follow a sequential order. This modern editing approach is widely used in music production, film, and broadcasting due to its flexibility and efficiency.
How It Works
Non-linear editing involves working with digital files in a software environment that allows for:
- Instant Access: Editors can jump to any part of the material, edit, or rearrange segments without affecting the original file.
- Layering: Multiple tracks of audio, video, or effects can be layered and manipulated independently.
- Non-Destructive Workflow: Changes are stored as metadata, leaving the original files unaltered, ensuring complete flexibility and reversibility.
For example, in a digital audio workstation (DAW), you can move a drum beat from the middle of a song to the beginning without altering the original recording.
Key Benefits
- Flexibility: Allows for rapid experimentation with different arrangements or effects.
- Efficiency: Edits are immediate and do not require re-recording or sequential splicing.
- Integration: Seamlessly combines audio, video, and other elements in a unified timeline.
- Undo/Redo Capability: Enables quick correction of mistakes or changes in creative direction.
Applications
- Audio Production: In DAWs like Pro Tools, Logic Pro, or Ableton Live, where tracks can be cut, looped, and rearranged freely.
- Video Editing: Tools like Adobe Premiere Pro and Final Cut Pro offer multi-track timelines for precise editing.
- Film Scoring: Composers can sync music and sound effects to video frames, adjusting timing easily.
Non-Linear vs. Linear Editing
- Non-Linear: Jump between sections freely; ideal for modern digital workflows.
- Linear: Sequential editing where changes must be made in order, as with traditional tape-based systems.
Why It Matters
Non-linear editing revolutionized media production, enabling artists, producers, and editors to work creatively and efficiently. It allows for detailed refinement and easy collaboration, making it a cornerstone of modern music and video production workflows.
Non-Recoupable
Elements of a record contract that cannot be recovered from the artist by the Record Label.
Non-Registered Parameter Numbers (NRPN)
These are MIDI Controller Numbers that are not predefined by the MIDI specification; manufacturers can assign them to device-specific parameters, in contrast to Registered Parameter Numbers (RPNs), which have fixed meanings.
Nonet
A Nonet is a musical ensemble consisting of nine performers or a composition written specifically for nine instruments or voices. The term is derived from the Italian word nonetto, meaning “a group of nine.” Nonets are less common than smaller ensembles like trios or quartets but offer unique possibilities for blending textures and timbres.
Structure and Instrumentation
The instrumentation of a nonet can vary widely, depending on the genre and composer’s intent. Common configurations include:
- Classical Music: Often combines strings, winds, and brass. For example, Louis Spohr’s Nonet in F Major, Op. 31 features violin, viola, cello, double bass, flute, oboe, clarinet, horn, and bassoon.
- Jazz and Big Band: May include a mix of rhythm section instruments (piano, bass, drums) with horns (trumpet, trombone, saxophone) and others.
- Contemporary Music: Can feature unconventional or electronic instruments for experimental soundscapes.
Key Characteristics
- Balance: The challenge and artistry of a nonet lie in balancing nine distinct parts, ensuring clarity and cohesion.
- Textural Richness: Offers a wider palette of sounds and harmonies than smaller ensembles, without the complexity of a full orchestra.
- Flexibility: The diversity of possible combinations makes nonets adaptable across genres.
Notable Examples
- Classical: Spohr’s Nonet in F Major, Op. 31, a cornerstone of the repertoire.
- Jazz: Miles Davis’s Birth of the Cool sessions included a nine-piece ensemble, blending traditional and non-traditional jazz instruments.
- Modern: Contemporary composers often use nonets for experimental and chamber works.
Uses
Nonets are performed in:
- Chamber Music: Classical settings emphasizing the intimate, detailed interplay between musicians.
- Jazz and Pop Ensembles: For rich arrangements that maintain a manageable group size.
A nonet provides composers and performers with opportunities for unique musical expression, balancing the intimacy of small ensembles with the depth of larger groups.
Normal Retail Channels (NRC)
This relates to the Net Sales of Records sold as Top-Line Records through both real-world Brick and Mortar Stores and Digital Downloads.
Physical Record sales and Top-Line Records sold as Digital Downloads are assigned the full Artist Royalty rate in most Record Contracts.
The term USNRC Net Sales is a common way to refer to Net Sales of Records as Top-Line Records through Normal Retail Channels in the United States, in Recording Contracts.
The highest rate of Artist Royalty that a Record Label will agree to in the United States is for USNRC Net Sales of albums.
Normalization
Normalization is an audio processing technique used to adjust the overall volume of a sound file to a target level. It ensures consistency across multiple audio tracks or recordings by increasing or decreasing the amplitude of the entire waveform without altering the dynamic range (the difference between the quietest and loudest parts).
Types of Normalization
- Peak Normalization: Adjusts the loudest point (peak) of the audio to a specified level. This is useful for ensuring the signal does not clip (exceed the maximum allowable level).
- Loudness Normalization: Adjusts the perceived loudness of a track to match a target value, often measured in LUFS (Loudness Units relative to Full Scale). This method is widely used in streaming platforms and broadcast to create consistent playback volumes.
How It Works
Normalization scans the entire audio file to determine its loudest point or average loudness. The software then applies a consistent gain (volume adjustment) to bring the audio up or down to the target level. For example:
- If the loudest peak is at -6 dB and the target is 0 dB, 6 dB of gain will be applied across the whole file.
- In loudness normalization, adjustments consider the human ear’s sensitivity to different frequencies, ensuring uniform perceived loudness across tracks.
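As a rough illustration, the Python sketch below performs peak normalization: it finds the loudest sample, computes a single gain value from the target level, and applies that gain to every sample. The sample values are illustrative; loudness normalization (LUFS-based) would require a perceptual measurement rather than a simple peak.

```python
# A minimal sketch of peak normalization. Samples are floats in -1.0..1.0 and the
# target peak is given in dBFS.

def peak_normalize(samples, target_dbfs=0.0):
    peak = max(abs(s) for s in samples)
    target_linear = 10 ** (target_dbfs / 20)   # 0 dBFS -> 1.0, -6 dBFS -> ~0.5
    gain = target_linear / peak
    return [s * gain for s in samples]

audio = [0.1, -0.25, 0.5, -0.4]                # peak at 0.5, i.e. about -6 dBFS
print(peak_normalize(audio))                   # peak becomes 1.0: about +6 dB of gain
```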
Applications
- Audio Mixing: Balancing levels between tracks before mastering.
- Broadcast and Streaming: Ensuring consistent loudness across a playlist or platform.
- Podcasting and Audiobooks: Matching volume levels across episodes or chapters for a professional listening experience.
- Live Performance: Pre-normalizing backing tracks to prevent sudden volume jumps.
Limitations
While normalization helps with volume consistency, it does not address dynamic range issues. For tracks with wide volume variations, compression might be needed in addition to normalization.
Normalization is a key tool for achieving polished, professional audio in recording, mixing, and playback contexts.
Notation
Notation is the system used to visually represent music so that it can be read, interpreted, and performed by musicians. It serves as a universal language that translates the abstract sounds of music into written symbols, allowing composers to communicate their ideas across time and space.
Types of Notation
- Traditional Staff Notation: The most common form, using a five-line staff with symbols representing pitch, duration, dynamics, and articulation. It is widely used in classical, jazz, and contemporary music.
- Tablature (Tab): Popular in guitar and stringed instrument music, this system uses numbers and lines to indicate finger placement rather than pitch.
- Graphic Notation: Found in experimental music, it uses visual symbols and shapes outside of standard musical symbols to represent sound.
- Chord Charts: Often used in pop, rock, and jazz, these notations show chords and lyrics without detailed rhythms or melodies.
Key Components of Traditional Notation
- Pitch: Represented by the placement of notes on the staff, indicating how high or low a sound should be.
- Rhythm: Shown through note shapes and stems, indicating the duration of each sound.
- Dynamics: Symbols like “p” (piano) and “f” (forte) to convey volume levels.
- Articulation: Marks such as staccato dots or legato slurs to define how notes should be played.
Notation allows musicians to learn, share, and preserve music. Whether sight-reading a symphony or strumming a chord chart, notation bridges the gap between composer and performer. In modern music, digital notation software has expanded its capabilities, making it easier than ever to create and distribute music.
Making Suggestions
All suggestions are very welcome. When you suggest a term, please also suggest a description for that term. If you would like to contribute regularly, please follow the instructions on becoming a contributor set out below. You are also welcome to make suggestions in our music community forums.
Become A Contributor To The Songstuff Music Library
Contributors Wanted
Are you a qualified entertainments lawyer? Or perhaps you have in-depth knowledge of tour management? Are you an experienced band manager, or perhaps a booking agent? You could be a studio engineer or a music producer. Would you be interested in helping musicians to build their skills and understanding by contributing definitions to the Songstuff Music Glossary? We rely on musicians and people working within the music industry being willing to contribute to our knowledge base.
As well as contributions to our music glossary, we feature contributions in our music library, our site blogs, and our social media portals.
In particular, we add video contributions to the Songstuff Channel on YouTube.
Please contact us so we can explore the possibility of you joining our contributors.
Songstuff Media Player
If you would like to listen to some awesome indie music while you browse, just open our media player. It opens in another window (or tab) so your playlist can play uninterrupted as you browse.
Open the Songstuff Media Player.
Playlists are curated by SSUK for the Independent Music Stage and Songstuff.