Ari Brown is enrolled in the dual-degree program between Tufts University and the New England Conservatory of Music, where he studies computer science and music composition. Combining these passions, Ari co-founded Cherrystems Music in 2016, which aims to make music production as easy as choosing sounds and a style, powered by artificial intelligence. Ari also recently contributed software to the Mars 2020 rover mission at NASA's Jet Propulsion Laboratory.
The Semantic-Hypnotic Model of Music Perception
Music theorists and composers commonly analyze and synthesize music according to rules established in the Classical period, such as harmonic progressions and tonal language. However, popular music today (sometimes unknowingly) employs rules established in the sixteenth century, long before the Classical period, which are based on how the audience perceives music rather than on how it is constructed. The rules of sixteenth-century counterpoint, in combination with software, may help bridge the gap between the music creation process and novice music enthusiasts. I propose the Semantic-Hypnotic model of music perception, which lays the groundwork for enhanced human-computer interaction in the context of music creation. I show an abstract implementation of the model in use, and discuss future implications that could apply more broadly to hit song science, automated mixing, and research in music cognition.