By Jennifer Paccione
AI is here, people…and here to stay. Whether we like it or not, and whether we've tried ChatGPT or not, the buzz is flying about AI and its benefits, and its threats, to creativity in fields across the board, including the music industry. Here are some pros and cons of this new and powerful tool on the block that is making its way into everything we see…and hear.
Pros:
Enhanced Creativity: AI algorithms have the ability to generate new and unique musical compositions by analyzing vast amounts of existing music. This can help musicians and producers discover fresh melodies, harmonies, and rhythms that they may not have thought of on their own, leading to increased creativity and innovation.
Music Production and Efficiency: AI can automate various aspects of music production, such as mixing, mastering, and sound engineering. This can save time and effort for artists and producers, allowing them to focus more on the creative process. AI-powered tools can also enhance the quality and consistency of music production, leading to a more polished final product.
Improved Music Analysis: AI algorithms can analyze large datasets of music to extract patterns and insights. This can be beneficial for musicians, musicologists, researchers, and industry professionals, as it can provide valuable information about trends, genres, and audience preferences. Such insights can help musicians and record labels make informed decisions regarding their creative direction and marketing strategies.
Cons:
Loss of Authenticity and Originality: While AI-generated music can be impressive, there is a concern that it may lack the emotional depth and authenticity associated with human creativity. Critics argue that AI compositions may lack the personal experiences, cultural context, and genuine emotions that artists bring to their work, leading to a loss of originality and uniqueness in the music industry.
Impact on Employment: The integration of AI into music production and composition can potentially lead to job displacement for musicians, sound engineers, and other industry professionals. As AI tools become more sophisticated, there is a risk that human involvement in certain aspects of music creation could diminish, potentially affecting livelihoods and career opportunities.
Bias and Lack of Diversity: AI algorithms are trained on existing datasets, which can introduce biases and perpetuate existing inequalities in the music industry. If the training data predominantly represents certain genres, cultures, or demographics, AI-generated music may be skewed towards those characteristics, further marginalizing underrepresented artists and genres. Ensuring diversity and fairness in AI-generated music remains a challenge.
Ethical Concerns: The use of AI in music raises ethical questions, particularly regarding copyright and intellectual property. Determining ownership and rights over AI-generated compositions can be complex, as they are often based on existing works. Additionally, there are concerns about the potential misuse of AI for creating deepfake music or imitating the styles of famous artists without their consent.
As a musician and composer myself, I will grant that AI has powers of speed, pattern-finding, and algorithmic recipe-cooking, but I have to say that it will come nowhere close to replacing the dynamic ebbs and flows of the human touch. Whether it be the striking of keys, the bowing or plucking of strings, the pounding of snares, toms, and bass drums, or even the breaths singers take between vocal phrases, I think we enjoy hearing the sounds of humans on their instruments, however subtle and unobtrusive they may be. Not to mention that when I write and compose, I am drawing on my own unique experiences, past and present emotions, current life situations, perspectives, opinions, loves, hates, indifferences, dreams, passions, regrets…none of which AI possesses, or can pretend to possess and translate into music as eloquently, and as fiercely, as we humans do.
We don’t always realize it, but those “human” nuances are what make the audible mix so tantalizing, especially when music is heard live in an optimal sound-mixing and acoustic environment. The human connection is at the core of hearing live music, with the symbiotic relationship between audience and musician(s). Even the human energy between musicians live on stage has powerful, unexplainable effects on the energy and emotions of the audience. I know from first-hand experience that musicians feed off of each other’s energy and flow together in a way that helps the creative process and the live experience unfold exponentially. Many times while composing for my band I would get “composer’s block,” so I would present my musical ideas on the keyboard to my fellow bandmates. Many jam sessions would find us all heading in different directions, then somehow magically merging, like meandering rivers finding one ocean. We would bounce verbal and musical ideas off each other, whether talking or playing through them. Once we locked on to something that we all agreed was working, we would ride that flow, swimming in each other’s rhythm and voice to create some amazing work. That’s also part of the fun of composing and performing with other musicians. I don’t know if AI would ever be able to get inside my head the way my bassist or drummer can. And for me, there’s nothing in the world that compares to the electricity that runs through my veins, and the power it brings me, when I’m up there on stage playing music with my fellow kindred spirits and feeling that energy from an audience.
All that being said, I am certainly curious and eager to learn more about how AI can assist in future work and possibly save me some time, as I learn how it can be my “musical assistant” rather than my “co-composer.” I believe we should understand that AI in music-making needs to serve us, the creators, as a tool, rather than be relied upon as a replacement for music-making.
Here are just a few examples of websites and platforms that currently utilize AI in music:
Jukedeck (www.jukedeck.com): Jukedeck is an AI-powered platform that allows users to generate original music tracks in different genres and styles. Users can customize parameters such as tempo, mood, and duration to create music tailored to their needs.
Amper Music (www.ampermusic.com): Amper Music is an AI-driven platform that enables users to create custom music tracks for various purposes, including film, video games, and commercials. It provides a library of pre-composed music elements that users can arrange and customize.
AIVA (www.aiva.ai): AIVA (Artificial Intelligence Virtual Artist) is an AI composer that creates original compositions in different styles and genres. AIVA has been used for film soundtracks, advertising, and other media projects.
Melodrive (www.melodrive.com): Melodrive is an AI-driven music-generation platform that creates adaptive and interactive music for video games. It uses AI algorithms to generate music in real time, responding to the game’s events and creating a dynamic and immersive experience.
Magenta by Google (magenta.tensorflow.org): Magenta is an open-source project by Google that explores the intersection of AI and music. It provides tools and resources for music generation, composition, and creativity using machine learning techniques.
I encourage you to give some (or all) of these a try, but take heed:
Always remember who created whom.
Rock on, humans, and let’s keep making some great, boundary-pushing music.