2020 has been a breakthrough year for artificial intelligence, specifically the branch of AI known as machine learning and neural networks, and it's already causing widespread disruption in almost every industry, including creative and artistic fields previously thought to be out of bounds for machines.
One of 2020's many AI breakthroughs has been OpenAI's Jukebox project – a neural net that generates music, including singing, with composition and orchestration in a variety of genres and artistic styles. The neural net is self-taught, meaning it starts out completely blank with no musical skills at all. It is then connected to a huge data source, like Spotify or the iTunes music store, and it gradually evolves its understanding of music as performed by the various artists. Once the network is sufficiently trained, it can produce a brand new composition in the style of a single artist or as a mashup of several artists. One minute of music takes roughly 9 hours to compose and perform.
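The core idea – learn statistical patterns from a corpus of music, then sample a new sequence "in the style of" that corpus – can be sketched in miniature. The toy below is not Jukebox's method (Jukebox uses large autoregressive transformers over compressed audio); it is just a first-order Markov chain over note names, with a hypothetical tiny corpus, to illustrate the train-then-generate loop:

```python
import random
from collections import defaultdict

def train(corpus):
    """Count note-to-note transitions across all training melodies."""
    transitions = defaultdict(list)
    for melody in corpus:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break  # dead end: the model never saw this note followed by another
        melody.append(rng.choice(options))
    return melody

# Hypothetical training data: two short melodies standing in for a music library.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "B", "G", "E"],
]
model = train(corpus)
print(generate(model, "C", 8))
```

The generated melody is new, yet every note-to-note step in it was observed somewhere in the training data – the same principle, scaled up enormously, behind learning an artist's style from their catalogue.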
The idea of music-generating machines can be traced back many centuries. We have all seen the automated player pianos in old Western movies, and there are some absolutely stunning creations to be found in museums, like a six-piece string orchestra with both organ and percussion. But for all of these mechanical marvels the music was pre-encoded, often as spikes on a rotating cylinder, indicating the timing, pitch, velocity, and instrument of each note to be played.
New technologies create new opportunities
At Glitch Studios we go where new technology leads us, exploring new opportunities for our clients' brands to creatively express themselves.