Artificial Intelligence Could Compose The Music Of The Future
Engineers at Google and Sony are exploring ways to enable computers to write music
Both Google and Sony have projects underway to advance how computers write music. The central difficulty with computers writing music, however, is figuring out what the math for inspiration looks like. Because creativity can’t really be quantified, it’s difficult to develop an algorithm for it.
Douglas Eck of Magenta, an offshoot of a Google Brain artificial intelligence project, says that many of the issues in figuring out how to enable computers to write music are human, rather than machine-based, and revolve around discovering the right questions to ask. Since writing music is not linear, creating a way for machines to write music cannot be linear either.
Most recently, Magenta unveiled a tool called NSynth, a neural network trained on nearly 300,000 instrument sounds. What is special about NSynth is that instead of converting sound waves to numbers, the usual means by which a computer reproduces a sound, it works with a set of learned ‘ideas’ about what instruments sound like.
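To make that idea concrete, here is a minimal, purely illustrative sketch (not NSynth’s actual code or API): each instrument sound is reduced to a small numeric ‘idea’, an embedding vector, and blending two embeddings describes a hybrid instrument that a decoder network would then turn back into audio. The vector size and the `blend` helper are assumptions for illustration.

```python
import numpy as np

# Illustrative stand-ins: a real model learns these embeddings from audio.
rng = np.random.default_rng(0)
flute_embedding = rng.normal(size=16)  # stand-in for an encoded flute note
organ_embedding = rng.normal(size=16)  # stand-in for an encoded organ note

def blend(a, b, t):
    """Linearly interpolate between two timbre embeddings (t=0 gives a, t=1 gives b)."""
    return (1.0 - t) * a + t * b

# A 50/50 blend lands "between" the two instruments; in a system like NSynth,
# a decoder network would render this vector as the sound of a new instrument.
hybrid = blend(flute_embedding, organ_embedding, 0.5)
```

The point of working in this embedding space, rather than on raw waveforms, is that arithmetic on the vectors corresponds to meaningful changes in timbre.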
Flow Machines, a Sony-affiliated project, is another effort in this vein. Its AI ‘musicians’ draw on a rich base of musical references and can collaborate with human musicians and composers. The technology can make music on its own or with others: give it an idea and it fills in the blanks.
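The “give it an idea and it fills in the blanks” workflow can be imitated, very roughly, with a simple statistical model. The sketch below is an assumption-laden toy, not Flow Machines’ actual method: it learns note-to-note transitions from a short reference melody, then extends a seed note the user provides.

```python
import random
from collections import defaultdict

# Toy "reference base": a short melody the model learns transitions from.
reference = ["C", "D", "E", "C", "E", "F", "G", "E", "G", "A", "G", "F", "E", "C"]

# First-order Markov chain: for each note, remember what tends to follow it.
transitions = defaultdict(list)
for current, nxt in zip(reference, reference[1:]):
    transitions[current].append(nxt)

def continue_melody(seed, length, rng=None):
    """Extend a seed note with `length` notes drawn from the learned transitions."""
    rng = rng or random.Random(42)
    melody = [seed]
    for _ in range(length):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: fall back to the whole reference vocabulary
            choices = reference
        melody.append(rng.choice(choices))
    return melody

print(continue_melody("C", 8))
```

Real systems replace the single reference tune with large corpora and the Markov chain with far richer models, but the collaboration pattern is the same: the human supplies the idea, the machine supplies a plausible continuation.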
The advancement of AI music-making might, in time, be able to take on some of the more technical aspects of composing, leaving more time for musicians to explore those ineffable sources of inspiration that can lead to something new.
via PSFK http://www.psfk.com/
June 16, 2017 at 03:02PM