Tag Archives: Adversarial Autoencoders

MusAE paper accepted at ECAI2020

Great news!
My paper on MusAE, a generative model for music editing and generation, has been officially accepted at the 24th European Conference on Artificial Intelligence. I thank my co-authors Antonio Carta and Davide Bacciu for their work and support. I look forward to presenting this work next June in the evocative setting of Santiago de Compostela.

Here you can find a link to some additional material, including generated songs and interpolations. You can also find the full paper on arXiv.

MusAE – Audio samples available on YouTube

At the following link, you can find a few audio samples generated by MusAE, the machine learning model for MIDI music manipulation I am currently working on. The name is an acronym of Music Adversarial autoEncoder, and a tribute to the Muses, the ancient Greek goddesses of the arts and sciences. The link redirects you to a YouTube playlist containing both song reconstructions (MusAE is based on an autoencoder model) and interpolations between two different measures. There is also a medley between Michael Jackson’s “Billie Jean” and Pink Floyd’s “Brain Damage”, which combines the reconstruction and interpolation capabilities of the model.
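To give an idea of what interpolating between two measures means in practice: an autoencoder maps each measure to a latent vector, and intermediate measures can be obtained by decoding points along the line connecting the two latent codes. The sketch below shows only this generic latent-space interpolation step; the function name, shapes, and vectors are illustrative and do not reflect MusAE's actual architecture or API.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linearly interpolate between two latent codes.

    Returns an array of shape (steps, dim), starting at z_a
    (alpha = 0) and ending at z_b (alpha = 1). Each row would
    then be passed through the decoder to produce a measure.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_a + alphas * z_b

# Hypothetical usage: in a real pipeline, z_a and z_b would come
# from encoding two MIDI measures; here they are toy vectors.
z_a = np.array([0.0, 1.0])
z_b = np.array([1.0, 0.0])
path = interpolate_latents(z_a, z_b, steps=5)
```

Each intermediate vector, once decoded, yields a measure that gradually morphs from the first source measure into the second, which is what the medley in the playlist exploits.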

MusAE’s main goal is to assist new artists, even those with limited knowledge of music theory, in creating their masterpieces by automatically modifying relevant properties of musical pieces. This leaves artists free to concentrate on the creative aspects of music composition, without worrying too much about low-level technical details.
This work is still in its initial stages, but I think the results are already quite impressive!

If you’re interested in MusAE, more technical details will be coming soon.