HAQ

We love music, but we don't have a deep understanding of how to play it. In cloning ourselves as digital twins through machine learning, we wanted those twins to carry our musical dreams.
Using the tools provided by resemble.ai, we trained two separate voice models, one on recordings of "HA HA HA" and one on "COW COW COW". No matter what words or phrases are fed in, these models can only ever answer in "HA" or "COW". We generated audio by inputting words such as "note" and "click", then processed the results in Ableton Live to create the sound samples used in the composition.
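For illustration, here is a minimal sketch of how such a clip might be requested from a cloned voice over resemble.ai's REST API. The endpoint path, auth header format, and payload fields are assumptions recalled from their public v2 API, and the key, project, and voice identifiers are placeholders; check the current documentation before relying on any of it.

```python
# Hypothetical sketch of requesting synthesized audio from a cloned voice
# via resemble.ai's REST API. The endpoint, header format, and payload
# fields are assumptions; the credentials below are placeholders.
import requests

API_KEY = "YOUR_API_KEY"
PROJECT_UUID = "YOUR_PROJECT_UUID"
VOICE_UUID = "YOUR_VOICE_UUID"  # e.g. the "HA" model or the "COW" model

response = requests.post(
    f"https://app.resemble.ai/api/v2/projects/{PROJECT_UUID}/clips",
    headers={"Authorization": f"Token token={API_KEY}"},
    json={
        "title": "ha-sample",
        # Whatever text goes in, the model can only answer in HA / COW.
        "body": "note click",
        "voice_uuid": VOICE_UUID,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the response should point to the rendered audio clip
```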
The next step was arranging the music. We used Magenta to analyze Alan's music and generate a MIDI file, then split fragments out of it and continued the composition on top of them. After loading the samples as instruments, a composition from our digital twins was born.
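As an illustration of the fragment-splitting step, below is a minimal sketch using Magenta's note_seq library to cut a generated MIDI file into fixed-length pieces. The file names and the four-second fragment length are placeholders; our actual fragments were chosen by ear.

```python
# A sketch of splitting a Magenta-generated MIDI file into fragments
# using the note_seq library (pip install note-seq).
import note_seq

# Load the generated MIDI as a NoteSequence proto.
sequence = note_seq.midi_file_to_note_sequence("magenta_output.mid")

# Cut the piece into 4-second fragments (placeholder length).
fragment_seconds = 4.0
start, index = 0.0, 0
while start < sequence.total_time:
    end = min(start + fragment_seconds, sequence.total_time)
    fragment = note_seq.extract_subsequence(sequence, start, end)
    note_seq.sequence_proto_to_midi_file(fragment, f"fragment_{index:02d}.mid")
    start, index = end, index + 1
```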
The narration of the video was also provided by a voice model we trained jointly, with each of us contributing a third of the training data, so this model is likewise a fusion of our voices, the voices of our digital twins. The visual imagery of the music video comes from our photos combined with the 😀 and 🐮 emoji.