890 points by deepbeats 6 months ago | 30 comments
johnsmith 6 months ago next
This is really fascinating! I wonder what kind of architectures they used for the deep learning models.
neuralnetworks 6 months ago next
I'd guess they used some kind of recurrent neural network, or maybe even a transformer-based model. It's hard to say without more information.
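Purely speculating, but a minimal note-level LSTM baseline in Keras would look something like this (just the generic recipe, not their actual architecture; the vocab size and layer widths are made up):

    import tensorflow as tf

    # Hypothetical setup: treat music as a sequence of integer note
    # tokens (e.g. 128 MIDI pitches) and predict the next token.
    VOCAB_SIZE = 128
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 64),
        tf.keras.layers.LSTM(256, return_sequences=True),
        tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the next note
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

A transformer version would swap the LSTM for self-attention blocks, but the token-in, token-out framing stays the same.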
anotheruser 6 months ago prev next
Does anyone know if they made the code publicly available? It would be really cool to play around with the model.
musiclover 6 months ago next
Yes, I saw that they open-sourced the code along with the paper. Here's the link: [insert link]
newtopost 6 months ago prev next
I've been experimenting with deep learning for music composition as well, and I must say it's a really interesting field. I'm curious if anyone here has any experience with this?
deeplearner 6 months ago next
I've dabbled in the area a bit; it's quite fun! I found that adding external constraints to the model, such as melodic patterns, can improve the results significantly.
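For a concrete example, the cheapest constraint is to mask the model's output distribution at sampling time so it can only emit notes from a chosen scale (a sketch of the idea, nothing to do with the paper):

    import numpy as np

    C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # allowed pitch classes

    def constrained_sample(logits):
        # Sample a MIDI pitch, zeroing out anything outside the scale.
        rng = np.random.default_rng()
        probs = np.exp(logits - logits.max())  # unnormalized softmax (stable)
        mask = np.array([(p % 12) in C_MAJOR for p in range(len(logits))])
        probs = probs * mask
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

You can get fancier (penalize large leaps, reward motif repetition), but even this crude masking noticeably cleans up the output in my experience.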
anotherdeeplearner 6 months ago prev next
I've been working on a project that uses deep learning to analyze the structure of music pieces and generate new ones based on that. It's quite challenging but also very rewarding.
technicalquestion 6 months ago prev next
How did they handle note onsets and offsets with the deep learning model? That's something I've been struggling with myself.
helpfulperson 6 months ago next
One approach I've seen is to use a separate model to predict the onsets and another one for the offsets. That way, you can train them independently and improve the overall accuracy.
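In Keras you can also get most of the same benefit from one shared trunk with two output heads, trained jointly but with separate targets (a sketch; the layer sizes are made up, and I'm assuming piano-roll-style labels):

    import tensorflow as tf

    # Input: mel-spectrogram frames; outputs: per-frame, per-pitch
    # probabilities of an onset and of an offset.
    N_MELS, N_PITCHES = 229, 88
    frames = tf.keras.Input(shape=(None, N_MELS))
    trunk = tf.keras.layers.LSTM(128, return_sequences=True)(frames)
    onsets = tf.keras.layers.Dense(N_PITCHES, activation="sigmoid",
                                   name="onsets")(trunk)
    offsets = tf.keras.layers.Dense(N_PITCHES, activation="sigmoid",
                                    name="offsets")(trunk)
    model = tf.keras.Model(frames, [onsets, offsets])
    model.compile(optimizer="adam", loss="binary_crossentropy")

Two fully separate models like the parent describes are easier to tune independently; the shared trunk is cheaper to train.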
opinion 6 months ago prev next
I think this kind of innovation is what's going to drive the future of music creation. Imagine what kind of new genres and sounds we'll see in the next few years!
question 6 months ago prev next
How do you think this technology will impact musicians and the music industry as a whole?
mediacritic 6 months ago next
I think it will be a double-edged sword. On one hand, it opens up new possibilities for self-expression and creativity. On the other hand, it might lead to a loss of authenticity and individuality in music.
algorithmdesign 6 months ago prev next
One thing that I found interesting in the paper is the way they used reinforcement learning to optimize the generated music. It's a bit counterintuitive, but it seems to work quite well.
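For anyone unfamiliar with the pattern: the generic REINFORCE version is to sample a sequence from the model, score it with some reward, and scale the log-likelihood gradient by that score. A rough sketch (I don't know their actual setup; the reward here is any scalar you like, e.g. a music-theory heuristic):

    import tensorflow as tf

    def reinforce_step(model, optimizer, tokens, reward):
        # tokens: a sampled sequence, shape [1, T]; reward: scalar score.
        with tf.GradientTape() as tape:
            logits = model(tokens[:, :-1])  # predict each next token
            nll = tf.keras.losses.sparse_categorical_crossentropy(
                tokens[:, 1:], logits, from_logits=True)
            loss = reward * tf.reduce_sum(nll)  # high reward -> lower NLL
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

In practice you'd subtract a baseline from the reward to cut the variance, but that's the whole trick.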
codebug 6 months ago next
I looked at the code and I must say, it's quite well written. I also like that they've open-sourced it under the MIT license; it's great for the community.
anjee 6 months ago prev next
I would love to see a collaboration between AI and human musicians to create a completely new sound that's never been heard before.
dj 6 months ago prev next
I've been experimenting with generative models to create music for my sets and it's been amazing! It's almost like having a collaborator that's always on call.
puzzled 6 months ago prev next
I'm having a hard time understanding how the deep learning model can capture the emotions and nuances of human music. Can someone enlighten me?
musicexpert 6 months ago next
There's a lot of work being done in the field of Music Information Retrieval (MIR) to extract features such as emotion and genre from music. The deep learning model can then use these features to generate new music with similar characteristics.
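If you want to play with this yourself, librosa makes the basic feature extraction a one-liner each (a quick sketch; "song.wav" is just a placeholder path):

    import librosa

    # Load audio and pull out a few standard MIR features.
    y, sr = librosa.load("song.wav")
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)      # rough tempo estimate
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # pitch-class energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre coefficients
    print(tempo, chroma.shape, mfcc.shape)

Emotion is harder; it's usually a learned mapping from features like these to human-annotated labels, not something you can read off directly.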
idea 6 months ago prev next
Wouldn't it be interesting to use deep learning to compose a musical piece in real-time based on the reactions of the audience? You could create a truly interactive musical experience.
musiquestions 6 months ago prev next
In the example given in the paper, the deep learning model created a piece of classical music. Do you think it could be used to generate other genres as well?
dj_tech 6 months ago next
Absolutely! I've seen models that can generate electronic music, jazz and even metal. The key is to train the model on a large enough dataset of a specific genre to learn its characteristics and quirks.
interface_design 6 months ago prev next
One interesting aspect is how you can interact with the generated music. For example, you could have a visual interface where you can modify the parameters of the model in real-time and see how it affects the music.
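The simplest knob to expose in an interface like that is the sampling temperature: low values make the model conservative, high values make it take risks. A sketch of the core function (not from any particular codebase):

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0):
        # Higher temperature flattens the distribution -> more surprises.
        rng = np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

Wire that to a slider and you already have a surprisingly playable instrument.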
justcurious 6 months ago prev next
What do the authors say about the limitations and future work of their model?
researcher 6 months ago next
In the paper, they mention some limitations, such as the need for a more expressive representation of music and better handling of large time scales. For future work, they plan to incorporate more musical features and to apply the technique to other types of sound synthesis.
anotherresearcher 6 months ago prev next
Our lab is also working on similar projects and we've found that ensembles of several different models can greatly improve the quality of the generated music. It's definitely an exciting area with a lot of potential!
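In case it helps anyone, the simplest version of what we do is averaging each model's next-note distribution before sampling (a simplified sketch; predict_proba is a stand-in for whatever interface your models actually expose):

    import numpy as np

    def ensemble_next_note(models, context):
        # Average the next-note probability vectors from several models
        # and sample from the mixture distribution.
        rng = np.random.default_rng()
        probs = np.mean([m.predict_proba(context) for m in models], axis=0)
        probs /= probs.sum()  # guard against floating-point drift
        return rng.choice(len(probs), p=probs)

The averaging smooths out each individual model's bad habits, which is most of where the quality gain comes from.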
codequestion 6 months ago prev next
I'm having some issues getting the code to run on my machine. Could somebody please help?
codehelp 6 months ago next
Sure, I might be able to help. What kind of error are you getting?
codeissues 6 months ago next
It says that some modules are missing, specifically TensorFlow and librosa. I've installed them but I'm still getting the same error.
codeassistance 6 months ago next
Installation can be tricky. First, make sure you're installing into the same Python environment you're running the code from; that's the usual culprit when installed modules still show up as missing. Failing that, try uninstalling and reinstalling those modules, and check the version compatibility against what the repo expects.
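A quick way to check what you're actually running (the pip command in the comment is just the standard one):

    import sys

    # Confirm which interpreter the script runs under -- installing into
    # one environment and running another is the classic failure mode.
    print(sys.executable)

    import tensorflow as tf
    import librosa
    print("tensorflow", tf.__version__)
    print("librosa", librosa.__version__)

    # If an import fails here, reinstall into *this* interpreter:
    #   python -m pip install tensorflow librosa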
greatwork 6 months ago prev next
I'm really impressed with this research. I think this is just the tip of the iceberg and there's so much more to discover in the field of AI and music composition.