Just curious because never seen anything like it before, did a lot of beta users have to submit meticulously tagged presets in order to train the new function?
No, it is trained on an endless corpus of algorithmically created sounds.
Thanks for responding, Magnus, though I'm a little confused by your wording. Do you mean it was learning from procedurally generated samples? I wonder how something like that would get you a useful dataset; it's all a bit like sorcery to me, if I'm honest lol
Well, I can't claim it is trained on only good sounds. 😉
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
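A minimal sketch of what "learning the engine from algorithmically created sounds" could look like, using a toy stand-in synth and a plain least-squares model instead of a real neural network. Every name and detail below is my own assumption for illustration, not Sonic Charge's actual code: sample random parameter settings, render them, and fit an inverse model from audio features back to parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_synth(params, n=256, sr=8000.0):
    """Hypothetical stand-in for a synth engine: two parameters -> audio."""
    freq = 200.0 + 800.0 * params[0]          # pitch control
    decay = 2.0 + 8.0 * params[1]             # amplitude envelope control
    t = np.arange(n) / sr
    return np.sin(2.0 * np.pi * freq * t) * np.exp(-decay * t)

def features(audio):
    """Cheap audio features: magnitude spectrum."""
    return np.abs(np.fft.rfft(audio))

# The "endless corpus": sample random parameter settings, render them,
# and keep (features, parameters) pairs. No human-tagged presets needed.
P = rng.random((2000, 2))
X = np.stack([features(toy_synth(p)) for p in P])

# Fit a simple linear inverse model features -> parameters (least squares),
# standing in for the neural net that learns how the engine works.
Xb = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
W, *_ = np.linalg.lstsq(Xb, P, rcond=None)

def guess_params(audio):
    """An 'educated guess' at parameter settings for a reference sound."""
    f = np.append(features(audio), 1.0)
    return np.clip(f @ W, 0.0, 1.0)

estimate = guess_params(toy_synth(np.array([0.6, 0.3])))
```

The point is only the data-generation trick: because the synth itself renders the training set, the dataset is free and effectively unlimited, which fits Magnus's remark that it isn't trained on "only good sounds".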
- Magnus Lidström wrote:
Well, I can't claim it is trained on only good sounds. 😉
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
Hello,
do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market with references to research papers!
- Magnus Lidström wrote:
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample.
I produce music for an upcoming game. The client told me that if anyone on the team uses anything involving AI, we may have to declare this and also prove that we rightfully own, or are otherwise allowed to use, the data set the AI was trained on.
Now I wonder (somewhat legally speaking): would you say that this applies when I use Synplant 2 in the music production? I mean: would it already be necessary to declare the music created with the synth as AI-based, even if the AI is "only" involved in creating the patch for the synth?
- Sprite wrote:
Thanks for responding, Magnus, though I'm a little confused by your wording. Do you mean it was learning from procedurally generated samples? I wonder how something like that would get you a useful dataset; it's all a bit like sorcery to me, if I'm honest lol
My guess would be that you'd use an autoencoder where the synth's parameters are the reduced feature space in the middle. That way, you would only have to take a sample and compare it to the output of the decoder (the synth recording), so you don't need to build a fancy dataset. Not sure if it works, though. :)
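For what it's worth, that idea fits in a few lines. Everything here is a hypothetical illustration (toy one-parameter synth, placeholder peak-picking "encoder"), not anything from Sonic Charge: the synth engine plays the role of the decoder, the encoder maps audio to parameters, and the training signal would be a spectral distance between the reference and the re-rendered sound.

```python
import numpy as np

def synth(params, n=256, sr=8000.0):
    """Toy one-parameter synth: this is the *decoder* of the autoencoder."""
    freq = 200.0 + 800.0 * params[0]
    t = np.arange(n) / sr
    return np.sin(2.0 * np.pi * freq * t) * np.exp(-4.0 * t)

def spectral_loss(a, b):
    """Compare two sounds by their magnitude spectra."""
    return float(np.mean((np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b))) ** 2))

def encode(audio, n=256, sr=8000.0):
    """Placeholder 'encoder': audio -> synth parameters.
    A trained network would go here; this one just reads the spectral peak."""
    spectrum = np.abs(np.fft.rfft(audio))
    peak_hz = np.argmax(spectrum) * sr / n
    return np.array([(peak_hz - 200.0) / 800.0])

reference = synth(np.array([0.55]))
params = encode(reference)              # bottleneck = the synth's own parameters
reconstruction = synth(params)          # decoding = rendering with the synth
loss = spectral_loss(reference, reconstruction)
```

The catch with a real synth is that rendering is not differentiable, so you can't backpropagate through the "decoder" directly; that's presumably part of why it's harder than a textbook autoencoder.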
- Manuel Senfft wrote:
I produce music for an upcoming game. The client told me that if anyone on the team uses anything involving AI, we may have to declare this and also prove that we rightfully own, or are otherwise allowed to use, the data set the AI was trained on.
Now I wonder (somewhat legally speaking): would you say that this applies when I use Synplant 2 in the music production? I mean: would it already be necessary to declare the music created with the synth as AI-based, even if the AI is "only" involved in creating the patch for the synth?
No, that does not apply here. Legally, we own full rights to all the data that Genopatch was trained on. No humans (except us) were harmed in training this AI. 😁
Also, be aware that the trained neural network is only part of the Genopatch process. There's a lot of computation going on. That's why we need 100% of your CPU.
As for the source data you feed into Genopatch and the result you get out: as you have probably already figured out, there is no correlation in the data whatsoever. Whether you need to credit or pay the creator of the source sound is, I think, more a moral question (if the result sounds almost 100% identical) than a legal one.
Thanks for the detailed answer, Fredrik! Good to know. (=
- Nikita Urbansky wrote:
- Magnus Lidström wrote:
Well, I can't claim it is trained on only good sounds. 😉
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
Hello,
do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market with references to research papers!
https://arxiv.org/pdf/2205.03043.pdf - have you already seen this?
- Rasmus Merten wrote:
- Nikita Urbansky wrote:
- Magnus Lidström wrote:
Well, I can't claim it is trained on only good sounds. 😉
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
Hello,
do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market with references to research papers!
https://arxiv.org/pdf/2205.03043.pdf - have you already seen this?
I had not seen that actually, but one of the authors contacted me on Instagram today(!). Funny. Our approaches seemed similar when I first skimmed through it, but they differed once I started looking into the details. Besides differences in how we approach the neural net, the core difference is that Genopatch doesn't rely solely on the neural network. Team play occurs between different ML paradigms; they feed off each other's work. This happens in real-time on your CPU, and you can see how it develops when you run Genopatch.
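Magnus doesn't name the cooperating paradigms, but one plausible reading (purely my speculation, in keeping with Synplant's genetic theme) is a neural net proposing a starting point that an evolutionary search then refines against the reference sample. A toy sketch, with a hypothetical one-parameter synth and a hard-coded stand-in for the network's guess:

```python
import numpy as np

rng = np.random.default_rng(1)

def synth(p, n=256, sr=8000.0):
    """Toy one-parameter synth engine (hypothetical stand-in)."""
    t = np.arange(n) / sr
    return np.sin(2.0 * np.pi * (200.0 + 800.0 * p) * t)

def fitness(p, target):
    """Negative spectral distance between a candidate patch and the reference."""
    d = np.abs(np.fft.rfft(synth(p))) - np.abs(np.fft.rfft(target))
    return -float(np.mean(d ** 2))

target = synth(0.70)                    # the reference sample to match
nn_guess = 0.60                         # pretend a trained net proposed this

# Evolutionary refinement seeded by the network's guess: the two paradigms
# feed off each other, as described above.
population = np.clip(nn_guess + 0.05 * rng.standard_normal(32), 0.0, 1.0)
for _ in range(40):
    scores = np.array([fitness(p, target) for p in population])
    parents = population[np.argsort(scores)[-8:]]            # keep the fittest
    children = np.repeat(parents, 4) + 0.02 * rng.standard_normal(32)
    population = np.clip(children, 0.0, 1.0)

best = population[np.argmax([fitness(p, target) for p in population])]
```

A loop like this is cheap per step but needs many synth renders, which would also explain the "100% of your CPU" remark earlier in the thread.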
Absolutely fantastic work. I just watched the deep-dive video on YouTube and I'm blown away by the Genopatch model. I understand that you wouldn't want to give away your secrets, but I would be fascinated to learn more about the processes involved in training. Could you talk about it a bit, or point to some relevant papers?
- Magnus Lidström wrote:
Well, I can't claim it is trained on only good sounds. 😉
Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
Amazing!! It's so good too.