Since AI Music platforms like Udio and Suno have been getting a lot of attention lately, I wanted to get some Hexbears' opinions on the matter. Have any of you been testing the capabilities of these? Care to share what you've made?

Udio seems to be trained on a wider range of niche genres, which I think leads to more diverse sounds that are better at obscuring their AI origins. Suno has a much more limited, mainstream range, but you can make an entire concept album by extending clips multiple times until you 'get the whole song'. You can also fine-tune each clip by choosing to extend it very early into the clip to pick out the best parts (though at that point, why not just make the music yourself?).

So far I've been using Udio to get more diverse samples, combined with an AI music splitter to isolate the good parts. I then splice them into Suno-generated long tracks to work around the limitations of the prompting process. The music still isn't great (and my editing skills are dirt poor), but I think interesting things can be created this way. Soundcloud link to some example tracks.
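If anyone wants to try a similar split-then-splice step in a script rather than a DAW, here's a rough sketch of what it could look like, assuming demucs for stem separation and pydub for the splicing (not necessarily the exact tools I used); file names, timestamps, and output paths are placeholders.

```python
# Rough sketch: isolate a stem from a short generated clip, then overlay
# the good part onto a longer track. Assumes `pip install demucs pydub`
# and ffmpeg on PATH. File names, paths, and timestamps are placeholders.
import demucs.separate
from pydub import AudioSegment

# Split the clip into vocals / no_vocals stems; demucs writes the results
# under separated/<model>/<clip name>/ (htdemucs is the current default).
demucs.separate.main(["--two-stems", "vocals", "udio_clip.wav"])

# Load the isolated stem and the long generated track.
stem = AudioSegment.from_file("separated/htdemucs/udio_clip/vocals.wav")
base = AudioSegment.from_file("suno_long_track.mp3")

# Take a section of the stem chosen by ear (5 s starting at 0:12),
# soften its edges, and overlay it onto the base track at the 1:00 mark.
good_part = stem[12_000:17_000].fade_in(200).fade_out(400)
mix = base.overlay(good_part, position=60_000)

mix.export("augmented_track.mp3", format="mp3")
```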

Adam Neely just released a decent video about AI music and what it lacks; even though his critique of capitalist art production is pretty milquetoast, I'd recommend it as a mild antidote to all the AI music hype.

  • Tabitha ☢️ [she/her] · 7 months ago

    I don't know how to play an instrument, at least not well or with any real practice, and I'm completely illiterate when it comes to music theory; I've never so much as tried to understand it. Therefore, I am not your average music connoisseur, but instead a principled sequential noise enjoyer.

    It appears that in the span of a week, a single person could produce 100 sequential noise artifacts and upload them to Spotify. A music artist may only be able to create 1-2 works of music in that time, and must compete with the 1,000 people uploading sequential noise artifacts that same week, plus the soon-to-be 1 million people/bots uploading hundreds of sequential noise artifacts per week before 2025 starts.

    I listened to OP's Soundcloud samples. I couldn't tell they were inauthentic on a first skim (not a full listen). They weren't even pandering to me, but if one of them showed up in my Spotify Discover Weekly (Spotify suggests 20 songs to you per week), it easily would have been in the top 5 of those 20.

    Spotify's search is shit. I have a hard time searching for things like "covers of X, but not in garbage genres A, B, C" or "show me songs from artists with similar music DNA (idk, but you know, that Pandora-algorithm bullshit) to artists D, E, F". Obviously, there's not a filter for "btw don't include fascists in results".

    I find music I like by stumbling onto it unexpectedly. Doing so intentionally requires wading through tons of spam and bullshit, even if you find a good curator.

    Strangely enough, we've concocted a scenario where there are probably 100 artists I would love but have no way of knowing exist, 100,000 artists I would love but who can't afford to take off work to actually make music, and AI systems that will soon be more likely than Spotify to hook me up with music I like. If the AI system improves and is cheaper than Spotify, then why would I, the sequential noise enjoyer, the average person, bother even trying to find real artists?

    Music connoisseurs of the world will find their income evaporating. They'll start to come across like wine tasters or art auction galleries, or they'll be running around like dowsing rods accusing random new artists of being AI-generated, and when it's not a grift, it'll be for theoretical reasons that existing sequential noise enjoyers already don't understand as they listen to poorly made pre-AI popular music anyway.

    This is truly a cursed, dystopian, anti-human hellscape capitalism has forced onto us.