
Riffusion fine-tuned on the google/MusicCaps dataset.
I found that prompts phrased similarly to the dataset captions return more reliable results.
In my case, I wrote a prompt with ChatGPT like this:

> I'm writing prompts for a music generation AI. I used captions like this:
>
> 1. Someone is playing a high-pitched melody on a steel drum. The file is of poor audio quality.
> 2. This is a glitch music piece. There is a synth sound rising in pitch that resembles a triangle wave. There are granular synth samples being played randomly. A virtual percussive low-to-mid bell sound is playing a melody that resembles a marimba. There is an eerie feeling of flow. This piece could be used in the soundtracks of dystopian sci-fi movies. It could also be used in exploration sequences of video games.
> 3. This file contains an orchestral composition rising up while a lot of digital clicking sounds are in the foreground. This is an amateur recording, and the sounds seem to come from a different source. This song may be playing in an adventure video game.
>
> Now I want to make soothing jazz with bass at a medium tempo. Write a prompt in a style similar to the above captions. Return one sentence with 3 lines.

Response:

> Create a serene dance atmosphere with a dreamy melody, soothing synths, and a pulsing beat that gently propels listeners into a state of blissful tranquility, perfect for unwinding after a long day or enjoying a moment of peaceful dance.
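
Since Riffusion is a Stable Diffusion fine-tune whose outputs are spectrogram images, the checkpoint can presumably be loaded with the standard diffusers pipeline. The snippet below is a minimal, unverified sketch under that assumption, using this repository's id and the response above as the prompt; the generated image is a spectrogram, which the riffusion project's utilities can convert back to audio in a separate step.

```python
# Minimal sketch (not from the model card): assumes this repo is published in
# the standard Stable Diffusion / Riffusion layout and loadable with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Hyeon2/riffusion-musiccaps",   # assumed repo id of this model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = (
    "Create a serene dance atmosphere with a dreamy melody, soothing synths, "
    "and a pulsing beat that gently propels listeners into a state of "
    "blissful tranquility."
)

# Riffusion checkpoints produce spectrogram images; converting the spectrogram
# to audio is a separate step handled by the riffusion tooling.
spectrogram = pipe(prompt, num_inference_steps=50).images[0]
spectrogram.save("spectrogram.png")
```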


Dataset used to train Hyeon2/riffusion-musiccaps: google/MusicCaps
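
To mimic the dataset's caption style when writing your own prompts (as suggested above), you can browse a few MusicCaps captions directly. A minimal sketch, assuming the dataset is hosted on the Hub as google/MusicCaps and exposes a "caption" column; adjust names if they differ:

```python
# Minimal sketch: dataset id and column name are assumptions, not verified here.
from datasets import load_dataset

ds = load_dataset("google/MusicCaps", split="train")

# Print a few captions to use as style references for prompt writing.
for row in ds.select(range(3)):
    print(row["caption"])
```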