
Do you happen to know why Orpheus and Llasa use finetuning for voice cloning?

Zonos uses 128-float embeddings for voices, which seems much nicer: you can mix and match voices without changing the model.
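As a rough sketch of what "mix and match" could look like with fixed-size speaker embeddings (the function name, the interpolation scheme, and the renormalization step are my assumptions, not Zonos's actual API):

```python
# Hedged sketch: blending two 128-dim speaker embeddings by linear
# interpolation. Names and normalization are hypothetical, not Zonos code.
import numpy as np

def mix_voices(emb_a, emb_b, alpha=0.5):
    """Interpolate between two speaker embeddings, then renormalize
    to unit length so the result stays on the same scale."""
    mixed = (1 - alpha) * emb_a + alpha * emb_b
    return mixed / np.linalg.norm(mixed)

voice_a = np.random.randn(128).astype(np.float32)
voice_b = np.random.randn(128).astype(np.float32)
blend = mix_voices(voice_a, voice_b, alpha=0.3)  # 70% A, 30% B
```

The appeal is that this is pure vector arithmetic on conditioning inputs, so no weights change and no per-voice training run is needed.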



No, you just condition it with text-voice token pairs, and then when you condition further inference with text, the generated voice tokens tend to match the pairs earlier in the context.
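In other words, the cloning is in-context: the prompt interleaves reference text tokens with the matching voice tokens, then ends with the new text, and the model continues with voice tokens in the same style. A minimal sketch of that context layout (token IDs and the helper are made up for illustration):

```python
# Hedged sketch: assembling an in-context voice-cloning prompt from
# (text tokens, voice tokens) reference pairs. All IDs are hypothetical.
def build_context(ref_pairs, new_text_tokens):
    """Interleave reference text/voice token pairs, then append the new
    text; the model is expected to continue with matching voice tokens."""
    ctx = []
    for text_toks, voice_toks in ref_pairs:
        ctx.extend(text_toks)   # reference transcript tokens
        ctx.extend(voice_toks)  # audio/voice tokens for that transcript
    ctx.extend(new_text_tokens) # generation picks up after this point
    return ctx

ctx = build_context([([1, 2, 3], [900, 901])], [4, 5])
# → [1, 2, 3, 900, 901, 4, 5]
```

No weights are updated here, which is the contrast with the finetuning approach asked about above.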



