Hacker News

Any ideas on how to add Ampere support? I have a use case in mind that I would love to try on my 3090 rig


Magpie-TTS needs a custom kernel compiled for Ampere, but the kernel appears to be closed source. It ships builds for the 2018 T4 and for 2025 consumer cards, but not for the 2020-2024 consumer generations.
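A rough sketch of why that gap matters: CUDA kernels are compiled per "compute capability" (sm version), and a binary shipped only as cubins loads only on the architectures it was built for. The sm values below are standard NVIDIA figures; treating the kernel as cubin-only (no PTX fallback) is an assumption inferred from the fact that newer cards needed their own build.

```shell
# Assumed scenario: kernel built for sm_75 (T4, Turing, 2018) and sm_120
# (2025 Blackwell consumer cards), but not sm_86 (RTX 3090, Ampere, 2020).
BUILT_FOR="75 120"
GPU_SM="86"

# A cubin-only binary loads only on architectures it was compiled for.
if printf '%s\n' $BUILT_FOR | grep -qx "$GPU_SM"; then
  echo "kernel supported on this GPU"
else
  echo "kernel missing for sm_$GPU_SM; needs a rebuild targeting it"
fi
```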


I actually forked the repo, modified the Dockerfile and build/run scripts to target Ampere, and the whole setup runs seamlessly on my 3090. Magpie is running fine in under 3 GB of memory, with ~2 GB for the Nemotron STT model and ~18 GB for Nemotron Nano 30B. Latencies are great and the turn detection works really well!
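The rebuild described above presumably boils down to adding Ampere's sm_86 to the kernel build's architecture list. A hypothetical sketch, assuming the build respects the standard PyTorch-extension variable; the actual argument names in the forked Dockerfile may well differ:

```shell
# TORCH_CUDA_ARCH_LIST is the conventional PyTorch-extension variable that
# selects which GPU architectures custom CUDA kernels are compiled for.
export TORCH_CUDA_ARCH_LIST="8.6"   # 8.6 = RTX 3090 (Ampere)

# Then rebuild the image with the new arch list, e.g.:
#   docker build --build-arg TORCH_CUDA_ARCH_LIST=8.6 -t magpie-tts:ampere .
echo "targeting compute capability ${TORCH_CUDA_ARCH_LIST}"
```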

I'm going to use this setup as the base for a language-learning app for my gf :)


I got your fork working (also on a 3090). I was not impressed with the latency or the recommended LLM’s quality.


Make sure you're using the nemotron-speech ASR model. I added support for Spanish via the Canary models, but those have roughly 10x the latency: ~160 ms on nemotron-speech vs ~1.5 s on Canary.

For the LLM I'm currently using Mistral-Small-3.2-24B-Instruct instead of Nemotron, and it works well for my use case.



