
Your app is very similar to our demo app! https://modal-labs-whisper-pod-transcriber-fastapi-app.modal...

How come you don't support audio files longer than 1hr? Is it because of $$ cost?

The demo app above gets faster transcription by chunking the audio and parallelizing over dozens of CPUs, so you can transcribe about 1 hr of audio for $0.10.
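Roughly, the idea is to cut the audio into fixed-length chunks, transcribe each chunk on its own CPU worker, and stitch the text back together in order. A minimal sketch of that (assuming pydub and openai-whisper; the paths and chunk length are made up for illustration, this is not the demo's actual code):

    # Sketch of chunk-and-parallelize transcription, not the demo's actual code.
    import concurrent.futures
    import whisper
    from pydub import AudioSegment

    CHUNK_MS = 5 * 60 * 1000  # 5-minute chunks (arbitrary choice)

    def transcribe_chunk(chunk_path: str) -> str:
        # Each worker process loads its own model; "base" matches the demo's model size.
        model = whisper.load_model("base")
        return model.transcribe(chunk_path)["text"]

    def transcribe(path: str) -> str:
        audio = AudioSegment.from_file(path)
        chunk_paths = []
        for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
            chunk_path = f"/tmp/chunk_{i}.wav"
            audio[start:start + CHUNK_MS].export(chunk_path, format="wav")
            chunk_paths.append(chunk_path)
        # Fan the chunks out over multiple CPU cores, then join the text back in order.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            texts = list(pool.map(transcribe_chunk, chunk_paths))
        return " ".join(texts)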




> https://modal-labs-whisper-pod-transcriber-fastapi-app.modal...

Interesting, which model are you using? We use the medium model, which is the sweet spot in the time/performance trade-off. We also chunk; we try to detect words and silences so we can chunk at word boundaries, because if you chunk more aggressively and don't get the word boundaries right, it seems like Whisper loses some context and the accuracy suffers. We will soon support audio longer than 1 hr. We just want to make sure the wait time for transcription doesn't suffer for most users. Great demo, reach out to me if you want to collaborate.
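To illustrate the silence-aware chunking idea, here is a minimal sketch (assuming pydub; the helper name, thresholds, and target chunk length are made up for illustration, this is not our actual implementation):

    from pydub import AudioSegment
    from pydub.silence import detect_nonsilent

    def split_on_silence_boundaries(path: str, target_chunk_ms: int = 5 * 60 * 1000):
        """Yield (start_ms, end_ms) chunk boundaries that end on detected silences."""
        audio = AudioSegment.from_file(path)
        # Speech spans separated by silences of at least 500 ms,
        # using a threshold 16 dB below the file's average loudness.
        nonsilent = detect_nonsilent(
            audio, min_silence_len=500, silence_thresh=audio.dBFS - 16
        )
        chunk_start = 0
        for _, speech_end in nonsilent:
            # Close a chunk once it is long enough, cutting at the end of a speech
            # span so we don't split in the middle of a word.
            if speech_end - chunk_start >= target_chunk_ms:
                yield chunk_start, speech_end
                chunk_start = speech_end
        if chunk_start < len(audio):
            yield chunk_start, len(audio)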


We’re using base. The code is open source in the modal-labs/model-examples repo if you want to see anything we’re doing.



