I remember when Adobe demoed this idea of editing waveforms via the recognized text back in 2016, and it was pretty mind-blowing for the time.
EDIT: I could also definitely see Audapolis being useful if you could integrate it into a podcast's post-processing flow (volume normalization, de-essing), recognizing certain verbal tics such as "ummmm...", and automatically removing them from the audio.
This workflow is exactly what Descript does. Transcript-based editing, filler word removal, noise reduction, volume normalization, Overdub spoken word correction using the speaker’s voice, eye gaze correction for video, etc.
Whatever happened to that Adobe demo? Was it ever a real product? It's quite amazing how ahead of its time it was. Now that we have AI that can make people say whatever we want, it seems Adobe was on the cusp of that back then.
I remember people saying at the time that "this is the point at which voice recordings can no longer be trusted". And then, like you said, not much happened for a few years, until the current AI/ML tech got to where it is now.
And there's still no commercial product for synthesizing video to sync lip movements to an edited transcript, like all those scary proofs of concept that turned the president into a puppet.
Maybe there's not much value in editing what someone said after all.
Of course there is commercial value. The cost of reshooting video material is huge. You made an advert mentioning three features, but by the time the product is about to be released, one of them gets dropped, or even worse, changed? Congrats: you need to rebook the talent and the studio, and find a new tech crew, who need to set up again. Things probably won't cut seamlessly, so you end up re-recording the whole thing.
Potentially also for syncing lip movement in content dubbed into a foreign language. When I watch foreign media dubbed into English, the discrepancy is very noticeable and quite distracting, no matter how well the dub is written, performed, and edited to match the timing of the original.
Yeah, it's a bit like asking why Microsoft doesn't make a BitTorrent client or why Chase doesn't offer a cryptocurrency. The prevailing use cases are just a bit too untoward, even if a few wholesome stories do exist.
A genuinely free alternative to Descript sounds very useful.
I've always liked the idea of Descript and was considering building something similar before it came out. The problem is that my use case is a couple of videos a year, so it doesn't fit with an expensive monthly subscription.
I don't know if a perpetual free trial fits your definition of genuinely free, but I'm trying to build a competitor to Descript at https://smartmediacutter.com/
I've spent some of my free time over the past couple of months working on something similar. It's in a decent state, but I need help from somebody who understands the .fcpxml format so you can export your edits to DaVinci and FCP.
Hi, matcha.video looks very cool! I'm working on https://smartmediacutter.com which has some overlap in functionality. I'd love to have a chat about matcha and if you have any plans for commercialization, etc.
Right now it exports a .fcpxml file, which you would import into your editor (DaVinci, Final Cut, etc.) and which includes all of the cuts you made. From there you can move things around, add effects, do color grading, whatever you need to do to get to a final product.
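For anyone curious what that export roughly looks like, here is a sketch in Python that emits an FCPXML-shaped document from a list of kept time ranges. The function name and arguments are made up for illustration, and the element layout (resources / spine / asset-clip) only follows the general shape of the format; a real exporter needs version-specific format and asset metadata, so treat this as a rough outline, not a spec-complete implementation.

```python
import xml.etree.ElementTree as ET

def cuts_to_fcpxml(media_path, cuts):
    """Emit a minimal FCPXML-shaped document for a list of kept ranges.

    `cuts` is a list of (start_s, end_s) ranges from the source media.
    Illustrative only: real FCPXML needs format/asset metadata that
    varies by version.
    """
    root = ET.Element("fcpxml", version="1.9")
    resources = ET.SubElement(root, "resources")
    ET.SubElement(resources, "asset", id="r1", src=media_path)
    library = ET.SubElement(root, "library")
    event = ET.SubElement(library, "event")
    project = ET.SubElement(event, "project")
    sequence = ET.SubElement(project, "sequence")
    spine = ET.SubElement(sequence, "spine")
    offset = 0.0  # clips are laid end to end on the timeline
    for start, end in cuts:
        ET.SubElement(spine, "asset-clip", ref="r1",
                      offset=f"{offset}s", start=f"{start}s",
                      duration=f"{end - start}s")
        offset += end - start
    return ET.tostring(root, encoding="unicode")

# Two kept ranges: 0-4.5s and 12-20s of the source file.
xml_doc = cuts_to_fcpxml("interview.mov", [(0.0, 4.5), (12.0, 20.0)])
```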
One of the hosts of a podcast that I listen to has had positive things to say about Descript.[0] Just mentioning it because he's been talking about it for a few years, so I expect it's had a good amount of feature development over time.
What I particularly like about the Descript version (though it is overdone, as mentioned) is that it reduces or eliminates the pesky S sounds and P sounds (sibilants and plosives) that you get when you talk into a microphone and you're not perfectly distanced from it.
I haven't found another app that reduces or removes these.
It's a machine-learning-powered noise reduction + compressor + EQ + normalize combo effect. It works OK, but results in quite an overdone "studio" sound. I think the trend in mixing is leaning much more natural (less tweaked) nowadays. But for zero effort it might be impressive, and it probably works well for internal corporate presentations.
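For context, the "normalize" link in that chain is the trivial part; the ML denoiser, compressor, and EQ are where the real work is. A minimal peak-normalization sketch over raw float samples (names and the target level are my own choices, not anything from Descript):

```python
def peak_normalize(samples, target_peak=0.9):
    """Scale float samples in [-1, 1] so the loudest peak hits target_peak.

    This is only the simplest stage in a chain like the one described
    above; noise reduction, compression, and EQ are far more involved.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

out = peak_normalize([0.1, -0.45, 0.3])  # loudest peak is -0.45
```

After scaling, the loudest sample sits at 0.9 and everything else keeps its relative level.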
> Hindenburg’s manuscript feature gives you a complete overview of your audio. You can select the text just as you would in a text document and watch as your edits are made in real-time.
> If you need to export your text in a specific format, no problem. Hindenburg supports the most common text and transcription export formats.
I just tried this out and it's very nice and easy to use. Thank you for sharing!
I ended up copy-pasting the output from the messages page, which is 99% of the way to exporting a .txt file and my personal use case. Great work.
In my videography work I often do a separate audio-only interview to use as voice-over for the final video. I like to print out a transcript, mark the highlights, then go to the sound file and extract the snippets I liked. Extracting the snippets is a lot easier when I have timestamps printed out inline with the text at intervals of one or two minutes. In the case of bigWav, there were timestamps marked at only three or four points, so I had to go back and manually enter ten more marks to orient myself on the page. In addition, I used ChatGPT on an answer-by-answer basis to clean up the copy and add in punctuation for ease of reading. So there was an hour or two of data sanitizing needed to get everything ready to print out and use efficiently.
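That inline-timestamp step is easy to script if your transcription tool emits word-level timings. A quick sketch, assuming the words arrive as (start_seconds, text) pairs (the exact shape varies by transcriber, and `annotate` is just a name I picked):

```python
def annotate(words, interval=60):
    """Insert an inline [MM:SS] marker roughly every `interval` seconds.

    `words` is a list of (start_seconds, text) pairs, as produced by
    most word-level transcribers (exact format varies by tool).
    """
    out, next_mark = [], 0
    for start, text in words:
        if start >= next_mark:
            out.append(f"[{int(start) // 60:02d}:{int(start) % 60:02d}]")
            next_mark = start + interval
        out.append(text)
    return " ".join(out)

line = annotate([(0.0, "Hello"), (59.0, "world"), (61.0, "again")])
# → "[00:00] Hello world [01:01] again"
```

Printing that gives you the one-marker-per-minute layout described above, instead of adding marks by hand.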
This really needs a video demo, or at least a more in-depth text description of the features. I'll download it later to try, but I'm curious: does this just do simple hard cuts on the audio at word boundaries, or is there any AI magic for blending sentence timing, if that makes sense?
A number of comments turned me on to Descript -- I made a similar comment on another audio thread recently: it drives me absolutely insane how all audio tools with any AI are web-based monthly SaaS instead of offline, private, GPU-powered upfront purchases.
The web-based tools launch and move faster. There's no lack of offline tools, if you're the kind of person that files issue tickets in their spare time.
I'm not sure how new the trend is, but it's called gitmoji (https://gitmoji.dev/) and there's also tooling to make committing/searching for the "correct" emoji easier :D Whatever makes your job more fun, right? Oh and it saves on characters!
It's also probably Good Enough for a first pass-through.
I'm stuck in editing hell right now, and it would be very nice to just visually scroll past a few pages of pre-episode bullshitting and be able to wipe out whole minutes at a stretch, without having to listen to the whole thing. Even at increased speed, it's a bit of a slog.
Somewhat off-topic: I saw the funding note at the bottom - it’s pretty cool that the German government is giving some funding to projects like this. I wonder how much the US does in that regard, like whether there’s a list of projects that tax dollars go toward.
IMHO you should really change the headline on this. I'm an audio person, and my first thought was "that's stupid, words are awful at describing sound". But then I looked, and editing transcriptions of voice recordings by word is actually a great idea. That was not the impression the headline gave me, FWIW!
I also understood it fine, but maybe we both just remember the Adobe demo that vunderba mentioned. I guess it might not be so obvious if you don't know about that?
On the other hand it does say "not waveform" which I think makes it pretty clear. What would you suggest instead?
Also, there's often no perfect combo of words; there's a spectrum of options and you just pick an operating point. "Transcription" is a longer word than "word", so there's a tradeoff. It doesn't feel like a chasm to me.
I’m genuinely curious what you were trying to convey by completing your (totally valid) disagreement with “so whatevs”. I believe this is the part that’s perceived as rude, because the expansion of “whatevs”, “whatever”, is often further expanded as the sarcastic “whatever you say”.
I was going for something between YMMV and "whatever you say." The slight tilt toward the latter was received poorly ¯\_(ツ)_/¯ It's just how I talk. Maybe it's generational.
To be fair, I didn't read it as condescending.
I understood it as a generic statement ender, like "idk" or "ymmv" or "i guess", i.e. something that you put at the end of a sentence when you don't know what more to say. Maybe it actually is generational?
In some contexts "whatever" can mean "it evens out", but in others it can mean "your opinion doesn't matter".
At any rate, when I first read it I thought it was going to be some sort of LLM thing where you'd say "remove the third bridge and increase the pitch by one octave in the outro" and it would give you back an edited mp4 which you could then listen and cringe to, and sometimes say "whoa, that's amazing!"
Uh, no I'm not. In my work world, disagreeing with "whatevs" would be considered rude and dismissive and would be called out.
Believe me, I don't care that you disagree. I just don't like to see people breaking the civility guidelines here, as it's just about one of the last places online where discourse is largely held to a civil level for disagreements.
I write copy professionally, among other things. If you don't care whether what you write is clear to almost all readers... then I suppose it doesn't matter. Most people do not want misunderstandings of their copy and most copy editors would flag that as unclear. The new version is much better.
> I just don't like to see people breaking the civility guidelines here, as it's just about one of the last places online where discourse is largely held to a civil level for disagreements.
I seriously disagree that this breaks any sort of social contract between you and me on the internet. It was intended to be mildly dismissive but not overly rude. There's a higher standard for communicating with care at work (you should care about your coworkers), but do you really think people on the internet have time for this shit? I don't know you, guy.
It's not at all obvious. Given what we have seen recently, an equally plausible interpretation is "talk to an LLM and it will edit your audio" where audio could be anything.
It's not a good idea, but then tons of the LLM ideas we see here aren't either.
Can I ask what this tool does? I was trying to figure it out (the GitHub page isn't terribly clear) and came to the same conclusion you did (delete a chunk of the transcript and the tool would delete that audio).
I think I just lack experience in this area. I've used Audacity to cut out parts of audio / splice together two clips and that's about it, so I clearly don't have enough background to understand what this tool does.
Can someone clarify what this tool does, please? :)
It does exactly what you think it does. You can cut parts out of the original file without having to edit the waveform (like you would in Audacity). Instead, you select the parts directly, just like you would in a text editor.
What it does not do is generate new words (i.e., you can't type a sentence and have it added to your file as voice).
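The core mechanic of that kind of cut-only editing can be sketched in a few lines: the recognizer supplies word-level timestamps, so deleting words from the text maps to dropping the corresponding sample ranges. A toy version over a raw sample list (real editors also crossfade at the cut points; the function and data shapes here are illustrative, not Audapolis's actual internals):

```python
def cut_by_words(samples, rate, words, keep):
    """Keep only the sample ranges for words whose indices are in `keep`.

    `words` is a list of (start_s, end_s, text) word timings, as a
    speech recognizer would supply. Hard cuts only, no crossfading.
    """
    out = []
    for i, (start, end, _text) in enumerate(words):
        if i in keep:
            out.extend(samples[int(start * rate):int(end * rate)])
    return out

# Toy 1 kHz "audio": three words at 0-1s, 1-2s, 2-3s; drop the middle one.
samples = list(range(3000))
edited = cut_by_words(samples, 1000,
                      [(0.0, 1.0, "keep"), (1.0, 2.0, "umm"),
                       (2.0, 3.0, "this")],
                      keep={0, 2})
```

Selecting text in the editor just builds the `keep` set; the audio on either side of the deleted word is concatenated.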
https://youtu.be/I3l4XLZ59iw