That’s different. It’s essentially using the Whisper model for audio-to-text and feeding that output into ChatGPT.
Multimodal would be watching a YouTube video without captions and asking “how did a certain character know it was raining outside?” where the answer comes from the sound of rain but there is no image of rain.
I don't know if it's related to Gemini, but Bard seems to be able to do this by answering questions like "how many cups of sugar are called for in this video". Not sure if it relies on subtitles or not.
> Expanding Bard’s understanding of YouTube videos
> What: We're taking the first steps in Bard's ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
> Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.
Ah, that’s right. I guess my question is: is it a true multimodal model (able to produce arbitrary audio), or is it a speech-to-text system (OpenAI has a model called Whisper for this) feeding text to the model and then using text-to-speech to read the response aloud, roughly the kind of pipeline sketched below?
Though now that I’m reading the Gemini technical report, it can only receive audio as input; it can’t produce audio as output.
Still, from a quick glance at their technical report, it seems Gemini might have superior audio input capabilities. Now that I think about it, though, I’m not sure of that.
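For what it’s worth, the pipeline version is easy to sketch with OpenAI’s Python SDK. This is just an illustration of the architecture being discussed, not a claim about how Gemini or ChatGPT’s voice mode is actually wired up; the model names, voice, and file paths are placeholders.

```python
# Rough sketch of the "pipeline" alternative: speech-to-text, then a plain
# text model, then text-to-speech. Not how any particular product is built.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Audio in -> text, via Whisper
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Transcribed text -> text answer, via a text-only LLM
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = completion.choices[0].message.content

# 3. Text answer -> audio out, via a separate TTS model
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=answer,
)
with open("answer.mp3", "wb") as out:
    out.write(speech.content)
```

The point is that the language model in the middle only ever sees text, which is why something like rain heard in the background of a video would be invisible to this kind of setup, whereas a natively multimodal model could pick it up.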
I think it's app-only, though.