
Gave it a bunch of technical papers and standards, and while it makes up things that simply aren't true, that's to be expected from the underlying system. It could be mitigated, e.g., with another internal round of fact-checking or with manual annotations.

What really stands out, I think, is how it could let researchers who have trouble communicating publicly find new ways to express themselves. I listened to the podcast on a topic I've been researching (and publishing and speaking about) for more than 10 years, and it still gave me some new talking points and illustrative examples that would be really helpful in conversations with people unfamiliar with the research.

And while that could probably also be done in a purely text-based manner with any of the SOTA LLMs, it's much more engaging to hear it embedded within a conversation.



The underlying NotebookLM does better at this: each claim in the note cites a block of text in the source, so it's engineered to be more factually grounded.

I would not be surprised if the second pass that generates the podcast style loses some of this fidelity.




