Hacker News

I'm very familiar with the Infocom era and am still in touch with some of the folks. I admit I haven't kept up with the latest developments. Probably should take a look.

For folks interested in the early history, Jason Scott's Get Lamp documentary is highly recommended. (He also has an Infocom-focused edit.)




The latest developments in IF are pretty amazing compared to the Infocom days. The parsers are a lot more advanced, and that was all before things like LLMs, which I assume could be used in some way here.


I can imagine. While sophisticated for the time, the Infocom parsers were often an exercise in figuring out the right incantation. (Sort of like Alexa :-/ Low blow, I know.) Especially with LLMs and voice recognition, there's a huge amount of potential, present and future, for much more fluid interactions. Not that I expect it to ever be a really mainstream genre.
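To make the "right incantation" point concrete, here's a minimal sketch (not actual Infocom code; the vocabulary is hypothetical) of the rigid verb-noun parsing style of that era. Any phrasing outside the hard-coded vocabulary is simply rejected, even when the player's intent is obvious:

```python
# A toy two-word parser in the classic style. Everything here is
# illustrative; real Infocom parsers handled more grammar than this.

VERBS = {"take", "open", "read"}
NOUNS = {"lamp", "door", "scroll"}

def parse(command: str):
    """Return a (verb, noun) pair, or None if the command isn't understood."""
    words = command.lower().split()
    # Only exact two-word commands with known vocabulary are accepted.
    if len(words) != 2 or words[0] not in VERBS or words[1] not in NOUNS:
        return None
    return (words[0], words[1])

print(parse("take lamp"))         # understood: ('take', 'lamp')
print(parse("pick up the lamp"))  # same intent, wrong incantation: None
```

An LLM-backed front end would instead map free-form phrasings like "pick up the lamp" onto the same canonical (verb, noun) action, which is the fluidity being discussed here.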


I dunno, I think it could become mainstream with the right evolution in the interface. Imagine an interactive story app that you listen to on your commute, where voice commands back to it are the only interface (e.g. so it's safe to interact with while driving).

Maybe that’s just a subset of the more general “AI companion” opportunity, but I expect you could get some really interesting experiences by calibrating the balance between the manually curated/composed parts of it and the parts that get a bit more painted-in by the LLM.

Am thinking especially of stories with conflicting timelines, unreliable narrators, etc, where you’d maybe be revisiting the same events from multiple perspectives to piece together what actually happened.



