Slightly off-topic, but the recent surge in responsive UI got me thinking: if we are now driven to merge UI logic across different graphical devices, could we build apps that span both textual and graphical devices?

After all, the core functionality of the application remains the same. To use a simple example: when you search for a product online, get a search result list, and then select one item from the list, couldn't this flow be modeled just the same in both GUI and text interfaces?
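As a minimal sketch of the idea (all names here are hypothetical, not an existing library): the search → list → select flow can be written once against an abstract front end, with a text implementation today and a GUI implementation slotting in behind the same interface later.

```python
from abc import ABC, abstractmethod


class SearchUI(ABC):
    """Abstract front end for a search -> result list -> select flow."""

    @abstractmethod
    def read_query(self) -> str:
        """Obtain the search query from the user."""

    @abstractmethod
    def choose(self, results: list[str]) -> str:
        """Present the results and return the user's pick."""


class TextUI(SearchUI):
    """Classic prompt/read/respond implementation for a terminal."""

    def read_query(self) -> str:
        return input("Search: ")

    def choose(self, results: list[str]) -> str:
        for i, item in enumerate(results, start=1):
            print(f"{i}) {item}")
        pick = int(input("Pick a number: "))
        return results[pick - 1]


def run(ui: SearchUI, search) -> str:
    """The application core: identical regardless of the front end."""
    query = ui.read_query()
    return ui.choose(search(query))
```

A GUI version would subclass `SearchUI` the same way, rendering a search box and a clickable list instead of prompts; `run` never changes.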

I'm thinking back to the Turbo Pascal-style applications of the past that produced full-blown IDEs in text, or the WordStars and WordPerfects of yore: the UI model that sits behind those apps cannot be much different in principle from their modern-day equivalents.

Even farther back, there was a time (and probably still is, for college assignments) when CLI applications had a prompt, read user input, respond cycle, complete with text-based choices to select from and so forth.

What if we were to merge the two worlds instead of trying to get one to conform to the other?
