Hacker News

Going through the list of what happens on iOS:

> UIKit introduced 1-2 ms event processing overhead, CPU-bound

I wonder if this is correct, and if so, what's happening there - a modern CPU (even a mobile one) can do a lot in 1-2 ms. That's 6 to 12% of the per-frame budget of a game running at 60 fps, which is pretty mind-boggling for just processing an event.
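To make the percentages concrete, here's the back-of-the-envelope arithmetic behind the 6-12% claim (my own check, not from any profiler): at 60 fps each frame has about 16.67 ms, so 1-2 ms of event-processing overhead eats a surprisingly large slice of it.

```python
# Share of a 60 fps frame consumed by 1-2 ms of event processing.
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms per frame at 60 fps

for overhead_ms in (1.0, 2.0):
    share = overhead_ms / FRAME_BUDGET_MS
    print(f"{overhead_ms} ms is {share:.0%} of a 60 fps frame")
# 1 ms -> 6%, 2 ms -> 12%
```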




I guess you can waste any amount of time with "a few" layers of strictly unnecessary indirection.

Speaking of games: just the other day I had the realization that we should look at how games are designed if we want proper architectures for GUI applications.

What we build today instead are "layers of madness", or at least that's what I would call them.
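For contrast, here is a minimal sketch of the structure games typically use - a fixed loop that polls input, updates state, and redraws the whole frame, rather than a framework invoking callbacks through layers of indirection. All names here are mine, and the input/render functions are stubs standing in for real device and GPU access:

```python
import time

def poll_input():
    # In a real game this reads the input device state directly
    # (keyboard, mouse, gamepad) once per frame; here it's a stub.
    return []

def update(state, events, dt):
    # Advance the simulation by one fixed timestep.
    state["t"] += dt
    return state

def render(state):
    # Redraw the entire frame from the current state (stubbed out).
    pass

def run(frames=3, fps=60):
    state = {"t": 0.0}
    dt = 1.0 / fps
    for _ in range(frames):
        frame_start = time.perf_counter()
        events = poll_input()              # 1. poll, don't wait for callbacks
        state = update(state, events, dt)  # 2. advance the simulation
        render(state)                      # 3. redraw everything
        # 4. sleep off whatever is left of the frame budget
        elapsed = time.perf_counter() - frame_start
        time.sleep(max(0.0, dt - elapsed))
    return state
```

The point of the shape is that the application, not the framework, owns the control flow: every frame does the same three steps in the same order, which makes the cost of each step easy to measure and budget.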


Games have the privilege of controlling everything from the input device to the GPU pipeline. Nothing on the desktop is going to be that vertically integrated easily.


> Nothing on the desktop is going to be that vertically integrated easily

Why? Are there any technical reasons?

I think this is purely a framework / system-API question.


The only things I can think of are that, for windowed apps, you have to wait for the OS to hand you mouse events, since the mouse may be over another window, and you have to render to a window instead of directly to the framebuffer.
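That "wait for the OS" shape is the key structural difference. A toy sketch of it, with a plain queue standing in for the window system's event queue (entirely my own stand-in, not any real windowing API): the app blocks until the OS decides an event belongs to its window, instead of reading the device itself.

```python
import queue

# Stand-in for the OS event queue: a windowed app blocks here and only
# sees the mouse events the window system routes to its window.
os_event_queue = queue.Queue()

def window_event_loop(handle_event, idle_timeout=0.01):
    """Classic windowed-app shape: block until the OS hands us an event."""
    while True:
        try:
            event = os_event_queue.get(timeout=idle_timeout)
        except queue.Empty:
            return  # nothing routed to our window; a real loop would keep waiting
        if event["type"] == "quit":
            return
        handle_event(event)
```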


Which brings us back to "system APIs".



