
That sounds like a fun idea! I've been toying for years with the idea of a fully incremental compiler - basically caching the dependency DAG of code compilation: "This function depends on the definition of this struct, which in turn depends on these typedefs and this other struct..."
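Roughly what I have in mind, as a minimal C++ sketch with hypothetical names (a real compiler would hash ASTs rather than raw source text):

    // Sketch only: each compiled entity is a DAG node, and cached
    // object code is keyed by the hash of the node plus its deps.
    #include <cstdint>
    #include <string>
    #include <vector>
    #include <unordered_map>

    struct Node {
        std::string name;           // a function, struct, or typedef
        std::uint64_t contentHash;  // hash of this entity's source text
        std::vector<Node*> deps;    // definitions this entity depends on
    };

    // Combined hash over a node and everything it transitively uses.
    std::uint64_t depHash(const Node& n) {
        std::uint64_t h = n.contentHash;
        for (const Node* d : n.deps)
            h = h * 1000003ULL ^ depHash(*d);  // simple hash-combine
        return h;
    }

    // Compilation results from the previous run, keyed by depHash;
    // a cache hit means the entity and all its deps are unchanged.
    std::unordered_map<std::uint64_t, std::string> objectCache;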

Then I can imagine a compiler which takes the DAG from the previous compilation plus information about which lines of code have changed, and figures out which intermediate compilation results need to be recomputed. The result would be a set of blocks which changed - globals, symbol table entries, and function definitions. And then you could implement a linker in a similar way - consuming that delta plus information about the previous compilation run to figure out where and how to patch the previous executable.
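Concretely, the "figure out what needs recomputing" step is just a walk over reverse-dependency edges from the changed nodes (again a hypothetical sketch, reusing the Node type above):

    // Sketch: given the previous DAG plus the set of entities whose
    // source changed, collect everything that must be recompiled by
    // walking reverse-dependency ("who uses me") edges.
    #include <queue>
    #include <unordered_set>

    std::unordered_set<Node*> dirtySet(
            const std::vector<Node*>& changed,
            const std::unordered_map<Node*, std::vector<Node*>>& dependents) {
        std::unordered_set<Node*> dirty;
        std::queue<Node*> work;
        for (Node* n : changed) { dirty.insert(n); work.push(n); }
        while (!work.empty()) {
            Node* n = work.front(); work.pop();
            auto it = dependents.find(n);
            if (it == dependents.end()) continue;
            for (Node* user : it->second)
                if (dirty.insert(user).second)  // newly marked dirty
                    work.push(user);
        }
        return dirty;  // the delta the linker needs to patch in
    }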

The end result would be a compiler which should only need to do work proportional to how much machine code is affected by a change. So if you're working on chromium and you change one function in a .cpp file, the incremental compilation should be instant, since it would only need to recompile that function, patch the binary with the change, and update some jmp addresses. It would still be slow if you changed a struct that's used all over the place - but it wouldn't need to recompile every line of code that transitively includes that header, like we do right now.
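The "patch the binary" step could be as simple as appending the newly compiled function and redirecting the old entry point. This sketch assumes x86-64, a writable text segment, and a displacement that fits in 32 bits - none of which is guaranteed in practice:

    // Sketch only: overwrite the old function's entry with a 5-byte
    // relative jmp to the new copy, so existing call sites still work.
    #include <cstdint>
    #include <cstring>

    void patchJmp(std::uint8_t* oldEntry, std::uint8_t* newEntry) {
        // rel32 is measured from the end of the 5-byte jmp instruction
        std::int32_t rel32 =
            static_cast<std::int32_t>(newEntry - (oldEntry + 5));
        oldEntry[0] = 0xE9;  // jmp rel32 opcode
        std::memcpy(oldEntry + 1, &rel32, sizeof rel32);
    }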
