
"According to Glenn, it is very hard to figure out what TeX code is doing, mostly due to its terseness and extreme optimization"

This quote surprises me. The TeX code is extremely well documented compared to just about any other piece of source code I have seen.

The literate programming style never caught on, but I think you see the beauty of it in the TeX source code. If it's hard to read because of the mix of code and commentary, it can be turned into a book, or into a compiled program.
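(To make that concrete: TeX is written in Knuth's WEB system, where one source file interleaves TeX prose with Pascal code. Running WEAVE over it produces a typeset document, and running TANGLE produces compilable Pascal. The fragment below is only a rough, made-up sketch of what that notation looks like, not an excerpt from tex.web.)

    @ This prose survives only in the woven, typeset version of the
    program; \.{TANGLE} strips it out and emits just the Pascal
    below, splicing named sections into place.

    @p program demo(output);
    begin @<Greet the reader@>;
    end.

    @ Named sections let the exposition discuss each piece of code
    separately from the place where it is used.

    @<Greet the reader@>=
    writeln('hello from a literate program')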

I am also surprised that optimized Pascal from decades ago, which compiles on a wide variety of hardware and software platforms, outperforms a modern reimplementation.

The point being that if it were optimized for a specific CPU on a specific OS, making it really fast would make sense. But when writing code for "some" CPU, or in reality almost any CPU (OK, that isn't quite true, but TeX runs on a lot of different hardware) and any operating system, optimizing is harder.

It does get translated to C (via web2c) and compiled from there these days, though.



For a better sense of what he means, check out the video to get more detail about some of the complexities.

As a counter-point, a codebase can be well documented and yet be hard to understand if the abstractions are not meaningful to the reader. I can say this without attaching any normative judgment to the quality of the code.

If I recall correctly, Glenn found it difficult to reason about the code because many parts were tangled in complex ways. For example, subsequent processing steps often triggered previous steps to repeat in surprising ways.

To speculate a little bit, these parts of the code may be well documented down in the weeds, but they could still feel non-intuitive in the broader context if they seemed inconsistent or surprising.



