
> Usually, font renderers are Turing complete in some way, and usually this opens up security concerns, and rightfully so...

Most of the font features I’m aware of are modeled as FSMs, which can certainly be Turing complete but do not need to be. Off the top of my head, you could figure out a “sane” max time for a “script” to run without any loss of font quality, though it would not technically comply with the intended functionality.
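
Something like this, conceptually; a minimal Python sketch of the idea, assuming a hypothetical one-instruction-at-a-time interpreter (run_with_budget and the op callables are invented for illustration, not any real rasterizer’s API):

    import time

    class BudgetExceeded(Exception):
        pass

    def run_with_budget(program, max_instructions=100_000, max_seconds=0.01):
        """Toy interpreter loop: abandon the hint program rather than let it run forever."""
        deadline = time.monotonic() + max_seconds
        executed = 0
        pc = 0
        state = {}  # hypothetical interpreter state (stack, storage, ...)
        while pc < len(program):
            if executed >= max_instructions or time.monotonic() > deadline:
                raise BudgetExceeded("hint program exceeded its budget")
            pc = program[pc](state, pc)  # each op mutates state and returns the next pc
            executed += 1
        return state

The caller would catch BudgetExceeded and fall back to the unhinted outline, so a runaway or malicious font costs you some hinting quality at worst, not a hang.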

I am at a loss for words as to why this was ever possible, although perhaps security didn’t matter until this past decade and I am too young to remember how low a priority it used to be... Certainly, putting anything in kernel space for performance reasons of all things seems ridiculous for desktop computing. I’ll take a fucking massive performance hit to keep my data safer.




Slight correction: FSMs are not Turing complete. They may not halt, though, which is not the same thing.


Derp, been too long since my theory class.

I am fairly confident AAT tables are FSMs, so I am curious where the Turing completeness comes from. I am certainly not familiar with all TrueType/OpenType features.


It's the hinting that's normally referred to here as Turing complete, I think.

https://en.wikipedia.org/wiki/TrueType#Hinting_language
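
Roughly, the bytecode gives you a stack, a writable storage area (the RS/WS instructions), and relative jumps including conditional ones (JMPR, JROT/JROF), which is enough for data-dependent loops and counters, so “Turing complete” up to memory limits. A toy Python model of that combination, with deliberately simplified opcodes and encoding (not the real TrueType instruction semantics):

    def run(ops, max_steps=10_000):
        """Toy stack machine with WS/RS-style storage and relative jumps."""
        stack, store, pc, steps = [], {}, 0, 0
        while 0 <= pc < len(ops):
            steps += 1
            if steps > max_steps:
                raise RuntimeError("budget exceeded")  # the cap discussed upthread
            kind = ops[pc][0]
            if kind == 'push':
                stack.append(ops[pc][1])
            elif kind == 'ws':    # pop value, pop slot: store[slot] = value
                value, slot = stack.pop(), stack.pop()
                store[slot] = value
            elif kind == 'rs':    # pop slot, push store[slot]
                stack.append(store.get(stack.pop(), 0))
            elif kind == 'sub':   # pop b, pop a, push a - b
                b, a = stack.pop(), stack.pop()
                stack.append(a - b)
            elif kind == 'jmpr':  # pop offset, jump relative to this instruction
                pc += stack.pop()
                continue
            elif kind == 'jrof':  # pop flag, pop offset, jump if flag == 0
                flag, offset = stack.pop(), stack.pop()
                if flag == 0:
                    pc += offset
                    continue
            pc += 1
        return store

    # Count store[0] down from 10 to 0: a loop whose trip count depends on data,
    # the kind of loop a plain table-driven FSM doesn't give you.
    prog = [
        ('push', 0), ('push', 10), ('ws',),             # store[0] = 10
        ('push', 9), ('push', 0), ('rs',), ('jrof',),   # pc 3: exit if store[0] == 0
        ('push', 0), ('push', 0), ('rs',),
        ('push', 1), ('sub',), ('ws',),                 # store[0] -= 1
        ('push', -11), ('jmpr',),                       # back to pc 3
    ]
    assert run(prog)[0] == 0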


I know Mac OS X has ignored a lot of the hinting in fonts. Do they evaluate the entire script regardless?



