Text Rendering Hates You (gankra.github.io)
542 points by robin_reala on Sept 29, 2019 | 169 comments



A few minor things missing in OP (mostly CJK related)

- Code point is sometimes not enough to determine the glyph. For example, U+5199 (写) must look different in Simplified Chinese and Japanese. Typically this is handled by using different fonts, but more formally it should be marked with different lang attributes (in the case of HTML), e.g. <span lang="zh-Hans">写</span> vs. <span lang="ja">写</span>.

- Top-to-bottom writing mode is still very much in use in Japanese. HTML support is poor, but it's common in PDF. Caveat: Latin letters are rotated by 90 degrees (as explained in OP), but for punctuation, simply rotating the glyph isn't enough, because the center line would be slightly off. You need special glyphs for rotated punctuation.

- Unlike Latin scripts, most Chinese and Japanese text can break to a new line after almost any character, but there are exceptions: contracted letters (e.g. ちょっ) cannot be split. So you end up treating each character as a separate word, with a contracted sequence treated as one word (see the sketch below).
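A minimal sketch of that clustering in Python, assuming the simplification that the only rule is "never break before a small kana" (real layout engines implement the full kinsoku shori / UAX #14 line-breaking rules):

    SMALL_KANA = set("ぁぃぅぇぉっゃゅょゎァィゥェォッャュョヮ")

    def break_clusters(text):
        # Glue each small kana onto the preceding cluster, so a line
        # break can only happen between clusters.
        clusters = []
        for ch in text:
            if clusters and ch in SMALL_KANA:
                clusters[-1] += ch
            else:
                clusters.append(ch)
        return clusters

    print(break_clusters("ちょっと"))  # ['ちょっ', 'と']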


Elaboration for "ちょっ"

Japanese hiragana chi, small yo, and small tsu.

Hiragana is a syllabary (not alphabet!) comprised of 46 sounds. Small hiragana can be used to modify the sound of the leading character to expand the set of available phonemes. Here, "Chi (small)yo" becomes "cho".

Small tsu is special. It represents a doubled or germinated consonant.

For example ちょっと (chotto) sounds like chot-to. Example:

https://m.youtube.com/watch?v=kh9Kk_RHn8Q

Japanese is a really fun language. If you've ever had an interest in learning it, I recommend it. Reading kanji is a joy in and of itself, because it has a pleasant inner logic that gets deeper the more you learn.


Japanese is one of the few languages with a worse writing system than English. Hats off to them for it. I do miss the pre-war spellings though, where they wrote things based on 10th-century pronunciation, e.g. AU is pronounced OH (like in French).

Tibetan is also in pretty desperate need of spelling reform. I’d love to hear replies about other terrible writing systems.


In the scheme of writing systems it's hard for me to see why English is so bad. In general writing (like spoken grammar) is so heavily used that all the "rough corners" get sanded smooth pretty rapidly and continuously. For example, though the histogram of Kanji/Hanzi has a very steep drop-off, it's not that hard for people to recognize and keep track of, because characters that were hard to read at first tend to get restructured or dropped.

As far as Indo-European languages go, languages like English and French are quite conservative in spelling and underlying meaning, which means they do diverge from pronunciation but retain semantic clues which help the reader unlock the meaning of unknown words. While languages that continuously reform to try to match pronunciation shifts (e.g. German) discard this info. In addition they have to pick one canonical pronunciation, and themselves typically retain various homophonic issues (e.g. how many ways can you write the sound of ä (I don't mean "ae")).

Even as a kid (should be "easy to learn", right?) I struggled far more with the variants of the Devanagari alphabet, even though it was so phonetic. But hundreds of millions of people use it every day without thinking about it.


English is pretty bad unless you are a total orthography history nerd.

For example, people routinely mispronounce the name of the President of China. (It’s closer to Shi than Zi.) Why? Because English has a strong preference for retaining native spellings even when the other language has a completely incompatible set of pronunciation rules. With Chinese this is extra silly because pinyin is only one of a million ways of romanizing Chinese and basically the least English compatible method.

English spelling is like a Hofstadter puzzle: to be able to master it, you must master the spellings of all the source languages plus English itself, recursively, going back to the Great Vowel Shift. It's not a good system.


Pinyin is requested by the Chinese government. They used to use Wade-Giles but changed iirc in the 70s and requested that others do so too.

It can’t really be called part of English in any meaningful way anyway.


We don't call the country "Zhongguo" in English, we call it "China". So why do we let them tell us to call it "Beijing" instead of "Peking"? It's craziness. We gave up a perfectly good name for "Canton" and replaced it with an unreadable mess that isn't even in Cantonese ("Guangzhou").

If people who speak Chinese want to use pinyin, more power to them. But English speakers need to stop just copying other people's romanizations even when the romanizations do not connect to our spelling system at all.


> For example, people routinely mispronounce the name of the President of China. (It’s closer to Shi than Zi.) Why? Because English has a strong preference for retaining native spellings even when the other language has a completely incompatible set of pronunciation rules.

This is objecting to the .01%--I find it hard to take seriously.

English has the really good feature that there is a phonic correspondence between spelling and pronunciation. It's not always one-to-one, and it's sometimes really weird, but it's generally there.

So, if someone sees Xi Jinping in writing, the utterance out of their mouth will be close enough for me to know what they are talking about.

However, even when someone completely botches the spelling of a word, there is almost always enough logic behind the error for the person looking at it to figure out the actual word meant. That's a really nice feature in a language. (For example, I have seen phlegm or subtle spelled in all manner of ways, but I could generally tell what was meant.)

By contrast, lots of the articles talking about Chinese highlight all the really common words that native speakers can't even cough up the characters for because there is so little phonic correspondence.


> English has the really good feature that there is a phonic correspondence between spelling and pronunciation

It's absolutely not the case, and it's actually one of the most mocked features of English on the internet. The pronunciation of the name "Sean Bean" is a good example. As a non-native speaker, I have had a lot of pain learning the pronunciation of words I've only seen written.

In American English, you have at least one direction easy (from pronunciation to spelling) but not in British English. And going from spelling to pronunciation is really hard in both dialects.


> And going from spelling to pronunciation is really hard in both dialects.

But you can try. And it will kinda work.

If you pronounce "Sean Bean" as "seen bean" (most likely), "shawn bawn", or "say-an bay-an" (that's gonna require some thought for a native ...) people will scrunch their brow, think a bit, maybe chuckle, and have an idea who you are talking about--especially if they've been around you for a couple days.

English is remarkably error tolerant. You may not get it right, but you will get your point across.

With kanji you can't even try. This isn't about whether English is easy. It's about the fact that the simple act of going from pronunciation to kanji or kanji to pronunciation simply isn't possible at all.

(And choosing a formal name as an example is just asking for exceptions. Japanese, for example, has a whole class of kanji used basically for nothing except names--good luck pronouncing those ...)


> With kanji you can't even try.

I don't want to defend kanji/hanzi as a writing system. It's also in debt to history in a crazy way. But as a non-native reader of Japanese, you can often figure out the pronunciation and meaning of unfamiliar characters based on how they look. A large majority, maybe 90%, of characters have a meaning-part and sound-part, and once you know the common roots, you can get pretty far by just looking at things. You can also move to Korean and Cantonese pretty easily because those preserved the pronunciations of old Chinese pretty well. Ironically, Mandarin did a pretty bad job of preserving old Chinese, so it's harder to match up.

E.g. 楽 (music) is old Chinese nguk, Japanese gaku, Cantonese ngok, Korean ak, and Mandarin yue(??).


> English is remarkably error tolerant

I don't know what your primary language is, but as a French person I can tell you that no, English isn't that error tolerant, and people don't understand what you mean if your pronunciation isn't good enough.

I still have traumatic flashbacks of my younger self desperately trying to buy water (pronouncing "wa" as in waffles and "ter" as in territory) in Canada when I was 15 (and really thirsty)… I think I spent a whole 5 minutes before a French Canadian arrived and saved my day.


Did you try aqua? Most educated English speakers should have known that word. :-(


It's actually Sean. Pronounced shawn. Be thankful we didn't get more from Gaelic, where "Laoghaire" is pronounced leery. :)

Don't agree US English makes it easier, it's just differently idiosyncratic.

Extreme example - ghoti: Pronounced fish. The gh from tough, the o from women, the ti from nation. Works in American English. :p

If I hear in the US "cull err" and go to spell, I don't end up at color, hearing "sell fone" doesn't lead me to cell phone, etc.

Half the problems of English spelling are the three (four in US English, with Webster) significant attempts to simplify spelling over the centuries, and outsourcing some temporarily illegal printing to the Dutch.


> Extreme example - ghoti: Pronounced fish. The gh from tough, the o from women, the ti from nation. Works in American English. :p

http://zompist.com/spell.html

Initial 'gh' can never be pronounced 'f', final 'ti' can never be pronounced 'sh', and 'o' is only pronounced 'i' in one word.


> It's actually Sean.

Indeed. Fixed, thanks

> Don't agree US English makes it easier, it's just differently idiosyncratic.

I found it easier when learning, but YMMV.


I'll defer then, as you're the non-native speaker who learnt the hard way. :)

It's just that a lot of what I might have thought American English would take the opportunity to clean up remains. Like all those silent h's from having printed the first English bibles in Holland. Ghost, aghast etc. There used to be a lot more - ghospel for one!


> you're the non-native speaker who learnt the hard way

Well, even native speakers need to learn it the hard way as a child! As far as I'm concerned, I really struggled with French spelling when I was younger ;).

> Like all those silent h's from having printed the first English bibles in Holland. Ghost, aghast etc.

That's a really cool story. Thanks!


Any time this general topic comes up, I like to drop the Chaos Poem as an example, for anyone who hasn't yet encountered it: http://ncf.idallen.com/english.html


Tibetan and French usually come up on lists of worst correspondence between orthography and pronunciation. Irish is also pretty bad. (It makes sense once you learn how it works, then stops making sense once you hear people of different dialects actually pronounce words.)


French is terrible in one direction: pronunciation to spelling is a nightmare, as most sounds can be written in at least 3 ways. «un», «um», «in», «im», «ain», «aim» and «ein» are pronounced the same, so if you hear this sound, good luck figuring out how to spell it.

But the other way around isn't really bad: a spelling almost always has an unambiguous pronunciation. (There are exceptions but they are few and far between.)

At least if you don't count ancient French, which is still prominent in place names and has a totally arbitrary pronunciation. As a little game for French speakers, try pronouncing the name of the city of «Meung sur Loire» ;).


geminated not germinated haha it's not a seed :p


Now you’re just arguing seminantics.


> but more formally it should be marked with different lang attributes (in case of HTML).

Isn't that the point of UNIcode?

To unify all text into a single character set so that it can exist side by side without messing with code pages?

Isn't this just code pages all over again?


"Han unification" (deciding that Chinese and Japanese characters were 'basically the same' and could be represented with the same set of codepoints) was a terrible idea, yes, mainly brought from the Unicode foundation not being run by people who spoke or wrote either language, and not wanting to "waste" their limited code points in the BMP on languages using an inconvenient number of characters.


Top-to-bottom writing is not just prevalent in Japanese but in Chinese too. Pick up any professionally published book in Taiwan and chances are it's written top-to-bottom.


And Mongolian, Korean, Uyghur...

If you add support for archaic systems you have to handle vertical bottom to top and boustrophedon too.


Uh, vertical writing in Korea was famously abolished decades ago, when the Chosun Ilbo, a prominent conservative newspaper notorious for its love of now almost-dead hanja usage and vertical writing, finally gave it up on March 2, 1999.


It’s true — actually I thought it had been abandoned in the 70s but I figured that was recent enough not to be “archaic” like some of the obscure bottom-to-top scripts.


Also Arabic mixed with English goes RTL for the Arabic parts and LTR for the English parts: https://docs.oracle.com/javase/8/javafx/user-interface-tutor... (bottom)


Another pain point on text AA: many renderers mix up colorspaces when antialiasing. For example, FreeType assumes a linear colorspace when calculating the antialiased bitmap, but AFAIK both GTK and Qt apply it directly in sRGB without any adjustment. The result is apparently thickened fonts when displayed black on white and thinned fonts when displayed white on black.

Edit: Some background can be read on [0].

[0] https://www.freetype.org/freetype2/docs/text-rendering-gener...
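The mismatch is easy to demonstrate numerically: blending a glyph's coverage directly in sRGB gives darker edge pixels than blending in linear light and re-encoding. A minimal sketch in Python, assuming the standard sRGB transfer function:

    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    def blend_linear(fg, bg, coverage):
        # Blend in linear light, then re-encode to sRGB.
        return linear_to_srgb(srgb_to_linear(fg) * coverage
                              + srgb_to_linear(bg) * (1.0 - coverage))

    def blend_naive(fg, bg, coverage):
        # What the toolkits above effectively do: blend sRGB values directly.
        return fg * coverage + bg * (1.0 - coverage)

    # A black-on-white glyph edge pixel at 50% coverage:
    print(blend_linear(0.0, 1.0, 0.5))  # ~0.735
    print(blend_naive(0.0, 1.0, 0.5))   # 0.5 -- darker edge, stems look thicker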


I believe it's somewhat unavoidable, as the fonts may also have co-evolved to work with renderers that work that way.


Yes - fonts looked too thin and rather ugly when the color space "bug" was fixed in early Qt 5 (on Linux / more precisely, with FreeType glyph rendering) versions. The fix was reverted to fix the look of text.


Sure, that's what stem-thickening is meant to compensate for in the cited article.


With high-density monitors on the rise, we should get rid of LCD (RGB rainbows) antialiasing, as it's really just a hack.


I strongly disagree, and the way the article describes subpixel antialiasing is not entirely accurate. The thing is, the boundaries between pixels don't actually exist; they're just a convention so we can address individual pixels and the subpixels within them. But these subpixels are not at the same position within the pixel, and this needs to be taken into account for antialiasing.

Also, regarding HiDPI: I know that you just want to get rid of subpixel antialiasing, but you really don't want to get rid of antialiasing in general. It's not really an issue with typefaces, but non-antialiased high-frequency patterns can easily generate unwanted Moiré patterns on your display, regardless of how high its resolution is.


It’s pretty much arbitrary what we as an industry have decided “text” means (and this has changed over the years), and what features it ought to support. It’s largely “what our technology easily supports today”.

For example, most people expect antialiasing (i.e., subsampling across space) but we seem to have written off the idea of motion blur (i.e., subsampling across time), except in some games.

It'd be technically more accurate to draw a quickly-dragged mouse cursor with a (correctly computed) blur, but that's not generally easy to render today. And motion blur of moving subpixel-antialiased text sounds like a nightmare right now, but with the right abstractions it might not be.


Motion blur is only "accurate" if you assume that the viewer's eyeballs remain still and don't follow the moving object. Having said that, not blurring isn't accurate either on current LCDs when the viewer follows the object. This is what black frame insertion is meant to correct.


Antialiasing (nominally) makes text easier to read. Motion blur makes everything it's applied to harder to see.


From my own reading experience, I found that individually lit "RGB" clusters tend to look white, whereas "GBR" is fuzzy(?) and "BRG" tends to appear purple on the left but green on the right. This is because the green subpixel contributes most to perceived brightness. I think it's fortunate that most displays locate it in the center.

I tried reproducing the results using actual Cleartype with black lines on white backgrounds, and "orange-blue" vertical lines appear blurrier than "black" vertical lines.


Pretty much no one wants to get rid of antialiasing in general.

I use only grayscale antialiasing, even on LoDPI screens. It looks great. Subpixel adds bad color fringes.


Doesn't it depend on the screen and user preference though? On my old Thinkpad's screen, grayscale antialiasing looks awful (fuzzier and harder to read), to the point where I've filed bugs when some aspect of a program's UI doesn't obey my system settings.


New high-end PCs indeed use high DPI displays, but color LCD monitors are now used in cars, dishwashers and microwaves. These are not high-DPI, and they often lack CPU and GPU power to handle high DPI.

Here's a library I've made for an embedded Linux device: https://github.com/Const-me/nanovg At 5" 800x480, subpixel AA improved text a lot.


Addressing subpixels according to their spatial location is in no way a hack.


Qt does do gamma correction when it comes to text rendering.


Maybe for blending, white on black text and things like that - after calculating shades the "wrong" way for compatibility with fonts' built-in expectations. See my other comment about that.


> Mercifully, subpixel has become less relevant over the years: retina displays really don't need it, and the subpixel layout on phones, prevents the trick from working (without major work). On newer versions of macos, subpixel-aa of text is disabled at the OS level by default.

Speaking of mercy, I wish Apple had disabled it only on hidpi displays and chosen a longer deprecation window for normal displays.

I prefer keeping displays at a distance slightly longer than my arm. This completely hides the slight blur of subpixel antialiasing.

Now a 1440p display has significantly degraded text rendering (even though other graphics look great), and if you want to use an external display, it must be 4K or 5K to avoid a blurry, thin mess.


Have you tried `defaults write -g CGFontRenderingFontSmoothingDisabled -bool NO`? (also needs a reboot)


Yep, but some say it's gone from Catalina. I don't try betas. Can someone on 10.15 verify that this still works?


It does work on 10.15 latest beta (10 as of writing).


Good to know, thanks. Hope it makes it to GM.


High-DPI (namely 200+ DPI, aka Retina) text rendering on desktop computers is one of the more impactful hardware developments for programmers in a while.

When you're looking at text all day every day, having every single glyph take a massive step up in fidelity (4x the pixel budget!) is not to be sneezed at.


It has also affected the print industry in a major way.

No longer is there much need to have a 200+ DPI difference between your display and what you’re printing.

Gone are the days of zooming way way in to a page-sized PS document - it’s just the size it will be for print.

For my girlfriend, this was super important.


This for sure. I love my iMac 27" 5K simply because of how much better text looks on it than even on a 28" 4K. 220 DPI is a magical number.


Want a bit of pixel shock? Find someone with an original iPhone.

I fired mine up a few weeks ago and simply couldn't believe that we used to use screens that low resolution and blurry. My eyes have become spoiled.


That is illustrated very well by the second row here: https://www.paintcodeapp.com/news/ultimate-guide-to-iphone-r...


And that screen was fairly high-res for its time!


Interesting, my main screen is 160dpi@27", and I would love to try something at 300dpi@27". It's difficult to figure out how much of a difference a screen is going to make at 160 vs 220 vs 300dpi unless you sit down and just work for a few days.


Edit: obviously that is only for the resolution...

Try looking at any recent smartphone with an at least full-HD display ;) To appreciate just how many pixels there are and how many pixels per glyph, take a screenshot and look at it on a computer monitor. It's kind of surprising.


160 vs 220 is pretty obvious to me since I’ve experienced that directly. I’m not sure about 300, I’ve never seen one of those even on a laptop before.


I miss my amber Hercules screen


Ashton-Tate's Framework on a green Hercules was my daily driver for a very long time. Multi-tasking (sort of), windows, and the ability to cut-and-paste between applications in a text-only environment years before Microsoft Windows was usable.

https://en.wikipedia.org/wiki/Framework_(office_suite)

Picture: https://winworldpc.com/res/img/spotlight/Famework%201.0%20-%...


Framework was the tits back in the day. It was basically Emacs for the office. It built on the core concept of a frame the way Emacs built on the buffer -- except frames could contain other frames, which is how Framework composed spreadsheets and documents with multiple subsections.

Like Emacs, Framework was programmable in a Lisp-like scripting language called FRED. In fact you could attach FRED macros to any frame, and they would be saved along with your work.

All this on an 8088-based PC (or 80186-based Tandy 2000).


I have happy memories of getting a monochrome bitmapped graphics driver working with Turbo Pascal and generating Clifford Pickover's million-point structures on my old amber monitor.


My first screen was one as well. Insanely high horizontal resolution for the day too.


Agree. I love my 4k monitor, and I think 8k will reach the point where I don't need any higher resolution.


I tend to prefer low-resolution displays because the battery lasts much longer. I don't really see the point of tiny pixels.


Point is that diminishing returns for great legibility kick in far north of ~100 DPI. Perhaps 330 or 440 is pointless at typical desktop viewing distances, but 220 certainly isn't.

I mean, just because ~100 looks okay isn't grounds to get stuck in a rut dictated by what was economical to mass produce in the early 2000s.


The other comments about the details rendering are great and an example of how much depth of knowledge you can find on HN (and all this is now well beyond my experience).

What I'd be curious about here, though, is what this says about large-scale software engineering. Text rendering has to be one of the most common activities across a huge range of programs, and pieces of the text rendering process are common examples in texts on object-oriented programming; indeed, the different rendering processes seem to suggest objects and interfaces readily. Yet the standard pipeline the author describes seems leakier than anything I saw twenty years ago when I dealt with such issues - nothing is solved on the software engineering side, the mess just grows.

Obviously, this is a product of adding new languages and new display models to the text rendering process, as well as standardizing the process so it accommodates different font approaches, etc.

But object orientation, as well as related models, promised, some time in the past, something akin to "encapsulate the process and adding complexity will be easier". Object-oriented programming has lost almost all its luster, but what are the alternatives? Could the pipeline be less leaky in functional programming, or something similar, or something different?

It just makes me curious.


Across all coding paradigms, it is always good to abstract at the correct level and to modularize as much as possible (really these are two sides of the same coin). The leaky nature of text rendering means that the only level you can really abstract at is the topmost level, and you’re left simply covering every imaginable case individually. Not sure if that can be solved by a programming paradigm.


But is text rendering "inherently leaky"? Or more leaky than large scale accounting data or etc?

At that rate, it seems like just about any messy, multilevel problem can be taken as inherently leaky and not something design paradigms can make easier. Maybe the solution is, "there is no solution" but as an optimist, it's hard for me to accept that.

(And yeah, I should have said "design paradigm", not programming paradigm).


If it weren't for Windows, we could move on from TrueType outlines (.ttf) to CFF outlines (.otf). .otf works nicely on non-Microsoft rasterizers. It was terrible on XP; on Windows 10 it's less terrible, but .otf fonts rendered with DirectWrite still look bolder and blurrier than with the FreeType, Apple, or Adobe rasterizers.

If Microsoft adopted the Mac font rendering aesthetic and fixed their CFF rasterizer, we wouldn't need to worry about TrueType hinting anymore. But now, since your PDFs and Web fonts get viewed on Windows, you need to use TrueType outlines with Windows-friendly hinting even if you aren't using Windows yourself.


That's the difference between ttf and otf? And that's why certain fonts look slightly different between platforms!

This is a serious revelation to me. Thank you. I've been having to switch between my Mac laptop, Linux workstation, Windows 10 workstation, and Windows 10 Amazon Workspace a lot. It's been frustrating seeing the small differences but being unable to really figure out what was going on.


The format of the outlines is the only difference between .ttf and .otf, yes. .ttf has quadratic Bézier curves, and hinting is done by interpreted programs embedded in the font (that's how Windows handles it; renderers like FreeType can instead generate hints at run-time). .otf has cubic Bézier curves and declarative hints.

The outline format of .otf comes from Adobe Type 1 fonts (.otf really is just .ttf with outline format taken from Type 1), so the .otf outline format is the older one. Apple created TrueType after licensing talks with Adobe failed long ago.

.otf is generally more compact in terms of font file size than .ttf. Really the only reason to stick with .ttf is the rasterizer situation on Windows and outlines traveling from Linux and Mac to Windows in PDFs.
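For the curious, the difference in curve math is just one extra control point per segment; here's a plain-Python sketch, independent of any particular font library:

    def quad_point(p0, p1, p2, t):
        # TrueType-style outline segment: one control point (p1).
        u = 1.0 - t
        return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

    def cubic_point(p0, p1, p2, p3, t):
        # CFF-style outline segment: two control points (p1, p2).
        u = 1.0 - t
        return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    print(quad_point((0, 0), (1, 2), (2, 0), 0.5))           # (1.0, 1.0)
    print(cubic_point((0, 0), (0, 2), (2, 2), (2, 0), 0.5))  # (1.0, 1.5)

Any quadratic segment can be converted to an exactly equivalent cubic (degree elevation), but not the other way around, which is one reason .ttf outlines convert losslessly to .otf-style curves while the reverse needs approximation.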


OpenType also supports the TrueType glyph format [0], and it's the dominant glyph format in most OTF files I could find. Really, OpenType is just CFF / Type 2 Fonts and TrueType smushed together in a single file.

[0] https://docs.microsoft.com/en-us/typography/opentype/spec/gl...


The primary visible difference is the way hinting works in both formats. TrueType-flavored fonts (.ttf) must be explicitly programmed; how good this looks depends on in which decade the font was hinted and how much money, time and skill went into it. Postscript-flavored fonts (.otf) can be outfitted with some hinting metadata and have an automatic hinting tool run on them, applying the hints is then completely up to the renderer. This has the disadvantage that the designer has less control over the end-result and the advantage that the renderer can interpret hints differently if some text display technology/stack changes.

I personally find Adobe's .otf renderer in FreeType the best compromise between aesthetically pleasing and legible and can only recommend using exclusively .otf fonts on FreeType platforms.


Someday, 90% of people won't be running Windows... It's been said for decades now, but it might happen some day.


If we consider smartphone browsers, that time has already come to pass!


Half sure, 90% no.


There are over 2.5 billion active Android devices (https://twitter.com/Android/status/1125822326183014401). There were over 700 million active iPhones in 2017, likely more today (https://fortune.com/2017/03/06/apple-iphone-use-worldwide/). So conservatively there are over 3 billion smartphones.

Microsoft claims there are over 1 billion Windows 10 installations (https://news.microsoft.com/bythenumbers/en/windowsdevices/) and that over half of all Windows installations are running Windows 10; so that means that fewer than 2 billion Windows installations are in existence.

The numbers are pretty clear at this point.


> So conservatively there are over 3 billion smartphones.

> so that means that less than 2 billion Windows installations are in existence.

In what world is 3 billion out of 5 billion >= 90%? I repeat "Half sure, 90% no."


I think I misread the GP. I interpreted it as a claim that someday, Windows won’t have a 90% market share.


Once upon a time I wrote an automated newspaper layout/preview system in Cocoa, the rendering engine in MacOS.

It was the only thing I could find that could do high quality text (ligatures, OpenType fonts, etc.), had native CMYK support, and could produce a print ready PDF or JPEG preview in milliseconds.

I can’t open source the code, but if anyone ever needs to embark on something similar I’d be happy to share what I learnt.


It seems that the term "subpixel" is overloaded. In one meaning, the term refers to the physical RGB parts of a pixel. In the other, it means that the rendered result looks different when the text is shifted by less than a pixel, even on a monochrome screen.


In the first case, I believe that it's normally referred to as "subpixel antialiasing". And in the second case, "subpixel positioning".


Yes, good point, but what would you call it if you applied both concepts at the same time?


Subpixel antialiasing plus subpixel positioning :)

They are orthogonal concepts.
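To make subpixel positioning concrete: the grayscale coverage a 1px-wide vertical stem contributes to each pixel depends on its fractional x position. A toy sketch:

    def stem_coverage(x0, width=1.0, npix=4):
        # How much of each pixel a vertical stem [x0, x0 + width) covers.
        cov = []
        for p in range(npix):
            overlap = min(x0 + width, p + 1) - max(x0, p)
            cov.append(max(0.0, overlap))
        return cov

    print(stem_coverage(1.0))  # [0.0, 1.0, 0.0, 0.0] pixel-aligned: crisp
    print(stem_coverage(1.5))  # [0.0, 0.5, 0.5, 0.0] fractional: two grey pixels

Subpixel antialiasing plays the same game at one-third-pixel granularity using the R/G/B stripes, which is why the two are orthogonal.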


Yep, this one confused me too. But this article appears to use the term consistently for the first meaning.


That article links to this page about what a massive hack ClearType (Microsoft's subpixel anti-aliasing in Windows) is and I love it: http://rastertragedy.com/RTRCh4.htm#Sec1


Wow, that's an amazing (and beautifully designed) website, thank you!


> If you're in Safari or Edge, this might still look ok! If you're in Firefox or Chrome, it looks awful, like this: https://gankra.github.io/blah/text-hates-you/transparent-cur...

It doesn't look awful in Firefox, it looks perfect (it's version 26 of Firefox though.) This is what I see https://i.imgur.com/sjvqycv.png


Mobile Safari is giving me the color stacking in your second link.


> Synthetic bold: paint every glyph multiple times with a slight offset in your text-direction.

Designers forgetting to include a bold version of their web fonts is one of my biggest bugbears. It always stands out, especially in Safari on iOS as you zoom in and out.
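For reference, the "smearing" flavor of synthetic bold the article mentions is roughly this (a sketch; draw_glyph stands in for whatever paint callback a renderer has):

    def synthetic_bold(draw_glyph, x, y, extra_px=1):
        # Repaint the glyph shifted along the text direction; each extra
        # pass thickens vertical stems by about one pixel.
        for dx in range(extra_px + 1):
            draw_glyph(x + dx, y)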


I think Chrome (PC/Android) implements synthetic bold by expanding the glyph outline, then drawing it with subpixel off. It's prettier than Firefox's "horizontal-only extrusion", but sometimes leads to funny artifacts at sharp corners.


How Firefox implements synthetic-bold varies across different platforms, actually, depending on the features of the (platform-specific) font rendering backend it's using. It uses the paint-multiple-times-with-a-small-offset approach when the font backend doesn't seem to offer a "better" synthetic-bold option.


Yeah the smearing approach is the only one that merits a callout from my position in the pipeline, because the other approaches are handled by your shaper or rasterizer, which are usually 3rd party libs that can be mostly regarded as black boxes.


> forgetting

Or rather deliberately not including to decrease download size.

Though OpenType variable fonts are a thing now. e.g. Inter https://rsms.me/inter/ has a variable version


Maybe deliberate, but I don’t think it’s a good idea. This is an optimisation where the user actually sees the artefact/degradation. And that font would otherwise be used over the whole session.

Excluding bold copy on the page or using system fonts in the design seem like better alternatives for saving bandwidth. Or appropriate use of the font-display property. Otherwise I see this as choosing custom fonts and only half implementing them for a design.

And variable fonts still need two sets of outlines to interpolate between glyph weights, so they may not completely answer the problem.


That's not the worst, for sure. Worse is not packing a proper italic version, leaving the text renderer to draw each glyph at an angle to make oblique text (which is the difference between oblique and italic), for a truly hideous result.


Fun fact about subpixel antialiasing: at some point, Firefox was printing the subpixels when printing PDFs displayed with PDF.js. That looked awful. I'm actually not sure whether that was ever fixed, as I haven't printed in years (also, I've had subpixel AA disabled for a while because my dual monitors aren't mounted in the same orientation).


I got some printed marketing material at a trade show last week. Screenshot of grey sub-pixel text, enlarged and printed in CMYK onto flyers.

Obviously blurry and color-fringed; hard to believe anyone approved that artwork or finished product.


Perhaps it was an artistic choice? Doubtful depending on what the marketing material was for but I’ve seen interesting stuff along those lines. A good analogy would be how some people like the sound of “cheap” synthesizers.


So your printer would literally leak information about your display?


Probably because pdf.js is implemented using <canvas> so it's basically like printing screenshots of the PDF


They have been working on SVG backend for quite a while to get real vector printing, still not done.


I am not surprised, given the poor SVG support of Firefox (and browsers in general). Also there are gradient types that PDF supports but SVG doesn't. I am also not sure why they would need to go through SVG for printing.


SVG is the only way to get vector printing through a browser currently, unless it's just normal HTML/CSS, which isn't enough to render a PDF.

Canvas is just a bitmap, so when printing, pdf.js renders to a canvas at a certain DPI, which I believe is less than 300 (150?). Even then it uses huge amounts of memory and ends up with fuzzy text.

You don't want bitmaps going to printer for text and line art, you want vector so it can come out at 600+ dpi while using minimal memory.


> SVG is the only way to get vector printing through a browser currently

This is what seems weird. Why is that? Also it's not like most printer drivers don't need to convert it back to ps or pdf before printing.


You can't convert a bitmap back into vectors (well, maybe with image recognition of some sort). Once you render to canvas it's a bitmap, and it stays that way all the way to the printer.

SVG is turned into the appropriate PS/PDF vector drawing commands by the browser print engine; canvas just gets sent as a bitmap in the printer language, since that's all that's left.

I believe pdf.js incorporated and modified https://github.com/gliffy/canvas2svg which implements the canvas api but instead creates an SVG dom.


Yeah, ok, so we have essentially PDF -> SVG -> PDF/PS because of too many abstractions. It would make sense to expose some internal printing API to the embedded PDF reader so all this nonsense wouldn't be necessary.


Firefox tried to with mozPrintCallback specifically for pdf.js, then went away from that. Not sure what happened behind the scenes exactly as they wanted to get it standardized for canvas printing.

Printing is a forgotten corner on the web, would be nice if a mainstream browser implemented the full css print spec too so we could create page perfect output without relying on PrinceXml...


>Rendering text, how hard could it be? As it turns out, incredibly hard!

Related undesired complexities/heterogeneities I encountered while implementing a simple text drawing API on top of various libraries (cf. "Pain points", near the bottom):

https://github.com/jeffhain/jolikit/blob/master/README-BWD.m...


> Libraries don't always provide font metrics, and when they do, these metrics are not reliable, as some glyphs can leak outside of their theoretical box, possibly depending on the used font size, or usage of diacritical marks

God yes. I'm building a UI design tool, and running head-first into this reality. I had no idea it was this hard.


It can go the other way, too. It is mathematically impossible to make a font that has the exact same vertical metrics/line-spacing everywhere, so everyone sets the three value sets provided by the OpenType spec to the same values for web fonts. And then you try to render a font with tall glyphs (i.e. Arabic) on Android and find that it actually looks at the font-wide bounding box in the head table to determine metrics. You can't change them because then Android clips everything legitimately outside that box.
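For reference, the three value sets (plus the head-table bounding box that Android reportedly consults) can be dumped with fontTools. A sketch; "SomeFont.ttf" is a placeholder path:

    from fontTools.ttLib import TTFont  # pip install fonttools

    font = TTFont("SomeFont.ttf")
    hhea, os2, head = font["hhea"], font["OS/2"], font["head"]

    print("hhea:     ", hhea.ascent, hhea.descent, hhea.lineGap)
    print("OS/2 typo:", os2.sTypoAscender, os2.sTypoDescender, os2.sTypoLineGap)
    print("OS/2 win: ", os2.usWinAscent, os2.usWinDescent)
    print("head bbox:", head.yMin, head.yMax)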


"If you're in Firefox or Chrome, it looks awful"

Looks correct in Firefox 69.0.1 on Windows here. In Chrome it looks awful as described. In both the "bow" at the top overlaps the text on the previous line.


Chrome/Blink is in the middle of rolling out a large layout engine change. Chrome 76 behaves more like Safari, Chrome 77 behaves as the article describes.

People also might be (1% prob.) placed into a "holdback" experiment group which has the old behaviour in M77.


Looks like they changed things. The link [1] given is not working in Firefox, while the website is telling me to use Firefox.

[1] : https://colorfonts.langustefonts.com/disco.html


I get the "awful" look in Firefox if I enable Webrender.


This is a nice intuitive introduction for the naive. Of course - there is more. @euske explains some extra trouble for CJK. There is also the hell of _direction_ - RTL vs LTR text on the same line. There is a complex bidirectional algorithm that's part of the Unicode standard [1], and it involves breaking text up into runs of RTL and LTR text - but it's not at all trivial, since many characters are actually neutral, like punctuation marks.

What's more, Unicode has a lot of control characters that don't get rendered into glyphs but affect text rendering. These can be pretty tame, like space and non-breaking space - but some are rather nasty, like pushing and popping a stack of RTL-vs-LTR state (yes, it nests).

[1] : http://www.unicode.org/reports/tr9/
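A quick taste of the nastier controls in Python (the code points are from TR9; the string is just illustrative):

    RLE = "\u202B"  # Right-to-Left Embedding: pushes RTL onto the state stack
    PDF = "\u202C"  # Pop Directional Formatting: pops it back off
    LRI, RLI, PDI = "\u2066", "\u2067", "\u2069"  # the newer isolate controls

    # Invisible characters that change how everything between them lays out:
    s = "price: " + RLE + "abc 123" + PDF + " end"
    print(len(s))  # the controls count in the string but render no glyph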


It seems like you’d need to read at least a few dozen languages fluently to be able to meaningfully reason about some of these issues. I can’t imagine what sort of educational background experts on rendering would have.


I'm not fluent in any language (except maybe English), but this was my job on Android for about five years, and I still have my hand in. I got very good at being able to spot various forms of incorrect rendering, and of course called in the experts when needed.

One of my favorite examples was Devanagari shaping. In the word ट्विटर (tvitar = Twitter), does the "i" matra shape before the "tv" or in the middle of it? In my sample, people accepted it either way (and you'll see both depending on the font), but there are lots of examples where one is just wrong.


As "Twitter" is a foreign word.there is no set standard to it.

If I were to write it, I would write "tv" first, and draw an extended variant of "i" to cover "tv".

In text rendering, this translates to a custom ligature for an unexpected combo. I am not sure how feasible it is to add ligatures for every such combo.

In real life, as long as the reader "gets" it, they would assume that it is the right way to write it :)


I've been doing text rendering professionally for a few years now and I wouldn't consider myself fluent in anything other than English and maybe German (I'm a bit rusty in the latter). I can also speak some Japanese, but, other than that, that's it. The rest of my knowledge is a smattering of weird details about various languages that are useful for writing text renderers but terribly unhelpful for actually understanding the languages.


Yeah there's a reason all my examples use the same ~3 languages, and only a few random fragments from them. You just need a few examples that demonstrate that something can happen. Everything actually works pretty uniformly, so as long as you have a few examples that cover the interesting cases and handle those cases in a general way, everything tends to work fine. The actual font formats are much more nasty and corner-casey. (Thank god you don't need to deal with that, eh Patrick?)

Same reason I prefer to use imperfect terms that capture the important aspects of the problem-space from an english-speaking perspective. Are ligatures the right word for how arabic and marathi get shaped into glyphs? Maybe not, but as long as you get that the æ ligature can be synthesized from ae by a font, and that this is super important for some languages, you're on the right path.

I don't even know what the fragments I use mean, lol. I like to assume I'm just copy-pasting Arabic swears around. Apparently at least one is just Manish's name?


> Apparently at least one is just Manish's name?

Yes, I noticed that. :) Both मनीष and منش are "Manish", though he didn't bother with the vowels in the Arabic-script version, so alternative readings are possible.


Eh, you really need to understand how scripts work without necessarily being able to read them.

You may need to be able to read some scripts, but learning scripts is much easier than learning languages. Most text rendering experts I know seem to be bilingual or monolingual, but understand the mechanics of a lot more scripts (and can read a couple). Many of them are people who taught themselves about other scripts as they went along.

It's quite easy to talk about text from other scripts in a more clinical way without actually being able to read the script: I've often had text rendering discussions about the Perso-Arabic or Devanagari scripts with folks who can't 100% read the script, but know the mechanics of the script: you can totally describe things in terms of general categories like consonants and vowels (in both scripts they behave differently, an equivalent in the Latin script would be talking about letters and accent marks).

I once wrote https://manishearth.github.io/blog/2017/01/15/breaking-our-l... which goes through the various ways scripts deviate from Latin that most programmers should know. There's a lot that isn't listed there (which only folks working specifically on text would need to care about), but it's not hard to acquire that background to a level well enough to be effective.

As demonstrated in that post you can also "collapse" a lot of scripts together into one set of scripts with similar behavior. A lot of the weirdness in text shaping, for example, is covered by the Perso-Arabic script and any one Indic script. I like to say that there's a reason so many people involved in text shaping are Persians.

Personally, while this stuff isn't my dayjob, I can read around ten scripts to varying degrees of success but I know like ... a couple words from each language whose script I can read. It's not hard to learn to read a script, and as I mentioned you don't even need to be able to read them: If we're counting understanding the mechanics of scripts, my 10 balloons to a number I can't even count, because I can for example now include most Indic scripts. I've had productive conversations in Unicode spaces about e.g the Punjabi script without being able to properly read it.


I realize it’s probably too late, but as a graphics person, every time I read about text antialiasing, I wish we could rewind and fix the terminology of “Subpixel antialiasing” and “Greyscale antialiasing”. It seems problematic that greyscale antialiasing involves subpixel sampling and color channels.

I’d suggest “LCD antialiasing” to replace “subpixel antialiasing”. And regular antialiasing doesn’t really need a term, it was already established long before LCDs existed. AFAICT “Greyscale antialiasing” was made up only to differentiate regular antialiasing from LCD antialiasing.


"Subpixel" makes more sense than "LCD": the technique applies to any screen with illumination units smaller than a pixel (eg. LED screens) so "subpixel" makes perfect sense.


Yes, it makes sense. My beef is not that subpixel doesn’t make sense, it’s that the term is overloaded. Standard antialiasing also involves subpixels and the term makes perfect sense there too. From my perspective, “subpixel” fails to differentiate.

Now, using "greyscale" to mean color, just not LCD color, that one doesn't make as much sense to me. Some articles call it "whole pixel antialiasing" or "traditional antialiasing"; those seem better, but maybe we can just assume antialiasing is the regular kind, and only need a term for LCD-style antialiasing.


How does standard antialiasing involve subpixels - are you talking about retina or something?

It applies alpha to pixels of whatever size right? What's the sub- part?


It’s common in both software and especially hardware to render at a higher resolution and then filter & downsample. The 2x2 or 3x3 or 4x4 etc pixels in the higher resolution that correspond to the 1 pixel at final resolution, those are called subpixels. For example, you can read about subpixels in descriptions of traditional aliasing as well as GPU antialiasing.

https://en.m.wikipedia.org/wiki/Spatial_anti-aliasing

https://en.m.wikipedia.org/wiki/Multisample_anti-aliasing

The term subpixel is often referring to virtual pixels used to compute some final pixel value, as opposed to the LCD specific idea of a physical subpixel that’s red, green, or blue. Look around, for example, for discussions on subpixel resolution, subpixel positioning, subpixel animation, etc. Those are usually talking about the virtual kind used in traditional antialiasing, not the physical LCD subpixels.
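For example, the classic box-filter downsample over those virtual subpixels (a NumPy sketch):

    import numpy as np

    def downsample(hi, n=3):
        # Average each n x n block of subpixel samples into one final pixel.
        h, w = hi.shape
        return hi.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

    hi = np.zeros((6, 6))       # a 2x2 image rendered at 3x resolution
    hi[:, :4] = 1.0             # an edge that falls mid-pixel at 1x
    print(downsample(hi))       # [[1. 0.333] [1. 0.333]] -- the grey AA fringe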


To add to the confusion, OpenGL uses the term 'fragment' for what you're calling subpixels.


Yes that’s an interesting point, the “fragment” terminology can be confusing at first. But I think of fragments as something slightly different. “Fragment” means basically pixel or subpixel geometry, it’s output from the rasterizer. The fragment shader is what takes a fragment as input and then outputs the pixel or subpixel color. Fragments aren’t necessarily involved in GPU antialiasing, you can have subpixels in OpenGL without fragments, if you’re doing image processing without rasterizing. In any case, it’s a good thing that they picked a different word to define, and didn’t just call it “subpixel”, right?


"LCD" antialiasing worked fine on trinitron CRTs which had a well defined RGB alignment. Calling it LCD alignment is more confusing.


Sure, that’s fair. Maybe LCD is a bad suggestion. It’s not like I’ll be able to change it, but just for fun, is there a better term that isn’t confusing? Because subpixel and especially greyscale seem pretty confusing.


I agree that greyscale anti-aliasing doesn't seem like the right term at all.

Perhaps, subpixel addressed anti-aliasing vs whole-pixel addressed anti-aliasing.


> So subpixel-AA is a really neat hack that can significantly improve text legibility, great! But, sadly, it's also a huge pain in the neck!

When playing around with FreeType's ftview demo program, text rendered with an OTF/CFF font using the Adobe CFF renderer with stem darkening enabled actually looks pretty good with grayscale rendering. Subpixel rendering is not strictly necessary.

> https://gankra.github.io/blah/text-hates-you/#aa-breaks-glyp...

I think the author mixes up two concepts here, hinting and subpixel positioning.

A TrueType font can change stuff around on the x and y axes. Applying hinting on the x-axis messes with your layout and prevents subpixel positioning on the x-axis. What you do is apply hinting on the y-axis only (FreeType calls this slight hinting; DirectWrite more or less does this -- not quite, but close enough) so you are free to shift glyphs around on the x-axis. This helps with displaying text with a more even texture on LoDPI screens. On *nixes, Chrome does this, Firefox doesn't.
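In other words, slight hinting amounts to roughly this transform on outline points before rasterization (a deliberately crude sketch; real hinting interpreters do far more than snap coordinates):

    def slight_hint(outline_points):
        # Snap y to the pixel grid so baselines and horizontal stems stay
        # sharp, but leave x alone so horizontal subpixel positioning and
        # text measurement are unaffected.
        return [(x, float(round(y))) for (x, y) in outline_points]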


+1. Subpixel AA seemed quite necessary back in the day with Windows XP and tiny, precisely hinted Tahoma/Verdana/etc.

These days, with bigger better bolder fonts, better renderers, and the departure from pixel-perfect stuff in favor of higher resolutions, subpixel is absolutely unnecessary.

Even on LoDPI screens I turn it off (assign none to lcdfilter in fontconfig) because the color fringes are so ugly. I always notice them.


Subpixel AA is absolutely not unnecessary these days. I've seen a few programs switch away from it recently (in favor of grayscale AA) and every time it's been noticeably more blurry. For example, Discord had a bug that disabled it just a few weeks ago (which has since been fixed), and Twitch's redesign also disabled it in their desktop app (which is why I now use Twitch almost exclusively in Firefox). Both of these apps are based on Electron/Chromium though, so text rendering, even with grayscale AA, could be better in some other programs.

Unless we completely abandon small font sizes or switch exclusively to high-dpi screens (will likely happen eventually, but we're not there yet), subpixel AA can look much sharper than grayscale AA if properly configured and you're not sensitive to the color fringes (personally, sitting at about an arm's length from my monitor, I don't notice them at all). And I'd rather not have text rendering quality suddenly downgraded on my existing peripherals before that happens.


For the color example they say "here's what they look like in Chrome and safari"... and show some horrible mess, yet I'm on Chrome and that isn't what I see at all?

The text I see looks fine, although the font is different.

https://imgur.com/a/5JZVZCB


I wonder if it is partially OS-dependent? I'm also on Chrome, but see something closest to the Safari version.


Some things are indeed way harder than they seem. Some others I came across:

- time

- indoor positioning


>3.2 Style Can Change Mid-Ligature

>Here's what they look like in Chrome and Safari:

Running 76.0.3809.132 here on Mac OS and it looks very different from the picture. Bug partially fixed?

https://i.imgur.com/CcjATLO.png


It probably depends on the OS. I assume screenshots in the article are on Linux. On Windows it looks different still: https://i.imgur.com/ri2yUrR.png


The article has Safari and Edge screenshots though, so I would assume Windows was used.


On the other hand, here's 3.1 in my macOS Chrome; I don't think it's supposed to look like this:

https://imgur.com/HLiFPIt


I find it... odd that an article on font rendering is using a 14px forced font size (default: 16px) making the article really hard to read on my 14" laptop screen from a foot away. Subpixel antialiasing on my LCD screen is just not my problem here, author. :)


I am a partially vision-impaired man, and I really can't stand the dark-grey on light-grey texts that a lot of modern web pages have. They are essentially unreadable to me. In order to comfortably read this article, I had to go to Firefox's Style Editor and make the text actually black.

And it is ironic that an article about text rendering uses such low-contrast and small-sized text.


Background is white and the text is #333. Contrast ratio is 12.63:1, which isn’t bad at all.

Can’t really fault the author here


> In order to comfortably read this article, I had to go to Firefox's Style Editor and make the text actually black

Does the Reader View in Firefox help here?


Completely forgot about it. Yes, it does improve readability for me.


I mainly prioritize making my pages work reasonably with different zooms and window sizes, since all I really know for certain is that there's no ideal for everyone.

The font settings are just from a copy of bootstrap from like 6 years ago, because I have absolutely no eye for this sort of thing.


One man's antialiasing is another man's blur. I, for one, prefer crisp bitmap fonts.


Where do you find bitmap fonts that are high resolution enough for modern displays?


No idea, sorry, I have never used a "modern" display. Yet terminus and the standard bitmap X fonts go to quite large sizes. They are certainly readable even when using ridiculously tiny pixels.


I prefer rendering with strict grid fitting: crisp stems but antialiased arcs. Unfortunately, on current Linux it does not work even with full hinting enabled via the standard fontconfig settings; one has to set the magic variable FREETYPE_PROPERTIES=truetype:interpreter-version=35.


What is the best sans serif bitmap font?


Lucida Sans.


that's not a bitmap font


Sure it is: -b&h-lucida-medium-r---14--75-75--*-iso10646-1


You could use a font specifically designed to be displayed that way.


Really fascinating read. It reminded me of Tom Scott's video on Numberphile, about time zones: https://www.youtube.com/watch?v=-5wpm-gesOY

Maybe it's the way Tom presents it, but I do get the distinct impression that (fully) dealing with time zones is an even more maddening incomprehensible quagmire of a task than rendering text as laid out in this article.


There are few things that scare me as a software engineer. Regex usually comes to mind first, but then I remember ... text rendering.

Just reading Microsoft's Michael J Kaplan's blog (RIP) about Unicode, and how that plays into rendering, was daunting enough.

But this article really points out how ridiculous it can get. No thanks, I'll stick to the easy stuff like my current project, Angular 8 / .Net Core, and let Firefox handle the tricky bits.


Why do both Windows and FreeType "faux italic" tilt the font too much for my liking? And why is the "point of zero tilt" at the baseline rather than 25-50% of the way up? This makes faux-italic text appear to be too far to the right. I feel like changing these 2 would make faux italic much less obtrusive.


> That said, if you take a screenshot of subpixel text you will absolutely be able to see the colors if you resize the image

I once tried to see the subpixel rendering using the Microsoft's magnifying glass tool. Got disappointed - the tool just disabled subpixel rendering. Globally, not just the area being magnified.


Subpixel AA is basically just trading color resolution for spatial. It’s true that it’s not composable, but you can actually “fix” that by keeping it in 3x horizontal format as long as possible, then actually rendering to 1x with subpixel detail later.

P.S.: because subpixel effectively gives you 3x horizontal resolution, it looks terrible to still fit to a pixel grid horizontally. Check the Anti-grain geometry article on font rendering for more cool font rendering bits, I recall it being very interesting.
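i.e., something like this, assuming an RGB stripe layout and ignoring the low-pass "LCD filter" a real renderer would also apply to tame the fringing (a NumPy sketch):

    import numpy as np

    def pack_subpixels(coverage3x):
        # coverage3x: glyph coverage rendered at 3x horizontal resolution.
        # Each horizontal triple becomes the R, G, B channels of one output
        # pixel (black text on a white background).
        h, w3 = coverage3x.shape
        rgb = coverage3x.reshape(h, w3 // 3, 3)
        return 1.0 - rgb  # full coverage = ink = dark subpixel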


I miss non-AA’d text. Like PalmOS or Windows 95.

Gotta love the sharpness of it somehow.


I agree; I disable antialiasing to the best degree that I can when I code. This is not possible on Retina displays on macOS though, because they render at a higher resolution and then downsample.

I was pretty disappointed the original article didn't talk much about hinting.

Even old operating systems did a great job at type hinting to align glyphs to pixels (even when antialiasing is on). "Modern" operating systems (especially MacOS after Mavericks) assume retina and throw hinting to the wind; the consequence is pervasive blurry text even on retina displays.


I like sharp, pixely text too but I don't like the color fringing you get on LCDs when you disable subpixel antialiasing. I wish I could get a display without this consistent chromatic abberation. One way to accomplish this would be to alternate subpixel layout from one pixel to the next. For example:

RGBBGR

BGRRGB

Then the errors in color would cancel out, much like in serpentine dithering. You could do the same with a Pentile pixel layout like this:

RGGR

BWWB

BWWB

RGGR

Either way, instead of seeing color fringing, I imagine it would give a grainy appearance similar to what dithering looks like.



