CSS now has the ability to override some font metrics, so you shouldn't need to edit the font files directly.
And these overrides can be applied to local fonts as well (generally used to ensure the metrics of the local fallback font matches the yet-to-be-downloaded web font, preventing a layout shift when the web font is swapped in)
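In practice the fallback-matching pattern looks something like this — a minimal sketch, where the font names and all percentage values are illustrative rather than measured (the override descriptors themselves are the real CSS ones):

```css
/* The real web font */
@font-face {
  font-family: "Web Font";
  src: url("/fonts/webfont.woff2") format("woff2");
}

/* A metric-adjusted alias for a local fallback, tuned so its glyphs
   occupy roughly the same space as the web font's */
@font-face {
  font-family: "Web Font Fallback";
  src: local("Arial");
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
  size-adjust: 105%;
}

body {
  font-family: "Web Font", "Web Font Fallback", sans-serif;
}
```

While the web font is still downloading, text renders in the adjusted fallback with (near-)identical line boxes, so the swap doesn't reflow the page.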
Re the stuff about @-moz-document at the end: does anyone remember exploiting IE's CSS-parsing bugs to present different rules to IE vs. Firefox, especially to work around IE's broken box model? I remember actually using this[0] hack somewhere long ago:
Speaking of font differences, the default Windows emoji font is so bad I don't understand why Google doesn't just ship their Android emoji font with Chrome and default to that.
"Funny" thing about the Windows emoji font: flags are not included. I guess Microsoft is worried they could hurt its business (e.g. the Israeli flag, Palestinian flag, Taiwanese flag, etc.).
So, if you need flag icons for any reason (e.g. phone number country code input), you'd have to use a different emoji font.
The comments on that article are hilarious. Never in my life would I have expected someone to complain about a lack of professionalism in an emoji set. Since when are emoji professional?
If you're asking whether we use emoji in professional settings to convey important information: yes, we do, in spades.
Imagine you see an alert in slack about a critical system. You can type a full sentence to explain you've seen it and will investigate what's happening, or you can stick an :eyes: reaction on it and actually focus on investigating.
Same if you want someone to wait a bit because you're thinking etc.
You can always type full sentences to convey the same information, but in a professional setting conciseness and efficiency are also valuable, right?
Agreed. It's not just about smiley faces. I use all the little caution signs, clocks, hourglasses etc in UI design all the time. Easiest way to get icons in places they are a pain to put.
There's a blurriness, but it's still pretty workable. You won't ask someone to look at a problem just by sticking eyes on it; that needs a lot more communication.
But you can look at an issue, see that three of your coworkers have stuck an eyes emoji on it, and mention them when asking what they think of it, getting clarification on whether they've just seen it and didn't care, are still digging deeper, etc.
In a way, the imprecision is what gives the versatility; otherwise many emoji take on more limited, standardized meanings over time. Like a "done" sticker that's enough to give a status on a request in a channel, a thumbs up on a proposal, or a green checkmark on things that explicitly needed to be checked.
At least for the last couple of years. In fact, actual court cases have been fought over their meaning; those cases included actual depictions to try to point out that some were meant to be whimsical and some were meant to be serious.
I quite liked them as well. Made them very readable even in small sizes.
But then again, I also loved Android's old blob emojis[0]. Some specific ones were weird, but mostly I really liked the amount of personality and movement they were able to express. As far as I've been able to tell, though, most people seemed to hate them, for whatever reason.
Most people I know really enjoyed the blobs too. I never bought Google's justification for the redesign, and I don't think they ever published hard data about it.
Using FontForge to regenerate an export is kind of drastic. What you want is ttx from the fontTools font-manipulation toolkit. ttx can generate an editable XML file of the font's tables that is easy to make changes in. hhea is obvious to find; for "win", you probably want to look inside OS/2. If you can afford it, the go-to application for this sort of post-production alteration is DTL OTMaster.
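A sketch of that ttx round trip, assuming a font named MyFont.ttf (file names are illustrative; ttx ships with fontTools):

```shell
# Dump only the metrics tables to editable XML (writes MyFont.ttx):
ttx -t hhea -t "OS/2" MyFont.ttf
# In the XML, edit ascent/descent in <hhea>, and usWinAscent/usWinDescent
# (the "win" metrics) in <OS_2>; then merge the edited tables back into
# the original binary:
ttx -m MyFont.ttf MyFont.ttx
```

With -t only the listed tables are dumped, and with -m every table you didn't touch is carried over from the original font unchanged.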
Macs are king when it comes to font rendering. It was prioritized in the original Macintosh and Apple still has the most solid font engine to this day.
macOS is literally the only major desktop OS where sub-pixel rendering was removed on purpose, despite the negative impact on everyone using <200 dpi monitors, i.e. most office workers at companies that won't spend $1500 on 5K monitors.
Probably 90-95% of Macs sold come with a monitor, and they're all high-DPI now. Apple doesn't really care about optimizing for some giant 1080p monitor. An iMac is pretty cheap.
MacBooks are pretty popular in my corner of the corporate world. Sure, my MBP 16" has a high-DPI built-in display, but I'm never going to get the budget for an Apple Studio Display. Offices are equipped with Dell/Lenovo/HP USB-C monitors that are between 100 and 150 dpi. I'm not talking about 20 year old pixel density here, but modern ultra-wide or UHD monitors.
There's a difference between not caring about low DPI anymore, and crippling font rendering on purpose.
> I'm not talking about 20 year old pixel density here, but modern ultra-wide or UHD monitors
Because the monitor industry largely got to 4K and said "eh that'll do", a lot of those "ultrawide" and UHD displays with large sizes literally do have twenty year old DPI.
Increasing physical panel size used to (usually) translate to higher resolution, but at some point most manufacturers stopped doing this, so you get the same 4K resolution stretched across ever-larger physical sizes, with ever-decreasing DPI.
macOS doesn't even render to its own high-pixel-density displays correctly, owing to the (in my opinion) very naïve algorithm used. If you select any resolution that's not an integer factor of the display being rendered to, there is blurriness[1]. macOS renders to a viewport that is 2× the resolution of the 'looks like' setting, and then scales it down to the actual monitor resolution. Clearly, at any non-integer multiple resolution, there is blurring.
This is problematic enough that it defeats Apple's 'good font rendering'. I see shimmering and ringing artifacts around regions of high contrast (i.e. essentially all text) with such a non-native setup. I am forced to use the integer factor resolution, which makes things much too big. Of course, I can scale my browser and VS Code, but besides that the rest of the OS is comically large. Needless to say this also comes with the large performance impact of always rendering to a viewport four times the resolution of a given display. It is also non-intuitive to program against, especially using APIs like GLUT, SDL, etc.
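The arithmetic behind that 2×-then-downscale behavior can be sketched as follows (the resolutions are illustrative examples, not anything queried from the OS):

```python
def macos_scaled_mode(looks_like, panel):
    """Sketch of macOS scaled-resolution rendering: draw at 2x the
    'looks like' size, then resample down to the physical panel."""
    backing = (looks_like[0] * 2, looks_like[1] * 2)
    # Downscale factor applied to the backing store; any non-integer
    # value means text is resampled and therefore slightly blurry.
    factor = panel[0] / backing[0]
    return backing, factor

# A 4K panel (3840x2160) set to "looks like 2560x1440":
backing, factor = macos_scaled_mode((2560, 1440), (3840, 2160))
# backing == (5120, 2880), factor == 0.75 -- a non-integer downscale
```

Only the "looks like" settings where the panel is an exact 1× or 2× of the backing store avoid the resampling step, which is why the integer-factor resolution is the sole crisp option.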
Windows is the only OS that actually does high pixel density rendering correctly for programs that support it[2]. Windows works with the given monitor resolution, and scales UI elements according to the percentage value set (100% is 96 DPI). This is a lot more involved to program for, but when done right, it works exceptionally well. Everything that's not a raster image is always pixel-perfect. If it's not (and people have complained about this[3]), then there's a system setting/registry patch to make it so[4].
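The 96-DPI baseline arithmetic is simple enough to sketch (the helper name and values are illustrative, not a Windows API):

```python
def windows_scale(percent):
    """Windows display scaling: 100% corresponds to 96 DPI, and UI
    coordinates are multiplied by dpi / 96."""
    dpi = 96 * percent // 100
    return dpi, dpi / 96

# 150% scaling -> 144 DPI, so a nominal 16px glyph rasterizes at 24px:
dpi, scale = windows_scale(150)
# dpi == 144, scale == 1.5, 16 * scale == 24.0
```

Because the program rasterizes directly at the target DPI instead of being drawn at one size and resampled, vector content stays pixel-perfect at any percentage.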
Windows also handles moving program windows between displays set to different DPIs quite seamlessly. The only issue I see is when a new display with a different scaling setting is set as the primary (and only) display, and then Windows Explorer scales things weirdly—which is fixed by restarting Explorer.
On Linux... forget it. On Xorg there are a million environment variables and per-app configurations to set (just see how long the HiDPI article[5] on the Arch Linux wiki is). On Wayland, things are better, but not yet for me, since I use an NVIDIA graphics card, KDE Plasma, and Chrome, which is the worst possible combination for Wayland. It's not mature enough for this setup: the Windows-esque rendering (they call it 'fractional scaling') was merged only slightly more than a year ago[6], and Plasma 5, my DE of choice, still doesn't quite use it yet.
> the Windows-esque rendering (they call it 'fractional scaling') was merged only slightly more than a year ago[6]
wp-fractional-scale-v1 is not necessary to implement fractional scaling; it's there to make it easier and to solve some edge cases. It was inspired by already existing fractional scale implementations.
Fun fact about the first Windows 95 fonts in Arabic: Microsoft decided to go cheap and not pay Boutros Fonts, a London-based type foundry that designs Arabic fonts, instead buying a cheaper, derivative 'pirated' copy of that same font for $5k. A lengthy legal battle ensued, with Microsoft ultimately winning it through the might of its money and legal team.
I struggled with this recently, upgrading QtWebKit from Qt5 to Qt6. Qt5 used platform-specific height values, while Qt6 now uses the win height values, for consistent rendering across all platforms. It's better in concept; the only issue is that the really big cross-platform browser _doesn't_, so it turns into "why doesn't this font rendering look like Chrome on Mac/Linux?"
It’s not better “in concept,” it’s worse. Not only is it a non-native API, it’s abandoning one aspect of even attempting to mimic being native in favor of “branding.”
If someone insists that an app on the Mac should look the same as on Windows they’re an idiot. If they say an app on the Mac should look the same as a web page on Windows or Linux they’re a malicious idiot.
> The ascent is the distance from the baseline to the top of the tallest glyph, so typically 1em. The descent is the distance from the baseline to the lowest point in any glyph. The descent can be different because on web fonts, glyphs like g or p can have tails that extend below the baseline.
Huh? Different from what? It's described as exactly the same thing as the ascent, but down. And why does the author specify web fonts? This sounds like it applies to fonts in general.
Edit: I think the author is just trying to say the descent varies across fonts, even measured in em. I think "web fonts" and "glyphs can have tails" are just red herrings.
> What's that? You say you want even more infuriating font stories? Well don't you worry, I'll be back soon with another diatribe about font thickness and antialiasing on the web on Mac vs. Windows.
Did he ever write about this? I can't see anything about it in his list of articles.
ascent-override - https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face/...
descent-override - https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face/...
line-gap-override - https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face/...
size-adjust - https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face/...