- You can copy a text symbol and put it into a search engine to figure out what it means.
- Screen readers can recognize text symbols without needing an alt text.
- Text is more flexible on dark or light backgrounds and has better contrast.
- Text is better suited for virtual keyboards and auto-correct or auto-replacement.
- Text can be used in input fields.
- Text has native rendering hints built-in, such as baseline and kerning, which improves inline rendering.
- Text is more flexible when choosing a glyph. For example, you can render it larger for the visually impaired, or change fonts and put a special glyph into the font.
- Text is more flexible when choosing color vs. black-and-white output. Try printing an emoji to see what I mean.
Tooling could be built to do all of that for images too.
For example, you could take part of an image and put it into a search engine too... That search engine would take images as input, and where the image represents writing of some kind, it would return relevant results.
Screen readers could look at an image and read out any writing it contains, just as some screen readers can already describe an image ("person in boat holding flag"). Deep nets that can do this reasonably well have been around for years now.
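A minimal sketch of the idea, assuming the third-party packages pillow and pytesseract (with a local Tesseract install) are available; the file name and crop box below are placeholders:

```python
from PIL import Image
import pytesseract

image = Image.open("screenshot.png")

# Optionally focus on the region the user selected,
# e.g. the part containing an unfamiliar symbol.
region = image.crop((100, 100, 300, 200))  # (left, upper, right, lower)

# Extract any recognizable writing from that region via OCR.
text = pytesseract.image_to_string(region)

if text.strip():
    print("Recognized writing:", text.strip())
    # A screen reader could speak this, or a browser could feed it
    # to a text search engine on the user's behalf.
else:
    print("No writing recognized; fall back to an image captioning model.")
```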