
I strongly disagree.

- You can copy a text symbol and put it into a search engine to figure out what it means.

- Screen readers can recognize text symbols without needing an alt text.

- Text is more flexible on dark or light backgrounds and has better contrast.

- Text is better suited for virtual keyboards and auto-correct or auto-replacement.

- Text can be used in input fields.

- Text has built-in rendering hints, such as baseline and kerning, which improve inline rendering.

- Text is more flexible when choosing a glyph. For example, you can render it larger for the visually impaired, or change fonts and put a special glyph into the font.

- Text is more flexible when choosing color vs. black-and-white output. Try printing an emoji to see what I mean.
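The first point above can be sketched in code: a copied text symbol carries its identity with it, so even a standard library can name it without any image recognition. A minimal sketch (the umbrella symbol here is just an arbitrary example):

```python
import unicodedata

# A text symbol is a codepoint, so its meaning can be looked up
# directly; an image of the same glyph carries no such metadata.
symbol = "\u2602"  # ☂
print(unicodedata.name(symbol))   # UMBRELLA
print(f"U+{ord(symbol):04X}")     # U+2602
```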




Tooling could be built to do all of that for images too.

For example, you could take part of an image and put it into a search engine too. That search engine would take images as input, and where the image represents writing of some kind, it would return relevant results.

Screen readers could look into an image and read out any writing found, just as some screen readers can describe an image ("person in boat holding flag"). Deep nets that can do this reasonably well have been around for years now.


The tooling for doing all that with images is called a font.


You can do all that, but you'd have to use a neural net instead of a plain "Find...". Neural nets make plenty of errors, including on OCR (error rates of at least 2-4%).


Find my girlfriend a dress that doesn't need ironing.

Easy if there's a "do not iron" symbol in text to search. Much harder to grep if it's an image.
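The text case really is a plain substring test. A minimal sketch, using U+26A0 WARNING SIGN as a stand-in for a hypothetical "do not iron" codepoint, over made-up product descriptions:

```python
# Hypothetical product descriptions; "\u26a0" stands in for a
# "do not iron" care symbol. Matching text is an exact codepoint
# comparison -- no OCR, no neural net, no error rate.
descriptions = [
    "Linen dress \u26a0 low heat only",
    "Cotton dress, iron as needed",
]
no_iron = [d for d in descriptions if "\u26a0" in d]
print(no_iron)  # ['Linen dress ⚠ low heat only']
```

If the symbol only appears rendered inside a product photo, the same query requires image recognition, with all the error rates mentioned above.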



