
Thanks for the link, super interesting to watch. Over the years as a programmer I've picked up knowledge about web accessibility from a mechanical, standards- and implementation-focused point of view, but this video is helping me realize that my understanding has lacked a level of depth and empathy for the real people using these tools.

Looking through some of her other videos I found this demonstration of camera-to-speech in the iPhone pretty awesome–I had no idea that this was a feature: https://www.youtube.com/watch?v=8CAafjodkyE




Magic stuff indeed. I hadn't seen that video before; I'll have to catch up on her other ones too.

Personally, I've only used photo recognition a couple of times to identify things. She, on the other hand, is a real power user of the feature.

How does the phone even process the images that quickly? I was under the impression that generic models able to recognize a wide variety of things need beefy processing and plenty of memory or disk. Or are mobile network latencies really that low in the US, or wherever she is? And do people really use mobile data all day long, especially for transferring dozens of photos?
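
If I had to guess, none of this goes over the network at all: recent iPhones ship with on-device image classifiers (Core ML and the Vision framework), and the dedicated ML hardware makes inference feel instant. As a rough illustration, this is the kind of on-device call an app could make. Just a sketch using Vision's built-in classifier; the file URL is hypothetical, and this isn't necessarily how VoiceOver itself is implemented:

    import Foundation
    import Vision

    // Minimal sketch: classify a photo entirely on-device with Vision's
    // built-in classifier. No image data leaves the phone.
    func classifyPhoto(at url: URL) throws -> [(label: String, confidence: Float)] {
        let request = VNClassifyImageRequest()          // uses the OS-bundled model
        let handler = VNImageRequestHandler(url: url, options: [:])
        try handler.perform([request])                  // runs locally on the device
        let observations = request.results ?? []        // [VNClassificationObservation]
        return observations
            .filter { $0.confidence > 0.3 }             // keep reasonably confident labels
            .map { (label: $0.identifier, confidence: $0.confidence) }
    }

Since the model ships with the OS there's no upload step at all, which would explain the latency (and why this kind of feature keeps working offline).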

P.S. While we're on the topic of magnifiers: macOS has a feature where an on-screen magnifier can be shown temporarily (with the Ctrl-Alt keys, in my setup) and follows the mouse. My vision is only moderately poor (so far), but I use this quite often to gawk at smaller things on the screen instead of leaning closer to the monitor or trying to zoom web pages. It works especially well on hi-dpi screens, where a zoomed-in area only drops to old-dpi resolution, so I really can see small details in images, as if I had separate close-up images of those parts. With landscape photos the effect is great.



