iOS actually has a multi-touch braille keyboard built in, so you can get rather similar behaviour natively, without extra hardware. In Settings > Accessibility > VoiceOver, add "Braille keyboard" to the rotor options, then turn the rotor (two fingers in a circular motion) until you hear "Braille input". You can now hold the screen away from you in landscape mode and use three fingers from each hand to type 6-dot braille.
By the way, besides the wacky keyboard, blind people also have another, erhm, ‘perk’ in their use of phones: they, pretty obviously, keep the brightness at zero. Gonna last a long time on a charge.
I occasionally think that some things I do with the phone could be done with a small one-handed wireless remote/controller and audio output, so I could trigger them while doing something else, e.g. walking outside. For example, Anki already supports either a specific remote or Bluetooth remotes in general, and has TTS. There are also games that use audio as output, which might work in a similar manner; may be interesting to try, if not very engrossing.
Yeah, "screen curtain" is a pretty essential feature for some of us. I also do that on my Linux laptop. Helps againnst sholder surfing and saves hours of battery time.
> Anki already supports either a specific remote or bluetooth remotes in general, and has TTS.
Have you considered a gamepad designed for phones? I use a GameSir T1s for my Anki reviews while out walking.
Only downside is that the phone sits lopsided in the controller due to the volume buttons, but this could be fixed with a more expensive gamepad that wraps around the sides.
I guess you mean holding the gamepad with both hands and staring at the screen, in the usual gaming manner. But, you see, as a keyboard worker I (partly) go for walks precisely to give my shoulders and neck some workout and stretching, and using a regular gamepad is the opposite of that, as I've already discovered in practice. Otherwise I could just use the phone as normal, which is also pretty bad for the shoulders, though one at a time.
So I was thinking of using something in this vein:
Might come in handy on more occasions than just walking, e.g. home chores where I still have one hand free; currently such time is occupied with more passive podcasts and audiobooks.
Ah! I understand. I can use my controller one-handed if necessary and get a much better angle than staring down at my phone, but I typically don't have the neck/back problems that others do.
A lot of med students use the 8BitDo Zero 2[0] with Anki one-handed, but I believe a standard Bluetooth clicker would be more ergonomic, as it's designed for one-handed use.
For audio cards, you typically want a controller with 6 keys: 4 answer buttons, replay audio, and undo.
Native key remapping will be available in AnkiDroid 2.16. AnkiMobile (iOS) already supports it, and there are various programs/add-ons for the desktop version.
There are a fair number of Reddit threads with further opinions, which should be helpful.
Thanks! Didn't expect this much info, but then I looked into your profile, which explained things.
BTW, perhaps this could be of interest on this topic: I was previously able to use Tasker to remap headphones' control buttons to keyboard keys, and thus trigger AnkiDroid's buttons. This turned out to be pretty awkward for me, but it's a generic remapping mechanism, so it may come in handy for someone who has other not-natively-supported inputs. (Though it will likely require additional paid add-ons for Tasker, namely AutoInput.)
I didn't have the time to get Bluetooth headphones working this release cycle, but it'd be great to do so if it doesn't require too many permissions.
Bluetooth headphones go through a different API than physical buttons/controllers/keyboards, and Google has changed the APIs a few times. At first glance it seemed like a minefield: either managing a media session or managing Bluetooth devices, and in both cases we'd be competing for the buttons with proper media players, which have a legitimate claim to them.
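For the curious, a minimal sketch of the media-session route on Android looks something like the following. This assumes the AndroidX media library (MediaSessionCompat); the function name and the button-to-action mapping are made up for illustration, not AnkiDroid's actual code:

    import android.content.Context
    import android.content.Intent
    import android.support.v4.media.session.MediaSessionCompat
    import android.view.KeyEvent

    // Register a media session whose callback receives headset button presses.
    // Whichever app holds the currently active session "wins" the buttons,
    // which is exactly the competition with real media players described above.
    fun registerButtonSession(context: Context) {
        val session = MediaSessionCompat(context, "AnkiButtons")
        session.setCallback(object : MediaSessionCompat.Callback() {
            override fun onMediaButtonEvent(mediaButtonEvent: Intent): Boolean {
                val event = mediaButtonEvent
                    .getParcelableExtra<KeyEvent>(Intent.EXTRA_KEY_EVENT)
                if (event?.action == KeyEvent.ACTION_DOWN) {
                    when (event.keyCode) {
                        KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE -> { /* e.g. show answer */ }
                        KeyEvent.KEYCODE_MEDIA_NEXT -> { /* e.g. answer "good" */ }
                    }
                }
                return true  // consume it so media players don't also react
            }
        })
        session.isActive = true
    }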
Anyway, I digress. Thank you for the pointers, and I'll have another look when there's less maintenance work to carry out.
Thanks for the link, super interesting to watch. Over the years as a programmer I've picked up knowledge about web accessibility from a mechanical, standards/implementation-focused point of view, but this video is helping me realize that my understanding has lacked a level of depth and empathy for the real people using these tools.
Looking through some of her other videos, I found this demonstration of camera-to-speech on the iPhone pretty awesome; I had no idea this was a feature: https://www.youtube.com/watch?v=8CAafjodkyE
Magic stuff indeed. I hadn't seen that vid before; gotta rectify that with her other videos.
Personally, I've only used recognition with photos a couple of times, to identify some things. Now, that right there is a power user of the feature.
How does the phone even process the images that quickly? I was under the impression that generic models that recognize a wide variety of things require beefy processing and plenty of memory or disk. Or are latencies on mobile networks that low in the US, or wherever she is? And do people really use mobile internet all day long, especially transferring dozens of photos?
P.S. While we're on the topic of magnifiers: macOS has a feature where an onscreen magnifier can be shown temporarily by holding Ctrl-Alt, and it follows the mouse. I have rather moderately poor vision (so far), but I use this quite often to gawk at smaller things on the screen instead of bending closer to the monitor or trying to zoom the web pages. This works especially well with hi-dpi screens, where a zoomed-in area still has roughly standard-dpi sharpness instead of turning blurry, so I really can see small details in images, as if I had separate images of those parts. With landscape photos, the effect is great.
Yep, I'm a bit of an interface junkie myself, and I get an unscratchable itch whenever I hear how iPhones have some magic haptics where I could grope the screen in search of the buttons and actually feel them, or something like that.
Especially since the PS Vita has a rear touch pad onto which some games map extra functions, and I can't tell which areas have the mappings, so I either press those areas accidentally with my grownup hands or miss them when they're needed. Or since Apple added the Touch Bar in place of keyboard buttons, with none of that magic.
I found this mapping between Braille symbols and the alphabet[0], but how do the buttons on the Braille keyboard map to these symbols? Trying to understand, as it could be cool to type without having to look at the screen.
Braille characters are formed by six dots/bumps in a 2×3 grid. On a Braille keyboard, each of the six main keys corresponds to one dot, and for each character you press the keys for the dots you want simultaneously, then lift up and repeat. There's also a space key. My grandmother used to volunteer transcribing things into Braille, and she taught me how to use the machine as a kid.
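In code terms it's just a chord-to-character lookup. Here's a small sketch; the dot numbering and letter table are standard 6-dot Braille, but the names (letterForDots, unicodeCell) are made up for illustration:

    // Dots are numbered 1-3 down the left column and 4-6 down the right.
    // A chord of simultaneously pressed dot keys selects one character.
    val letterForDots = mapOf(
        setOf(1) to 'a', setOf(1, 2) to 'b', setOf(1, 4) to 'c',
        setOf(1, 4, 5) to 'd', setOf(1, 5) to 'e', setOf(1, 2, 4) to 'f',
        setOf(1, 2, 4, 5) to 'g', setOf(1, 2, 5) to 'h',
        setOf(2, 4) to 'i', setOf(2, 4, 5) to 'j',
        // k-t repeat the a-j shapes with dot 3 added;
        // u-z mostly add dots 3 and 6 (w is a special case)
    )

    // The same chord also maps directly to a Unicode Braille cell:
    // U+2800 plus one bit per dot (dot n sets bit n-1).
    fun unicodeCell(dots: Set<Int>): Char {
        val bits = dots.fold(0) { acc, d -> acc or (1 shl (d - 1)) }
        return (0x2800 + bits).toChar()
    }

    fun main() {
        println(letterForDots[setOf(1, 2)])  // b
        println(unicodeCell(setOf(1, 2)))    // ⠃ (BRAILLE PATTERN DOTS-12)
    }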