Hacker News

> People's fingers are relatively fat and inaccurate.

They're accurate enough to tap on a single OSK key and get it right most of the time. Regardless, tap targets are only a very limited factor in UX design, so there should be plenty of scope for enhancing information density after accounting for that.




It's important to note though that the _actual_ touch targets for keys are influenced by what you've already written. You can miss the key from a visual perspective but still end up with the right letter as a result.
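A minimal sketch of that idea, with an invented key layout and made-up next-letter probabilities (nothing here reflects any real keyboard's implementation): likely continuations get a wider invisible hit area, so a tap that visually misses a key can still register the intended letter.

```python
KEY_WIDTH = 1.0  # nominal key width, arbitrary units

# Toy next-letter probabilities given the previous character (assumed values).
NEXT_LETTER_PROB = {
    "t": {"h": 0.5, "r": 0.2, "y": 0.05},
    "q": {"u": 0.9},
}

def hit_width(prev_char: str, key: str) -> float:
    """Return the effective (invisible) hit width of `key`.

    Likely continuations get a wider target, so a visually "missed"
    tap can still land on the intended letter.
    """
    p = NEXT_LETTER_PROB.get(prev_char, {}).get(key, 0.1)
    # Scale the hit area between 0.5x and 1.5x the nominal width.
    return KEY_WIDTH * (0.5 + p)

# After typing "q", the hit area for "u" is wider than for an unlikely key:
assert hit_width("q", "u") > hit_width("q", "z")
```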


Are you saying some keyboards are gaslighting users and screwing up their ability to improve their typing accuracy over time?


All forms of predictive text do this even on a desktop. It's terrible.


Autocorrect on mobile is frakkin annoying; predictive "autocomplete" in prose is merely a distraction to be mostly ignored; but a keyboard silently changing touch target sizes?

I'm guessing that's the reason I got worse at typing on mobile some years ago and can't seem to improve, despite being able to quickly master any similar skill through repetition.


You can tell it's not just you when suddenly your swipe text has randomly capitalized words. I've been getting LoL _constantly_ since the beginning of this year, despite going back and rewriting it letter by letter every time for the first month or so after it started. I'm not sure I've ever typed it that way intentionally, as I don't play the game and neither do my friends. Ever since Google changed the way they did autocorrect[1] a few years ago it's been pretty terrible and getting worse. There are random "words" that I can't remove from predictions[2] because at some point I used double dashes somewhere and Android remembered `not--it's` as a word itself[3]. Then there are words I've had to explicitly add to my dictionary because, despite using them frequently for a year (e.g. CLI program names and options), backspacing just once to change the ending of one was enough to remove it from whatever temporary memory is kept that's not part of the actual dictionary.

Desktop isn't much better. I actually somewhat like having spell check in the text inputs of some programs, as I do my writing in nontraditional programs (like arbitrary chat programs that don't implement spell check, because why would they?), but it feels incredibly unintuitive, especially when using RDP from my tablet[4], but also when using it locally. Windows is a keyboard-first OS for me; I shouldn't need to use the mouse for any first-party functionality.

1: including autocomplete suggestions, predictive text, and swipe input. I call all of these autocorrect since they mostly seem to share the same dictionary and rules.

2: they're not in my custom dictionary, and dragging the suggestion to the middle of the screen doesn't prevent it from happening again.

3: easy fix, but it needs to be implemented: a single dash is part of a word, a double dash is not.

4: this is an area Microsoft could improve tbh. They know what kind of input you're using when you're in Edge or a command line. In such programs the mobile RDP client should provide hints to the local soft keyboard to switch input modes based on form input types, and should reset the local keyboard when the active remote keyboard context changes. It should certainly not be deleting entire command lines when I hit backspace in RDP (this is especially annoying in CMD and Notepad), as the local software keyboard doesn't know that my long string was 7 inputs to 4 different programs, not one long input.


I don't know why iPhones ship with autocorrect on, and why people don't disable it first thing. It trips me up constantly, so much so that it's an anti-feature when writing acronyms, URLs, or any text message with abbreviations or non-standard words.

How do people survive with their ducking auto-correct on, I don't know. I guess they don't type that much.


Maybe you’re just getting older


There's more to onscreen keyboards than you think. At least on Apple's side, they don't just check which key you've tapped, they also check where your finger is on the key relative to its neighbors. If you want to hit a d but hit an f by accident, the keyboard remembers that the tap was pretty ambiguous and you might actually have wanted a d instead. This information helps it choose the right autocorrect candidate. If you only register which key was tapped, but not where, typing accuracy goes down considerably.

All of this was described in detail in Ken Kocienda's book about his time at Apple working on the keyboard for the original iPhone.
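The idea as described can be sketched roughly like this. The key coordinates, the touch-likelihood function, and the language-model probabilities below are all invented for illustration; this is not Apple's actual algorithm, just the general shape of combining tap position with a per-letter prior.

```python
import math

# Toy key centers on an imaginary keyboard grid (assumed layout).
KEY_CENTERS = {"d": (3.5, 1.0), "f": (4.5, 1.0), "g": (5.5, 1.0)}

def language_prob(prefix: str, letter: str) -> float:
    """Stand-in for a real language model; values are made up."""
    return {"d": 0.6, "f": 0.1, "g": 0.05}.get(letter, 0.01)

def best_key(tap: tuple, prefix: str) -> str:
    """Pick the most likely intended key for a tap.

    Each candidate is scored by a Gaussian-ish touch likelihood
    (falling off with distance from the key center) multiplied by
    the language-model probability of that letter given the prefix.
    """
    def score(key: str) -> float:
        cx, cy = KEY_CENTERS[key]
        dist = math.hypot(tap[0] - cx, tap[1] - cy)
        return math.exp(-dist * dist) * language_prob(prefix, key)
    return max(KEY_CENTERS, key=score)

# A tap that lands slightly on the "f" side of the d/f boundary still
# yields "d", because the language model strongly favors it here.
print(best_key((4.1, 1.0), "worl"))  # → "d"
```

An ambiguous tap (one that scores similarly across neighbors) can also be remembered, so a later autocorrect pass can reconsider the letter, which matches what the comment above describes.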


Where are you getting this idea that tap targets are a very limited factor in UX design? My guess is: not from the interface guidelines for any mobile platform, and not from anyone who designs apps.



