Seems like a great opportunity for a new alternative system.
Preferably with an error-correcting code built in. Homophones shouldn't really matter: if you mess one up, the ECC could just fix it.
Go with 4 words instead of 3.
Take Diceware, treat the words as a base-7776 number, convert to binary, and you get about 51 bits.
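Roughly like this, I think (a Python sketch; the generated list is just a stand-in for the real Diceware words):

```python
import math

# Stand-in for the real 7776-word Diceware list:
WORDLIST = [f"word{i}" for i in range(7776)]
INDEX = {w: i for i, w in enumerate(WORDLIST)}

def words_to_int(words):
    """Interpret the words as base-7776 digits, most significant first."""
    n = 0
    for w in words:
        n = n * 7776 + INDEX[w]
    return n

def int_to_words(n, count=4):
    """Inverse: split an integer back into `count` words."""
    words = []
    for _ in range(count):
        n, d = divmod(n, 7776)
        words.append(WORDLIST[d])
    return words[::-1]

print(math.log2(7776 ** 4))  # ~51.7 bits in four words
print(int_to_words(words_to_int(["word7", "word0", "word42", "word9"])))
```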
You could do some fancy reshuffling: reorder the words by similarity (start with the first word, then repeatedly pick the remaining one with the smallest edit distance to the last one chosen), and do the conversion to an integer such that any one word swapped for a close neighbor only produces a small change.
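The reordering part is easy enough to sketch: a greedy nearest-neighbor pass using edit distance (shown on a toy list here; running it over all 7776 words is a one-off O(n²) job):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def reorder_by_similarity(words):
    """Start with the first word, then greedily append the closest unused one."""
    remaining = list(words)
    ordered = [remaining.pop(0)]
    while remaining:
        nearest = min(remaining, key=lambda w: edit_distance(ordered[-1], w))
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered

print(reorder_by_similarity(["cat", "dog", "cot", "dot", "cart"]))
# -> ['cat', 'cot', 'dot', 'dog', 'cart']
```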
My math knowledge is too limited to do this myself unless there's something to copy and paste or a brute-force hack, but couldn't you do that by treating each word as a digit in a 7776-ary Gray code?
Any one word swapped for a nearby word would then just give you an error in the low-order bits of the latitude.
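For reference, the reflected Gray construction generalizes to any base; a sketch in base 7776, where stepping the encoded position by one changes exactly one word, and only to an adjacent slot in the reordered list:

```python
B, N = 7776, 4  # wordlist size, words per location

def int_to_digits(value):
    """Split an integer into N base-B digits, most significant first."""
    digits = []
    for _ in range(N):
        value, d = divmod(value, B)
        digits.append(d)
    return digits[::-1]

def to_gray(digits):
    """Reflected B-ary Gray code, digits most significant first."""
    shift, gray = 0, []
    for d in digits:
        g = (d + shift) % B
        gray.append(g)
        shift += B - g
    return gray

def from_gray(gray):
    """Inverse of to_gray: recover the plain base-B digits."""
    shift, digits = 0, []
    for g in gray:
        digits.append((g - shift) % B)
        shift += B - g
    return digits

# Consecutive positions differ in exactly one digit, by exactly one step:
print(to_gray(int_to_digits(123456)))  # [0, 0, 15, 6801]
print(to_gray(int_to_digits(123457)))  # [0, 0, 15, 6802]
```

(One caveat: the guarantee runs from positions to words, i.e. consecutive positions differ by one adjacent word. A mix-up in one of the leading words can still move you a long way, which is where the checks come in.)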
Then you protect those bits with a Hamming code, and have an overall 2- or 3-bit checksum for everything, so at least some of the easy-to-make mistakes will be corrected, and most will probably be detected.
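The classic Hamming(7,4) code is enough to show the idea: 3 parity bits protect 4 data bits, and any single flipped bit is correctable. A sketch, which you'd scale up to however many low-order bits you want to protect:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Hamming(7,4): parity bits at positions 1, 2, 4; data at 3, 5, 6, 7."""
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode(1, 0, 1, 1)
code[5] ^= 1                   # flip one bit in transit
print(hamming74_decode(code))  # -> [1, 0, 1, 1], restored
```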
Reliability could probably be better than reading off coordinates, and it would still be faster, with only one extra word.
Plus the words themselves already have redundancy in them: if you make a typo, autocorrect can fix it, and if it picks the wrong word, the error correction will likely catch it or flag the issue.
I built a location-to-words system for fun a little while ago: https://Wherewords.id
I came to a similar conclusion and went with 4 words. I spent the most time on the wordlist. I used the Google S2 library to split the world into a hierarchy of cells.
I even did an emoji checksum, although these days I'd probably do that differently. I really like the idea of an ECC, actually; I might have a look at doing that, although it would remove the hierarchical nature that I like about the current system. Because it's hierarchical, people who know the context can skip the initial words. Two people in the 'decorate' region (https://wherewords.id/decorate) could just use the last three of the four words, or, to indicate a number of locations near each other, you could provide one full location and then the other positions with just one or two words.
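To illustrate the hierarchy (a sketch using the s2sphere Python port of S2; the points and levels here are made up, and wherewords' actual mapping is of course its own):

```python
import s2sphere  # Python port of the Google S2 library

def cell_at(lat, lng, level=20):
    """Leaf cell for a point, truncated to the given S2 level."""
    ll = s2sphere.LatLng.from_degrees(lat, lng)
    return s2sphere.CellId.from_lat_lng(ll).parent(level)

a = cell_at(51.5007, -0.1246)  # two points roughly a kilometer apart
b = cell_at(51.5014, -0.1419)

# Nearby points usually share their coarse ancestor cell, i.e. the
# high-order bits of the cell id, which is what lets the leading words
# be dropped when the context is known:
print(a.parent(10).id() == b.parent(10).id())  # usually True this close
```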
I think ECC probably wouldn't get you much unless you reordered words by similarity and did the Gray code thing; otherwise, most word mix-ups would change a lot of bits at once.
But with Gray code reordering you might not need any ECC, because a single word changing to a similar word might just put you off by a few meters, degrading gracefully in a way that doesn't matter most of the time. Or at the very least you might only need to protect the last few bits.
Maybe there's an ordering that puts words likely to be mixed up next to each other, but that still feels like it has variety, enough that you'd notice the difference between adjacent locations most of the time?
Or maybe word confusion isn't actually the type of error that's most relevant? Accidentally using data meant for a different region might be a bigger problem.
Thinking about it, one word of the NATO alphabet gives you 4 bits (using 16 of its 26 words), so you could have a check word from the NATO alphabet that contains a parity bit for each of your words. It wouldn't let you correct any errors, but it's a single, easy word that would give you a good chance of spotting errors in the others. That's probably how I'd do it.
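Something like this, say (a sketch; `word_indices` stands in for looking the four words up in the main wordlist):

```python
# One parity bit per location word, packed into a 4-bit index that
# selects one of 16 NATO alphabet words as the check word.
NATO16 = ["Alfa", "Bravo", "Charlie", "Delta", "Echo", "Foxtrot",
          "Golf", "Hotel", "India", "Juliett", "Kilo", "Lima",
          "Mike", "November", "Oscar", "Papa"]

def check_word(word_indices):
    """word_indices: the wordlist index of each of the four location words."""
    bits = 0
    for i, idx in enumerate(word_indices):
        parity = bin(idx).count("1") & 1  # parity of the index's bits
        bits |= parity << i
    return NATO16[bits]

print(check_word([4711, 23, 980, 7400]))  # -> 'Juliett' for these indices
```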
The problem with error-correcting codes is that they typically need quite a few more bits transmitted, and I'm trying to keep the number of words manageable without needing a huge wordlist. Of course, if I could find a 65k wordlist I was happy with, that'd be fine, but that seems very unlikely given how hard it was to get a good 4096-word list.
Having a separate check wordlist does seem like a pretty good plan, but if there's only a single parity bit per word, then you only have a 50% chance of catching a single word swap, as opposed to something like a 4-bit CRC, which should catch close to 15 out of 16 errors.
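And if you're willing to spend two check words rather than one, you can go past detection: a plain sum and a position-weighted sum of the word indices, both mod a prime just above the wordlist size, will locate and even repair any single wrong word. A sketch with made-up indices (7789 is the first prime above 7776):

```python
P = 7789  # prime modulus, just above the wordlist size

def checksums(idxs):
    s1 = sum(idxs) % P                                   # plain sum
    s2 = sum(i * x for i, x in enumerate(idxs, 1)) % P   # position-weighted sum
    return s1, s2

def locate_error(idxs, s1, s2):
    """Locate and fix a single wrong word; returns (1-based position, fix)."""
    r1, r2 = checksums(idxs)
    d1, d2 = (r1 - s1) % P, (r2 - s2) % P
    if d1 == 0 and d2 == 0:
        return None                      # everything checks out
    pos = d2 * pow(d1, -1, P) % P        # position = dS2/dS1 mod P (Py 3.8+)
    return pos, (idxs[pos - 1] - d1) % P # pos > 4 would mean multiple errors

words = [4711, 23, 980, 7400]    # the four word indices
s1, s2 = checksums(words)        # ...sent along as two check words

received = words.copy()
received[2] = 981                # third word garbled in transit
print(locate_error(received, s1, s2))  # -> (3, 980)
```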
You can see it correctly identified that the third word is wrong. I'll probably integrate something like this next time I update wherewords.id, although that might be a while...