This post has some overlap with work I did a while back on a "coupon code" system that is optimised for users taking a code printed on paper and entering it into a web form. A number of measures were employed to avoid/correct transcription errors.
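For anyone curious what those measures can look like in practice, here is a rough sketch in Python (my own illustration - the alphabet, code length and check-character scheme are assumptions, not necessarily what that system used). The ideas are: drop look-alike characters from the alphabet, normalise common mis-readings on entry, and append a check character so most typos are rejected cheaply.

    import secrets

    # Crockford-style base32 alphabet: I, L, O and U are excluded so they
    # can't be confused with 1, 1 and 0 when read off a piece of paper.
    ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
    BASE = len(ALPHABET)

    def make_code(length=8):
        """Random code body plus one trailing check character."""
        body = "".join(secrets.choice(ALPHABET) for _ in range(length))
        check = ALPHABET[sum(ALPHABET.index(c) for c in body) % BASE]
        return body + check

    def normalise(entered):
        """Undo the most common transcription slips before validating."""
        s = entered.upper().replace("-", "").replace(" ", "")
        return s.translate(str.maketrans("OIL", "011"))

    def is_valid(entered):
        code = normalise(entered)
        if len(code) < 2 or any(c not in ALPHABET for c in code):
            return False
        body, check = code[:-1], code[-1]
        return ALPHABET[sum(ALPHABET.index(c) for c in body) % BASE] == check

A simple additive check character like this catches single mistyped characters but not swapped neighbours; a real system would likely use something stronger (Damm or Verhoeff style) and rate-limit guesses on the server.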
If you'd like to explore Unicode characters, you can use the Unicode Character Finder, a web app I built some years ago: https://www.mclean.net.nz/ucf/
The app allows you to paste in a character to find out more about it, or to search the database of character descriptions to find what you're after.
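If you want that sort of lookup from a script rather than the web app, Python's unicodedata module will give you the basics (the snowman here is just an arbitrary example character):

    import unicodedata

    ch = "\u2603"                       # any character you might paste in
    print(ch, "U+%04X" % ord(ch),
          unicodedata.name(ch),         # SNOWMAN
          unicodedata.category(ch))     # So (Symbol, other)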
For exploration, I'd also recommend http://shapecatcher.com/. It lets you draw the shape you're looking for and, using some form of ML, sorts the results by similarity. It has come in handy a few times for finding characters I'm unable to describe.
Had a scare in a data centre once when the UPS started trying to connect to an IP hosted somewhere in China. It turned out that when it did a DNS lookup for the SNMP server (or something - sorry about the hand-waviness), the first response it got back was an IPv6 address (a DNS AAAA record). And since the crappy TCP stack on the device had no IPv6 support, it was simply interpreting the first four bytes of the AAAA record as an IPv4 address. One of our super smart sysadmins worked out what was going on and tweaked the DNS to return the A record first - problem solved.
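To make that failure mode concrete, here is a rough Python illustration (the IPv6 address is made up for the example; I obviously don't know what the UPS firmware actually did internally). An AAAA record carries a 16-byte address, and a stack that assumes every answer is 4 bytes just reads the first quarter of it:

    import ipaddress

    # Hypothetical AAAA answer: the RDATA is a 16-byte IPv6 address.
    v6 = ipaddress.IPv6Address("2404:6800:4006:812::200e")
    rdata = v6.packed                     # b'\x24\x04\x68\x00...'

    # A stack with no IPv6 support that treats the answer like an A record
    # only looks at the first 4 bytes of that buffer:
    bogus_v4 = ipaddress.IPv4Address(rdata[:4])
    print(bogus_v4)                       # 36.4.104.0 - an address nobody asked for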
I am disappointed to learn that you feel I have attacked you in the past. I apologise unreservedly for any offence I caused you. I always try to be civil and professional in my interactions and to be mindful of the difficulty of conveying the intended tone over an electronic medium.
If I have voiced some criticism of your code it is certainly not because I wished to belittle your efforts or to make any value judgements about your worth as a person.
Thank you for expressing this. It helps me to be able to move past it. I have always thought you were angry at my contributions to the community. You will note that I applaud the simplicity of your module, and even went so far as to create a SAX streaming version of my parser that works together with XML::Simple so that those who wish to stick with your interface can do so. I didn't maintain it much, but can do so if there is any real interest in it.
I named you simply because you are the first person to state that my XML parser is "invalid", despite my having worked very hard to ensure that it does parse XML meaningfully.
I do acknowledge freely that I am disregarding the specs to some extent for the sake of raw speed. You will see that I have altered the documentation to make this clear so there is no confusion.
For my part, I consider your apology more than adequate to address the past. I don't really remember it clearly, but I know that it was a very rough entry into the open source world to have my parser attacked (considering it was the first meaningful thing I contributed to the community).
I would like to point out that communication and understanding between members of the community is exactly what I am asking for. I thank you for stepping out and attempting to resolve this. There is no way I would ever have known that you felt this way without you expressing it, and without that I would have gone on forever thinking you had bad feelings towards me and the code I have created.
For all the people who imply that I was attacking any of the named people, including Grant: that was not and is not my intention, and I am very happy today to have some of these things addressed.
I will throw this out there for consideration: it boggles my mind that Wikipedia has banned the article on my parser, considering there are entries for many other equivalent parsers. The article was up for years and then removed suddenly, for no legitimate reason in my opinion. Do you have any opinion on the clear favouring of certain parsers in the information community (such as on Wikipedia, or in excluding specific parsers from being mentioned as related codebases)?
If you go to mapofcpan.org and click on "Need more help?" it explains more about what the map is showing you.
Only namespaces that contain 30 or more distributions get a colour and a label - to keep the map from getting too cluttered. The remaining modules are submerged under the light blue "primordial soup".
Since the individual distributions are arranged alphabetically and laid out along a Hilbert curve, adding an extra distribution to 'Acme' pushes every other namespace along.
And yeah - the Hilbert curve probably isn't ideally suited to being animated :-)
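For anyone curious why that is, here is a small sketch of the index-to-cell mapping (my own illustration in Python, not the map's actual code, and the distribution names are just examples). Each distribution's cell is a pure function of its position in the sorted list, so one insertion under 'Acme' bumps the index - and therefore the cell - of everything that sorts after it.

    def d2xy(order, d):
        """Map distance d along a Hilbert curve filling a 2**order x 2**order grid to (x, y)."""
        x = y = 0
        s, t = 1, d
        while s < (1 << order):
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                              # rotate/flip this quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    dists = sorted(["Acme-Buffy", "Acme-Wabbit", "App-cpanminus", "XML-Bare", "XML-Simple"])
    for i, name in enumerate(dists):
        print(name, d2xy(4, i))   # insert a new Acme-* and every later name moves to a new cell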
https://hacks.mozilla.org/2020/07/safely-reviving-shared-mem...