I once implemented it in a typical scenario where salespeople had to look up a client whose name could be written as:
1/ That Client With A Strange Name, Ltd.
2/ The Client With Strange Name, Ltd.
3/ That Client With A Strange Name
4/ [etc]
It worked really well and avoided lots of duplicate entries.
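For illustration, here is a minimal sketch of that kind of lookup in Python, using the standard library's difflib; the normalization rules and the client list are invented for the example:

    import difflib
    import re

    def normalize(name):
        # Lowercase, strip punctuation, drop common legal suffixes.
        name = re.sub(r"[^\w\s]", "", name.lower())
        name = re.sub(r"\b(ltd|inc|llc|gmbh)\b", "", name)
        return " ".join(name.split())

    existing_clients = [
        "That Client With A Strange Name, Ltd.",
        "Some Other Client, Inc.",
    ]

    def find_candidates(query, clients, cutoff=0.75):
        # Compare normalized forms, but return the original records.
        by_norm = {normalize(c): c for c in clients}
        hits = difflib.get_close_matches(normalize(query), by_norm,
                                         n=5, cutoff=cutoff)
        return [by_norm[h] for h in hits]

    # The variants above should all resolve to the same existing record.
    print(find_candidates("The Client With Strange Name, Ltd.", existing_clients))

The point is that the salesperson sees likely existing records before creating a new one.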
For name matching there is also an old algorithm called Soundex, patented back in 1918. It is well described in Knuth's The Art of Computer Programming, Vol. 3: Sorting and Searching. It's a simple algorithm, so the Wikipedia page is enough: https://en.wikipedia.org/wiki/Soundex#cite_ref-10
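A minimal sketch of American Soundex in Python, following the Wikipedia description (it assumes non-empty ASCII input; the h/w rule, where same-code letters separated by h or w are merged, is the fiddly part):

    def soundex(name):
        # Digit classes: 1=bfpv, 2=cgjkqsxz, 3=dt, 4=l, 5=mn, 6=r.
        codes = {c: d for d, group in enumerate(
            ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in group}
        name = name.lower()
        first = name[0]
        digits = []
        prev = codes.get(first)  # a consonant repeating the first letter's code is dropped
        for c in name[1:]:
            code = codes.get(c)
            if code is not None and code != prev:
                digits.append(str(code))
            if c not in "hw":  # h and w don't break a run of the same code
                prev = code
        return (first.upper() + "".join(digits) + "000")[:4]

    print(soundex("Robert"), soundex("Rupert"))  # R163 R163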
Soundex was designed for a very specific purpose. It is very culture-dependent and, in my experience, works very poorly in most practical applications related to matching names.
Soundex works fine as part of a larger process, especially when combined with other kinds of normalization. You need a human to make the final judgement on matches. In the course of a year I have to match 100k names to names in a database of 850k people. Soundex is great for flagging names that might match, or for flagging matches that might be incorrect. I use Soundex in combination with NYSIIS, double metaphone, lists of normally confused names, etc. Before I created our current matching process, we were creating approximately 5-10k duplicate records a year.
Quick edit: Our data sources are handwritten and typed names, often transcribed by a second party. So algorithms that detect transposition errors as well as phonetic errors are really helpful.
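A rough sketch of how several of those signals could be combined to flag candidate matches for human review, assuming the third-party jellyfish library; the two-edit threshold and the voting rule are invented for illustration:

    import jellyfish  # third-party: pip install jellyfish

    def match_signals(a, b):
        # Each signal votes; a human makes the final call on flagged pairs.
        return {
            "soundex": jellyfish.soundex(a) == jellyfish.soundex(b),
            "nysiis": jellyfish.nysiis(a) == jellyfish.nysiis(b),
            "metaphone": jellyfish.metaphone(a) == jellyfish.metaphone(b),
            # Damerau-Levenshtein counts a transposition as one edit,
            # which suits transcription errors from handwritten sources.
            "close_spelling": jellyfish.damerau_levenshtein_distance(a, b) <= 2,
        }

    def flag_for_review(a, b, min_votes=2):
        return sum(match_signals(a, b).values()) >= min_votes

    print(flag_for_review("Catherine", "Katherine"))  # True: several signals agree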
I've used a Python implementation of soundex() in a production data mining app to help resolve things like ECQUADOR->ECUADOR. Worked well (as an entity resolution mechanism among many others).
Yes, but the downside is that (afaik) you can't use an index to quickly retrieve the matches in order. You really have to scan your complete dataset on every search.
On a related note, does anybody know of a good JavaScript implementation of a 3-way merge of strings, and perhaps also of JSON-like structures?
You can store the values in a trie (e.g., with one node per character in the string). Exact lookup in the trie is O(string_length), like a hash table. Inexact lookup can similarly walk the tree, but explore side branches within a certain budget.
So if your string is "abc", you'd follow the node for "a", then the one for "b", but _also_ the one for "c", at a cost of 1 (because dropping the "b" incurs an edit distance of 1).
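A minimal sketch of that budgeted trie walk in Python (deletions, substitutions, and insertions each cost 1; beware that a generous budget can explore many side branches):

    class TrieNode:
        def __init__(self):
            self.children = {}
            self.is_word = False

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

        def fuzzy_search(self, word, budget):
            results = set()

            def walk(node, prefix, i, remaining):
                if remaining < 0:
                    return
                if i == len(word) and node.is_word:
                    results.add(prefix)
                if i < len(word):
                    # Deletion: skip word[i] at a cost of 1.
                    walk(node, prefix, i + 1, remaining - 1)
                for ch, child in node.children.items():
                    if i < len(word):
                        # Exact match is free; substitution costs 1.
                        walk(child, prefix + ch, i + 1, remaining - (ch != word[i]))
                    # Insertion: consume a trie character at a cost of 1.
                    walk(child, prefix + ch, i, remaining - 1)

            walk(self.root, "", 0, budget)
            return sorted(results)

    t = Trie()
    for w in ["abc", "ac", "abd", "xyz"]:
        t.insert(w)
    print(t.fuzzy_search("abc", budget=1))  # ['abc', 'abd', 'ac']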
> Exact lookup in the trie is O(string_length), like a hash table
It's worth noting that some standard libraries (Java's JDK for one [1]) will cache the value of String.hashCode(), meaning string lookup in a hash table is constant time on average (but O(n) worst-case due to collisions).
There are a variety of approximate string matching algorithms that speed up search by using an index. https://arxiv.org/abs/1008.1191 is one that should be fairly easy to implement. Levenshtein automata are another approach that makes the rounds on HN every now and then, but they are a tough beast to implement and I wouldn't really recommend them in practice.
> Yes, but the downside is that (afaik) you can't use an index to quickly retrieve the matches in order. You really have to scan your complete dataset on every search.
I'm not sure I see what you're driving at there. If you had a finite set of strings that you might have to compare, you could (for example) populate a graph or something with weighted edges representing the Levenshtein distance between strings (vertices). Offhand, it seems like your search could basically use a hash table to find the position of the vertex representing your string on an already-populated adjacency list.
It'd be big, but in reality you'd probably only populate the edges with especially high or low weights, depending on the application?
Searching a new string - adding a string to the set - would require comparing against all items in the set, yes. That's a much less daunting prospect than what you said: "You really have to scan your complete dataset on every search."
Building the index, or adding to it, is not fast (without some heuristics applied) but searching using the index isn't bad.
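Something along those lines, sketched in Python: pairwise Levenshtein distances computed up front, only low-weight edges kept (as suggested above), and a plain dict standing in for the hash-table lookup into the adjacency list. The quadratic build cost is the slow part; queries on indexed strings are then just a dict lookup:

    from itertools import combinations

    def levenshtein(a, b):
        # Classic dynamic-programming edit distance, two rows at a time.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def build_index(strings, max_dist=2):
        # O(n^2) comparisons to build; keep only low-weight edges.
        index = {s: [] for s in strings}
        for a, b in combinations(strings, 2):
            d = levenshtein(a, b)
            if d <= max_dist:
                index[a].append((b, d))
                index[b].append((a, d))
        return index

    index = build_index(["ecuador", "ecquador", "equador", "peru"])
    print(index["ecuador"])  # [('ecquador', 1), ('equador', 1)]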
I thought the thing peff briefly described above sounded pretty good. The worst case complexity, if I'm visualizing it right, would be the length of the string you were comparing.
I remember some discussion of this in previous HN threads, but I can't recall the details...
I often look up algorithms there, it's a great resource. Unfortunately, you cannot search for parallel algorithms specifically and need to know what you're looking for anyway.
I was wondering whether someone could recommend similar resources (or books) for parallel algorithms. These are still very underrepresented on websites and in books, often just an afterthought or mentioned in passing.