Simple spell checking algorithm

Solution 1

The simplest way to solve the problem is indeed a precomputed map [bad word] -> [suggestions].

The problem is that while removing a letter creates only a few "bad words", an addition or substitution creates a great many candidates.

So I would suggest another solution ;)

Note: the edit distance you are describing is called the Levenshtein Distance

The solution is described in incremental steps; the search speed should improve with each idea, and I have tried to organize them with the simplest ideas (in terms of implementation) first. Feel free to stop whenever you're comfortable with the results.


0. Preliminary

  • Implement the Levenshtein Distance algorithm
  • Store the dictionary in a sorted sequence (std::set for example, though a sorted std::deque or std::vector would perform better)

Key points:

  • The Levenshtein distance computation uses a matrix, but each row can be computed solely from the previous row
  • The minimum distance in a row is always greater than (or equal to) the minimum of the previous row

The latter property allows a short-circuit implementation: if you want to limit yourself to 2 errors (the threshold), then whenever the minimum of the current row exceeds 2 you can abandon the computation. A simple strategy is to return threshold + 1 as the distance.
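
For illustration, here is a minimal C++ sketch of that short-circuited computation (the name bounded_levenshtein and the exact signature are my own, not part of any standard API):

#include <algorithm>
#include <string>
#include <vector>

// Short-circuited Levenshtein distance, as described above.
// Returns the exact distance if it is <= threshold, and threshold + 1 otherwise.
std::size_t bounded_levenshtein(const std::string& word,
                                const std::string& referent,
                                std::size_t threshold) {
    std::vector<std::size_t> prev(word.size() + 1), curr(word.size() + 1);
    for (std::size_t j = 0; j <= word.size(); ++j) prev[j] = j;

    for (std::size_t i = 1; i <= referent.size(); ++i) {
        curr[0] = i;
        std::size_t row_min = curr[0];
        for (std::size_t j = 1; j <= word.size(); ++j) {
            std::size_t cost = (word[j - 1] == referent[i - 1]) ? 0 : 1;
            curr[j] = std::min({ prev[j] + 1,            // deletion
                                 curr[j - 1] + 1,        // insertion
                                 prev[j - 1] + cost });  // substitution
            row_min = std::min(row_min, curr[j]);
        }
        if (row_min > threshold) return threshold + 1;   // short-circuit
        std::swap(prev, curr);
    }
    return std::min(prev[word.size()], threshold + 1);
}

Only two rows are kept alive at any time, which is exactly the property exploited in the following steps.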


1. First Tentative

Let's start simple.

We'll implement a linear scan: for each word we compute the (short-circuited) distance and we list the words that achieve the smallest distance so far.

It works very well on smallish dictionaries.
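
A sketch of that scan, reusing the bounded_levenshtein function from step 0 (the names are again my own):

#include <set>
#include <string>
#include <vector>

// Linear scan: keep the words that achieve the smallest distance seen so far.
std::vector<std::string> suggest(const std::set<std::string>& dictionary,
                                 const std::string& word,
                                 std::size_t threshold) {
    std::vector<std::string> best;
    std::size_t best_distance = threshold + 1;
    for (const std::string& referent : dictionary) {
        std::size_t d = bounded_levenshtein(word, referent, threshold);
        if (d < best_distance) {                  // strictly better: restart the list
            best_distance = d;
            best.assign(1, referent);
        } else if (d == best_distance && d <= threshold) {
            best.push_back(referent);             // tie: add to the list
        }
    }
    return best;
}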


2. Improving the data structure

The Levenshtein distance is at least equal to the difference in length.

By using the pair (length, word) as a key instead of just the word, you can restrict your search to the length range [length - edit, length + edit] and greatly reduce the search space.
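
A sketch of what that length-keyed lookup could look like, assuming a std::set of (length, word) pairs (the structure and names are my own illustration):

#include <set>
#include <string>
#include <utility>

using LengthKeyed = std::set<std::pair<std::size_t, std::string>>;

// Only lengths in [word.size() - edit, word.size() + edit] can contain
// matches, so the scan is restricted to that slice of the set.
void scan_candidates(const LengthKeyed& dictionary,
                     const std::string& word, std::size_t edit) {
    std::size_t lo = word.size() > edit ? word.size() - edit : 0;
    std::size_t hi = word.size() + edit;
    auto first = dictionary.lower_bound({ lo, std::string() });
    auto last  = dictionary.lower_bound({ hi + 1, std::string() });
    for (auto it = first; it != last; ++it) {
        const std::string& referent = it->second;
        // ... run the short-circuited distance on referent, as in step 1 ...
        (void)referent;
    }
}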


3. Prefixes and pruning

To improve on this, we can remark that when we build the distance matrix, row by row, one word is entirely scanned (the word we are looking for) but the other (the referent) is not: we only use one letter per row.

This very important property means that for two referents that share the same initial sequence (prefix), the first rows of the matrix will be identical.

Remember that I asked you to store the dictionary sorted? It means that words sharing the same prefix are adjacent.

Suppose that you are checking your word against cartoon and at car you realize it cannot work (the distance is already too large); then no word beginning with car will work either, and you can skip words as long as they begin with car.

The skip itself can be done either linearly or with a binary search (find the first word whose prefix sorts after car):

  • linear works best if the prefix is long (few words to skip)
  • binary search works best for short prefix (many words to skip)

How long "long" is depends on your dictionary, and you'll have to measure. I would go with the binary search to begin with; a sketch follows below.

Note: the length partitioning works against the prefix partitioning (once keyed by length first, words sharing a prefix are no longer all adjacent), but it prunes much more of the search space
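
A sketch of the binary-search variant of the skip, assuming a sorted std::vector<std::string> of lowercase words (the helper name is my own):

#include <algorithm>
#include <string>
#include <vector>

// Binary-search skip: given a non-empty prefix that already exceeded the
// threshold (e.g. "car"), jump to the first word that no longer starts
// with it. Assumes lowercase ASCII, so incrementing the last character of
// the prefix ("car" -> "cas") yields a string sorting after every "car*".
std::vector<std::string>::const_iterator
skip_prefix(const std::vector<std::string>& sorted_words,
            std::vector<std::string>::const_iterator from,
            std::string prefix) {
    ++prefix.back();
    return std::lower_bound(from, sorted_words.cend(), prefix);
}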


4. Prefixes and re-use

Now, we'll also try to re-use the computation as much as possible (and not just the "it does not work" result).

Suppose that you have two words:

  • cartoon
  • carwash

You first compute the matrix, row by row, for cartoon. Then, when reading carwash, you determine the length of the common prefix (here car) and keep the first 4 rows of the matrix (corresponding to the empty prefix, c, ca and car).

Therefore, when you begin computing carwash, you in fact start iterating at w.

To do this, simply use an array allocated once at the beginning of your search, made large enough to accommodate the longest referent (you should know the largest word length in your dictionary).
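
A sketch of how that pre-allocated, re-usable matrix could be organized (the IncrementalMatrix structure is my own illustration; the caller recomputes only the rows past the common prefix):

#include <algorithm>
#include <string>
#include <vector>

// One matrix for the whole search; rows[i] corresponds to the first i
// characters of the referent currently being processed. When moving to
// the next referent, rows 0..common_prefix remain valid.
struct IncrementalMatrix {
    std::vector<std::vector<std::size_t>> rows;
    std::string prev_referent;

    IncrementalMatrix(std::size_t max_referent_len, std::size_t word_len)
        : rows(max_referent_len + 1, std::vector<std::size_t>(word_len + 1)) {
        for (std::size_t j = 0; j <= word_len; ++j) rows[0][j] = j;
    }

    // Number of leading characters shared with the previous referent;
    // rows 0..common_prefix(referent) need not be recomputed.
    std::size_t common_prefix(const std::string& referent) const {
        std::size_t n = std::min(prev_referent.size(), referent.size());
        std::size_t i = 0;
        while (i < n && prev_referent[i] == referent[i]) ++i;
        return i;
    }
};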


5. Using a "better" data structure

To have an easier time working with prefixes, you could use a Trie or a Patricia tree to store the dictionary. However, neither is an STL data structure, and you would need to augment it so that each subtree stores the range of word lengths found below it, so you'll have to write your own implementation. It's not as easy as it seems, because there are memory-explosion issues which can kill locality.

This is a last resort option. It's costly to implement.
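
For illustration only, a minimal sketch of what such an augmented node could look like (the field names are mine, and maintaining min_len/max_len on insertion is left out):

#include <array>
#include <cstddef>
#include <memory>

// One node of a trie over 'a'..'z', augmented with the range of word
// lengths stored in its subtree so the length pruning of step 2 still works.
struct TrieNode {
    std::array<std::unique_ptr<TrieNode>, 26> children;
    bool is_word = false;
    std::size_t min_len = static_cast<std::size_t>(-1);  // shortest word below
    std::size_t max_len = 0;                             // longest word below
};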

Solution 2

You should have a look at Peter Norvig's explanation of how to write a spelling corrector:

How to write a spelling corrector

Everything is well explained in that article. As an example, the Python code for the spell checker looks like this:

import re, collections

def words(text): return re.findall('[a-z]+', text.lower())

def train(features):
    # Word-frequency model; the default of 1 smooths unseen words.
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

NWORDS = train(words(open('big.txt').read()))

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    # All strings at edit distance 1: deletion, transposition,
    # substitution and insertion of a single letter.
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces   = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts    = [a + c + b     for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def known_edits2(word):
    # Known words of the corpus at edit distance 2.
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

def known(words): return set(w for w in words if w in NWORDS)

def correct(word):
    # Prefer the smallest edit distance, then the most frequent candidate.
    candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
    return max(candidates, key=NWORDS.get)

Hope you can find what you need on Peter Norvig's website.

Solution 3

For a spell checker, many data structures would be useful, for example a BK-tree. Check Damn Cool Algorithms, Part 1: BK-Trees. I have done an implementation of the same here.

My earlier code link may be misleading; this one is correct for a spelling corrector.
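
For illustration, a minimal BK-tree could look like the following C++ sketch (my own sketch, not the linked implementation; it assumes an exact levenshtein function, e.g. the algorithm from Solution 1 without the cutoff):

#include <map>
#include <memory>
#include <string>
#include <vector>

std::size_t levenshtein(const std::string& a, const std::string& b);  // assumed

// Children are keyed by their distance to the parent word; by the triangle
// inequality, a lookup only descends into children with key in [d - tol, d + tol].
struct BkNode {
    std::string word;
    std::map<std::size_t, std::unique_ptr<BkNode>> children;
};

void bk_insert(std::unique_ptr<BkNode>& node, const std::string& word) {
    if (!node) {
        node = std::make_unique<BkNode>();
        node->word = word;
        return;
    }
    std::size_t d = levenshtein(word, node->word);
    if (d == 0) return;                        // word already present
    bk_insert(node->children[d], word);
}

void bk_query(const BkNode* node, const std::string& word,
              std::size_t tol, std::vector<std::string>& out) {
    if (!node) return;
    std::size_t d = levenshtein(word, node->word);
    if (d <= tol) out.push_back(node->word);
    std::size_t lo = d > tol ? d - tol : 1;    // distance 0 is the node itself
    for (std::size_t k = lo; k <= d + tol; ++k) {
        auto it = node->children.find(k);
        if (it != node->children.end())
            bk_query(it->second.get(), word, tol, out);
    }
}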


Comments

  • JimR
    JimR almost 2 years

I've been tasked with creating a simple spell checker for an assignment but have been given next to no guidance, so I was wondering if anyone could help me out. I'm not after someone to do the assignment for me, but any direction or help with the algorithm would be awesome! If what I'm asking is not within the guidelines of the site then I'm sorry and I'll look elsewhere. :)

    The project loads correctly spelled lower case words and then needs to make spelling suggestions based on two criteria:

    • One letter difference (either added or subtracted to get the word the same as a word in the dictionary). For example 'stack' would be a suggestion for 'staick' and 'cool' would be a suggestion for 'coo'.

    • One letter substitution. So for example, 'bad' would be a suggestion for 'bod'.

    So, just to make sure I've explained properly.. I might load in the words [hello, goodbye, fantastic, good, god] and then the suggestions for the (incorrectly spelled) word 'godd' would be [good, god].

Speed is my main consideration here, so while I think I know a way to get this to work, I'm really not too sure how efficient it'll be. The way I'm thinking of doing it is to create a map<string, vector<string>> and then, for each correctly spelled word that's loaded in, add the correctly spelled word as a key in the map and populate the vector with all the possible 'wrong' permutations of that word.

Then, when I want to look up a word, I'll look through every vector in the map to see if that word is a permutation of one of the correctly spelled words. If it is, I'll add the key as a spelling suggestion.

This seems like it would take up HEAPS of memory though, because there would surely be thousands of permutations of each word? It also seems like it'd be very, very slow if my initial dictionary of correctly spelled words was large?

I was thinking that maybe I could cut down the time a bit by only looking at the keys that are similar to the word I'm checking. But then again, if they're similar in some way then it probably means the key will be a suggestion, meaning I don't need all those permutations!

    So yeah, I'm a bit stumped about which direction I should look in. I'd really appreciate any help as I really am not sure how to estimate the speed of the different ways of doing things (we haven't been taught this at all in class).

  • Mooing Duck
    Mooing Duck over 12 years
    Hmm, A Trie implemented as a vector of Node{unsigned short prevIndex, bool end; unsigned short nextIndex[26];} could hold 65536 words in only 3.5MB of space (fits in i7's L3 cache!), and would make the spellchecker a relatively simple recursive problem. I'm going to play with that
  • Matthieu M.
    Matthieu M. over 12 years
    @MooingDuck: you don't need prevIndex if you maintain a "fat" iterator.
  • Mooing Duck
    Mooing Duck over 12 years
    It's probably easier to work prevIndex into the recursion than an iterator. But you're right that it doesn't need to be there.
  • Mooing Duck
    Mooing Duck over 12 years
    Initial testing shows my Trie is double the time of my hashtable. Hashtable takes a string, fuzzes it, and sees if any fuzzies are in the hashtable, ranking by how it fuzzed and how much. Doesn't mean anything until I verify the accuracy for either though.
  • JimR
    JimR over 12 years
    Sorry for taking so long to reply but thanks heaps for this, really really useful. :)
  • Matthieu M.
    Matthieu M. over 12 years
    @MooingDuck: nice! I must admit I never investigated the "single" change much, even Norvig's toy case goes to 2 mistakes which definitely increases the map size. As for Tries: they are hard to get to play nice. There are strategies to compress the ends (use a "bucket" of suffixes at the end of each path, instead of fully spelling them out) and to reduce the branch factor (group 'A-D' for a single branch, but this implies storing the full word at the end as you can no longer reconstitute it from the path alone). I admit that a simple sorted container is nice enough to begin with :)