Compare similarity algorithms


Solution 1

Expanding on my wiki-walk comment in the errata and noting some of the ground-floor literature on the comparability of algorithms that apply to similar problem spaces, let's explore the applicability of these algorithms before we determine if they're numerically comparable.

From Wikipedia, Jaro-Winkler:

In computer science and statistics, the Jaro–Winkler distance (Winkler, 1990) is a measure of similarity between two strings. It is a variant of the Jaro distance metric (Jaro, 1989, 1995) and mainly[citation needed] used in the area of record linkage (duplicate detection). The higher the Jaro–Winkler distance for two strings is, the more similar the strings are. The Jaro–Winkler distance metric is designed and best suited for short strings such as person names. The score is normalized such that 0 equates to no similarity and 1 is an exact match.
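To make the definition above concrete, here is a minimal, self-contained sketch of the Jaro and Jaro–Winkler similarities in Python. The standard prefix scaling factor p = 0.1 and a maximum common-prefix length of 4 are assumed; production code should use a vetted library instead.

```python
def jaro(s, t):
    """Jaro similarity: matches within a sliding window, minus transpositions."""
    if s == t:
        return 1.0
    ls, lt = len(s), len(t)
    if ls == 0 or lt == 0:
        return 0.0
    window = max(ls, lt) // 2 - 1
    s_matched = [False] * ls
    t_matched = [False] * lt
    matches = 0
    for i, c in enumerate(s):
        lo, hi = max(0, i - window), min(lt, i + window + 1)
        for j in range(lo, hi):
            if not t_matched[j] and t[j] == c:
                s_matched[i] = t_matched[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters out of order, halved
    k = transpositions = 0
    for i in range(ls):
        if s_matched[i]:
            while not t_matched[k]:
                k += 1
            if s[i] != t[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / ls + matches / lt
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s, t, p=0.1):
    """Boost the Jaro score by the length of the shared prefix (max 4)."""
    j = jaro(s, t)
    prefix = 0
    for a, b in zip(s, t):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

On the textbook example pair "MARTHA" / "MARHTA", this returns approximately 0.961, matching the published value.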

Levenshtein distance:

In information theory and computer science, the Levenshtein distance is a string metric for measuring the amount of difference between two sequences. The term edit distance is often used to refer specifically to Levenshtein distance.

The Levenshtein distance between two strings is defined as the minimum number of edits needed to transform one string into the other, with the allowable edit operations being insertion, deletion, or substitution of a single character. It is named after Vladimir Levenshtein, who considered this distance in 1965.
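The definition above translates directly into the classic dynamic-programming recurrence; a compact sketch, keeping only one previous row of the DP table:

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn s into t."""
    prev = list(range(len(t) + 1))  # distance from "" to each prefix of t
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (cs != ct)))    # substitution
        prev = curr
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3: substitute k→s, substitute e→i, and append g.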

Euclidean distance:

In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. By using this formula as distance, Euclidean space (or even any inner product space) becomes a metric space. The associated norm is called the Euclidean norm. Older literature refers to the metric as Pythagorean metric.
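Note that Euclidean distance only applies to strings after you embed them as numeric vectors; one common choice (an illustrative assumption here, not implied by the quote) is a character-frequency vector:

```python
from collections import Counter
from math import sqrt
import string

def char_vector(s, alphabet=string.ascii_lowercase):
    """Embed a string as a vector of per-character counts."""
    counts = Counter(s.lower())
    return [counts[c] for c in alphabet]

def euclidean(u, v):
    """Ordinary (Pythagorean) distance between two equal-length vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

This embedding ignores character order entirely, so "abc" and "cba" are at distance zero; that loss of ordering is exactly why Euclidean distance behaves so differently from the edit-based measures above.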

And Q- or n-gram encoding:

In the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sequence of text or speech. The items in question can be phonemes, syllables, letters, words or base pairs according to the application. n-grams are collected from a text or speech corpus.

The two core advantages of n-gram models (and algorithms that use them) are relative simplicity and the ability to scale up – by simply increasing n a model can be used to store more context with a well-understood space–time tradeoff, enabling small experiments to scale up very efficiently.
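A minimal sketch of how q-grams yield a string-distance measure: extract the overlapping q-grams of each string and count how many each string has that the other lacks (no boundary padding is used here, which is one of several conventions):

```python
from collections import Counter

def qgrams(s, q=2):
    """All overlapping substrings of length q."""
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def qgram_distance(s, t, q=2):
    """Size of the multiset symmetric difference of the two q-gram profiles.
    0 means identical profiles; larger means less similar."""
    cs, ct = Counter(qgrams(s, q)), Counter(qgrams(t, q))
    return sum((cs - ct).values()) + sum((ct - cs).values())
```

For example, "night" and "nacht" share only the bigram "ht", so three bigrams on each side are unmatched and the distance is 6.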

The trouble is that these algorithms solve different problems and occupy different parts of the problem space, whether you apply them to your data directly or try to derive a usable metric from them. In fact, not all of these are even metrics in the formal sense, as some of them don't satisfy the triangle inequality.

Instead of going out of your way to define a dubious scheme to detect data corruption, do this properly: by using checksums and parity bits for your data. Don't try to solve a much harder problem when a simpler solution will do.
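The checksum approach can be sketched in a few lines: compute a digest per row when you write it, store it alongside the row, and compare on read; any mismatch flags corruption without a similarity score. The row layout and separator below are illustrative assumptions:

```python
import hashlib

def row_checksum(row):
    """Digest of a database row's fields; store it with the row
    and recompute on read to detect corruption."""
    # Join fields with the ASCII unit separator, unlikely to occur in data,
    # so ("ab", "c") and ("a", "bc") hash differently.
    payload = "\x1f".join(str(field) for field in row).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```

Unlike a similarity threshold, this gives a yes/no answer: the digest either matches the stored one or it doesn't.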

Solution 2

String similarity helps in a lot of different ways. For example:

  • Google's "did you mean" results are calculated using string similarity.
  • String similarity is used to correct OCR errors.
  • String similarity is used to correct keyboard entry errors.
  • String similarity is used to find the best-matching sequence of two DNA strands in bioinformatics.

But one size does not fit all. Every string similarity algorithm is designed for a specific usage, even though most of them are related. For example, Levenshtein distance counts how many single-character edits it takes to turn one string into the other:

kitten → sitten

Here the distance is 1, a single character change. You may give different weights to deletion, insertion and substitution. For example, OCR correction and keyboard correction give lower weight to certain changes: with OCR, some characters look very similar to others; with keyboards, some characters sit very close to each other. Bioinformatics string similarity allows many insertions.
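The weighting idea can be sketched by parameterising the Levenshtein recurrence with a substitution-cost function. The OCR-style cost table below, treating O/0 and l/1 as nearly identical, is an illustrative assumption:

```python
def weighted_levenshtein(s, t, sub_cost, indel_cost=1.0):
    """Edit distance where substitution cost depends on the character pair."""
    prev = [j * indel_cost for j in range(len(t) + 1)]
    for i, cs in enumerate(s, 1):
        curr = [i * indel_cost]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + indel_cost,       # deletion
                            curr[j - 1] + indel_cost,   # insertion
                            prev[j - 1] + (0.0 if cs == ct
                                           else sub_cost(cs, ct))))
        prev = curr
    return prev[-1]

# OCR-style: visually confusable glyph pairs are cheap to substitute
CONFUSABLE = {("O", "0"), ("0", "O"), ("l", "1"), ("1", "l")}

def ocr_cost(a, b):
    return 0.2 if (a, b) in CONFUSABLE else 1.0
```

With this cost function, "R0AD" sits much closer to "ROAD" (distance 0.2) than "RXAD" does (distance 1.0), which is exactly the behaviour you want when the corruption source is OCR.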

Your second quote already notes that the "Jaro–Winkler distance metric is designed and best suited for short strings such as person names".

Therefore you should keep your specific problem in mind.

I want to use string similarity functions to find corrupted data in my database.

How is your data corrupted? Is it user error, similar to keyboard input errors? Is it similar to OCR errors? Or something else entirely?



Author: Ali

Updated on June 06, 2022

Comments

  • Ali
    Ali about 2 years

    I want to use string similarity functions to find corrupted data in my database.

    I came upon several of them:

    • Jaro,
    • Jaro-Winkler,
    • Levenshtein,
    • Euclidean and
• Q-gram.

    I wanted to know what is the difference between them and in what situations they work best?

    • Fred Foo
      Fred Foo over 12 years
      I've never heard of "Q-gram". Any reference for it?
    • MrGomez
      MrGomez over 12 years
      This is a case where a wiki-walk is honestly most appropriate to quickly and coherently answer your question. Consider also: using Shannon entropy or mutual information as a heuristic. The comparison is by problem space and efficiency, which you can get from the description and body.
    • Dan Nissenbaum
      Dan Nissenbaum about 12 years
      This is a non-trivial mathematical field for which books are written and extensive research is undertaken, worthy of discussion that would be difficult to fit into a single SO answer. Would it be possible for you to be more specific?
  • Daniel
    Daniel about 12 years
    If you're trying to verify whether a database has been corrupted, use checksums and parity bits. If you're trying to figure out what data is corrupted, you need to identify what kinds of corruption you're trying to fix (record linkage, polluted data, missing data, etc.).
  • willlma
    willlma about 10 years
Google's did you mean is not calculated using string similarity. It's calculated by tracking users who mistype and then re-attempt a moment later. Source