How can I manipulate MySQL fulltext search relevance to make one field more 'valuable' than another?
Solution 1
Actually, using a case statement to make a pair of flags might be a better solution (note that MySQL concatenates strings with CONCAT, not +):
select
...
, case when keyword like concat('%', @input, '%') then 1 else 0 end as keywordmatch
, case when content like concat('%', @input, '%') then 1 else 0 end as contentmatch
-- or whatever check you use for the matching
from
...
-- and here the rest of your usual matching query
...
order by keywordmatch desc, contentmatch desc
Again, this is only if all keyword matches rank higher than all the content-only matches. I also made the assumption that a match in both keyword and content is the highest rank.
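A minimal, self-contained sketch of this approach (the articles table and its columns are hypothetical names for illustration; MySQL requires CONCAT for string concatenation, since + is numeric):

```sql
-- Hypothetical schema: articles(id, keyword, content)
SET @input = 'watermelon';

SELECT id, keyword, content,
       CASE WHEN keyword LIKE CONCAT('%', @input, '%') THEN 1 ELSE 0 END AS keywordmatch,
       CASE WHEN content LIKE CONCAT('%', @input, '%') THEN 1 ELSE 0 END AS contentmatch
FROM articles
WHERE keyword LIKE CONCAT('%', @input, '%')
   OR content LIKE CONCAT('%', @input, '%')
ORDER BY keywordmatch DESC, contentmatch DESC;
```

Rows matching both columns sort first (1,1), then keyword-only matches (1,0), then content-only matches (0,1).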
Solution 2
Create three full text indexes
- a) one on the keyword column
- b) one on the content column
- c) one on both keyword and content column
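Assuming a hypothetical articles table, the three indexes could be created like this (MATCH requires a FULLTEXT index whose column list exactly matches the MATCH column list, which is why all three are needed):

```sql
ALTER TABLE articles ADD FULLTEXT INDEX ft_keyword (keyword);
ALTER TABLE articles ADD FULLTEXT INDEX ft_content (content);
ALTER TABLE articles ADD FULLTEXT INDEX ft_both (keyword, content);
```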
Then, your query:
SELECT id, keyword, content,
MATCH (keyword) AGAINST ('watermelon') AS rel1,
MATCH (content) AGAINST ('watermelon') AS rel2
FROM `table`
WHERE MATCH (keyword,content) AGAINST ('watermelon')
ORDER BY (rel1*1.5)+(rel2) DESC
The point is that rel1 gives you the relevance of your query against just the keyword column (because you created an index on that column alone). rel2 does the same, but for the content column. You can now add these two relevance scores together, applying any weighting you like.
However, you aren't using either of these two indexes for the actual search. For that, you use your third index, which is on both columns.
The index on (keyword, content) controls your recall, i.e. which rows are returned.
The two separate indexes (one on keyword only, one on content only) control your relevance. And you can apply your own weighting criteria here.
Note that you can use any number of different indexes, or vary the indexes and weightings you use at query time based on other factors: only search on keyword if the query contains a stop word, decrease the weighting bias for keywords if the query contains more than three words, and so on.
Each index uses up disk space, so more indexes mean more disk, and in turn a higher memory footprint for MySQL. Inserts will also take longer, as there are more indexes to update.
You should benchmark performance for your situation (being careful to turn off the MySQL query cache while benchmarking, or your results will be skewed). This isn't Google-grade efficient, but it is easy, works "out of the box", and is almost certainly far better than using "like" in your queries.
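One way to keep the query cache out of a benchmark on MySQL versions before 8.0 (which still have a query cache; it was removed in 8.0) is the SQL_NO_CACHE hint, shown here on the Solution 2 query with the hypothetical articles table:

```sql
SELECT SQL_NO_CACHE id, keyword, content,
       MATCH (keyword) AGAINST ('watermelon') AS rel1,
       MATCH (content) AGAINST ('watermelon') AS rel2
FROM articles
WHERE MATCH (keyword, content) AGAINST ('watermelon')
ORDER BY (rel1 * 1.5) + rel2 DESC;
```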
I find it works really well.
Solution 3
Simpler version using only 2 fulltext indexes (credits taken from @mintywalker):
SELECT id,
MATCH (`content_ft`) AGAINST ('keyword*' IN BOOLEAN MODE) AS relevance1,
MATCH (`title_ft`) AGAINST ('keyword*' IN BOOLEAN MODE) AS relevance2
FROM search_table
HAVING (relevance1 + relevance2) > 0
ORDER BY (relevance1 * 1.5) + (relevance2) DESC
LIMIT 0, 1000;
This searches both fulltext-indexed columns against the keyword and selects each match's relevance into a separate column. We exclude rows with no match (relevance1 and relevance2 both zero) and order the results with extra weight given to the content_ft column. No composite fulltext index is needed.
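The two indexes this variant relies on would look something like this (search_table and the column names are taken from the query above):

```sql
ALTER TABLE search_table ADD FULLTEXT INDEX ft_content (content_ft);
ALTER TABLE search_table ADD FULLTEXT INDEX ft_title (title_ft);
```

Note that with no WHERE clause, the MATCH calls in the SELECT list do not limit which rows are read; the HAVING filter runs after the scan, which is the scaling concern raised in the comments below.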
Buzz, updated on February 27, 2020
Comments
-
Buzz about 4 years
Suppose I have two columns, keywords and content. I have a fulltext index across both. I want a row with foo in the keywords to have more relevance than a row with foo in the content. What do I need to do to cause MySQL to weight the matches in keywords higher than those in content?
I'm using the "match against" syntax.
SOLUTION:
Was able to make this work in the following manner:
SELECT *,
  CASE WHEN Keywords LIKE '%watermelon%' THEN 1 ELSE 0 END AS keywordmatch,
  CASE WHEN Content LIKE '%watermelon%' THEN 1 ELSE 0 END AS contentmatch,
  MATCH (Title, Keywords, Content) AGAINST ('watermelon') AS relevance
FROM about_data
WHERE MATCH (Title, Keywords, Content) AGAINST ('watermelon' IN BOOLEAN MODE)
HAVING relevance > 0
ORDER BY keywordmatch DESC, contentmatch DESC, relevance DESC
-
Buzz about 15 yearsI tried this, and ended up with syntax errors. I don't think I knew what to put in the order by blahblah spot. Suggestions?
-
notnot about 15 yearsSorry, it wasn't meant to be a copy & paste example. The order by in the over clause is the order in which you apply the row numbers, so it should be whatever you would normally order the results by.
-
notnot about 15 yearsNow that I think about it, this one will duplicate the records which match both keyword and content.
-
Buzz about 15 yearsI am not able to find any way to make this work. In fact, I don't think mysql supports row_number
-
Bretticus over 13 yearsWorks well and makes sense. Thanks!
-
Ultimate Gobblement over 12 yearsI could not seem to get this to work (perhaps because I had not added the third index), but changing the where condition to: rel1 > 0 OR rel2 > 0 solved my problem so thanks.
-
ChrisG about 12 yearsUsing the LIKE statement is not a great way to run searches. First, unless you split strings, you'll only match terms in that exact order, i.e. searching LIKE '%t-shirt red%' will not match 'Red t-shirt' in your database. Second, your query takes longer to execute, since LIKE does a full table scan. -
gontard almost 10 years@ChrisG LIKE does a full table scan when it is used in the FROM clause, not in the SELECT
-
PanPipes almost 8 years@mintywalker should the Order By not be ORDER BY (rel1*1.5)+(rel2) DESC to get the highest score and thus more relevant first? -
Flame over 7 years@PanPipes yes it should be DESC since higher relevance is a better match -
mastazi over 3 years@mintywalker I just wanted to say thanks, this exact query (adapted to our schema) has been chugging along for at least five years now in a community website with tens of thousands of news articles and hundreds of thousands of registered users (and many more unregistered visitors). Always worked perfectly well for our needs, and we never had performance issues.
-
conrad10781 about 3 yearsBy utilizing "HAVING" instead of a WHERE (with the composite index or something else), you run into having to do a full table scan to get your result, so I don't believe this solution scales very well. To be more specific, in an extreme scenario, if you have a table with 10M rows and only 999 match (or n-1 of whatever limit you set), since all rows return results in your query, most albeit with 0's, you not only have to load the entire table, you also have to iterate through all 10M rows.
-
lubosdz about 3 years@conrad10781 The HAVING clause operates only over the matched resultset.
-
conrad10781 about 3 yearscorrect, but literally every record in the table is going to be matched in that query because there is nothing to filter it. Meaning, you're selecting values from the table, but without a WHERE you're retrieving all the records, and HAVING then filters them. To clarify: remove the HAVING statement from your search locally and all records are returned. Imagine that on a table with 10M records. Run an EXPLAIN, and it will probably say "using temporary; using filesort". The WHERE in mintywalker's response allows the records to be filtered first on the server.
-
lubosdz about 3 years@conrad10781 Yes, you are right - without a WHERE clause it scans the whole resultset. The idea was to avoid complex fulltext indexing, which may cause large overhead for write-intensive workloads. Fixing this is possible simply by adding a WHERE clause between FROM and HAVING, but then the whole query no longer looks so simple, and it duplicates the fulltext match. The query above may work fine for small datasets, say up to 10k-100k records; it depends.
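A sketch of the fix lubosdz describes, adding a WHERE with a composite fulltext match so the server filters rows before any post-scan step (table and index names follow the Solution 3 query; the composite index is an extra assumption):

```sql
-- Requires an additional composite index:
-- ALTER TABLE search_table ADD FULLTEXT INDEX ft_both (content_ft, title_ft);
SELECT id,
       MATCH (content_ft) AGAINST ('keyword*' IN BOOLEAN MODE) AS relevance1,
       MATCH (title_ft) AGAINST ('keyword*' IN BOOLEAN MODE) AS relevance2
FROM search_table
WHERE MATCH (content_ft, title_ft) AGAINST ('keyword*' IN BOOLEAN MODE)
ORDER BY (relevance1 * 1.5) + relevance2 DESC
LIMIT 0, 1000;
```

The HAVING clause becomes redundant here, since the WHERE already excludes non-matching rows.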