Oracle Date index is slow. Query is 300 times faster without it


Solution 1

Seems counter-intuitive though--the index slowing things down so much.

Counter-intuitive only if you don't understand how indexes work.

Indexes are good for retrieving individual rows. They are not suited to retrieving large numbers of records. You haven't provided any metrics, but it seems likely your query is touching a large number of rows, in which case a full table scan or other set-based operation will be much more efficient.


Tuning date range queries is tricky, because it's very hard for the database to know how many records lie between the two bounds, no matter how up-to-date our statistics are. (Even more tricky to tune when the date bounds can vary - one day is a different matter from one month or one year.) So often we need to help the optimizer by using our knowledge of our data.

don't think using the "+0" is the best solution going forward...

Why not? People have been using that technique to avoid using an index in a specific query for literally decades.

However, there are more modern solutions. The undocumented cardinality hint is one:

 select /*+ cardinality(big_table,10000) */ 

... should be enough to dissuade the optimizer from using an index - provided you have accurate statistics gathered for all the tables in the query.

Alternatively you can force the optimizer to do a full table scan with ...

 select /*+ full(big_table) */ 
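A sketch of how either hint would look in the original query (the 10000 row estimate is purely illustrative; note that because the query aliases big_table as c, the hint must reference the alias, not the table name):

```sql
-- Illustrative sketch: the query from the question with a hint added.
-- 10000 is a made-up row estimate; adjust it to your data. Because the
-- query aliases big_table as c, the hint must use the alias c.
select /*+ cardinality(c 10000) */   -- or: /*+ full(c) */
    r.range_text as duration_range,
    nvl(count(c.call_duration), 0) as calls,
    nvl(sum(c.call_duration), 0) as total_duration
from call_duration_ranges r
left join big_table c
    on c.call_duration between r.range_lbound and r.range_ubound
group by r.range_text
order by r.range_text;
```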

Anyway, there's nothing you can do to the index to change the way databases work. You could make things faster with partitioning, but I would guess if your organisation had bought the Partitioning option you'd be using it already.
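For illustration, a range-partitioned version of the table might look like the following. This is hypothetical DDL: the column types are guesses based on the query in the question, and interval partitioning needs 11g or later plus the Partitioning licence.

```sql
-- Hypothetical DDL: monthly interval partitioning on the timestamp,
-- so a one-month date filter prunes down to a single partition.
-- Column types are assumptions based on the query in the question.
create table big_table (
    calltimestamp_local  date,
    call_duration        number,
    aaep_src             varchar2(30),
    destinationnumber    varchar2(100)
)
partition by range (calltimestamp_local)
interval (numtoyminterval(1, 'MONTH'))
(partition p_start values less than (date '2014-01-01'));
```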

Solution 2

These are the reasons why using an index can slow down a query:

  1. A full table scan would be faster. This happens if a substantial fraction of the rows has to be retrieved. The exact numbers depend on various factors, but as a rule of thumb, in common situations using an index is slower if you retrieve more than 10-20% of your rows.

  2. Using another index would be even better, because fewer rows are left after the first stage. Using a certain index on a table usually means that other indexes cannot be used.

Now it is the optimizer's job to decide which variant is best. To perform this task, it has to estimate (among other things) how many rows are left after applying certain filtering clauses. This estimate is based on the table's statistics, and is usually quite accurate. It even takes skewed data into account, but it might be off if your statistics are outdated or you have a rather uncommon distribution of data. For example, if you computed your statistics before the data for February was inserted in your example, the optimizer might wrongly conclude that only a few rows (if any) are left after applying the date range filter.
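In that situation, regathering statistics is usually the first thing to try. A minimal sketch using the standard DBMS_STATS package (assuming the table lives in your own schema):

```sql
begin
    dbms_stats.gather_table_stats(
        ownname          => user,          -- assumes the table is in your schema
        tabname          => 'BIG_TABLE',
        estimate_percent => dbms_stats.auto_sample_size,
        cascade          => true           -- refresh index statistics as well
    );
end;
/
```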

Using combined indexes on several columns might also be an option, depending on your data.
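For example, if the query always combines an equality filter on the source column with the timestamp range, a composite index could serve both predicates in one probe (hypothetical index name; columns taken from the question):

```sql
-- Hypothetical: only worthwhile if queries routinely combine
-- an equality filter on aaep_src with a range on the timestamp.
create index idx_big_table_src_ts
    on big_table (aaep_src, calltimestamp_local);
```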

Another note on the "skewed data" issue: there are cases where the optimizer detects skewed data in column A if you have an index on column A, but not if you only have a combined index on columns A and B, because the combination might make the distribution more even. This is one of the few cases where an index on (A, B) does not make an index on A redundant.
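If the skew on a single column matters, you can also ask for a histogram on that column explicitly when gathering statistics. A sketch, where A is a placeholder column name:

```sql
-- Sketch: gather a histogram on column A only (placeholder name),
-- so the optimizer can see the skewed value distribution.
begin
    dbms_stats.gather_table_stats(
        ownname    => user,
        tabname    => 'BIG_TABLE',
        method_opt => 'for columns a size 254'   -- up to 254 buckets
    );
end;
/
```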

APC's answer shows how to use hints to steer the optimizer in the right direction if it still produces wrong plans even with correct statistics.

Author: urbanmojo

Updated on June 05, 2022

Comments

  • urbanmojo
    urbanmojo almost 2 years

    I had an Oracle query as below that took 10 minutes or longer to run:

      select
          r.range_text as duration_range,
          nvl(count(c.call_duration), 0) as calls,
          nvl(sum(call_duration), 0) as total_duration
      from
          call_duration_ranges r
      left join
          big_table c
          on c.call_duration between r.range_lbound and r.range_ubound
          and c.aaep_src = 'MAIN_SOURCE'
          and c.calltimestamp_local >= to_date('01-02-2014 00:00:00', 'dd-MM-yyyy HH24:mi:ss')
          and c.calltimestamp_local <= to_date('28-02-2014 23:59:59', 'dd-MM-yyyy HH24:mi:ss')
          and c.destinationnumber like substr('abc:[email protected]:5060;user=phone', 1, 8) || '%'
      group by
          r.range_text
      order by
          r.range_text
    

    If I changed the date part of the query to:

     (c.calltimestamp_local+0) >= to_date('01-02-2014 00:00:00', 'dd-MM-yyyy HH24:mi:ss')
     AND (c.calltimestamp_local+0) <= to_date('28-02-2014 23:59:59', 'dd-MM-yyyy HH24:mi:ss')
    

    It runs in 2 seconds. I did this based on another post, to avoid using the date index. Seems counter-intuitive though--the index slowing things down so much.

    I ran the explain plan and it seems identical between the original and updated query. The only difference is that the MERGE JOIN operation is 16,269 bytes in the old query and 1,218 bytes in the new query. Cardinality is actually higher in the old query as well. And I don't see an "INDEX" operation in either plan except for the index on the destinationnumber field.

    So why is the index slowing down the query so much? And what can I do to the index--I don't think using the "+0" is the best solution going forward...

    Querying for two days of data, suppressing use of the destinationnumber index (plan columns in all three plans below: Id, Operation, Object, Cost, Bytes, Rows):

    0   SELECT STATEMENT            ALL_ROWS    329382  1218    14
    1   SORT    GROUP BY            329382  1218    14
    2   MERGE JOIN  OUTER           329381  1218    14
    3   SORT    JOIN            4   308 14
    4   TABLE ACCESS    FULL    CALL_DURATION_RANGES    ANALYZED    3   308 14
    5   FILTER                      
    6   SORT    JOIN            329377  65  1
    7   TABLE ACCESS    BY GLOBAL INDEX ROWID   BIG_TABLE   ANALYZED    329376  65  1
    8   INDEX   RANGE SCAN  IDX_CDR_CALLTIMESTAMP_LOCAL ANALYZED    1104        342104
    

    Querying for 2 days using destinationnumber index:

    0   SELECT STATEMENT            ALL_ROWS    11  1218    14
    1   SORT    GROUP BY            11  1218    14
    2   MERGE JOIN  OUTER           10  1218    14
    3   SORT    JOIN            4   308 14
    4   TABLE ACCESS    FULL    CALL_DURATION_RANGES    ANALYZED    3   308 14
    5   FILTER                      
    6   SORT    JOIN            6   65  1
    7   TABLE ACCESS    BY GLOBAL INDEX ROWID   BIG_TABLE   ANALYZED    5   65  1
    8   INDEX   RANGE SCAN  IDX_DESTINATIONNUMBER_PART  ANALYZED    4       4
    

    Querying for one month, suppressing destinationnumber index--full scan:

    0   SELECT STATEMENT            ALL_ROWS    824174  1218    14
    1   SORT    GROUP BY            824174  1218    14
    2   MERGE JOIN  OUTER           824173  1218    14
    3   SORT    JOIN            4   308 14
    4   TABLE ACCESS    FULL    CALL_DURATION_RANGES    ANALYZED    3   308 14
    5   FILTER                      
    6   SORT    JOIN            824169  65  1
    7   PARTITION RANGE ALL         824168  65  1
    8   TABLE ACCESS    FULL    BIG_TABLE   ANALYZED    824168  65  1
    
  • urbanmojo
    urbanmojo about 10 years
    If I look at the explain plan, it will always use the "destinationnumber" index first. If I force it to not use the destinationnumber index, it will use the calltimestamp index if the date range selected is about 3 days or less. Anything more than that it will do a full table scan.
  • urbanmojo
    urbanmojo about 10 years
    Posted explain plans above. At most the query brings back 1.5 million records for a month of "abc:1000", but it takes as long when it only brings back 15,000 records. (Always only 14 rows are returned--the number of rows in "call_duration_ranges".) Wouldn't expect simple counts to take that long.