Favourite performance tuning tricks

Solution 1

Here is the handy-dandy list of things I always give to someone asking me about optimisation.
We mainly use Sybase, but most of the advice will apply across the board.

SQL Server, for example, comes with a host of performance monitoring / tuning bits, but if you don't have anything like that (and maybe even if you do) then I would consider the following...

99% of problems I have seen are caused by putting too many tables in a join. The fix for this is to do half the join (with some of the tables) and cache the results in a temporary table. Then do the rest of the query joining on that temporary table.
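As a sketch of that split (table and column names here are invented for illustration):

```sql
-- Stage 1: join the first few tables and cache the result
-- in a temporary table.
SELECT o.order_id, o.customer_id, c.region_id
INTO   #order_stage
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= '2008-01-01'

-- Stage 2: join the cached rows against the remaining tables.
-- The optimiser now sees two small joins instead of one big one.
SELECT s.order_id, r.region_name, p.product_name
FROM   #order_stage s
JOIN   regions     r ON r.region_id   = s.region_id
JOIN   order_lines l ON l.order_id    = s.order_id
JOIN   products    p ON p.product_id  = l.product_id

DROP TABLE #order_stage
```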

Query Optimisation Checklist

  • Run UPDATE STATISTICS on the underlying tables
    • Many systems run this as a scheduled weekly job
  • Delete records from underlying tables (possibly archive the deleted records)
    • Consider doing this automatically once a day or once a week.
  • Rebuild Indexes
  • Rebuild Tables (bcp data out/in)
  • Dump / Reload the database (drastic, but might fix corruption)
  • Build new, more appropriate index
  • Run DBCC to see if there is possible corruption in the database
  • Locks / Deadlocks
    • Ensure no other processes running in database
      • Especially DBCC
    • Are you using row or page level locking?
    • Lock the tables exclusively before starting the query
    • Check that all processes are accessing tables in the same order
  • Are indices being used appropriately?
    • Joins will only use an index if both expressions are exactly the same data type
    • Index will only be used if the first field(s) on the index are matched in the query
    • Are clustered indices used where appropriate?
      • range data
      • WHERE field between value1 and value2
  • Small Joins are Nice Joins
    • By default the optimiser will only consider the tables 4 at a time.
    • This means that in joins with more than 4 tables, it has a good chance of choosing a non-optimal query plan
  • Break up the Join
    • Can you break up the join?
    • Pre-select foreign keys into a temporary table
    • Do half the join and put results in a temporary table
  • Are you using the right kind of temporary table?
    • #temp tables may perform much better than @table variables with large volumes (thousands of rows).
  • Maintain Summary Tables
    • Build with triggers on the underlying tables
    • Build daily / hourly / etc.
    • Build ad-hoc
    • Build incrementally or teardown / rebuild
  • See what the query plan is with SET SHOWPLAN ON
  • See what’s actually happening with SET STATISTICS IO ON
  • Force an index using a table hint, e.g. FROM mytable (index myindex)
  • Force the table order using SET FORCEPLAN ON
  • Parameter Sniffing:
    • Break Stored Procedure into 2
    • call proc2 from proc1
    • allows optimiser to choose index in proc2 if @parameter has been changed by proc1
  • Can you improve your hardware?
  • What time are you running? Is there a quieter time?
  • Is Replication Server (or another non-stop process) running? Can you suspend it, or run it only e.g. hourly?
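The stored-procedure split described under Parameter Sniffing above might be sketched like this (procedure, table, and parameter names are invented):

```sql
-- Outer procedure: may adjust the parameter before querying.
CREATE PROCEDURE proc1 @start_date datetime
AS
BEGIN
    -- e.g. normalise or default the parameter
    IF @start_date IS NULL
        SET @start_date = '2008-01-01'

    -- The inner proc gets its own plan, compiled against the
    -- value @start_date actually holds at this point rather than
    -- the value proc1 was originally called with.
    EXEC proc2 @start_date
END
GO

CREATE PROCEDURE proc2 @start_date datetime
AS
BEGIN
    SELECT order_id, order_date
    FROM   orders
    WHERE  order_date >= @start_date
END
GO
```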

Solution 2

  1. Have a pretty good idea of the optimal path of running the query in your head.
  2. Check the query plan - always.
  3. Turn on STATS, so that you can examine both IO and CPU performance. Focus on driving those numbers down, not necessarily the query time (as that can be influenced by other activity, cache, etc.).
  4. Look for large numbers of rows coming into an operator, but small numbers coming out. Usually, an index would help by limiting the number of rows coming in (which saves disk reads).
  5. Focus on the largest cost subtree first. Changing that subtree can often change the entire query plan.
  6. Common problems I've seen are:
    • If there are a lot of joins, sometimes SQL Server will choose to expand the joins and then apply the WHERE clauses. You can usually fix this by moving the WHERE conditions into the JOIN clause, or into a derived table with the conditions inlined. Views can cause the same problems.
    • Suboptimal joins (LOOP vs HASH vs MERGE). My rule of thumb is to use a LOOP join when the top row has very few rows compared to the bottom, a MERGE when the sets are roughly equal and ordered, and a HASH for everything else. Adding a join hint will let you test your theory.
    • Parameter sniffing. If you ran the stored proc with unrealistic values at first (say, for testing), then the cached query plan may be suboptimal for your production values. Running again WITH RECOMPILE should verify this. For some stored procs, especially those that deal with varying sized ranges (say, all dates between today and yesterday - which would entail an INDEX SEEK - or, all dates between last year and this year - which would be better off with an INDEX SCAN) you may have to run it WITH RECOMPILE every time.
    • Bad indentation... Okay, so SQL Server doesn't have an issue with this, but I sure find it impossible to understand a query until I've fixed up the formatting.
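As a sketch of the first point above — pushing WHERE conditions into a derived table so they can be applied before the join is expanded (table and column names invented):

```sql
-- Original shape: the filter may be applied only after the
-- join has been expanded.
SELECT   c.name, SUM(o.amount)
FROM     customers c
JOIN     orders    o ON o.customer_id = c.customer_id
WHERE    o.order_date >= '2008-01-01'
GROUP BY c.name

-- Rewritten: a derived table with the condition inlined, which
-- encourages the optimizer to filter orders before joining.
SELECT   c.name, SUM(f.amount)
FROM     customers c
JOIN     (SELECT customer_id, amount
          FROM   orders
          WHERE  order_date >= '2008-01-01') AS f
         ON f.customer_id = c.customer_id
GROUP BY c.name
```

A join hint (e.g. INNER HASH JOIN in place of a plain INNER JOIN) can be used in the same spirit to test the LOOP/HASH/MERGE rule of thumb from the second point.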

Solution 3

Slightly off topic but if you have control over these issues...
High level and High Impact.

  • For high-IO environments, make sure your disks are configured as RAID 10, RAID 0+1, or some other nested combination of RAID 1 and RAID 0.
  • Don't use drives slower than 15K RPM.
  • Make sure your disks are used only for your database, i.e. no log files, no OS.
  • Turn off auto-grow (or similar features). Pre-allocate all the storage the database is anticipated to need, not just what it currently uses.
  • Design your schema and indexes for the types of queries you run.
  • If it's a log-type table (insert only) and it must be in the DB, don't index it.
  • If you're doing a lot of reporting (complex selects with many joins), look at creating a data warehouse with a star or snowflake schema.
  • Don't be afraid of replicating data in exchange for performance!

Solution 4

CREATE INDEX

Ensure there are indexes available for your WHERE and JOIN clauses. This will speed data access greatly.

If your environment is a data mart or warehouse, indexes should abound for almost any conceivable query.

In a transactional environment, the number of indexes should be lower and their definitions more strategic so that index maintenance doesn't drag down resources. (Index maintenance is when the leaves of an index must be changed to reflect a change in the underlying table, as with INSERT, UPDATE, and DELETE operations.)

Also, be mindful of the order of fields in the index - the more selective (higher cardinality) a field, the earlier in the index it should appear. For example, say you're querying for used automobiles:

SELECT   i.make, i.model, i.price
FROM     dbo.inventory i
WHERE    i.color = 'red'
  AND    i.price BETWEEN 15000 AND 18000

Price generally has higher cardinality. There may be only a few dozen colors available, but quite possibly thousands of different asking prices.

Of these index choices, idx01 provides the faster path to satisfy the query:

CREATE INDEX idx01 ON dbo.inventory (price, color)
CREATE INDEX idx02 ON dbo.inventory (color, price)

This is because fewer cars will satisfy the price point than the color choice, giving the query engine far less data to analyze.

I've been known to have two very similar indexes differing only in the field order to speed queries (firstname, lastname) in one and (lastname, firstname) in the other.
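Such a near-duplicate pair might look like this (table and index names assumed):

```sql
-- Two indexes differing only in field order: each serves queries
-- that filter or sort primarily on its own leading column.
CREATE INDEX idx_name_first_last ON dbo.person (firstname, lastname)
CREATE INDEX idx_name_last_first ON dbo.person (lastname, firstname)
```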

Solution 5

A trick I recently learned is that SQL Server can update local variables as well as fields, in an update statement.

UPDATE table
SET @variable = column = @variable + otherColumn

Or the more readable version:

UPDATE table
SET
    @variable = @variable + otherColumn,
    column = @variable

I've used this to replace complicated cursors/joins when implementing recursive calculations, and also gained a lot in performance.
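A minimal running-total sketch using the trick (table and column names are illustrative; note that the row order the UPDATE walks is not guaranteed by the engine, which is why the linked article discusses clustered indexes and other safeguards):

```sql
-- Illustrative only: compute a running total of daily sales.
CREATE TABLE #sales
(
    sale_date     datetime PRIMARY KEY CLUSTERED,
    amount        money,
    running_total money
)

INSERT INTO #sales (sale_date, amount) VALUES ('2008-10-01', 100)
INSERT INTO #sales (sale_date, amount) VALUES ('2008-10-02',  50)
INSERT INTO #sales (sale_date, amount) VALUES ('2008-10-03',  25)

DECLARE @total money
SET @total = 0

-- The "quirky update": @total accumulates row by row, and each
-- row's running_total column is set from the updated variable.
UPDATE #sales
SET @total = running_total = @total + amount

SELECT sale_date, amount, running_total FROM #sales

DROP TABLE #sales
```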

Here are details and example code that made fantastic improvements in performance: http://geekswithblogs.net/Rhames/archive/2008/10/28/calculating-running-totals-in-sql-server-2005---the-optimal.aspx

Author: Seibar

Updated on January 23, 2020

Comments

  • Seibar
    Seibar over 4 years

    When you have a query or stored procedure that needs performance tuning, what are some of the first things you try?

  • John Stauffer
    John Stauffer over 15 years
    Did you actually benchmark this one? If SQL Server is doing what is reasonable (using a hash algorithm to locate the stored proc), then this would make no difference. In fact, if SQL Server wasn't doing that, it seems like system performance would stink (since it presumably calls its own procs).
  • Sklivvz
    Sklivvz over 15 years
    @@ROWCOUNT is set anyways. NOCOUNT disables the "xx rows affected" statements.
  • Sklivvz
    Sklivvz over 15 years
    Yeah, cursors are a curse! ;)
  • puzzledbeginner
    puzzledbeginner over 15 years
    To which bit are you referring?
  • Grant Johnson
    Grant Johnson over 15 years
    Yes, but this can lead to weird bugs which are VERY hard to find.
  • user3001801
    user3001801 over 15 years
    Ugh. Don't throw that out unqualified like that. Cursors are like guns. They aren't bad by themselves, it's just that people do really bad things with them.
  • user3001801
    user3001801 over 15 years
    Does this really ever make an appreciable difference in performance?
  • user3001801
    user3001801 over 15 years
    I think this falls in the bucket of premature optimization. It is probably a good practice to avoid confusion for people, but as an optimization tip... D-
  • travis
    travis over 15 years
    Yeah, then the count isn't automatically calculated every time a SQL statement is run. It's easy enough to bench a query with and without to see that it does make a difference.
  • Tom H
    Tom H over 15 years
    The count is tracked in SQL Server anyway. Any performance difference that you see is because the counts have to go over the network to your front end. If you're doing a single SELECT it won't make an appreciable difference. If you have a loop with 100000 inserts it's a lot extra over the network.
  • eksortso
    eksortso almost 15 years
    That's good for MySQL, but the question was tagged "sqlserver". Still, it's a good thing to do that. The analogous thing to do in SSMS is to use "Display Estimated Execution Plan" and "Include Actual Execution Plan". If you can eliminate huge table scans and use clustered index seeks, then you're well on your way to optimal performance.
  • mwigdahl
    mwigdahl almost 13 years
    +1 for the inclusion of bad indentation. Formatting is key! :)
  • Mark Sowul
    Mark Sowul over 11 years
    NOLOCK is only for queries for which you don't care about correct results
  • Esen
    Esen about 11 years
    I have seen this unexplained behavior.
  • Admin
    Admin over 9 years
    This should be used judiciously, not habitually. Locking is not evil, just misunderstood.
  • sheldonhull
    sheldonhull over 8 years
    This is some cool stuff, but I wish you'd have some references for some claims. For example: I'd never heard the optimize considers only 4 tables a time in a join. I don't understand how this could be right. Could you provide some references for that in particular? I'd love to see where you are getting this.