How to exclude indexes from backups in SQL Server 2008


Solution 1

If you switch over to the full recovery model, you can do this with filegroups, but it's really, really clumsy. You leave the data in the primary filegroup, and put the indexes in a separate (non-default, that's the key) filegroup.
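A minimal sketch of that setup, assuming a hypothetical database `MyDb` and a filegroup named `INDEXES` (all names and paths here are made-up placeholders):

```sql
-- Create a separate, non-default filegroup for the indexes
ALTER DATABASE MyDb ADD FILEGROUP [INDEXES];
ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_ix', FILENAME = 'D:\Data\MyDb_ix.ndf')
    TO FILEGROUP [INDEXES];

-- Put (or rebuild) non-clustered indexes on that filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON [INDEXES];
```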

Then you stagger your backups so that you're doing filegroup backups of the primary every night, and transaction log backups every X minutes.
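That staggered schedule might look something like this (names and paths are placeholders; this assumes the full recovery model from above):

```sql
-- Nightly: back up only the primary filegroup (data, no indexes)
BACKUP DATABASE MyDb
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\MyDb_primary.bak';

-- Every X minutes: transaction log backups
BACKUP LOG MyDb
    TO DISK = 'D:\Backups\MyDb_log.trn';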

When disaster strikes, you restore the primary filegroup by itself. The data is suddenly online, but the indexes are not. However, to get back to normalcy, you'll need to export that data into a new clean database and add indexes from there. You can't bring the database completely online without restoring all of the filegroups, and you can't say "I don't need that other filegroup anymore anyway."
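The restore side of this, sketched with the same placeholder names, uses a partial restore to bring the primary filegroup online without the index filegroup:

```sql
-- Bring just the primary filegroup back online
RESTORE DATABASE MyDb
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backups\MyDb_primary.bak'
    WITH PARTIAL, NORECOVERY;

-- Roll forward the log, then recover
RESTORE LOG MyDb
    FROM DISK = 'D:\Backups\MyDb_log.trn'
    WITH RECOVERY;
```

At this point the data is queryable, but as noted above, you can't fully recover the database without the index filegroup; hence the export-to-a-clean-database step.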

For more about how this works, check out my video tutorial on filegroup restores.

Solution 2

Honestly, you really don't want to do this, even if you can overcome the other issues raised here.

When you restore the backup in an emergency, you don't want to wait for the indexes to rebuild, and you're going to suffer abominable performance until they're rebuilt.

I can't think of a situation where you'd want to restore a backup without indexes, so in all cases you'll really want to back them up at the same time.

You'll likely need to look for other solutions to this problem...

-Adam

Solution 3

It sounds as if this isn't supported. From this bug report info:

There's been a lot of interest in this one, so I'll go into a bit more detail about what is happening behind the scenes, and what it would mean to implement this functionality. Some types of index pages are segregated into separate allocation units, while others are mixed in with the data pages. Where we currently only look at the allocation bitmap to see if an extent is allocated, we would now have to go in and interpret what is stored in each allocation unit. Further, we would no longer be able to just do a linear scan of the data files copying data; we'd be skipping around in the file. All of this interpretation of the data structures would drastically slow down backup.

Restore gets even more interesting, because there are lots of structures that would have to be fixed up to account for the holes in the backup. Otherwise you'd have allocation maps pointing to pages which weren't backed up, and so contain garbage, etc.

So, implementing this would mean that we'd save less data, take longer doing it, and take much longer restoring it. The other facet to consider is that this would take a large amount of engineering effort to get right. While that's not your problem on the surface, consider that it means that other features you may want to see wouldn't get built.

Solution 4

It might be a crazy idea, but here goes.

  1. Drop your non-clustered indexes that take up lots of space
  2. Do a backup
  3. Re-create the indexes you dropped
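Those three steps could be sketched like this (the database, table, and index names are made-up examples; in practice you'd script out every index definition before dropping it):

```sql
-- 1. Drop the large non-clustered index (keep its definition scripted!)
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;

-- 2. Take the (now smaller) backup
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak';

-- 3. Re-create the index afterwards
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);
```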

Of course you can only really do this if your database allows for some down-time in the day.

Also, don't drop your clustered indexes, as SQL Server will waste a lot of time converting these to a heap.

Does buying that extra disk space seem like an easier solution yet?

Have you considered doing compressed backups? This is a new feature of SQL Server 2008, and it may be an option for you.
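For reference, turning that on is a single option on the backup command (paths are placeholders):

```sql
BACKUP DATABASE MyDb
    TO DISK = 'D:\Backups\MyDb.bak'
    WITH COMPRESSION;
```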

Jake McGraw

Updated on September 17, 2022

Comments

  • Jake McGraw
    Jake McGraw over 1 year

    Our nightly full (and periodic differential) backups are becoming quite large, due mostly to the amount of indexes on our tables; roughly half the backup size is comprised of indexes.

    We're using the Simple recovery model for our backups.

    Is there any way, through using FileGroups or some other file-partitioning method, to exclude indexes from the backups?

    It would be nice if this could be extended to full-text catalogs, as well.

  • Bill
    Bill about 15 years
    "you don't want to wait for the indexes to rebuild" very presumptive, IMO
  • Adam Davis
    Adam Davis about 15 years
    Yes, but keep in mind I'm generalizing here. I haven't been convinced that there are more situations where it's better to ditch the indexes and rebuild than there are situations where it's better to backup the indexes and avoid a rebuild. In other words, one case is generally better, and unless a particular situation calls for it, one should err on the side of backing them up. That being said, every situation is different. I'm curious to know how long it takes to rebuild the SO indexes, and how much the performance of the site suffers until they're done (assuming it's up during rebuild).
  • Jake McGraw
    Jake McGraw about 15 years
    Yes, we have turned on compression for backups, but the built-in compression isn't that great. We actually have an end-of-week, non-compressed backup taken and then 7zip it for us devs to bring down; it's about 1/3 the size, but it does take some time to run!
  • Jake McGraw
    Jake McGraw about 15 years
    As for DB downtime, could we really live without both serverfault.com and stackoverflow.com up as much as possible? I shudder at that thought :)
  • Jake McGraw
    Jake McGraw about 15 years
    Hehe, I had moved the non-clustered indexes to their own filegroup; the Simple Recovery model completely stymied me. The indexes were actually larger than the data - how it taunted me! Oh well, perhaps some 3rd party will release a magic bullet hint hint :)
  • Nick Kavadias
    Nick Kavadias about 15 years
I haven't tested it myself, but I suspected as much. It's not very configurable. If you're still looking at compression as an option, take a look at SQL LiteSpeed (expensive, but awesome, and the price is very negotiable) and Red Gate's SQL Backup. Very configurable, with excellent results.
  • Brent Ozar
    Brent Ozar about 15 years
    And maybe, just maybe, you'll get free licensing for it. ;-)
  • Tim van Uum
    Tim van Uum almost 11 years
    This wouldn't be an issue for full-text indexes, would it? Those tend to be pretty large.
  • richardtallent
    richardtallent over 6 years
    Long-term archival backup is the specific use case I'm looking at now that led me to this question. I have a project that is complete and no longer want the database online, but also don't need to clutter up our network drive with a larger backup file than necessary. I don't want to lose the index definitions, but wouldn't mind rebuilding them if I ever had to restore this again.