How do you clear the SQL Server transaction log?


Solution 1

Making a log file smaller should really be reserved for scenarios where the log encountered unexpected growth that you do not expect to happen again. If the log file will grow to the same size again, not much is accomplished by shrinking it temporarily. Now, depending on the recovery goals of your database, these are the actions you should take.

First, take a full backup

Never make any changes to your database without ensuring you can restore it should something go wrong.

If you care about point-in-time recovery

(And by point-in-time recovery, I mean you care about being able to restore to anything other than a full or differential backup.)

Presumably your database is in FULL recovery mode. If not, then make sure it is:

ALTER DATABASE testdb SET RECOVERY FULL;

Even if you are taking regular full backups, the log file will grow and grow until you perform a log backup - this is for your protection, not to needlessly eat away at your disk space. You should be performing these log backups quite frequently, according to your recovery objectives. For example, if you have a business rule that states you can afford to lose no more than 15 minutes of data in the event of a disaster, you should have a job that backs up the log every 15 minutes. Here is a script that will generate timestamped file names based on the current time (but you can also do this with maintenance plans etc., just don't choose any of the shrink options in maintenance plans, they're awful).

DECLARE @path NVARCHAR(255) = N'\\backup_share\log\testdb_' 
  + CONVERT(CHAR(8), GETDATE(), 112) + '_'
  + REPLACE(CONVERT(CHAR(8), GETDATE(), 108),':','')
  + '.trn';

BACKUP LOG testdb TO DISK = @path WITH INIT, COMPRESSION;

Note that \\backup_share\ should be on a different machine that represents a different underlying storage device. Backing these up to the same machine (or to a different machine that uses the same underlying disks, or a different VM that's on the same physical host) does not really help you, since if the machine blows up, you've lost your database and its backups. Depending on your network infrastructure it may make more sense to back up locally and then transfer the backups to a different location behind the scenes; in either case, you want to get them off the primary database machine as quickly as possible.

Now, once you have regular log backups running, it should be safe to shrink the log file to something more sensible than whatever it's blown up to now. This does not mean running DBCC SHRINKFILE over and over again until the log file is 1 MB - even if you are backing up the log frequently, it still needs to accommodate the sum of any concurrent transactions that can occur. Log file autogrow events are expensive, since SQL Server has to zero out the files (unlike data files when instant file initialization is enabled), and user transactions have to wait while this happens. You want to do this grow-shrink-grow-shrink routine as little as possible, and you certainly don't want to make your users pay for it.

Note that you may need to back up the log twice before a shrink is possible (thanks Robert).
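As a sketch of that sequence (reusing the hypothetical testdb and backup share from above; the logical file name testdb_log and the file names are assumptions - check sys.database_files for your actual logical name):

BACKUP LOG testdb TO DISK = N'\\backup_share\log\testdb_shrink_1.trn' WITH INIT, COMPRESSION;
BACKUP LOG testdb TO DISK = N'\\backup_share\log\testdb_shrink_2.trn' WITH INIT, COMPRESSION;
-- the second backup cycles the active portion of the log back toward the
-- start of the file, so the shrink can reclaim the space at the end:
DBCC SHRINKFILE(testdb_log, 200);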

So, you need to come up with a practical size for your log file. Nobody here can tell you what that is without knowing a lot more about your system, but if you've been frequently shrinking the log file and it has been growing again, a good watermark is probably 10-50% higher than the largest it's been. Let's say that comes to 200 MB, and you want any subsequent autogrowth events to be 50 MB; then you can adjust the log file size this way:

USE [master];
GO
ALTER DATABASE yourdb 
  MODIFY FILE
  (NAME = yourdb_log, SIZE = 200MB, FILEGROWTH = 50MB);
GO

Note that if the log file is currently > 200 MB, you may need to run this first:

USE yourdb;
GO
DBCC SHRINKFILE(yourdb_log, 200);
GO

If you don't care about point-in-time recovery

If this is a test database, and you don't care about point-in-time recovery, then you should make sure that your database is in SIMPLE recovery mode.

ALTER DATABASE testdb SET RECOVERY SIMPLE;

Putting the database in SIMPLE recovery mode will make sure that SQL Server re-uses portions of the log file (essentially phasing out inactive transactions) instead of growing to keep a record of all transactions (like FULL recovery does until you back up the log). CHECKPOINT events will help control the log and make sure that it doesn't need to grow unless you generate a lot of t-log activity between CHECKPOINTs.

Next, you should make absolute sure that this log growth was truly due to an abnormal event (say, an annual spring cleaning or rebuilding your biggest indexes), and not due to normal, everyday usage. If you shrink the log file to a ridiculously small size, and SQL Server just has to grow it again to accommodate your normal activity, what did you gain? Were you able to make use of that disk space you freed up only temporarily? If you need an immediate fix, then you can run the following:

USE yourdb;
GO
CHECKPOINT;
GO
CHECKPOINT; -- run twice to ensure file wrap-around
GO
DBCC SHRINKFILE(yourdb_log, 200); -- target size in MB
GO

Otherwise, set an appropriate size and growth rate. As per the example in the point-in-time recovery case, you can use the same code and logic to determine what file size is appropriate and set reasonable autogrowth parameters.

Some things you don't want to do

  • Back up the log with TRUNCATE_ONLY option and then SHRINKFILE. For one, this TRUNCATE_ONLY option has been deprecated and is no longer available in current versions of SQL Server. Second, if you are in FULL recovery model, this will destroy your log chain and require a new, full backup.

  • Detach the database, delete the log file, and re-attach. I can't emphasize enough how dangerous this can be. Your database may not come back up, it may come up as suspect, you may have to revert to a backup (if you have one), and so on.

  • Use the "shrink database" option. DBCC SHRINKDATABASE and the maintenance plan option to do the same are bad ideas, especially if you really only need to resolve a log problem issue. Target the file you want to adjust and adjust it independently, using DBCC SHRINKFILE or ALTER DATABASE ... MODIFY FILE (examples above).

  • Shrink the log file to 1 MB. This looks tempting because, hey, SQL Server will let me do it in certain scenarios, and look at all the space it frees! Unless your database is read only (and if it is, you should mark it as such using ALTER DATABASE), this will just lead to many unnecessary growth events, as the log has to accommodate current transactions regardless of the recovery model. What is the point of freeing up that space temporarily, just so SQL Server can take it back slowly and painfully?

  • Create a second log file. This will provide temporary relief for the drive that has filled up, but it is like trying to fix a punctured lung with a band-aid. You should deal with the problematic log file directly instead of just adding another potential problem. Other than redirecting some transaction log activity to a different drive, a second log file really does nothing for you (unlike a second data file), since only one of the files can ever be used at a time. Paul Randal also explains why multiple log files can bite you later.

Be proactive

Instead of shrinking your log file to some small amount and letting it constantly autogrow at a small rate on its own, set it to some reasonably large size (one that will accommodate the sum of your largest set of concurrent transactions) and set a reasonable autogrow setting as a fallback, so that it doesn't have to grow multiple times to satisfy single transactions and so that it will be relatively rare for it to ever have to grow during normal business operations.

The worst possible settings here are 1 MB growth or 10% growth. Funnily enough, these are the defaults for SQL Server (which I've complained about and asked to have changed, to no avail) - 1 MB for data files, and 10% for log files. The former is much too small in this day and age, and the latter leads to longer and longer events every time (say, your log file is 500 MB, first growth is 50 MB, next growth is 55 MB, next growth is 60.5 MB, etc. - and on slow I/O, believe me, you will really notice this curve).
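If you want to check which of your databases are still sitting on those defaults, a quick query against sys.master_files will show the current settings (size and growth are stored in 8 KB pages unless growth is a percentage):

SELECT DB_NAME(database_id) AS database_name,
       name AS file_name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CONVERT(VARCHAR(12), growth) + ' %'
            ELSE CONVERT(VARCHAR(12), growth * 8 / 1024) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE type_desc = 'LOG';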

Further reading

Please don't stop here; while much of the advice you see out there about shrinking log files is inherently bad and even potentially disastrous, the following resources come from people who care more about data integrity than about freeing up disk space.

A blog post I wrote in 2009, when I saw a few "here's how to shrink the log file" posts spring up.

A blog post Brent Ozar wrote four years ago, pointing to multiple resources, in response to a SQL Server Magazine article that should not have been published.

A blog post by Paul Randal explaining why t-log maintenance is important and why you shouldn't shrink your data files, either.

Mike Walsh has a great answer covering some of these aspects too, including reasons why you might not be able to shrink your log file immediately.

Solution 2

-- DON'T FORGET TO BACK UP THE DB FIRST :D


USE AdventureWorks2008R2;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks2008R2
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks2008R2_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks2008R2
SET RECOVERY FULL;
GO

From: DBCC SHRINKFILE (Transact-SQL)

You may want to back up first.

Solution 3

DISCLAIMER: Please read comments below carefully, and I assume you've already read the accepted answer. As I said nearly 5 years ago:

if anyone has any comments to add for situations when this is NOT an adequate or optimal solution then please comment below


  • Right click on the database name.

  • Select Tasks → Shrink → Database

  • Then click OK!

I usually open the Windows Explorer directory containing the database files, so I can immediately see the effect.

I was actually quite surprised this worked! Normally I've used DBCC before, but I just tried that and it didn't shrink anything, so I tried the GUI (2005) and it worked great - freeing up 17 GB in 10 seconds.

In Full recovery mode this might not work, so you have to either back up the log first, or change to Simple recovery, then shrink the file. [thanks @onupdatecascade for this]

--

PS: I appreciate what some have commented regarding the dangers of this, but in my environment I didn't have any issues doing this myself especially since I always do a full backup first. So please take into consideration what your environment is, and how this affects your backup strategy and job security before continuing. All I was doing was pointing people to a feature provided by Microsoft!

Solution 4

Below is a script to shrink the transaction log, but I'd definitely recommend backing up the transaction log before shrinking it.

If you just shrink the file, you are going to lose a ton of data that may be a lifesaver in case of disaster. The transaction log contains a lot of useful data that can be read using a third-party transaction log reader (it can be read manually, but only with extreme effort).

The transaction log is also a must for point-in-time recovery, so don't just throw it away; make sure you back it up beforehand.


USE DATABASE_NAME;
GO

ALTER DATABASE DATABASE_NAME
SET RECOVERY SIMPLE;
GO
--First parameter is log file name and second is size in MB
DBCC SHRINKFILE (DATABASE_NAME_Log, 1);

ALTER DATABASE DATABASE_NAME
SET RECOVERY FULL;
GO

You may get an error that looks like this when executing the commands above:

"Cannot shrink log file (log file name) because the logical log file located at the end of the file is in use"

This means that the transaction log is in use. In this case, try executing the command several times in a row, or find a way to reduce database activity.
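Rather than blindly retrying, you can check what is pinning the log by querying the log_reuse_wait_desc column in sys.databases (this diagnostic step isn't part of the original script, but it tells you why the log can't be cleared):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DATABASE_NAME';
-- ACTIVE_TRANSACTION: an open transaction is holding the log
-- LOG_BACKUP: the log must be backed up before it can be cleared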

Solution 5

Here is a simple, very inelegant, and potentially dangerous way.

  1. Back up the DB
  2. Detach the DB
  3. Rename the log file
  4. Attach the DB
  5. A new log file will be recreated
  6. Delete the renamed log file.

I'm guessing that you are not doing log backups (which truncate the log). My advice is to change the recovery model from full to simple. This will prevent log bloat.

Author: Brendan Weinstein

Updated on January 03, 2020

Comments

  • Brendan Weinstein
    Brendan Weinstein over 4 years

    I'm not a SQL expert, and I'm reminded of the fact every time I need to do something beyond the basics. I have a test database that is not large in size, but the transaction log definitely is. How do I clear out the transaction log?

  • Johnno Nolan
    Johnno Nolan over 15 years
    ...but I have done it dozens of times without issue. perhaps you could explain why the db may not re-attach.
  • mrdenny
    mrdenny over 15 years
    I have on occasion (not very often) seen SQL Server fail to attach the database back when the log file has been deleted. This leaves you with a useless MDF file. There are several possibilities that can cause the problem. Transactions pending rollback come to mind.
  • aruno
    aruno about 15 years
    glad to hear it worked for you too! if anyone has any comments to add for situations when this is NOT an adequate or optimal solution then please comment below.
  • onupdatecascade
    onupdatecascade almost 15 years
    In Full recovery mode this might not work, so you have to either back up the log first, or change to Simple recovery, then shrink the file.
  • Daniel Noworyta
    Daniel Noworyta almost 15 years
    This is the way that I clear log files on my dev boxes. Prod environments with all of the associated backup strategies etc I leave to the DBA's :-)
  • aruno
    aruno almost 15 years
    @onupdatecascade - good call on full recovery trick. had another database with a huge log : switched to simple, then shrink database and switched back to full. log file down to 500kb!
  • Vikram
    Vikram about 14 years
    The simple shrink command from the SQL GUI does not seem to work for me at all. I have tried it several times without any reduction in the size of the transaction log. Am I missing something?
  • aruno
    aruno about 14 years
    did you turn off 'full recovery' mode and change it to 'simple' mode? disclaimer: MAKE A BACKUP first if you do this. I've never had an issue but would hate for you to lose data
  • Martin Smith
    Martin Smith over 13 years
    +1 For being the first answer to mention that this may not be a good idea! The OP specifies a test database but it is a point well worth making for the more general case.
  • Munish Goyal
    Munish Goyal over 13 years
    I don't need the transaction log at all; I'm just working with some scratch data, and I don't need any type of recovery. Whatever data I value, I just copy to a new DB or some such thing. Can I work without a transaction log? It grows too big, and if I delete data out of a large table it takes forever because it is recording to the log. If I restrict its size, then it just sits full; how do I restrict it or turn it off totally?
  • Shaul Behr
    Shaul Behr over 12 years
    +1 - Inelegant or not, this method has got me out of hot water a couple of times with database logs that have filled the entire disk, such that even a shrink command can't run.
  • Paul
    Paul almost 11 years
    Is there not a risk of uncheckpointed transactions existing in the log?
  • Michael K. Campbell
    Michael K. Campbell almost 11 years
    Sorry. But this answer simply could NOT be MORE wrong. By shrinking the database you WILL grow the transaction log file. ANY time you move data around in a SQL Server database, you'll require logging - bloating the log file. To decrease log size, either set the DB to Simple Recovery OR (if you care/need logged data - and you almost always do in production) backup the log. More Details in these simple, free, videos: sqlservervideos.com/video/logging-essentials sqlservervideos.com/video/sql2528-log-files
  • Adir D
    Adir D almost 11 years
    Wow, kudos for getting 1300+ rep for this answer, but it really is terrible advice.
  • Adir D
    Adir D almost 11 years
    Setting the recovery mode to simple will not, on its own, magically shrink the transaction log.
  • Adir D
    Adir D almost 11 years
    Just setting the database to simple won't shrink the log file, in whatever state it's currently in. It just may help prevent it from growing further (but it still could).
  • Adir D
    Adir D almost 11 years
    I agree with this tactic, but it should be reserved for cases where the log has blown up due to some unforeseen and/or extraordinary event. If you set up a job to do this every week, you're doing it very, very wrong.
  • Adir D
    Adir D almost 11 years
    TRUNCATE_ONLY is no longer an option in current versions of SQL Server, and it's not recommended on versions that do support it (see Rachel's answer).
  • Adir D
    Adir D almost 11 years
    I agree with your answer, except for the , 1) part. The problem is that if you shrink it to 1 MB, the growth events leading to a normal log size will be quite costly, and there will be many of them if the growth rate is left to the default of 10%.
  • Robert L Davis
    Robert L Davis almost 11 years
    Point-in-time recovery isn't the only reason to use full recovery model. The main reason is to prevent data loss. Your potential for data loss is the length between backups. If you're only doing a daily backup, your potential for data loss is 24 hours. If you then add log backups every half hour, your potential for data loss becomes 30 minutes. Additionally, log backups are required to perform any sort of piecemeal restore (like to recover from corruption).
  • Robert L Davis
    Robert L Davis almost 11 years
    That aside, this is the most complete and correct answer given on this page.
  • Adir D
    Adir D almost 11 years
    @Robert thanks, I added clarification to what I mean by the two different categories.
  • Robert L Davis
    Robert L Davis almost 11 years
    I would also want to add that clearing the log is done by backing up the log (in full or bulk-logged recovery) or a checkpoint (in simple recovery). However, if you are in a situation where you must shrink the log file, that's not enough. You need to cause the currently active VLF to cycle back to the start of the log file. You can force this in SQL 2008 and newer by issuing two log backups or checkpoints back-to-back. The first one clears it and the second one cycles it back to the start of the file.
  • Robert L Davis
    Robert L Davis almost 11 years
    In additional to what Michael and Aaron said, if you are switching back and forth between full and simple recovery model, you shouldn't be allowed to touch SQL Server until you've at least learned the basics.
  • Jonathan
    Jonathan almost 11 years
    @Aaron Not on its own, no. I assumed that the OP would be using their test database, and therefore "the transaction log will very shortly shrink", but you are correct in that it's more of a side effect: a recovery model of simple will probably make you end up with a shrunken transaction log soon
  • Question3CPO
    Question3CPO over 10 years
    @AaronBertrand Thanks for this; I had thought that shrinking the database was a bad idea, and sure enough, you (and Brent) confirmed it.
  • Doug_Ivison
    Doug_Ivison over 10 years
    "Simple...and never fill up again" -- not true. I've seen it happen (in the past 48 hours) on a database where the Recovery Model was set to "SIMPLE". The logfile's filegrowth was set to "restricted", and we'd been doing some immense activity on it... I understand that it was an unusual situation. (In our situation, where we had plenty of disc space, we increased the logfile size, and set logfile filegrowth to "unrestricted"... which by the way --interface bug-- shows up, after the change, as "restricted" with a maxsize of 2,097,152 MB.)
  • dburges
    dburges over 10 years
    Worst advice ever. This is actively dangerous advice and should be removed.
  • Jonathan
    Jonathan over 10 years
    @Doug_Ivison Yes, the transaction log will have open transactions in it, but they will be removed in simple mode once a checkpoint has taken place. This answer is only intended as a quick "my development/test box has a big transaction log, and I want it to go away so I don't need to worry about it too often", rather than ever intended to go into a production environment. To re-iterate: Do not do this in production.
  • Doug_Ivison
    Doug_Ivison over 10 years
    That's all true, and I get that it was a development-only quick approach. Why I commented: until it happened to me, I actually thought the simple recovery model could NEVER fill up... and I think it took me longer to figure out / resolve, while I came to understand that unusually large transactions can do that.
  • Doug_Ivison
    Doug_Ivison over 10 years
    I used to think that recovery model=simple meant no logging. Now that I understand that THERE IS SOME LOGGING, even while the recovery model is set to simple... I wonder why Mgmt Studio does not offer Backup type=Log, for recovery model=simple.
  • Adir D
    Adir D over 10 years
    @Doug_Ivison because at any point, log records could be purged. What would be the point of allowing you to backup a log which is incomplete? In simple recovery, the log is only really used to allow for rollbacks of transactions. Once a transaction has been committed or rolled back, the next second it could be gone from the log.
  • bksi
    bksi over 10 years
    It works for me using MSSQL Server 2005 Standard edition
  • aruno
    aruno over 10 years
    @RobertLDavis whoever said I switched back? ;-)
  • Jaques
    Jaques over 9 years
    This might also be fine for smaller DB's, but if your have a 3 or 4 TB DB, it might not be the best solution.
  • Newclique
    Newclique over 9 years
    Here is an exaggeration to demonstrate what is happening and why shrinking is absolutely critical on a periodic basis: Record A is changed 1 million times before a backup is done. What is in the log? 999,999 pieces of data that are irrelevant. If the logs are never shrunk you will never know what the true operating expense of the database is. Also, you are hogging valuable resources on a SAN, most likely. Shrinking is good maintenance and keeps you in touch with your environment. Show me someone who thinks you should never shrink and I'll show you someone ignoring their environment.
  • ripvlan
    ripvlan almost 9 years
    Never ever delete the transaction log. Part of your data is in the Log. Delete it and database will become corrupt. I don't have rep to down vote.
  • Omid-RH
    Omid-RH over 8 years
    thanks, I didn't expect to spend much time on this, and your answer was the best for me :)
  • ripvlan
    ripvlan over 8 years
    I should have added - If you delete the TX log - Update Resume!
  • JsonStatham
    JsonStatham over 7 years
    This seems ok if you have been developing a system for a long time and loading/delete thousands of records during the dev period. Then when you want to use this database to deploy to live, the testing/development data that has been logged is redundant and therefore doesn't matter if its lost, no?
  • Jodrell
    Jodrell over 7 years
    @RobertLDavis this is one the top most complete and correct answers on the site.
  • Sir Crispalot
    Sir Crispalot about 7 years
    @AaronBertrand We recently lost our connection to an off-site log shipping disk, so the logs grew rapidly until we were able to restore the connection and catch up. Is this considered a valid "unexpected growth" scenario as you have described?
  • Adir D
    Adir D about 7 years
    @SirCrispalot Of course, unless you expect to lose the off-site disk regularly, in which case you should either log ship elsewhere first and then pass it on (so that production doesn't depend on the reliability of the off-site disk), or stop using an unreliable off-site disk. :-)
  • Vildan
    Vildan almost 7 years
    This approach sets recover type to "FULL", even if recovery type was something else before
  • Nelda.techspiress
    Nelda.techspiress almost 7 years
    Yes development environments may not care about data integrity as is my case. But I'd like to know how you rename the log file? I can't find the place in SSMS to do that and Server12 doesn't give me access to the log directory.
  • Hilary
    Hilary almost 7 years
    Under the category "If you don't care about point in time recovery..." - DEVELOPMENT - NOT LIVE - database - we had a rogue transaction holding onto the transaction log. An hour after cancellation, it was still logging AND was sitting at twice the allowed logfile limit for the repository (obviously an incorrect setup parameter to be addressed). We used DBCC OPENTRAN to identify the spid of the transaction holding onto the log and kill it. The log shrank after that. Was a last resort for us and, as said before, likely only viable in DEV/TEST environments where point-in-time recovery is not required.
  • JGilmartin
    JGilmartin over 6 years
    this is perfectly acceptable for any dev/test DB, which are 90% of the time the DBs we're having to work with
  • Bizhan
    Bizhan over 6 years
    Worked like magic! Note that the name of the log file is actually its logical name (which can be found in db->properties->files)
  • zinczinc
    zinczinc about 6 years
    @Aaron Bertrand: I voted this answer down because it does not concisely and precisely answer the question, which by the way, is very clear. Instead of answering the question first, the author starts with a lecture that drowns the reader. IMHO, it would have been better to 1) answer the question exactly, to the point, then 2) explain the risks.
  • Adir D
    Adir D about 6 years
    @zinczinc Ok, thank you for your feedback. The problem I see with putting the answer first and the explanation later is that they will never read the important parts. The lecture I drown the reader with is actually far more important than the answer at the end, and IMHO the background I provide is pretty important to making those choices. But hey, if you want to submit a one-line answer because you think that is better for the OP, please feel free to use that portion of my answer to make a better one we can all learn from.
  • Thomas Franz
    Thomas Franz over 5 years
    thanks for the hint regarding the double CHECKPOINT; I run it always only once and wondered, why my logfile (SIMPLE RECOVERY MODE) remains big...
  • Cesar
    Cesar almost 5 years
    I'd include a BACKUP DATABASE clause in this script, so nobody forgets this part. I say this, because some years ago I shrunk a database in a disk where it has too few free space. In the shrink process, the files were getting bigger, and an Out of Space error was thrown. Result: I lost the database. Luckly was a log database which had lose tolerance.
  • George M Reinstate Monica
    George M Reinstate Monica over 4 years
    Yes, this just happened to us. We wanted to ditch 20G of log file as we'd just backed up the data before moving the database. No way would MSSQL allow us to re-attach the new database without the humongous log file.
  • Rui Lima
    Rui Lima about 4 years
    @Dragas There is a link to the official documentation in the answer, you may want to consider taking a look at it, at the official documentation. BTW a good DBA would not be looking in stackoverflow for how to clear a log, pretty sure they would know, but I am not a DBA :)
  • Marcell
    Marcell over 3 years
    Like mentioned in the comments section of a different answer: "yourdb_log" is the logical file name, which can be found in the database file properties section. I added single quotes around it and it worked.
  • The Red Pea
    The Red Pea over 3 years
    Maybe after setting to full recovery mode, then shrink (using the UI, according to this post) to see effects of decreased log size?
  • Kiquenet
    Kiquenet over 3 years
    For me DBCC SHRINKFILE not reduce log file ldf (Recovery is SIMPLE). For me log_reuse_wait_desc not returns any data. DBCC SQLPerf(logspace) return 99,99% Log Space Used DBCC LOGINFO returns 11059 rows, all Status = 2.
  • David Browne - Microsoft
    David Browne - Microsoft over 2 years
    If you have to run ALTER DATABASE testdb SET RECOVERY FULL;, then you need another FULL backup before you can take a log backup.
  • Gabe Halsmer
    Gabe Halsmer over 2 years
    Didn't work at first until I realized that in SQL-Server 2016 the log file is actually lower-case "_log". The 3rd command is case-sensitive. Once I changed it to match exactly my database's log name, this worked!!
  • chaostheory
    chaostheory about 2 years
    What bothered me is that this should have worked for me, but it didn't for my version of SQL Server. According to the documentation it should. While it did shrink my log, it did not truncate it, nor did it change my recovery to Simple. I had to do both in the GUI before it worked. SQL Server is not fun to admin compared to other alternatives.