How do I test database-related code with NUnit?
Solution 1
NUnit now has a [Rollback] attribute, but I prefer to do it a different way. I use the TransactionScope class. There are a couple of ways to use it.
[Test]
public void YourTest()
{
    using (TransactionScope scope = new TransactionScope())
    {
        // your test code here
    }
}
Since you didn't tell the TransactionScope to commit, it will roll back automatically. This works even if an assertion fails or some other exception is thrown.
The other way is to create the TransactionScope in [SetUp] and call Dispose on it in [TearDown]. That cuts out some code duplication but accomplishes the same thing.
[TestFixture]
public class YourFixture
{
    private TransactionScope scope;

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void YourTest()
    {
        // your test code here
    }
}
This is as safe as the using statement in an individual test because NUnit will guarantee that TearDown is called.
Having said all that, I do think that tests that hit the database are not really unit tests. I still write them, but I think of them as integration tests, and I still see them as providing value. One place I use them often is in testing LINQ to SQL code. I don't use the designer; I hand-write the DTOs and attributes, and I've been known to get them wrong. The integration tests help catch my mistakes.
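To illustrate, here is a minimal sketch of the kind of hand-written LINQ to SQL mapping I mean (the table and column names are hypothetical). A typo in any of these attributes is exactly the sort of mistake an integration test catches:

```csharp
using System.Data.Linq.Mapping;

// Hand-written DTO with LINQ to SQL mapping attributes.
// "dbo.Customers" and the column names are made-up examples.
[Table(Name = "dbo.Customers")]
public class Customer
{
    // If this column name doesn't match the schema, only a test
    // that actually hits the database will notice.
    [Column(Name = "CustomerId", IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column(Name = "Name", CanBeNull = false)]
    public string Name { get; set; }
}
```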
Solution 2
I just went to a .NET user group where the presenter said he used SQLite in test setup and teardown, with the in-memory option. He had to fudge the connection a little and explicitly destroy it, but it gave a clean DB every time.
http://houseofbilz.com/archive/2008/11/14/update-for-the-activerecord-quotmockquot-framework.aspx
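A rough sketch of that approach, assuming the System.Data.SQLite provider (the schema here is a made-up example). The key detail is that the connection must stay open between setup and teardown, because an in-memory SQLite database lives only as long as its connection:

```csharp
using System.Data.SQLite;
using NUnit.Framework;

[TestFixture]
public class InMemoryDatabaseTests
{
    private SQLiteConnection connection;

    [SetUp]
    public void SetUp()
    {
        // The in-memory database exists only while this connection is open.
        connection = new SQLiteConnection("Data Source=:memory:");
        connection.Open();

        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText =
                "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT)";
            cmd.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void TearDown()
    {
        // Explicitly destroy the connection; the next test
        // gets a brand new, empty database.
        connection.Dispose();
    }
}
```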
Solution 3
I would call these integration tests, but no matter. What I have done for such tests is have my setup methods in the test class clear all the tables of interest before each test. I generally hand write the SQL to do this so that I'm not using the classes under test.
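As a sketch, assuming SQL Server and hypothetical table names, that setup might look like this:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    // Hypothetical connection string for a dedicated test database.
    private const string ConnectionString =
        "Server=.;Database=MyAppTests;Integrated Security=true";

    [SetUp]
    public void ClearTables()
    {
        // Hand-written SQL, deliberately not using the classes under test.
        // Delete child tables before parents to satisfy foreign keys.
        using (var conn = new SqlConnection(ConnectionString))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "DELETE FROM OrderItems; DELETE FROM Orders; DELETE FROM Customers;";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```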
Generally, I rely on an ORM for my data layer and thus don't write many unit tests there; I don't feel a need to unit test code that I didn't write. For code that I add in the layer, I generally use dependency injection to abstract out the actual connection to the database, so that when I test my code, it doesn't touch the actual database. Do this in conjunction with a mocking framework for best results.
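A minimal sketch of that combination, using Moq as one possible mocking framework (the interface and service names are hypothetical):

```csharp
using Moq;
using NUnit.Framework;

// The data access is hidden behind an interface...
public interface IOrderRepository
{
    int CountOrdersFor(int customerId);
}

// ...which is injected into the code under test.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public bool IsFrequentBuyer(int customerId)
    {
        return repository.CountOrdersFor(customerId) > 10;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void FrequentBuyer_IsDetected_WithoutTouchingTheDatabase()
    {
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.CountOrdersFor(42)).Returns(11);

        var service = new OrderService(repository.Object);

        Assert.IsTrue(service.IsFrequentBuyer(42));
    }
}
```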
Michael Haren
Updated on July 05, 2022

Comments
-
Michael Haren almost 2 years
I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic:
- http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
- http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
- http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
- http://haacked.com/archive/2005/12/28/11377.aspx
These seem to revolve around implementing a custom attribute for NUnit that builds in the ability to roll back DB operations after each test executes.
That's great but...
- Does this functionality exists somewhere in NUnit natively?
- Has this technique been improved upon in the last 4 years?
- Is this still the best way to test database-related code?
Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one.
Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
-
Michael Haren over 15 yearsUnfortunately this approach is not practical for my projects (hundreds of tables, procedures, gigs of data). This is too high-friction to justify on an existing project.
-
tvanfosson over 15 yearsBut your unit tests should be broken up in to smaller, more focused classes that don't touch all of the tables. You only need to deal with the tables this particular test class touches.
-
tvanfosson over 15 yearsAlso, retrofitting unit tests on existing projects is probably best done on an "as needed" basis -- like when you need to refactor or fix a bug. Then you can write a "box" of tests around the existing code to guarantee that your changes don't break things (or fix the bug).
-
Michael Haren over 15 yearsI wish that were true. I really do. Plus, I don't want to have to write lots and lots of fixture code just to get the db into a "ready to go" state.
-
Michael Haren over 15 yearsAfter using this approach for a couple weeks, I'm very happy with it, thanks again!
-
Bruno Lopes over 15 yearsI ended up using a very similar pattern, but with a base class that deals with the database trivia, including setting up the connections and whatnot.
-
tjmoore about 14 yearsThe only problem is surely if you don't commit, you cannot then test the data has been committed to the database? i.e. I'd like my test to call code that calls the DB, then do some asserts on the DB to verify the data, but finally rollback all those changes when the test or test suite is complete. Though it's a valid point to say these aren't really unit tests. Personally I mock the DAL generally, but it's useful to have explicit DB tests that aren't run on an automated run.
-
Mike Two about 14 years@tjmoore - as long as you query the data using the same connection you can see the rows since you are "inside" the transaction. In the sample code in the answer above you would call something that did some inserts perhaps and then query the data back and check that it contains the expected values. Once the test completes and the TransactionScope rolls back, nothing will be left. Since your query is enlisted in the same transaction as the insert, the rows will be found. Generally I also mock the DAL, but as you said, sometimes you just have to prove the insert is going to work.
-
tjmoore about 14 yearsOf course, what I've realised is the code I have under test uses a DAL that maintains the DB connections (actually via the Enterprise Library) and without a re-write just for the tests, the connection is opened and closed on each DAL operation. Ah well, another approach then.
-
Mike Two about 14 years@tjmoore - It should still work. That's the thing about TransactionScope: when a new connection is created it will automatically enlist in the current TransactionScope. Since the transaction scope will span multiple connections you will need to enable the MSDTC (Microsoft Distributed Transaction Coordinator). But it will work. I've used this code in that situation.
-
propagated about 10 years@MikeTwo I have implemented and adopted this pattern but i am having an issue where the sql tables i'm interacting with have an identity column and this column is being incremented even though the transactionscope is not being committed. Any suggestions?
-
Mike Two about 10 years@propagated That's kind of expected stackoverflow.com/questions/282451 There might be ways around it, but I'm not the best database resource. I've never cared what my identity columns values were so this never bothered me. Also I'm only running tests against a local db or a test db so it wasn't an issue. Sorry I couldn't help.
-
propagated about 10 years@MikeTwo No problem, thanks for your response. Yes, i read that post and retooled how i was thinking about it in terms of my tests and realized i didn't care. All good now.
-
Mivaweb over 7 yearsThanks for pointing to the TransactionScope class! Works perfectly with Petapoco and Nunit!
-
OuttaSpaceTime about 3 yearsThis link is useless since it only shows a page of different blog posts