What is the real overhead of try/catch in C#?
Solution 1
I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, this would be a decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another; there is a lot going on.
I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (so not your typical, expected path through the program).
Solution 2
Three points to make here:
Firstly, there is little or NO performance penalty in actually having try-catch blocks in your code. This should not be a consideration when trying to avoid having them in your application. The performance hit only comes into play when an exception is thrown.
When an exception is thrown, in addition to the stack unwinding operations etc. that take place (which others have mentioned), you should be aware that a whole bunch of runtime/reflection-related work happens in order to populate the members of the exception class, such as the stack trace object and the various type members.
I believe this is one of the reasons why the general advice, if you are going to rethrow the exception, is to just
throw;
rather than throw the exception again or construct a new one, as in those cases all of that stack information is regathered, whereas with the simple throw it is all preserved.
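The difference is easy to see in a small sketch (the class and method names here are mine, purely for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

class RethrowDemo
{
    // NoInlining keeps Inner() as a distinct stack frame so the traces differ visibly.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void Inner() => throw new InvalidOperationException("boom");

    internal static void BadRethrow()
    {
        try { Inner(); }
        catch (Exception ex)
        {
            throw ex; // resets the stack trace: the frame for Inner() is lost
        }
    }

    internal static void GoodRethrow()
    {
        try { Inner(); }
        catch
        {
            throw; // preserves the original stack trace, including Inner()
        }
    }

    static void Report(Action action)
    {
        try { action(); }
        catch (Exception ex)
        {
            Console.WriteLine(ex.StackTrace.Contains("Inner") ? "has Inner" : "lost Inner");
        }
    }

    static void Main()
    {
        Report(BadRethrow);  // prints "lost Inner"
        Report(GoodRethrow); // prints "has Inner"
    }
}
```

With `throw ex;` the exception's trace now begins at the rethrow site, which is rarely what you want when debugging.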
Solution 3
Are you asking about the overhead of using try/catch/finally when exceptions aren't thrown, or the overhead of using exceptions to control process flow? The latter is somewhat akin to using a stick of dynamite to light a toddler's birthday candle, and the associated overhead falls into the following areas:
- You can expect additional cache misses due to the thrown exception accessing resident data not normally in the cache.
- You can expect additional page faults due to the thrown exception accessing non-resident code and data not normally in your application's working set.
- For example, throwing the exception will require the CLR to find the location of the finally and catch blocks based on the current IP and the return IP of every frame until the exception is handled, plus the filter block.
- There is additional construction cost and name resolution in order to create the frames for diagnostic purposes, including reading of metadata etc.
- Both of the above items typically access "cold" code and data, so hard page faults are probable if you have memory pressure at all:
  - The CLR tries to put code and data that is used infrequently far from data that is used frequently to improve locality, so this works against you because you're forcing the cold to be hot.
  - The cost of the hard page faults, if any, will dwarf everything else.
- Typical catch situations are often deep, therefore the above effects tend to be magnified (increasing the likelihood of page faults).
As for the actual impact of the cost, this can vary a lot depending on what else is going on in your code at the time. Jon Skeet has a good summary here, with some useful links. I tend to agree with his statement that if you get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.
Solution 4
Contrary to commonly accepted theories, try/catch can have significant performance implications, and that's whether an exception is thrown or not!
- It disables some automatic optimisations (by design), and in some cases injects debugging code, as you can expect from a debugging aid. There will always be people who disagree with me on this point, but the language requires it and the disassembly shows it, so those people are, by dictionary definition, delusional.
- It can impact negatively upon maintenance. This is actually the most significant issue here, but since my last answer (which focused almost entirely on it) was deleted, I'll try to focus on the less significant issue (the micro-optimisation) as opposed to the more significant issue (the macro-optimisation).
The former has been covered in a couple of blog posts by Microsoft MVPs over the years, and I trust you could find them easily; since Stack Overflow cares so much about content, I'll provide links to some of them as supporting evidence:
- Performance implications of try/catch/finally (and part two), by Peter Ritchie, explores the optimisations which try/catch/finally disables (and I'll go further into this with quotes from the standard).
- Performance Profiling Parse vs. TryParse vs. ConvertTo, by Ian Huff, states blatantly that "exception handling is very slow" and demonstrates this point by pitting Int.Parse and Int.TryParse against each other... To anyone who insists that TryParse uses try/catch behind the scenes, this ought to shed some light!
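The Parse-versus-TryParse gap is easy to reproduce. Here's a rough sketch using Stopwatch rather than a proper benchmarking harness; the absolute numbers will vary by machine and runtime, but the relative gap on the failure path is what matters:

```csharp
using System;
using System.Diagnostics;

class ParseBenchmark
{
    // Exception-based parsing: catch the failure instead of testing for it.
    internal static bool ParseWithCatch(string s, out int value)
    {
        try { value = int.Parse(s); return true; }
        catch (FormatException) { value = 0; return false; }
    }

    static void Main()
    {
        const int iterations = 100_000;
        const string bad = "not a number"; // always fails, so the catch path runs every time

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            ParseWithCatch(bad, out _);
        sw.Stop();
        Console.WriteLine($"Parse + catch: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            int.TryParse(bad, out _);
        sw.Stop();
        Console.WriteLine($"TryParse:      {sw.ElapsedMilliseconds} ms");
        // On a typical machine the exception path is orders of magnitude slower.
    }
}
```

Note that when the input is valid and no exception is thrown, the two approaches measure much closer together; the cost is concentrated in the throw.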
There's also this answer, which shows the difference between disassembled code with and without using try/catch.
It seems so obvious that there is an overhead which is blatantly observable in code generation, and that overhead even seems to be acknowledged by people whom Microsoft values! Yet here I am, repeating the internet...
Yes, there are dozens of extra MSIL instructions for one trivial line of code, and that doesn't even cover the disabled optimisations so technically it's a micro-optimisation.
I posted an answer years ago which got deleted as it focused on the productivity of programmers (the macro-optimisation).
This is unfortunate, as no saving of a few nanoseconds of CPU time here and there is likely to make up for many accumulated hours of manual optimisation by humans. Which does your boss pay more for: an hour of your time, or an hour with the computer running? At what point do we pull the plug and admit that it's time to just buy a faster computer?
Clearly, we should be optimising our priorities, not just our code! In my last answer I drew upon the differences between two snippets of code.
Using try/catch:
int x;
try {
x = int.Parse("1234");
}
catch {
return;
}
// some more code here...
Not using try/catch:
int x;
if (int.TryParse("1234", out x) == false) {
return;
}
// some more code here
Consider, from the perspective of a maintenance developer, which is more likely to waste your time: if not in profiling/optimisation (covered above), which likely wouldn't even be necessary if it weren't for the try/catch problem, then in scrolling through source code... One of those has four extra lines of boilerplate garbage!
As more and more fields are introduced into a class, all of this boilerplate garbage accumulates (both in source and disassembled code) well beyond reasonable levels. Four extra lines per field, and they're always the same lines... Were we not taught to avoid repeating ourselves? I suppose we could hide the try/catch behind some home-brewed abstraction, but... then we might as well just avoid exceptions (i.e. use Int.TryParse).
This isn't even a complex example; I've seen attempts at instantiating new classes in try/catch.
. Consider that all of the code inside of the constructor might then be disqualified from certain optimisations that would otherwise be automatically applied by the compiler. What better way to give rise to the theory that the compiler is slow, as opposed to the compiler is doing exactly what it's told to do?
Assuming an exception is thrown by said constructor, and some bug is triggered as a result, the poor maintenance developer then has to track it down. That might not be such an easy task, as unlike the spaghetti code of the goto nightmare, try/catch can cause messes in three dimensions: it can move up the stack into not just other parts of the same method, but also other classes and methods, all of which will be observed by the maintenance developer, the hard way! Yet we are told that "goto is dangerous", heh!
In closing, I'll mention that try/catch does have its benefit, which is that it's designed to disable optimisations! It is, if you will, a debugging aid! That's what it was designed for, and that's what it should be used as...
I guess that's a positive point too. It can be used to disable optimisations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.
What optimisations do try, catch and finally disable? A.K.A. How are try, catch and finally useful as debugging aids?
They're write barriers. This comes from the standard:
12.3.3.13 Try-catch statements
For a statement stmt of the form:
try try-block catch ( ... ) catch-block-1 ... catch ( ... ) catch-block-n
- The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
- The definite assignment state of v at the beginning of catch-block-i (for any i) is the same as the definite assignment state of v at the beginning of stmt.
- The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) v is definitely assigned at the end-point of try-block and every catch-block-i (for every i from 1 to n).
In other words, at the beginning of each try statement:
- All assignments made to visible objects prior to entering the try statement must be complete, which requires a thread lock for a start, making it useful for debugging race conditions!
- The compiler isn't allowed to:
  - eliminate unused variable assignments which have definitely been assigned to before the try statement;
  - reorganise or coalesce any of its inner assignments (i.e. see my first link, if you haven't already done so);
  - hoist assignments over this barrier, to delay assignment to a variable which it knows won't be used until later (if at all), or to pre-emptively move later assignments forward to make other optimisations possible...
A similar story holds for each catch statement; suppose within your try statement (or a constructor or function it invokes, etc.) you assign to an otherwise pointless variable (let's say garbage = 42;): the compiler can't eliminate that statement, no matter how irrelevant it is to the observable behaviour of the program. The assignment needs to have completed before the catch block is entered.
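The definite-assignment rule quoted above is visible even without a disassembler. In this sketch (names are mine, just illustrative), the compiler must treat v as unassigned at the start of the catch block, because the try block could have thrown before the assignment completed:

```csharp
using System;

class DefiniteAssignmentDemo
{
    static int Get() => 42;

    internal static int Compute()
    {
        int v; // not definitely assigned yet

        try
        {
            v = Get(); // an exception could occur before this assignment completes
        }
        catch (Exception)
        {
            // Per 12.3.3.13, v's definite assignment state here is the same as
            // at the beginning of the statement: NOT definitely assigned.
            // Reading v here (e.g. Console.WriteLine(v)) would be a compile-time error.
            v = -1; // so assign it on the failure path instead
        }

        // v is definitely assigned at the end-point of both blocks, so this is legal:
        return v;
    }

    static void Main()
    {
        Console.WriteLine(DefiniteAssignmentDemo.Compute()); // prints 42 (Get() does not throw)
    }
}
```

This is the compile-time face of the write barrier: the compiler must assume every assignment inside the try block may or may not have happened by the time control reaches the catch block.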
For what it's worth, finally tells a similarly degrading story:
12.3.3.14 Try-finally statements
For a try statement stmt of the form:
try try-block finally finally-block
- The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
- The definite assignment state of v at the beginning of finally-block is the same as the definite assignment state of v at the beginning of stmt.
- The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) either:
  - v is definitely assigned at the end-point of try-block
  - v is definitely assigned at the end-point of finally-block
If a control flow transfer (such as a goto statement) is made that begins within try-block, and ends outside of try-block, then v is also considered definitely assigned on that control flow transfer if v is definitely assigned at the end-point of finally-block. (This is not an only if: if v is definitely assigned for another reason on this control flow transfer, then it is still considered definitely assigned.)
12.3.3.15 Try-catch-finally statements
Definite assignment analysis for a try-catch-finally statement of the form:
try try-block catch ( ... ) catch-block-1 ... catch ( ... ) catch-block-n finally finally-block
is done as if the statement were a try-finally statement enclosing a try-catch statement:
try { try try-block catch ( ... ) catch-block-1 ... catch ( ... ) catch-block-n } finally finally-block
Solution 5
In my experience the biggest overhead is in actually throwing an exception and handling it. I once worked on a project where code similar to the following was used to check whether someone had the right to edit some object. This HasRight() method was used everywhere in the presentation layer, and was often called for hundreds of objects.
bool HasRight(string rightName, DomainObject obj) {
try {
CheckRight(rightName, obj);
return true;
}
catch (Exception ex) {
return false;
}
}
void CheckRight(string rightName, DomainObject obj) {
if (!_user.Rights.Contains(rightName))
throw new Exception();
}
When the test database got fuller with test data, this led to a very visible slowdown while opening new forms, etc.
So I refactored it to the following, which - according to later quick 'n dirty measurements - is about 2 orders of magnitude faster:
bool HasRight(string rightName, DomainObject obj) {
return _user.Rights.Contains(rightName);
}
void CheckRight(string rightName, DomainObject obj) {
if (!HasRight(rightName, obj))
throw new Exception();
}
So in short, using exceptions in normal process flow is about two orders of magnitude slower than using similar process flow without exceptions.
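A minimal sketch of that before/after comparison (the rights data is made up, and Stopwatch is no substitute for a real profiler, but it reproduces the shape of the problem):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class RightsBenchmark
{
    static readonly HashSet<string> Rights = new HashSet<string> { "read" };

    // Before: exception-driven control flow, as in the original HasRight().
    internal static bool HasRightViaException(string rightName)
    {
        try { CheckRight(rightName); return true; }
        catch (Exception) { return false; }
    }

    static void CheckRight(string rightName)
    {
        if (!Rights.Contains(rightName))
            throw new Exception("Missing right: " + rightName);
    }

    // After: a plain boolean check, as in the refactored version.
    internal static bool HasRightDirect(string rightName) => Rights.Contains(rightName);

    static void Main()
    {
        const int n = 50_000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            HasRightViaException("write"); // every call throws and catches
        sw.Stop();
        Console.WriteLine($"exception-based: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < n; i++)
            HasRightDirect("write"); // every call is a cheap set lookup
        sw.Stop();
        Console.WriteLine($"direct check:    {sw.ElapsedMilliseconds} ms");
    }
}
```

The failure path is the common path in a UI that greys out hundreds of controls, which is exactly why the exception-based version showed up so visibly.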
JC Grubbs
Updated on November 27, 2020
Comments
- JC Grubbs, over 3 years: So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact?
- Windows programmer, over 15 years: More precisely: try is cheap, catch is cheap, throw is expensive. If you avoid try and catch, throw is still expensive.
- HTTP 410, over 15 years: Hmmm - mark-up doesn't work in comments. To try again - exceptions are for errors, not for "exceptional behaviour" or conditions: blogs.msdn.com/kcwalina/archive/2008/07/17/…
- Eddie, almost 15 years: Also: when you rethrow an exception as "throw ex" you lose the original stack trace and replace it with the CURRENT stack trace; rarely what's wanted. If you just "throw" then the original stack trace in the Exception is preserved.
- ThunderGr, over 10 years: Why would you want to throw an exception here? You could handle the case of not having the rights on the spot.
- ThunderGr, over 10 years: The compiler is out of the picture at runtime. There has to be an overhead for try/catch blocks so that the CLR can handle the exceptions. C# runs on the .NET CLR (a virtual machine). It seems to me that the overhead of the block itself is minimal when there is no exception, but the cost of the CLR handling the exception is very significant.
- ThunderGr, over 10 years: I am pretty sure that TryParse does a try { int x = int.Parse("xxx"); return true; } catch { return false; } internally. Indentation is not a concern in the question, only performance and overhead.
- Tobi, over 10 years: @ThunderGr That's actually what I changed, making it two orders of magnitude faster.
- Kapé, about 10 years: @Windows programmer Stats / source please?
- autistic, over 6 years: @ThunderGr Alternatively, read the new answer I posted. It contains more links, one of which is an analysis of the massive performance boost when you avoid Int.Parse in favour of Int.TryParse.
- autistic, over 6 years: I couldn't reach your blog (the connection is timing out; are you using try/catch too much? heh heh), but you seem to be arguing with the language spec and some MS MVPs who have also written blogs on the subject, providing measurements to the contrary of your advice... I'm open to the suggestion that the research I've done is wrong, but I'll need to read your blog entry to see what it says.
- Admin, over 6 years: In addition to @Hafthor's blog post, here's another blog post with code specifically written to test speed performance differences. According to the results, if you have an exception occur even just 5% of the time, exception-handling code runs 100x slower overall than non-exception-handling code. The article specifically targets the try-catch block vs tryparse() methods, but the concept is the same.
- binki, about 6 years: @Eddie Or throw new Exception("Wrapping layer’s error", ex);