Exceptions

@Bourgond Aries: The language in question was not specified, nor were the includes, using directives, libraries, etc.
There are several variations of assert floating around depending on the lib. Here's one of my versions from a library I'm developing; it's a MACRO (a wrapper for logging and exit/abort asserting):

 
ASSERT(1 < 0, "an error .... has occurred, fix it asap.", log_level);


Ogre uses a variation of this as well.
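
For context, a rough sketch of what such a logging/aborting wrapper could look like (the LogLevel names and the logging call are my assumptions, not the actual library):

#include <cstdio>
#include <cstdlib>

enum LogLevel { LOG_INFO, LOG_WARNING, LOG_FATAL };   // hypothetical levels

// Log the failed condition with file/line context, then abort on fatal errors.
#define ASSERT(cond, msg, level)                                          \
    do {                                                                  \
        if (!(cond)) {                                                    \
            std::fprintf(stderr, "%s:%d: assertion '%s' failed: %s\n",    \
                         __FILE__, __LINE__, #cond, (msg));               \
            if ((level) == LOG_FATAL)                                     \
                std::abort();       /* the exit/abort part */             \
        }                                                                 \
    } while (0)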

As for try/throw/catch, they are better than error codes for sure when you're in deeply nested function calls. But it's better to avoid a design where "too deep" looping or nesting is occurring in the first place.
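
A contrived sketch of that difference (hypothetical parse_* functions, nobody's real code): with error codes every intermediate caller has to check and forward the result, while a throw from the deepest level reaches the handler directly.

#include <iostream>
#include <stdexcept>

// Error-code style: each level checks and forwards the code.
int parse_field(int x)  { return x < 0 ? -1 : 0; }
int parse_record(int x) { int rc = parse_field(x);  if (rc) return rc; return 0; }
int parse_file(int x)   { int rc = parse_record(x); if (rc) return rc; return 0; }

// Exception style: the intermediate levels don't mention the error at all.
void parse_field_ex(int x)  { if (x < 0) throw std::runtime_error("bad field"); }
void parse_record_ex(int x) { parse_field_ex(x); }
void parse_file_ex(int x)   { parse_record_ex(x); }

int main() {
    if (parse_file(-1) != 0)
        std::cout << "error code bubbled up through every level\n";

    try {
        parse_file_ex(-1);
    } catch (const std::exception& e) {
        std::cout << "caught at the top: " << e.what() << '\n';
    }
}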
I don't think exactly zero is possible in the case of a method that links to another compilation unit (how would the callee know the address to jump to on an exception?) - the exception handler has to be installed on entering the try block and removed on leaving it, which may take a few additional CPU cycles. But 100% overhead is probably too much.


The level at which the checks have to be done is very fine, and these methods are called hundreds of millions of times. Entering the try block is enough overhead to slow the model down considerably.

We've used numerous code profiling tools to ensure there are no bottlenecks in the codebase and the overhead we get from the try blocks is an indication of how optimised the code currently is :)
However, I guess totally zero cost is possible with a little help from the linker. I think the GNU linker is capable of integrating the exception handlers of different modules into one logical table. Then you wouldn't need to register anything on entering/leaving the try block. If gcc is indeed doing that, then this is another nice reason to program on Linux rather than on Windows ;)
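
For what it's worth, that is essentially how gcc's table-based ("zero-cost") exception handling works on Linux already: the compiler and linker emit unwind tables (.eh_frame / .gcc_except_table) instead of registering handlers at run time, so nothing extra executes on entry to a try block. You can see it with something like:

// zero_cost.cpp -- inspect with: g++ -O2 -S zero_cost.cpp
// On the non-throwing path the generated code for f() is just "call g, add 1,
// return"; the catch logic sits in a separate landing pad that is only reached
// via the unwinder, driven by the side tables.
int g(int);

int f(int x) {
    try {
        return g(x) + 1;
    } catch (...) {
        return -1;
    }
}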
Implying that there is no way to use gcc on Windows and get the same functionality as on Linux.
Last time I checked, gcc (MinGW) produced much slower code on Windows than on Linux, but maybe that has changed by now. I'm not sure if it was because of gcc or because Linux is simply faster.
https://gist.github.com/cire3791/5592977

Typical result for VC++:

Try block: 10108ms
No try block: 10124ms
Difference of -0.15804%


try are expensive.
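
For the record, a harness along these lines (a simplified sketch, not the actual code in the gist above) is enough to make the same comparison:

#include <chrono>
#include <iostream>

volatile unsigned sink = 0;   // volatile so the loops aren't optimised away

// Same work with and without a try block; the exception is never thrown.
unsigned work_try(unsigned i)   { try { return i * 2654435761u; } catch (...) { return 0; } }
unsigned work_plain(unsigned i) { return i * 2654435761u; }

template <typename F>
long long time_ms(F f) {
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < 500000000u; ++i)
        sink += f(i);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}

int main() {
    std::cout << "Try block:    " << time_ms(work_try)   << "ms\n";
    std::cout << "No try block: " << time_ms(work_plain) << "ms\n";
}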
What happens if you add more catch blocks or if you add catch(...)?
> What happens if you add more catch blocks or if you add catch(...)?

The test function grows by a few lines.

Note: Structured exception handling is not enabled, so faults generated by, for example, dividing by 0 aren't caught. Were SEH enabled, the difference would be ~150% with the try block being the obvious loser.
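
In MSVC terms, enabling that means building with /EHa instead of the default /EHsc, so that catch(...) also sees structured exceptions such as divide-by-zero. A small illustration (assumes MSVC; the division itself is undefined behaviour in standard C++):

// seh_demo.cpp -- cl /EHa seh_demo.cpp   (with the default /EHsc this just crashes)
#include <iostream>

int main() {
    volatile int zero = 0;               // volatile so the division isn't folded away
    try {
        std::cout << 1 / zero << '\n';   // hardware fault, not a C++ exception
    } catch (...) {
        // Reached only when asynchronous (structured) exceptions are enabled with /EHa.
        std::cout << "caught a structured exception\n";
    }
}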

> cire:
> Typical result for VC++:
>
> Try block: 10108ms
> No try block: 10124ms
> Difference of -0.15804%
>
> try are expensive.

+1

Fulminations surrounding the cost of exceptions that are not thrown are irrelevant to the point of being asinine. For time-critical programs, problems arise from:

a. the unpredictability of the time needed to pass control from a throw-expression to the matching catch clause.

b. error handling not being resumable. With exceptions, the stack is unwound, and directly resuming execution from the point at which the error occurred (after fixing the error) is not possible.

The low-cost alternative (actually zero cost if there is no error, and minimal cost if there is one) is callbacks. For instance, std::new_handler instead of std::bad_alloc. The primary drawback of using callbacks to handle errors is that it makes discriminating between errors quite difficult.
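
To make the std::new_handler example concrete, a minimal sketch: the handler is a callback invoked from inside the failed allocation, so the non-failing path pays nothing and no stack unwinding takes place.

#include <new>
#include <cstdio>
#include <cstdlib>

// Callback invoked by operator new when an allocation fails.
// It must either make more memory available (so the allocation is retried),
// throw, or not return; here we just log and bail out.
void out_of_memory() {
    std::fputs("allocation failed, shutting down\n", stderr);
    std::abort();
}

int main() {
    std::set_new_handler(out_of_memory);

    // Every subsequent 'new' that cannot satisfy the request calls
    // out_of_memory() instead of immediately throwing std::bad_alloc.
    char* p = new char[1024];
    delete[] p;
}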
Dylan uses recoverable error mechanics, where an error can occur but the program state can be recovered. I'm not exactly sure how it works, but it is discussed here:
http://opendylan.org/documentation/intro-dylan/conditions.html#recovery
I have no idea how efficient it is (in a language like Dylan it is probably less efficient than C anyway, unfortunately - but I wonder how it would do in place of C++ exceptions).