Layers and exceptions
-
Let's say that you have this system with a data layer, a business rules layer and a UI layer, and that one class of the data layer throws an exception under certain conditions. What would you rather do?
A) Throw the exception and handle it in the business rules layer.
B) Throw the exception and handle it in the UI layer.
C) Handle the exception and return some error code to the BR layer.
D) Other.
Usually I go for A, and the BR layer determines whether the user has to know about the error or not, and if so, displays a message to the user. However, I am curious about what CPians think about this.
Hope is the negation of reality - Raistlin Majere
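(A minimal sketch of option A, for illustration only; the names CustomerRepository, CustomerService and BusinessException are made up, not from any poster's code. The data layer just throws, and the business rules layer decides whether the UI needs to know.)

using System;

// Hypothetical exception type the business layer exposes upward.
public class BusinessException : Exception
{
    public BusinessException(string message, Exception inner) : base(message, inner) { }
}

// Data layer (option A): throw and let the caller worry about it.
public class CustomerRepository
{
    public string GetName(int id)
    {
        // stand-in for a real query
        if (id <= 0)
            throw new InvalidOperationException("No customer with id " + id);
        return "Customer " + id;
    }
}

// Business rules layer: catch, decide, and either recover or rethrow something UI-friendly.
public class CustomerService
{
    private readonly CustomerRepository repo = new CustomerRepository();

    public string LoadCustomerName(int id)
    {
        try
        {
            return repo.GetName(id);
        }
        catch (InvalidOperationException ex)
        {
            // The BR layer decides whether the user has to know about the error.
            throw new BusinessException("The customer could not be loaded.", ex);
        }
    }
}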
Everyone has a favorite way to architect an application. My design philosophy is different from yours, which is different from Joe Cool's down the street. Personally I believe exceptions should be thrown and handled by the layer that makes the most sense.

If the user is entering data into an interactive application and an exception is thrown in the data layer, the exception should percolate back to the user if it is something the user could fix. This way he knows how to fix the problem, and it lets you know you should have figured out a way to better validate the data before the record(s) got sent to the database.

If the application is non-interactive, say a batch job, let the database handle it by logging the error, putting the troublesome record to the side (storing it in a temp table or a file), and then going on to complete the rest of the processing. In most cases there is no need to propagate exceptions outside the layer in which they occurred.

SELECT * FROM users WHERE clue > 0 No rows returned
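(A rough sketch of the batch-job approach described above, under the assumption that bad records are logged and set aside while the run continues; the record type and the rejected list are placeholders for a temp table or file.)

using System;
using System.Collections.Generic;

public class BatchProcessor
{
    // stand-in for a temp table or error file
    private readonly List<string> rejected = new List<string>();

    public void Run(IEnumerable<string> records)
    {
        foreach (var record in records)
        {
            try
            {
                Import(record);   // may throw for a troublesome record
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine("Rejected '{0}': {1}", record, ex.Message);
                rejected.Add(record);   // set it aside and go on with the rest
            }
        }
    }

    private void Import(string record)
    {
        // stand-in for the real insert
        if (string.IsNullOrEmpty(record))
            throw new FormatException("Empty record.");
    }
}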
-
I write this type of software all the time. What I've found works best is to handle exceptions as close as possible to where they occurred; if nothing productive can be done, the exception bubbles up to the next layer, and so on. Anything major that couldn't be handled below ends up at the UI layer, where it is reported to the user and logged. By handled I mean: complete whatever the user's intent was.

To say that errors shouldn't be passed up through exceptions is a bullshit rule when it comes to multi-layered business apps that often have bits running through remoting. In many cases there is just no other way to deal with an error.

What I think people should actually do is avoid exceptions in areas where they are generated as a result of user input. Instead there should be a business rules scheme in place to alert the user at the UI level (I use those blinky exclamation marks with mouse-over text) if something they have entered isn't going to fly. For example, the user should never see an exception for anything that breaks a database schema rule; instead the UI should alert them to the problem and not allow them to attempt a database update until it's resolved. If a field is required, that needs to be enforced at the UI level; other things may need the help of business rules at the business object level, and so on.

Often the code I write involves duplicating business rules at the UI level, the business object level and the database level. The reason is that my apps are often also developer APIs at the business object level, and there are also many users who think it's perfectly OK to just go in and play with the database directly.
Cum catapultae proscriptae erunt tum soli proscripti catapultas habebunt
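(A small sketch of the kind of UI-level rule check described above; the rule names and fields are hypothetical. The idea is that the form runs this before enabling the update, so the user never sees a schema exception.)

using System.Collections.Generic;

public static class OrderRules
{
    public static IList<string> Validate(string customerCode, decimal quantity)
    {
        var problems = new List<string>();
        if (string.IsNullOrEmpty(customerCode))
            problems.Add("Customer code is required.");            // enforced at the UI level
        if (quantity <= 0)
            problems.Add("Quantity must be greater than zero.");   // mirrors a database constraint
        return problems;
    }
}

// In the form: show the exclamation marks for each problem and block the database
// update until Validate(...) returns an empty list.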
-
I hate programmers that swallow exceptions in low-level layers (like database access libraries) just because they are afraid to throw them back up, since that might crash their application. I have spent numerous hours debugging applications, just to later find out that another programmer coded the lower-level library to swallow the exception because it was "safer".

I insist that any low-level layer, like utilities and database layers, must (1) catch the exception, (2) log it to a file or notify via email and (3) throw it back up. Let the business layer handle it and decide whether to notify the client. Most times I will place the exception message somewhere at the end of the custom error message that goes back to the client. This has saved me countless hours of debugging, because instead of getting a generic error message, I get the real problem, which might come from 20 calls deep in a lower-level library.

And sometimes you need to just throw the exception back up, always. For instance, you have a webmethod in a webservice that returns an array of strings or a custom object. This webmethod might call many other libraries, business objects, database objects, etc. If there is an error, you do not have much error handling to work with; you have to return the array of strings or the custom object. Therefore just throw the exception and let the consumer of the webmethod deal with it.

We all know that exception handling is a very expensive transaction and should not be over-used or abused. But being afraid of exceptions because they crash your application is a much worse approach.
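(Roughly the catch / log / throw-back-up pattern described above, sketched with a plain log file; the repository and query are placeholders, and a real system would likely use a proper logging library.)

using System;
using System.IO;

public class UserRepository
{
    public int CountUsers(string connectionString)
    {
        try
        {
            return RunCountQuery(connectionString);   // (1) the call that may fail
        }
        catch (Exception ex)
        {
            // (2) log it (or notify via email) ...
            File.AppendAllText("data-layer.log", DateTime.Now + " " + ex + Environment.NewLine);
            // (3) ... and throw it back up with the stack trace intact
            throw;
        }
    }

    private int RunCountQuery(string connectionString)
    {
        // stand-in failure for the sketch
        throw new InvalidOperationException("Connection to '" + connectionString + "' failed.");
    }
}

The business layer can then wrap this in a friendlier message and append ex.Message at the end, as described above.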
-
Hi, I think each layer must handle its own exceptions and errors and only inform the upper layers about the problems. To accomplish this you have to return error codes to the upper layers. That is exactly what I do, and as the program goes on, the error codes increase. At the higher layers we decide what to do according to the error codes. It has worked very well for me and I use it in every project. Thanks, Behzad
behzad
This approach works best in scenarios where you are writing all layers of the application, because you are sharing error-handling codes and classes throughout your application. But if you want to build libraries independently of each other, for commercial use for example, I personally believe you are better off throwing exceptions. You can create your own exception classes if you want, but still use exception handling.
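(One way a stand-alone library could define its own exception type instead of sharing error codes with its callers; the name DataAccessException and the ErrorCode property are made up for the sketch.)

using System;

[Serializable]
public class DataAccessException : Exception
{
    // optional: still carry a code for callers that prefer to switch on one
    public string ErrorCode { get; private set; }

    public DataAccessException(string errorCode, string message, Exception inner)
        : base(message, inner)
    {
        ErrorCode = errorCode;
    }
}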
-
ajdiaz wrote:
We all know that exception handling is a very expensive transaction
Some of us disagree.
-
The problem with error codes is that your clients can ignore them.
Christian Graus - Microsoft MVP - C++ "I am working on a project that will convert a FORTRAN code to corresponding C++ code.I am not aware of FORTRAN syntax" ( spotted in the C++/CLI forum )
Christian Graus wrote:
The problem with error codes is that your clients can ignore them.
The problem with exceptions is that your clients can ignore them too. Exceptions don't have any magical quality that requires users to actually implement handling. Or, for that matter, to understand how they should handle it even if they did think about it.
-
Fernando A. Gomez F. wrote:
Let's say that you have this system with a data layer, a business rules layer and the UI layer. Let's say that one class of the data layer throws an exception under certain conditions. What would you rather do?
What I would rather do is:
1. Look at the requirements for the system.
2. Look at what the exception is there for.
3. Determine what the exception means in terms of the system.
4. Implement, and perhaps design, a solution that fits within the boundaries of 1-3, which might lead to any number of different solutions.
5. Finally, in some cases, refactor the layer that threw the exception so it returns a value instead.
In terms of 4 it is important to differentiate between normal, error and exceptional states, both for the system as an entirety and for each individual layer. And without the context of the system requirements that isn't possible.
-
ajdiaz wrote:
We all know that exception handling is a very expensive transaction and should not be over-used or abused. But being afraid of exceptions because they crash your application is a much worse approach.
Only if you're hemorrhaging them. The cost of a single exception is only on the order of 10 microseconds (estimated using the DB test run), so for reporting errors in any normal workflow the hit is small enough to be almost meaningless. One exception per DB action is only a 1.5% performance hit, and in normal usage exceptions should be much rarer than that. http://www.codeproject.com/dotnet/ExceptionPerformance.asp?df=100&forumid=206219&exp=0&select=1190728
-- If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
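(If you want a number for your own machine rather than taking the article's word for it, a throw/catch loop like this gives a rough per-exception cost; the iteration count and the exception type are arbitrary, and the result will vary by runtime and whether a debugger is attached.)

using System;
using System.Diagnostics;

class ExceptionCost
{
    static void Main()
    {
        const int iterations = 100000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { throw new InvalidOperationException("test"); }
            catch (InvalidOperationException) { /* swallowed for the measurement only */ }
        }
        sw.Stop();
        Console.WriteLine("{0:F2} microseconds per throw/catch",
                          sw.Elapsed.TotalMilliseconds * 1000.0 / iterations);
    }
}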
-
The problems with exceptions include:
1) They slow down the execution of the program, but more importantly...
2) They are used by lazy people to defer work they should be doing onto other people who don't understand the code that threw the exception, and who then have to spend hours tracking things down when they should not have to know about the underlying packages in order to understand what is really wrong.
3) Handling exceptions is far bulkier than responding to error codes. Sometimes ignoring error codes is just fine -- like when I delete a file that doesn't exist, I really don't give a rodent's behind that the delete-file function returned an error. If I wanted to know that the file existed, I'd have used some form of status call to get its permissions and make intelligent decisions before calling the delete.
4) The debuggers don't give you a "came from" trace when you put a breakpoint in the catch() clause -- this means you have to have "time of throw" breakpoints, and this means you stop in the debugger way too often, because other coders are throwing exceptions for things that are not exceptional!
5) In C++, throwing exceptions and calling operator new directly (i.e. without storing the result in a class object or some kind of smart pointer) are incompatible programming methodologies, but third-party packages are throwing exceptions, and their lazy design decisions mean your code is suddenly buggy.
uh uh uh uh .... calming .... down ... exceptions == yuk
-
The idea that exceptions should not propagate because that is expensive surely misses the point -- the name should say it all. If it is an exceptional circumstance that causes the error, then you shouldn't be expecting it to happen in the "normal" running of the program, so the expense of raising it when it does happen is not part of the normal performance of the code.

Sometimes raising an exception is the only sensible option: for example, if you provide a property that the calling layer sets with some invalid value, it is better to raise an exception than to lose the neatness of normal property setting by replacing it with, say, a Set method with a result code.

And all the talk of error codes -- surely this should be referred to as a result code, one of the results being success. That way, if there are clear and predictable reasons why a method could fail, you can return a code that indicates what happened, and it is up to the implementer of the higher layer to decide how carefully they need to handle it; at the simplest level they can just set their code to only proceed on a successful result code. If the method hits something less predictable, the same method could also raise exceptions (so, for example, out of memory or out of disk space might be treated as exceptional conditions that bubble up as exceptions, but "file not found" is something more predictable and gets a result code).
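(A sketch of that split, with hypothetical names: the property setter throws for an invalid value so callers keep normal property syntax, while a predictable failure such as a missing file comes back as a result code.)

using System;
using System.IO;

public enum LoadResult { Success, FileNotFound, AccessDenied }

public class ReportSettings
{
    private int copies = 1;

    public int Copies
    {
        get { return copies; }
        set
        {
            // invalid value from the calling layer: raise an exception
            if (value < 1)
                throw new ArgumentOutOfRangeException("value", "Copies must be at least 1.");
            copies = value;
        }
    }

    // predictable failure: report it with a result code instead
    public LoadResult TryLoad(string path, out string contents)
    {
        contents = null;
        if (!File.Exists(path))
            return LoadResult.FileNotFound;
        try
        {
            contents = File.ReadAllText(path);
            return LoadResult.Success;
        }
        catch (UnauthorizedAccessException)
        {
            return LoadResult.AccessDenied;
        }
    }
}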
-
Lowell Boggs wrote:
- they slow down the execution of the program, but more importantly
Everything slows down the execution of a program. If you have a problem with performance, then profile the application to determine the bottlenecks. I seriously doubt that you will see any moderately well-written system with a measurable impact anywhere due to exceptions.
Lowell Boggs wrote:
The problems with exceptions include:
Everything else you posted has to do with developers doing something wrong and has nothing to do with exceptions themselves. And developers can do many things wrong with error codes as well.
-
Lowell Boggs wrote:
- the debuggers don't give you a "came from" trace when you put a breakpoint in the catch() clause
StackTrace and TargetSite don't work for you?
-
jschell wrote:
The problem with exceptions is that your clients can ignore them.
You reckon? If your app doesn't handle an exception, it blows up. By 'clients', I mean the people who use your code, not the end user.
Christian Graus - Microsoft MVP - C++ "I am working on a project that will convert a FORTRAN code to corresponding C++ code.I am not aware of FORTRAN syntax" ( spotted in the C++/CLI forum )
-
Thanks for your reply. Yes, everything you do slows things down, but Stroustrup points out that in C++ exception handling adds a lot of time -- far more than error handling. In the interpreted languages it likely makes less difference, given their already sluggish behavior. If exceptions are used only for truly exceptional situations, then the performance impact may be negligible.

In an earlier generation of HP compilers, there was a 25% overhead if you turned on exceptions in our application, which accessed multi-gigabyte data sets. The problem turned out to be that loop unrolling didn't work as well with exceptions turned on. HP has since fixed this problem and you only get about a 5% overhead. However, if you have a big supply chain planning application that takes 12 hours to run, that 5 percent matters.

Yes, exceptions can be misused like many other features. But some features are more prone to misuse than others. Exceptions are institutionalized laziness. They give the developer the idea that exception handling is someone else's problem: I just have to report the problem. And lo and behold, every problem becomes worthy of an exception -- why should I have to do that grunt work?

Yep, error codes can be easily ignored. But so can exceptions: you just catch(...) and do nothing.
-
-
A stack trace in the catch block (using VC++ 2003) does not show you where the throw point is; it only shows you where the catch currently is -- that is to say, the caller of the function the catch occurs in. Perhaps I am just ignorant of how to find the throw point in VC++ 2003; please enlighten me if you know -- I will be forever grateful. (Truthfully!)
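(On the managed side at least, the exception object itself carries the throw-site information, so even if the debugger only shows where the catch is, you can still get at where the throw happened; this is a .NET sketch and says nothing about native VC++ 2003 exceptions.)

using System;

class StackTraceDemo
{
    static void Thrower()
    {
        throw new InvalidOperationException("boom");
    }

    static void Main()
    {
        try
        {
            Thrower();
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.TargetSite);   // the method that threw: Void Thrower()
            Console.WriteLine(ex.StackTrace);   // frames from the throw point down to the catch
        }
    }
}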
-
Lowell Boggs wrote:
Yes everything you do slows things down, but Stroustrup points out that in C++ exception handling adds a lot of time -- far more than error handling.
I suspect you will find there is less of an impact now.
Lowell Boggs wrote:
if you turned on exceptions in our application that accessed multi-gigabyte data sets.
I would guess that such a situation happens very rarely in a correctly written application. I have seen people who were trying to code to the idea that they were going to return gigabytes of data to user list boxes - but problems like that have nothing to do with problems inherent in exceptions themselves.
Lowell Boggs wrote:
However, if you have a big supply chain planning application that takes 12 hours to run, that 5 percent matters.
And how many exceptional conditions do you plan on handling in that? I have written batch applications that took hours to process, and errors, not exceptions, were common. Which doesn't mean that exceptions, for exceptional conditions, did not exist, but they were very rare. So exceptions, even if they were expensive, wouldn't have mattered.
Lowell Boggs wrote:
They give the developer the idea that exception handling is someone else's problem, I just have to report the problem.
And do error values help when the developer doesn't address exceptional conditions using any mechanism? Does it help when the developer returns 1000 different error codes? Error codes do not magically allow for a correct solution.
-
I agree completely that there is no magic solution. As for my comments about exceptions slowing things down, I have measured it myself with several compilers. Merely compiling with exceptions turned on causes between a 5 and 25% slowdown in algorithms that access large amounts of memory to, say, exhaustively search a large in-memory data set. I do understand that this is a rare kind of program to be writing, but everything has its place. Sometimes exceptions really don't matter for performance -- for example if you are doing a lot of system calls.

Nonetheless, I really wish they hadn't been added to the language. Other people get to force their annoyances on me as a developer, and they don't even have to write good documentation explaining the exceptions. This could also be true with error codes -- but error codes are a lot easier to debug than exceptions, given the limitations of the tools available to me. We'll just have to agree to disagree. Peace, Lowell
-
Christian Graus wrote:
You reckon ? If your app doesn't handle an exception, it blows up. By 'clients', I mean the people who use your code, not the end user.
And what happens when they wrap your code with a do nothing catch block?
Well, idiot coders are something you can never deal with. But while those people can just ignore a status code, they have to actively suppress an exception.
Christian Graus - Microsoft MVP - C++ "I am working on a project that will convert a FORTRAN code to corresponding C++ code.I am not aware of FORTRAN syntax" ( spotted in the C++/CLI forum )
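(The difference in a few lines, with a made-up SaveRecord for the sketch: a return code disappears silently, while swallowing an exception has to be written out where a reviewer can see it.)

using System;

class SuppressDemo
{
    // hypothetical save that reports failure with a return code
    static int SaveRecord(string record) { return record.Length == 0 ? -1 : 0; }

    static void Main()
    {
        // ignoring a status code takes no effort at all: the return value is just dropped
        SaveRecord("");

        // ignoring an exception has to be done on purpose, in code the reviewer can see
        try
        {
            throw new InvalidOperationException("save failed");
        }
        catch (InvalidOperationException)
        {
            // swallowed deliberately
        }
    }
}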