Why Every Programming Language Sucks at Error Handling

14 points by kugurerdem 3 months ago | 13 comments
  • biwills 3 months ago
    The more I write software, the more I think errors should be first-class citizens (camp #2 from the OP's post).

    I've been using https://github.com/biw/enwrap (disclaimer: I wrote it) in TypeScript and have found that the overhead it adds is well worth the safety it adds when handling and returning errors to users.

    That said, I see parallels between the debate about typed vs. non-typed errors and the debate of static typing vs. dynamic typing in programming languages.

    • aiono 3 months ago
      > I see parallels between the debate about typed vs. non-typed errors and the debate of static typing vs. dynamic typing in programming languages.

      Author of the post here. I also see this parallel in error handling discussions, but it seems much harder to sell error handling than static typing. Static typing was also much more debatable in the past than it is now, so maybe the same can happen to error handling mechanisms in the future as well.

      Your project seems very interesting! TypeScript is sophisticated enough to model complex Result-like types that can narrow and widen error cases throughout the code. I will look into it more when I find the time.
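
      Roughly the kind of thing I mean (a hand-rolled sketch, not enwrap's actual API): errors form a discriminated union, so a caller can handle some cases and pass the rest along in its own signature, and widening is just adding members to the union.

        type Ok<T> = { ok: true; value: T };
        type Err<E> = { ok: false; error: E };
        type Result<T, E> = Ok<T> | Err<E>;

        type NotFound = { kind: "notFound"; id: string };
        type Timeout = { kind: "timeout"; ms: number };

        declare function fetchUser(id: string): Result<string, NotFound | Timeout>;

        // Handle "notFound" here; the remaining error cases narrow to Timeout,
        // which this function's signature passes on to its caller.
        function fetchUserOrGuest(id: string): Result<string, Timeout> {
          const res = fetchUser(id);
          if (res.ok) return res;
          if (res.error.kind === "notFound") return { ok: true, value: "guest" };
          return { ok: false, error: res.error }; // narrowed to Timeout
        }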

      • biwills 3 months ago
        The biggest problem I see is that, like static/dynamic typing, it's usually a boil-the-ocean problem. Most languages have historically been either statically or dynamically typed. Only recently have TypeScript and Python allowed for migration from dynamic to static typing, introducing millions(?) of developers to static types in the process.

        With errors it's harder: in many languages errors can be thrown from anywhere, so it's difficult to feel like any function is "safe" in terms of error handling. That's one of the reasons why `enwrap` returns a generic error alongside any other result: to support incremental adoption.
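
        To give a rough idea of what I mean by that (an illustrative sketch, not enwrap's actual API): the failure set always includes a generic error, so throwing code can be wrapped right away and typed failures added case by case later.

          type Ok<T> = { ok: true; value: T };
          type GenericError = { kind: "unknown"; cause: unknown };
          // Every failure set implicitly includes GenericError, so legacy
          // throwing code fits the shape before its failures are modelled.
          type Err<E> = { ok: false; error: E | GenericError };
          type Result<T, E = never> = Ok<T> | Err<E>;

          // Wrap code that may throw anywhere; callers at least get a Result.
          function tryCatch<T>(fn: () => T): Result<T> {
            try {
              return { ok: true, value: fn() };
            } catch (cause) {
              return { ok: false, error: { kind: "unknown", cause } };
            }
          }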

        If you have a chance to check out `enwrap` and have feedback, email me! (link in bio)

    • smackeyacky 3 months ago
      There is another class of error handling that has been poorly implemented since the demise of Smalltalk: exceptions during testing that can be fixed by the programmer, allowing program execution to continue.

      In a lot of the non-trivial systems I've worked on, the setup needed to get a debugging session to a particular part of the code can be arduous and time-consuming. If you make a mistake and have to restart your debugging session, you have to go through that setup process again, which radically slows down progress on thorny issues.

      In Smalltalk it was possible to take an unhandled exception, re-inject a more suitable value and continue debugging so that each "session" was far more productive and the setup issue became much less onerous.

      Sadly, .NET, Java, JavaScript and Python can't even come close to the power of that ability to catch an exception, modify the object and continue execution. It's a great loss in my opinion. So much of the Smalltalk programming environment has been copied into modern languages, but not re-entrant exceptions.

      Now, while it was very powerful, it could also be heavily abused, including at run time, but I still miss that ability.

      • dfe 3 months ago
        You can do this in a good Java debugger like IDEA. Break on the throw, unwind stack frames to your liking, hot-replace code, continue.
        • neonsunset 3 months ago
          > Sadly, .NET, Java, Javascript and Python can't even come close to the power of that ability to catch an exception, modify the object and continue execution.

          Is that...On Error Resume Next? :D

          • steve_gh 3 months ago
            Do any of these languages have anything similar to Ruby's retry within error handling?
          • ahefner 3 months ago
            There are still a few Common Lisp features the mainstream would benefit from ripping off. Relevant here is the condition system. Conditions and restarts are still foreign enough that it's hard to explain to people why the "debugger" was a fundamental part of the Lisp machine UI, and how much sense that made, and harder still to fathom how you'd adapt that to Unix, where it would morph into some kind of weird IPC tied into the command shell.

            It's not just about being able to stop your program and hot-patch code to recover from errors - this should be trivial in any dynamic language. Rather, it's about being able to composably control how to recover from exceptional conditions (or, really, any branching behavior) both interactively and programmatically in a unified framework.
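
            To make that a little more concrete, here's a very rough TypeScript sketch of the restart idea (hypothetical names, nowhere near the full condition system): low-level code signals a condition and offers named restarts, an outer handler picks one, and the chosen restart runs at the signalling site instead of unwinding.

              type Restarts = Record<string, (...args: any[]) => unknown>;
              type Handler = (condition: Error, restarts: Restarts) => unknown;

              const handlerStack: Handler[] = [];

              // Establish a handler for the dynamic extent of body().
              function withHandler<T>(handler: Handler, body: () => T): T {
                handlerStack.push(handler);
                try {
                  return body();
                } finally {
                  handlerStack.pop();
                }
              }

              // Ask the innermost handler what to do; the chosen restart runs
              // here, at the point of the signal, so no stack frames are lost.
              function signal(condition: Error, restarts: Restarts): unknown {
                const handler = handlerStack[handlerStack.length - 1];
                if (!handler) throw condition; // unhandled: fall back to throwing
                return handler(condition, restarts);
              }

              // Low-level code offers recovery strategies but decides nothing.
              function parseEntry(raw: string): number {
                const n = Number(raw);
                if (Number.isNaN(n)) {
                  return signal(new Error(`bad entry: ${raw}`), {
                    useValue: (v: number) => v,
                    skip: () => 0,
                  }) as number;
                }
                return n;
              }

              // High-level code picks the policy without touching parseEntry.
              const parsed = withHandler(
                (_cond, restarts) => restarts.useValue(-1),
                () => ["1", "oops", "3"].map((s) => parseEntry(s)),
              );
              // parsed is [1, -1, 3]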

            • dfe 3 months ago
              I don't agree with the criticism of Java's exception handling, since Java literally does make exactly the distinction between "1. A Bug in the system" as RuntimeException, and "2. A faulty situation that can't be avoided" as all other Exceptions, particularly IOException.

              Although the same statement is used to catch both, you only catch both if you catch Exception. If you catch IOException or whatever other exceptions you need to catch and you opt to handle them, then RuntimeException will still propagate.

              It is only a matter of understanding that this is the distinction, and writing the bodies of your catch blocks accordingly.

              And if you are only writing a prototype, declare that it throws Exception, or if you can't, a catch (IOException ex) { throw new UncheckedIOException(ex); } really isn't that bad.

              • aiono 3 months ago
                In practice people catch runtime exceptions all the time, so they're treated more as recoverable errors than as bugs. Java lets you recover from runtime exceptions as easily as from checked exceptions, which blurs the distinction.

                > And if you are only writing a prototype, declare that it throws Exception, or if you can't, a catch (IOException ex) { throw new UncheckedIOException(ex); } really isn't that bad.

                The problem with that is that when you go back to do proper error handling, you can't easily find all the places where you did this. In Rust, for instance, you can find them by searching for `unwrap`.

              • sherdil2022 3 months ago
                Rust does a great job here. I know it is based upon other languages, but error handling in Rust doesn’t suck.
                • aiono 3 months ago
                  I like many aspects of Rust's error handling, but I think it still lacks the composability part. It's not straightforward to compose two functions that return different failure sets. One thing you can do is merge the error enums of both functions, but then you lose track of which failures each individual function can produce, and having to do that is a lot of unnecessary work. The other option is to return a boxed trait object, but then you no longer encode which errors are returned from each function; the user has to rely on reading the documentation to know what kinds of errors can occur.
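
                  For contrast, with union types (the kind of modelling mentioned upthread) the combined failure set is just the union of the two, with no merged enum or boxed trait object; a rough TypeScript sketch with made-up names:

                    type Ok<T> = { ok: true; value: T };
                    type Err<E> = { ok: false; error: E };
                    type Result<T, E> = Ok<T> | Err<E>;

                    type IoError = { kind: "io"; path: string };
                    type ParseError = { kind: "parse"; line: number };
                    type Config = { name: string };

                    declare function readConfig(path: string): Result<string, IoError>;
                    declare function parseConfig(text: string): Result<Config, ParseError>;

                    // The composed failure set stays precise: IoError | ParseError.
                    function loadConfig(path: string): Result<Config, IoError | ParseError> {
                      const read = readConfig(path);
                      if (!read.ok) return read;      // IoError flows through as-is
                      return parseConfig(read.value); // ParseError flows through as-is
                    }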
                • xerokimo 3 months ago
                  An error to me can be as simple as any time I need to write a range check so I don't index out of bounds; that very act of range checking is an error check. In other words, any time you require a runtime check to prevent an operation that would otherwise be erroneous, you have an error.

                  Philosophically, I don't differentiate by how you represent errors; they are all errors in the end. You can use unchecked exceptions, results, checked exceptions, error codes, whatever. I will abide by the rules set in the code base's guidelines, and I do have my preferences (unchecked exceptions), and I'll also accept using one or the other for practical reasons (performance, or binary overhead), but I will never argue that it is absolutely best that one error handling scheme is reserved for uses ABC, and the other best for EFG.

                  A bug is one of two things. The first is writing code that is erroneous despite having knowledge at compile time that the erroneous case can't happen, such as indexing into an array of known bounds with inputs that are known to never be out of bounds; this is typically where asserts are used. You can promote these types of bugs to errors if it doesn't harm your system stability, and they often are promoted in order to avoid crashing. The other definition of a bug is just your program entering an unforeseen state.
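
                  A small sketch of what I mean (made-up names): the same bounds check is a bug when the caller has promised a valid index, and becomes an error once it's promoted into a checked, recoverable variant.

                    type Ok<T> = { ok: true; value: T };
                    type Err<E> = { ok: false; error: E };
                    type Result<T, E> = Ok<T> | Err<E>;

                    // Bug flavour: out of range is a broken precondition, so fail loudly.
                    function at<T>(items: T[], i: number): T {
                      if (i < 0 || i >= items.length) {
                        throw new Error(`precondition violated: index ${i}`);
                      }
                      return items[i];
                    }

                    // Error flavour: the same check, promoted to a typed, recoverable failure.
                    function tryAt<T>(items: T[], i: number): Result<T, { kind: "outOfBounds"; index: number }> {
                      if (i < 0 || i >= items.length) {
                        return { ok: false, error: { kind: "outOfBounds", index: i } };
                      }
                      return { ok: true, value: items[i] };
                    }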

                  I dislike using the words expected, unexpected, and such for error handling, unless you're talking about C++'s std::expected, because no one can agree on what they mean, hence to me they are useless terms when talking about error handling. While the same can be said about the definitions of errors and bugs, practical usage tends to converge on agreement about whether something is an error or a bug.

                  From years of observing error handling:

                  - Most of the time when I see people talk about it, the emphasis seems to be on which operations can error out, rather than on what the state of your program should be when a unit of code errors out, which I believe deserves more emphasis.

                  I typically define a unit of code as a function, because that's how most things already work and it's easier to reason about. In other words, stop focusing on the individual failures inside the callee; rather, treat the entirety of the callee as the failure. This usually reduces your function to only two states: what your program state should be when everything succeeds, and what it should be when it fails.

                  That's how you can already interpret a lot of value-based error handling functions, no matter how many operations inside can fail. With exceptions, it means that either your whole function body is in a try/catch block, or it doesn't have one at all.
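
                  A rough sketch of that (made-up names): the caller never learns which internal step failed, only that the callee as a whole did.

                    type Ok<T> = { ok: true; value: T };
                    type Err<E> = { ok: false; error: E };
                    type Result<T, E> = Ok<T> | Err<E>;
                    type Order = { id: string };

                    declare function fetchOrders(): string[];        // may throw
                    declare function parseOrder(raw: string): Order; // may throw
                    declare function saveAll(orders: Order[]): void; // may throw

                    // The whole body is the try block: the function has exactly two
                    // outcomes, not one per operation that can fail inside it.
                    function importOrders(): Result<number, { kind: "importFailed"; cause: unknown }> {
                      try {
                        const orders = fetchOrders().map((raw) => parseOrder(raw));
                        saveAll(orders);
                        return { ok: true, value: orders.length };
                      } catch (cause) {
                        return { ok: false, error: { kind: "importFailed", cause } };
                      }
                    }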

                  - Value-based error handling seems to reinvent some features of unchecked exceptions, with very good reasons, leading me to believe that they are two sides of the same coin: given enough language and tooling support, they converge to the exact same thing.

                  - I do think it's better to encode preconditions in some way that's visible to callers, not just through documentation. My preferred way is through types; fully realized, this is known as dependent types. I'm not too familiar with them, but apparently they require a proof solver to work entirely at compile time. I wouldn't mind them turning into runtime checks, and if you have overloading available, that allows the runtime checks to mostly occur once, at construction time.
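
                  For instance (a made-up branded-type sketch, nowhere near real dependent types): the check runs once at construction, and every function taking the refined type gets the precondition for free.

                    declare const nonEmptyBrand: unique symbol;
                    type NonEmpty<T> = T[] & { readonly [nonEmptyBrand]: true };

                    // The runtime check happens once, here, at construction time.
                    function asNonEmpty<T>(items: T[]): NonEmpty<T> | undefined {
                      return items.length > 0 ? (items as NonEmpty<T>) : undefined;
                    }

                    // No emptiness check needed: the type carries the precondition.
                    function first<T>(items: NonEmpty<T>): T {
                      return items[0];
                    }

                    const xs = asNonEmpty([1, 2, 3]);
                    if (xs) {
                      console.log(first(xs)); // 1
                    }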
