All computation is effectful (2009)

14 points by panamafrank 10 years ago | 14 comments
  • SomeCallMeTim 10 years ago
    For a long time now I've been looking at Haskell from the outside, and wondering whether I should take the dive and learn the language. I usually write apps and games, though, and my gut has always told me that there's very little advantage to writing code that's primarily declarative, imperative, or reactive in a pure-functional manner. Sure, I use some functional principles to minimize code complexity, but I haven't made the commitment to learning a new language to experience the purity.

    Still, reading blog after blog telling me how awesome functional programming is, and meeting people in person who swear by it, and who try to sway me to the religion, made me desire to learn FP, just in case they were all right. I try to improve myself when I can, and ignoring other developers' advice isn't a good way to improve yourself.

    But after reading this well-written article, I am at peace with my long-standing de facto decision not to burn time learning Haskell. I'll continue to use functional code organization when appropriate, and I'll probably keep reading the articles from time to time, but to me deterministic speed of execution is far more important than "code purity." I'm a good developer precisely because I'm good at understanding how code effects propagate, I'm good at designing code to be clean and fast, and I know when to allow side-effects and when to forbid them. I don't feel the need for a crutch to make my code "more correct," and I already have access to several options to help me make my code more concise. I wish I'd seen this (2009?) article sooner.

    So no Haskell for me. Long live Python/Lua/JavaScript/C++/Go/and who knows what's next. [1]

    [1] Those are the languages I currently use the most. I make no claim that they are better or worse in some abstract way than Haskell. In concrete ways, however... ;)

    • chadaustin 10 years ago
      You should probably still learn Haskell, because it will help you get better at separating concepts in other programs you write. Ideas you will learn in Haskell that you won't necessarily pick up in the languages you listed:

      - type classes and their associated laws (e.g. generalized mapping: it's obvious that you can map across a list; it's less obvious that you can also map across a Maybe; it's even less obvious that you can map across a function...)

      - sum types

      - functions as a distilled programming concept without ancillary implementation details such as C's or C++'s "unique address" rules

      - restricted effects: subsetting IO for stronger static guarantees (like C++'s const, but far more powerful). STM was built this way; BufferBuilder too: http://chadaustin.me/2015/02/buffer-builder/ It's a powerful technique in Haskell that you don't get in other languages.

      - the realization that monomorphization is a generics implementation detail

      - the feeling that you get from generic, terse, dynamic-looking code that you can still rapidly iterate on in a repl
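      To make the mapping bullet concrete, here is the same `fmap` (from the Functor type class) applied to all three cases; for functions, `fmap` is just composition:

```haskell
-- One interface, many instances; each instance must obey the
-- functor laws (fmap id == id, fmap (f . g) == fmap f . fmap g).
main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3 :: Int])   -- map over a list:     [2,3,4]
  print (fmap (+ 1) (Just (41 :: Int))) -- map over a Maybe:    Just 42
  print (fmap (+ 1) (* 2) (5 :: Int))   -- map over a function: ((+1) . (*2)) 5 == 11
```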

      Haskell is not some pure ivory tower - there is plenty of imperative stateful code written in Haskell. The value of Haskell is that it introduces a pile of powerful ideas that you will carry through the rest of your career, even if you don't write Haskell on a day-to-day basis.
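      The restricted-effects point can be sketched in a few lines. This is an illustrative toy (not BufferBuilder's actual API): a newtype wraps IO, and a real module would export only `emit` and `runLog`, so code in `Log` can append to a log but cannot perform arbitrary IO:

```haskell
import Data.IORef

-- Log wraps IO, but with the Log constructor kept private the only
-- effects available inside it are the ones this module chooses to offer.
newtype Log a = Log { unLog :: IORef [String] -> IO a }

emit :: String -> Log ()
emit s = Log $ \ref -> modifyIORef ref (++ [s])

runLog :: Log a -> IO (a, [String])
runLog (Log f) = do
  ref  <- newIORef []
  x    <- f ref
  msgs <- readIORef ref
  pure (x, msgs)

-- Sequencing by hand; a real library would give Log a Monad
-- instance so do-notation works.
demo :: Log ()
demo = Log $ \ref -> unLog (emit "begin") ref >> unLog (emit "end") ref

main :: IO ()
main = runLog demo >>= print . snd
```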

      • codygman 10 years ago
        > there is plenty of imperative stateful code written in Haskell.

        Yep. For instance here's a program that logs into paypal to check your balance in Haskell using screen scraping:

        https://github.com/codygman/hs-scrape-paypal-login/blob/mast...

        • SomeCallMeTim 10 years ago
          OK, that's funny. I don't know Chad, but I did some work for IMVU a couple years ago, and yes, their backend services spit out a LOT of JSON.

          Some of the other things I do have in other languages: JavaScript, Go, and Lua give me first class functions, including the ability to curry.

          Sum types look nice, but also look functionally equivalent to enum types in C++; pattern matching is a nice feature if you ever need it, but in the kinds of programs I usually write, I don't.
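          For reference, a minimal sketch of a Haskell sum type and its pattern match, since they come up throughout this thread:

```haskell
-- Each constructor can carry its own payload, and the compiler
-- warns if a pattern match misses a case.
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = mapM_ (print . area) [Circle 1, Rect 3 4]
```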

          The value of type classes is less clear to me. Duck typing gets you a lot of reuse leverage without the mental masturbation of the quick reference to the concept that I found online.

          Monomorphization is a generics implementation detail. Yes. What of it? If you're in C++ and using templates, then they're useful. I'm guessing its use in Haskell is related to Haskell's pattern matching, which again I don't end up needing very often (I'm not saying it wouldn't be convenient, just that it doesn't make a compelling argument to change languages).

          Terse and actually dynamic code is pretty awesome as well. Right now I'm restricted to code that can run in the browser, so I'm stuck with JavaScript for the most part, but when I get to use Lua and Go, I get a lot of the power (the parts relevant to my typical code) along with a lot of speed.

        • Retra 10 years ago
          It sounds like you're saying "I can optimize my algorithms better than any machine could, so why would I learn how to write code that a machine could reason about more effectively?"

          Or more accurately, "I don't write code that needs what Haskell offers, so I'm not going to learn it." Which is fine, so long as you keep writing the same kind of code.

          Anyway, the reason we need these kinds of things to be more popular is so they are better understood. And if they are better understood, they will allow us to produce better ways of solving problems. Haskell is a very influential language in the world today, and it's not because it is strictly superior to other languages, but because it encourages ways of thinking about problems that are elegant and insightful in addition to being pure.

          Haskell's performance is 'pretty good.' It's not great. But it's not awful, which is what it would be if it weren't as good a language as it is. And the reason Haskell is not awful is because of those features that you're saying aren't important to you. Those features took a slug and turned it into a rabbit. It may not be a race horse, but if you ported those improvements into something like C, you'll have a high-performance rocket. Code that could possibly automatically refactor itself into more efficient, more general components.

          You could potentially get something faster and more expressive than any language you know. And "who knows what's next" may not be you, because you don't know what's now.

          But really, nobody is selling you functional programming because it'll make your code faster. They are selling it to you because it inspires them.

          • codygman 10 years ago
            >> It may not be a race horse, but if you ported those improvements into something like C, you'll have a high-performance rocket. Code that could possibly automatically refactor itself into more efficient, more general components.

            > From what I can tell, this is actually not the case. Unless what you're talking about is Go. In which case, carry on. ;)

            So, I don't have all the context... but it sounds like you are implying Haskell libraries can't be a "high-performance rocket" when paired with C, whereas Go can? That sound right? Hope so, because it's the assumption my comment below responds to.

            What about libraries which take this approach, such as bytestring[0], aeson[1], attoparsec[2], and binary[3]?

            0: https://hackage.haskell.org/package/bytestring

            1: https://hackage.haskell.org/package/aeson

            2: https://hackage.haskell.org/package/attoparsec

            3: https://hackage.haskell.org/package/binary

            • SomeCallMeTim 10 years ago
              >It sounds like you're saying "I can optimize my algorithms better than any machine could, so why would I learn how to write code that a machine could reason about more effectively?"

              Actually, that's not at all what I'm saying.

              I'm using tools that are known to create code that's faster than the equivalent Haskell.

              >Or more accurately, "I don't write code that needs what Haskell offers, so I'm not going to learn it." Which is fine, so long as you keep writing the same kind of code.

              I use languages that offer a lot of the tools that Haskell offers, but without as many restrictions. If the restrictions bought me speed (which I thought they did, previously), then they would be valuable.

              >But really, nobody is selling you functional programming because it'll make your code faster. They are selling it to you because it inspires them.

              I get that. I can see it in the converts who try to evangelize me. But I think it's like the people who drank the OOP Kool-Aid: yes, there are useful things to learn in other paradigms, but going extreme in any paradigm isn't the best way to code.

              > It may not be a race horse, but if you ported those improvements into something like C, you'll have a high-performance rocket. Code that could possibly automatically refactor itself into more efficient, more general components.

              From what I can tell, this is actually not the case. Unless what you're talking about is Go. In which case, carry on. ;)

            • pron 10 years ago
              Regardless of the merits of Haskell, I think that learning any new language -- while sometimes necessary -- is among the least valuable uses of your "learning resources". That is to say, it may be valuable, but there are probably much more valuable things you can learn (from new algorithms and data structures through OS design and all the way to hardware design). Programming languages are a small -- and not the most central -- part of computer science.

              Even from a pragmatic point of view, adopting a new programming language is one of the least effective ways to increase software quality. It is certainly the most expensive and wasteful way to increase productivity/quality and its returns are modest (certainly compared to the cost). I think that widely applicable -- and not at all "mathematical" -- techniques such as automated unit-testing have done so much more to increase software productivity and quality over the past twenty years than any language.

              • dreamcompiler 10 years ago
                I completely disagree. If all the languages I'd used had been in the Algol family (as most "mainstream" languages are) I might agree with you, since there's not much qualitative difference between them. But some languages are so different that they change the way you think about programming, and those new ways of thinking improve your skill in every language.

                For example, Forth taught me about threaded, stack-based code. It's a very different metaphor from assembler or C, and it runs just as fast. Icon (the language) showed me what you can do when failure is a first-class notion and everything is a generator. (Icon is monad-based programming, although that word wasn't used when Icon was invented.)

                Prolog taught me what you can do with declarative programming, pattern matching, and unification. Prolog is not really a general-purpose language, but it's a mind-blowingly elegant way to solve some classes of problems. (Prolog is the world's best database query language, for example. SQL feels like primitive torture by comparison.)

                Lisp taught me the virtues of lambda calculus, functional programming, programs as data, macros, and ubiquitous composition. After I learned Lisp I thought I understood functional programming.

                And then I learned Haskell, which cranks FP all the way to 11. You can curry in Lisp, but Haskell does it automatically. And then there are monads, which (spoiler alert) really are not about I/O. They're much more general than that. Knowing all this stuff has made me a better, more productive programmer in every language I use.
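                The automatic currying mentioned above, in a sketch: a two-argument Haskell function is really a one-argument function returning a function, so partial application needs no wrapper:

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying one argument yields a new function.
addTen :: Int -> Int
addTen = add 10

main :: IO ()
main = print (map addTen [1, 2, 3])   -- [11,12,13]
```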

                • pron 10 years ago
                  > But some languages are so different that they change the way you think about programming, and those new ways of thinking improve your skill in every language.

                  I absolutely agree -- I never said that learning new languages is useless -- it's just that there are other things you can learn that are much (much) more useful.

                  For example, decent familiarity with various algorithms and data structures (the more the better), OS architecture and hardware architecture will also improve your skill in every language, and will do so to a much greater extent than knowing Lisp, Haskell, Prolog and Python. The reason is that different programming languages differ in the abstractions they provide and the way they're used to express algorithms. But abstractions -- while very important -- are secondary to the algorithms themselves.

                  I'm saying this from the vantage point of roughly 20 years of experience, after having learned (to varying degrees) Lisp, Haskell and Prolog. Yes, they help. But knowing that other stuff helps so much more.

                  I find that it is younger programmers who tend to equate a program with its code, while more experienced programmers learn to separate the two: the code (which is very important), and the program itself -- the stream of machine instructions that get executed on the CPU and its interaction with other hardware subsystems and the OS. Today I can get a very good feel for what a program does -- and how elegantly it does it -- without even looking at the code or knowing what language it's written in, but simply by running it in a profiler (or several), although the profiler should be appropriate for the language. I find that "profile elegance" is even more important than "code elegance".

                  Code elegance is sometimes subjective and is almost always language dependent. Often, migrating abstraction concepts from one language to another really hurts code elegance; chief among those offending imported concepts is the monad, which has much better alternatives in imperative languages -- in fact, I'm giving a talk precisely about that in the upcoming Curry On/ECOOP conference in a couple of weeks.

            • ridiculous_fish 10 years ago
              Here is a Haskell action that has the effect of waiting for 5 seconds:

                  sleep 5
              
              This is in the IO monad and therefore impure. As we know, it is better to use pure functions when possible. Let us rewrite it as a pure function:

                  [0..] !! 1000000000 `seq` ()
              
              This accomplishes the same thing with no side effects, and is therefore better. Please note that spinning up your CPU fan is not a side effect.

              Joking of course, but it does illustrate the point. Functions take time and time is user-visible, thus all functions are effectful in that way. And C programmers will roll their eyes at the idea that `getpid` has side effects, while allocating a million-node linked list does not.

              Really this is just talking at cross purposes. Haskell's notion of purity and side effects lives not on the physical machine, but in a formalism, and GHC is its imperfect simulator.

              • dgreensp 10 years ago
                There's nothing inherently slow or problematic about composing code out of "pure functions." The implication that evaluating functions requires doing lambda calculus, which requires a weird runtime like Haskell, is pure FUD. Haskell is weird for other reasons, like the fact that expressions are lazily instead of eagerly evaluated. You can compose your code out of pure functions in most any programming language.

                Even when we're talking about facilities provided by the runtime, the fact that a programming language feature may have complex performance characteristics, in exchange for allowing the programmer to think about the problem in a more abstracted way, does not invalidate the abstraction. We might as well have an article about garbage collection called, "There is no garbage," making the point that at a lower level, all memory needs to be explicitly de-allocated. Or one called, "There is no immutability," pointing out that immutable data structures can only try to cover up the fundamental nature of computers, which have mutable memory cells.

                • pron 10 years ago
                  Pure functions are beautiful, useful and desirable, yet I believe, like the post's author, that a language that tries to enforce referential transparency everywhere is misguided, in that it places too big a burden on the programmer while doing little to prevent bugs that really matter.

                  Not all bugs are created equal -- some are easier to introduce and/or harder to find than others. A language that chooses to reduce bugs by placing non-trivial constraints on the developer -- i.e. by increasing the mental burden -- would do well to concentrate efforts on those bugs that cost more. IMO, Haskell does the exact opposite, nearly eliminating data-transformation bugs -- that are very easily found -- and doing little (though not nothing) to reduce effect-related bugs.

                  Pure functional programming carries other burdens, too, some stemming from its conceptual roots in lambda calculus, like the lack of a clear complexity model.

                  I also reject the conclusion that if most costly bugs are related to effects, our best course of action is getting rid of them altogether (or relegating them to an opaque runtime). This solution seems strange, as side-effects -- in spite of their name -- are the most central component of useful programs. Effects are the things we program (unless we're writing a compiler). This solution seems to me like suggesting to a programmer that since typing is the cause of wrist pain, she should type with her tongue. There are better solutions already. Clojure's approach to memory effects is at least as effective as Haskell's in preventing certain types of bugs, yet the language places a much lower mental burden on the programmer.

                  Lastly, I think PFP has (unintentionally) caused many to believe it is the only approach to a more "mathematical" mode of programming, and the best course for formal software verification. Neither could be farther from the truth. Haskell could not have prevented the bug discovered in Java's (originally Python's) efficient sorting algorithms -- probably the most used sorting code in the world today -- a few months ago, and it would have been very hard to detect it even in Idris (if the algorithm is expressible at all in that language). Instead, it was discovered with a software verifier for imperative languages. It is true that PFP languages may depend in clever ways on the Curry-Howard correspondence to help (force?) the programmer to prove some properties of her algorithm, but spelling out a partial proof in the code itself is not the only -- or even the best -- way to verify a program.

                  All this is not to say that we haven't learned a great deal from PFP and its approach to software verification. But it is just one approach of many, and one that is particularly intrusive.