The Failures of "Intro to TDD"

169 points by davemo 11 years ago | 84 comments
  • akeefer 11 years ago
    I think this is a great explanation of a lot of the obvious pitfalls with "basic" TDD, and why so many people end up putting in a lot of effort with TDD without getting much return.

    I personally have kind of moved away from TDD over the years, because of some of these reasons: namely, that if the tests match the structure of the code too closely, changes to the organization of that code are incredibly painful because of the work to be done in fixing the tests. I think the author's solution is a good one, though it still doesn't really solve the problem around what you do if you realize you got something wrong and need to refactor things.

    Over the years I personally have moved to writing some of the integration tests first, basically defining the API and the contracts that I feel like are the least likely to change, then breaking things down into the pieces that I think are necessary, but only really filling in unit tests once I'm pretty confident that the structure is basically correct and won't require major refactorings in the near future (and often only for those pieces whose behavior is complicated enough that the integration tests are unlikely to catch all the potential bugs).

    I think there sometimes needs to be a bit more honest discussion about things like:

    * When TDD isn't a good idea (say, when prototyping things, or when you don't yet know how you want to structure the system)

    * Which tests are the most valuable, and how to identify them

    * The different ways in which tests can provide value (in ensuring the system is designed for testability, in identifying bugs during early implementation, in providing a place to hang future regression tests, in enabling debugging of the system, in preventing regressions, etc.), what kinds of tests provide what value, and how to identify when they're no longer providing enough value to justify their continued maintenance

    * What to do when you have to do a major refactoring that kills hundreds of tests (i.e. how much is it worth it to rewrite those unit tests?)

    * That investment in testing is an ROI equation (as with everything), and how to evaluate the true value the tests are giving you against the true costs of writing and maintaining them

    * All the different failure modes of TDD (e.g. the unit tests work but the system as a whole is broken, mock hell, expensive refactorings, too many tiny pieces that make it hard to follow anything) and how to avoid them or minimize their cost

    Sometimes it seems like the high level goals, i.e. shipping high-quality software that solves a user's problems, get lost in the dogma around how to meet those goals.

    • vinceguidry 11 years ago
      > I think this is a great explanation of a lot of the obvious pitfalls with "basic" TDD, and why so many people end up putting in a lot of effort with TDD without getting much return.

      If you have the cash, spring for Gary Bernhardt's Destroy All Software screencasts. That $240 was the best money my employer ever spent on me. Trying to learn TDD on your own is asking for a lot of pain, and all you'll end up doing is reinventing the wheel.

      There are a lot of subtle concepts Gary taught me that I'm still learning to master. You learn what to test, how to test it, at what level to test it, how to structure your workflow to accommodate it.

      • searls 11 years ago
        +1 for DAS. Gary's great and I think we agree pretty closely on these issues.
        • taude 11 years ago
          Were there any particular seasons you found useful in Destroy All? It seems like it's mixed, with snippets of TDD spread around at will, whenever the need hits.

          (I ask because there's no way I'm going to have time to watch/absorb all those things).

      • ArbitraryLimits 11 years ago
        > When TDD isn't a good idea (say, when ... you don't yet know how you want to structure the system)

        (Apologies in advance as I can't figure out how not to sound snarky here.)

        Isn't that called "the design"? And if "test-driven design" fails when you don't already have the design, is there any meaningful way in which it's worth anything at all?

        • akeefer 11 years ago
          Sure, you can call that structure the design, or the architecture, or whatever you like. Either way, it's a fair question.

          As a point of semantics: TDD generally stands for "test-driven development," not "test-driven design," though the article here does make the claim that TDD helps with design.

          To reduce my personal philosophy to a near tautology: if you don't design the system to be testable, it's not going to be testable. TDD, to me, is really about designing for testability. Doing that, however, isn't easy: knowing what's testable and what's not requires a lot of practical experience which tends to be gained by writing a bunch of tests for things. In addition, the longer you wait to validate how testable your design actually is, the more likely it is that you got things wrong and will find it very painful to fix them. So when I talk about TDD myself, I'm really talking about "design for testability and validate testability early and often." If you don't have a clue how you want to build things, TDD isn't going to help.

          If you take TDD to mean strictly test-first development . . . well, I only find that useful when I'm fixing bugs, where step 1 is always to write a regression test (if possible). Otherwise it just makes me miserable.

          The other thing worth pointing out is that design for testability isn't always 100% aligned with other design concerns like performance, readability, or flexibility: you often have to make a tradeoff, and testability isn't always the right answer. I personally get really irked by the arguments some people make that "TDD always leads to good design; if you did TDD and the result isn't good, you're doing TDD wrong." Sure, plenty of people have no clue what they're doing and make a mess of things in the name of testability. (To be clear, I don't think the author here makes the mistake of begging the question: I liked the article because I think it honestly points out many of the types of mistakes people make and provides a reasonable approach to avoiding them.)

          • couchand 11 years ago
            I think you're spot on here - TDD is great as long as you're not too obstinate about it. It's a trade off, just like every interesting problem.

            One point I'd like to draw out:

            > If you don't have a clue how you want to build things, TDD isn't going to help.

            This is exactly right. If you find yourself completely unable to articulate a test for something, you probably don't really know what it is you're trying to build. I think that's the greatest benefit to TDD: it forces you to stop typing and think.

          • twic 11 years ago
            "Test-driven design", as it is commonly understood, does seem to be a mythical beast. I've hunted it with both logic and experience and come up empty-handed.

            That said, i do still find that while test-driven development doesn't itself create good design, it is a useful tool to help me create good design. I have a bite-size piece of functionality to write; i think about what the class should look like; i write tests to describe the class; i write the class. The key thing is that the tests are a description of the class. The act of writing down a description of something has an amazing power to force the mind to really understand it; to see what's missing, what's contradictory, what's unnecessary, and what's really important. I experience this when i write presentations, when i write documentation, and when i write tests. The tests don't do the thinking for me, but they are a very useful tool for my thinking.

            • dboat 11 years ago
              It's very common in software development to receive incomplete requirements. My world would be a very different place if I always received feature-complete design documents (and in some cases, any documents at all). Had I insisted on any kind of TDD, it would have greatly increased my workload by reducing my ability to alter the design to accommodate new feature requests and changes while internal clients test the code.

              I do gather some places do things differently though. Must be nice.

              • searls 11 years ago
                I think I'd have to offer that my experience differs. TDD is not at all big-design-up-front, even with this reductive exercise. In fact, most features start very minimally and the tree of dependencies grows over time, just like any system becomes incrementally more complex. TDD is just one tool (of many) to help manage that complexity, both by offering some regression value (at least for the logical bits) and by encouraging small, bite-sized units that are easy to make sense of (and therefore to change or replace).
          • usea 11 years ago
            I have tried many times to do TDD. I find it extraordinarily hard to let tests drive the design, because I already see the design in my head before I start coding. All the details might not be filled in, and there are surely things I overlook from the high-up view, but for the most part I already envision the solution.

            It's difficult to ignore the solution that is staring my brain in the face and pretend to let it happen organically. I know that I will end up with a worse design too, because I'm a novice at TDD and it doesn't come naturally to me. (I'd argue that I'm a novice at everything and always will be, but I'm even more green when it comes to TDD)

            I have no problem writing unit tests, I love mocking dependencies, and I love designing small units of code with little or no internal state. But I cannot figure out how to let go of all that and try to get there via tests instead.

            I don't think that I'm a master craftsman, nor do I think my designs are perfect. I get excited at the idea of learning that the way I do everything is garbage and there's a better way. If I ever learn that I'm a master at software development, I'll probably get depressed. But I don't think my inability to get to a better design via TDD is Dunning-Kruger, either.

            I want to see the light.

            • Silhouette 11 years ago
              > I want to see the light.

              Some of us would argue that you already have.

              You're already doing several reasonable things that tend to improve results: using unit tests, being aware of dependencies, being aware of where your state is held. There is ample credible evidence to suggest that both using automated testing processes and controlling the complexity of your code are good things.

              There is little if any robust evidence that adopting TDD would necessarily improve your performance from the respectable position you're already in. So do the truly agile thing, and follow a process that works for you on your projects. You can and should always be looking for ways to improve that process as you gain experience. But never feel compelled to adopt a practice just because some textbook or blog post or high-profile consultant advocated it, if you've tried it and your own experience is that it is counterproductive for you at that time.

              • wpietri 11 years ago
                My big tip for someone at your stage: don't see the design. See a few designs. Sure, be going in some direction, but constantly be seeking alternatives to choose from. And always favor the simpler alternative to start.

                One thing that helps keep me doing that: it's only with trivial problems that you know everything important up front. Accept that your domain will surprise you. That your technology will surprise you. That your own code will surprise you if you pay close attention to what's working well and what could be better.

                • couchand 11 years ago
                  And your users, they'll surely surprise you!
                • sethkojo 11 years ago
                  Maybe you're over thinking it? It sounds like you're already doing the right things.

                  > All the details might not be filled in, and there are surely things I overlook from the high-up view, but for the most part I already envision the solution.

                  The design part of TDD is just the expectations. So if you were to test an add function for example, you might write something like

                    assertEqual(add(5,2), 7)
                    assertEqual(add(-5,2), -3)
                    assertEqual(add(5,-2), 3)
                  
                  before actually implementing the function. So here the design is that the add function takes 2 arguments. That's it.

                  For other things like classes, your expectations will also drive the design of the class -- what fields and methods are exposed, what the fields might default to, what kinds of things the methods return, etc. Your expectations are the things you saw in your head before you start coding. So it's pretty much the same as what you do already. The benefit of TDD is in knowing that you have a correct implementation and you can move on once things are green.

                  One thing that's easy to misinterpret is that TDD doesn't mean writing a bunch of tests before writing any code...That's pretty much waterfall development. TDD tends to work best with a real tight test-code loop at the function level.
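
                  To make the class case concrete, here's a minimal sketch in Python's unittest style - ShoppingCart and everything about it is hypothetical, not from the article - where the expectations come first and pin down what the class exposes:

                    import unittest

                    # The expectations are written first and fix the design:
                    # a ShoppingCart exposes add() and a total that starts at 0.
                    class ShoppingCartTest(unittest.TestCase):
                        def test_total_defaults_to_zero(self):
                            self.assertEqual(ShoppingCart().total, 0)

                        def test_add_accumulates_prices(self):
                            cart = ShoppingCart()
                            cart.add(price=5)
                            cart.add(price=2)
                            self.assertEqual(cart.total, 7)

                    # The implementation comes second and only has to turn the expectations green.
                    class ShoppingCart:
                        def __init__(self):
                            self.total = 0

                        def add(self, price):
                            self.total += price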

                  • ams6110 11 years ago
                    Incidentally for functions like that, if you have an environment that supports a tool like QuickCheck[1], it's a great thing to use. "The programmer provides a specification of the program, in the form of properties which functions should satisfy, and QuickCheck then tests that the properties hold in a large number of randomly generated cases."

                    1: http://www.cse.chalmers.se/~rjmh/QuickCheck/
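
                    If you're in Python-land, the Hypothesis library gives you something similar (a rough sketch, with add() standing in for whatever is under test): you state the properties once and the tool generates a large number of cases for you.

                      # Property-based sketch using the Hypothesis library (pip install hypothesis);
                      # run with pytest.
                      from hypothesis import given, strategies as st

                      def add(a, b):
                          return a + b

                      # Instead of hand-picking (5, 2), (-5, 2), ... we state properties that
                      # should hold for *any* integers, and Hypothesis generates the cases.
                      @given(st.integers(), st.integers())
                      def test_add_is_commutative(a, b):
                          assert add(a, b) == add(b, a)

                      @given(st.integers())
                      def test_zero_is_identity(a):
                          assert add(a, 0) == a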

                    • collyw 11 years ago
                      Why is it that TDD examples always test stuff that is pretty much useless? I don't need to check an add function. I am pretty confident it will work as is.

                      If you can point me to a more useful example somewhere, please show it to me.

                  • richardjordan 11 years ago
                    The comments section today looks like a support group for beginners/intermediates who struggled with TDD and gave up, and so want to explain why it's all bunk. I get this. I am not a great programmer. I'm self taught like a lot of you. I had tremendous difficulty grokking TDD and for the longest time I'd start, give up, build without it.

                    But I'm here as a you-can-do-it-too. You might not think you want to, but I'm so glad I DID manage to get there.

                    Feel free to ignore this, because I respect that everyone's experience differs. But the real problem is that there are few good step-by-step tutorials that take you from start to competent with TDD. Couple that with the fact that it takes real time to learn good TDD practices, and the vast majority of TDDers in their early stage write too many tests, bad tests, and tightly coupled tests.

                    Just as it took you time to learn programming - I don't mean hello world, but getting to the level of coding competence you're at today - it'll take a long time to get good with TDD. My case (Ruby, YMMV) involved googling every time I struggled; lots of Stack Overflow; plenty of Confreaks talks; Sandi Metz's POODR...

                    Like the OP says, at different stages in the learning cycle you take different approaches, because you're better and it's more instinctive to you. I thought I understood the purpose of mocks/doubles, until I actually understood the purpose of mocks/doubles. When used right they're fantastic.

                    The key insight that everyone attempting TDD has to grok, before all else, is that it's about design not regression testing. If you're struggling to write tests, and they're hard to write, messy, take a lot of setup, are slow to run, too tightly coupled etc. you have a design problem. It's exposed. Think through your abstractions. Refactor. Always refactor. Don't do RED-GREEN-GOOD ENOUGH ... I did for a long time. It was frustrating.

                    This is a good post. Don't dismiss TDD because you're struggling. Try to find better learning tools and practice lots and listen to others who are successful with it.

                    It's true that sometimes fads take hold and we can dismiss them as everyone doing something for no reason. But cynicism can take hold too and we can think that of everything and miss good tools and techniques. TDD will help you be a better coder - at least it has me. If your first response to this post was TDD is bullshit, give it another try.

                    • bphogan 11 years ago
                      This is really right on the money. If it's too hard to test then you've already found something really valuable - a problem with your design that will cause you friction later on.

                      That itself might be worth a ton to you.

                      • collyw 11 years ago
                        "If you're struggling to write tests, and they're hard to write, messy, take a lot of setup, are slow to run, too tightly coupled etc. you have a design problem."

                        This is my problem exactly, and I wouldn't say I have a design problem. My application is a Django app that returns complex database query results. Creating the fixtures for ALL of the edge cases would take significantly longer than writing the code. At this stage it is far more efficient to take a copy of the production database and check things manually. It helps that my app is in-house only, so users will report straight away when something isn't working.

                        But to say that I have a design problem because tests are going to be difficult to implement is just plain wrong.

                        • richardjordan 11 years ago
                          Sure... it can be more broadly stated as "you have a design problem and/or you're testing the wrong things."
                      • hcarvalhoalves 11 years ago
                        The approach outlined actually makes much more sense without OO. I guess the WTF comes from forcing yourself into a world of "MoneyFinder", "InvoiceFetcher", etc. Makes it look a lot more complicated and prone to error than it is, because you're now supposed to mock objects that may have internal state. Otherwise it's the usual top-down approach with stubs.
                        • MoosePlissken 11 years ago
                          Yeah I think it's interesting that the final approach with "logical units" and "collaboration units" mirrors a functional approach with "functions" and "higher-order functions". The advice to write small "logical units" could also just be "write pure functions". The complex class hierarchy in the final example could probably be avoided entirely if you were using a language with first class functions. As a bonus, in a functional language the "collaboration units" have probably already been written and tested for you.
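
                          A rough sketch of that mapping, with Python standing in for a functional language (the names here are mine, not the article's):

                            # "Logical units" as pure functions: no state, trivially testable in isolation.
                            def parse_amount(raw):
                                return int(raw.strip())

                            def apply_discount(amount, rate):
                                return amount - amount * rate

                            # The "collaboration unit" is just composition: it holds no logic of its own,
                            # it only wires the pure pieces together, so it needs little more than a wiring test.
                            def quoted_price(raw, rate, parse=parse_amount, discount=apply_discount):
                                return discount(parse(raw), rate)

                            assert quoted_price(" 100 ", 0.1) == 90.0
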
                          • searls 11 years ago
                            I'm not sure if the current draft still says so, but at some point I talked about how the logical units ought to be "pure functions" most of the time.
                          • searls 11 years ago
                            Yep. I don't really practice "OOP" anymore because each of my objects is really just behavior with no application state (their only state would be the other behavioral objects they depend on).

                            However, in a classical language it's easier to organize stuff into classes and for the purpose of a post like this one it's easier to convey. But you're dead on.

                          • mattvanhorn 11 years ago
                            I think that Red-Green-Refactor is as much about learning to habitually look for and recognize the refactoring opportunities as it is about being meticulous in reacting to those opportunities.

                              It's true that nothing forces you to refactor - but I think wanting that is a symptom of treating TDD as a kind of recipe-based prescriptive approach. It is not a reflection of the nature of TDD as a practice or habit.

                            It's a subtle difference, but important:

                            A recipe says "do step 3 or your end result will be bad"

                            A practice says "do step 3 so you get better at doing step 3"

                            • danso 11 years ago
                              The more I try to explain TDD, the more I realize that some of my favorite concepts, like the ability to mock functionality of an external process because the details of that process should be irrelevant...is just beyond the grasp of most beginners. That is, I thought/hoped that TDD would necessarily force them into good orthogonal design, because it does so for me...but it seems like they have to have a good grasp of that before they can truly grok TDD.
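
                                A minimal sketch of the kind of thing I mean (the weather example and names are made up): the test only cares about the contract, never the real external process behind it.

                                  from unittest import mock

                                  # Hypothetical function under test: it depends on some external process
                                  # only through fetch_forecast, so the details of that process are irrelevant here.
                                  def packing_advice(city, fetch_forecast):
                                      forecast = fetch_forecast(city)
                                      return "bring an umbrella" if forecast["rain"] else "leave it at home"

                                  def test_packing_advice_when_rainy():
                                      fake_fetch = mock.Mock(return_value={"rain": True})
                                      assert packing_advice("Portland", fake_fetch) == "bring an umbrella"
                                      fake_fetch.assert_called_once_with("Portland")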

                                Has anyone else solved this chicken-and-egg dilemma?

                              • fleitz 11 years ago
                                Test Driven Design doesn't fundamentally solve any problems, it's a tool for master craftsmen to tease out subtle errors in their design. The problem is junior programmers can't recognize bad design so they end up writing tests for a bad design, because they don't understand how bad the design is, they don't understand how to break it.

                                  IMHO junior programmers tend to think that over-specifying a design helps them; only a master can recognize the brilliance of something like SMTP/REST/JSON over X400/SOAP/XML. TDD just helps them over-specify their bad designs.

                                That said TDD is a wonderful tool in the hands of a master. It's like photography, a $10,000 camera won't help you solve your composition problems. Tech can help ensure Ansel Adams doesn't take a photo with the wrong focus, but a properly focused poorly composed image does not a masterpiece make.

                                • searls 11 years ago
                                  This was indeed my motivation for writing the post. I think the next step to take if you agree with my premise is that we need to come together with ideas for how to best teach TDD to beginners/novices. Exercises that promote these concepts, lines of reasoning to take, tools to get people started without any unnecessary cognitive overhead, etc.

                                  I agree that teaching TDD exactly how I do it today can be a bit overwhelming from a tooling perspective currently, but conceptually I think visualizing it as a reductionist exercise with a tree graph of units is pretty simple.

                                  • bphogan 11 years ago
                                    One thing I do with my beginning programming students (since their programs are tiny) is make them write out "test plans" on paper before they can write their program code.

                                    They have to write the inputs and then the expected results.

                                    It gets them thinking about the concept of using tests as part of the design practice.

                                    Later, I give them the unit tests and they have to write the code. This is usually a rewritten version of a previous program so they see the text-based test plans in action as unit tests.

                                    Then I might give them the empty test and an empty implementation, asking them to fill in the test first, then the implementation.

                                    Finally I ask for a completely new feature, and they have to figure out how to write the test. And I ask them to go about it with a test plan.

                                    After a few semesters of this, I think I'm ready to say that this is successful for getting the "beginners" there.

                                    It doesn't address everything, but I think it's a good start.
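
                                      To give a flavor of the jump from paper to code (a toy example of my own, not one of the actual assignments): a test-plan row like "4 quarters, 0 dimes -> $1.00" eventually becomes a runnable unit test.

                                        import unittest

                                        # The function the test plan describes: inputs in, expected result out.
                                        def coins_to_dollars(quarters, dimes):
                                            return quarters * 0.25 + dimes * 0.10

                                        # The paper test plan, transcribed as unit tests the students can run.
                                        class CoinsToDollarsTest(unittest.TestCase):
                                            def test_four_quarters_make_a_dollar(self):
                                                self.assertAlmostEqual(coins_to_dollars(quarters=4, dimes=0), 1.00)

                                            def test_quarters_and_dimes_combine(self):
                                                self.assertAlmostEqual(coins_to_dollars(quarters=2, dimes=3), 0.80)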

                                  • steveklabnik 11 years ago
                                    This is the point of the blog post, no?
                                  • Nimi 11 years ago
                                      I wonder about these workshops (I even asked Uncle Bob Martin about them in a recent thread). I can't shake the feeling they are the exact opposite of agility (obviously, he is better qualified than me to judge that). Their limited time schedules, which are essentially a bound on the amount of contact between the client and the supplier, seem analogous to the infamous "requirements document". Also, there doesn't appear to be a "shippable" product at the end - the developers apparently don't end up practicing TDD.

                                    I used to be an instructor for a living, and I kind-of equated lectures to waterfall and exercises to XP. There is even a semantically analogous term in teaching research, problem-based learning (each word corresponds to the respective word in test-driven development - cool, right?). Is there anyone else who sees these analogues, or am I completely crazy here?

                                  • mrisse 11 years ago
                                      Might one of the problems be that we place too much importance on the "symmetrical" unit test? In your example the child code is still covered when it is extracted from the parent.

                                    As a developer that often prefers tests at the functional level, the primary benefit of tests for me is to get faster feedback while I am developing.

                                    • searls 11 years ago
                                      The trouble with abandoning symmetrical unit tests is that:

                                        * The unit is no longer portable and can't be pulled from the context it was first used in (e.g. into a library or another app) without becoming untested. And adding characterization testing later is usually more expensive.

                                        * A developer who needs to make a change to that unit needs to know where to "test drive" that change from, which requires that they know where to look for the parent's test that uses it. That's hard enough, but it completely falls over when the unit is used in two, three, or more places. Now a bunch of tests have to be redesigned and none of them are easy to find.

                                        * Integrated unit tests like this lead to superlinear build duration growth b/c they each get slower as the system gets bigger. This really trips teams up in year 2 or 3 of a system.

                                      • mrisse 11 years ago
                                        Unless I'm missing something, wouldn't the child dependency be enough to prevent the unit from being dropped into another library or app? That's a good point you bring up about knowing where to "test drive" the changes from, though usually on the apps I've worked on, they've been small enough that the relevant integration test could be found without much detective work.

                                        I guess I haven't been involved in too many 2-3 year monolithic projects. Maybe that's when a stricter symmetrical unit test policy makes the most sense.

                                        What other levels of tests do you end up running besides your unit tests? Do you have any integrated unit tests? Functional tests? End to end tests?

                                        • jasonkarns 11 years ago
                                          The author is stating that the child dependency cannot be extracted to another library or app. If it is extracted, it is untested, because the only tests wrapping the child dependency are actually testing the child's original parent. (Which is likely to not exist in whatever other library/app to which the child component is moved.) And then, to retroactively add tests to the child component in order to facilitate moving it to another library, is painful.

                                            Having symmetrical tests enables components to be moved to other libraries/apps more easily, because the test can move with the unit under test.

                                    • richardjordan 11 years ago
                                        Shout out for Sandi Metz's book POODR, and her RailsConf talk The Magic Tricks of Testing, if you're a Rubyist (though the principles hold true for non-Ruby OO programmers too).

                                      https://www.youtube.com/watch?v=URSWYvyc42M

                                      • Groxx 11 years ago
                                        +1 for POODR - very (very) well written, goes down multiple pathways reasonably (rather than "this is how you solve that" without any clue why you solve it that way), and gives some decent tools for any project. I only wish it were longer.
                                        • searls 11 years ago
                                          Yep! Sandi and I get along very well when it comes to these topics :)
                                        • mattvanhorn 11 years ago
                                          I agree with the general approach suggested in the article (in tests, write/assume the code you wish you had).

                                          But one detail ran counter to my personal practice.

                                          I don't believe that "symmetrical" unit tests are a worthy goal. I believe in testing units of behavior, whether or not they correspond to a method/class. Symmetry leads to brittleness. I refactor as much as possible into private methods, but I leave my tests (mostly) alone. I generally try to have a decent set of acceptance tests, too.

                                          Ideally, you specify a lot of behavior about your public API, but the details are handled in small private methods that are free to change without affecting your tests.
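
                                          A small sketch of what I mean (illustrative names only, in Python): the test pins down the public contract, and the private helpers are free to be reshuffled without touching it.

                                            class PriceFormatter:
                                                # Public API: this is the behavior the tests specify.
                                                def format(self, cents):
                                                    return self._with_symbol(self._to_dollars(cents))

                                                # Private details: free to be renamed, merged, or split during refactoring.
                                                def _to_dollars(self, cents):
                                                    return "%.2f" % (cents / 100.0)

                                                def _with_symbol(self, amount):
                                                    return "$" + amount

                                            def test_format_renders_dollars_and_cents():
                                                assert PriceFormatter().format(1250) == "$12.50"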

                                          • searls 11 years ago
                                            I understand the concern, but I value consistency and discoverability, so symmetry of thing-being-tested to test itself is (so far) the best way I've found to make sure it's dreadfully obvious where a given unit's test is.

                                            This approach is not concerned with brittleness or being coupled to the implementation because each unit is so small that it's easier to trash the object and its test when requirements change than it is to try to update both dramatically.

                                            • mattvanhorn 11 years ago
                                              I suppose that if you do keep things that small, it could work well to trash and rewrite. Plus it has the benefit of making you consider explicitly what is going/staying.

                                              Personally, I like my tests to be pretty clearly about the behavior of the contract, and not the implementation, which is hard when you require every method have a test.

                                              I'd also be concerned that other team members would be reluctant to delete tests, as this is a dysfunction I see often and try to counteract with varying degrees of success.

                                            • radicalbyte 11 years ago
                                              Symmetrical tests really help other developers on your team. It depends on your design, but "public API" tests are often something between integration and unit tests - i.e. they test in-process cooperation of units.
                                            • viggity 11 years ago
                                              Yes! I've always hated the common kata, because for every dev writing software for a bowling alley, there are 200,000 devs writing software that sends invoices or stores documents.

                                              When I'm teaching TDD, the kata I have everyone go through is a simple order system.

                                              The requirements are something like:

                                              A user can order a case of soda

                                              The user should have their credit card charged

                                              The user should get an email when the card is charged

                                              The user should get an email when their order ships

                                              If the credit card is denied, they should see an error message

                                              (etc....)

                                              This way they can think about abstracting out dependencies - an IEmailService, an ICreditCardService, etc. There are no dependencies for a Roman numeral converter.
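
                                              Roughly the shape I'm after, sketched here in Python with unittest.mock rather than C# interfaces (all the names are placeholders):

                                                from unittest import mock

                                                # The order service only knows about abstract collaborators: payments and
                                                # email are injected, not newed up inside, so the seams are explicit.
                                                class OrderService:
                                                    def __init__(self, payment_gateway, email_service):
                                                        self.payment_gateway = payment_gateway
                                                        self.email_service = email_service

                                                    def place_order(self, user, order):
                                                        if not self.payment_gateway.charge(user.card, order.total):
                                                            return "credit card declined"
                                                        self.email_service.send(user.email, "Your card was charged")
                                                        return "order placed"

                                                def test_successful_order_charges_card_and_emails_user():
                                                    payments = mock.Mock()
                                                    payments.charge.return_value = True
                                                    email = mock.Mock()
                                                    user = mock.Mock(card="4111", email="a@example.com")
                                                    order = mock.Mock(total=1299)

                                                    service = OrderService(payments, email)

                                                    assert service.place_order(user, order) == "order placed"
                                                    payments.charge.assert_called_once_with("4111", 1299)
                                                    email.send.assert_called_once_with("a@example.com", "Your card was charged")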

                                              • GhotiFish 11 years ago
                                                I like the way he broke things up, but something bothers me about his technique.

                                                All his classes ended in "er".

                                                He's not writing object-oriented software, he's writing imperative software with objects.

                                                • searls 11 years ago
                                                  Yes I am.
                                                  • GhotiFish 11 years ago
                                                    fair enough. Do you think TDD and OOP are mutually exclusive practices?
                                                    • searls 11 years ago
                                                      With TDD as I practice it, yes - but I think OOP as it's traditionally taught encourages developers to tangle mutable application state and behavior, which leads to all sorts of problems. The more I practice, the more I learn that life is better when I separate whatever holds the state from whatever has the behavior.
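
                                                      A tiny sketch of the separation I mean (my own example, not from the post): the state lives in a dumb value object, and the behavior object holds no application state of its own.

                                                        from dataclasses import dataclass

                                                        # The state lives in a plain value object...
                                                        @dataclass
                                                        class Invoice:
                                                            subtotal: int
                                                            tax: int

                                                        # ...and the behavior lives in an object with no application state of its own.
                                                        class InvoiceTotaler:
                                                            def total(self, invoice):
                                                                return invoice.subtotal + invoice.tax

                                                        assert InvoiceTotaler().total(Invoice(subtotal=100, tax=20)) == 120
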
                                                • ChristianMarks 11 years ago
                                                  This is probably the first reasonably sophisticated attempt to describe a test-driven design/development process I have read.

                                                  The observation that "[s]ome teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism" reminds me of E. O. Wilson's remark that "Karl Marx was right, socialism works, it is just that he had the wrong species."

                                                  If test-driven design were the programming panacea its proponents sometimes seem to make of it, Knuth would have written about it in TAOCP. Instead Knuth advocates Literate Programming. TDD seems to attract a cult-like following, with a relatively high ratio of opinion to cited peer-reviewed literature among proponents.

                                                  TDD as it is commonly understood seems to me like the calculational approach to program design (cf. Anne Kaldewaij, Programming: The Derivation of Algorithms), only without the calculation and without predicate transformers. Still, it can be a useful technique.

                                                  There is no "right" way to program. This was evident from the beginning, when Turing proved the unsolvability of the halting problem. (Conventions are another matter.)

                                                  • zwieback 11 years ago
                                                    Sure, but if the end result is "lots of little objects/methods/functions" maybe there's a simpler way of getting there, e.g. prescriptive design rules. After all, that's what every design method, including stuff from the waterfall era attempted.

                                                    I'd like TDD to be more than just another way to relearn those old rules, especially if we arrive at the same conclusions on a circuitous path. Perhaps the old design rules, object patterns, etc. have to each be integrated with a testing strategy, e.g. if you're using an observer you have to test it like this and if you refactor it like that you change your tests like so.
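
                                                    For instance, the observer case might pair the pattern with a canned testing strategy like this (just a sketch of the idea, not an established recipe):

                                                      from unittest import mock

                                                      # A bare-bones subject/observer pair...
                                                      class Subject:
                                                          def __init__(self):
                                                              self.observers = []

                                                          def subscribe(self, observer):
                                                              self.observers.append(observer)

                                                          def publish(self, event):
                                                              for observer in self.observers:
                                                                  observer.notify(event)

                                                      # ...and the testing strategy that travels with the pattern: subscribe a fake.
                                                      def test_subscribers_are_notified():
                                                          subject = Subject()
                                                          listener = mock.Mock()
                                                          subject.subscribe(listener)
                                                          subject.publish("price_changed")
                                                          listener.notify.assert_called_once_with("price_changed")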

                                                    The general rules are easy to understand and your post makes perfect sense but once you formulate your new design approach you'll have to find a way to teach it precisely enough to avoid whatever antipattern is certain to evolve among the half-educated user community, which usually includes myself and about 95% of everyone else.

                                                    • searls 11 years ago
                                                      Hey HN, I just wanted to thank you for the overall very positive, constructive comment thread. Thanks to you this post got roughly 22k page views and I didn't receive a single vitriolic comment or bitter dissent. All I got was thoughtful, earnest, and honest replies. Made my day.
                                                      • tieTYT 11 years ago
                                                        OK but after you "Fake It Until You Make It" and you have to add a new feature to that class structure, aren't you just going to start over with all the failures he brings up?

                                                        ---------

                                                        I haven't designed code the way he's advocating, but I have attempted TDD by starting with the leaves first. Here are the downsides to that:

                                                        1) Sometimes you end up testing and writing a leaf that you don't end up using/needing.

                                                        2) You realize you need a parameter you didn't anticipate. E.g.: "Obviously this patient report needs the Patient object. Oh crap, I forgot that there's a requirement to print the user's name on the report. Now I've got to get that User object and pass it all the way through".

                                                        Maybe these experiences aren't relevant. As I said, I haven't tried to "Fake It Until You Make It".

                                                        • s73v3r 11 years ago
                                                          "1) Sometimes you end testing and writing a leaf that you you don't end up using/needing."

                                                          So what? Just delete it. Your version control system should have a record of what it was if you end up needing to go back to it.

                                                        • radicalbyte 11 years ago
                                                          Excellent post, I've had exactly the same experience and come to exactly the same conclusion.

                                                          I still follow the old Code Complete method: think about the problem, sketch it out, then finally implement with unit tests. The results are the same, and it's a lot less painful than greenhorn-TDD.

                                                          • _random_ 11 years ago
                                                            I do this as well. Prototyping needs flexibility, and unit tests slow down refactoring. If you are familiar with SOLID then your design will not be bad even without a test-first approach.
                                                            • searls 11 years ago
                                                              Time at a white board breaking down a problem is rarely wasted :)
                                                              • collyw 11 years ago
                                                                I completely agree with this. In fact, when I have a bigger architectural problem to think about, I like to sit on it for a day or two, thinking about one or two designs that would work. It takes a while to see the strengths/flaws in each design, and if you jump straight into code you won't realize the problems until you have something half implemented.
                                                            • julie1 11 years ago
                                                              TDD and agile have been an effort at breaking an old must-have for code, which was ISO 9001: the code should behave according to the plan, and if the tests fail and the code doesn't conform, the plan must be revised. The Plan-Do-Check-Act mantra. Now they find themselves facing the consequences of not respecting the expectations of the customers, and they whine because "it was not applied correctly, because no one cared".

                                                              So now they re-formalize exactly the supposedly "rigid" ISO 9001 they were trying to throw out.

                                                              What an irony.

                                                              • ChuckMcM 11 years ago
                                                                I suspect if they had called it Architecture Driven Development (ADD) rather than Test Driven Development (TDD) it might contextualize better. Basically what the author explains is that you can design an architecture top down from simple requirements, deriving more complex requirements, and then providing an implementation strategy that lets you reason about whether or not you are "done."

                                                                But that 'test' word really puts people in the wrong frame of mind at the outset.

                                                                • searls 11 years ago
                                                                  Yeah, the common implications of the word "test" have always been problematic. The BDD movement did a good job bringing that to light, but I didn't want to re-litigate that all in my post just to make a point about semantics. Totally agree, though.
                                                                • searls 11 years ago
                                                                  Apologies for the downtime, folks - this post is proving a little too popular for us. Would love to see some folks' reactions to the post in the comments.
                                                                  • Arnor 11 years ago
                                                                    > ...TDD's primary benefit is to improve the design of our code, they were caught entirely off guard. And when I told them that any regression safety gained by TDD is at best secondary and at worst illusory...

                                                                    Thank you! Details of this post aside, this gave me an Aha! moment and I feel like I'm finally leaving the WTF mountain.

                                                                    • vegar 11 years ago
                                                                      Ian Cooper has a good talk that's relevant to this blog post. It's called 'TDD, where did it all go wrong?' and a recording from NDC 2013 can be found here: http://vimeo.com/68375232
                                                                      • tempodox 11 years ago
                                                                        Those guys must really hate their readers. That crappy web site is not zoomable! In the 21st century? In the era of “responsive web design”? Mega fail. Did they use TDD?
                                                                        • jasonkarns 11 years ago
                                                                          What's your definition of zoomable? I'm able to adjust text size just fine. More details on your specific issue? If you mean layout, it is responsive. The text column narrows and the images never exceed 100% width.
                                                                        • asfa124sfaf 11 years ago
                                                                          What about tools like Typemock? How does that fit in?
                                                                          • vegar 11 years ago
                                                                            Tools like Typemock help you make bad decisions that you will regret later on...

                                                                            Isolating things is very important for making code easier to test and for lowering the risk that tests break when you change other parts of the system. Sometimes isolating one part from another is hard work. Typemock makes that easier, but at the same time it ties you closer to the part that you are trying to isolate from.

                                                                            Take a database, for example. You want to test something that should eventually store something in a database. You can either make a thin layer abstracting away your database, so that you can test the functionality without depending on the database, or you can keep a tighter coupling to the database and use tools like Typemock to get rid of it in test mode. If you then want to change the way you store data, you have production code tightly coupled to the current storage strategy AND tests tightly coupled to the current storage strategy...

                                                                            Typemock can be of great help sometimes, but really you should strive to find better designs instead.
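
                                                                            A sketch of the thin-layer alternative (illustrative names, nothing Typemock-specific):

                                                                              # The thin layer: production code only talks to this abstraction.
                                                                              class OrderStore:
                                                                                  def save(self, order):
                                                                                      raise NotImplementedError

                                                                              class InMemoryOrderStore(OrderStore):
                                                                                  def __init__(self):
                                                                                      self.saved = []

                                                                                  def save(self, order):
                                                                                      self.saved.append(order)

                                                                              def checkout(order, store):
                                                                                  store.save(order)
                                                                                  return "ok"

                                                                              # The test swaps in the in-memory store; no mocking framework, no database.
                                                                              def test_checkout_persists_the_order():
                                                                                  store = InMemoryOrderStore()
                                                                                  assert checkout({"id": 1}, store) == "ok"
                                                                                  assert store.saved == [{"id": 1}]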

                                                                          • glittershark 11 years ago
                                                                            Hello there, Heroku error page
                                                                            • searls 11 years ago
                                                                              Apologies for the continued downtime, we're trying to get a CDN in front of the (static apache) heroku app. In the past not having any dynamic language in the background was enough to stay up, but not today apparently.