
Re: [SCRUMDEVELOPMENT] TDD effectiveness

  • Markus Gaertner
    Jun 13, 2014
      Shameless plug:
      In the second example in my book, I cross the fine line between ATDD and TDD. Using acceptance tests, I start to implement some functionality. Then I figure out the production classes that I really need, and test-drive the behavior into a production class.

      It's hard to describe this process in fewer than 20 pages, though, while still conveying its meaning.


      Scaling Agile as if you meant it: http://www.scaledprinciples.org
      Dipl.-Inform. Markus Gaertner
      Author of ATDD by Example - A Practical Guide to Acceptance Test-Driven Development

      On Fri, Jun 13, 2014 at 1:52 AM, Adam Sroka adam.sroka@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:

      So, what is the difference between TDD and ATDD? Both are examples of test first programming, and that is about all they have in common. TDD includes an explicit refactoring step, and most of the actual work of designing the software is done in that step. 
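      The explicit refactoring step can be sketched in miniature. A minimal red-green-refactor example, assuming a hypothetical `word_count` function (invented for illustration, not from the thread):

```python
# Red: a failing test, written first, that describes the behavior we
# want from a hypothetical word_count function.
def test_counts_repeated_words():
    assert word_count("to be or not to be") == {"to": 2, "be": 2, "or": 1, "not": 1}

# Green: the simplest code that makes the test pass.
def word_count(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Refactor: with the test green, restructure freely (e.g. swap the
# loop for collections.Counter), rerunning the test after each change.
test_counts_repeated_words()
```

      The design work happens in that last step, with the fast unit test as the safety net.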

      It is generally foolish to refactor only under the cover of acceptance tests. There are details in the individual modules of your system that are nearly impossible to test all the way through from the top-level API. At the very least, you would have to write an incredibly complex and thorough set of acceptance tests that would take an uncomfortably long time to run after each small refactoring.

      So, there is really no TDD in ATDD. The two are synergistic but not comparable. The main issue I have with the initialism ATDD is how easy it is to get confused about this part since they sound like exactly the same thing. 

      On Thu, Jun 12, 2014 at 4:44 PM, Adam Sroka <adam.sroka@...> wrote:
      I have some issues with ATDD and the terminology surrounding it. We seem to get ourselves in trouble with terminology all the time in this business, but this ATDD stuff is a real mess: 

      ATDD == Acceptance Test Driven Development. What does that mean? We write an acceptance test before we develop the code that makes it pass. What is an acceptance test? Well, it might just be a test for something that the customer wants... or it might be a whole bunch of other things: http://en.wikipedia.org/wiki/Acceptance_testing

      Is there a term that is less confusing than acceptance test? Well, some people like to call it behavior. That seems less confusing. No, not really, because BDD includes both ATDD and TDD. They call the things we are calling acceptance tests features and the things we are calling unit tests specifications. 

      Is there something else we could call it? Well, in XP it is called Customer Testing. That is not terribly confusing. Gojko Adzic coined the term Specification by Example, which even has a neat three-letter initialism: SBE. However, for some reason neither of those has caught on in the Scrum community.

      On Tue, Jun 10, 2014 at 10:55 AM, Cass Dalton cassdalton73@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:

      Maybe I'm confused, but I would have said TDD is more like knowing that the ductwork on the first floor can support the airflow needed to provide the desired amount of heating and cooling.  If the tests that you write for a software unit don't tie back to end-user value, then I don't think your problem is in the specific technique you are using.  If your tests only verify that the carpenter can use a hammer, and not that the nails in the walls can support the weight of the drywall, then I agree, TDD will not help you.  But I don't think any process will, because you have a systemic value-traceability problem.

      I have never understood the need to add Acceptance or Behavior in front of something like TDD.  Making sure that the tests you write in TDD are the right tests is just part of doing software development right.  If you feel the need to explicitly state that the tests you write should tie back to customer or end-user value, then you are missing an important part of the agile philosophy.  In other words, I always felt that ATDD is another way of saying "doing TDD right".

      On Sun, Jun 8, 2014 at 10:07 AM, Jessica P pjessica603@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:

      I believe more in ATDD than TDD. 
      For an analogy, saying that TDD is important when you build a house is like being interested in how the construction workers used their hammers to hit the nails, rather than in the overall result of the house from a functional perspective: whether all the rooms and bedrooms are where they should be, whether water is flowing, whether the toilet flushes, etc.
      Just my opinion.

      On Friday, June 6, 2014 12:29 PM, "Cass Dalton cassdalton73@... [SCRUMDEVELOPMENT]" <SCRUMDEVELOPMENT@yahoogroups.com> wrote:

      I'm not all that familiar with TDD, but I would imagine that one of the benefits TDD gives when it's done right is a continuous view of the next specific end goal: i.e., the test that's failing.  Without a failing test to get to pass, it's easy to write code that you think you need but don't, or to write tests that verify what the code does instead of tests that verify that a need is met.  What I mean by the latter is that it's easy to get mentally distracted by the assumptions and notions you form while writing the code, so that when you go to write the tests you are biased by what you know the code does.  You then write tests that verify your assumptions instead of tests that verify that the need is met.
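      That bias argument can be illustrated with a tiny test-first sketch. The discount rule and the `apply_discount` function here are hypothetical, invented for illustration:

```python
# Test first: the failing tests state the need (a hypothetical rule,
# "orders over 100 get 10% off") before any code exists, so they
# cannot be biased by knowledge of the implementation.
def test_discount_applies_over_threshold():
    assert apply_discount(200) == 180

def test_no_discount_at_or_below_threshold():
    assert apply_discount(100) == 100

# Only then is the code written, with the failing tests as the next
# specific end goal.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

test_discount_applies_over_threshold()
test_no_discount_at_or_below_threshold()
```

      Written the other way around, the tests would tend to restate whatever the code happens to do.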

      On Thu, Jun 5, 2014 at 5:09 PM, Adam Sroka adam.sroka@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:
      I have coached a lot of teams and have seen a few patterns in the way people code without TDD:

      1) They write some code they think is right and manually poke at it through the user interface. 
      2) They write some code they think is right and sprinkle it with print statements (System.out and the like) so that they can read the intermediate results and check them. 
      3) They write some code they think is right and run it through the debugger to check intermediate results. 
      4) They write code against an interactive interpreter or shell and capture (or reproduce) the session checking results along the way. 

      All of these have a few things in common: 

      * You don't write a whole system at once. You write just enough to be able to see and verify some result. 
      * You don't always get it right the first time. You expect some intermediate results to be incorrect. You use this as an indication that the program is either not yet doing something you need it to or doing something you don't expect. You then attempt to correct this and check again. 
      * You are constantly moving and shuffling things around to make it easier to understand and to isolate bits you want to work on. Periodically you need to check that you haven't introduced regressions. The longer you wait to do that the more likely it will be a long day when you do... 

      My conclusion is that the only significant feature of TDD missing from these other methods is automation. Each of them requires you to remember what to verify and to do it over and over again. The Pragmatic Programmers told us a long time ago that anything you are going to do over and over should be automated (Ubiquitous Automation). 
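      As an illustration, here is pattern 2 from the list above turned into an automated check. `parse_price` is a hypothetical helper, not something from the thread:

```python
# Manual version of pattern 2: run the program, read the printed
# value, and eyeball it after every change:
#   print(parse_price("$1,234.50"))   # "is that 1234.5?" -- every time
#
# The same verification automated: written once, rerun for free.
def parse_price(text):
    # Hypothetical helper: strip currency formatting and parse.
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("$99") == 99.0

test_parse_price()
```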

      So, the next time you think TDD is too hard or not worth doing ask yourself this: I am a programmer, why am I afraid to write a program to do something that I am perfectly willing to do manually over and over? 

      On Monday, June 2, 2014, Tim Wright tim@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:

      Hi all,

      Given this discussion, I thought it sensible to point out that TDD isn't exactly a new idea...


      On 3 June 2014 13:41, Adam Sroka adam.sroka@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:
      Code coverage is a double-edged sword. You can show it to middle managers to prove that all that time you spent on testing was actually doing something useful. However, once you test all the easy stuff, the number is going to move more slowly, and they're going to want to know why. 

      At that point a lot of folks might start doing silly things to improve the number. That's a bad idea. The trick is to realize that this is all a game and it only buys you a small amount of time to improve things. After that you're going to have to try something else. 

      I use coverage to motivate teams to start automating tests and managers to leave them alone and let them do it. The truth is, however, that once you have at least one test running the number itself is kinda pointless. At the first sign of danger I will take it down and tell them that, on further analysis, we were finding the number didn't accurately reflect our testing effort... 

      On Wed, May 28, 2014 at 1:16 AM, 'Hiep Le' tronghieple258@... [SCRUMDEVELOPMENT] <SCRUMDEVELOPMENT@yahoogroups.com> wrote:
      In my view, code coverage is a trap; developers will write meaningless tests in order to reach a certain ratio, which is a waste of time.
      What we should do is write good tests (covering positive and negative cases) for the appropriate context, and TDD is a good way to get there.
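      One way to picture the difference between coverage-chasing tests and good ones: a hypothetical validation rule (usernames are 3-20 alphanumeric characters, invented for illustration), with tests on both sides of it:

```python
def is_valid_username(name):
    # Hypothetical rule: 3-20 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 20

# Positive case: the happy path.
def test_accepts_valid_name():
    assert is_valid_username("alice42")

# Negative cases: too short, too long, disallowed characters.
# A coverage-chasing test suite often stops at the happy path.
def test_rejects_invalid_names():
    assert not is_valid_username("ab")
    assert not is_valid_username("a" * 21)
    assert not is_valid_username("bad name!")

test_accepts_valid_name()
test_rejects_invalid_names()
```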
      Stay tuned,
      Hiep Le.
      From: SCRUMDEVELOPMENT@yahoogroups.com [mailto:SCRUMDEVELOPMENT@yahoogroups.com]
      Sent: Wednesday, May 28, 2014 2:28 PM
      To: scrumdevelopment@yahoogroups.com
      Subject: Re: [SCRUMDEVELOPMENT] TDD effectiveness
      In short, my empirical evidence shows that TDD does not necessarily keep you from doing a poor job.

      Scaling Agile as if you meant it: http://www.scaledprinciples.org
      Dipl.-Inform. Markus Gaertner
      Author of ATDD by Example - A Practical Guide to Acceptance Test-Driven Development
      021 251 5593
