
all assertion diagnostics should use positive voice

  • Phlip
    Message 1 of 9, Feb 23, 2010
      Agile Testers:

      What's wrong with this assertion diagnostic?

      42 does not equal 41

      What's wrong is when it passes, and if we generate the diagnostic
      anyway, it says this:

      42 does not equal 42

      Why would we generate a passing diagnostic? Because, at my current
      gig, we want each test run to generate a complete report. All the test
      suites and test cases have their names extracted and rendered into an
      HTML table, as prose. (Underbars, _, get replaced with spaces, for
      example. And my Morelia storytests get rendered, too.)
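
      A minimal sketch of that name-to-prose step (the prettify helper is
      hypothetical, not the actual rig):

          def prettify(test_name):
              # "test_underbars_become_spaces" -> "underbars become spaces"
              return test_name.removeprefix("test_").replace("_", " ")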

      And many of the assertion diagnostics are rendered, and added as
      bullet points after each test case name. (More assertions are
      forthcoming, as I monkey patch their source.)

      This effort reminds us to make _all_ diagnostics ambiguous (reading the
      same whether they pass or fail), leading, and positive. So they should
      all say "should":

      42 should equal 42
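
      A hedged sketch of what such an assertion might look like (assert_equal
      and the diagnostics list are illustrative stand-ins, not the real
      monkey patch):

          diagnostics = []

          def assert_equal(actual, expected):
              # Record the same positive-voice text whether the assertion
              # passes or fails, so the report can render it either way.
              message = f"{actual} should equal {expected}"
              diagnostics.append(message)
              assert actual == expected, message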

      --
      Phlip
      http://c2.com/cgi/wiki?ZeekLand
    • Steven Gordon
      Message 2 of 9, Feb 23, 2010
        And if the report says:

        42 should equal 42.0

        how do we know whether the test passed or not?

      • Phlip
        Message 3 of 9, Feb 23, 2010
          On Tue, Feb 23, 2010 at 5:35 PM, Steven Gordon <sgordonphd@...> wrote:


          And if the report says:

          42 should equal 42.0

          how do we know whether the test passed or not?

          In a typical "Agile" test rig, what other clues can we think of?

          --
           Phlip
           http://c2.com/cgi/wiki?ZeekLand
        • Steven Gordon
          Message 4 of 9, Feb 23, 2010
            I would suggest the report say whether it passed or failed. That also makes it a little more tolerant of sloppy phrasing of the condition.
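
            For example, a sketch with a hypothetical report_row helper:

                def report_row(diagnostic, passed):
                    # An explicit verdict keeps "42 should equal 42.0"
                    # readable even when the phrasing is sloppy.
                    status = "PASS" if passed else "FAIL"
                    return f"{status}: {diagnostic}"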

            On Tue, Feb 23, 2010 at 7:30 PM, Phlip <phlip2005@...> wrote:

            In a typical "Agile" test rig, what other clues can we think of?

          • Phlip
            Message 5 of 9, Feb 23, 2010
              On Tue, Feb 23, 2010 at 6:42 PM, Steven Gordon <sgordonphd@...> wrote:


              I would suggest the report say whether it passed or failed.

              I factually have not put that feature in yet. All the green dots are simulated.

              To TDD, you don't run the script that generates the test report. You just run the default script, and it spews failure statistics (including stack traces and such) all over the console, or your editor's transcript.

              (It would be nice if the spew reflected the source variables and values, like a debugger breakpoint, or like assert{ 2.0 }, but give Python another decade there...)
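
              A taste of that kind of spew, as a sketch (this assumes a
              modern Python with f-string debugging; the names are made up):

                  x, y = 42, 41
                  assert x == y, f"{x=} should equal {y=}"
                  # AssertionError: x=42 should equal y=41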

              The point is you _never_ read the report to diagnose a fault. Its purpose is listing all the business rules we added to the code.
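
              A rough sketch of that split, assuming a plain unittest rig
              (the discover() call stands in for whatever the default script
              actually does):

                  import unittest

                  # The TDD loop: run the default suite and let failures
                  # spew to the console with stack traces; no report here.
                  suite = unittest.defaultTestLoader.discover(".")
                  unittest.TextTestRunner(verbosity=2).run(suite)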

              --
               Phlip
               http://c2.com/cgi/wiki?ZeekLand
            • Steven Gordon
              Message 6 of 9, Feb 23, 2010
                On Tue, Feb 23, 2010 at 8:42 PM, Phlip <phlip2005@...> wrote:

                The point is you _never_ read the report to diagnose a fault. Its purpose is listing all the business rules we added to the code.


                And the person reading this report only cares that the business rule was added, not whether it is actually working?

              • Phlip
                  Message 7 of 9, Feb 23, 2010
                  On Tue, Feb 23, 2010 at 8:38 PM, Steven Gordon <sgordonphd@...> wrote:

                  And the person reading this report only cares that the business rule was added, not whether it is actually working?

                  If a build broke, CruiseControl would send up alarms, and CCMenu would reflect them onto our desktops.

                  I don't see how the break could last long enough for us to then deliver the test report of a failing run. And the next passing run will trivially overwrite it on the server.

                  I apologize for not heading this tangent off in my first post with one line. The mere presence of the diagnostic (with its spew and stack trace) is enough to solve a fault. Maybe I was subconsciously testing for Agile readiness here...

                  --
                   Phlip
                   http://c2.com/cgi/wiki?ZeekLand
                • Steven Gordon
                  Message 8 of 9, Feb 24, 2010
                    On Tue, Feb 23, 2010 at 10:05 PM, Phlip <phlip2005@...> wrote:

                    Maybe I was subconsciously testing for Agile readiness here...


                    Agile readiness is equivalent to the assumption of perfect software?

                    If so, why even run the unit tests to extract the assertions? The names of the tests should be sufficient.

                  • Phlip
                    Message 9 of 9, Feb 24, 2010
                      On Wed, Feb 24, 2010 at 5:26 AM, Steven Gordon <sgordonphd@...> wrote:

                      Agile readiness is equivalent to the assumption of perfect software?

                      Go find someone to argue with; someone who doesn't know or practice TDD maybe.
                      --
                       Phlip
                       http://c2.com/cgi/wiki?ZeekLand