
Re: [scrumdevelopment] Running tested features

  • petriheiramo (Message 1 of 11, Sep 2, 2009)
      Hi,


      > We intend to use Unit Tests with Code Coverage tools (for example Emma, Cobertura) to guarantee at least that the team has developed enough tests to validate the feature, that is, unit tests covering more than 80% of the source code developed.

      That's a good goal, and it does indicate something. But the only way I know of to really see how "working" a feature is, is to count the errors reported against the released code. The aim is to get as close to zero as possible for each iteration release.

      Obviously, that only measures in hindsight, and it only really works if the feedback cycle is fast enough. So we come back to things like the coverage mentioned above, test pass rate (should be 100%), test run rate (also 100%), and so on.
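
      If you want the 80% floor to be enforced rather than just reported, a small build gate can read the coverage report and fail when the number drops. A minimal sketch, assuming a Cobertura XML report at Maven's default path (the path and threshold are assumptions; adjust to your setup):

          import java.io.File;
          import javax.xml.parsers.DocumentBuilderFactory;
          import org.w3c.dom.Document;

          public class CoverageGate {
              public static void main(String[] args) throws Exception {
                  // Default Maven report location for Cobertura; pass another path if needed.
                  File report = new File(args.length > 0 ? args[0]
                          : "target/site/cobertura/coverage.xml");
                  DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                  // Skip fetching Cobertura's external DTD so this also works offline.
                  factory.setFeature(
                          "http://apache.org/xml/features/nonvalidating/load-external-dtd",
                          false);
                  Document doc = factory.newDocumentBuilder().parse(report);
                  // The report's root <coverage> element carries an overall line-rate (0.0 to 1.0).
                  double lineRate = Double.parseDouble(
                          doc.getDocumentElement().getAttribute("line-rate"));
                  System.out.printf("Overall line coverage: %.1f%%%n", lineRate * 100);
                  if (lineRate < 0.80) {
                      System.err.println("Below the 80% floor; failing the build.");
                      System.exit(1);
                  }
              }
          }

      I believe Cobertura's Maven plugin also ships a check goal that can do this natively; the point is simply that the metric gates the build instead of sitting in a report.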


      Yours Sincerely,


      Petri

      ---
      Petri Heiramo
      Process Development Manager, Agile Coach (CST)
      Digia Plc., Finland
    • juan_banda (Message 2 of 11, Sep 2, 2009)
        This has a lot to do with the Definition of Done that the team believes in.

        I think that Code Coverage and Unit Tests are good means to reach the DoD, but not objectives in themselves.

        Regards,

        Juan


        --- In scrumdevelopment@yahoogroups.com, Ron Jeffries <ronjeffries@...> wrote:
        >
        > Hello, Fernanda. On Wednesday, September 2, 2009, at 10:19:09 AM,
        > you wrote:
        >
        > > We intend to use Unit Tests with Code Coverage tools (for example
        > > Emma, Cobertura) to guarantee at least that the team has developed
        > > enough tests to validate the feature, that is, unit tests
        > > covering more than 80% of the source code developed.
        >
        > Features are done and tested when the Product Owner accepts them.
        > The coverage metric is interesting, but only interesting, not essential.
        >
        > Ron Jeffries
        > www.XProgramming.com
        > www.xprogramming.com/blog
        > Fatalism is born of the fear of failure, for we all believe that we carry
        > success in our own hands, and we suspect that our hands are weak. -- Conrad
        >
      • thierry henrio (Message 3 of 11, Sep 2, 2009)
          Hello Mikael,
          On Wed, Sep 2, 2009 at 11:10 AM, hmikael@... <hmikael@...> wrote:
          Hi

          Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

          Where you will get to is highly related to your 'definition of done', as Ron said.
          If it includes an acceptance test that the Customer/PO has agreed on, then you shall not deliver a feature unless that test passes.
          And then you can use the number of features or points per iteration, as Petri said.

          If not, then you will have more important metrics to consider: the number of defects, incomplete functionality, and patches.

          If yes, you can even choose to automate them and have metrics such as 'we have 1234 automated acceptance tests that run in 123 s, which gives us 90% of our code coverage'... This rocks, doesn't it?
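
          To make that concrete, here is a minimal sketch of such an automated acceptance test in JUnit 4. The ShoppingCart class and the shipping rule below are invented stand-ins for whatever rule your Customer/PO actually agreed to:

              import static org.junit.Assert.assertEquals;

              import org.junit.Test;

              public class CheckoutAcceptanceTest {

                  // Hypothetical domain object standing in for the real system under test.
                  static class ShoppingCart {
                      private double total;
                      void add(String item, double price) { total += price; }
                      // Agreed rule: orders of 50.00 or more ship free, otherwise 4.95 flat.
                      double shippingCost() { return total >= 50.00 ? 0.00 : 4.95; }
                  }

                  @Test
                  public void ordersOfFiftyOrMoreShipFree() {
                      ShoppingCart cart = new ShoppingCart();
                      cart.add("book", 60.00);
                      assertEquals(0.00, cart.shippingCost(), 0.001);
                  }

                  @Test
                  public void smallOrdersPayTheFlatRate() {
                      ShoppingCart cart = new ShoppingCart();
                      cart.add("pen", 3.00);
                      assertEquals(4.95, cart.shippingCost(), 0.001);
                  }
              }

          Each green test of this kind is one concrete, PO-visible promise kept, which is what makes counting features rather than tests meaningful.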

          Well... it depends on how much you have to spend to get there (google 'rainsberger scam'; no offence or conflict meant).
          Indeed, a thing that has been working for me is 'have 3C', then 'try demo, TDD, try demo, TDD...' until the demo is good.

          Cheers, Thierry

        • Michael Yip (Message 4 of 11, Sep 2, 2009)
            Mikael,

            The short answer is that it depends on the organization you are working with. Scrum has a set of metrics built in, and these can be rolled up. I often treat the situation you are dealing with as a Scrum in itself: I treat the people receiving the information as customers, and in turn produce backlog items, creating stories I can use as a Product Owner to derive metrics and promote transparency.

            Michael


            --- On Wed, 9/2/09, hmikael@... <hmikael@...> wrote:

            From: hmikael@... <hmikael@...>
            Subject: [scrumdevelopment] Running tested features
            To: scrumdevelopment@yahoogroups.com
            Date: Wednesday, September 2, 2009, 5:10 AM

            Hi

            Just been to the Agile 2009 conference in Chicago. It was a great experience to meet and listen to all these Agile experts, and I am bringing a lot of input back to the office.

            One of the things I will work on this fall is creating useful metrics for our development team. Right now we aren't measuring anything, but I would like to introduce a couple of metrics to track progress, scope creep, and business value delivered.

            Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

            /Mikael
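
            On the gaming worry in that last question: one reading of Running Tested Features (a sketch of the idea, not necessarily Dan Rawsthorne's exact definition) counts features, not tests. A feature counts only while every acceptance test tied to it passes, so padding a feature with extra trivial tests cannot inflate the figure:

                import java.util.Arrays;
                import java.util.LinkedHashMap;
                import java.util.List;
                import java.util.Map;

                public class RunningTestedFeatures {
                    public static void main(String[] args) {
                        // Hypothetical acceptance-test results, grouped per feature.
                        Map<String, List<Boolean>> resultsByFeature = new LinkedHashMap<>();
                        resultsByFeature.put("login",    Arrays.asList(true, true, true));
                        resultsByFeature.put("checkout", Arrays.asList(true, false)); // one red test
                        resultsByFeature.put("search",   Arrays.asList(true));

                        // A feature is "running and tested" only if it has at least one
                        // acceptance test and none of them fail.
                        long rtf = resultsByFeature.values().stream()
                                .filter(r -> !r.isEmpty() && !r.contains(false))
                                .count();

                        System.out.println("Running Tested Features: " + rtf
                                + " of " + resultsByFeature.size()); // prints 2 of 3
                    }
                }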

          • hmikael@rocketmail.com (Message 5 of 11, Sep 2, 2009)
              Hi

              Thanks for your answers. I was off when I mentioned counting the tests; however, I still want to create a set of automated tests to verify that a feature is working, together with the acceptance tests from the PO. But I will not count the tests, my bad.

              Regards
              Mikael
            • George Dinwiddie (Message 6 of 11, Sep 3, 2009)
                hmikael@... wrote:
                > Thanks for your answers. I was off when I mentioned counting the
                > tests; however, I still want to create a set of automated tests to
                > verify that a feature is working, together with the acceptance tests
                > from the PO. But I will not count the tests, my bad.

                You might look at Cucumber, which organizes tests as a group of
                scenarios for each feature.
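
                For a feel of the format, a hypothetical feature file might look like the sketch below; the feature and the step wording are invented here, and Cucumber maps each step to code you write:

                    Feature: Checkout shipping
                      Orders of fifty or more ship free; smaller orders pay a flat rate.

                      Scenario: Large orders ship free
                        Given an empty cart
                        When I add a book priced at 60.00
                        Then my shipping cost is 0.00

                      Scenario: Small orders pay the flat rate
                        Given an empty cart
                        When I add a pen priced at 3.00
                        Then my shipping cost is 4.95

                Grouping scenarios under the feature they verify gives you exactly the per-feature view you were after, without counting raw tests.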

                - George

                --
                ----------------------------------------------------------------------
                * George Dinwiddie * http://blog.gdinwiddie.com
                Software Development http://www.idiacomputing.com
                Consultant and Coach http://www.agilemaryland.org
                ----------------------------------------------------------------------