RE: [scrumdevelopment] Running tested features

  • Fernanda S. de Oliveira
    Message 1 of 11, Sep 2, 2009

      Hi Mikael,

      I'm also facing this challenge.

      We intend to use unit tests with code coverage tools (for example Emma or Cobertura) to guarantee at least that the team developed enough tests to validate the feature; that is, unit tests covering more than 80% of the source code developed.
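
      For illustration, here is a minimal sketch (JUnit 4, with a hypothetical DiscountCalculator class) of the kind of unit test such a coverage gate counts, next to the kind of assertion-free test Mikael worries about:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Hypothetical class under test.
        class DiscountCalculator {
            double priceAfterDiscount(double price) {
                return price > 100.00 ? price * 0.9 : price;
            }
        }

        public class DiscountCalculatorTest {

            // A meaningful test: it pins down real behaviour.
            @Test
            public void ordersOverOneHundredGetTenPercentOff() {
                DiscountCalculator calc = new DiscountCalculator();
                assertEquals(180.00, calc.priceAfterDiscount(200.00), 0.001);
            }

            // An inadequate test: it executes the same lines, so line
            // coverage rises past the 80% bar, but it asserts nothing.
            @Test
            public void coverageBoosterAssertsNothing() {
                new DiscountCalculator().priceAfterDiscount(200.00);
            }
        }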

       

      Best Regards,

      Fernanda

       

      From: scrumdevelopment@yahoogroups.com [mailto:scrumdevelopment@yahoogroups.com] On Behalf Of hmikael@...
      Sent: Wednesday, September 2, 2009 06:11
      To: scrumdevelopment@yahoogroups.com
      Subject: [scrumdevelopment] Running tested features

      Hi

      Just been to the Agile 2009 conference in Chicago. It was a great experience to meet and listen to all these Agile experts, and I'm bringing a lot of input back to the office with me.

      One of the things I will work on this fall is creating useful metrics for our development team. Right now we aren't measuring anything, but I would like to introduce a couple of metrics to track progress, scope creep, and business value delivered.

      Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you guys using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

      /Mikael

    • Ron Jeffries
      Message 2 of 11, Sep 2, 2009
        Hello, Hmikael. On Wednesday, September 2, 2009, at 5:10:41 AM,
        you wrote:

        > Listening to Dan Rawsthorne's session about "Agile Metrics",
        > I heard him mention Running Tested Features. I am trying to
        > get my head around this. Are you guys using this today? How
        > are these tests set up, and how do you prevent developers
        > from creating small, inadequate tests just to boost this
        > figure?

        The noun in the phrase "running tested features" is /features/, not
        /tests/. Features are what your PO wants.

        A Metric Leading to Agility
        Ron Jeffries 06/14/2004

        Nearly every metric can be perverted, since up- and down-ticks in
        the metric can come from good or bad causes. Teams driven by
        metrics often game the metrics rather than deliver useful
        software. Ask the team to deliver and measure Running Tested
        Features, week in and week out, over the course of the entire
        project. Keeping this single metric looking good demands that a
        team become both agile and productive.

        http://xprogramming.com/xpmag/jatrtsmetric/
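
        A minimal sketch of how a team might tally RTF each week, assuming a hypothetical Feature record that knows whether its acceptance tests pass and whether it is deployed:

          import java.util.List;

          // Hypothetical representation of one feature for RTF tracking.
          class Feature {
              final String name;
              final boolean acceptanceTestsPass; // "tested"
              final boolean deployed;            // "running"
              Feature(String name, boolean acceptanceTestsPass, boolean deployed) {
                  this.name = name;
                  this.acceptanceTestsPass = acceptanceTestsPass;
                  this.deployed = deployed;
              }
          }

          public class RtfReport {
              // RTF counts only features that are BOTH running and tested;
              // a feature that is merely "done" in the tracker scores zero.
              static long runningTestedFeatures(List<Feature> features) {
                  return features.stream()
                          .filter(f -> f.deployed && f.acceptanceTestsPass)
                          .count();
              }

              public static void main(String[] args) {
                  System.out.println("RTF this week: " + runningTestedFeatures(List.of(
                          new Feature("login", true, true),
                          new Feature("search", true, false),    // not deployed yet
                          new Feature("export", false, true)))); // tests failing
                  // Prints: RTF this week: 1
              }
          }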

        Use site search for "running tested features" for related articles.
        Especially note:

        Making the Date
        Ron Jeffries 11/10/2005

        It seems like every development project begins with the date, and
        we’re held responsible for “making the date”. Making the date is a
        management responsibility, not a development responsibility.
        Here’s why.

        http://xprogramming.com/xpmag/jatmakingthedate/

        Done features. Tested features. Done=Done. Inspect and adapt.

        Ron Jeffries
        www.XProgramming.com
        www.xprogramming.com/blog
        It is better to attempt something great and fail that attempt,
        than to attempt to do nothing and succeed.
        --Cookie, Garden Court Chinese Restaurant, Hamburg, MI
      • Ron Jeffries
        Message 3 of 11, Sep 2, 2009
          Hello, Fernanda. On Wednesday, September 2, 2009, at 10:19:09 AM,
          you wrote:

          > We intend to use unit tests with code coverage tools (for
          > example Emma or Cobertura) to guarantee at least that the
          > team developed enough tests to validate the feature; that
          > is, unit tests covering more than 80% of the source code
          > developed.

          Features are done and tested when the Product Owner accepts them.
          The coverage metric is interesting, but only interesting, not essential.

          Ron Jeffries
          www.XProgramming.com
          www.xprogramming.com/blog
          Fatalism is born of the fear of failure, for we all believe that we carry
          success in our own hands, and we suspect that our hands are weak. -- Conrad
        • petriheiramo
          Message 4 of 11, Sep 2, 2009
            Hi Mikael,


            > Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you guys using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

            RTF means _features_ which have been tested and are running in the customer environment. You sound like you would be counting tests (because of "how do you prevent developers from creating small, inadequate tests just to boost this figure"), but the number of tests has nothing to do with this, except that to get a running feature you probably need quite a bit of testing.

            So instead, you look at what functionality you deliver and make sure it also works after delivery. If it doesn't, "zero points" for you. I'm not sure how Dan suggested measuring those features, but one option is story points. Some suggest simply making sure the features are split into small enough stories and then counting the stories.
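
            As one possible sketch of that "zero points" rule in story points (the Story record here is hypothetical):

              import java.util.List;

              // Hypothetical story record for the "zero points" rule.
              class Story {
                  final int points;
                  final boolean delivered;
                  final boolean workingAfterDelivery;
                  Story(int points, boolean delivered, boolean workingAfterDelivery) {
                      this.points = points;
                      this.delivered = delivered;
                      this.workingAfterDelivery = workingAfterDelivery;
                  }
              }

              public class RtfInPoints {
                  // A story earns its points only if it was delivered AND still
                  // works after delivery; otherwise it contributes zero.
                  static int rtfPoints(List<Story> stories) {
                      return stories.stream()
                              .filter(s -> s.delivered && s.workingAfterDelivery)
                              .mapToInt(s -> s.points)
                              .sum();
                  }

                  public static void main(String[] args) {
                      System.out.println(rtfPoints(List.of(
                              new Story(5, true, true),      // counts
                              new Story(3, true, false),     // broke after delivery: zero
                              new Story(8, false, false)))); // not delivered: zero
                      // Prints: 5
                  }
              }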


            Yours Sincerely,


            Petri

            ---
            Petri Heiramo
            Process Development Manager, Agile Coach (CST)
            Digia Plc., Finland
          • petriheiramo
            Message 5 of 11, Sep 2, 2009
              Hi,


              > We intend to use unit tests with code coverage tools (for example Emma or Cobertura) to guarantee at least that the team developed enough tests to validate the feature; that is, unit tests covering more than 80% of the source code developed.

              That's a good goal, and it does indicate something. But the only way I know of to really see how "working" a feature is, is to count the errors reported against the released code. The aim is to get as close to zero as possible for each iteration release.

              Obviously, that only measures in hindsight, and it only really works if you have a fast enough feedback cycle. So then we come back to things like the coverage mentioned above, the test pass rate (which should be 100%), the test run rate (also 100%), and such.
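
              For concreteness, a minimal sketch of those two rates, assuming you know how many tests exist, how many were run, and how many passed:

                public class TestRates {
                    // Test run rate: fraction of existing tests actually executed.
                    static double runRate(int testsExisting, int testsRun) {
                        return testsExisting == 0 ? 1.0 : (double) testsRun / testsExisting;
                    }

                    // Test pass rate: fraction of executed tests that passed.
                    static double passRate(int testsRun, int testsPassed) {
                        return testsRun == 0 ? 1.0 : (double) testsPassed / testsRun;
                    }

                    public static void main(String[] args) {
                        // 200 tests exist, 190 were run, 188 of those passed.
                        System.out.printf("run rate  %.1f%%%n", 100 * runRate(200, 190));  // 95.0%
                        System.out.printf("pass rate %.1f%%%n", 100 * passRate(190, 188)); // 98.9%
                        // The target for both is 100%.
                    }
                }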


              Yours Sincerely,


              Petri

              ---
              Petri Heiramo
              Process Development Manager, Agile Coach (CST)
              Digia Plc., Finland
            • juan_banda
              Message 6 of 11, Sep 2, 2009
                This has a lot to do with the Definition of Done that the team believes in.

                I think that code coverage and unit tests are good means of reaching the DoD, but not objectives in themselves.

                Regards,

                Juan


                --- In scrumdevelopment@yahoogroups.com, Ron Jeffries <ronjeffries@...> wrote:
                >
                > Hello, Fernanda. On Wednesday, September 2, 2009, at 10:19:09 AM,
                > you wrote:
                >
                > > We intend to use unit tests with code coverage tools (for
                > > example Emma or Cobertura) to guarantee at least that the
                > > team developed enough tests to validate the feature; that
                > > is, unit tests covering more than 80% of the source code
                > > developed.
                >
                > Features are done and tested when the Product Owner accepts them.
                > The coverage metric is interesting, but only interesting, not essential.
                >
                > Ron Jeffries
                > www.XProgramming.com
                > www.xprogramming.com/blog
                > Fatalism is born of the fear of failure, for we all believe that we carry
                > success in our own hands, and we suspect that our hands are weak. -- Conrad
                >
              • thierry henrio
                Message 7 of 11, Sep 2, 2009
                  Hello Mikael,
                  On Wed, Sep 2, 2009 at 11:10 AM, hmikael@... <hmikael@...> wrote:
                  Hi

                  Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you guys using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

                  Where you will get to depends heavily on your 'definition of done', as Ron said.
                  If it includes an acceptance test that the Customer/PO has agreed on, then you must not deliver the feature while that test is failing.
                  And then you can use the number of features, or points per iteration, as Petri said.

                  If not, you will have more important metrics to consider: the number of defects, incomplete functionality, and patches.

                  If yes, you can even choose to automate the tests, and have metrics such as 'we have 1234 automated acceptance tests that run in 123 s, which brings us 90% code coverage'... This rocks, doesn't it?

                  Well... it depends on how much you have to spend to get there (google 'rainsberger scam'; no offence or conflict meant).
                  Indeed, a thing that has been working for me is 'have the 3 Cs' (card, conversation, confirmation), then 'try demo, TDD, try demo, TDD...' until the demo is good.
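
                  As an illustration, a minimal sketch of such an automated acceptance test (JUnit 4; the StoreFacade here is a hypothetical stand-in for the real application):

                    import static org.junit.Assert.assertTrue;
                    import org.junit.Test;

                    // Hypothetical facade over the real application entry points.
                    class StoreFacade {
                        private boolean hasItem;
                        void addToCart(String sku) { hasItem = true; }
                        boolean checkout()         { return hasItem; }
                    }

                    public class CheckoutAcceptanceTest {
                        // Criterion agreed with the Customer/PO: a customer who adds
                        // an item to the cart can complete checkout. The feature is
                        // not delivered until this passes.
                        @Test
                        public void customerCanCheckOutAnItemInTheCart() {
                            StoreFacade store = new StoreFacade();
                            store.addToCart("SKU-42");
                            assertTrue(store.checkout());
                        }
                    }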

                  Cheers, Thierry

                • Michael Yip
                  Message 8 of 11, Sep 2, 2009
                    Mikael,

                    The short answer is that it depends on the organization you are working with. Scrum has a set of metrics built in which can be rolled up. I often treat the situation you are dealing with as a Scrum in itself: I treat the people receiving the information as customers, and in turn produce backlog items and stories that I, acting as Product Owner, can use to derive metrics and promote transparency.

                    Michael


                    --- On Wed, 9/2/09, hmikael@... <hmikael@...> wrote:

                    From: hmikael@... <hmikael@...>
                    Subject: [scrumdevelopment] Running tested features
                    To: scrumdevelopment@yahoogroups.com
                    Date: Wednesday, September 2, 2009, 5:10 AM

                     

                    Hi

                    Just been to the Agile 2009 conference in Chicago. It was a great experience to meet and listen to all these Agile experts, and I'm bringing a lot of input back to the office with me.

                    One of the things I will work on this fall is creating useful metrics for our development team. Right now we aren't measuring anything, but I would like to introduce a couple of metrics to track progress, scope creep, and business value delivered.

                    Listening to Dan Rawsthorne's session about "Agile Metrics", I heard him mention Running Tested Features. I am trying to get my head around this. Are you guys using this today? How are these tests set up, and how do you prevent developers from creating small, inadequate tests just to boost this figure?

                    /Mikael

                  • hmikael@rocketmail.com
                    Message 9 of 11, Sep 2, 2009
                      Hi

                      Thanks for your answers. I was off when I mentioned counting the tests; however, I still want to create a set of automated tests to verify that a feature is working, together with the acceptance tests from the PO. But I will not count the tests, my bad.

                      Regards
                      Mikael
                    • George Dinwiddie
                      Message 10 of 11, Sep 3, 2009
                        hmikael@... wrote:
                        > Thanks for your answers. I was off when I mentioned counting
                        > the tests; however, I still want to create a set of automated
                        > tests to verify that a feature is working, together with the
                        > acceptance tests from the PO. But I will not count the tests,
                        > my bad.

                        You might look at Cucumber, which organizes tests as a group of
                        scenarios for each feature.
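
                        For illustration, a sketch of that feature/scenario structure using Cucumber's Java bindings (the feature text and steps here are hypothetical):

                          // The Gherkin feature file groups scenarios under one feature:
                          //
                          //   Feature: Checkout
                          //     Scenario: In-stock item can be purchased
                          //       Given an item "SKU-42" is in stock
                          //       When the customer checks out
                          //       Then the order is confirmed

                          import static org.junit.Assert.assertTrue;
                          import io.cucumber.java.en.Given;
                          import io.cucumber.java.en.Then;
                          import io.cucumber.java.en.When;

                          // Step definitions binding the scenario text to code.
                          public class CheckoutSteps {
                              private boolean inStock;
                              private boolean confirmed;

                              @Given("an item {string} is in stock")
                              public void anItemIsInStock(String sku) { inStock = true; }

                              @When("the customer checks out")
                              public void theCustomerChecksOut() { confirmed = inStock; }

                              @Then("the order is confirmed")
                              public void theOrderIsConfirmed() { assertTrue(confirmed); }
                          }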

                        - George

                        --
                        ----------------------------------------------------------------------
                        * George Dinwiddie * http://blog.gdinwiddie.com
                        Software Development http://www.idiacomputing.com
                        Consultant and Coach http://www.agilemaryland.org
                        ----------------------------------------------------------------------