
Research Difficulties in Software [was: Agile Method Research Study]

  • PAUL
    Message 1 of 41 , Jul 19, 2013
      Hi Diana, Laurent,

      Thanks to you both for conducting this conversation in public; for being open and honest with each other and remaining respectful.

      Thanks, Diana, for explaining the constraints under which you work. I must admit that I regularly skip such surveys as "a waste of my time" whilst also bitching about the lack of empirical software research available. The fact that you couldn't find a single software shop willing to help is quite telling.

      I can testify from my own experience that trying to quantify people's experiences with varying software practices is a very difficult thing indeed. In the '90s I worked for a TQM company that had bought into the whole Deming/Crosby/Juran quality control mindset. We attempted to use PDCA to collect qualitative data that we could act upon to improve our process, and whilst we were sort of able to do so at an individual level (using a methodology called PSP), we completely failed at a team level, despite extensive training in data collection and analysis techniques.

      So the question I'd like to discuss is "Will the assessment of software practices in varying contexts always be a subjective affair? Or will we eventually work out a way to produce reliable data to back up various claims?"


      I have read your book and I found it fascinating; especially the way an idea or a proposition over time becomes commonly accepted "fact", merely through the sheer weight of often erroneous citations. I agree with you that it is really a bad show :)

      Having said this, you didn't go into the real challenges that producing reliable research actually presents. Given the lack of cooperation by the industry that Diana mentions, how will we ever collect reliable data? Never mind the difficulties that arise when trying to perform a qualitative assessment.

      Having said all this, it appears that reliable research is possible. Jerry Weinberg's classic "The Psychology of Computer Programming" comes to mind as something that, at least on the surface, presented ideas based on reliable research.

      Even there, though, his subjects were students, which is perhaps an indication that the lack of access to professional practitioners has been a long-standing issue for the software research community.


      --- In scrumdevelopment@yahoogroups.com, Laurent Bossavit <lolists@...> wrote:
      > Hi Diana,
      > I appreciate your efforts to bridge the gap between theory and practice. It does help to know you were a developer and understand that way of seeing the world. That doesn't entirely alleviate my concerns.
      > > The Standish Reports
      > ...are not in my opinion a very credible source (and I'm not alone in that opinion). Please see my ebook "The Leprechauns of Software Engineering", where I recount my efforts to validate some of the most widely quoted pieces of "research" in our field - this is at https://leanpub.com/leprechauns - including the Chaos Report and other similar monuments.
      > > estimate that approximately 25% of all software development projects are considered failures, about 30% are considered successful, and the remainder are challenged in some way
      > One of the problems I've encountered is that these notions "successful", "failure", "challenged" are so vague and so subjective as to be nearly useless. The same problem applies to your data gathering IMO.
      > Even the term "project" doesn't track the reality that many of us experience: for instance, the team I work with at the moment has been on the same "project" for 4 years now, and there is no end to it in sight. If these people took the term "project" and your prompt "think of the agile project you most recently worked on that is complete or nearly complete" literally, they would self-select out of answering your survey. But they do have software in production with hundreds of users.
      > Did "the architecture, requirements, and design emerge from the project team" - as opposed to what? How am I supposed to answer that question if the architecture and design "emerged from the project team" but not the requirements? Answers to that question are more likely to reflect self-identification with Agile aspirations and ideals, than anything to do with the reality of the project.
      > Did "business representatives and developers work together on a daily basis"? Depends on what "work together" mean. Yeah, they exchange emails everyday and you see product owners at the standup. But by and large requirements are "thrown over the wall" rather than negotiated, and many people on the team might say we're quite a way from being Agile in this respect.
      > I wasn't there when the project started, and very few people currently on the project were, so we probably couldn't answer "The functional requirements fluctuated in the early phases of the project." (Your survey requires an answer, so at this stage we either lie or quit. I decided to lie and picked Neutral.)
      > Who qualifies as an "end user representative"? We have Product Owners, but they are far removed from the day-to-day concerns of our end users. Should I answer the question with them in mind, or should I try to answer the question thinking of the more distant people who "represent" end users in yet another way, even though they're probably no closer to the actual people who use our software?
      > I could go on, but the general point is that when you write:
      > > The survey you received is simply an attempt to gather data
      > There's nothing "simple" about it as far as I'm concerned. Each of those questions would easily spark an hour-long debate among the team members (for some of them, "has sparked" is even more accurate).
      > Moreover, by page 8 I'm still only a third of the way through the survey, having no general idea of what you're trying to ascertain about the project. I can only step through this one page at a time, having to answer all of the multiple-choice items each time. This is torture.
      > I guarantee you that anyone who actually completes the study is either a) not thinking much about their replies, "shooting from the hip" - which makes the data close to useless - or b) taking an order of magnitude longer than the "10 to 15 minutes" promised at the start - and probably not providing accurate or unbiased answers at that.
      > I could say more, but even this much is me having invested a substantial chunk of my time, so it'll have to do for today.
      > Cheers,
      > Laurent
    • Cass Dalton
      Message 41 of 41 , Jul 29, 2013
        That's exactly the point.  Here, we are defining the end as the ACTUAL end, when you stop charging your time.  When you think you're going to be done in 2 weeks (the estimate) but you don't finish for 2 months (the actual end), you obviously have some amount of uncertainty in your estimate even if you don't realize it.  That uncertainty is supposed to be reflected in the cone, but in practice, that cone doesn't taper; it hangs around your estimates like a thundercloud until you stop charging your time to it.
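
        To put rough numbers on that point, here is a minimal Python sketch. The figures are made up, purely for illustration, not data from any real project:

        # Hypothetical numbers, only to illustrate the point above.
        estimate_weeks = 2.0    # "we'll be done in 2 weeks" (the believed end)
        actual_weeks = 8.7      # roughly 2 months until time stopped being charged

        overrun = actual_weeks / estimate_weeks
        print(f"actual / estimate = {overrun:.1f}x")    # a bit over 4x

        # Measured against the believed end, the cone says uncertainty should
        # have been close to zero by that point; measured against the ACTUAL
        # end, the estimate was still off by more than a factor of four.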

        On Tue, Jul 23, 2013 at 10:58 AM, Yves Hanoulle <mailing@...> wrote:

        2013/7/22 Cass Dalton <cassdalton73@...>


        The high-level concept that the cone portrays (estimation uncertainty is IN GENERAL higher the farther away you are from the end) is true.
        Well, you have to keep in mind that there is a BIG difference between being close to the end and thinking you are close to the end.

        However, the shape of the cone is based completely on someone's subjective theory, not on objective, empirical data.  That is the only real point that Laurent is trying to make.  He backs the argument up with intuition that 1) estimates in software development usually tend toward UNDER estimation, not OVER estimation, so the cone is not symmetrical as the original plot suggests, and that 2) the smooth tapering in the curve often doesn't happen, as the last 10% of the work takes the last 40-50% of the time.

        Based on my experience in a traditional environment, I would say that the cone is rarely correct as presented in the plot.  Estimates are low at least 85% of the time, and the uncertainty often doesn't taper anything like the plot suggests.  The times when estimates are high come from people who have been bitten by the always-low estimates enough that they add in so much padding that their estimates are always unrealistically high.  And then you have the rule that the work will tend to fill the estimate, completely skewing any empirical evidence you think you have.  (The empirical evidence, or lack thereof, being the entire crux of Laurent's argument.)
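
        And if someone did collect that evidence, checking the shape would be straightforward. Here is a minimal Python sketch; the lognormal distribution and its parameters are assumptions chosen only to mimic "most estimates run low", not measurements from any real project:

        import random

        # Made-up, skewed distribution of actual/estimate ratios.
        random.seed(1)
        ratios = sorted(random.lognormvariate(0.6, 0.6) for _ in range(1000))

        def percentile(sorted_data, p):
            # Nearest-rank percentile over an already-sorted list.
            return sorted_data[int(p / 100 * (len(sorted_data) - 1))]

        p10, p50, p90 = (percentile(ratios, p) for p in (10, 50, 90))
        ran_over = sum(1 for r in ratios if r > 1.0) / len(ratios)

        print(f"10th/50th/90th percentile of actual/estimate: "
              f"{p10:.2f}x / {p50:.2f}x / {p90:.2f}x")
        print(f"share of work that ran over its estimate: {ran_over:.0%}")

        # With a skewed distribution like this, the empirically derived band is
        # lopsided (far more room above 1.0x than below), nothing like the
        # symmetric cone usually drawn.  You would need exactly this kind of
        # measured data, at several points in a project's life, before you
        # could call any cone "empirical".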

        On Mon, Jul 22, 2013 at 1:20 PM, George Dinwiddie <lists@...> wrote:


        On 7/22/13 12:46 PM, woynam wrote:
        > Sorry, but I'm not buying the plug. If it's wrong, please tell us why.

        You can read some of what Laurent says about it at

        > I agree that it's probably not "scientific". As we've been
        > discussing, getting real numbers is tough in the SW field.
        > Based on my experience, I'd say the cone is very close to correct,
        > given a fixed-sized starting backlog, which is almost a certainty in
        > a traditional contract-upfront project.

        Laurent questions that
        - the cone is presented as symmetrical, with as much room for
        underestimating as overestimating, even though it's impossible to
        complete a project in negative time
        - that the cone seems to say that we necessarily get tighter estimates
        when we approach the end, though in reality some projects stay at "90%
        done" for a long time
        - that the cone is taken for empirical data, but is based on Boehm's
        subjective opinion

        - George

        > My most recent "large" project, a legacy mainframe migration project,
        > was 2.5 years long, and the final costs were 2.5 times higher than
        > our initial estimates. Of course, as we peeled away the layers of the
        > legacy system, there was more junk in there than even the biggest
        > pessimists imagined. You can see our burn-up chart in the 'Files'
        > section of this group (Burnup Chart Example.jpg).
        > Mark
        > --- In scrumdevelopment@yahoogroups.com, Yves Hanoulle <mailing@...> wrote:
        >> 2013/7/22 woynam <woyna@...>
        >>> **
        >>> The figures from Standish need to be taken with a *huge* grain of salt.
        >>> A project is considered a "failure" or "challenged" based on its ability
        >>> to come in at, or under, budget. We all know in the agile community that the
        >>> initial budget estimate is the *worst* possible estimate, given that it's
        >>> derived with the *least* amount of information.
        >> I assume that statement is based on the cone of uncertainty.
        >> I encourage you to read Laurent Bossavit's book
        >> https://leanpub.com/leprechauns
        >> You will learn that the cone is not scientific at all (yes, I agree it feels
        >> right; well, it's not correct...). I won't disclose at what level it is wrong,
        >> let me just say it feels counterintuitive. (mm, isn't agile about doing
        >> some counterintuitive things ;-) )
        >>> Lately, I've made sure that I refer to projects as being "under budgeted",
        >>> rather than "over budget".
        >>> I'd like to see a report that critically reviews projects to determine if
        >>> the actual money spent was in line with the knowledge gained during the
        >>> project. In other words, if you discover something on day 100 that you
        >>> didn't know on day 1, would you have changed your estimate on day 1 if you
        >>> had known then what you didn't know? I'm guessing these percentages would flip-flop.
        >>> Mark
        >>> --- In scrumdevelopment@yahoogroups.com, Diana Young <diana.young@>
        >>> wrote:
        >>>> That being said, the reality is that globally a lot of money is spent
        >>> each year on software development projects and the results are less than
        >>> stellar. The Standish Reports estimate that approximately 25% of all
        >>> software development projects are considered failures, about 30% are
        >>> considered successful, and the remainder are challenged in some way

        Want to speak at AgileDC October 8, 2013? http://agiledc.org/speak/
        * George Dinwiddie * http://blog.gdinwiddie.com
        Software Development http://www.idiacomputing.com
        Consultant and Coach http://www.agilemaryland.org
