Message 1 of 41, Jul 17, 2013

Hi Diana,
Would you include in a list of 'failed' projects one that has been stopped as soon as it becomes obvious that the development no longer has a business case?
Many people do see this as a failure, but with an Agile approach the question of continued project viability may be asked every 2 weeks.
If the project is cancelled as early as practicable, then this is a success of the Agile approach!
Maybe something to consider in your research findings.
--- In firstname.lastname@example.org, Diana Young <diana.young@...> wrote:
> Hi Laurent,
> Thank you very much for your reply and for the opportunity to discuss the issue.
> First, please know that I was a developer for 15 years. I "get" that it is very hard to generalize the reality of a software development effort using multiple choice survey responses. I also "get" that in a busy day, it is a bit annoying to be asked the same question worded in a couple of different ways (i.e., "The functional requirements were stable throughout the course of the project" and "The functional requirements did not change much over the course of the project").
> In research, though, asking the same question multiple ways is the methodologically approved way to test the reliability of a set of measures. If I did not include multiple wordings of the same question in the survey, I would never be able to publish my findings because I would not be able to attest to the reliability or the validity of the measures that I used. No respectable periodical, journal, conference proceeding, or, in my case, dissertation committee would publish my results without the corresponding reliability metrics that are calculated from these related questions.
> This results in a bit of a double edged sword for people doing research. You follow a set of methodologically sound practices in order to get the approval of the people who may publish your work while simultaneously risking that those very practices may alienate the very people you need to participate in your research study.
> That being said, the reality is that globally a lot of money is spent each year on software development projects and the results are less than stellar. The Standish Reports estimate that approximately 25% of all software development projects are considered failures, about 30% are considered successful, and the remainder are challenged in some way (Eveleens, J., & Verhoef, C., IEEE Software 27(1), Jan/Feb 2010, pp. 30-36). Agile methods seem to be improving software development performance. Some people believe that they are appropriate to use in all project contexts; some people believe they are only appropriate to use in certain project contexts. The survey you received is simply an attempt to gather data from a broad range of development professionals in order to learn a bit about how agile methods are being used in practice and the factors that the people using them believe influence their fit and usability.
> Please also know, that I contacted numerous organizations requesting permission to meet with their software development teams. My goal was to gather rich, project specific information using interviews and/or focus group sessions in order to determine how agile methods are being applied and how they contributed to project performance. Unfortunately, every request was denied. The firms were kind but pretty much stated that 1) they are just too busy to support research endeavors, and/or 2) they did not see any real immediate benefit accruing to the organization for participating in research efforts.
> I think we both can agree that improvements are needed in software development practices. I believe that the research community can contribute to that effort. However, in order to contribute, we have to have data. My goal is to contribute by gathering information pertaining to the project and team factors that influence agile method fit and usability. To do that, I have to collect data from a broad range of people and project contexts. I have to balance the rules imposed by the research committee while simultaneously encouraging real developers to participate and share their experiences. I have to find ways to measure constructs like requirements uncertainty, technical complexity, user involvement, and team expertise using language and sentences that a wide range of developers can understand. I know that those questions will not be specific enough to catch the nuances of every project. However, they can give me a general measure of the constructs, and then I can test to see if there are relationships between those constructs and the developers' perceptions regarding method fit and usability.
> Again, Laurent, thanks for giving me a chance to respond and thank you also for presenting your feedback in such a kind manner.
> Best Regards,
> From: email@example.com [mailto:firstname.lastname@example.org] On Behalf Of Laurent Bossavit
> Sent: Tuesday, July 16, 2013 3:14 PM
> To: email@example.com
> Subject: Re: [scrumdevelopment] Agile Method Research Study
> Hi Diana,
> This isn't to discourage research efforts like yours, but the Agile mailing lists (and other venues that I frequent) get requests of this kind with great regularity, and I've so often found myself reacting with the same reasons for dismay, when I clicked through to the actual survey, that I wrote a blog post specifically to link to in lieu of a reply:
> "Why I won't take your research survey" (G+) - http://bit.ly/13rIK4F
> Please take this reply as the start of a conversation, not as a brush-off. I did click through in the sincere hope of a pleasant surprise, and abandoned the survey after the first few pages of what struck me as hopelessly naive questions.
> This is something we can talk about. Really. I'm open to changing my mind that these questions are in fact important and well thought-out.
> But I'm also hard-nosed. Be prepared to argue forcefully that there are such crucial differences between "The functional requirements were stable throughout the course of the project" and "The functional requirements did not change much over the course of the project" that they absolutely have to be two separate questions. (That's the point when I quit your survey.)
Message 41 of 41, Jul 29, 2013

That's exactly the point. Here, we are defining the end as the ACTUAL end, when you stop charging your time. When you think you're going to be done in 2 weeks (the estimate) but you don't finish for 2 months (the actual end), you obviously have some amount of uncertainty in your estimate even if you don't realize it. That uncertainty is supposed to be reflected in the cone, but in practice the cone doesn't taper; it hangs around your estimates like a thundercloud until you stop charging your time to it.

On Tue, Jul 23, 2013 at 10:58 AM, Yves Hanoulle <mailing@...> wrote:
2013/7/22 Cass Dalton <cassdalton73@...>:

> The high level concept that the cone portrays (estimation uncertainty is IN GENERAL higher the farther away you are from the end) is true.

Well, you have to keep in mind that there is a BIG difference between being close to the end and thinking you are close to the end.

> However, the shape of the cone is based completely on someone's subjective theory, not on objective, empirical data. That is the only real point that Laurent is trying to make. He backs the argument up with intuition that 1) estimates in software development usually tend toward UNDER estimation, not OVER estimation, so the cone is not symmetrical as the original plot suggests, and that 2) the smooth tapering in the curve often doesn't happen, as the last 10% of the work takes the last 40-50% of the time.
>
> Based on my experience in a traditional environment, I would say that the cone is rarely correct as presented in the plot. Estimates are low at least 85% of the time, and the uncertainty often doesn't taper anything like how the plot suggests. The times when estimates are high come from people who have been bitten by the always-low estimates enough that they add in so much padding that their estimates are always unrealistically high. And then you have the rule that the work will tend to fill the estimate, completely skewing any empirical evidence you think you have. (The empirical evidence, or lack thereof, being the entire crux of Laurent's argument.)

On Mon, Jul 22, 2013 at 1:20 PM, George Dinwiddie <lists@...> wrote:
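[Editor's illustration] The asymmetry argument above can be sketched numerically. This is a purely hypothetical simulation, not empirical data: the lognormal error model and its parameters (median overrun of roughly 2x, wide spread) are assumptions chosen only to show that when estimate error is multiplicative and a task can never take negative time, under-estimates vastly outnumber over-estimates, so a symmetric cone is implausible.

```python
# Hypothetical model: actual = estimate * factor, where factor is
# lognormally distributed.  A lognormal factor is always positive
# (no negative completion times) and is skewed toward overruns.
import random
import statistics

random.seed(42)

estimate_weeks = 2.0
# Assumed parameters, for illustration only: median factor = e^0.7 ~ 2x.
factors = [random.lognormvariate(0.7, 0.6) for _ in range(10_000)]
actuals = [estimate_weeks * f for f in factors]

# Fraction of simulated projects that took longer than estimated.
under_estimated = sum(f > 1.0 for f in factors) / len(factors)

print(f"median actual: {statistics.median(actuals):.1f} weeks")
print(f"fraction that overran the estimate: {under_estimated:.0%}")
```

Under these assumed parameters, the overwhelming majority of simulated projects overrun, echoing the "estimates are low at least 85% of the time" observation, while a symmetric error model would put overruns and underruns at 50/50.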
Mark,

You can read some of what Laurent says about it at
On 7/22/13 12:46 PM, woynam wrote:
> Sorry, but I'm not buying the plug. If it's wrong, please tell us why.
https://plus.google.com/115091715679003832601/posts/FKLauKLZECm

Laurent questions that
> I agree that it's probably not "scientific". As we've been
> discussing, getting real numbers is tough in the SW field.
> Based on my experience, I'd say the cone is very close to correct,
> given a fixed-sized starting backlog, which is almost a certainty in
> a traditional contract-upfront project.
- the cone is presented as symmetrical, with as much room for
underestimating as overestimating, even though it's impossible to
complete a project in negative time
- that the cone seems to say that we necessarily get tighter estimates
when we approach the end, though in reality some projects stay at "90%
done" for a long time
- that the cone is taken for empirical data, but is based on Boehm's subjective judgment
> My most recent "large" project, a legacy mainframe migration project,
> was 2.5 years long, and the final costs were 2.5 times higher than
> our initial estimates. Of course, as we peeled away the layers of the
> legacy system, there was more junk in there than even the biggest
> pessimists imagined. You can see our burn-up chart in the 'Files'
> section of this group (Burnup Chart Example.jpg).
> --- In firstname.lastname@example.org, Yves Hanoulle <mailing@...> wrote:
>> 2013/7/22 woynam <woyna@...>
>>> The figures from Standish need to be taken with a *huge* piece of salt.
>>> A project is considered a "failure" or "challenged" based on its ability
>>> to come in at, or under, budget. We all know in the agile community that the
>>> initial budget estimate is the *worst* possible estimate, given that it's
>>> derived with the *least* amount of information.
>> I assume that statement is based on the cone of uncertainty.
>> I encourage you to read Laurent Bossavit's book
>> You will learn that the cone is not scientific at all (yes, I agree it feels
>> right; well, it's not correct). I won't disclose at what level it is wrong;
>> let me just say the answer feels counter-intuitive. (Hmm, isn't agile about doing
>> some counter-intuitive things? ;-) )
>>> Lately, I've made sure that I refer to projects as being "under budgeted",
>>> rather than "over budget".
>>> I'd like to see a report that critically reviews projects to determine if
>>> the actual money spent was in line with the knowledge gained during the
>>> project. In other words, if you discover something on day 100 that you
>>> didn't know on day 1, would you have changed your estimate on day 1 if you
>>> had known it then? I'm guessing these percentages would flip-flop.
>>> --- In email@example.com, Diana Young <diana.young@>
>>>> That being said, the reality is that globally a lot of money is spent
>>> each year on software development projects and the results are less than
>>> stellar. The Standish Reports estimate that approximately 25% of all
>>> software development projects are considered failures, about 30% are
>>> considered successful, and the remainder are challenged in some way
Want to speak at AgileDC October 8, 2013? http://agiledc.org/speak/
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org