RE: [agile-usability] Death by competitive analysis
It is good to hear there has been this much movement in the land of giga-mongo projects. Have you ever thought this might also be measuring the gap between what was actually simple and what people estimated — that is, their understanding of WHAT was valuable and HOW to deliver it? Isn't the convergence of How and What, and the delta between assumption (aka estimate) and fact (aka accepted potential product), a better estimate of what is viable?
I bring this up because if you increase your sample rate from every 45 days to every 15 days, you would have 3X as many points of measure. You would also be able to cost-justify earlier forays into high-value, high-risk elements that would further improve your visibility into “being late.” Most of all, you could implement the fail-often, fail-fast, succeed-quickly maxim and shift POC funding to those items you absolutely gotta have.
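The cadence arithmetic above can be checked with a back-of-the-envelope sketch; the overall program length used here is an assumed figure for illustration, not something from the thread:

```python
# Rough sketch: how many 0%/100% assessment points a fixed-length
# program yields at different sampling cadences.

def assessment_points(program_days: int, cadence_days: int) -> int:
    """Number of complete measurement intervals in the program."""
    return program_days // cadence_days

program = 900  # assumed multi-year program length, in working days
coarse = assessment_points(program, 45)  # sampling every 45 days
fine = assessment_points(program, 15)    # sampling every 15 days
print(coarse, fine, fine / coarse)       # the finer cadence gives 3X the points
```

Whatever the program length, dividing the cadence by three triples the number of measurement points, which is the visibility argument being made.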
After all this, you big guys might realize that it is your embracing of the ‘big’ that limits what you can do. A measure of this change will be statements like: “How fast can you tell me where the envelope ends, and what options do I have for making good investment decisions quickly?”
Principal Agile Consultant
cell: (978) 376-4422
What agile answers – a question that has been answered in large construction, defense, and space programs for about a decade now – is “how long are you willing to wait before you find out you’re late?”
The answer to this question results in the production of tangible deliverables (partial completions of the end product, or usable intermediate products) on duration boundaries shorter than the accounting period (1/2 is common in our domains) that match the need for information about physical percent complete.
On the large defense and space programs we work, no work package (a single deliverable) can cross more than one accounting period (a month). This means a WP can be at most 45 working days long before a complete deliverable, at some level of maturity, must be produced for a 0%/100% assessment.
The hard part is to partition the program into tangible deliverables that represent assessment of progress for the rolling wave. This is the role of Systems Engineering and the Program Planning and Control staff.
“What does DONE look like?” must be answered in measures of physical percent complete – this is the basis of ANSI/EIA-748B and a myriad of DoD guidance documents.
This is independent of any method used to produce these artifacts – Scrum, XP, linear development, whatever. It doesn’t matter – but the answer to “how long” is at most 45 working days, and this is on programs that last 5 to 7 years.
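The 45-working-day bound described above can be read as a simple planning check. A minimal sketch, assuming Mon–Fri working days and ignoring holidays; the work-package dates are invented for illustration:

```python
from datetime import date, timedelta

MAX_WP_WORKING_DAYS = 45  # maximum work-package length from the text

def working_days(start: date, end: date) -> int:
    """Count Mon-Fri days in [start, end) -- a simple approximation
    that ignores holidays."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # 0-4 are Monday through Friday
            days += 1
        d += timedelta(days=1)
    return days

def violates_rule(start: date, end: date) -> bool:
    """True if a work package spans more than 45 working days."""
    return working_days(start, end) > MAX_WP_WORKING_DAYS

# Hypothetical work packages for illustration
print(violates_rule(date(2010, 1, 4), date(2010, 2, 26)))  # ~8 weeks: fits
print(violates_rule(date(2010, 1, 4), date(2010, 4, 30)))  # ~17 weeks: too long
```

A Program Planning and Control staff would of course also account for holidays and calendar baselines; the point here is only that the rule is mechanically checkable for every WP in the plan.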
The big false positive in this argument is continuous deployment. Remember the annoying browser wars, with the unending interruptions to using the web caused by incessant updates? That is one reason, but the better one is when your customers call and beg you to stop sending them stuff because they cannot absorb it as quickly as you can deliver it. You have to keep in mind that you are part of a supply chain for information use. What you deliver needs to be integrated into your customer’s environment. This may sound strange, but it happens when you start dealing with groups of users.
There truly is not a lot new here. Most of agile is borrowed from design (iterative work, early prototypes, user testing, mini-meetings, et al.) and the notion of being in the field with customers... well, doh... designers in Chicago adopted ethnographic methods back in the early '90s. Competitive analysis is a great tool, but the example shown in the article is pretty lame. There are much better methods and formats. Business and design (and maybe dev folks as well) should be doing their own competitive analysis; the different perspectives all render worthwhile insights.
On Wed, Mar 3, 2010 at 8:02 PM, William Pietri <william@...> wrote:
On 03/03/2010 11:54 AM, Glen B. Alleman wrote:
Having done 2 startups - one that failed, one that went public
(www.triconex.com) - I know of no other way to get the first article.
Is this really "new" in the sense that it is a new way of doing startups?
The Lean Startup stuff? Nothing is truly new under the sun; many claimed there was nothing new about Agile, either. But I think there are several things that distinguish the Lean Startup from both the general-issue Silicon Valley approach and the standard Agile approach:
- It's more extreme than XP. For example, instead of just Continuous Integration, the Lean Startup approach recommends Continuous Deployment, where you push to production on every checkin. IMVU, for example, releases ~50x/day.
- There are no requirements; there are only hypotheses in need of testing.
- It is focused on users, customers, and markets, rather than grand visions or product plans.
- It has a heavy dose of Lean, especially in its approach to cycle time, bug reduction, process refinement, and waste reduction.
- Rather than a generic software development process, it is an integrated approach to creating a company, with the focus on inventing/discovering a new product.
- It is specifically for creating new products, often ones that disrupt existing markets.
- It avoids the "achieving a failure" scenario of big-bang startups.
- It splits the process into two different phases: before and after product-market fit is demonstrated.
- Average capital requirements are substantially lower.
- With its concept of a pivot (radical change in product direction based on feedback), it explicitly encourages a learning-oriented approach to product creation.
- Its focus on minimum viable product (MVP) strongly encourages release-early, release-often behavior.
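The Continuous Deployment point in the list above amounts to "every checkin that goes green ships." A toy sketch of that gate — the callables stand in for a real test suite and deploy step, and none of this reflects IMVU's actual tooling:

```python
from typing import Callable

def deploy_on_green(run_tests: Callable[[], bool],
                    deploy: Callable[[], None]) -> bool:
    """Continuous-deployment gate: a checkin deploys to production
    if and only if its test run passes."""
    if not run_tests():
        return False  # red build: nothing ships
    deploy()
    return True

# Simulated checkins for illustration
shipped: list[str] = []
deploy_on_green(lambda: True, lambda: shipped.append("checkin-1"))   # ships
deploy_on_green(lambda: False, lambda: shipped.append("checkin-2"))  # blocked
print(shipped)  # only the green checkin reached production
```

The design point is that there is no human release decision in the loop: the test suite is the release decision, which is what makes ~50 releases a day feasible.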
For folks interested in the business side of this, and how it fits into the history of Silicon Valley startups, I strongly recommend Steve Blank's blog:
For the other side of it, how the Lean Startup works in practice and its relationship to the Agile world, Eric Ries's blog is the source: