Thank you all for your comments. I will take these back to the team and see how we want to proceed.

Chris

----- Original Message ----
From: Tapio Kulmala <tapsakoo@...>
Sent: Tuesday, July 1, 2008 11:58:58 AM
Subject: [scrumdevelopment] Re: Performance testing and agile
Hi Richard!
Nice post. I agree completely.
Performance is just one quality attribute of the system. Every attribute affects architectural and design decisions.
Here is a quote from softwarearchitectures.com:
Software Quality Attributes are the benchmarks that describe a system's intended behavior within the environment for which it was built. The quality attributes provide the means for measuring the fitness and suitability of a product.
Statements like "a system will have high performance" or "a system will be user friendly" are acceptable in the really early stages of the requirements elicitation process. As more information becomes available, such statements become useless because they are meaningless: each attempts to describe the behavior of a system, yet neither provides a tangible way of measuring that behavior.
The quality attributes must be described in terms of scenarios, such as "when 100 users initiate 'complete payment' transactions, the payment component, under normal circumstances, will process the requests with an average latency of three seconds." This statement, or scenario, allows an architect to make quantifiable arguments about a system. A scenario defines the source of stimulus (users), the actual stimulus (initiate transaction), the artifact affected (payment component), the environment in which it exists (normal operation), the effect of the action (transaction processed), and the response measure (within three seconds).
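One way to keep those six elements explicit is to write each scenario down as a small structured record. A minimal sketch in Java follows; the names and the payment example are just illustrations, not anything from the quoted site:

    public class Scenarios {

        // Each quality-attribute scenario names the same six elements.
        record QualityScenario(
                String source,       // who or what produces the stimulus
                String stimulus,     // the event arriving at the system
                String artifact,     // the part of the system affected
                String environment,  // the conditions under which it arrives
                String response,     // what the system does
                String measure) {}   // how the response is judged

        static final QualityScenario PAYMENT_LATENCY = new QualityScenario(
                "100 concurrent users",
                "initiate 'complete payment' transaction",
                "payment component",
                "normal operation",
                "transaction processed",
                "average latency of three seconds or less");

        public static void main(String[] args) {
            System.out.println(PAYMENT_LATENCY);
        }
    }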
Most of these attributes are architectural or design concerns that have to be considered constantly. Performance, scalability, security, etc. are really hard to add later on. Having said that, "done-done" should mean that the relevant quality attributes have also been addressed. If a certain user story/functionality is affected by these concerns, it is a good idea to have some tests to verify them. IMHO, performance tests should be part of the CI cycle. You don't have to run them after every build, but without them you get performance-related feedback way too late.
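To make that concrete, here is a minimal sketch of the kind of latency check a CI pipeline could run on a schedule. Everything in it is hypothetical: completePayment() stands in for whatever entry point your system actually exposes, and the numbers just mirror the scenario quoted above.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PaymentLatencyCheck {

        // Stand-in for the real payment call; replace with your client code.
        static void completePayment() throws InterruptedException {
            Thread.sleep(50);
        }

        public static void main(String[] args) throws Exception {
            int users = 100;              // source of stimulus: 100 concurrent users
            long budgetMillis = 3000;     // response measure: 3 s average latency
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Long>> timings = new ArrayList<>();

            for (int i = 0; i < users; i++) {
                timings.add(pool.submit(() -> {
                    long start = System.nanoTime();
                    completePayment();    // stimulus: initiate the transaction
                    return (System.nanoTime() - start) / 1_000_000;
                }));
            }

            long total = 0;
            for (Future<Long> t : timings) {
                total += t.get();
            }
            pool.shutdown();

            long average = total / users;
            System.out.println("average latency: " + average + " ms");
            if (average > budgetMillis) {
                System.exit(1);           // non-zero exit fails the CI stage
            }
        }
    }

Run something like this in a dedicated nightly stage and you get the feedback loop without slowing every build.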
--- In scrumdevelopment@yahoogroups.com, Richard Banks <richard.banks@...> wrote:
> It depends :-)
> How does performance fit into your definition of "done"?
> If you have specific performance metrics that you have to meet (e.g. pages are rendered on the server in less than 1s) then it's probably a "done" condition, so you'll likely be thinking about performance from the start and dealing with it in the sprint.
> If it's a specific backlog item like "I want to support 1000 concurrent users and respond in under 1s" then it's probably not in your definition of "done" so it's likely that it will come up in a sprint with a performance theme. It could be handled by any of your teams, though good luck on getting a decent sizing estimate against it :-)
> Given that you mentioned SLAs, I'd be doing my best to incorporate the performance criteria in my "done" definition and dealing with it in the team. If it's not part of the criteria of each team, then how are you going to rectify performance problems quickly? Consider the following scenario:
> Sprint 1. Team App writes poorly performing code.
> Sprint 2. Team Perf gets the code and identifies the poor performance
> Sprint 3. Team App adds tasks to their backlog and attempts to fix the perf issues
> Sprint 4. Team Perf gets the code again and runs their tests. Hopefully they pass.
> You've now wasted 3 sprints getting performance work done when it could've been done during sprint 1 when the code was first written, assuming it was part of "done".
> Richard Banks
> Readify | Principal Consultant | Certified Scrum Master | http://richardsbraindump.blogspot.com
Petri,

Some of the bigger challenges with performance testing are just as you stated below. We want to measure performance over an 8 to 10 hour period with 500 users per JVM and 90% of the transactions completing in 3 seconds or less. Challenge one: getting the appropriate hardware. Challenge two: having the appropriate team members to manage the scripts, as we are constantly updating the product and breaking the scripts. Challenge three: keeping this on all teams' radar when we are being asked to include new functionality each sprint.

I agree it makes the most sense to test and address performance issues or concerns within the sprint; there really is no better time to do it. Historically within our organization, performance testing has been done post-GA of the product, which makes no sense to me, and it was done by the organization's infrastructure group, who were only tweaking app server or network configuration to get as much out of the app as possible (way too late for any coding or architectural changes).

I am thinking we will just have to incorporate the performance script writers/testers and possibly infrastructure folks into our scrum teams and make sure we account for this work when estimating sprint capacity. We do view this differently than our automated JUnits that run for every delivery: performance testing includes seeing how the app runs under extended load, garbage collection, memory leaks, etc., which may not show up until you have pushed the app for an extended period.

Thanks again for the comments,

Chris
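That 90%-in-3-seconds target boils down to a percentile check once per-transaction timings are captured. A rough sketch of just that check (nearest-rank method), assuming the timings are exported by whatever tool drives the 8 to 10 hour run:

    import java.util.Arrays;

    public class PercentileCheck {

        // 90th percentile by the nearest-rank method.
        static long p90(long[] latenciesMillis) {
            long[] sorted = latenciesMillis.clone();
            Arrays.sort(sorted);
            int rank = (int) Math.ceil(0.90 * sorted.length) - 1;
            return sorted[rank];
        }

        public static void main(String[] args) {
            // Stand-in data; in practice these come from the soak-test run.
            long[] latencies = {1200, 800, 2500, 3100, 900, 1400, 2200, 700, 1800, 2900};
            long observed = p90(latencies);
            System.out.println("p90 = " + observed + " ms");
            if (observed > 3000) {
                System.exit(1);  // SLA breach: fewer than 90% finished within 3 s
            }
        }
    }

The check itself is the easy part; the hard parts you list (hardware, script upkeep, team attention) remain, but a mechanical pass/fail rule at least makes the result unambiguous.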
----- Original Message ----
From: Petri Heiramo <petri.heiramo@...>
Sent: Tuesday, July 1, 2008 2:14:26 AM
Subject: [scrumdevelopment] Re: Performance testing and agile
--- In scrumdevelopment@yahoogroups.com, Chris Markiewicz
<chris.markiewicz@...> wrote:
> I am interested in how others handle performance testing/analysis
> from within their Agile/Scrum teams. Currently, we are trying to
> incorporate performance testing etc. after each sprint, but I think it
> would be better if we were doing it within the sprint.
> Since this is a different kind of testing from the unit and functional
> testing already being done during the sprint, we are thinking this
> should be another scrum team that would take successful daily builds
> and verify the product is still meeting the SLA. I am interested in
> how others handle this...
As far as I understand these things, performance testing is to me just
another type of functional test or acceptance test. A lot of classic
"non-functional" requirements are really functional, i.e. the system
has to function in a certain way, e.g. under load or within certain
response times. Thus the suggestion given to make a functional test of
them sounds sensible to me.
However, some special tests take a long time to run. When they do, the
team probably has to set up a separate environment in which to run the
test, instead of integrating it into the normal automated continuous
test runs. This is something the team has to consider and decide.
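If those long-running tests live in the same codebase, one common option (a sketch assuming JUnit 5) is to tag them so the normal continuous run excludes them while a separate scheduled job on the dedicated environment includes them:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Excluded from the regular build by tag; a nightly or weekend job
    // on the performance environment runs it explicitly.
    @Tag("performance")
    class ExtendedLoadTest {

        @Test
        void p90LatencyStaysWithinSla() {
            // Placeholder: a real soak test would drive load for hours and
            // feed the recorded timings into a percentile check like the
            // one sketched earlier in the thread.
            long p90Millis = 2900; // stand-in for a measured value
            assertTrue(p90Millis <= 3000, "90% of transactions must finish within 3 s");
        }
    }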
Petri Heiramo
Senior Process Improvement Manager, CSP
Digia Plc., Finland