Cool! Are you aware of http://groups.yahoo.com/group/AgileEmbedded/ ?
I was not, but I will take a look
> Lets say that you have a system that has to interact with an existing
> system. It has to intercept events that are passing through the system.
> Some events are notifications, some events are compact data streams.
> Each data stream has a preceding notification.

Are these one to one? Or may there be multiple notifications before a
data stream? Or notifications that have no data stream?

Not sure it's important to the point of the story, but let's say they are one to one.
> There are some
> notifications/data streams that we want to process, some that we don't.
> There are post-processing plug-ins. Each plug-in is interested in a
> different set of notifications.

I presume the "we" in that first sentence is "a particular plug-in." Is that right?

Effectively, yes; "we" means a plug-in. Every plug-in you add needs to add a new notification parameter to look for.
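The routing described above (each plug-in registering the notification keys it looks for) could be sketched roughly like this. All names and the key-value notification shape are invented for illustration; this is just a minimal model of the idea, not the actual system.

```python
# Hypothetical sketch: plug-ins register the notification keys they care
# about, and a router dispatches each (notification, data stream) pair to
# every interested plug-in. Names here are made up for illustration.

class PlugIn:
    def __init__(self, name, interested_keys):
        self.name = name
        self.interested_keys = set(interested_keys)

    def process(self, notification, stream):
        # A real plug-in would do heavy post-processing here.
        return f"{self.name} processed {notification['type']}"

class Router:
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        # Adding a plug-in means adding new notification keys to look for.
        self.plugins.append(plugin)

    def dispatch(self, notification, stream):
        # Fan the pair out to every plug-in interested in this key.
        return [p.process(notification, stream)
                for p in self.plugins
                if notification["type"] in p.interested_keys]

router = Router()
router.register(PlugIn("html-parser", ["http-capture"]))
router.register(PlugIn("stats", ["http-capture", "heartbeat"]))
print(router.dispatch({"type": "http-capture"}, b"raw bytes"))
```

One consequence this makes visible: the router never needs to know what a plug-in does, only which keys it claimed, which is what keeps the one-to-one notification/stream assumption out of the plug-ins themselves.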
> These plug-ins, the management of the data and messages, and everything
> new aside from a minimal amount of code have to live on one or more
> separate machines (a non-negotiable political customer constraint). The
> first plug-in to be used requires a huge amount of signal processing to
> get any information out of the data.

What about the notification? What's required to understand it?

Pretend it's key-value pairs.
> For example, lets assume that we have to parse an HTML stream
> from only the voltages as seen by the network interface card. But we
> don't have a NIC, we don't have access to any of the existing ethernet
> driver code, TCP/IP network stack, HTTP libraries, etc. We have to
> write all that from scratch.

Why from scratch? Once you convert the voltages to ones and zeros, you
should be able to use an off-the-shelf network stack.

The point of this analogy is to portray the complexity of a general task that we have to perform. I can't describe any of the actual tasks we have to do for various reasons, so I came up with the best example I could that I thought members of the group could relate to without giving too much away. It's just an analogy. We would never rewrite a common network stack from scratch.
> The user need is that they want to automatically parse the HTML from
> these data streams.

That sounds like a proposed solution, rather than a need. Surely they
have some use for the parsed HTML.

Yes it is, and I acknowledged that later in my story. It's for the sake of portraying the complexity in a language that everyone would understand. The lawnmower analogy didn't go over well, so I had to come up with a contrived example to get my point across. The actual details are not important because they are made up.
> To get this done, we have to interface to the
> existing system, build something that will intelligently route the data
> to the right post-processor, something that will make sure that the
> right processor is installed (we're always striving for deliverable code
> at the end of each sprint), and we need the processor.

It sounds like the guts of the system are two-fold:

1. to decide which plug-in needs to process the data, &
2. to process the data in some unspecified way.
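That two-fold split can be made concrete with a tiny sketch: a decision function and a processing function kept separate, with the decision hard-coded as a sprint-1 liberty (as the story itself suggests). Everything here is invented for illustration.

```python
# Minimal sketch of the two halves: (1) decide which plug-in handles the
# data, (2) do the processing. Hard-coding the decision lets the processor
# be built and demonstrated on its own. All names are hypothetical.

def choose_plugin(notification):
    # Sprint-1 liberty: hard-code the routing decision for now.
    return "html-parser"

def process(plugin_name, stream):
    # Stand-in for the real (heavy) post-processing.
    return (plugin_name, len(stream))

plugin = choose_plugin({"type": "http-capture"})
print(process(plugin, b"<html></html>"))  # ('html-parser', 13)
```

Keeping the two functions separate is what makes them independently replaceable: the hard-coded `choose_plugin` can later be swapped for real routing without touching the processor.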
> I know that I
> have already assumed some of the system design in this description, but
> it is for the purpose of describing the complexity of the system. Lets
> just assume that this system design is what the team decided was the
> best solution. Even by taking liberties like hardcoding the
> post-processor to use and things like that, it will still take at least
> three sprints to come close to anything that the customer will see as
> valuable.

Is the processing of the data (which appears to be the real reason
for the system) the thing you think will take 3 weeks? If so, we'll
certainly need to know more about what sort of processing you're doing.
If it's the network interface that's slowing you down, I'd skip that at
first. Prove the processing works to the customer's satisfaction.
There's huge value in that.
The processing IS the network stack. The data stream is basically an HTTP stream, but as captured on the actual wire, and the processing has to do that conversion (remember, this is a contrived example just to describe the complexity of the problem).
I'd start with using some off-the-shelf hardware and software to provide
the network interface, and develop the logical heart of the matter
first. I'd likely start with a simpler process, just to prove out the
decision-making. Or I might start out with hard-coding the decision and
starting with the processor, as you suggest.
The point is, I CAN'T use off-the-shelf hardware and software. Imagine that there are two completely separate network stacks: one that I use to network my actual servers together, and one that I have to write from scratch (contrived example... network stack just for describing complexity... don't have any existing HW/SW to pull from, which is what drives the complexity... etc.)
Either way, I'm sure there are more splits that can be done.
Once those two things are working, at least rudimentarily, then I would
start working backwards to replace the hardware with software, assuming
that's a requirement.
> But in my example, that's akin to taking a string of
> the ethernet voltages and manually translating them to IP packets,
> manually translating the string of IP packets to a TCP connection, etc.
> on to the HTML stream. There's some customer value in that you are
> understanding the customer's need, but there's still a lot of work to do
> to build the actual system.

Yes, I don't know why your solution is specified to do everything from
scratch. Of course, the good thing is that some of these are well-known
problems (well-known in the industry, if not to particular people) and
can also be done in parallel if you've got enough people available.
Yes, I picked a contrived well known problem that was obviously silly given the wealth of IP available to perform the processing in order to easily describe how complicated the processing was to get even the smallest amount of value. If all the customer cares about is the HTTP stream, then they will find it hard to see value in decoding the voltages, or converting those to IP packets, etc...
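One way to picture getting a thin slice through this layered decoding (voltages to frames to packets to a stream to HTML) without finishing every layer first is to stub the lower layers and make only the top layer real. This sketch is entirely invented, just to illustrate that shape:

```python
# Sketch of the layered decoding with the lower layers stubbed so a thin
# slice runs end to end early. Every function and frame format here is
# made up for illustration; the real layers would each be large.

def decode_voltages(samples):
    # Stub: pretend the physical layer already handed us frames as bytes.
    return samples

def frames_to_packets(frames):
    # Stub: strip a fake 2-byte "header" off each frame.
    return [f[2:] for f in frames]

def packets_to_stream(packets):
    # Stub: naive in-order reassembly by concatenation.
    return b"".join(packets)

def extract_html(stream):
    # The part the customer cares about: pull the body out of an
    # HTTP-like stream (split on the blank line after the headers).
    header, _, body = stream.partition(b"\r\n\r\n")
    return body

frames = [b"\x00\x01HTTP/1.0 200 OK\r\n\r\n<html>", b"\x00\x02</html>"]
stream = packets_to_stream(frames_to_packets(decode_voltages(frames)))
print(extract_html(stream))  # b'<html></html>'
```

Each stub marks a seam where a later sprint can swap in the real layer without changing the slice's shape, which is one possible answer to where the story splits could go.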
If you don't have enough people then, well, lots of work means it takes
lots of time. There's still value in seeing the novel parts of the
system work early, and leaving the well-known problems to later.
Yes, and my question relates to the best practice of not splitting stories on architectural or other artificial lines. We're supposed to split the story with thin slices all the way through the system. But with 3 sprints (not 3 weeks, but that's semantics) just to get some basic functionality that the customer would understand, how do you write the story to be worked in sprint 1?

I want to start off with a thread all the way through the system, but what if you can't? If it takes more than 1 sprint to get a thread through, how do you choose where to split? Is splitting on architectural lines OK if that's all you have? (Seems like everyone advises against that, but I can't figure out another way.)