Re: XP assumption, was Re: [XP] "Code must be commented"

  • Russel A. Hill
    Message 1 of 185, Sep 29, 2003
      Robert Blum wrote:

      >>In our applications we have several interfaces that produce
      >>intermediate states that are difficult to test, including FPGA
      >>configuration, I2C, SMBus, and such. We'd already built a
      >>MockMemoryBlock for testing our RAMTester. We added logging behavior
      >>to it and now we use it to test those interfaces. We get to make
      >>assertions on the memory accesses made, their order, how many there
      >>are, etc...
      >
      >Sounds like a good thing to do. It's along the lines of the
      >compile-time instrumentation thing I was talking about.
      >
      >I'm still giving up on the benefits of the tests actually driving the
      >design (logging information always seems to be closely tied to a
      >particular implementation), but at least it will give me better test
      >coverage.
      >
      I don't think I'd agree that logging is *necessarily* tied to a
      particular implementation. There are plenty of times when we simply
      assert that the control register(s) have the desired values. It's
      important to note that we only use this method of testing when we must.

      In our case, we have several interfaces that share a common timing
      diagram; i.e., sending a given bitstream over either interface results
      in the same waveform. However, the interfaces have different
      requirements at a higher level. Most of the interfaces (I2C, SMBus, and
      one /Other/) must support bi-directional data transfers in short
      bursts. Optimization is not an issue, but flexibility is. One of the
      interfaces, FPGA configuration, is unidirectional (output only) and
      optimization is an issue (3 Mbits, one at a time). The bidirectional
      interfaces do all their work by reading the I/O register, altering the
      bits as needed, and re-writing the I/O register. This allows us to use
      common behavior in an existing ControlRegister class to set the state of
      specific bits. The FPGA configuration simply takes too long using this
      model (1m40s). The current implementation of the FPGA configuration
      caches the values of the I/O registers and precomputes the writes
      required to send any given byte (the precomputed transfer table is 24KB
      in size). Using this approach, FPGA configuration requires less than 30s.
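
      For illustration, here's a minimal C++ sketch contrasting the two
      access styles. All names, register layouts, and sizes are assumptions
      (none of the original code is shown); 96 precomputed writes per byte
      is just one way to arrive at a table of roughly the 24KB mentioned
      above.

      #include <cstdint>

      // Read-modify-write through something like the shared ControlRegister
      // behavior: flexible, but every bit costs a bus read and a bus write.
      void writeBitReadModifyWrite(volatile uint8_t* ioReg,
                                   uint8_t bitMask, bool value)
      {
          uint8_t current = *ioReg;          // read the I/O register
          if (value) current |= bitMask;     // alter the bits as needed
          else       current &= ~bitMask;
          *ioReg = current;                  // re-write the I/O register
      }

      // FPGA-configuration style: cache the register state up front and
      // precompute, for each possible byte, the raw register values that
      // clock it out. Sending is then a burst of writes with no reads.
      struct ByteTransfer {
          uint8_t writes[96];                // 96 * 256 entries ~= 24KB
          int     count;
      };

      static ByteTransfer transferTable[256]; // filled in once, up front

      void sendByteFast(volatile uint8_t* ioReg, uint8_t byte)
      {
          const ByteTransfer& t = transferTable[byte];
          for (int i = 0; i < t.count; ++i)
              *ioReg = t.writes[i];          // writes only; no reads
      }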

      Our tests reflect this difference in implementation. Both the
      bidirectional and the unidirectional tests assert each and every read
      and write. It occurs to me (as I write this) that the bidirectional
      tests could ignore all the reads, except the ones that are required to
      get an input bit. This would make the tests less implementation
      specific, because we'd only be asserting the transfers that have
      meaning. It would also make the tests more complex, because we'd have
      to selectively ignore read accesses.
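
      To make that concrete, here's a rough sketch of a logging mock in the
      spirit of the MockMemoryBlock described above, plus a check that
      filters out the reads. The names and the asserted values are
      hypothetical; the real class and its API aren't shown in this thread.

      #include <cassert>
      #include <cstdint>
      #include <vector>

      struct Access { bool isWrite; uint32_t addr; uint8_t value; };

      class MockMemoryBlock {
      public:
          uint8_t read(uint32_t addr) {      // log reads as well as writes
              log.push_back({false, addr, mem[addr]});
              return mem[addr];
          }
          void write(uint32_t addr, uint8_t value) {
              mem[addr] = value;
              log.push_back({true, addr, value});
          }
          std::vector<Access> log;           // every access, in order
          uint8_t mem[256] = {};             // assumes addresses < 256
      };

      // Assert only the writes; reads are selectively ignored, so the test
      // pins down the transfers that have meaning, not the implementation.
      void assertWritesOnly(const MockMemoryBlock& mock)
      {
          std::vector<Access> writes;
          for (const Access& a : mock.log)
              if (a.isWrite) writes.push_back(a);
          assert(writes.size() == 2);        // counts/values are made up
          assert(writes[0].addr == 0x10 && writes[0].value == 0x01);
      }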

      The FPGA configuration's optimization has at least some documentation in
      the tests, in that the lack of read transfers is explicit (even if the
      precomputation is not). I suppose it's worth discussing whether this is
      valuable to express in the tests.

      Each of these interfaces has a set of low-level tests using this method.
      The bidirectional tests share an extracted fixture with methods to
      assert common fragments (startCondition, stopCondition, writeByte,
      readByte, writeBit, readBit, acknowledge). As a result, the test
      implementations read very easily (almost like documentation). The FPGA
      configuration has no such structure; it's just a bitstream. Thus, the
      set of extracted fixture methods isn't as rich.
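
      Here's a hedged sketch of what such an extracted fixture might look
      like, building on the MockMemoryBlock sketch above. The bit masks and
      the exact waveform (how many register writes per bit) are illustrative
      assumptions; only the method names come from the description above.

      #include <cassert>
      #include <cstddef>
      #include <cstdint>

      class BidirectionalBusFixture {
      protected:
          MockMemoryBlock mock;
          size_t cursor = 0;                 // next unchecked log entry

          static const uint8_t SDA = 0x01;   // hypothetical data line bit
          static const uint8_t SCL = 0x02;   // hypothetical clock line bit

          void expectWrite(uint8_t value) {
              assert(cursor < mock.log.size() && mock.log[cursor].isWrite);
              assert(mock.log[cursor].value == value);
              ++cursor;
          }

          void startCondition() {            // SDA falls while SCL is high
              expectWrite(SDA | SCL);
              expectWrite(SCL);
          }

          void writeBit(bool bit) {          // data held stable over a pulse
              uint8_t sda = bit ? SDA : 0;
              expectWrite(sda);              // set data with clock low
              expectWrite(sda | SCL);        // clock high
              expectWrite(sda);              // clock low again
          }

          void writeByte(uint8_t byte) {     // eight bits, MSB first
              for (int b = 7; b >= 0; --b)
                  writeBit(((byte >> b) & 1) != 0);
          }
          // stopCondition, readByte, readBit, and acknowledge would follow
          // the same shape.
      };

      With helpers like these, a test body reduces to a sequence such as
      startCondition(); writeByte(0xA5); which is what lets the tests read
      almost like documentation.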
    • jrb32002
      Message 185 of 185, Oct 6, 1:30 PM
        --- In extremeprogramming@yahoogroups.com, "Jeff Grigg"
        <jeffgrigg@c...> wrote:

        > Another failed XP project.

        Mu, another person jumping to conclusions. >;-) I grant you, a
        running project which is cancelled when nothing in the environment has
        changed could indeed be a failed project -- more often it's really a
        management failure to put resources to better use. Cancelling a
        running project when the environment changes so as to invalidate the
        purpose of the project is neither project success nor failure, it's
        *management* success.

        Joseph Beckenbach
        lead XP tester
        Eidogen, Inc.