
Re: transition from commercial testing tool to perl

Dennis K. Paulsen (Jan 8, 2005)
      Once upon a time I worked at a large computer manufacturing company
      where, due primarily to the cost of the proprietary tool (~45k/qtr),
      we needed to look at something else... We evaluated several
      tools... Our requirements generally centered on:
      * LOW/NO COST
      - ActiveState Perl and Win32::GuiTest were free. Other tools cost
      at least several thousand dollars.
      * FUNCTIONALITY (able to interact/test standard GUI applications,
      and have a script language we can use for various other tasks)
      - While other tools had a lot of nifty functionality,
      Win32::GuiTest suited our needs. Being open-source, we knew that we
      could always enhance it in-house or pay someone else to do that for
      us, still much cheaper than paying a recurring license/support fee.
      - We leveraged Perl scripting to drive our test suite.
      * EASE OF USE
      - Anyone with much experience with current automated tools should
      know that relying heavily on a "script recorder" isn't the way to
      go; only by diving into the test language and understanding what
      it and the application are doing will you get the full benefit... We
      knew that Perl, being a standard language, was going to be easy
      enough to learn. The commands in Win32::GuiTest were very similar
      to those in proprietary tools (see the sketch after this list). We
      used the freeware WinSpy to obtain the window information that we
      needed.
      * STABILITY (had to execute on over 6,000 varying system
      configurations daily, under varying Windows OSs, etc.)
      - Perl and Win32::GuiTest were checked to ensure their stability
      in our particular environment. All went well. In an indirect way,
      Perl even helped us find one or two bad hard drives.
      * SPEED (we did NOT have manufacturing time to waste for tools that
      had to initialize their "engines" for many seconds, etc.).
      - Win32::GuiTest was much faster than our previous tool in loading
      (no "engine" needed to be loaded) and in script execution. A close
      second on script execution speed was a proprietary tool with a
      popular name.
      * SMALL/NON-INTRUSIVE FOOTPRINT (we had to download and remove the
      tool from each system, so it couldn't be of a huge size and we
      didn't want unnecessary Windows registry cruft lingering here and
      there, etc.)
      - Perl/Win32::GuiTest were half the size of our previous tool and
      we could deliver it to the system in flat-file format without having
      to register/install this and that.
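      A minimal sketch of that kind of interaction (the window title and
      button captions here are hypothetical, not from our actual suite):

        use strict;
        use warnings;
        use Win32::GuiTest qw(WaitWindow SetForegroundWindow SendKeys
                              PushButton FindWindowLike IsWindowEnabled);

        # Wait up to 10 seconds for the application's main window.
        my ($win) = WaitWindow('^My App', 10);
        die "Main window never appeared\n" unless $win;

        SetForegroundWindow($win);
        SendKeys("some text{TAB}");   # type into the focused control
        PushButton('^OK$');           # press the OK button by caption

        # Verify a control was disabled when it was supposed to be.
        my ($btn) = FindWindowLike($win, '^Save$', 'Button');
        print "FAIL: Save still enabled\n" if $btn and IsWindowEnabled($btn);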

      There is more to the story, but in the end our project was completed
      on time, we eliminated a ~45k quarterly fee, sped up manufacturing
      by at least 3 minutes per system, were able to add a few additional
      checks/tests that were needed, increased quality, gave our team
      something to put on their resumes, etc.

      > - how large will each stand-alone piece be in lines of code /
      > functionality?
      With our use of Perl/Win32::GuiTest (mostly scripting, several
      scripts for GUI interacting/testing), we put a 500-line cap on each
      script, and none of the GUI interacting/testing scripts came close
      to that limit... You could probably have the scripts separated by
      which parent window they work with, depending on your project.

      > - how do your parts talk to each other?
      > - user input, error detection, crash detection, performance,
      > memory usage, etc.
      Error and crash detection were handled in the individual script,
      whether a window didn't come up (WaitWindow()), a button didn't get
      disabled (IsWindowEnabled()) when it was supposed to, etc. We used
      syntax like "IsWindowEnabled($TheWin) or CriticalError(....);"...
      CriticalError() was the sort of function we put into a "result
      module" that we also whipped up in Perl. We had one master script
      that used a data table and a "test case" file to determine which
      tests to run. The master script would then call these test scripts
      using Perl's do() (as discussed elsewhere on this message board).
      These scripts would all use the commands in the Perl module we wrote
      to report their findings... After testing, the master script called
      one final script to log results into a database and (if applicable)
      report results back to the operator on the manufacturing floor.
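      A rough sketch of that master-script pattern (the file names and the
      result-logging subs below are hypothetical stand-ins for our
      in-house result module):

        use strict;
        use warnings;

        my @results;
        sub LogResult     { push @results, [ ok     => shift ] }
        sub CriticalError { push @results, [ failed => shift ] }

        # The test-case file lists one test script per line.
        open my $fh, '<', 'testcases.txt' or die "Cannot open test cases: $!";
        chomp(my @scripts = grep { /\S/ } <$fh>);
        close $fh;

        for my $script (@scripts) {
            # Each test script runs in-process via Perl's do().
            unless (defined do $script) {
                # Covers both a missing/deleted script and a die() inside it.
                CriticalError("$script failed to execute: " . ($@ || $!));
            }
        }

        # The final step would log @results to the database and, if
        # applicable, report back to the operator.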

      > - what is the most basic input that tells what actions are done?
      > - currently we use text files (grids) with basic language
      > - one common engine for every test
      > - many grids, used to create unique tests
      As discussed briefly above.

      > - what balance have you found between reusability and simple
      > copy/paste of code?
      Functions that were generic enough to be reusable were placed into a
      module.
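      For instance, a generic helper pulled into a module might look like
      this (the package name and function are made up for illustration):

        package TestUtil;
        use strict;
        use warnings;
        use Exporter 'import';
        our @EXPORT_OK = qw(wait_for_window);

        use Win32::GuiTest qw(WaitWindow SetForegroundWindow);

        # Wait for a window by title regex and bring it forward, or fail.
        sub wait_for_window {
            my ($title, $timeout) = @_;
            $timeout = 10 unless defined $timeout;
            my ($win) = WaitWindow($title, $timeout);
            die "Window '$title' never appeared\n" unless $win;
            SetForegroundWindow($win);
            return $win;
        }

        1;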

      > - I want to get something working quickly, but slowly enough to
      > avoid spaghetti code

      > - how many things do you have running in parallel during testing?
      > - currently we have one engine giving input, and a crash monitor
      Our project was phased, so we were going to start out doing
      sequential testing and work towards parallel testing at a later
      point. I was recruited by another company, so I never worked on
      that part. The test scripts were quite robust, so it would have
      primarily been just a simple change to the master script
      ("engine"). Our data table could have had a flag indicating which
      scripts won't work in parallel mode.
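      Something like this, say (a made-up sketch; we never actually built
      the parallel phase):

        use strict;
        use warnings;

        # Data table: script name <TAB> parallel-safe flag (values made up).
        my @table = map { [ split /\t/ ] } (
            "login_test.pl\t1",
            "burnin_test.pl\t1",
            "printer_test.pl\t0",   # needs exclusive hardware access
        );

        my @parallel_ok = grep {  $_->[1] } @table;
        my @serial_only = grep { !$_->[1] } @table;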

      > - how much importance do you place on logging stats to some
      > network archive while things are running?
      We recorded results at the end of testing. If we had needed to
      manage test systems centrally (in real time), we would have recorded
      results on the fly... Our results script had a recovery feature in
      case the network went down, etc.
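      A sketch of that recovery idea (record_result() below is a made-up
      stand-in for the real database call):

        use strict;
        use warnings;

        # Stand-in for the real database insert; dies if the network is down.
        sub record_result {
            my ($line) = @_;
            die "network unreachable\n" unless $ENV{NETWORK_UP};
            return 1;
        }

        sub record_with_recovery {
            my ($line) = @_;
            for my $attempt (1 .. 3) {
                return 1 if eval { record_result($line) };
                sleep 5 * $attempt;   # back off before retrying
            }
            # Still failing: spool locally so no result is lost.
            open my $fh, '>>', 'results.spool' or die "Cannot spool: $!";
            print $fh "$line\n";
            close $fh;
            return 0;
        }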


      >- do you push out data from test targets to a network share...
      >- or pull it from the test target?
      We pushed from the test systems to the database server, as only they
      knew when they were done.

      >- when a failure happens do you auto archive and reboot and restart?
      > - how important is this?
      Yes. We wanted to always record problems. We even had a
      placeholder in the database in case a script failed to execute (as
      someone might have deleted it).

      Ok I'm done rambling for now. :-)

      Regards,
      D