
Re: Forth for Robots (from Loki's first steps)

  • David Wyland
    Message 1 of 314, Jan 31, 2008
      Hi Robin,
      .
      Like many, I have been musing on Forth for a long time, trying to
      figure out why my personal productivity was so much better with Forth
      than with other languages. Because you tend to compare Forth against
      other languages, you go for the usual suspects. For example:
      .
      1. Interactive debug for fast code-debug loops. Always good, but Turbo
      Pascal had interactive-like fast test and recompile, as did many
      others. I did not have interactive debug in the cases I discussed, so
      that is not it.
      .
      2. Many small subroutines rather than fewer, larger ones. This is
      tough to argue because the pushback is that this is a style of
      coding, not language dependent. I found out this argument is not true,
      but it is accurate (to paraphrase from Absence of Malice). The
      question is *why* does Forth encourage small subroutines?
      .
      3. As I only recently discovered, the data stack (or parameter passing
      stack) is a hidden key to the question. No other language I am aware
      of has an explicit data stack for parameter passing as part of the
      language definition. Note that the *visible* data stack in Forth is
      *not* the same as the *invisible* stack frame in C and other languages!
      .
      By serendipity, I was coding another Forth-like program using assembly
      language when this thread wandered into Forth land. It made me notice
      the productivity improvement I was getting and remember the same
      situations over the years.
      .
      I was not using Forth, per se. I was using the data stack from Forth
      for passing parameters. It was not interactive: I had to do test,
      reassemble and download to flash for each debug cycle. However, this
      was enough to cause the clock on the wall to "slow down" by a factor
      of 5-10.

      Automatically and unconsciously, I tended to write small routines that
      passed one or two parameters on the stack. Because I tended to create
      data and then use it, I seldom needed stored variables, although I do
      have ~4, as I recall. I did not *try* to do any of this. I just found
      that it is easier to write, read and debug small routines rather than
      larger ones, so I would break a big one down into smaller pieces as I
      was coding and debugging. If I wrote a big routine and it had bugs, I
      would tend to break it into small routines. Likewise, I had no qualms
      about using variables in RAM. I had *lots* of space. I just found out
      when I finished that I used very few.
      .
      Other languages are different from Forth (or my "2th" version of it)
      because of the *visible* data stack as an element of the language.
      Forth has it; others do not.
      .
      Almost all the other languages use equations: C=A+B. We learned this
      in algebra and it is the first thing you learn in all the Fortran
      derivatives, such as C, Basic, Pascal, Python, etc. However, C=A+B
      implies *named* variables A, B and C. So we *generate* named variables
      and then have to keep track of them. *All* variables have to have
      names so you can use them in an equation. The names are an artifact of
      the use of equations. Equations breed names.
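      The contrast is easy to see in miniature. Below is an illustrative
      sketch in Python rather than Forth (the stack list, the push/add/sub/mul
      helpers, and the sample values are all mine, not from the original
      post): the equation style must invent a name for every intermediate
      value, while the stack style threads anonymous values along.

```python
# Equation style: every intermediate value needs a name we invent and track.
a, b, c, d = 2, 3, 10, 4
sum_ab = a + b            # a named temporary, created only to hold this value
diff_cd = c - d           # another name to keep track of
result = sum_ab * diff_cd

# Stack style: the same computation, (a + b) * (c - d), on an explicit
# data stack. No temporaries are ever named.
stack = []

def push(x): stack.append(x)
def add():  y = stack.pop(); x = stack.pop(); push(x + y)
def sub():  y = stack.pop(); x = stack.pop(); push(x - y)
def mul():  y = stack.pop(); x = stack.pop(); push(x * y)

# Forth-like order: 2 3 +  10 4 -  *
push(2); push(3); add()
push(10); push(4); sub()
mul()

print(stack[0], result)   # both 30; only one version needed named variables
```

      In actual Forth this whole computation is just `2 3 + 10 4 - *`; the
      point is that the stack version generates no named variables at all.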
      .
      In Forth, you typically define named variables only when you need them
      for persistent storage. Variables are very easy to define and use
      (remember, I am writing in assembler), but most of the time, you pass
      data along as you generate and use it. This dynamic data does not need
      to be named or stored. So, you tend not to generate many variables. This
      is *not* a requirement; it is an *observation.*
      .
      In the current project, I had tons of variable space available. Just
      did not use much. And I was doing the following: soft (bit-bang) UART,
      command line parsing and conversion of ASCII hex numbers to binary,
      conversion of binary to ASCII hex and ASCII decimal for printing,
      pretty print formatting, a variety of control activities, ADC table
      generation and data manipulation, etc.
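      As one concrete illustration of the conversion tasks in that list,
      here is how ASCII-hex conversion naturally factors into tiny routines,
      each doing one small job. This is an illustrative Python sketch
      (function names and the sample value are mine, not the original
      assembler code):

```python
def hex_digit(ch):
    """Value of one ASCII hex digit, e.g. 'A' -> 10."""
    return "0123456789ABCDEF".index(ch.upper())

def hex_to_int(text):
    """ASCII hex to binary: shift the accumulator left 4 bits per digit."""
    n = 0
    for ch in text:
        n = (n << 4) | hex_digit(ch)
    return n

def int_to_hex(n, width=4):
    """Binary back to fixed-width ASCII hex, for pretty printing."""
    return format(n, "0{}X".format(width))

print(hex_to_int("1A3F"))   # 6719
print(int_to_hex(6719))     # 1A3F
```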
      .
      Because the parameters are passed on a data stack, all subroutine
      calls have no parameter passing structure: they are pure calls with no
      named variables in memory or named variables passed in a stack frame.
      So, subroutines have very low overhead. Just a block of code with a
      return at the end.
      .
      Unfortunately, most people are bugged by the stack. They are used to
      C=A+B, so they dislike the DUP, OVER, SWAP, DROP ... of stack
      manipulation. It is something new, and everyone hates change. Why not
      just have equations? Answer: see above.
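      For readers who have not met those words, here is an illustrative
      Python sketch of what each one does (the list-based stack is mine;
      the comments use the standard Forth stack-effect notation, where
      the rightmost item is the top of the stack):

```python
stack = []

def dup():  stack.append(stack[-1])                       # ( a -- a a )
def over(): stack.append(stack[-2])                       # ( a b -- a b a )
def swap(): stack[-1], stack[-2] = stack[-2], stack[-1]   # ( a b -- b a )
def drop(): stack.pop()                                   # ( a -- )

stack.extend([1, 2])   # start with ( 1 2 )
over()                 # ( 1 2 1 )
swap()                 # ( 1 1 2 )
dup()                  # ( 1 1 2 2 )
drop()                 # ( 1 1 2 )
print(stack)           # [1, 1, 2]
```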
      .
      Another observation. Even though I have a lot of short (5-20 line)
      subroutines, the return stack does not get nested very deep. Perhaps
      as much as 6 deep. Also, the data stack tends not to be nested deep.
      Typically 2-3 deep, maybe as much as 5. This is an *observation.* No
      intent on limiting stack use is involved.
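      That observation is easy to check mechanically. Here is an
      illustrative Python sketch (the Stack class and the sample
      computation are mine, not from the post) that instruments a data
      stack with a high-water mark:

```python
class Stack:
    """A data stack that records its deepest point."""
    def __init__(self):
        self.items = []
        self.max_depth = 0

    def push(self, x):
        self.items.append(x)
        self.max_depth = max(self.max_depth, len(self.items))

    def pop(self):
        return self.items.pop()

s = Stack()

def add(): b = s.pop(); a = s.pop(); s.push(a + b)
def mul(): b = s.pop(); a = s.pop(); s.push(a * b)

# (1 + 2) * (3 + 4) in postfix order: 1 2 + 3 4 + *
s.push(1); s.push(2); add()
s.push(3); s.push(4); add()
mul()

print(s.items, s.max_depth)   # [21] 3 -- never deeper than 3 items
```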
      .
      If you are used to coding in C, etc., 2-3 parameters seem small, even
      restrictive. The key is that the tendency to factor into small
      routines seems to automatically reduce the parameters passed at each
      step. There is no attempt to do this; it just falls out that way.
      .
      C and other programmers are used to longer programs, perhaps 100-200
      lines rather than 5-20 lines. IMO, this is because C and other
      languages make subroutine and function use difficult. You have to
      define the parameter passing interface (caller side and subroutine
      side). Also, there is a performance penalty in calling a subroutine:
      setting up a stack frame is required. So, you tend to avoid heavy use
      of subroutines. This has been noted quite often in the literature.
      .
      C programmers using Forth for the first time try to write large blocks
      of code (like they are used to) and pass a lot of stuff on the data
      stack per call. This prevents you from gaining the advantages of the
      data stack approach. If you keep at it a while and get used to
      factoring your code, the code and stack depth shrink, as does the
      clock on the wall time.
      .
      The Forth data stack approach is not for everyone. If you are getting
      paid by the hour and your client prefers C, etc., increasing your
      productivity may decrease your income. You can show above average
      productivity in C and still spend a lot of honest hours on the code.
      However, if you are writing code for your *own* product and business,
      the Forth data stack idea is worth checking out. It can usually get
      your product out sooner with fewer bugs.
      .
      I hope this lengthy rambling answered your question of "What do Forth
      stacks do for you?"
      .
      Dave

      --- In SeattleRobotics@yahoogroups.com, "Robin" <rhewitt@...> wrote:
      >
      > Hi Jon and Dan (and Randy),
      >
      > Interactivity is certainly good. I like Interactive C for Legos
      > programming, Microchip's MPLab for PIC assembly-language, and Matlab
      > for the same reason. But interactivity's not a language
      > characteristic; it's provided by the development environment (or not).
      >
      > So, what else about Forth do you find useful?
      >
      > Also, how does Forth (as a language, not as a programming style) use
      > stacks that's different from how other languages do? In what way is
      > programming in Forth easier, better, faster because of that? I've used
      > FIFOs and LIFOs (stacks) in C/C++, Java, and assembler as data
      > structures, but only when these data structures made sense for my
      > application logic. Does Forth (somehow) use stacks for everything, and
      > if so, what does that do for you?
      >
      > Thanks,
      > Robin
      >
      >
      > --- In SeattleRobotics@yahoogroups.com, Jon Hylands <jon@> wrote:
      > >
      > > On Thu, 31 Jan 2008 15:58:54 -0000, "dan michaels" <oric_dan@>
      > > wrote:
      > >
      > > > This number may be a tad high, but in any case, I think it's
      > > > largely due to the interactive nature of Forth, mentioned
      > > > previously by Randy and myself. Having the ability to write
      > > > individual functions directly, and to immediately compile them
      > > > to full-speed execution status, and to immediately be able to
      > > > test them interactively, is a huge help, both to debugging and
      > > > to modular program development. This is the one aspect of Forth
      > > > that I miss the most with other languages.
      > >
      > > Smalltalk has this, and has had it since the early 70's...
      > >
      > > The other thing that Smalltalk has that is really useful, both in
      > general
      > > programming and interactive debugging, is a garbage collector.
      > >
      > > Later,
      > > Jon
      > >
      > > --------------------------------------------------------------
      > > Jon Hylands Jon@ http://www.huv.com/jon
      > >
      > > Project: Micro Raptor (Small Biped Velociraptor Robot)
      > > http://www.huv.com/blog
      > >
      >
    • Randy M. Dumse
      Message 314 of 314, Feb 12, 2008
        Dave Hyland had asked about the compression of code, and I had
        mentioned the LLE project. Larry Forsley, who did it, wrote me
        that he was back from vacation, so I asked him about the
        details. He sent an extended, but rather interesting, reply. In
        particular, the last few paragraphs talk about "hacking" in the
        sense that research done by interactive testing with tweaked
        constants and hand adjustments yielded verification of a
        patented process of upconversion of photons. Dave, this is my
        basis for saying that on a large project, not only will the
        object produced be smaller, the source often compresses much
        more than most would believe. I had suggested a 5:1 reduction,
        which people found difficult to believe. Larry here documents
        an actual case of roughly 100:1 reduction. Hope you enjoy the
        read; I did:



        The laser control power conditioning system that I built in the
        late '70s was originally written in Fortran, whose source code,
        specification and documentation filled a shelf full of 3-inch
        3-ring binders, probably close to 20 in all, which with tapes
        and backups and things filled a good 6- to 8-foot shelf. The
        Forth code, including the operating system source code, fit in
        one 1-inch binder, and the power conditioning source code was
        about 10 pages, or 30 Forth blocks.

        The Fortran system also included a microcoded Hewlett Packard
        21MX minicomputer that ran a relational description language
        (RDL) I had devised prior to finding out about Forth in 1976 or
        so. It was an interpretive system that stored spatial and
        temporal relationships of components, sort of Smalltalk-like
        (which I didn't learn about until the mid-80s). General
        Electric Trident Missile Systems built the system to my and
        another engineer's specifications.

        However, GE signed up to deliver the system in August of 1977,
        thinking that would be fine for meeting our September, 1977 DOE
        milestone. Unfortunately, the milestone required an operating
        laser, not just the power conditioning system.

        One of our EE staff had gone up to Ottawa to a semiconductor
        plant auction and bought two HP2114 computers: replete with 16K
        words of core memory! My first in-house Forth system went on
        these, and Ken Hardwick, of the University's mainframe
        computing center, put a full multi-tasking, multi-user
        operating system, URTH, or University of Rochester Forth, on
        them. Ken had the prescience to make the first high-level
        version of ;code that, like Chuck, he also called ;: (later
        renamed DOES>).

        I took the system and put a full laser amplifier testbed
        together and with one computer in Rochester and one at Raytheon
        in Massachusetts, we "rang out" all of the laser amplifiers.

        In the early spring of 1977, after I realized that GE wouldn't
        give us the 3 months we needed to run the laser under automated
        control, I commandeered Dan Gardner from GE (who hated Forth)
        and we wrote a complete 6 beam laser power conditioning control
        system. I think we started in April and were done by June.
        That gave June, July and August to test the laser, while waiting
        for the "real" laser control system to come on line. The test
        only needed 4 beams, but the first experimental laser would be
        the 6 beam Zeta, so I figured, what the hell!

        Naturally, the top-down, build-to-the-specs approach, so
        "apropos" to the military and nuclear submarines, didn't work
        too well for a small university embarked on building the
        world's largest laser for fusion studies. As we wrote the Forth
        code and found out how the laser really worked, bit sense or
        control polarities were wrong, bit assignments were wrong, muxes
        were wrong, etc, we fed that over to the software group so they
        could correct the spec.

        I always liked the idea of an "executable spec": e.g. the
        software.

        At the end of the day, it was decided by Moshe Lubin, the
        director of LLE, that since we'd spent a half million dollars
        and probably 5 man-years building the "real" software, we'd
        better use it. I fought this ridiculousness, but abstained at
        the end.

        Instead, the following systems came up in Forth:

        24 beam laser alignment system, primarily written by my students
        over a couple classes and two programmers, running on an LSI
        11/23 with 256Kbytes of RAM, floppy disk and a 10 MB RL01 hard
        drive, multiple color consoles, 24 tasks, etc. Lawrence
        Livermore ran a 20 beam system on a VAX-780 networked through a
        PDP-11/70 and hundreds of LSI 11/23 computers, each one
        responsible for 4 mirror control systems in the laser. We
        could align and shoot every 30 minutes. Livermore could do the
        same every 2 hours.

        There were probably 2 man-years of effort in the LLE laser
        control system and easily 100+ man-years in the Livermore
        system. I
        discussed this with their engineers one time, and found that
        probably 20% of their effort went into building a sufficiently
        robust RS-232 based communications infrastructure to support the
        communicating tasks, synchronizing them, etc, in the
        pre-Ethernet era.



        Glass Development Laser (GDL, also known as the God Damn Laser)
        power conditioning and safety interlocks for multiple
        laboratories.

        Optical Multichannel Analyzer (OMA) on the back of various
        streak cameras.

        The GDL and OMA systems' performance and flexibility allowed
        Stephen Craxton, a British theoretical physicist, to verify his
        2-wave mixing theory using "detuned" crystals to get 100%
        conversion of infrared photons to ultraviolet photons (2 red ->
        1 green, 1 red and 1 green -> UV). This saved laser fusion in
        the mid-80's, and gained LLE a significant patent using two
        birefringent crystals, each "detuned" about the extraordinary
        axis of rotation. This is the standard method throughout the
        world of building high-power lasers for fusion and other
        studies, as well as smaller systems for a variety of purposes.

        Bob Boni, Steve Craxton, a GDL operator and occasionally me,
        would run GDL nights when no one else was around to rotate the
        crystal pairs, under Forth control, fire the laser, under Forth
        control, operate the streak camera OMA, under Forth Control, and
        within seconds of a shot know where we were on Steve's plots.
        Then, we'd recycle the laser, rotate the crystals or adjust the
        laser power, and fire again. We literally stepped right through
        Steve's curves, nailing them with experimental data.