
Re: At last a start: Platform for robotics

  • David Wyland
    Message 1 of 274, Jan 31, 2009
      Hi Peter,

      More good stuff. And my replies in the post.

      Dave Wyland

      --- In SeattleRobotics@yahoogroups.com, "Peter Balch" <peterbalch@...>
      > Dave
      > I think I might take issue with some of the distinctions you try
      > to draw between non-real-time (task) and real-time (stream).
      > All robot s/w is real-time. The question is, how long can you afford to
      > wait. 1 ms or 1000 ms? I can see that special tools might be useful
      > for maximum execution time but the overall philosophy and architecture
      > remains constant.
      I think we are splitting different hairs. By real-time, I mean
      processing that is guaranteed to be done in less than a worst-case
      time. By non-real-time I mean that the processing gets done
      eventually, on a best effort basis.

      A real-time system is guaranteed to be fast all the time, 24/7. A
      non-real-time system may be equally fast most of the time.

      Processors with cache memories use the non-real-time system concept. A
      cache memory is a fast memory that saves a copy of data from slow
      DRAM. When the CPU asks for a word from memory, the cache will
      *usually* respond quickly with the copy of the desired word. If it
      does not have a copy, you have to go back to slow DRAM to get it.
      Because cache hit ratios are usually in the 90% region, they make the
      DRAM look as fast as SRAM. Best effort works most of the time.

      BTW, this is why RISC was born. Cache memories made DRAM as fast as
      microcode RAM, so the microcode became the code.

      But this does not always work. Try an incrementing memory test on
      your DRAM. Set the increment equal to the cache line length. You will
      get a cache miss on every access, and now your CPU is running at DRAM
      speeds, not cache speed.

      It is true that all software runs in real time, as we live in real
      time. But there is no direct support at the language or system level
      for the concept of real-time.

      If you want to know how fast something runs, you run it and measure it
      with a stop watch, or the system timer equivalent. The time you
      measured may be typical (or not), but what timing can you *guarantee*?
      And how?

      This information should be available at the code level. The compiler
      should be able to count the clocks required to execute a module,
      including cache misses. The OS should give you a *guaranteed* latency
      for a given priority - if it can.

      > > 1. I have been using task and stream images to differentiate
      > > non-real-time (task) from real-time (stream). On reflection, this is
      > > wrong. Real-time and non-real-time are different from tasks and
      > > processes. A task is simply something that starts and finishes. A
      > > process is continuous - it continues until it is "killed".
      > Yes (or no - I'm not completely sure what you're saying). If I send my
      > hopping robot to the fridge for a soda, hopping requires a fast
      > s/w task, navigation requires a slow continuous task, reaching for and
      > grasping the can requires a fast one-off task, "fetching the soda"
      > a slow one-off task.
      > Brooks would say that the "fetching the soda" and "reaching" tasks run
      > continuously but they simply do nothing until they're given the Go
      I am on somewhat of a crusade here to get the words right to reduce
      confusion. In a simple, dictionary sense, a task is something that can
      be done. It has a start and an end, by the definition of task.

      A process is something that runs continuously. It can be started, but
      once it is started it runs continuously until it is stopped ("killed").

      The problem is the use of "task" as a software term in
      multi-programming systems. The OS was invented to run tasks, called
      jobs in the old days. A task or job is aptly named. The computer does
      it, and it is done. The reason for the OS was to keep the big,
      expensive computer busy doing tasks.

      In simple form, the tasks would run one after the other. But lord help
      you if you had a small task behind a big task. Your small task would
      wait a long time for the big task to complete. Then came the
      multi-tasking OS, such as Unix. Here the tasks were chopped into small
      time slices, and several tasks could be run at once by interleaving
      the time slices. It still took the same amount of time to get all the
      tasks done, but short tasks could finish much sooner by not having to
      wait until a large task was done.

      Somewhere along the line, the idea of "continuous task" was created.
      This contradiction in terms can confuse people. The continuous task
      looks like a low priority task, but it never completes.

      Ironically, OS's use the term process - accurately - to name the
      continuous processes necessary to get tasks scheduled and done. For
      example, the kernel of the OS is a process.

      So, I think it would help to avoid the term "continuous task" and
      replace it with either "periodic task" or "process" as best fits the
      case. Things are tough enough to keep straight.

      Now, back on point. The hopping, navigating and reaching actions are
      continuous in their activity. They are converted into tasks by a task
      processor that sets up the parameters, starts the given activity,
      detects when it is done (including failed), and stops it.

      The motion is continuous, even when the robot is stopped. Think of the
      robot motion system as a kind of kernel - always running but sometimes
      just running the idle process.

      A separate task processor uses this continuous motion process
      capability to implement discrete tasks. The task processor monitors -
      supervises - the continuous action activity to determine when a task
      is done. When it is done, it may change the "motion kernel" from an
      action process to an idle process - or another action process.

      A key here is that there are two systems: a motion process system and
      a task system. The motion process system knows nothing about tasks.
      Only the task processor can know when a task is done or failed.

      > > Event driven operation.
      > Again, Brooks would say that the event-handler task runs continuously
      > but produces no output until triggered.
      > > Processing may require
      > > pipelined modules.
      > What does pipelined mean exactly? To most people, it implies a
      > say a video frame, which is passed through a series of
      > and gradually gets converted into useful data for locomotion or
      > One message is passed along each pipe for each frame. It's a
      > batch-processing system. Which means that producers and consumers
      > must be matched and the whole pipeline runs at the speed of the
      > slowest.
      I am projecting ahead here. I am thinking of several real-time
      processing loops, where the output of one feeds the input of another.
      The total time through both loops is the sum of the two. I think of
      this as a pipeline of processing.

      I think of a real-time assembler as one that will give the total
      number of clocks required to execute the code in a block by adding up
      the clocks required for each MCU instruction or the clocks to get
      through a piece of logic in an FPGA, etc.

      Looking ahead, it should be possible to chain blocks together and get
      the total time through the chain or pipeline by adding the time
      through the blocks.

      > I would argue (taking my cue from Brooks) that each processor runs
      > as fast as it can. They publish their output whenever it's ready.
      > The next in the pipeline reads it when it needs it; if it gets the
      > same data twice or misses a frame then that doesn't matter. If one
      > process occasionally runs
      > then it doesn't slow down the others. There's no handshaking.
      > > How they communicate - peer-to-peer versus star versus ?, bus
      > > versus point-to-point, etc. - is probably the subject of some of the
      > > work to be done.
      > There are two separate questions here: how does the inter-process
      > communication look to the software and how is it implemented in
      > hardware. I argue that in the software, inter-process communication
      > is always peer-to-peer, "publish and inspect". Such comms can be
      > implemented in h/w by peer-to-peer, bus, star or whatever.
      > If processor A is running tasks P,Q,R and processor B is running
      > tasks X,Y,Z then there will be only one physical connection between
      > A and B but there might be several logical connections between any
      > of P,Q,R,X,Y,Z.
      Interesting, but how do you apply this to a real-time loop, for
      example? How do you calculate the guaranteed worst case delay from
      external input to external output in such a configuration?

      > Peter
    • KM6VV
      Message 274 of 274, Feb 15, 2009
        Hi Peter,

        Peter Balch wrote:
        > Alan
        > OK, that would work. As I say, I'm very happy to give away all my
        > documentation of the interface. Only a couple of people have ever asked for
        > more info as though they were doing something technical.
        > I'll send you some stuff.

        That would be much appreciated!

        >>Well, not THAT hard to handle text in C! Don't know much about Delphi,
        >>something like BASIC, as I recall.

        That's right, it's like Pascal. I remember now. I think I ran across
        it on the Turbocnc list. Turbocnc is written in Pascal. No intent to
        start a flame war!

        > Wow! Prepare to duck. Expect to get flamed if you say things like that near
        > a Delphi fan. Delphi Pascal is about equivalent to C++ but with a very much
        > cleaner syntax and with much stronger typing. So bugs that can have you
        > stumped for an hour in C++ are either caught by the compiler or actually
        > impossible to write in the first place. Anything you can do in C, you can do
        > in Delphi - and write it faster. The code it produces is around 10% faster
        > than that produced by a C++ compiler even though all the run-time bounds
        > checking and the like is turned on. That surprises C programmers who believe
        > that, because C is lower-level than Pascal, it must produce faster code. In
        > fact, because Pascal was designed to be easy to compile, it's easier to
        > optimise. As for strings, since version 4, strings can be any length and are
        > kept on a heap with automatic garbage collection (along with other variable
        > length arrays). So you never have to worry about allocating buffers or
        > memory leaks. The Delphi SDK is so much faster and slicker than any C SDK
        > I've used. I reckon I can write apps for around 50% to 70% of what it would
        > cost in C++.

        Seems like it was hard to find a copy from Borland to compile Turbocnc
        with at the time. I'm used to C, like a pair of old shoes.

        > (I'm writing C, C++ and Obj-C just now on a Mac for the iPhone and it's
        > absolutely horrid having to worry about all that memory nonsense - the
        > compiler should do it for me. BTW There was a suggestion a month or so ago
        > that perhaps an iPhone or iPod Touch would be good as a robot's brain.
        > Forget it. The SDK is dreadful. But, much worse, getting an app that you've
        > written on your own Mac onto your own iPhone is fantastically complex,
        > unreliable and you have to pay Apple $99 for the privilege to do so. Forums
        > and blogs are full of developers tearing their hair out trying to get
        > Apple's byzantine systems to work so they can actually run their own code on
        > their own phone. Wait for Google's Android OS.)
        > All the best.
        > Peter

        I don't have any experience with Mac or Apple's, and my phone is a very
        simple device. I think I'll stick to some of Microchip's offerings for
        my 'bots.

        Best regards,

        Alan KM6VV