Replacing the ISR

  • Jeff
    Message 1 of 19 , Aug 1, 2007
      It's been a while since I had done any Z80 programming before I got
      into the whole Mailstation thing, so I was referring to the
      documentation on the interrupt modes again earlier, and came to
      realize a trick one could probably use to hijack it from the MS code
      to your own. The NMI would still be stuck there, but maybe that
      rarely gets tripped, if ever (depending on what you're doing?). Mind
      you, this might be a commonly known trick, I dunno, but I thought it
      worth mentioning anyway.

      Basically, you put it into interrupt mode 2, which is the one where
      you set I to the high address byte of an interrupt table. When the
      interrupt hits, the external device writes the low byte (the position
      in the interrupt table) to the processor, and then it does an
      indirect call to the address stored at that memory location.

      Well, I'm fairly sure the MS doesn't have anything that writes this
      byte back to the processor (hence why it uses interrupt mode 1, which
      calls the fixed address 0x38), meaning some random byte will be there
      on the data bus when the interrupt happens. But that's the trick: it
      doesn't matter what byte is there.

      You can simply use a table of 257 identical bytes. It has to be
      257 bytes instead of 256 since, normally, you leave bit 0 of the
      byte fed to the processor as 0, so that it stays aligned to a
      table of 2-byte addresses. Since it may or may not be aligned for
      us due to a random value, we just use one extra byte, in case the
      random byte the processor reads happens to be 255.

      So, for example, if you set register I to C0, and then positioned
      your lookup table from 0xC000 to 0xC100, setting every byte in it to
      like, 40, then when the interrupt hit, it would find the address
      0x4040 no matter where it looked in the table. Then it would call
      0x4040 for the actual interrupt, which of course is an address inside
      your range of control.
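To make the trick concrete, here's a little sketch in Python (nothing Z80-specific, just simulating the IM2 vector fetch with the addresses from my example; the names are made up):

```python
# Sketch: why a 257-byte table of identical bytes works in IM2,
# no matter what random byte is floating on the data bus.

I_REG = 0xC0   # high byte of the vector table (register I)
FILL  = 0x40   # the byte every table entry is set to

# 64K of "memory"; fill 0xC000-0xC100 inclusive (257 bytes) with 0x40
memory = bytearray(0x10000)
for addr in range(0xC000, 0xC101):
    memory[addr] = FILL

def im2_vector(bus_byte):
    """Address the Z80 would call in IM2 for a given (random) bus byte."""
    ptr = (I_REG << 8) | bus_byte   # I forms the high byte of the pointer
    lo = memory[ptr]                # CPU fetches a little-endian 16-bit
    hi = memory[ptr + 1]            # address from the table
    return (hi << 8) | lo

# Whatever byte is on the bus, the fetched address is always 0x4040.
# bus_byte = 0xFF is why the table needs that 257th byte at 0xC100.
assert all(im2_vector(b) == 0x4040 for b in range(256))
```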

      Of course replacing the codeflash is the simplest way to do it, but
      this allows you to leave that alone, and still have its functionality
      intact if you need it, but be able to take control of the interrupts
      as well if you want.
    • Cyrano Jones
      Message 2 of 19 , Aug 2, 2007
        --- In mailstation@yahoogroups.com, "Jeff" <fyberoptic1979@...> wrote:

        > But that's the trick: it
        > doesn't matter what byte is there.
        >
        > You can simply use a table of 257 identical bytes.
        ...
        > So, for example, if you set register I to C0, and then positioned
        > your lookup table from 0xC000 to 0xC100, setting every byte in it to
        > like, 40, then when the interrupt hit, it would find the address
        > 0x4040 no matter where it looked in the table. Then it would call
        > 0x4040 for the actual interrupt, which of course is an address
        > inside your range of control.

        Pretty slick!

        --CJ
      • Jeff
        Message 3 of 19 , Aug 18, 2007
          So I finally did exactly what I mentioned here, to try making my own
          entire interrupt, and came across a few little gotchas that I thought
          were worth sharing.

          Oh yeah, first of all, I apparently didn't come up with the 257
          identical byte table for interrupt mode 2, after seeing some other
          write-ups on z80 interrupts on the web. I kinda had a feeling that I
          surely didn't invent something for such an old processor, considering
          there are countless smarter people in the electronics/programming
          fields than me. Not the first time I've "invented" something
          somebody else already did. But it made me feel smart for a little
          while, at least!

          Anyhoo, since I'm not using the normal video memory buffer, I
          somewhat randomly decided to use #c1c1 for my interrupt location
          (putting #c1 in my 257 byte interrupt table of course). Though I'm
          actually just putting a jump here to code in my main chunk. This
          actually led to some problems for me, because AS80 was "optimizing"
          my code, converting the JP to a JR, and it took me forever to realize
          it. So I finally bit the bullet and disabled optimizations. Prolly
          shoulda done it from the get go.

          Once I got that out of the way, I found out something else strange.
          If I re-enabled interrupts before my RETI, it would freeze up. For a
          while I was just re-enabling them in part of my keyboard loop
          (outside the interrupt), but I knew something had to be wrong, since
          the MS interrupt ends with an EI and works fine. I actually even did
          a jump to #0038 at the end of my own interrupt (effectively making it
          a hook), and it worked fine.

          So it wasn't until looking over the MS code that I realized that it
          was toggling the bit of the current interrupt on port #03. It seemed
          to be the only thing it COULD have been. And sure enough, toggling
          that bit makes my interrupt work fine, letting me end it with the
          standard EI/RETI like I wanted. So that's important to know. I'm
          just curious what exactly toggling the bits does in the hardware,
          as to what would cause it to freeze up like it was doing (unless
          I re-enabled interrupts outside the interrupt routine).

          After I added my toggling, I realized that the time16 interrupt was
          much slower than I had previously experienced. With toggling, time16
          occurred every 1 second. Before I figured that out, the interrupt
          was happening pretty rapidly (perhaps not toggling the bit was making
          the interrupt happen repeatedly?). My bare interrupt routine is
          actually just incrementing a 16-bit value, which I then display the
          content of in my main program code loop, which is how I was able to
          see how fast the interrupt was occurring. Turning the keyboard
          interrupt on makes it count really fast again, but I'm not sure of
          the time interval there.

          I then put my own version of my keyboard routine in the interrupt,
          and found it was freezing if I call it after toggling that bit.
          Doing it beforehand fixed that, as did calling it after setting the
          bit to zero, but before setting the bit back on again. This seems to
          be the method used by the MS routine anyway, so I figure I should
          handle it like that from now on.
        • Cyrano Jones
          Message 4 of 19 , Aug 22, 2007
            > So it wasn't until looking over the MS code that I realized that it
            > was toggling the bit of the current interrupt on port #03. It seemed
            > to be the only thing it COULD have been. And sure enough, toggling
            > that bit makes my interrupt work fine, letting me end it with the
            > standard EI/RETI like I wanted. So that's important to know. I'm
            > just curious what exactly toggling the bits does in the hardware, as
            > to what would cause it to freeze up like it was doing (unless I re-
            > enabled interrupts outside the interrupt routine).

            It sounds like clearing the p3.x output port bit may be clearing the
            corresponding p3.x input port bit.

            The in & out bits are not the same, the existence of the shadow
            indicates you can't read back what you last wrote. It seems like
            the output bits serve to enable the respective interrupt, and now
            you have found evidence that setting the enable bit low also clears
            the request bit. Good work!!! Maybe you would even say it serves as
            an "acknowledge" to the interrupting hardware?

            > After I added my toggling, I realized that the time16 interrupt was
            > much slower than I had previously experienced.

            The slower one is probably correct. I don't know if time16 is even
            used anywhere, the only reference I found to it was the increment
            in the ISR. It is possible it was used by one of the functions
            that got scraped out by version 2.53yr (caller id????).

            Did your keyscan work when time16 was flying? Oh wait, I bet it
            did, since it's not int based, right???

            > With toggling, time16
            > occurred every 1 second. Before I figured that out, the interrupt
            > was happening pretty rapidly (perhaps not toggling the bit was making
            > the interrupt happen repeatedly?).

            I'll venture a hypothesis: Without the toggle of out-P3.4, the in-bit
            was not cleared. Since time16 int has a higher priority than
            keyboard int, time16 handler was catching keyboard ints. When you
            added the toggle of out-P3.4 the keyboard ints were now handled by
            the keyboard int handler.

            And by toggle, I really mean setting low. You need to set high again
            to re-enable the int.

            > My bare interrupt routine is
            > actually just incrementing a 16-bit value, which I then display the
            > content of in my main program code loop, which is how I was able to
            > see how fast the interrupt was occurring. Turning the keyboard
            > interrupt on makes it count really fast again, but I'm not sure of
            > the time interval there.

            Cool!!!

            Can you get an idea of the actual rate of the keyboard int??????
            I think the timers are in millisec, and that would mean this int
            happens every 1 millisec. Another theory is it is a 60 Hz int.,
            in which case the timers can't be in millisec (I think I verified
            they were millisec, but maybe I remember wrong???)

            > I then put my own version of my keyboard routine in the interrupt,

            You mean in the P3.1 int, not the P3.4 (time16) int, right?

            > and found it was freezing if I call it after toggling that bit.

            Here you mean setting low then high, both, before calling your
            keyscan, right??? I probably used bad terminology when I said
            "toggle" in the comments, which I guess really means "invert".

            > Doing it beforehand fixed that, as did calling it after setting the
            > bit to zero, but before setting the bit back on again. This seems to
            > be the method used by the MS routine anyway, so I figure I should
            > handle it like that from now on.

            The keyscan interrupt (which is a misnomer, it is really time32,
            10 timers, and keyscan interrupt), is written to re-enable ints
            inside the ISR (right after inc of time32). I guess the idea is
            to allow the other ints to be serviced before the timers &
            keyscan are done. This is probably why they leave the out-P3.1
            bit low while servicing int 3.1. You don't want to get a
            second keyscan int till you are done with the current scan.

            I don't claim to fully understand how the interrupts work,
            and I don't know if the ms code is best way to do it, but
            if it aint broke...

            --CJ
          • Jeff
            Message 5 of 19 , Aug 22, 2007
              --- In mailstation@yahoogroups.com, "Cyrano
              Jones" <cyranojones_lalp@...> wrote:
              >
              > It sounds like clearing the p3.x output port bit may be clearing the
              > corresponding p3.x input port bit.
              >
              > The in & out bits are not the same, the existence of the shadow
              > indicates you can't read back what you last wrote. It seems like
              > the output bits serve to enable the respective interrupt, and now
              > you have found evidence that setting the enable bit low also clears
              > the request bit. Good work!!! Maybe you would even say it serves
              > as an "acknowledge" to the interrupting hardware?


              > I'll venture a hypothesis: Without the toggle of out-P3.4, the in-
              > bit was not cleared. Since time16 int has a higher priority than
              > keyboard int, time16 handler was catching keyboard ints. When you
              > added the toggle of out-P3.4 the keyboard ints were now handled by
              > the keyboard int handler.
              >
              > And by toggle, I really mean setting low. You need to set high
              > again to re-enable the int.


              After I wrote my post, I thought on what might be happening, and
              formed what I figured was a somewhat fair analysis of how the
              hardware could be working. I'm no expert of course, but this is how
              I'd make it myself, based on what I've seen. It also matches up with
              some of your own hypothesis, but I didn't give you a thorough
              description of how exactly I had my interrupt routine setup, which
              I'll explain in a moment, and will hopefully clear up matters a bit.

              As for the hardware, I'm thinking that there's a flip-flop in
              there for each interrupt. When an interrupt happens, the
              flip-flop is set, and stays on indefinitely until it's reset. I
              figure all of these flip-flop outputs are connected to what you
              read on port 3 input (so that you can tell which interrupt
              occurred), as well as probably all being OR'd together (and
              inverted) to the processor's interrupt pin. During your
              interrupt routine, you'd then read port 3 to see which interrupt
              happened, and "acknowledge" it by setting that bit off then on
              again, which presumably trips that interrupt's flip-flop's
              reset. If you don't trip the reset, the flip-flop never clears,
              so the interrupt routine executes again immediately after it
              returns.
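If it helps, here's the idea as a toy simulation (Python, and the latch model is purely my guess about the hardware, not anything verified):

```python
# Hypothetical model: one set/reset latch per interrupt source, all
# OR'd together (and inverted) into the CPU's /INT pin.

class IrqLatch:
    def __init__(self):
        self.latched = False
    def trigger(self):        # hardware event sets the latch
        self.latched = True
    def ack(self):            # dropping the port 3 output bit low
        self.latched = False  # presumably hits the latch's reset

latches = [IrqLatch() for _ in range(8)]   # one per port 3 bit

def int_line_asserted():
    # /INT stays asserted as long as any latch is still set
    return any(l.latched for l in latches)

latches[4].trigger()            # time16 event fires
assert int_line_asserted()      # so the ISR gets entered

# Without an acknowledge, EI/RETI drops you straight back into the ISR:
assert int_line_asserted()

latches[4].ack()                # toggle the bit low (then high again)
assert not int_line_asserted()  # now RETI actually returns to main code
```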

              If this is all the case, then I think this is why it was "locking up"
              on me when I wasn't toggling the port 3 output bit. I say "locking
              up" because I had initialization code to fill in the screen with a
              particular character, which was directly underneath the code where I
              setup my interrupt replacement and enabled interrupts. The screen
              was drawing only a couple columns or so before just freezing in mid
              draw. If I waited until after the screen drawing code to enable
              interrupts, the screen would completely draw in just fine, of course,
              but then just freeze afterwards. So without toggling that port 3
              bit, it was probably actually just executing the interrupt function
              over and over, and never getting a chance to actually return to my
              main code loop long enough to continue.

              What I didn't fully explain about my interrupt routine is that,
              during the initialization process, I disabled every interrupt in the
              interrupt mask of port 3 except for the time16 one. No keyboard was
              on at this point. So, when I was re-enabling interrupts at the end
              of my interrupt routine, it would do the freezing up that I mentioned
              earlier. If I waited to re-enable interrupts until after I got back
              into my main code loop, then the interrupt would occur fairly rapidly
              (it didn't fully lock up like before because my code had a chance to
              do stuff before the interrupt had a chance to be triggered again,
              which it presumably did immediately upon an EI, since I never toggled
              the p3 bit). Once I put the port 3 time16 bit toggling into play,
              the interrupt started to only happen once a second, which seems to be
              the correct interval of that particular interrupt.

              I then turned on the keyboard interrupt in the interrupt mask as
              well, and that's when it started to trigger the interrupt routine
              quickly again. I was pretty much toggling every bit off then on
              again, so that I'd trip all the resets (therefore getting both the
              keyboard and time16 in one swoop). But now I've separated the two
              interrupts out, by detecting which is occurring and jumping to a
              location to deal with it and toggle only its bit, just like the MS is
              doing (and therefore also implementing an interrupt priority in
              effect, by depending on which I test for first). That way the code I
              have associated with the time16 interrupt only runs at its one second
              interval, and the keyboard code executes at its own interval.

              Actually, I'm not doing it EXACTLY like the MS; I'm only executing
              unique code for the time16 and keyboard interrupt. If one of these
              two doesn't occur, I'm defaulting to toggling all the rest of the
              interrupt bits in one swoop, because I obviously am not interested in
              those if they happened to occur. I turned the others off of course
              in the interrupt mask during initialization, but I figure there's
              always a chance one could be pending from how the MS had it all
              configured before I took over. Just clearing all the others if
              time16 or keyboard doesn't occur prevents another neverending loop,
              like what plagued me initially.

              Actually, why don't I just show you the code fragment:

              in a, (#03)        ; read port 3: which interrupt fired?
              bit 4, a
              jr nz, interrupt_time16
              bit 1, a
              jr nz, interrupt_keyboard

              ld a, (p3shadow)   ; else, just toggle the bits except
                                 ; keyboard and time16 (#12 = bits 1 & 4)
              and a, #12
              out (#03), a       ; drop the other enable bits low...
              ld a, (p3shadow)
              out (#03), a       ; ...then restore them from the shadow
              jr interrupt_end



              I think I might be able to measure this keyboard interrupt interval
              by just checking how many keyboard interrupts occur between
              time16's. If I had to make a wild guess, I might say 60 times a
              second, as you suggested.


              >
              > The slower one is probably correct. I don't know if time16 is even
              > used anywhere, the only reference I found to it was the increment
              > in the ISR. It is possible it was used by one of the functions
              > that got scraped out by version 2.53yr (caller id????).

              Do you suppose they use it for tracking the auto power off aspect?


              >
              > Did your keyscan work when time16 was flying? Oh wait, I bet it
              > did, since it's not int based, right???
              >

              I've pretty much stopped using my own keyscan. Now that I made the
              one work that I ripped out of the Mailstation firmware, and can make
              my own interrupt as well, I see no reason not to just use theirs,
              since I can relocate the variables required by it to wherever I want,
              without cluttering up C000-FFFF.

              But I suppose all the work I put into writing my own version was
              pretty good experience, because it's the first time I've ever worked
              with the raw dealings of rows/columns of a keyboard matrix.


              >
              > I don't claim to fully understand how the interrupts work,
              > and I don't know if the ms code is best way to do it, but
              > if it aint broke...
              >

              Yeah, there's really nothing particularly wrong with the MS interrupt
              that I currently know of that would make me not want to use it, it's
              pretty much solely the fact that its variables are all up in the
              ram. If I could totally relocate everything, then it'd be pretty
              nice to have one contiguous chunk to work in.
            • Jeff
              Message 6 of 19 , Aug 22, 2007
                --- In mailstation@yahoogroups.com, "Jeff" <fyberoptic1979@...> wrote:
                >
                > I think I might be able to measure this keyboard interrupt interval
                > by just checking how many keyboard interrupts occur between
                > time16's. If I had to make a wild guess, I might say 60 times a
                > second, as you suggested.
                >

                So I made two 16-bit variables (didn't think I needed 'em this
                big, but I wanted to make sure) called kbdtest and kbdmax, and
                clear 'em to 0 at startup. Every time the keyboard interrupt
                happens, I increment kbdtest. Every time the time16 interrupt
                happens, I copy the contents of kbdtest into kbdmax, then clear
                kbdtest. I display both on the screen during my main code loop.

                kbdtest naturally increments swiftly, but kbdmax settles at 64 after
                a second or so and stays there. That would seem to indicate that
                it's not 60 times a second after all, but just a tad higher. Not
                sure why they chose that value. Or maybe they were just aiming for
                60 but didn't care if it was entirely accurate, as long as the
                keyboard got read often. Then again I could be doing something that
                makes this not entirely accurate, but I don't know exactly what
                that'd be at the moment. I even took the call to the keyboard scan
                out of the interrupt to see if that might influence the speed at all,
                but it stayed the same. So, for the moment at least, I'm assuming
                the keyboard interrupt happens 64 times a second.
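For what it's worth, the counting scheme itself is trivial to sketch (Python, with the 64 Hz keyboard rate assumed from my measurement):

```python
# Simulate one second's worth of interrupts: KBD_HZ keyboard ints
# between consecutive time16 (1 Hz) ints.
KBD_HZ = 64   # assumption: measured rate of the keyboard interrupt

kbdtest = 0   # incremented on every keyboard interrupt
kbdmax = 0    # snapshot taken on every time16 interrupt

def keyboard_isr():
    global kbdtest
    kbdtest += 1

def time16_isr():
    global kbdtest, kbdmax
    kbdmax = kbdtest   # record how many keyboard ints fit in 1 second
    kbdtest = 0

for _ in range(KBD_HZ):
    keyboard_isr()
time16_isr()

assert kbdmax == 64   # the value that settles on screen
```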
              • Jeff
                Message 7 of 19 , Aug 22, 2007
                  --- In mailstation@yahoogroups.com, "Jeff" <fyberoptic1979@...> wrote:
                  >
                  > So I made two 16-bit variables (didn't think I needed 'em
                  > this big, but I wanted to make sure) called kbdtest and
                  > kbdmax, and clear 'em to 0 at startup. Every time the
                  > keyboard interrupt happens, I increment kbdtest. Every time
                  > the time16 interrupt happens, I copy the contents of
                  > kbdtest into kbdmax, then clear kbdtest. I display both on
                  > the screen during my main code loop.
                  >

                  Maybe I shoulda waited a little bit before I made that post, because
                  here's yet another with some new info!

                  I decided to use this same technique on the other interrupts, to see
                  if/when/how often they trigger. So I implemented code to handle
                  every interrupt, and turned them all on. One by one, I put the code
                  to increment kbdtest into the interrupt handlers. Here's what I
                  got (2 and 7 being the new ones):

                  0 - Didn't do anything
                  1 - Triggers 64 times a second, used for keyboard
                  2 - Triggers when a key is pressed (minus power button)
                  3 - Nothing here
                  4 - Triggers every second
                  5 - Nada
                  6 - Zilch
                  7 - Triggers when power button is pressed/released. This is
                  apparently wired straight to the button, because it sometimes
                  triggers multiple times when pressed/released (aka not
                  debounced).

                  I knew there had to be one connected to the power button, because how
                  else would it wake back up when you press it?

                  Interrupt 2 is what surprised me most though: an interrupt that only
                  activates from the keyboard, yet they never use it. I know that to
                  do debouncing and such, you'd need a constant time interval, which is
                  probably why they chose to do it in interrupt 1. Since the MS isn't
                  particularly cpu intensive, I guess they figured they'd just handle
                  it all in one swoop each time. But if one wanted to streamline the
                  interrupts, I reckon you could pull the keyboard routines apart, and
                  save some cycles by only doing some of it when a key is down.

                  I actually had to edit my post just now. I initially thought
                  interrupt 2 was triggering constantly (64 times a second) as long as
                  a key was being held. But then I had a thought about the keyboard
                  rows, and what might happen if they were off (aka bits set high). It
                  was then that I remembered that the keyscan was switching these rows
                  on and off rapidly (and at a familiar 64 times a second via interrupt
                  1). So I decided to stop calling keyscan in the interrupt routine.
                  As a result, interrupt 2 stopped triggering 64 times a second when a
                  key was being held; instead, it only triggered once per button
                  press. If I turn off the keyboard rows, it doesn't trigger an
                  interrupt at all. If I only turn on, say, just the first row, it
                  only triggers when I press one of those buttons. So interrupt 2 is
                  entirely dependent on which keyboard rows are active.

                  But if keyscan was toggling all the rows to check them for
                  keypresses, and this was causing nonstop interrupts on interrupt 2 as
                  long as the key was held, then this must mean that toggling a
                  keyboard row effectively trips another interrupt 2 if a key in that
                  row is still being held. This could be useful. It's actually
                  kind of useful just letting it increment 64 times a second
                  when keyscan is left to run normally, because you could use
                  that as a basis for handling key repeat and such.
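Here's roughly how I picture the row dependence (a toy Python model with made-up names; the matrix layout is just my assumption):

```python
# Model: interrupt 2 asserts when any *driven* keyboard row contains a
# held key. Rows you aren't driving can't trigger it.

pressed = {(0, 3)}   # (row, col) of keys currently held down

def int2_pending(driven_rows):
    """True if any driven row contains a held key."""
    return any(row in driven_rows for (row, col) in pressed)

assert int2_pending({0})          # row 0 driven, held key is in row 0
assert not int2_pending({1, 2})   # only other rows driven: no interrupt
assert not int2_pending(set())    # all rows off: never triggers

# keyscan strobing the rows re-asserts int 2 on every pass while a key
# is down, which would be why it counted at 64 Hz with keyscan running.
```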
                • Cyrano Jones
                  Message 8 of 19 , Aug 22, 2007
                    > kbdtest naturally increments swiftly, but kbdmax settles at 64 after
                    > a second or so and stays there. That would seem to indicate that
                    > it's not 60 times a second after all, but just a tad higher.
                    Could it be 62.5???? That would make time32 exactly in
                    millisec. Maybe you could measure total count for 10 or
                    even 60 seconds, and get average rather than max? But, I am
                    ready to just call it milliseconds. "Close enough for
                    government work", as they say. :-)

                    Turns out the timers *are* in millisec, too. Well, they are set in
                    millisec, that is.

                    The call to "Set_a_timer(who, msec, persist)" takes the number
                    of msec, and shifts it right 4 bits before sticking it in the
                    timer table.

                    That explains why time32 increments by 16 each int,
                    while the timers inc by just 1.

                    The resolution of each of the 10 timers is limited to
                    multiples of 16 msec, and the max count is #fff = ~65 sec,
                    due to the fact that the 16 bit msec param is shifted
                    right by 4 bits. (Seems the upper 4 bits of the 16 bit
                    limit are unsettable.)
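In other words (just the arithmetic, not the actual ROM code):

```python
# Set_a_timer takes msec and stores msec >> 4, so one timer tick is
# 16 msec and the stored count tops out at 12 bits (#fff).

def timer_ticks(msec):
    return msec >> 4              # what goes into the timer table

assert timer_ticks(16) == 1       # 16 ms = 1 tick: the resolution
assert timer_ticks(31) == 1       # anything under 32 ms rounds down
assert timer_ticks(0xFFFF) == 0xFFF   # max storable count is #fff

# #fff ticks * 16 ms/tick = 65520 ms, i.e. ~65 seconds max
assert 0xFFF * 16 == 65520
```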

                    So, our current understanding is:

                    - time16 is a 16 bit counter of elapsed seconds since boot,
                    unknown if it is ever used.

                    - time32 is 32 bit counter of elapsed msec since boot,
                    units are msec, but resolution is 16 msec.

                    - the ten event generating timers are set in units of
                    milliseconds, but resolution is 16 msec.
                    Max count is #fff = ~65 sec

                    --CJ

                  • Jeff
                    Message 9 of 19 , Aug 22, 2007
                      --- In mailstation@yahoogroups.com, "Cyrano
                      Jones" <cyranojones_lalp@...> wrote:
                      >
                      > Could it be 62.5???? That would make time32 exactly in millisec.
                      > Maybe you could measure total count for 10 or even 60 seconds, and
                      > get aveverage rather than max? But, I am ready to just call it
                      > milliseconds. "Close enough for government work", as they say. :-)
                      >

                      Welp, I wanted to check, so I set it on 10 seconds first. After time
                      to settle, kbdmax stayed a consistent 640. So I did the longer test
                      of 60 seconds, and got 3840. I tried another different method as
                      well (which included waiting until after the first time16 before
                      starting any counters, to ensure everything was in sync), and got the
                      same result. So it seems to confirm the 64 times a second I
                      initially got.
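Just to double-check the arithmetic:

```python
# Measured totals from the 10 s and 60 s runs, divided back out:
assert 640 / 10 == 64.0      # 10-second run
assert 3840 / 60 == 64.0     # 60-second run

# If it were 62.5 Hz (exact milliseconds), we'd have expected:
assert 62.5 * 10 == 625      # not the 640 observed
assert 62.5 * 60 == 3750     # not the 3840 observed
```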

                      So then I thought I should test the time16 interrupt, just to verify
                      that it's in actual exact seconds. I let it run for about 14 minutes
                      counting (14 minutes just because it ended at the hour mark on the
                      clock, easier to remember when to check back on it). 14 minutes =
                      840 seconds, and this is exactly what my counter said when it hit the
                      hour mark. Here it is 8 minutes after, and I checked the counter
                      again, and it's still accurate. So time16 interrupt is pretty much
                      guaranteed to be in seconds, or it would surely have gotten skewed
                      from the clock by now.

                      So I dunno. Maybe they don't care about exact milliseconds on the
                      Mailstation, and just do it as close as possible. I doubt it does
                      anything particularly time critical in the software in which
                      microseconds would matter anyway.

                      Actually, do you think they might have used this time interval due to
                      the modem in some way? I know that you need exact frequencies in
                      order to communicate serially without an error percentage. But I
                      have no idea if the modem chip and cpu run off the same oscillator or
                      what. I don't even know what the oscillator speed is. Obviously
                      it's likely to be some high amount if the Mailstation is capable of
                      switching up in speed.

                      There's three different versions of the modem chip, and I dunno which
                      is in the Mailstation without possibly pulling it apart yet again.
                      One runs at 28.224 MHz, one at 52.416 MHz, and the other at 56.448 MHz.

                      But if we only knew the exact cpu frequency, some math could come
                      into play to figure out other things.
                    • Cyrano Jones
                      Message 10 of 19 , Aug 22, 2007
                        > 2 - Triggers when a key is pressed (minus power button)

                        That's certainly interesting! I wonder if it is detecting any
                        change on keyboard col input port, or just "not #ff" condition.

                        > 7 - Triggers when power button is pressed/depressed. This is
                        > apparently wired straight to the button, because it sometimes
                        > triggers multiple times when pressed/released (aka not debounced).

                        I don't know about that, it sure seems that the isr associated
                        with int7 is the caller id handler.

                        The power button is connected to P9.4. I suppose it *could*
                        generate an interrupt on P9.4 low, but then where is the isr???

                        There doesn't seem to be anything in the isr (labeled
                        "caller_id_handler") to turn on power. In fact, the first
                        thing it does is check P2.2, which is wired to the data-ready
                        pin on the caller-id chip. The isr rets immediately if
                        there is no data ready.
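                        In code, that entry check would look something like this (my
                        reconstruction from the behavior described, not a verbatim
                        listing; the port number and flag polarity are assumptions):

                        ```asm
                        caller_id_handler:
                            in a,(#02)      ; read port 2
                            bit 2,a         ; P2.2 = data-ready from caller-id chip
                            ret z           ; no data ready -> ret now (polarity assumed)
                            ; ... otherwise go read/buffer the caller-id data ...
                        ```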

                        > I knew there had to be one connected to the power button, because how
                        > else would it wake back up when you press it?

                        I think maybe power button sets the power-control flip-flop
                        directly, without cpu's help. I have to find the drawings I
                        made (4 years ago, but they are around here somewhere).

                        I don't really know if the cpu is off, or just in halt state
                        when mailstation is "off" (asleep). If it is halted, it
                        would need an int or a reset to get it started again.

                        Is there any possibility that any port bits are getting
                        changed accidentally????

                        I am assuming you are using your own shadow vars, right???
                        What do you init them to? (0, or the value of the corresponding
                        "real" shadow)????

                        Are you preventing *all* of the original int code from running?


                        > Interrupt 2 is what surprised me most though: an interrupt that only
                        > activates from the keyboard, yet they never use it.

                        Perhaps it *is* used, to wake up from a "screensaver" mode
                        where unit goes to sleep, but needs to wake up right where you
                        left off? I don't know if the mailstation even has this mode,
                        though. The isr would not need to do anything, just the int
                        would restart halted cpu.

                        --CJ
                      • Cyrano Jones
                        Message 11 of 19 , Aug 22, 2007
                          > Welp, I wanted to check, so I set it on 10 seconds first. After time
                          > to settle, kbdmax stayed a consistent 640. So I did the longer test
                          > of 60 seconds, and got 3840. I tried a different method as
                          > well (which included waiting until after the first time16 before
                          > starting any counters, to ensure everything was in sync), and got the
                          > same result. So it seems to confirm the 64 times a second I
                          > initially got.

                          Mebbe we should call the units "almost millisecs"???
                          Or, howzabout "bogomillisecs"??? :-)

                          > So then I thought I should test the time16 interrupt, just to verify
                          > that it's in actual exact seconds.

                          Ya took the words right outta my mouth...

                          ...
                          > So I dunno. Maybe they don't care about exact milliseconds on the
                          > Mailstation, and just do it as close as possible.

                          Sounds likely.

                          > Actually, do you think they might have used this time interval due to
                          > the modem in some way?

                          Doubt that.

                          > I know that you need exact frequencies in
                          > order to communicate serially without an error percentage.

                          Modem is parallel interfaced to cpu.

                          > But I
                          > have no idea if the modem chip and cpu run off the same oscillator or
                          > what. I don't even know what the oscillator speed is. Obviously
                          > it's likely to be some high amount if the Mailstation is capable of
                          > switching up in speed.

                          Modem chip has its own xtal.

                          > There's three different versions of the modem chip, and I dunno which
                          > is in the Mailstation without possibly pulling it apart yet again.
                          > One runs at 28.224mhz, one at 52.416mhz, and the other at 56.448mhz.

                          Probably the slowest! (cheaper)
                          OK, I looked at mine, the xtal next to modem chip says
                          "524AS9X", and I am gonna jump to conclusion that means 52.4 MHz.

                          > But if we only knew the exact cpu frequency, some math could come
                          > into play to figure out other things.

                          I don't think cpu freq has anything to do with modem or timers.
                          The RTC has its own xtal, too. (rtc stops if you short it out).

                          X101 (cpu xtal) has number "122AS9Y" which I am assuming means
                          12.2 MHz.

                          X102 (rtc xtal) has no number (that I can see, at least).

                          --CJ
                        • Jeff
                          Message 12 of 19 , Aug 22, 2007
                            --- In mailstation@yahoogroups.com, "Cyrano
                            Jones" <cyranojones_lalp@...> wrote:
                            >
                            > > 7 - Triggers when power button is pressed/depressed. This is
                            > > apparently wired straight to the button, because it sometimes
                            > > triggers multiple times when pressed/released (aka not debounced).
                            >
                            > I don't know about that, it sure seems that the isr associated
                            > with int7 is the caller id handler.
                            >
                            > The power button is connected to P9.4. I suppose it *could*
                            > generate interupt on P9.4 low, but then where is the isr???

                            I stripped everything out and made a standalone interrupt 7 test, if
                            you want to try it yourself. This will make the LED come on when you
                            press the power button:

                            http://www.fybertech.net/mailstation/interrupt7test.asm

                            I put the description and instructions at the top there.


                            >
                            > I don't really know if the cpu is off, or just in halt state
                            > when mailstation is "off" (asleep). If it is halted, it
                            > would need an int or a reset to get it started again.
                            >

                            I tested this finally to see just what happens. I pulled the
                            powerdown function out and changed it so that it would jump to the
                            beginning of my code after the halt. If the processor was merely
                            asleep, then it should jump back to the start of my code and keep on
                            trucking.

                            Well, it doesn't, unfortunately. It always resets the MS entirely.
                            So it must do more than merely wake it back up. P28.0 must
                            completely kill the power to the cpu, or at least somehow hold
                            it in a state of reset.

                            However! We can make our own "sleep" mode. Upon getting a power
                            button press during the normal keyscan, we can disable interrupts,
                            leave only interrupt 2 (the keypress interrupt) unmasked, turn off
                            the lcd, re-enable interrupts, and halt immediately thereafter.
                            Below the halt we just put a jump to where we want to go when it
                            wakes up. I just tried it, and it works like a charm! It woke
                            right back up and went back to my code. I also tried using
                            interrupt 7 (the power button), but since it's not debounced and
                            activates again when you release it, it doesn't work well unless
                            one goes to some extra trouble in software.
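                            The whole sequence is a sketch like this (the interrupt-mask
                            port, bit layout, and lcd_off helper are stand-ins here, not
                            the actual values from my code):

                            ```asm
                            sleep:
                                di
                                ld a,#04      ; leave only int2 (keypress) unmasked --
                                out (#03),a   ;   mask port/bit layout is a stand-in
                                call lcd_off  ; stand-in for whatever blanks the lcd
                                ei
                                halt          ; cpu stops until int2 fires on a keypress
                                jp restart    ; below the halt: jump back to our code
                            ```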


                            > Is there any possibility that any port bits are getting
                            > changed accidentally????
                            >
                            > I am assuming you are using your own shadow vars, right???
                            > What do you init them to? (0, or the value of the coresponding
                            > "real" shadow)????
                            >

                            I'm still using the MS's shadow vars, actually.


                            > Are you preventing *all* of the original int code from running?

                            Yep, as can be seen in the test app above.




                            Here's you something kind of interesting. The v2.53 powerdown at
                            #1AC0:

                            di
                            ld a,(p28shadow)  ;; set p28.0: either modem reset, or I am a pin off on Vcc control bit?????
                            set 0,a           ;; Yeah, this makes more sense as "power off"
                            ld b,a
                            out (28),a
                            ld de,03E8
                            push de
                            call Delay(msec)  ;; delay
                            pop de
                            ld a,b            ;; reset p28.0
                            res 0,a
                            out (28),a
                            halt              ;; stop cpu. interrupts can wake it, I guess.
                            ret


                            And now the v3.03a powerdown:

                            di
                            ld a,(p28shadow)
                            set 0,a
                            ld b,a
                            out (#28),a
                            halt
                            ret


                            No delay, no other bit resets or anything. Just straight and to the
                            point. Wonder what the delay and other stuff is for, then?
                          • Cyrano Jones
                            Message 13 of 19 , Aug 24, 2007
                              > > I don't know about that, it sure seems that the isr associated
                              > > with int7 is the caller id handler.
                              > >
                              > > The power button is connected to P9.4. I suppose it *could*
                              > > generate interupt on P9.4 low, but then where is the isr???
                              >
                              > I stripped everything out and made a standalone interrupt 7 test, if
                              > you want to try it yourself.

                              OK, I am getting int7 too when I run your test. I wonder if
                              anything else triggers int7? Does the callerid chip?

                              The isr sure looks like it is caller-id related. And it most
                              definitely does not turn the power on. Or off.

                              I did some poking around with continuity checker tonight.
                              There are three things that set the power flip-flop to the
                              "on" state: Power button, reset button, and a pin on cpu.

                              The two buttons can pull the /set pin of the f-f to ground
                              via two diodes. The pin from cpu chip is connected to /set
                              thru a 10k resistor.


                              > Here's you something kind of interesting. The v2.53 powerdown at
                              > #1AC0:
                              ...
                              > No delay, no other bit resets or anything. Just straight and to the
                              > point. Wonder what the delay and other stuff is for, then?

                              Hmmmm... I wonder if they decided it works better that way?
                              It almost seems like a waste of time to take the delay out,
                              even if it is not needed.

                              What PCB is in your 3.03a? What color is your case? The white
                              (or whatever you call that color) ones have a different PCB
                              than the brown ones. Maybe there is some difference in circuit???

                              --CJ
                            • Jeff
                              Message 14 of 19 , Aug 24, 2007
                                --- In mailstation@yahoogroups.com, "Cyrano
                                Jones" <cyranojones_lalp@...> wrote:
                                >
                                > OK, I am getting int7 too when I run your test. I wonder if
                                > anything else triggers int7? Does the callerid chip?
                                >
                                > The isr sure looks like it is caller-id related. And it most
                                > definitely does not turn the power on. Or off.

                                I actually made my test program because I honestly didn't know if
                                your MS might behave differently.

                                I've never doubted your assumption of interrupt 7 being used for
                                caller id though, especially after having looked at the function the
                                interrupt routine is calling. It's just that the same interrupt
                                obviously has a second and unexpected use. Whether they actively
                                use it, though, is another story, and at the moment, I'd lean
                                towards "no". Much like with interrupt 2. It makes me wonder how
                                many other aspects of the device have multiple and/or unused
                                functions.

                                The thing is though, I'm fairly sure my model doesn't have caller ID
                                (as with all DET1 models, correct?). So I wonder if the models that
                                do have caller id still have the power button associated with their
                                interrupt 7?

                                I wonder if maybe they tied the power button to it on these earlier
                                models just for testing purposes? That, or maybe they actually
                                thought they'd use it in early planning stages, until they changed
                                their minds, and decided to use it for caller ID instead. This is
                                the version 2.xx and 3.xx firmware after all. Who knows what was in
                                earlier revisions.

                                Have you tried tracing that interrupt's pin off the cpu to see where
                                all it goes?


                                >
                                > I did some poking around with continuity checker tonight.
                                > There are three things that set the power flip-flop to the
                                > "on" state: Power button, reset button, and a pin on cpu.

                                Well look at that, it does come on when I press reset! Is the pin on
                                the cpu you're referring to the P28.0 one?


                                >
                                > Hmmmm... I wonder if they decided it works better that way?
                                > It almost seems like a waste of time to take the delay out,
                                > even if it is not needed.
                                >
                                > What PCB is in your 3.03a? What color is your case? The white
                                > (or whatever you call that color) ones have a different PCB
                                > than the brown ones. Maybe there is some difference in circuit???
                                >

                                It's black, though I'm not exactly sure which number is my PCB
                                version. I took some photos though so that I wouldn't have to keep
                                opening it every time I wanted to check something. The middle and
                                right ones probably have the info you're interested in along the
                                bottom. They're not super quality, but they're high-res enough to
                                get most of the part numbers and such if interested. They're almost
                                2MB apiece.

                                http://www.fybertech.net/mailstation/ms_left.jpg
                                http://www.fybertech.net/mailstation/ms_center.jpg
                                http://www.fybertech.net/mailstation/ms_right.jpg
                              • Cyrano Jones
                                Message 15 of 19 , Aug 24, 2007
                                  > The thing is though, I'm fairly sure my model doesn't have caller ID
                                  > (as with all DET1 models, correct?).

                                  AFAIK only the eMessage had caller-id function. DET1 covers
                                  a whole bunch of different models, so it really is not very
                                  useful as an identifier. All the brown & white mailstations,
                                  and the emessage are "DET1".

                                  The first ergonomic case model was called DET2 (purple mivo 200).
                                  The 250's are DET2B, the 350 is IWT2B.

                                  Now, everything else (the "AFAIK" is still in effect) is a DET1x,
                                  where the x is a letter. This includes both old and new 120, and
                                  both old and new 150. Also, I think one of the older models was
                                  "DET1-01". And it makes a difference where you look for the number.
                                  The older 120 says DET2 on bottom of unit, but DET1E on the box!

                                  I have a white "mailstation" (came in box with brown picture, and
                                  a sticker that said "new color!"). It has same PCB as the eMessages,
                                  with caller-id chip, but no caller-id function. I'm as sure as I
                                  need to be that it is really a reflashed eMessage. The oddest
                                  part is that it has same firmware version # (3.03a) as your black
                                  unit, which doesn't have caller-id chip. (is yours really very
                                  dark brown???)

                                  The PCB in brown units (1T0863BMB-33) is different than the white
                                  (1T0863CMB-32) mainly in that there is no caller-id chip.

                                  2.53yr (brown, no chip) still has the caller-id isr. I took a
                                  quick look at your 3.03a dump, and it seems to have same isr.

                                  > So I wonder if the models that
                                  > do have caller id still have the power button associated with their
                                  > interrupt 7?

                                  The cpu is same in all of them (except 350), so it seems a sure
                                  bet.

                                  > I wonder if maybe they tied the power button to it on these earlier
                                  > models just for testing purposes? That, or maybe they actually
                                  > thought they'd use it in early planning stages, until they changed
                                  > their minds, and decided to use it for caller ID instead. This is
                                  > the version 2.xx and 3.xx firmware after all. Who knows what was in
                                  > earlier revisions.

                                  Earliest I have seen is 2.21 (eMessage).

                                  > Have you tried tracing that interrupt's pin off the cpu to see where
                                  > all it goes?

                                  I think int is associated with an i/o bit, (or bits, it seems).
                                  Power button is on P9.4, and my best guess is call-id int is
                                  on P2.2 (callid_data_rdy).

                                  > > I did some poking around with continuity checker tonight.
                                  > > There are three things that set the power flip-flop to the
                                  > > "on" state: Power button, reset button, and a pin on cpu.
                                  >
                                  > Well look at that, it does come on when I press reset! Is the pin on
                                  > the cpu you're referring to the P28.0 one?

                                  No, P28.0 is an output, and it *clears* power f-f, turning
                                  power off.

                                  I don't think the signal that turns power on is a port bit,
                                  rather the "alarm" out from rtc. Just a guess, though.
                                  If cpu is off, then ports prolly don't work. I am assuming
                                  that rtc inside cpu is powered, even when rest of chip is
                                  off. It has to keep time/date counting, even when off.
                                  My guess is they use a timer to wake unit up at mail check
                                  time.

                                  I think maybe reason it comes on with reset button might
                                  have to do with reflashing in the box. Brown units at
                                  least. They had holes in the inner box giving access to
                                  power jack, par port, and reset button.

                                  > It's black, though I'm not exactly sure which number is my PCB
                                  > version. I took some photos though so that I wouldn't have to keep
                                  > opening it everytime I wanted to check something. The middle and
                                  > right ones probably have the info you're interested in along the
                                  > bottom. They're not super quality, but they're high-res enough to
                                  > get most of the part numbers and such if interested. They're almost
                                  > 2MB a piece.
                                  >
                                  > http://www.fybertech.net/mailstation/ms_left.jpg
                                  > http://www.fybertech.net/mailstation/ms_center.jpg
                                  > http://www.fybertech.net/mailstation/ms_right.jpg

                                  Heck, those are darn nice pics! Did you use macro lens,
                                  or a scanner?

                                  That is same board I found in all the brown units I looked
                                  inside (1T0863BMB-33). Same as in 2.53yr.

                                  I listed all the units I opened up in groups database section
                                  http://tech.groups.yahoo.com/group/mailstation/database?method=reportRows&tbl=1&sortBy=2

                                  (we'll see if that works. if not, just open the hard way.
                                  best ordering is sort on "firmware" column).

                                  --CJ
                                • Jeff
                                  Message 16 of 19 , Aug 25, 2007
                                    --- In mailstation@yahoogroups.com, "Cyrano
                                    Jones" <cyranojones_lalp@...> wrote:
                                    >
                                    > Now, everything else (the "AFAIK is still in effect) is a DET1x,
                                    > where the x is a letter. This includes both old and new 120, and
                                    > both old and new 150. Also, I think one of the older models was
                                    > "DET1-01". And it makes a difference where you look for number.
                                    > The older 120 says DET2 on bottom of unit, but DET1E on the box!

                                    I don't have a box or anything. All I know about mine is from the
                                    label on the back, with "DET1", which I now know is fairly generic
                                    unfortunately. It does also say "REN: 0.1B" on the sticker too, if
                                    that means anything.


                                    >
                                    > I have a white "mailstation" (came in box with brown picture, and
                                    > a sticker that said "new color!". It has same PCB as the eMessages,
                                    > with caller-id chip, but no caller-id function. I'm as sure as I
                                    > need to be that it is really a reflashed eMessage. The oddest
                                    > part is that it has same firmware version # (3.03a) as your black
                                    > unit, which doesn't have caller-id chip. (is yours really very
                                    > dark brown???)

                                    I've never considered it dark brown, it truly looks black to me, but
                                    I dunno. I can't even really capture the color well enough with a
                                    camera, but I tried: http://www.fybertech.net/mailstation/ms_front1.jpg

                                    Odd thing happened with the LCD there. Guess the camera is faster
                                    than the LCD refresh.

                                    As for the content on the screen, that's where I'm testing C code. I
                                    finally figured out SDCC enough to modify its CRT0.s to work with my
                                    app loader (since code starts at 0x8000 after loading), and then
                                    added in my text lcd functionality by replacing their placeholder
                                    'putchar' with my own (which uses global cursorx and cursory
                                    variables accessable from C, as well as being capable of interpreting
                                    carriage returns/line feeds), which is the most basic function of all
                                    of C's character and string drawing functions. So now I can use
                                    printf and such to output text, which is so much less time consuming
                                    than fiddling around in assembly. I've implemented a few basic
                                    functions in actual C, like for clearing the screen and getting
                                    scancodes, but that really needs to be redone in assembly. I've
                                    already done those things in assembly before, it's mostly a matter of
                                    modifying the code to work with SDCC. Unfortunately code I make now
                                    is certainly more bulky with all of C's libraries crammed in, and
things are a bit more noticeably slow from the overhead.

One problem I have with SDCC is that I have no idea how to align data
to a particular byte boundary! In AS80, I have my cga font table
                                    aligned to a 256 byte area, so that the font drawing code works
                                    quickly. But now I have to specifically 'org' the code to a location
                                    in order to make it work. I really don't know if SDCC can even do
                                    it, which is a huge downside. For now, I'm just putting the 2k of
                                    font data at the very end of page8000 to avoid messing with the
                                    memory areas C uses, which makes all my binaries 16kb.

                                    Anyhoo, when I have more functionality worked in for at least
                                    handling the keyboard properly with standard C functions, I'll upload
                                    some stuff.


                                    >
                                    > Heck, those are darn nice pics! Did you use macro lens,
                                    > or a scanner?

                                    It's a Sony Cybershot 3.2 megapixel that I found on sale a year or
                                    two ago. It's a good camera for the most part, and has a macro mode
                                    for when you want to get close to things. But you can't turn off the
                                    flash without going into slow exposure, which is pretty useless
unless you have it on a stand. So I always manage to kill most
up-close pictures with the flash, or take a bunch until I get one that
                                    I'm somewhat satisfied with. I had to block out some of the light
                                    with paper just for those I took of the MS board.

                                    I thought about using a scanner actually, but then I figured it'd
                                    prolly be kind of hard to position it on there without taking the
                                    whole board out.


                                    >
                                    > I listed all the units I opened up in groups database section
                                    > http://tech.groups.yahoo.com/group/mailstation/
                                    > database?method=reportRows&tbl=1&sortBy=2
                                    >

                                    You've sure opened a lot, then! That's a good list. This is the
                                    only one I've owned (hence my fear of breaking it). It's one of the
                                    few I've even seen too, for that matter. I remember seeing one of
                                    the fancier new models on display at the store before, but I never
                                    messed with it. It's just funny how this thing sat under the bed
                                    with a layer of dust on it for ages before I ever realized the
                                    breakthroughs that had been made with it here.
                                  • John R. Hogerhuis
                                    Message 17 of 19 , Aug 26, 2007
                                      If you're running from RAM, instead of 256 byte table, allocate an
                                      extra 256 bytes before or after.

Then memmove the table data to the aligned location at runtime, calculated by:

                                      U = table address
                                      A = (U + 0xFF) & 0xFF00

                                      You will "waste" 256 bytes using this method. However, either above or
                                      below the table you will have at least 128 bytes, so you may be able
                                      to find some other purpose for it.
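In C this might look like the sketch below. The buffer name and sizes are illustrative; the mask is widened to pointer size so the snippet also runs on hosts with more than 16 address bits, but on the Z80 itself the 16-bit `(U + 0xFF) & 0xFF00` form is equivalent.

```c
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 256

/* Reserve 256 spare bytes so an aligned copy always fits inside. */
static unsigned char buf[TABLE_SIZE + 256];

/* Round an address up to the next 256-byte boundary:
   the pointer-width analogue of A = (U + 0xFF) & 0xFF00. */
unsigned char *align_up_256(unsigned char *p)
{
    uintptr_t u = (uintptr_t)p;
    return (unsigned char *)((u + 0xFF) & ~(uintptr_t)0xFF);
}

/* Copy the table data to the aligned spot at startup. */
void init_table(const unsigned char *table_data)
{
    unsigned char *aligned = align_up_256(buf);
    memmove(aligned, table_data, TABLE_SIZE);
}
```

The leftover space before and after the aligned copy (at least 128 bytes on one side, as noted above) is still free for some other use.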

                                      If you are running from flash, you could allocate 256 bytes and then
attempt to realloc only as many bytes as you need to get an aligned
                                      chunk in there somewhere. Probably the allocator will not change the
                                      location, but you need to verify that the address does not change.

                                      -- John.
                                    • Jeff
                                      Message 18 of 19 , Aug 26, 2007
                                        --- In mailstation@yahoogroups.com, "John R. Hogerhuis" <jhoger@...>
                                        wrote:
                                        >
                                        > If you're running from RAM, instead of 256 byte table, allocate an
                                        > extra 256 bytes before or after.
                                        >
> Then memmove the table data to the aligned location at runtime
> calculated by:
                                        >
                                        > U = table address
                                        > A = (U + 0xFF) & 0xFF00
                                        >
> You will "waste" 256 bytes using this method. However, either above or
> below the table you will have at least 128 bytes, so you may be able
> to find some other purpose for it.
                                        >

                                        Hey that's a pretty clever method. After I wrote my last post, I
                                        stopped wasting so much space in the binary by moving it to the last
                                        2k of slot8000 ram during C's initialization, but that was still
                                        wasting ram by leaving it in the original position too. This method
                                        is much better, and only takes four lines more assembly than the
                                        previous method to find the new address. So 8 lines total, plus a
                                        variable now to store the address. Not quite as clean as using a
                                        simple "align 256" in AS80, but it works.



> If you are running from flash, you could allocate 256 bytes and then
> attempt to realloc only as many bytes as you need to get an aligned
> chunk in there somewhere. Probably the allocator will not change the
> location, but you need to verify that the address does not change.
                                        >

                                        Yeah it'd be a little bit trickier if I were using rom, which
                                        fortunately I'm not at the moment. But I could probably write a
                                        little app in C or Perl or something to read the symbols file SDCC
                                        produces to get the offsets, then shift the font data around to be
                                        aligned, and then also change a location that pointed to the font
                                        data position.

                                        I found the other day that I could look at the symbols file and use a
                                        calculator to manually pad the area above the font data, but it meant
                                        updating the padding value fairly often while I was working on it.
                                        But I suppose that once I got all that code squared away, it'd be a
                                        simple method to do the job, without wasting any more bytes than
                                        necessary.

                                        Actually, now that I think about it, I could probably whip together a
                                        Perl script that could read the symbols file, then rewrite a one-line
                                        include file with the number of padding bytes I need to align the
                                        font data, then have it recompile again.

                                        Anyhoo I'm just rambling now, so thanks for the tips!
                                      • John R. Hogerhuis
                                        Message 19 of 19 , Aug 26, 2007
                                          On 8/26/07, Jeff <fyberoptic1979@...> wrote:
                                          > --- In mailstation@yahoogroups.com, "John R. Hogerhuis" <jhoger@...>
                                          > wrote:

                                          > Actually, now that I think about it, I could probably whip together a
                                          > Perl script that could read the symbols file, then rewrite a one-line
                                          > include file with the number of padding bytes I need to align the
                                          > font data, then have it recompile again.
                                          >

                                          That would work. That would get you to an average case of 128 bytes
                                          lost instead of 256. Worst case is 255 I guess.

                                          A word of advice from another Perl programmer: 'use bytes;'

Perl will, by a set of defined rules which to me looks like magic,
spontaneously decide to consider a string as Unicode. Try to unpack
that beastie, and hilarity ensues. Data::HexDump will show it as it
is, but unpack will do some translations you don't expect.

                                          So if you are doing a lot of binary manipulation but you don't care
                                          about unicode, just use bytes; and the problem will not appear.

                                          (I learned this recently... I know, very reasonable engineers will
                                          ignore this advice right up until it kicks them in the butt... I
                                          probably would ignore it myself.)


                                          Keep up the good work here... with a reasonable C environment I may
                                          have to dust off my mailstation too :-) It would be fun to boot ZCN or
                                          CP/M on the MS. I also have on my "one of these days" projects
transplanting a MS PCB and display into a Tandy 102 case. The T102's
ROM is written for the 8085, a much simpler offshoot of the 8080 than
the Z80, so it might be possible to port that ROM over.

                                          -- John.