
Mailstation Emulation

  • fyberoptic1979
    Message 1 of 20 , Dec 27, 2009
      Hiya folks, been a while since I checked here. Nice to see there's at least been a few posts recently, though!

      Anyhow, I was trying to get a version of CP/M written a while back for the Mailstation, and even did that hardware mod for banking out the first 16KB of address space with ram just specifically for that. But then a combination of distractions as well as the lofty goals of writing like a storage system for the Flash memory and everything just kind of pulled me away from Mailstation work.

      Part was laziness too though, I admit, since it's tedious to write code on the PC, compile, transfer it to the Mailstation, see if it works. If not, try to debug what happened by trying to write things to the screen, etc. And even then you're not getting all the information you might want for debugging purposes, especially if you crashed it. Definitely a pain.

      So recently, when the notion of messing around with the MS again started rattling around in my head, I realized that the only reasonable way for me to continue developing anything elaborate was to emulate the hardware on the PC. More lofty goals!

      But the more I thought about it, the more I realized that the hardest part was going to be the CPU. And as luck would have it, I found something called libz80 to do the job for me. It has hooks for reading/writing memory and I/O, which works great for all the device/page switching that the Mailstation does.

      Well, turns out, libz80 has a ton of emulation bugs, which were messing me up for a few days. I've corrected many of them in the source (it's very well written and easy to modify compared to another z80 emulation library I saw), and notified the author, who seemed eager to get a fixed version up as soon as possible.

      So now, after much debugging and digging through assembly output, I've successfully gotten to what appears to be a Mailstation error screen, and then the cpu is intentionally halted.  Here's an image:


      I have no idea what is resulting in the behavior at this point, but it's the first visible indication of progress so far, so I thought I'd share it for anyone interested.

      As for the technical details:

      - The Mailstation memory map is properly emulated, with codeflash page 0 in the first 16K, and RAM page 0 in the last 16K.
      - Device/page switching via I/O ports is functional for the middle two 16K slots
      - Devices emulated so far: RAM (read/write), Codeflash (read), Dataflash (read/write), LCD (write)
      - Dataflash has sector-erase and byte-program chip functions emulated
      - LCD is currently written into buffers (one for each LCD half), and later dumped to separate files.  The LCD CAS bit is properly emulated to make this possible.  I've written a separate utility to convert the raw binary files into viewable BMP images.


      I've started writing in PDCurses support to give a debugger interface. Though I've also done /raw and /silent command line switches. The former dumps disassembly and debug output straight to the screen, which can be redirected to a file. The latter disables all output, speeding emulation up considerably. I've actually done 100% of my debugging so far by using /raw and dumping to a text file, which has grown upwards of 200MB! I'm hoping to get the debugging interface working eventually to avoid this roundabout way of doing it.

      Another possibly interesting tidbit is that I'm modifying the codeflash at emulator startup, and changing the Mailstation's delay() function to simply RET immediately. It was outputting tons of useless information during that call. I don't think this will hurt my emulation at all, especially since I'm not even emulating interrupts/timers yet.

      I'm running on the Mailstation v2.53 firmware for this, with a dataflash dump I made from my v3.03 firmware. I have no idea if this will cause issues, but it's the only dataflash image I have. The reason I'm using the 2.53 firmware is because I have a very commented disassembly file of codeflash page 0 for that version, which has helped a ton in debugging. I would love to have more disassembled pages with comments like this, if anyone has them! I've spent a lot of time commenting some of the later pages myself to the best of my ability, which has slowed me down a lot.

      Anyhow, the next biggest thing is to get the LCD actually emulated visually in real-time. In order to make it all as cross-platform as possible, I might be able to do it with SDL, but I'll have to see what I can do.

      The other big thing is to track down why the Mailstation is halting after showing this screen. It'll take a lot more digging through disassembled Mailstation source I guess, since the initialization routine it was running was in another page of the codeflash. But I'm still making progress, albeit slowly!




    • fyberoptic1979
      Message 2 of 20 , Dec 29, 2009

        I've made some progress, after muchhhh firmware disassembly, tracing through the startup procedure.  

        First up, I've gone to SDL for graphics, so now the LCD can be updated in mostly real-time (not too often though or it slows down emulation considerably).

        I emulated the power and battery bits of port 9, so that it actually thinks it's not about to burn out.  This got rid of the previous screen I was seeing (which I assume was a low battery warning, despite the missing text), and now simply displayed the logo instead (without the blank message box on top) before halting.

        So after a while of more tracing through things, I realized that maybe I just needed to warm-boot the system, since it does have a cold-boot procedure too, and it seemed to be running this every time (messing with dataflash, setting up ram, etc).  So upon receiving the CPU halt, I told it to simply reboot automatically instead (keeping ram/dataflash changes intact).  Then, lo and behold:






        Same shot as above, except from the v3.03 firmware:


        I think maybe an interrupt automatically wakes it back up normally instead of having to push the power button again?  I'll have to look into it more.  

        Anyway, I tried the v3.03 firmware again to see if that would make a difference due to the dataflash image I'm using, but I still get the reset menu.  Which I later remembered is what normally happens when you cold-boot a Mailstation.  

        I also tried setting the "bootstate" ram byte to 0x5a at startup to avoid all the cold-boot initialization stuff, to maybe get straight to the main menu, but it just perpetually reboots then.  I'm obviously missing something it needs/wants to work properly in that respect.

        So it looks like the next step is to properly emulate keypresses, in order to get past this screen!  That'll require triggering an interrupt too.  Only part I'm worried about is that I seem to recall the key scanning function relies on a timer to know how long you've held down a key, and I don't know if I'll ever be able to emulate that with complete accuracy (especially since the Z80 cpu library I'm using just emulates instructions, not clock cycles).  But we'll see.



      • Donald H
        Message 3 of 20 , Dec 29, 2009
          Looks like nice work.

          Are you running this emulator under Linux or Windows?

          don

          --- In mailstation@yahoogroups.com, "fyberoptic1979" <fyberoptic1979@...> wrote:
          >
          > Hiya folks, been a while since I checked here. Nice to see there's at
          > least been a few posts recently, though!
          >
          > [snip]
          >
        • FyberOptic
          Message 4 of 20 , Dec 30, 2009
            --- In mailstation@yahoogroups.com, "Donald H" <donhamilton2002@...> wrote:
            >
            > Looks like nice work.
            >
            > ARe you running this emulator under Linux or Windows ?
            >
            > don
            >


            I'm compiling/running it under Windows at the moment, though I'm thinking it won't take much in terms of tweaks to make it run in Linux. I've never tried writing an SDL-based app in Linux, though, so at this point I don't know if it involves anything extra or not.
          • Donald H
            Message 5 of 20 , Dec 30, 2009
              --- In mailstation@yahoogroups.com, "FyberOptic" <fyberoptic@...> wrote:
              >
              > I'm compiling/running it under Windows at the moment, though I'm thinking it won't take much in terms of tweaks to make it run in Linux. I've never tried writing an SDL-based app in Linux, though, so at this point I don't know if it involves anything extra or not.
              >

              I am new to emulators, so this would act as a learning tool.

              Will you be making your code available ?

              Thanks

              don
            • FyberOptic
              Message 6 of 20 , Dec 30, 2009
                A little more progress.

                Turns out, the Mailstation halting after showing the logo was simply standard routine. The firmware runs through the message loop checking for things to do, and when there are no more, it HALTs the CPU. The rest of the hardware stays running, while the CPU waits for an interrupt to wake it up. When that happens, the interrupt routine is triggered, new events are possibly added to the message queue based on what the interrupt was, and then it returns into the message loop to do it all over again.

                Originally, I was resetting the Mailstation emulation after getting stuck at the logo, and then managing to get to the settings reset menu. Turns out, when I manually pumped a bunch of keyboard interrupts while at that logo screen (repeatedly running the message loop again), it kept doing things. It eventually popped a dialog box up:

                 

                Yet again, there's no text, just like in that low battery warning I got originally. I don't know what's wrong with that aspect. Maybe there's still a bug in the Z80 emulation?? Anyway, eventually I deduced that this must be a configuration error dialog (since I'm using dataflash from a different firmware version). But, I had no way to push the button to continue.

                Meanwhile, I came up with a crude way to emulate hardware timing, by incrementing a counter every time a byte is read or written from address space (this happens when reading in instructions as well as data). So from there, I was able to implement a time16 and a keyboard interrupt to happen automatically, intertwined with port 3's interrupt mask to know whether they should be triggered (and handle when an interrupt gets "reset" during the interrupt routine). So now the error dialog eventually popped up automatically whenever I started up the emulator. But I still couldn't press enter to bypass it. Resetting the Mailstation still took me to that "Reset Settings" menu, mind you, but I couldn't do anything there either.

                So, I put together a way to emulate the keyboard matrix hardware (required ORing and ANDing of values to respond to how the Mailstation checks the whole grid at a time to see if it even needs to process individual keys), and then quickly tacked in support for the enter key based on my actual keyboard's input. Due to the slowness of emulation at the moment, I had to hold the key in just slightly longer than on the real hardware, but it totally worked. It closed the error dialog, and moved along:



                I really need to figure out why the text isn't printing. But yeah, it's obviously the configuration screen. I just can't type in any settings yet since the enter key is all that's tied to my actual keyboard. When you keep pressing it, the cursor jumps to each setting's area on the screen, and eventually jumps to the top again. I've gotta come up with a way to translate PC scancodes to the memory array I'm using to replicate the keyboard matrix, then I should be all set on input.


                Well, then I had a thought. If v2.53's firmware didn't like my dataflash and wanted me to reset everything, then what would happen with the v3.03 firmware which actually goes with that dataflash image? Turns out, it must like it just fine, because I'm getting no errors. But I'm not getting anything else, either. While sitting at the logo screen, it waits for keyboard interrupts to happen for a little while, and then eventually changes the interrupt mask to 0x39 (00111001). That means it's no longer listening for the keyboard (which normally triggers 64 times a second I think). From what we know (or what I know based on stuff from here), these interrupts on now are "null", "null", "time16", and "maybe rtc". My time16 interrupt keeps going off approximately once a second (since I believe that was the proper rate), at least. I tried manually triggering rtc interrupts, but haven't gotten anything to happen. So I don't know what's going on at this point without more disassembling.

                Here's the v3.03 firmware logo if anyone's interested:



                It might be worth noting that if I use garbage for dataflash with v3.03 instead of the image file I have of it, then I get identical behavior to v2.53 so far: an error dialog after the logo, and then proceeding to the configuration window upon pressing enter. So I'm left to assume that even when v2.53's configuration is set (once I get more keyboard support), then I'll get stuck at the startup somewhere too.

                Anyway, I guess that's all I got for now. I just really want to fix that text rendering problem!
              • FyberOptic
                Message 7 of 20 , Dec 30, 2009
                  --- In mailstation@yahoogroups.com, "Donald H" <donhamilton2002@...> wrote:
                  >
                  > I am new to emulators, so this would act as a learning tool.
                  >
                  > Will you be making your code available ?
                  >
                  > Thanks
                  >
                  > don
                  >

                  Yeah I don't see why not. I released all the source to Mailstation things I've written before and all. Right now it's a mess though, full of lots of debugging stuff that would only make sense to me. Not to mention, it's not very functional just yet!
                • cyranojones_lalp
                  Message 8 of 20 , Dec 31, 2009
                    --- FyberOptic wrote:

                    > Turns out, the Mailstation halting after showing the logo was simply
                    > standard routine. The firmware runs through the message loop checking
                    > for things to do, and when there are no more, it HALTs the CPU. The
                    > rest of the hardware stays running, while the CPU waits for an interrupt
                    > to wake it up. When that happens, the interrupt routine is triggered,
                    > new events are possibly added to the message queue based on what the
                    > interrupt was, and then it returns into the message loop to do it all
                    > over again.

                    I saw your message the other night, and was stuck trying to think
                    of why the dialog box popped up.
                    And then I fell asleep before finishing the post I was working on...

                    Yeah, that's why it halts.

                    It never occurred to me that it had emptied the event queue, and
                    was supposed to halt. I have no guess why the text is not printing.

                    I was going to mention that you will not get past the splash screen
                    until you emulate the 60 Hz interrupt. In addition to scanning
                    the keyboard, it also increments a set of ten timers, and these
                    timers are used whenever they put something on the screen that
                    needs to change after some delay. Such as the splash being
                    erased, and moving on to the main menu (or user select sometimes).

                    They set a timer, and return to the os. (As Mr. Popeil says,
                    "you just set it... and forget it!!!) They never just spin in
                    a delay loop. And when the os has another event for that app,
                    the os calls the app, passing the event as param.

                    Any app that uses timers also implements a response for timer
                    events. The splash is a simple app that just displays
                    the splash image, sets timer, and then when it gets the timer
                    event, it makes a call that changes the current app to
                    either the main menu, or if there is more than one user,
                    the select user app (or when no user accts are set up yet,
                    the create user app).

                    > this must be a configuration error dialog (since I'm using dataflash
                    > from a different firmware version).

                    I think you can wipe the dataflash, and it will init it. IIRC, the
                    flag that holds the dataflash state is in the 2nd-to-last sector of
                    dataflash (about 10 bytes at the start of the sector, and little or
                    nothing else in the rest of it). Preserve the last sector;
                    that is where your serial number is stored. IIRC, the "flash
                    test" that you can run from test mode walks on all but that
                    last sector with the "test data". Everything but the serial number
                    will be re-initialized after the test. If you have any apps in
                    the loadable-app space, you can skip wiping them, and I think
                    they will survive the re-init (I could be wrong; the flash test
                    does wipe that area of dataflash). It is very possible that
                    zeroing out the first 2 bytes in the 2nd-to-last dataflash sector is
                    all you need to do to cause it to be re-init'ed (not sure tho).

                    > from there, I was able to implement a time16 and a
                    > keyboard interrupt to

                    I don't recall ever finding anything that used "time16". It
                    just increments a 16 bit counter every time that int is received,
                    but I never found anything that used that count value.

                    "Time32" (named simply 'coz it was 32 bits, v/s 16 bits) is used
                    for a lot of stuff. It gets incremented by 16 by the same
                    int as keyscan. The keyscan int is roughly 60 Hz, or about
                    a 16 millisecond period, so "time32" is roughly in milliseconds.

                    That 60 Hz interrupt does 3 things:
                    1) the keyscan.
                    2) increments time32 by 16.
                    3) increments each of ten timers......
                    Wait... I'll go out and come in again...

                    That 60 Hz interrupt does 12 things:
                    1) the keyscan.
                    2) increments time32 by 16.
                    3 thru 12) increments each of ten timers by 1.

                    > happen automatically, intertwined with port 3's interrupt mask
                    > to know whether they should be triggered (and handle when an
                    > interrupt gets "reset" during the interrupt routine).

                    It's not clear to me if the P3-out is a "mask", or if it is just
                    used to reset the corresponding bit of the register that feeds
                    P3-in (where P3 in bits are set by the various int inputs).

                    > I've gotta come up with
                    > a way to translate PC scancodes to the memory array I'm using to
                    > replicate the keyboard matrix, then I should be all set on input.

                    <kluge>
                    You could just disable the keyscan, and replace it with code
                    that reads an unused port, and calls the put_key_in_buffer
                    routine with that value. The emulation for that new port
                    could just feed the keycodes for any keys pressed on pc kbd.
                    I don't remember for sure what is stored in that keybuffer,
                    though. It might be the row/col data, along with up/down/shift
                    info.
                    That would make it a bit harder, possibly even worse than actually
                    scanning an emulated keyboard. Or maybe skip the keybuffer,
                    and just feed keyevents into the event queue? Pretty sure
                    those are ascii codes.
                    (I'm just thinkin' out loud, not sure any of this is a good idea.)
                    </kluge>

                    > My time16 interrupt keeps going off approximately once
                    > a second (since I believe that was the proper rate), at least.
                    > I tried
                    > manually triggering rtc interrupts,

                    I think the only one that is important at this point is the
                    60 Hz keyscan-etc. The rtc might be important as far as
                    waking up the cpu at the set mail-download time, and for the
                    date and time to be set right when you power up, but as far as emulating, prolly not too important.

                    > It might be worth noting that if I use garbage for dataflash
                    > with v3.03 instead of the image file I have of it, then I get
                    > identical behavior to v2.53 so far: an error dialog after the
                    > logo, and then proceeding to the configuration window upon
                    > pressing enter.

                    This is probably what it is supposed to do. The user acct data
                    is in the dataflash, so if the dataflash is trashed, after it is
                    re-initialized you need to enter the user account info.

                    > So I'm left to assume that
                    > even when v2.53's configuration is set (once I get more keyboard
                    > support), then I'll get stuck at the startup somewhere too.

                    When you get the timers working right, I bet it won't get stuck!

                    > Anyway, I guess that's all I got for now. I just really want
                    > to fix that text rendering problem!

                    You need to set some breakpoints, or at least "flagpoints".
                    I would prolly just compile some in to the code, but you
                    could also make commandline switches, or a config file.

                    The idea being to whittle down your log to a comprehensible
                    size. So, you set some addresses that you are interested to
                    know if it is getting to. You can have it just log the
                    addresses on your "watch list", in the order it gets to them.

                    Maybe a different switch to stop at certain addresses. Then
                    you can box in the code where the text is supposed to be copied,
                    and even dump the addresses involved (third switch).

                    I bet it is gonna turn out to be some kind of banking error.
                    There are many places where a codeflash page needs to be banked
                    in, just to copy a string from that page to a local ram var.

                    (OK, at the top, where I said "the other night", add another
                    night to that, coz it's now "tomorrow" and I still have not
                    hit "send")

                    CJ
                  • FyberOptic
                    Message 9 of 20 , Dec 31, 2009
                      --- In mailstation@yahoogroups.com, "cyranojones_lalp" <cyranojones_lalp@...> wrote:
                      >
                      > Any app that uses timers also implements a response for timer
                      > events. The splash is a simple app that just displays
                      > the splash image, sets timer, and then when it gets the timer
                      > event, it makes a call that changes the current app to
                      > either the main menu, or if there is more than one user,
                      > the select user app (or when no user accts are set up yet,
                      > the create user app).

                      I hadn't realized that the splash was an app too. That's good to know.


                      >
                      > I think you can wipe the dataflash, and it will init it. IIRC, the
                      > flag that holds the dataflash state is in 2nd to last sector of dataflash
                      > (about 10 bytes at start of sector, and nothing else in
                      > rest of sector (or not much else????). Preserve the last sector,
                      > that is where your serial number is stored. IIRC, the "flash
                      > test" that you can run from test mode walks on all but that
                      > last sector with the "test data". Everything but the serial number
                      > will be re-initialized after the test. If you have any apps in
                      > the loadable-app space, you can skip wiping them, and I think
                      > they will survive the re-init (I could be wrong. The flash test
                      > does wipe that area of dataflash). It is very possible that
                      > zeroing out the first 2 bytes in 2nd to last dataflash sector is
                      > all you need to do to cause it to be re-init'ed (not sure tho).
                      >

                      I suppose the serial number isn't really important, since you still have to have a username/password to log into the email account, and I doubt they cared what Mailstation unit you logged into the official Mailstation email server with. I wonder if the serial is even sent to the server when fetching/sending mails.

                      Something like Tivo on the other hand has the serial on an eprom, since that's tied directly to your account. Even if you replace the hard drive, it's still going to work with your account afterward. Though people have managed to clone those in order to transfer their account to another one when the system board dies.



                      >
                      > "Time32" (named simply 'coz it was 32 bits, v/s 16 bits) is used
                      > for a lot of stuff. It gets incremented by 16 by the same
                      > int as keyscan. The keyscan int is roughly 60 Hz, or about
                      > a 16 millisecond period, so "time32" is roughly in milliseconds.
                      >

                      How did you deduce that the keyboard interrupt was 60hz? I'm curious, since I've seen you mention that before, but I have some evidence that might prove otherwise. Some of it I came to realize just yesterday, even.

                      Before, when I was doing all that work on the Mailstation and hooking the ISR for testing things, I placed a counter variable inside the keyboard loop. All it did was count up. I called it kbdtest. In the time16 interrupt, I would copy the value of kbdtest into kbdmax, then reset kbdtest to 0. kbdmax would be displayed on the screen by separate code outside the ISR. Since time16 apparently hit at exactly 1-second intervals (because I believe I timed it by hand as such), kbdmax would be a semi-accurate way of determining the speed of the keyboard interrupt.

                      As it turned out, kbdmax was resulting in a constant value of 64. So the keyboard interrupt appeared to be happening 64 times a second, 64hz, etc.
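
                      The measurement above can be sketched like this (plain C standing in for the Z80 code that actually ran on the Mailstation; kbdtest/kbdmax are the names from the post):

```c
/* Plain-C sketch of the measurement described above. The real version was
 * Z80 code hooked into the Mailstation's ISR; this just shows the
 * bookkeeping. */

static unsigned kbdtest = 0;  /* bumped on every keyboard interrupt */
static unsigned kbdmax  = 0;  /* snapshot taken on every time16 interrupt */

/* Called (on real hardware) on every keyboard interrupt. */
static void keyboard_isr(void) { kbdtest++; }

/* Called once per second: record the count and start over, so kbdmax
 * ends up holding the keyboard interrupt rate in Hz. */
static void time16_isr(void)
{
    kbdmax  = kbdtest;
    kbdtest = 0;
}
```

After one second's worth of interrupts, kbdmax reads 64, which is where the 64hz figure comes from.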

                      And now recently, when searching for possible info on the RTC of the Mailstation (or one similar), I came upon something rather interesting. It appears that many RTC chips have a programmable square wave generator, with values in hz like 16, 32, 64, 128, 256, etc. This made me think that maybe the keyboard interrupt is being generated by a programmable RTC.

                      More possible proof of this is when I was tinkering with port 0x2F a long time ago. This is what I learned back then:

                      - Setting bits 4,6 makes time16 interrupt 2x slower (kbdmax = 128)
                      - Setting bits 5,6 makes time16 interrupt 4x slower (kbdmax = 256)
                      - Setting bits 4,5,6 makes time16 interrupt 8x slower (kbdmax = 512)
                      - When bit 6 is clear, but bit 4, bit 5, or both are set, the time16 interrupt doesn't seem to ever occur.

                      Well, back then, I naturally assumed that changing 0x2F was affecting the time16 interval. But now, after learning of these programmable square waves, maybe 0x2F is changing the speed of the keyboard interrupt, not the time16 one. It would make sense if so. Let me clarify:

                      Now remember, kbdtest was incrementing in the keyboard interrupt, and kbdmax was saving this value in the time16 interrupt. My original assumption was that 0x2F was slowing time16, hence more opportunity for kbdtest to reach a higher value before time16 hit and saved it. Well, what if you turn that around and assume it's affecting the keyboard interrupt instead of time16, making it happen FASTER? That would cause kbdtest to count faster, which is then recorded to kbdmax at what is likely still the normal 1-second interval of time16.

                      If so, that would mean:
                      - Setting bits 4,6 makes keyboard interrupt happen at 128hz
                      - Setting bits 5,6 makes keyboard interrupt happen at 256hz
                      - Setting bits 4,5,6 makes keyboard interrupt happen at 512hz

                      These values correspond to what many RTC square waves are capable of emitting (along with the 64hz I've assumed the Mailstation normally runs the keyboard at). I looked at several RTC chips, and many had this programmability, but I couldn't ever find one with registers similar to what the Mailstation uses. Particularly, they store the two BCD digits for secs/mins/hours/etc in a single byte at a particular I/O port, whereas the Mailstation seems to store each individual BCD digit in two separate ports, based on what's been documented so far.
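
                      If that reading of port 0x2F is right, the bit-to-frequency mapping would look something like this. This is a purely hypothetical decode of bits 4-6 based only on the observations above, not confirmed hardware behavior:

```c
/* Hypothetical decode of port 0x2F bits 4-6, assuming the interpretation
 * in the post: bit 6 (0x40) enables the square wave, bits 4/5 (0x10/0x20)
 * pick the prescaler. Returns the keyboard interrupt rate in Hz, or 0 if
 * the interrupt never fires. None of this is verified against hardware. */
static int keyscan_hz(unsigned char p2f)
{
    if (!(p2f & 0x40))               /* bit 6 clear */
        return (p2f & 0x30) ? 0      /* bits 4/5 set: int never occurs */
                            : 64;    /* default observed rate */
    switch (p2f & 0x30) {
        case 0x10: return 128;       /* bits 4,6 */
        case 0x20: return 256;       /* bits 5,6 */
        case 0x30: return 512;       /* bits 4,5,6 */
        default:   return 64;        /* bit 6 alone: assumed default */
    }
}
```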

                      Anyway, I guess the only way to prove any of this is true would be for me to display the value of time16 on the screen constantly, while changing 0x2F. If 0x2F is in fact affecting the speed of the keyboard interrupt, then printing time16 on the screen constantly would still show its value updating in 1 second increments no matter what. If my new assumption is wrong, meaning it's affecting time16 instead like I originally assumed, then the counter on the screen would happen slower. I'll try this sooner or later to see how it goes.

                      Either way, I just wanted to point out why I think normally the keyboard interrupt happens at 64hz instead of 60, but I'm still curious of your reasoning in case I'm still missing something.




                      >
                      > > happen automatically, intertwined with port 3's interrupt mask
                      > > to know whether they should be triggered (and handle when an
                      > > interrupt gets "reset" during the interrupt routine).
                      >
                      > It's not clear to me if the P3-out is a "mask", or if it is just
                      > used to reset the corresponding bit of the register that feeds
                      > P3-in (where P3 in bits are set by the various int inputs).

                      I think I may have tested this before, but I don't remember. Either way, all a person needs to do is hook the ISR, and make a value increment in, say, the keyboard interrupt. Then change the interrupt mask to disable keyboard interrupts. If the value stops counting, then you know the mask is in fact disabling that interrupt. Whenever I get around to writing the code again to check the time16 rate when changing 0x2F, I'll check this too.




                      >
                      > <kluge>
                      > You could just disable the keyscan, and replace it with code
                      > that reads an unused port, and calls the put_key_in_buffer
                      > routine with that value. The emulation for that new port
                      > could just feed the keycodes for any keys pressed on pc kbd.
                      > I don't remember for sure what is stored in that keybuffer,
                      > though. It might be the row/col data, along with up/down/shift
                      > info.
                      > That would make it a bit harder, possibly even worse than actually
                      > scanning an emulated keyboard. Or maybe skip the keybuffer,
                      > and just feed keyevents into the event queue? Pretty sure
                      > those are ascii codes.
                      > (I'm just thinkin' out loud, not sure any of this is a good idea.)
                      > </kluge>

                      It's funny you even mention that, because honestly that was my first thought: to just dump keys straight into the buffer. But this would only work for emulating the Mailstation OS, and I eventually want all of my custom code to work with it as closely to the real hardware as possible. So I did end up creating a translation table, which wasn't as bad as I thought, actually. I used an array of 10 rows/8 columns, which stores the PC scancode for each associated key of the Mailstation key matrix. I have to scan through the array's rows and columns every time a key is pressed/released to match it with a scancode in the array (so 80 iterations, tops). But once it's found, I can then easily take the row/column values from the loop to update the actual bitwise matrix array (which is just 10 bytes representing the rows, since each column is an individual bit), which is what I use to then actually emulate the output of port 1 (based on the input of port 1, 2.0, and 2.1).

                      Not every MS key is emulated yet, and some have been put elsewhere ("Home" is the Home key, though "Back" is the End key). I'm going to emulate "Function" as the Control key eventually too, but for now control combos are how I send special commands to my emulator, so I'll have to change that.
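
                      A minimal sketch of that translation-table scheme follows. The scancode values here are placeholders rather than real PC or Mailstation codes, and it assumes active-low column bits (1 = key up), which matching real matrix-scan hardware would want:

```c
/* Sketch of the 10x8 translation table described above: one host scancode
 * per Mailstation matrix position, plus the 10-byte bitwise matrix that
 * feeds the emulated port 1 output. Scancode values are made up. */

#define ROWS 10
#define COLS 8

static int scantable[ROWS][COLS];    /* host scancode per matrix position */
static unsigned char matrix[ROWS];   /* bit per column; 1 = key released */

static void matrix_init(void)
{
    for (int r = 0; r < ROWS; r++) {
        matrix[r] = 0xFF;                        /* all keys up */
        for (int c = 0; c < COLS; c++)
            scantable[r][c] = r * COLS + c + 1;  /* placeholder codes */
    }
}

/* Scan the table for a host scancode (80 iterations, tops) and update the
 * matching matrix bit. Returns 0 if the key isn't mapped. */
static int handle_key(int scancode, int pressed)
{
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (scantable[r][c] == scancode) {
                if (pressed) matrix[r] &= (unsigned char)~(1 << c);
                else         matrix[r] |= (unsigned char)(1 << c);
                return 1;
            }
    return 0;
}
```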


                      >
                      > I think the only one that is important at this point is the
                      > 60 Hz keyscan-etc. The rtc might be important as far as
                      > waking up the cpu at the set mail-download time, and for the
                      > date and time to be set right when you power up, but as far as emulating, prolly not too important.

                      As I mentioned earlier, I was looking into a lot of different RTC chips, and most of them did have an alarm feature. So that might be tied to that interrupt strictly to wake it up to check for mail, as you mentioned. Makes a lot of sense.


                      >
                      > > So I'm left to assume that
                      > > even when v2.53's configuration is set (once I get more keyboard
                      > > support), then I'll get stuck at the startup somewhere too.
                      >
                      > When you get the timers working right, I bet it won't get stuck!

                      Well that's the thing, I do have timers working.

                      But, now that the keyboard is emulated, at a cold boot I can enter configuration info (even though I can't see it as I type it, unless I type so much that it scrolls off to the right; but I can see password asterisks fine). I save, and get to the user selection screen:

                      [screenshot]

                      and then to the main menu:

                      [screenshot]

                      I can even use most of the items in the menu without issue (aside from some missing text at times, and the create new mail app crashing). This is on v2.53 firmware btw.

                      But, when I soft-reset after configuring it, my original assumption was correct: v2.53 sticks at the splash screen, just like v3.03 did with the proper dataflash configuration already there. I even tried changing the emulator to always fire keyboard/time16 interrupts regardless of the interrupt mask, and it makes no difference.

                      I discovered something a short while ago, however. I was originally assuming that the Mailstation was changing the interrupt mask from 0x22 to 0x39. But I added in a feature to the emulator to dump ram with a keypress. So, I dump ram while the interrupt mask is still 0x22, and then again when the mask changes to 0x39. Turns out, it's not changing the mask. It's changing EVERYTHING to 0x39. Ram page 1 is totally full of it, and page 0 is almost entirely, aside from values which I assume are getting set during the message queue loop and such when the interrupt hits.

                      And you know what? I bet I just figured out what it is, because there's a few 0x39s even in my first ram dump. Right before they start, there's "Jan ". I bet it's reading the RTC and I'm returning invalid value(s)!

                      YEP! I just now tried it, returning 0x01 for ports 0x10 through 0x1C, and now it warm boots just fine! Even the create new mail app works now (since it was prolly reading the date/time to know what to put in the email).

                      All unhandled IO ports are actually just handled like RAM: stored in and returned with an array, which I zero out at startup. So it was returning 0 for all RTC values originally, which obviously was breaking something. I think I'm actually going to tie the Mailstation RTC to my PC's clock so that it's always correct, once I figure out how to represent all the values (and convert them to BCD).
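
                      A sketch of tying the emulated RTC to the PC clock, assuming one BCD digit per port somewhere in the 0x10-0x1C range as described; the exact port assignments below are guesses for illustration, not documented hardware:

```c
#include <time.h>

/* Sketch: refresh the emulated RTC ports from a host-clock struct tm,
 * one decimal digit per port (ones digit, then tens digit). The port
 * numbers are hypothetical placeholders within the 0x10-0x1C range. */

static unsigned char ioports[256];   /* unhandled ports stored like RAM */

static void rtc_update(const struct tm *t)
{
    ioports[0x10] = (unsigned char)(t->tm_sec  % 10);
    ioports[0x11] = (unsigned char)(t->tm_sec  / 10);
    ioports[0x12] = (unsigned char)(t->tm_min  % 10);
    ioports[0x13] = (unsigned char)(t->tm_min  / 10);
    ioports[0x14] = (unsigned char)(t->tm_hour % 10);
    ioports[0x15] = (unsigned char)(t->tm_hour / 10);
    /* ...and so on for day/month/year up through 0x1C */
}
```

In the emulator you'd call this with `localtime()` output whenever the firmware reads any RTC port, so the time is always right without the OS ever setting it.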

                      So now, aside from things like the modem, printer port, etc, it seems everything is working enough for the Mailstation to not complain, aside from the missing text in places.


                      > Maybe a different switch to stop at certain addresses. Then
                      > you can box in the code where the text is supposed to be copied,
                      > and even dump the addresses involved (third switch).

                      Yesterday, I changed the emulator back to return low battery status, so that I could get that "The battery power is running low, the system will power off automatically." error. I figured using this for debugging would be best because it happens during the startup, before lots of other junk clutters my debug log. Anyhow, I created a ram dump right after the "low battery" message box appeared. Turns out, it's really getting the text string it needs to print, because it's in two different memory locations, which I've traced back to the code writing them there. All I can figure for the moment is that maybe some math error is happening when it's calculating the width/height of the text? I'll have to decipher that function maybe and step through all of it, I dunno. Such a pain.

                      I did have a thought as I was typing this, that maybe the Mailstation was reading the values of the LCD and ORing the text onto what's already there. But I'm not seeing any LCD read notices, not to mention I also remember that it uses an LCD buffer in ram, which is where it likely would do such comparisons anyway.


                      >
                      > > Anyway, I guess that's all I got for now. I just really want
                      > > to fix that text rendering problem!
                      >
                      > You need to set some breakpoints, or at least "flagpoints".
                      > I would prolly just compile some in to the code, but you
                      > could also make commandline switches, or a config file.
                      >
                      > The idea being to whittle down your log to a comprehensible
                      > size. So, you set some addresses that you are interested to
                      > know if it is getting to. You can have it just log the
                      > addresses on your "watch list", in the order it gets to them.
                      >

                      Breakpoints are a good idea, and I plan to add something like that in. But having a full log of everything has actually been infinitely helpful in tracing down problems, particularly with how buggy this Z80 emulation library was when I first got it.  It was returning the opposite of certain CPU flags, doing push/pop wrong (SP was handled incorrectly), on top of several normal opcodes having emulation problems, etc. It took a while to fix all of that, and being able to search for every instance of a particular opcode executing after I suspected a problem with it was useful in order to see the results.
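
                      For reference on the push/pop bug class mentioned above, correct Z80 stack semantics look like this (a sketch, not libz80's actual code): PUSH pre-decrements SP twice so the low byte lands at the lower address, and POP reads in the reverse order:

```c
/* Reference sketch of Z80 PUSH/POP stack behavior. On a real Z80, PUSH
 * decrements SP, stores the high byte, decrements again, stores the low
 * byte; POP reads low then high, incrementing SP after each read. */

static unsigned char mem[0x10000];   /* 64KB address space */
static unsigned short sp = 0xFFFE;   /* stack pointer */

static void push16(unsigned short val)
{
    mem[--sp] = (unsigned char)(val >> 8);    /* high byte, higher addr */
    mem[--sp] = (unsigned char)(val & 0xFF);  /* low byte, lower addr */
}

static unsigned short pop16(void)
{
    unsigned short val = mem[sp++];                    /* low byte first */
    val |= (unsigned short)(mem[sp++] << 8);           /* then high byte */
    return val;
}
```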

                      To the libz80 author's credit though, he said this was a rewrite of a previous Windows-only version, and I guess he just never had reason to thoroughly put it through its paces like the previous one. I wouldn't still be using it if I didn't think it was well-written, bugs aside. He uses a regex solution to generate the opcode functions before compiling, which lets you modify multiple similar opcodes in one swoop. Otherwise you'd be changing hundreds by hand.

                      I'm not sure if speed will ever be an issue once I get rid of a lot of debugging stuff, but I've been pondering ways I might write my own from scratch if the need arises. I don't think it'll be that hard, actually. Just time-consuming.



                    • FyberOptic
                      Message 10 of 20 , Jan 2, 2010

                        Shew it's late, but I wanted to post a progress report at least.


                        [screenshot]


                        As you can see, the text is fine now!  And ironically, I still have no idea what the problem was.  What I did was actually swap out the Z80 emulation with another library.  Took some reintegrating to make it work with how this other one was designed to operate, but not too much work.  And as soon as I got it to start properly, I immediately realized it was way faster (probably since I compiled it with its assembly optimizations enabled).  And when I got interrupts working, and I got to the first warning dialog about creating a new account, I realized it was fixed.  It was showing text.  And then the configuration window was too.  Great!  

                        So that means there's still opcode(s) which are being handled wrong in libz80, despite all the work I did on it to fix it. And not only was it not showing text, btw, but in the Extras menu, the arrow keys were behaving all wrong as well.  I don't think I ever mentioned that.   But oh well.  I was so tired of staring at page after page of disassembled code and debug output trying to find the error that I decided trying another library was the best way to test where the problem really was.

                        On an even brighter side, this new library, z80em, emulates CPU timing.  In fact, it does this so well, combined with a software interrupt related to this feature which is triggered after so many CPU cycles (which you can specify), that I was able to turn this into my main Mailstation interrupt generator.  And with a bit more timing code in place, I now have it emulating a 12mhz Z80, with 1 second time16 interrupts, and 64hz keyboard interrupts.  The cursor even blinks on the screen at the same rate as on the real hardware.  Awesome!
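
                        The cycle math behind that interrupt generator is straightforward (a sketch of the arithmetic only; z80em's actual callback API may differ):

```c
/* Sketch of deriving the interrupt cadence for a cycle-counted core like
 * z80em: run the CPU for N T-states, then raise the keyboard interrupt,
 * and raise time16 once per second's worth of keyboard interrupts. */

static long cycles_per_interrupt(long cpu_hz, long int_hz)
{
    return cpu_hz / int_hz;
}

/* At 12 MHz with a 64 Hz keyscan:
 *   cycles_per_interrupt(12000000, 64) -> 187500 T-states per keyboard
 *   interrupt, and 64 keyboard interrupts per time16 tick (1 second). */
```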

                        After that, I tied the RTC into my PC's clock, so every time you start the emulator you get the right time.  This means you can't actually set the RTC time via the Mailstation at the moment, though.

                        Trying to do something with the modem just freezes it up, as expected.  I want to figure out how better to emulate that, which I'm sure is in the datasheet if I still have it.  Wouldn't it be neat to emulate a PPP connection with that?  But aside from the modem, I haven't had any problems at all.  I've messed with all the apps, saved messages to my outbox, etc etc.  All good.

                        Anyway, I've done a ton of work today cleaning up the code.  I've also added in the ability to scale the screen 2x, and to even go full-screen.  Seeing the Mailstation OS fill my monitor is both odd and neat!

                        But yeah, I hope to upload a version good enough for you guys to try out tomorrow sometime.  There's just still some things I want to add before I do (like figure out why I have to push the power button twice to turn it off).  For now it'll prolly stay locked at 12mhz, and the interrupt speeds won't be changeable or anything, since there's some experimenting I want to do on the real hardware first to see if I can better understand things.  And considering how well the emulation is going right now, I can prolly test out some code on my PC now before sending my test apps to the Mailstation.  Which is what I wrote this for to begin with!


                      • FyberOptic
                        Message 11 of 20 , Jan 2, 2010
                          I think I've finally prettied up a version of the emulator well enough to release. But until I get a better page for it, here's the directory index:

                          http://www.fybertech.net/mailstation/emulator/


                          The quick start instructions are: download msemu_v01.zip and codeflash.bin, extract the ZIP, and drop the .BIN into the folder with it. Then run msemu.exe.

                          Make sure to look at the readme.txt for info on the keys! You can switch between 2X size and even go fullscreen.

                          In that directory index above, codeflash.bin is the same as ms253.bin, which is v2.53 of the Mailstation firmware. ms303a.bin is v3.03a, which is slightly different. I didn't include any of these in the ZIP because it's probably against copyrights for me to even have them on the website.

                          The emulator looks for a "codeflash.bin" by default to work, so you can either rename other firmwares to this, or you can specify an alternate filename on the command line (or just drag the .BIN onto the EXE to launch with it). This lets you try out different versions, or even your own replacement.

                          I've included a "dataflash.bin" with some generic settings, just so that you can go straight to the main menu when you start it. If you like, you can delete that file, and it'll generate a fresh one the next time you start it.

                          Note that the intro screen's text colors should be yellow, and the default LCD color should be green (check the readme on how to change). If they're not for you, let me know!

                          I'd appreciate feedback, particularly on any problems you might find. There's obviously a lot still not emulated, but apparently there's plenty to make the OS itself run. Just keep in mind that it'll probably freeze up if you try to use the modem!
                        • cyranojones_lalp
                          Message 12 of 20 , Jan 5, 2010
                            > Then run msemu.exe.

                            Hmmmmm... What the heck am I gonna do with an exe...

                            The most recent windows I have is 98, and I have not booted that
                            box up in like a year. I promised myself the next time I boot
                            it, I will back it up. Sooooooooo, I have just been avoiding
                            booting it. And I don't even know if your prog will run
                            on it.

                            Then I thought about this thin client I have here, with
                            win XPe on it, running my magicjack. But it does not
                            have enough space. I guess I could copy the files to a
                            thumb drive.....

                            Then I wondered if it would run with wine, under Ubuntu.

                            It does! Pretty neat!!!

                            > You can switch between 2X size and even go fullscreen.

                            Seems they are reversed with respect to keys in readme.

                            > The emulator looks for a "codeflash.bin"

                            I made a copy of (what I believe is) the 253yr from yahoo group,
                            renamed it codeflash.bin.

                            Seems to work fine, but I get a different checksum in emulator
                            than on an actual 253yr unit I have here. Funny thing is, I am
                            pretty sure I verified Don's dump with an actual 253yr several
                            years ago, and it matched. But it prolly was not this exact
                            same unit. Maybe I changed something in the image I am
                            working with, and forgot??????

                            I get 91ff on my actual unit, and 9254 with my image file.
                            What checksum do you get emulating your 253yr image?

                            > Note that the intro screen's text colors should be yellow,
                            > and the default LCD color should be green (check the readme
                            > on how to change). If they're not for you, let me know!

                            Colors are ok, and can change with the ctrl keys. Took me
                            a bit of head scratching to figure out I needed the right-side
                            ctrl key. You need to make black chars on light-green-tinted
                            background one of the options! ;-)

                            > I'd appreciate feedback, particularly on any problems you
                            > might find. There's obviously a lot still not emulated,
                            > but apparently there's plenty to make the OS itself run.

                            Calculator works. Typed a new message, saved it in outbox,
                            and opened it up again. Even goes into test mode, and passes
                            several tests. The modem test failed, but did not lock it up.
                            (Did not try to send email, though.)

                            Noticed that it remembered it was in test mode (not sure
                            if I quit emu, or just "power cycled" it).

                            Could not finish keyboard test; is there an "@" key, size, or
                            spell check? Or a "get mail" button?

                            I would prefer that "back" was mapped to "esc", I closed
                            emulator by accident more times than I can count. Esc just
                            seems more intuitive for back. Maybe just make "power" quit
                            emu??? Or only quit when in "off" mode???

                            As for why it needs 2 presses, maybe it has something to do
                            with the fact that the "power" does not really go away after the
                            first press? There is a flip-flop chip on the ms board that
                            actually kills power.

                            It would be a lot more fun if I could tweak emu code.
                            For instance, I notice that whenever I press a key in
                            calculator app, emu prints message on text console
                            "dataflash write". It would be fun if it said what
                            address was written.

                            I also think it would be fun to compile a Linux version.

                            I don't know if it is emulator, or wine, but the combo
                            is sucking up over 50% of dual 2.5 GHz AMD cpu.
                            OK, top sez emu itself is taking over 40% of one cpu,
                            and Xorg is taking over 35%, and wine about 20%.

                            Also, I don't know just which program crashed, but it
                            seems like it was when I was trying to switch out of
                            "full screen" mode. Took out the X server, and any
                            program that was running under X. Linux text consoles
                            were still there, along with a great deal of programs
                            just listed with "2009" as the start time in process list.

                            First time this box ever crashed in the ~1 year since I
                            put it together. I rebooted the whole thing, but maybe
                            I could have restarted X. Seemed like a good time for a
                            reboot, though! (I don't think I have rebooted more than
                            5 times since built, and it is up 24-7). Took over half
                            hour to check the disk, I don't really want to do a lot
                            of testing as to just what makes it crash. I think I
                            will avoid the full screen mode, and see if that helps.

                            CJ
                          • FyberOptic
                            Message 13 of 20 , Jan 5, 2010
                              --- In mailstation@yahoogroups.com, "cyranojones_lalp" <cyranojones_lalp@...> wrote:
                              >
                              > Then I wondered if it would run with wine, under Ubuntu.
                              >
                              > It does! Pretty neat!!!

                              Yeah I had it working fine in Wine on Debian when I tried it, and figured any Linux folks could just do that to try it out.


                              >
                              > > You can switch between 2X size and even go fullscreen.
                              >
                              > Seems they are reversed with respect to keys in readme.

                              As soon as you said that, I remembered that I forgot to update the readme when I decided to swap those keys around.


                              >
                              > Seems to work fine, but I get a different checksum in emulator
                              > than on an actual 253yr unit I have here. Funny thing is, I am
                              > pretty sure I verified Don's dump with an actual 253yr several
                              > years ago, and it matched. But it prolly was not this exact
                              > same unit. Maybe I changed something in the image I am
                              > working with, and forgot??????
                              >
                              > I get 91ff on my actual unit, and 9254 with my image file.
                              > What checksum do you get emulating your 253yr image?

                              I only have one Mailstation, the demo unit which runs v3.03a (mail servers can even be set in the configuration). For that version, the hardware gives me a checksum of 0x53d4, but the emulator is showing 0x53e5 when I run the same firmware.

                              I also noticed that my hardware seems to freeze up after getting to that point, whereas the emulator continues on to a battery test.

                              No idea what's going on with either of these things yet. I even tried removing my bounds-limiting for the codeflash (it forces an address wrap at 1MB like I'm assuming the real hardware does on pages 64 and up), just in case, and it gave the same results.

                              I'd have to dig through the ROM Test code to see if the codeflash is the only thing it's testing, or if there's something else being added into the result somehow.
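
                              For anyone curious, the bounds-limiting I mentioned might look something like this sketch in C (names and structure are my own for illustration, not the actual emulator code — the only assumption taken from above is that pages 64 and up wrap back around the 1MB codeflash):

```c
#include <stdint.h>

/* Sketch of the assumed codeflash address wrap: a 1MB part has
   64 pages of 16KB, so page numbers 64 and up are assumed to
   mirror pages 0-63, like the real hardware seems to do. */
#define CODEFLASH_SIZE  (1024 * 1024)               /* 1MB */
#define PAGE_SIZE       (16 * 1024)                 /* 16KB slot */
#define NUM_PAGES       (CODEFLASH_SIZE / PAGE_SIZE)/* 64 pages */

static uint32_t codeflash_addr(unsigned page, unsigned offset)
{
    /* Mask the page so 64..127 mirror 0..63, and keep the
       offset within its 16KB slot. */
    return ((page % NUM_PAGES) * PAGE_SIZE) + (offset % PAGE_SIZE);
}
```

Removing that masking was the "removing my bounds-limiting" experiment mentioned above; either way the checksum came out the same.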


                              >
                              > Could not finish keyboard test, is there an "@ key", size, or
                              > spell check. Or "get mail" button?

                              None of those buttons are assigned yet, since I wasn't sure what to assign them to. I didn't need'em for the testing I was doing at the time, either. But you should be able to assign these to other things now, which I'll get into in a bit.

                              I'll probably want to assign "Get Mail" soon though, since I've been working on trying to emulate the modem chip, and at the moment I have to keep going into the outbox to trigger the modem.


                              >
                              > I would prefer that "back" was mapped to "esc", I closed
                              > emulator by accident more times than I can count. Esc just
                              > seems more intuitive for back. Maybe just make "power" quit
                              > emu??? Or only quit when in "off" mode???

                              I did consider changing it to escape a few times, but as I was testing, I found hitting escape right quick to get back out of it was easier for the time being. I'll change exit to right-control + Q or X or something eventually I guess.

                              The reason "power" doesn't exit is because I want to be able to simulate powering on and off.


                              >
                              > As for why it needs 2 presses, maybe it has something to do
                              > the fact that the "power" does not really go away after the
                              > first press? There is a flip-flop chip on the ms board that
                              > actually kills power.

                              Na, it's taking the two presses to finally acknowledge it, which then runs the shutdown function in the firmware, which toggles the bit in port 0x28 for that flip-flop. I'm actually emulating this bit, and "powering off" when it's changed.

                              It seems that the Mailstation waits for the state of the power button to change somehow before acknowledging it again as being pressed. For example, if you hold F12 while the system is booting, then let it go once you're at the menu, you only have to press it once to power off. I tried various ways to replicate this behavior automatically, but I haven't had much luck yet. I'm thinking it might have something to do with signal bounce on the real hardware, since the power button isn't debounced like the rest of the keys are in the keyboard routine.

                              Hitting the button twice is a minor inconvenience though so I've worked on other stuff for the time being instead.


                              One interesting note: when you power off and back on, the real hardware retains the RAM contents, to my knowledge. But when I retained them in the emulator, the MS would check some ports during startup, before anything was even on the screen (or even before the screen was on?), and then shut itself back down. This happened every time you tried to power it back on. The only solution for the time being is clearing the RAM contents at any power-off until I figure out what the Mailstation is doing.


                              >
                              > It would be a lot more fun if I could tweak emu code.
                              > For instance, I notice that whenever I press a key in
                              > calculator app, emu prints message on text console
                              > "dataflash write". It would be fun if it said what
                              > address was written.

                              The text you see in the console is just very basic output. The "dataflash write" message was actually indicating that the emulator was writing the dataflash contents out to the file, not that the Mailstation was currently modifying the contents at that exact moment (though pretty close to it). It doesn't write out the file for every individual modification, for performance reasons.

                              However, there are debug messages for that, which will not only tell you the current PC when the write is occurring, but also the dataflash address and value being written. Same for sector erases. The debug messages won't work in the version you're using, since I removed that capability when cleaning up the code before. But in v0.1a, you can put /console and/or /debug on the command line. The former spits all IO and other activity to the console, the latter spits it out to a "debug.out" file.

                              Ever since I changed CPU emulation libraries, the debug output has dramatically reduced, since I'm no longer dumping constant disassembly as well. But the tons of IO port requests are still a mess! Eventually I'll add a way to limit the output to just the ports you're interested in.


                              >
                              > I also think it would be fun to compile a Linux version.

                              As it just so happens, v0.1a not only includes the source, but will compile under Linux.

                              As for changing the Mailstation keys as I mentioned earlier, there's an array which holds all the mappings, but you'll probably need that "mailstation_keyboard.html" file I got somewhere before to know which key is what. I'm betting you have it though!


                              >
                              > Also, I don't know just which program crashed, but it
                              > seems like it was when I was trying to switch out of
                              > "full screen" mode. Took out the X server, and any
                              > program that was running under X. Linux text consoles
                              > were still there, along with a great deal of programs
                              > just listed with "2009" as the start time in process list.
                              >

                              I didn't have any crashes under Debian when using it with Wine. Full-screen mode semi-worked, but didn't actually go full screen. It just kind of replaced most of the desktop (but stayed crammed underneath the menu bar). That's about what I expected, even though that sucks.

                              Ironically, it wasn't until I compiled a native Linux binary that I had the full-screen mode crash the application when switching back and forth several times. Even full-screen under a native binary still didn't work right, though.

                              But to be blunt, this is Linux after all, and it's notorious for being problematic at running things full-screen. I've had a lot of trouble with other applications running that way in the past. So your advice of "only in a window" seems the best route, since there's really nothing I can do about it. Going full-screen is all handled through SDL calls.

                              That said, console output is faster under Linux than in Windows! I found that a little surprising.



                              Anyway, you can grab the newer version here:

                              http://www.fybertech.net/mailstation/emulator/msemu_v01a.zip

                              If you have any trouble building it, you can check the build.txt, or give me a holler and I'll try to help.
                            • cyranojones_lalp
                              Message 14 of 20 , Jan 7, 2010
                                --- FyberOptic wrote:
                                >
                                > It seems that the Mailstation waits for the state of the power
                                > button to change somehow before acknowledging it again as being
                                > pressed.
                                > Sometimes, for example, if you hold F12 while the system is
                                > booting, then let it go once you're at the menu, then you only
                                > have to press it once to power off. I tried various ways to
                                > replicate this behavior automatically, but I didn't have much
                                > luck yet. I'm thinking it might have something to do with
                                > the signal bouncing of the real hardware, since the power
                                > button isn't handled for bouncing like the rest of the
                                > keyboard keys are in the keyboard routine.
                                >
                                > Hitting the button twice is a minor inconvenience though so
                                > I've worked on other stuff for the time being instead.

                                Try inverting the sense of power-button input bit.

                                Instead of:
                                case 0x09:
                                return (byte)0xE0 | ((power_button & 1) << );
                              • cyranojones_lalp
                                Message 15 of 20 , Jan 7, 2010
                                  > --- FyberOptic wrote:
                                  > >
                                  > > It seems that the Mailstation waits for the state of the power
                                  > > button to change somehow before acknowledging it again as being
                                  > > pressed.
                                  > > Sometimes, for example, if you hold F12 while the system is
                                  > > booting, then let it go once you're at the menu, then you only
                                  > > have to press it once to power off. I tried various ways to
                                  > > replicate this behavior automatically, but I didn't have much
                                  > > luck yet. I'm thinking it might have something to do with
                                  > > the signal bouncing of the real hardware, since the power
                                  > > button isn't handled for bouncing like the rest of the
                                  > > keyboard keys are in the keyboard routine.
                                  > >
                                  > > Hitting the button twice is a minor inconvenience though so
                                  > > I've worked on other stuff for the time being instead.
                                  >
                                  > Try inverting the sense of power-button input bit.
                                  >
                                  > Instead of:
                                  > case 0x09:
                                  > return (byte)0xE0 | ((power_button & 1) << );

                                  try:
                                  case 0x09:
                                  return (byte)0xE0 | ((~power_button & 1) << );

                                  (I was trying to enter a tab in previous post, and all of a
                                  sudden it said "message sent" or something to that effect.
                                  I think the tab moved the focus from the message box over to
                                  the "send" button.)

                                  I have not tried to compile it myself just yet, been looking
                                  over docs for sdl.

                                  CJ
                                • cyranojones_lalp
                                  Message 16 of 20 , Jan 7, 2010
                                    > > Instead of:
                                    > > case 0x09:
                                    > > return (byte)0xE0 | ((power_button & 1) << );
                                    >
                                    > try:
                                    > case 0x09:
                                    > return (byte)0xE0 | ((~power_button & 1) << );

                                    I'm sure fyberoptic knows what I meant, but if (by any stretch)
                                    there is anyone else reading this, the "4" got clipped.
                                    It wrapped to next line, and when I edited to fit on one line,
                                    I musta deleted it.

                                    So, this is what I should have typed:
                                    case 0x09:
                                    return (byte)0xE0 | ((~power_button & 1) << 4);

                                    I think it is too early for my brain.

                                    CJ
                                  • FyberOptic
                                    Message 17 of 20 , Jan 7, 2010
                                      --- In mailstation@yahoogroups.com, "cyranojones_lalp" <cyranojones_lalp@...> wrote:
                                      >
                                      > Try inverting the sense of power-button input bit.
                                      >
                                      > Instead of:
                                      > case 0x09:
                                      > return (byte)0xE0 | ((power_button & 1) << );
                                      >

                                      Doh, I inverted the main keyboard keys, but never thought to do it for the power button. Goes off with one tap now! Nice find!



                                      >I have not tried to compile it myself just yet, been looking
                                      over docs for sdl.

                                      If you're using any Debian-based distro (you said you're in Ubuntu, so you are), then you should be able to grab the development packages of SDL and SDL_gfx through APT. I'm not sure which repository it comes from (hopefully one which is enabled by default), but just search for the package "libsdl-gfx1.2-dev". When you install it, it should automatically pull "libsdl1.2-dev" too. It did for me under Debian. Saved me the trouble of fetching/compiling them manually. Only thing you'll still have to compile separately is Z80em, which is a simple "make" job, pretty much.
                                    • cyranojones_lalp
                                      Message 18 of 20 , Jan 9, 2010
                                        --- FyberOptic wrote:
                                        >
                                        > Doh, I inverted the main keyboard keys, but never thought
                                        > to do it for the power button. Goes off with one tap now!
                                        > Nice find!

                                        OK, I got all the pieces installed the other day, and got it compiling
                                        and running here, too! :-) :-) :-) :-) :-) :-) :-) :-)

                                        I had to change one line in the Makefile to get it to fly with a 64-bit CPU:

                                        I changed
                                        "objcopy -I binary -O elf32-i386 --binary-architecture i386 rawcga.bin rawcga.o"

                                        to
                                        "objcopy -I binary -O elf64-x86-64 --binary-architecture i386 rawcga.bin rawcga.o"

                                        because the linker refused to link the 32 bit font file with the 64 bit
                                        emulator object file. It was pretty easy to figure out the "-O elf64-x86-64",
                                        but it took a lot of reading to find out that you needed to use
                                        "--binary-architecture i386" for either 32 or 64 bit. Go figger.

                                        > If you're using any Debian-based distro (you said you're in Ubuntu, so you are),
                                        > then you should be able to grab the development packages of SDL and SDL_gfx through
                                        > APT. I'm not for sure what repository it came from (hopefully one which is
                                        > enabled by default), but just search for the package "libsdl-gfx1.2-dev".
                                        > When you install it, it should automatically pull "libsdl1.2-dev" too.
                                        > It did for me under Debian. Saved me the trouble of fetching/compiling
                                        > them manually. Only thing you'll still have to compile separately is Z80em,
                                        > which is a simple "make" job, pretty much.

                                        Well, that was really good to know! I didn't even think to check if it
                                        was in repo. Turns out libsdl1.2 was already installed, possibly 'coz xmame
                                        required it. I just checked off the boxes (in synaptic) for the dev files,
                                        and the libsdl-gfx1.2-dev, and "applied" it.

                                        I made the change to cflags you suggested for z80em, and did "make all". I
                                        got a boatload of "type mismatch" warnings, but it still works.

                                        I mentioned that the windows version was sucking up close to 100% of cpu,
                                        spread across 3 processes (msemu, wine, and I think xorg). This native
                                        Linux build is still sucking up 100%, but just in the one msemu process!

                                        I don't think it would stop anything else from running, it's prolly just
                                        using it 'coz it is available. But it sure is making the cpu hotter than
                                        normal!!! It usually reads below 90 degrees F, but with cpu at 100% it
                                        was running over 110 F!!!!! I even think I could smell the difference,
                                        but that may just have been my imagination. :-)

                                        So, I looked at source, to see if I could see anything to optimize.
                                        The first thing I tried was moving the call to system time, so it only
                                        happens when one of the time ports is read. Didn't make a noticeable
                                        difference, though.

                                        Next, I looked at the main loop. Seems that's the source of the
                                        infinite appetite for cpu cycles. The cpu just keeps running that loop,
                                        as fast as it's little pins can carry it. :-) :-)

                                        I made an assumption that the main loop was cycling much faster than
                                        necessary, so I added a "sleep(1)" to the loop. Well, turns out
                                        sleep's units are "seconds", so I guess you know that did not come
                                        out too good. So I tried sleep's little brother, "usleep", which
                                        sleeps microseconds. Works like a charm!!!

                                        I tried usleep(1) through usleep(1000), with only barely perceptible
                                        lag noticeable with 1000 usec. I don't know if perhaps it is getting
                                        woken up before the 1000 usec by some other event/interrupt. But
                                        I am currently settled in on 100 usec, because there is a "diminishing
                                        return" effect on the cpu load reduction.

                                        With usleep(100), it idles at about 4 or 5 percent, and spikes
                                        higher when mailstation code is actually doing something. If
                                        I lean on the right-arrow key while in main menu, the ms icon
                                        highlighting cycles repeatedly across the screen, and cpu
                                        usage goes to about 10 to 15 percent.
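
                                        The throttled loop described above boils down to something like this sketch (emulate_slice() is a stand-in of mine for one Z80_Execute() batch plus the SDL event handling, not the emulator's real function name):

```c
#include <unistd.h>  /* usleep */

/* Sketch of the throttled main loop: instead of spinning the
   loop as fast as the host CPU allows, yield ~100us per pass. */
static int slices_run = 0;

static void emulate_slice(void)
{
    /* stand-in for one Z80_Execute() batch + event handling */
    slices_run++;
}

static void run_throttled(int iterations)
{
    for (int i = 0; i < iterations; i++) {
        emulate_slice();
        usleep(100);  /* the sweet spot found above: ~100us sleep */
    }
}
```

The diminishing-returns effect makes sense here: past a certain sleep length, the loop overhead is already negligible next to the emulation work itself.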

                                        If I am understanding how the code works, it seems that the z80
                                        emulation is being called every 16 milliseconds. Is that right?
                                        (deleted rambling)
                                        At this point, I did a "sleep(30000)" on the wetware processor.

                                        Oh, I think I get it, z80_Execute() runs Z80_IPeriod = 187500
                                        T states each call. That makes more sense now... I was
                                        wondering why it still worked with such large sleep times!
                                        (Amazing how a little sleep can make things clearer!)

                                        On another front, I did quite a bit of fiddling with the screen
                                        color. First, I "inverted" the colors, making the background
                                        the bright pixels, and text the darker. Then I made the
                                        green (now a green background) a very light green tint, a
                                        quite passable imitation of the actual LCD.
                                        That took a few minutes.

                                        Then I spent a few more hours tweaking the colors! :-)

                                        I made all 5 of your color modes into various off-white tinted
                                        backgrounds with black foreground,
                                        and added a sixth choice, with bluish
                                        foreground, and same green tinted background (ala earthlink
                                        version of 120 & 150).

                                        It's really kind of interesting how fast your eyes normalize
                                        any of the tints to seem "plain white".

                                        CJ
                                      • FyberOptic
                                        Message 19 of 20 , Jan 10, 2010
                                          --- In mailstation@yahoogroups.com, "cyranojones_lalp" <cyranojones_lalp@...> wrote:
                                          >
                                          > I had to change one line in Makefile to get it to fly with 64 bit cpu:
                                          >
                                          > I changed
                                          > "objcopy -I binary -O elf32-i386 --binary-architecture i386 rawcga.bin rawcga.o"
                                          >
                                          > to
                                          > "objcopy -I binary -O elf64-x86-64 --binary-architecture i386 rawcga.bin rawcga.o"
                                          >

                                          Ah okay, never even thought of that being a possible problem. I don't have a 64-bit CPU, myself. I figure the simplest solution for future versions, now that I know that I converted the font data properly, is to just encode it into a C header file and let it compile with the source.

                                          The reason I included my own font to begin with is so that it would look the same regardless of platform. And for the record, this is the same font style that I use in my FyOS software on the Mailstation. It's the classic 8x8 font that CGA video cards used to use. I'm partial to it both for nostalgia's sake as well as the fact that it's very divisible into most screen sizes. The Mailstation gets a 40x16 text display out of it, similar to many old computers.
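
                                          A header-file version of the font would sidestep the 32/64-bit objcopy issue entirely, since the array just compiles with the rest of the source. A sketch of what such a generated header and a glyph lookup might look like (the glyph bytes here are placeholders I made up, NOT the real CGA font data):

```c
#include <stdint.h>

/* Sketch of an embedded 8x8 font table, in the style a tool like
   "xxd -i" would generate. These two glyphs are placeholders,
   not the actual CGA glyph data. */
static const uint8_t rawcga_bin[] = {
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,  /* glyph 0 */
    0x7E, 0x81, 0xA5, 0x81, 0xBD, 0x99, 0x81, 0x7E,  /* glyph 1 */
};

/* Each glyph is 8 bytes: one byte per row, one bit per pixel,
   so glyph n starts at byte offset n * 8. */
static const uint8_t *glyph(int ch)
{
    return &rawcga_bin[ch * 8];
}
```

An 8x8 cell into a 320x128 LCD is also exactly how the 40x16 text display mentioned above falls out: 320/8 = 40 columns, 128/8 = 16 rows.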



                                          >
                                          > I made the change to cflags you suggested for z80em, and did "make all". I
                                          > got a boatload of "type mismatch" warnings, but it still works.

                                          Yeah I got those too, but it's no problem. The source was likely written under an earlier version of GCC.



                                          >
                                          > I mentioned that the windows version was sucking up close to 100% of cpu,
                                          > spread across 3 processes (msemu, wine, and I think xorg). This native
                                          > Linux build is still sucking up 100%, but just in the one msemu process!
                                          >
                                          *snip*
                                          >
                                          > With usleep(100), it idles at about 4 or 5 percent, and spikes
                                          > higher when mailstation code is actually doing something. If
                                          > I lean on the right-arrow key while in main menu, the ms icon
                                          > highlighting cycles repeatedly across the screen, and cpu
                                          > usage goes to about 10 to 15 percent.

                                          I never noticed it hindering my machine as I worked so I never even thought to check. Yet I've had to use usleep in daemons before so you'd think I would remember how important some CPU idle time in there can be!

                                          The easiest cross-platform fix is:

                                          #ifdef WIN32
                                          Sleep(1);
                                          #else
                                          usleep(1000);
                                          #endif

                                          Windows doesn't have sub-millisecond sleep unless you get into high-resolution timers, and that's a bit overkill. From my momentary tinkering I didn't notice any real difference in performance by having a whole millisecond delay.


                                          >
                                          > Oh, I think I get it, z80_Execute() runs Z80_IPeriod = 187500
                                          > T states each call. That makes more sense now... I was
                                          > wondering why it still worked with such large sleep times!
                                          > (Amazing how a little sleep can make things clearer!)

                                          I just did some quick math for the number. The Mailstation OS always runs at 12MHz, so 12000000 / 64 = 187500, with 64 being the frequency of the keyboard interrupt I determined before. When all the specified CPU cycles are used, the Z80_Interrupt() function is called. This function automatically fires the Mailstation keyboard interrupt (if it's enabled) 64 times a second. Also, after 64 counts of this function executing, the Mailstation time16 interrupt gets fired.
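
                                          That cadence can be sketched like this (the constant and function names are mine for illustration; only the numbers — 187500 T-states per batch, keyboard interrupt at 64Hz, time16 every 64 keyboard ticks — come from the description above):

```c
/* Sketch of the interrupt cadence: one call per Z80_IPeriod batch,
   like Z80_Interrupt() in the emulator. */
#define CPU_HZ       12000000
#define KBD_INT_HZ   64
#define Z80_IPERIOD  (CPU_HZ / KBD_INT_HZ)  /* = 187500 T-states */

static int kbd_ticks = 0;
static int time16_fires = 0;

static void on_interrupt_period(void)
{
    /* keyboard interrupt would fire here (if enabled), 64x/sec */
    if (++kbd_ticks % 64 == 0)
        time16_fires++;  /* time16 interrupt once per second */
}
```

So with large sleeps in the host loop, the emulated machine still sees the right number of T-states per interrupt, just delivered in bursts — which is why it kept working.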

                                          Whenever I get around to implementing support for various CPU and (presumably) RTC timer speeds, these values will be dynamic rather than hard-coded like they are now. I'd rather know more about the I/O port functionality for setting these speeds beforehand, but I haven't gotten around to tinkering on the hardware again yet either.



                                          >
                                          > I made all 5 of your color modes into various off-white tinted
                                          > backgrounds with black foreground,
                                          > and added a sixth choice, with bluish
                                          > foreground, and same green tinted background (ala earthlink
                                          > version of 120 & 150).
                                          >

                                          I'd be curious to see your color schemes, if you want to take screencaps or whatever. I've never even seen the screen to any other model than the one I have.


                                          One of the next features I want to implement is a configuration file, where people can just setup the keyboard/colors/etc from there instead of needing to recompile it.
                                        • cyranojones_lalp
                                          Message 20 of 20 , Jan 12, 2010
                                            (Reply is below the screenshots)

                                            The pix are 1024 x 768, click for full size, or "view image" if clicking doesn't work.


                                            This is the green tinted background. 
                                            The IDE shows some of the code mods.
                                            (By the way, the IDE is "Geany" and it is in the Ubuntu repo.)



                                            This is white background, with black text.
                                            I don't really like the greenish-tint, even though the mailstation
                                            actually is greenish.   I re-arranged the sdl-event handling with nested switches.



                                            All 6 colors running at the same time!
                                            The backgrounds are much brighter than the actual Mailstation LCD, but I don't think I
                                            would want to make them much darker.  I don't really care for the red or green tints, but
                                            yellow and bluish are OK.  The "new" 120/150 LCD is the greenish one below the white.




                                            One other code change not shown above, to writeLCD function:

                                            lcd_data8[n + (x * 8) + (lcdaddr * 320)] = ((val >> n) & 1 ? LCD_fg_color : LCD_bg_color);

                                            When I was figuring out how it worked, I changed some of
                                            the param names in that function to these:

                                            writeLCD(ushort lcdaddr, byte val, int lcdhalf)

                                            But the only change to the logic was to split the color var into two (fg & bg).
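Filling in around that one line, the whole function would look something like this. This is just a sketch, not the actual emulator source: the buffer size, the column variable `x`, and the palette-index color values are all assumptions on my part.

```c
#include <assert.h>

typedef unsigned char  byte;
typedef unsigned short ushort;

/* Assumed emulator state -- names from the thread, sizes guessed:
   a 320x240 output buffer with one byte (palette index) per pixel. */
static byte lcd_data8[320 * 240];
static byte LCD_fg_color = 1;   /* "on" pixels  */
static byte LCD_bg_color = 0;   /* "off" pixels */
static int  x;                  /* current LCD column group, set elsewhere */

void writeLCD(ushort lcdaddr, byte val, int lcdhalf)
{
    (void)lcdhalf;  /* half-screen select, ignored in this sketch */

    /* Each written byte covers 8 adjacent pixels; every bit picks
       foreground or background, exactly as in the line quoted above. */
    for (int n = 0; n < 8; n++)
        lcd_data8[n + (x * 8) + (lcdaddr * 320)] =
            ((val >> n) & 1) ? LCD_fg_color : LCD_bg_color;
}
```

Note lcdhalf is just passed through untouched here, since the split-color change doesn't involve it.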

                                            <more comments inline below>

                                            --- FyberOptic wrote:
                                            >
                                            > Ah okay, never even thought of that being a possible problem. 
                                            > I don't have a 64-bit CPU, myself.  I figure the simplest solution
                                            > for future versions, now that I know that I converted the font data
                                            > properly, is to just encode it into a C header file and let it compile
                                            > with the source.

                                            Oh, yeah, that would be better than fiddling with the makefile.  I was
                                            thinking of adding an option to the makefile, but that would still need
                                            to be edited to pick a version.  Just compiling it in would avoid the
                                            config hassle.  I actually did something similar with your CGA
                                            font: I edited it into "cgafont.s" (the db's from your cgafont.inc,
                                            with a label at the head) to allow sdcc code to link with it:

                                                .module cgafont
                                             
                                                .area _CODE
                                             
                                            _cgafont_data::
                                                .db #0x00, #0x7e, #0x7e, #0x36, #0x08, #0x1c, #0x08, #0x00
                                                    etc. etc.
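For comparison, the C-header route could look about like this. The file name and array name are hypothetical, and only the first 8 db's from the snippet above are shown:

```c
/* cgafont.h -- hypothetical header version of the font data, so it
   compiles straight into the emulator instead of being loaded from
   a file at runtime.  Only the first glyph is shown here; the real
   table would be 256 glyphs x 8 bytes. */
#ifndef CGAFONT_H
#define CGAFONT_H

static const unsigned char cgafont_data[] = {
    0x00, 0x7e, 0x7e, 0x36, 0x08, 0x1c, 0x08, 0x00,
    /* ... remaining glyphs ... */
};

#endif
```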
                                             
                                            > And for the record, this is the same
                                            > font style that I use in my FyOS software on the Mailstation. 

                                            I already guessed that.  :-)

                                            > > With usleep(100), it idles at about 4 or 5 percent,

                                            > I never noticed it hindering my machine as I worked so I never even
                                            > thought to check. 

                                            I didn't notice any sluggish behavior, but I have the "system monitor"
                                            added to gnome desktop's toolbar, so whenever that drops down, it's
                                            right there.  CPU temps and system temps, too.  (see screenshot
                                            with 6 mailstation emulators running.)

                                            > Yet I've had to use usleep in daemons before so
                                            > you'd think I would remember how important some CPU idle time in
                                            > there can be!
                                            >
                                            > The easiest cross-platform fix is:
                                            >
                                            > #ifdef WIN32
                                            >         Sleep(1);
                                            > #else
                                            >         usleep(1000);
                                            > #endif

                                            Looks good!  (So Sleep() is in ms on win32?  I think plain sleep() is in sec on Linux, and usleep() is in microsec.)
                                             
                                            > Windows doesn't have less than 1 millisecond sleep unless you get into
                                            > high-definition timers, and that's a bit overkill.  From my momentary
                                            > tinkering I didn't notice any real difference in performance by having
                                            > a whole millisecond delay.

                                            1 ms seems fine to me.  It's not till you get up over 20 ms that it really
                                            starts to get bad.  Actually, right before 16 ms, the delay goes back
                                            to unnoticeable.  Seems the emulation of the "slice" is happening in
                                            less than a millisecond, so most of the 16 ms is just waiting.

                                            The delay peaks around usleep(15300) or so, and at 15400 it drops
                                            back to unnoticeable.  (This is on a dual 2.5 GHz AMD processor.)
                                            For a default, 1 ms seems good for just about any CPU speed.
                                            Maybe you can make it a runtime config option?

                                            I used the highly scientific procedure of counting "thousands",
                                            from power-on to splash, and I don't quite get to the "s" in "one thousand two"
                                            in the 1-500 us range.  Seems I can get to "thous" at 1000 us, and "thousan"
                                            at 5000 us.  At 10,000 us I can just about get the whole "one thousand two"
                                            out.  On a real Mailstation I get the same as at 500 us and lower.

                                            > > Oh, I think I get it, z80_Execute() runs Z80_IPeriod = 187500
                                            > > T states each call.  That makes more sense now... I was
                                            > > wondering why it still worked with such large sleep times!
                                            > > (Amazing how a little sleep can make things clearer!)  
                                            >
                                            > I just did some quick math for the number.  The Mailstation OS always
                                            > runs at 12mhz, so 12000000 / 64 = 187500.  64 being the frequency of the
                                            > keyboard interrupt I determined before.  When all the specified CPU cycles
                                            > are used, the Z80_Interrupt() function is called.  This function automatically
                                            > fires the Mailstation keyboard interrupt (if it's enabled) 64 times a second. 
                                            > Also, after 64 counts of this function executing, the Mailstation time16
                                            > interrupt gets fired.

                                            Are we in agreement that the emulator runs at "12 MHz" only because
                                            you call it every 16 ms, and it runs Z80_IPeriod = 187500 T-states
                                            every time it is called?
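If that's right, the slice bookkeeping boils down to something like this. It's a standalone sketch with stand-in counters in place of the real z80_execute()/Z80_Interrupt() calls, using only the numbers quoted above:

```c
#define CPU_HZ 12000000               /* Mailstation OS clock: 12 MHz */
#define INT_HZ 64                     /* keyboard interrupt rate FyberOptic measured */
#define Z80_IPERIOD (CPU_HZ / INT_HZ) /* = 187500 T-states per slice */

static long t_states = 0;
static int  kbd_ints = 0, time16_ints = 0;

/* One slice: stand-in for z80_execute() running Z80_IPERIOD T-states,
   followed by the Z80_Interrupt() housekeeping described above. */
static void run_slice(void)
{
    t_states += Z80_IPERIOD;   /* "execute" the slice */
    kbd_ints++;                /* keyboard interrupt fires each slice (64 Hz) */
    if (kbd_ints % 64 == 0)    /* after 64 slices, i.e. once per second... */
        time16_ints++;         /* ...the time16 interrupt fires */
}
```

Call run_slice() 64 times and you've emulated exactly one second of 12 MHz Mailstation time.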

                                            Just for kicks, I just now changed it to call z80_execute every time
                                            thru the main loop, with no usleep.  Now the Mailstation code is
                                            running at warp 11!!!  I'm not sure, but I think it is better than
                                            16 x 12 MHz = 192 MHz!!!  And that would be if it was taking a full
                                            millisec each call; I think it is closer to half a millisec,
                                            which would mean close to 400 MHz.  Wheeeeeeeeeeeeee!!!!!!
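Checking that warp math (the real slice period is 15.625 ms, not quite 16), with effective_mhz() being my own throwaway helper:

```c
/* Each z80_execute() call emulates one 64 Hz slice: */
#define T_PER_SLICE 187500.0
#define EMU_MS_PER_SLICE (1000.0 / 64)   /* 15.625 ms of Mailstation time */

/* If the host finishes a slice every host_ms milliseconds,
   the effective emulated clock rate works out to: */
static double effective_mhz(double host_ms)
{
    return (T_PER_SLICE / 1e6) * (1000.0 / host_ms);
}
```

At one slice per millisecond that's 187.5 MHz (the 16 x 12 figure overshoots slightly), and at half a millisec per slice it's 375 MHz, right in line with the "close to 400" guess.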

                                            > Whenever I get around to implementing support for various CPU and
                                            > (presumably) RTC timer speeds, these values will be dynamic rather
                                            > than hard-coded like they are now. 

                                            Not sure what you mean here???  You mean the PC's CPU, right???
                                            But what RTC?  Oh, you mean setting the Mailstation to different
                                            CPU speeds, right?  And likewise with the Mailstation RTC.  You
                                            want it to adjust emulation speed based on ports 0d & 2f.

                                            > I'd rather know more about the
                                            > I/O port functionality for setting these speeds beforehand, but I
                                            > haven't gotten around to tinkering on the hardware again yet either.

                                            All I know are the 8/10/12 MHz speeds.  There might be more.

                                            I was wondering if you ever tested the various interrupts, to
                                            figure out if they were INT's or NMI's?
                                             
                                            > > I made all 5 of your color modes into various off-white tinted
                                            > > backgrounds with black foreground,
                                            > > and added a sixth choice, with bluish
                                            > > foreground, and same green tinted background (ala earthlink
                                            > > version of 120 & 150).
                                            > >
                                            >
                                            > I'd be curious to see your color schemes, if you want to take screencaps
                                            > or whatever.  I've never even seen the screen to any other model than the
                                            > one I have.

                                            I uploaded some screenshots to the root level of group site.  I
                                            am gonna try to embed them at top of this post, but if it
                                            doesn't work, you can see them there. 

                                            > One of the next features I want to implement is a configuration file,
                                            > where people can just setup the keyboard/colors/etc from there instead
                                            > of needing to recompile it.

                                            Config file would be great!

                                            I was thinking that rather than having several canned colors,
                                            it would be easier to tweak if you could adjust the RGB values
                                            of the current color.  Use ctrl-1, ctrl-2, & ctrl-3 to increment red,
                                            green, & blue.  Use ctrl-sh-1 (2, & 3) to decrement.
                                            And then save the one you like in the config file.  Or better,
                                            ctrl 1, 2, 3 for inc, and ctrl q, w, e for dec.
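To make the inc/dec idea concrete, the keydown branch might look roughly like this. Sketch only: treating LCD_bg_color as a plain RGB struct and using a step of 8 are my assumptions, and the ctrl modifier check is left out.

```c
typedef struct { unsigned char r, g, b; } rgb;
static rgb LCD_bg_color = { 0xE0, 0xF0, 0xE0 };  /* greenish-white default */

#define STEP 8  /* per-keypress increment (arbitrary choice) */

/* Add d to a channel, clamping to the 0..255 range. */
static unsigned char clamp_add(unsigned char v, int d)
{
    int n = v + d;
    return (unsigned char)(n < 0 ? 0 : n > 255 ? 255 : n);
}

/* '1','2','3' increment r/g/b; 'q','w','e' decrement
   (with ctrl held, checked by the caller in the real event loop). */
static void adjust_bg(char key)
{
    switch (key) {
    case '1': LCD_bg_color.r = clamp_add(LCD_bg_color.r, +STEP); break;
    case '2': LCD_bg_color.g = clamp_add(LCD_bg_color.g, +STEP); break;
    case '3': LCD_bg_color.b = clamp_add(LCD_bg_color.b, +STEP); break;
    case 'q': LCD_bg_color.r = clamp_add(LCD_bg_color.r, -STEP); break;
    case 'w': LCD_bg_color.g = clamp_add(LCD_bg_color.g, -STEP); break;
    case 'e': LCD_bg_color.b = clamp_add(LCD_bg_color.b, -STEP); break;
    }
}
```

In the emulator itself this would slot into the nested-switch SDL event handling mentioned above, and the final values would get written out to the config file.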

                                            CJ



