Re: [midatlanticretro] 64-bit VS 32-bit processor Confusion
As I remember the joys of trying to make a 32-bit CP301 word readable to an
eight-bit Motorola processor, I would suspect one of several things.
1. Much of the current programming does not really take advantage of the
extra length and so it is mostly readable by the older machine.
2. There must be a subroutine embedded in the program to test the processor
so that the words can be properly played out to older equipment's sub-
3. The 32-bit and older processors had a ROM subroutine to deal with the
anticipated longer words.
What we did was honestly a kludge with a table look-up program.
--- On Sat, 10/10/09, RonK <rkushnier@...> wrote:
From: RonK <rkushnier@...>
Subject: [midatlanticretro] 64-bit VS 32-bit processor Confusion
Date: Saturday, October 10, 2009, 7:25 PM
This may not have the mystery of vacuum cleaner repair, but I figured some of you have done enough machine and assembly language programming to tackle my confusion.
I've been reading about using a 64-bit program on a 32-bit machine, and I must admit that I'm confused.
Going back thirty years or so, in the era of 8-bit processors, a program instruction was 8 bits long, and data could be 8 or 16 bits, but could only be entered eight bits at a time (i.e., to load an accumulator, you would give it an 8-bit instruction code). When 16-bit processors became available, both the program code and the data could be combined into a single 16-bit word. The processor's "pipe" was 16 bits wide. This sped up processing.
If now you have a 64-bit processor, the machine can handle 64 bits at once. Yes, that means that it can access a much larger memory map, but is that the only use for a 64-bit processor? I thought there would be a speed increase if 64 bits were utilized concurrently. If you try to run a 64-bit program on a 32-bit machine, would that mean that the additional 32 bits would just "drop off" the end of the data word?